Linux 6.1-rc4, which should get my CI working on RPi3s again.
Colin Ian King <colin.i.king@gmail.com> <colin.king@canonical.com>
Corey Minyard <minyard@acm.org>
Damian Hobson-Garcia <dhobsong@igel.co.jp>
+Dan Carpenter <error27@gmail.com> <dan.carpenter@oracle.com>
Daniel Borkmann <daniel@iogearbox.net> <danborkmann@googlemail.com>
Daniel Borkmann <daniel@iogearbox.net> <danborkmann@iogearbox.net>
Daniel Borkmann <daniel@iogearbox.net> <daniel.borkmann@tik.ee.ethz.ch>
Pratyush Anand <pratyush.anand@gmail.com> <pratyush.anand@st.com>
Praveen BP <praveenbp@ti.com>
Punit Agrawal <punitagrawal@gmail.com> <punit.agrawal@arm.com>
-Qais Yousef <qsyousef@gmail.com> <qais.yousef@imgtec.com>
+Qais Yousef <qyousef@layalina.io> <qais.yousef@imgtec.com>
+Qais Yousef <qyousef@layalina.io> <qais.yousef@arm.com>
Quentin Monnet <quentin@isovalent.com> <quentin.monnet@netronome.com>
Quentin Perret <qperret@qperret.net> <quentin.perret@arm.com>
Rafael J. Wysocki <rjw@rjwysocki.net> <rjw@sisk.pl>
What: /sys/devices/virtual/memory_tiering/memory_tierN/
- /sys/devices/virtual/memory_tiering/memory_tierN/nodes
+ /sys/devices/virtual/memory_tiering/memory_tierN/nodelist
Date: August 2022
Contact: Linux memory management mailing list <linux-mm@kvack.org>
Description: Directory with details of a specific memory tier
A smaller value of N implies a higher (faster) memory tier in the
hierarchy.
- nodes: NUMA nodes that are part of this memory tier.
+ nodelist: NUMA nodes that are part of this memory tier.
:maxdepth: 1
initrd_table_override
- dsdt-override
ssdt-overlays
cppc_sysfs
fan_performance_states
also gain new certificates at run time if they are signed by a certificate
already in the secondary trusted keyring.
+try_verify_in_tasklet
+  If the verity hashes are in cache, verify data blocks in a kernel tasklet
+  instead of a workqueue. This option can reduce IO latency.
+
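+  For example, a verity target line with the option appended (illustrative
+  values; the root digest and salt are placeholders)::
+
+    0 2097152 verity 1 /dev/sda1 /dev/sda2 4096 4096 262144 1 sha256
+    <root_digest> <salt> 1 try_verify_in_tasklet
+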
Theory of operation
===================
$ v4l2-ctl -d2 -i2
$ v4l2-ctl -d2 -c horizontal_movement=4
$ v4l2-ctl -d1 --overlay=1
- $ v4l2-ctl -d1 -c loop_video=1
+ $ v4l2-ctl -d0 -c loop_video=1
$ v4l2-ctl -d2 --stream-mmap --overlay=1
And from another console:
- SMCR_EL2.LEN must be initialised to the same value for all CPUs the
kernel will execute on.
+ - HFGRTR_EL2.nTPIDR2_EL0 (bit 55) must be initialised to 0b01.
+
+ - HFGWTR_EL2.nTPIDR2_EL0 (bit 55) must be initialised to 0b01.
+
+ - HFGRTR_EL2.nSMPRI_EL1 (bit 54) must be initialised to 0b01.
+
+ - HFGWTR_EL2.nSMPRI_EL1 (bit 54) must be initialised to 0b01.
+
For CPUs with the Scalable Matrix Extension FA64 feature (FEAT_SME_FA64)
- If EL3 is present:
The infrastructure emulates only the following system register space::
- Op0=3, Op1=0, CRn=0, CRm=0,4,5,6,7
+ Op0=3, Op1=0, CRn=0, CRm=0,2,3,4,5,6,7
(See Table C5-6 'System instruction encodings for non-Debug System
register accesses' in ARMv8 ARM DDI 0487A.h, for the list of
| WFXT | [3-0] | y |
+------------------------------+---------+---------+
+ 10) MVFR0_EL1 - AArch32 Media and VFP Feature Register 0
+
+ +------------------------------+---------+---------+
+ | Name | bits | visible |
+ +------------------------------+---------+---------+
+ | FPDP | [11-8] | y |
+ +------------------------------+---------+---------+
+
+ 11) MVFR1_EL1 - AArch32 Media and VFP Feature Register 1
+
+ +------------------------------+---------+---------+
+ | Name | bits | visible |
+ +------------------------------+---------+---------+
+ | SIMDFMAC | [31-28] | y |
+ +------------------------------+---------+---------+
+ | SIMDSP | [19-16] | y |
+ +------------------------------+---------+---------+
+ | SIMDInt | [15-12] | y |
+ +------------------------------+---------+---------+
+ | SIMDLS | [11-8] | y |
+ +------------------------------+---------+---------+
+
+ 12) ID_ISAR5_EL1 - AArch32 Instruction Set Attribute Register 5
+
+ +------------------------------+---------+---------+
+ | Name | bits | visible |
+ +------------------------------+---------+---------+
+ | CRC32 | [19-16] | y |
+ +------------------------------+---------+---------+
+ | SHA2 | [15-12] | y |
+ +------------------------------+---------+---------+
+ | SHA1 | [11-8] | y |
+ +------------------------------+---------+---------+
+ | AES | [7-4] | y |
+ +------------------------------+---------+---------+
+
Appendix I: Example
-------------------
For retrieving device info via ``ublksrv_ctrl_dev_info``. It is the server's
responsibility to save IO target specific info in userspace.
+- ``UBLK_CMD_START_USER_RECOVERY``
+
+  This command is valid if the ``UBLK_F_USER_RECOVERY`` feature is enabled. It
+  is accepted after the old process has exited, the ublk device is quiesced
+  and ``/dev/ublkc*`` is released. The user should send this command before
+  starting a new process which re-opens ``/dev/ublkc*``. When this command
+  returns, the ublk device is ready for the new process.
+
+- ``UBLK_CMD_END_USER_RECOVERY``
+
+  This command is valid if the ``UBLK_F_USER_RECOVERY`` feature is enabled. It
+  is accepted after the ublk device is quiesced, a new process has opened
+  ``/dev/ublkc*`` and all ublk queues are ready. When this command returns,
+  the ublk device is unquiesced and new I/O requests are passed to the new
+  process.
+
+- user recovery feature description
+
+ Two new features are added for user recovery: ``UBLK_F_USER_RECOVERY`` and
+ ``UBLK_F_USER_RECOVERY_REISSUE``.
+
+  With ``UBLK_F_USER_RECOVERY`` set, after a ubq_daemon (the ublk server's
+  I/O handler) dies, ublk does not delete ``/dev/ublkb*`` during the whole
+  recovery stage and the ublk device ID is kept. It is the ublk server's
+  responsibility to recover the device context with its own knowledge.
+  Requests which have not been issued to userspace are requeued. Requests
+  which have been issued to userspace are aborted.
+
+  With ``UBLK_F_USER_RECOVERY_REISSUE`` set, after a ubq_daemon (the ublk
+  server's I/O handler) dies, contrary to ``UBLK_F_USER_RECOVERY``,
+  requests which have been issued to userspace are requeued and will be
+  re-issued to the new process after handling ``UBLK_CMD_END_USER_RECOVERY``.
+  ``UBLK_F_USER_RECOVERY_REISSUE`` is designed for backends that can tolerate
+  double writes, since the driver may issue the same I/O request twice. It
+  might be useful for a read-only FS or a VM backend.
+
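+For illustration only, here is a minimal sketch of driving a recovery
+command from userspace. ``ublk_ctrl_cmd()`` is a hypothetical helper (not
+part of the kernel tree); it needs ``<string.h>``, ``<liburing.h>`` and
+``<linux/ublk_cmd.h>``, and the ring is assumed to be created with
+``IORING_SETUP_SQE128`` since control SQEs carry an inline payload::
+
+	static int ublk_ctrl_cmd(struct io_uring *ring, int ctrl_fd,
+				 __u32 cmd_op, __u32 dev_id)
+	{
+		struct io_uring_sqe *sqe = io_uring_get_sqe(ring);
+		struct ublksrv_ctrl_cmd *cmd = (struct ublksrv_ctrl_cmd *)sqe->cmd;
+		struct io_uring_cqe *cqe;
+		int ret;
+
+		io_uring_prep_nop(sqe);
+		sqe->opcode = IORING_OP_URING_CMD;
+		sqe->fd = ctrl_fd;	/* open("/dev/ublk-control", O_RDWR) */
+		sqe->cmd_op = cmd_op;	/* e.g. UBLK_CMD_START_USER_RECOVERY */
+		memset(cmd, 0, sizeof(*cmd));
+		cmd->dev_id = dev_id;
+
+		io_uring_submit(ring);
+		ret = io_uring_wait_cqe(ring, &cqe);
+		if (ret)
+			return ret;
+		ret = cqe->res;		/* 0 on success */
+		io_uring_cqe_seen(ring, cqe);
+		return ret;
+	}
+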
Data plane
----------
CRC and Math Functions in Linux
===============================
+Arithmetic Overflow Checking
+----------------------------
+
+.. kernel-doc:: include/linux/overflow.h
+ :internal:
+
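+For example (an illustrative sketch, not taken from the header's kernel-doc),
+these helpers return true when the operation would overflow::
+
+	size_t bytes;
+
+	if (check_mul_overflow(nmemb, size, &bytes))
+		return -EOVERFLOW;	/* nmemb * size would wrap around */
+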
CRC Functions
-------------
+++ /dev/null
-Dongwoon Anatech DW9714 camera voice coil lens driver
-
-DW9174 is a 10-bit DAC with current sink capability. It is intended
-for driving voice coil lenses in camera modules.
-
-Mandatory properties:
-
-- compatible: "dongwoon,dw9714"
-- reg: I²C slave address
--- /dev/null
+# SPDX-License-Identifier: GPL-2.0-only OR BSD-2-Clause
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/media/i2c/dongwoon,dw9714.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Dongwoon Anatech DW9714 camera voice coil lens driver
+
+maintainers:
+ - Krzysztof Kozlowski <krzk@kernel.org>
+
+description:
+  DW9714 is a 10-bit DAC with current sink capability. It is intended for
+ driving voice coil lenses in camera modules.
+
+properties:
+ compatible:
+ const: dongwoon,dw9714
+
+ reg:
+ maxItems: 1
+
+ powerdown-gpios:
+ description:
+ XSD pin for shutdown (active low)
+
+ vcc-supply:
+ description: VDD power supply
+
+required:
+ - compatible
+ - reg
+
+additionalProperties: false
+
+examples:
+ - |
+ i2c {
+ #address-cells = <1>;
+ #size-cells = <0>;
+
+ camera-lens@c {
+ compatible = "dongwoon,dw9714";
+ reg = <0x0c>;
+                vcc-supply = <&reg_csi_1v8>;
+ };
+ };
maintainers:
- Krzysztof Kozlowski <krzk@kernel.org>
- - Krzysztof Opasiak <k.opasiak@samsung.com>
properties:
compatible:
slew-rate:
enum: [0, 1]
- output-enable:
- description:
- This will internally disable the tri-state for MIO pins.
-
drive-strength:
description:
Selects the drive strength for MIO pins, in mA.
power-supply: true
+ power-domains:
+ maxItems: 1
+
resets:
description: |
A number of phandles to resets that need to be asserted during
.. kernel-doc:: kernel/panic.c
:export:
-.. kernel-doc:: include/linux/overflow.h
- :internal:
-
Device Resource Management
--------------------------
devm_gpio_request_one()
I2C
+ devm_i2c_add_adapter()
devm_i2c_new_dummy_device()
IIO
Pipelines and media streams
^^^^^^^^^^^^^^^^^^^^^^^^^^^
+A media stream is a stream of pixels or metadata originating from one or more
+source devices (such as sensors) and flowing through media entity pads
+towards the final sinks. The stream can be modified along the route by the
+devices (e.g. scaling or pixel format conversions), or it can be split into
+multiple branches, or multiple branches can be merged.
+
+A media pipeline is a set of interdependent media streams. This
+interdependency can be caused by the hardware (e.g. the configuration of a
+second stream cannot be changed if the first stream has been enabled) or by
+the driver due to its software design. Most commonly a media pipeline
+consists of a single stream which does not branch.
+
When starting streaming, drivers must notify all entities in the pipeline to
prevent link states from being modified during streaming by calling
:c:func:`media_pipeline_start()`.
-The function will mark all entities connected to the given entity through
-enabled links, either directly or indirectly, as streaming.
+The function will mark all the pads which are part of the pipeline as streaming.
The struct media_pipeline instance pointed to by
-the pipe argument will be stored in every entity in the pipeline.
+the pipe argument will be stored in every pad in the pipeline.
Drivers should embed the struct media_pipeline
in higher-level pipeline structures and can then access the
-pipeline through the struct media_entity
+pipeline through the struct media_pad
pipe field.
Calls to :c:func:`media_pipeline_start()` can be nested.
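
For instance, a minimal sketch (``mydrv`` is a hypothetical driver structure
embedding the pipeline as described above, and the entity's first pad is
assumed to be the one to start streaming on)::

	struct mydrv {
		struct media_pipeline pipe;
		struct video_device vdev;
	};

	static int mydrv_start_streaming(struct mydrv *drv)
	{
		/* Marks every pad in the pipeline as streaming. */
		return media_pipeline_start(&drv->vdev.entity.pads[0], &drv->pipe);
	}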
Corsair HX1200i
+ Corsair HX1500i
+
Corsair RM550i
Corsair RM650i
kernel versions by including an arbitrary string of "salt" in it.
This is specified by the Kconfig symbol ``CONFIG_BUILD_SALT``.
+Git
+---
+
+Uncommitted changes or different commit IDs in git can also lead
+to different compilation results. For example, after executing
+``git reset HEAD^``, even if the code is the same, the
+``include/config/kernel.release`` file generated during compilation
+will be different, which eventually leads to binary differences.
+See ``scripts/setlocalversion`` for details.
+
.. _KBUILD_BUILD_TIMESTAMP: kbuild.html#kbuild-build-timestamp
.. _KBUILD_BUILD_USER and KBUILD_BUILD_HOST: kbuild.html#kbuild-build-user-kbuild-build-host
.. _KCFLAGS: kbuild.html#kcflags
.. warning::
Beware that this will return a false positive if a
- :ref:`botton half lock <local_bh_disable>` is held.
+ :ref:`bottom half lock <local_bh_disable>` is held.
Some Basic Rules
================
5.2.21 was the final stable update of the 5.2 release.
Some kernels are designated "long term" kernels; they will receive support
-for a longer period. As of this writing, the current long term kernels
-and their maintainers are:
-
- ====== ================================ =======================
- 3.16 Ben Hutchings (very long-term kernel)
- 4.4 Greg Kroah-Hartman & Sasha Levin (very long-term kernel)
- 4.9 Greg Kroah-Hartman & Sasha Levin
- 4.14 Greg Kroah-Hartman & Sasha Levin
- 4.19 Greg Kroah-Hartman & Sasha Levin
- 5.4 Greg Kroah-Hartman & Sasha Levin
- ====== ================================ =======================
+for a longer period. Please refer to the following link for the list of active
+long term kernel versions and their maintainers:
+
+ https://www.kernel.org/category/releases.html
The selection of a kernel for long-term support is purely a matter of a
maintainer having the need and the time to maintain that release. There
- "C: A Reference Manual" by Harbison and Steele [Prentice Hall]
The kernel is written using GNU C and the GNU toolchain. While it
-adheres to the ISO C89 standard, it uses a number of extensions that are
+adheres to the ISO C11 standard, it uses a number of extensions that are
not featured in the standard. The kernel is a freestanding C
environment, with no reliance on the standard C library, so some
portions of the C standard are not supported. Arbitrary long long
Finally, go back and read
:ref:`Documentation/process/submitting-patches.rst <submittingpatches>`
to be sure you are not repeating some common mistake documented there.
+
+My company uses peer feedback in employee performance reviews. Can I ask netdev maintainers for feedback?
+---------------------------------------------------------------------------------------------------------
+
+Yes, especially if you spend a significant amount of time reviewing code
+and go out of your way to improve shared infrastructure.
+
+The feedback must be requested by you, the contributor, and will always
+be shared with you (even if you request that it be submitted to your
+manager).
will use the event's kernel stacktrace as the key. The keywords
'keys' or 'key' can be used to specify keys, and the keywords
'values', 'vals', or 'val' can be used to specify values. Compound
- keys consisting of up to two fields can be specified by the 'keys'
+ keys consisting of up to three fields can be specified by the 'keys'
keyword. Hashing a compound key produces a unique entry in the
table for each unique combination of component keys, and can be
useful for providing more fine-grained summaries of event data.
- "C: A Reference Manual" di Harbison and Steele [Prentice Hall]
Il kernel è stato scritto usando GNU C e la toolchain GNU.
-Sebbene si attenga allo standard ISO C89, esso utilizza una serie di
+Sebbene si attenga allo standard ISO C11, esso utilizza una serie di
estensioni che non sono previste in questo standard. Il kernel è un
ambiente C indipendente, che non ha alcuna dipendenza dalle librerie
C standard, così alcune parti del C standard non sono supportate.
- 『新・詳説 C 言語 H&S リファレンス』 (サミュエル P ハービソン/ガイ L スティール共著 斉藤 信男監訳)[ソフトバンク]
カーネルは GNU C と GNU ツールチェインを使って書かれています。カーネル
-は ISO C89 仕様に準拠して書く一方で、標準には無い言語拡張を多く使って
+は ISO C11 仕様に準拠して書く一方で、標準には無い言語拡張を多く使って
います。カーネルは標準 C ライブラリに依存しない、C 言語非依存環境です。
そのため、C の標準の中で使えないものもあります。特に任意の long long
の除算や浮動小数点は使えません。カーネルがツールチェインや C 言語拡張
- "Practical C Programming" by Steve Oualline [O'Reilly]
- "C: A Reference Manual" by Harbison and Steele [Prentice Hall]
-커널은 GNU C와 GNU 툴체인을 사용하여 작성되었다. 이 툴들은 ISO C89 표준을
+커널은 GNU C와 GNU 툴체인을 사용하여 작성되었다. 이 툴들은 ISO C11 표준을
따르는 반면 표준에 있지 않은 많은 확장기능도 가지고 있다. 커널은 표준 C
라이브러리와는 관계없이 freestanding C 환경이어서 C 표준의 일부는
지원되지 않는다. 임의의 long long 나누기나 floating point는 지원되지 않는다.
- "C: A Reference Manual" by Harbison and Steele [Prentice Hall]
《C语言参考手册(原书第5版)》(邱仲潘 等译)[机械工业出版社]
-Linux内核使用GNU C和GNU工具链开发。虽然它遵循ISO C89标准,但也用到了一些
+Linux内核使用GNU C和GNU工具链开发。虽然它遵循ISO C11标准,但也用到了一些
标准中没有定义的扩展。内核是自给自足的C环境,不依赖于标准C库的支持,所以
并不支持C标准中的部分定义。比如long long类型的大数除法和浮点运算就不允许
使用。有时候确实很难弄清楚内核对工具链的要求和它所使用的扩展,不幸的是目
- "C: A Reference Manual" by Harbison and Steele [Prentice Hall]
《C語言參考手冊(原書第5版)》(邱仲潘 等譯)[機械工業出版社]
-Linux內核使用GNU C和GNU工具鏈開發。雖然它遵循ISO C89標準,但也用到了一些
+Linux內核使用GNU C和GNU工具鏈開發。雖然它遵循ISO C11標準,但也用到了一些
標準中沒有定義的擴展。內核是自給自足的C環境,不依賴於標準C庫的支持,所以
並不支持C標準中的部分定義。比如long long類型的大數除法和浮點運算就不允許
使用。有時候確實很難弄清楚內核對工具鏈的要求和它所使用的擴展,不幸的是目
ignore define CEC_OP_FEAT_DEV_HAS_SET_AUDIO_RATE
ignore define CEC_OP_FEAT_DEV_SINK_HAS_ARC_TX
ignore define CEC_OP_FEAT_DEV_SOURCE_HAS_ARC_RX
+ignore define CEC_OP_FEAT_DEV_HAS_SET_AUDIO_VOLUME_LEVEL
ignore define CEC_MSG_GIVE_FEATURES
ignore define CEC_MSG_SYSTEM_AUDIO_MODE_REQUEST
ignore define CEC_MSG_SYSTEM_AUDIO_MODE_STATUS
+ignore define CEC_MSG_SET_AUDIO_VOLUME_LEVEL
ignore define CEC_OP_AUD_FMT_ID_CEA861
ignore define CEC_OP_AUD_FMT_ID_CEA861_CXT
operates like the :c:func:`read()` function.
-.. c:function:: void v4l2_mmap(void *start, size_t length, int prot, int flags, int fd, int64_t offset);
+.. c:function:: void *v4l2_mmap(void *start, size_t length, int prot, int flags, int fd, int64_t offset);
- operates like the :c:func:`munmap()` function.
+ operates like the :c:func:`mmap()` function.
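+
+  For example (an illustrative sketch; ``buf`` is a ``struct v4l2_buffer``
+  previously filled in by ``VIDIOC_QUERYBUF``)::
+
+	void *start = v4l2_mmap(NULL, buf.length, PROT_READ | PROT_WRITE,
+				MAP_SHARED, fd, buf.m.offset);
+
+	if (start == MAP_FAILED)
+		return -1;
+	/* ... use the buffer, then release the mapping ... */
+	v4l2_munmap(start, buf.length);
+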
.. c:function:: int v4l2_munmap(void *_start, size_t length);
R: Broadcom internal kernel review list <bcm-kernel-feedback-list@broadcom.com>
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
S: Maintained
-T: git git://github.com/broadcom/stblinux.git
+T: git https://github.com/broadcom/stblinux.git
F: Documentation/devicetree/bindings/arm/bcm/brcm,bcmbca.yaml
F: arch/arm64/boot/dts/broadcom/bcmbca/*
N: bcmbca
L: linux-rpi-kernel@lists.infradead.org (moderated for non-subscribers)
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
S: Maintained
-T: git git://github.com/broadcom/stblinux.git
+T: git https://github.com/broadcom/stblinux.git
F: Documentation/devicetree/bindings/pci/brcm,stb-pcie.yaml
F: drivers/pci/controller/pcie-brcmstb.c
F: drivers/staging/vc04_services
M: Scott Branden <sbranden@broadcom.com>
R: Broadcom internal kernel review list <bcm-kernel-feedback-list@broadcom.com>
S: Maintained
-T: git git://github.com/broadcom/mach-bcm
+T: git https://github.com/broadcom/mach-bcm
F: arch/arm/mach-bcm/
N: bcm281*
N: bcm113*
R: Broadcom internal kernel review list <bcm-kernel-feedback-list@broadcom.com>
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
S: Maintained
-T: git git://github.com/broadcom/stblinux.git
+T: git https://github.com/broadcom/stblinux.git
F: Documentation/devicetree/bindings/pci/brcm,stb-pcie.yaml
F: arch/arm/boot/dts/bcm7*.dts*
F: arch/arm/include/asm/hardware/cache-b15-rac.h
N: bcm7120
BROADCOM BDC DRIVER
+M: Justin Chen <justinpopo6@gmail.com>
M: Al Cooper <alcooperx@gmail.com>
L: linux-usb@vger.kernel.org
R: Broadcom internal kernel review list <bcm-kernel-feedback-list@broadcom.com>
R: Broadcom internal kernel review list <bcm-kernel-feedback-list@broadcom.com>
L: linux-mips@vger.kernel.org
S: Maintained
-T: git git://github.com/broadcom/stblinux.git
+T: git https://github.com/broadcom/stblinux.git
F: arch/mips/bmips/*
F: arch/mips/boot/dts/brcm/bcm*.dts*
F: arch/mips/include/asm/mach-bmips/*
F: drivers/tty/serial/8250/8250_bcm7271.c
BROADCOM BRCMSTB USB EHCI DRIVER
+M: Justin Chen <justinpopo6@gmail.com>
M: Al Cooper <alcooperx@gmail.com>
R: Broadcom internal kernel review list <bcm-kernel-feedback-list@broadcom.com>
L: linux-usb@vger.kernel.org
F: drivers/usb/misc/brcmstb-usb-pinmap.c
BROADCOM BRCMSTB USB2 and USB3 PHY DRIVER
+M: Justin Chen <justinpopo6@gmail.com>
M: Al Cooper <alcooperx@gmail.com>
R: Broadcom internal kernel review list <bcm-kernel-feedback-list@broadcom.com>
L: linux-kernel@vger.kernel.org
R: Broadcom internal kernel review list <bcm-kernel-feedback-list@broadcom.com>
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
S: Maintained
-T: git git://github.com/broadcom/stblinux.git
+T: git https://github.com/broadcom/stblinux.git
F: arch/arm64/boot/dts/broadcom/northstar2/*
F: arch/arm64/boot/dts/broadcom/stingray/*
F: drivers/clk/bcm/clk-ns*
R: Broadcom internal kernel review list <bcm-kernel-feedback-list@broadcom.com>
L: linux-pm@vger.kernel.org
S: Maintained
-T: git git://github.com/broadcom/stblinux.git
+T: git https://github.com/broadcom/stblinux.git
F: drivers/soc/bcm/bcm63xx/bcm-pmb.c
F: include/dt-bindings/soc/bcm-pmb.h
M: David Sterba <dsterba@suse.com>
L: linux-btrfs@vger.kernel.org
S: Maintained
-W: http://btrfs.wiki.kernel.org/
-Q: http://patchwork.kernel.org/project/linux-btrfs/list/
+W: https://btrfs.readthedocs.io
+W: https://btrfs.wiki.kernel.org/
+Q: https://patchwork.kernel.org/project/linux-btrfs/list/
C: irc://irc.libera.chat/btrfs
T: git git://git.kernel.org/pub/scm/linux/kernel/git/kdave/linux.git
F: Documentation/filesystems/btrfs.rst
F: fs/btrfs/
F: include/linux/btrfs*
+F: include/trace/events/btrfs.h
F: include/uapi/linux/btrfs*
BTTV VIDEO4LINUX DRIVER
CISCO VIC ETHERNET NIC DRIVER
M: Christian Benvenuti <benve@cisco.com>
-M: Govindarajulu Varadarajan <_govind@gmx.com>
+M: Satish Kharat <satishkh@cisco.com>
S: Supported
F: drivers/net/ethernet/cisco/enic/
CONTROL GROUP - BLOCK IO CONTROLLER (BLKIO)
M: Tejun Heo <tj@kernel.org>
+M: Josef Bacik <josef@toxicpanda.com>
M: Jens Axboe <axboe@kernel.dk>
L: cgroups@vger.kernel.org
L: linux-block@vger.kernel.org
F: Documentation/admin-guide/cgroup-v1/blkio-controller.rst
F: block/bfq-cgroup.c
F: block/blk-cgroup.c
+F: block/blk-iocost.c
F: block/blk-iolatency.c
F: block/blk-throttle.c
F: include/linux/blk-cgroup.h
L: linux-media@vger.kernel.org
S: Maintained
T: git git://linuxtv.org/media_tree.git
-F: Documentation/devicetree/bindings/media/i2c/dongwoon,dw9714.txt
+F: Documentation/devicetree/bindings/media/i2c/dongwoon,dw9714.yaml
F: drivers/media/i2c/dw9714.c
DONGWOON DW9768 LENS VOICE COIL DRIVER
F: drivers/i2c/busses/i2c-hisi.c
HISILICON LPC BUS DRIVER
-M: john.garry@huawei.com
+M: Jay Fang <f.fangjian@huawei.com>
S: Maintained
W: http://www.hisilicon.com
F: Documentation/devicetree/bindings/arm/hisilicon/low-pin-count.yaml
F: drivers/pci/hotplug/rpaphp*
IBM Power SRIOV Virtual NIC Device Driver
-M: Dany Madden <drt@linux.ibm.com>
+M: Haren Myneni <haren@linux.ibm.com>
+M: Rick Lindsley <ricklind@linux.ibm.com>
+R: Nick Child <nnac123@linux.ibm.com>
+R: Dany Madden <danymadden@us.ibm.com>
R: Thomas Falcon <tlfalcon@linux.ibm.com>
L: netdev@vger.kernel.org
S: Supported
L: kvm-riscv@lists.infradead.org
L: linux-riscv@lists.infradead.org
S: Maintained
-T: git git://github.com/kvm-riscv/linux.git
+T: git https://github.com/kvm-riscv/linux.git
F: arch/riscv/include/asm/kvm*
F: arch/riscv/include/uapi/asm/kvm*
F: arch/riscv/kvm/
R: David Hildenbrand <david@redhat.com>
L: kvm@vger.kernel.org
S: Supported
-W: http://www.ibm.com/developerworks/linux/linux390/
T: git git://git.kernel.org/pub/scm/linux/kernel/git/kvms390/linux.git
F: Documentation/virt/kvm/s390*
F: arch/s390/include/asm/gmap.h
S: Supported
W: https://nilfs.sourceforge.io/
W: https://nilfs.osdn.jp/
-T: git git://github.com/konis/nilfs2.git
+T: git https://github.com/konis/nilfs2.git
F: Documentation/filesystems/nilfs2.rst
F: fs/nilfs2/
F: include/trace/events/nilfs2.h
F: drivers/nvme/target/fabrics-cmd-auth.c
F: include/linux/nvme-auth.h
+NVM EXPRESS HARDWARE MONITORING SUPPORT
+M: Guenter Roeck <linux@roeck-us.net>
+L: linux-nvme@lists.infradead.org
+S: Supported
+F: drivers/nvme/host/hwmon.c
+
NVM EXPRESS FC TRANSPORT DRIVERS
M: James Smart <james.smart@broadcom.com>
L: linux-nvme@lists.infradead.org
W: http://openvswitch.org
F: include/uapi/linux/openvswitch.h
F: net/openvswitch/
+F: tools/testing/selftests/net/openvswitch/
OPERATING PERFORMANCE POINTS (OPP)
M: Viresh Kumar <vireshk@kernel.org>
F: drivers/input/serio/hp_sdc*
F: drivers/parisc/
F: drivers/parport/parport_gsc.*
-F: drivers/tty/serial/8250/8250_gsc.c
+F: drivers/tty/serial/8250/8250_parisc.c
F: drivers/video/console/sti*
F: drivers/video/fbdev/sti*
F: drivers/video/logo/logo_parisc*
F: drivers/pci/controller/dwc/*designware*
PCI DRIVER FOR TI DRA7XX/J721E
-M: Kishon Vijay Abraham I <kishon@ti.com>
+M: Vignesh Raghavendra <vigneshr@ti.com>
L: linux-omap@vger.kernel.org
L: linux-pci@vger.kernel.org
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
F: drivers/pci/controller/pci-v3-semi.c
PCI ENDPOINT SUBSYSTEM
-M: Kishon Vijay Abraham I <kishon@ti.com>
M: Lorenzo Pieralisi <lpieralisi@kernel.org>
R: Krzysztof Wilczyński <kw@linux.com>
R: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
+R: Kishon Vijay Abraham I <kishon@kernel.org>
L: linux-pci@vger.kernel.org
S: Supported
Q: https://patchwork.kernel.org/project/linux-pci/list/
F: drivers/net/phy/dp83640*
F: drivers/ptp/*
F: include/linux/ptp_cl*
+K: (?:\b|_)ptp(?:\b|_)
PTP VIRTUAL CLOCK SUPPORT
M: Yangbo Lu <yangbo.lu@nxp.com>
F: drivers/tty/serial/rp2.*
ROHM BD99954 CHARGER IC
-R: Matti Vaittinen <mazziesaccount@gmail.com>
+M: Matti Vaittinen <mazziesaccount@gmail.com>
S: Supported
F: drivers/power/supply/bd99954-charger.c
F: drivers/power/supply/bd99954-charger.h
F: include/linux/mfd/bd9571mwv.h
ROHM POWER MANAGEMENT IC DEVICE DRIVERS
-R: Matti Vaittinen <mazziesaccount@gmail.com>
+M: Matti Vaittinen <mazziesaccount@gmail.com>
S: Supported
F: drivers/clk/clk-bd718x7.c
F: drivers/gpio/gpio-bd71815.c
R: Sven Schnelle <svens@linux.ibm.com>
L: linux-s390@vger.kernel.org
S: Supported
-W: http://www.ibm.com/developerworks/linux/linux390/
T: git git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux.git
F: Documentation/driver-api/s390-drivers.rst
F: Documentation/s390/
M: Peter Oberparleiter <oberpar@linux.ibm.com>
L: linux-s390@vger.kernel.org
S: Supported
-W: http://www.ibm.com/developerworks/linux/linux390/
F: drivers/s390/cio/
S390 DASD DRIVER
M: Jan Hoeppner <hoeppner@linux.ibm.com>
L: linux-s390@vger.kernel.org
S: Supported
-W: http://www.ibm.com/developerworks/linux/linux390/
F: block/partitions/ibm.c
F: drivers/s390/block/dasd*
F: include/linux/dasd_mod.h
M: Gerald Schaefer <gerald.schaefer@linux.ibm.com>
L: linux-s390@vger.kernel.org
S: Supported
-W: http://www.ibm.com/developerworks/linux/linux390/
F: drivers/iommu/s390-iommu.c
S390 IUCV NETWORK LAYER
L: linux-s390@vger.kernel.org
L: netdev@vger.kernel.org
S: Supported
-W: http://www.ibm.com/developerworks/linux/linux390/
F: drivers/s390/net/*iucv*
F: include/net/iucv/
F: net/iucv/
L: linux-s390@vger.kernel.org
L: netdev@vger.kernel.org
S: Supported
-W: http://www.ibm.com/developerworks/linux/linux390/
F: drivers/s390/net/
S390 PCI SUBSYSTEM
M: Gerald Schaefer <gerald.schaefer@linux.ibm.com>
L: linux-s390@vger.kernel.org
S: Supported
-W: http://www.ibm.com/developerworks/linux/linux390/
F: arch/s390/pci/
F: drivers/pci/hotplug/s390_pci_hpc.c
F: Documentation/s390/pci.rst
M: Jason Herne <jjherne@linux.ibm.com>
L: linux-s390@vger.kernel.org
S: Supported
-W: http://www.ibm.com/developerworks/linux/linux390/
F: Documentation/s390/vfio-ap*
F: drivers/s390/crypto/vfio_ap*
M: Harald Freudenberger <freude@linux.ibm.com>
L: linux-s390@vger.kernel.org
S: Supported
-W: http://www.ibm.com/developerworks/linux/linux390/
F: drivers/s390/crypto/
S390 ZFCP DRIVER
M: Benjamin Block <bblock@linux.ibm.com>
L: linux-s390@vger.kernel.org
S: Supported
-W: http://www.ibm.com/developerworks/linux/linux390/
F: drivers/s390/scsi/zfcp_*
S3C ADC BATTERY DRIVER
S: Maintained
T: git git://linuxtv.org/media_tree.git
F: drivers/staging/media/deprecated/saa7146/
-F: include/media/drv-intf/saa7146*
SAFESETID SECURITY MODULE
M: Micah Morton <mortonm@chromium.org>
SAMSUNG S3FWRN5 NFC DRIVER
M: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
-M: Krzysztof Opasiak <k.opasiak@samsung.com>
L: linux-nfc@lists.01.org (subscribers-only)
S: Maintained
F: Documentation/devicetree/bindings/net/nfc/samsung,s3fwrn5.yaml
M: Jan Karcher <jaka@linux.ibm.com>
L: linux-s390@vger.kernel.org
S: Supported
-W: http://www.ibm.com/developerworks/linux/linux390/
F: net/smc/
SHARP GP2AP002A00F/GP2AP002S00F SENSOR DRIVER
M: Paul Walmsley <paul.walmsley@sifive.com>
L: linux-riscv@lists.infradead.org
S: Supported
-T: git git://github.com/sifive/riscv-linux.git
+T: git https://github.com/sifive/riscv-linux.git
N: sifive
K: [^@]sifive
F: Documentation/usb/ehci.rst
F: drivers/usb/host/ehci*
-USB GADGET/PERIPHERAL SUBSYSTEM
-M: Felipe Balbi <balbi@kernel.org>
-L: linux-usb@vger.kernel.org
-S: Maintained
-W: http://www.linux-usb.org/gadget
-T: git git://git.kernel.org/pub/scm/linux/kernel/git/balbi/usb.git
-F: drivers/usb/gadget/
-F: include/linux/usb/gadget*
-
USB HID/HIDBP DRIVERS (USB KEYBOARDS, MICE, REMOTE CONTROLS, ...)
M: Jiri Kosina <jikos@kernel.org>
M: Benjamin Tissoires <benjamin.tissoires@redhat.com>
L: netdev@vger.kernel.org
S: Maintained
W: https://github.com/petkan/pegasus
-T: git git://github.com/petkan/pegasus.git
+T: git https://github.com/petkan/pegasus.git
F: drivers/net/usb/pegasus.*
-USB PHY LAYER
-M: Felipe Balbi <balbi@kernel.org>
-L: linux-usb@vger.kernel.org
-S: Maintained
-T: git git://git.kernel.org/pub/scm/linux/kernel/git/balbi/usb.git
-F: drivers/usb/phy/
-
USB PRINTER DRIVER (usblp)
M: Pete Zaitcev <zaitcev@redhat.com>
L: linux-usb@vger.kernel.org
L: netdev@vger.kernel.org
S: Maintained
W: https://github.com/petkan/rtl8150
-T: git git://github.com/petkan/rtl8150.git
+T: git https://github.com/petkan/rtl8150.git
F: drivers/net/usb/rtl8150.c
USB SERIAL SUBSYSTEM
F: drivers/watchdog/
F: include/linux/watchdog.h
F: include/uapi/linux/watchdog.h
+F: include/trace/events/watchdog.h
WHISKEYCOVE PMIC GPIO DRIVER
M: Kuppuswamy Sathyanarayanan <sathyanarayanan.kuppuswamy@linux.intel.com>
W: http://mjpeg.sourceforge.net/driver-zoran/
Q: https://patchwork.linuxtv.org/project/linux-media/list/
F: Documentation/driver-api/media/drivers/zoran.rst
-F: drivers/staging/media/zoran/
+F: drivers/media/pci/zoran/
ZRAM COMPRESSED RAM BLOCK DEVICE DRIVER
M: Minchan Kim <minchan@kernel.org>
VERSION = 6
PATCHLEVEL = 1
SUBLEVEL = 0
-EXTRAVERSION = -rc1
+EXTRAVERSION = -rc4
NAME = Hurr durr I'ma ninja sloth
# *DOCUMENTATION*
cmd_ar_vmlinux.a = \
rm -f $@; \
$(AR) cDPrST $@ $(KBUILD_VMLINUX_OBJS); \
- $(AR) mPiT $$($(AR) t $@ | head -n1) $@ $$($(AR) t $@ | grep -F --file=$(srctree)/scripts/head-object-list.txt)
+ $(AR) mPiT $$($(AR) t $@ | sed -n 1p) $@ $$($(AR) t $@ | grep -F -f $(srctree)/scripts/head-object-list.txt)
targets += vmlinux.a
vmlinux.a: $(KBUILD_VMLINUX_OBJS) scripts/head-object-list.txt autoksyms_recursive FORCE
dma-coherent;
};
- ehci@40000 {
+ usb@40000 {
dma-coherent;
};
- ohci@60000 {
+ usb@60000 {
dma-coherent;
};
dma-coherent;
};
- ehci@40000 {
+ usb@40000 {
dma-coherent;
};
- ohci@60000 {
+ usb@60000 {
dma-coherent;
};
mac-address = [00 00 00 00 00 00]; /* Filled in by U-Boot */
};
- ehci@40000 {
+ usb@40000 {
compatible = "generic-ehci";
reg = < 0x40000 0x100 >;
interrupts = < 8 >;
};
- ohci@60000 {
+ usb@60000 {
compatible = "generic-ohci";
reg = < 0x60000 0x100 >;
interrupts = < 8 >;
};
};
- ohci@60000 {
+ usb@60000 {
compatible = "snps,hsdk-v1.0-ohci", "generic-ohci";
reg = <0x60000 0x100>;
interrupts = <15>;
dma-coherent;
};
- ehci@40000 {
+ usb@40000 {
compatible = "snps,hsdk-v1.0-ehci", "generic-ehci";
reg = <0x40000 0x100>;
interrupts = <15>;
clock-names = "stmmaceth";
};
- ehci@40000 {
+ usb@40000 {
compatible = "generic-ehci";
reg = < 0x40000 0x100 >;
interrupts = < 8 >;
CONFIG_IP_PNP_DHCP=y
CONFIG_IP_PNP_BOOTP=y
CONFIG_IP_PNP_RARP=y
-# CONFIG_INET_XFRM_MODE_TRANSPORT is not set
-# CONFIG_INET_XFRM_MODE_TUNNEL is not set
-# CONFIG_INET_XFRM_MODE_BEET is not set
# CONFIG_IPV6 is not set
CONFIG_DEVTMPFS=y
# CONFIG_STANDALONE is not set
CONFIG_NFS_V3_ACL=y
CONFIG_NLS_CODEPAGE_437=y
CONFIG_NLS_ISO8859_1=y
-# CONFIG_ENABLE_MUST_CHECK is not set
CONFIG_STRIP_ASM_SYMS=y
CONFIG_SOFTLOCKUP_DETECTOR=y
CONFIG_DEFAULT_HUNG_TASK_TIMEOUT=10
CONFIG_IP_PNP_DHCP=y
CONFIG_IP_PNP_BOOTP=y
CONFIG_IP_PNP_RARP=y
-# CONFIG_INET_XFRM_MODE_TRANSPORT is not set
-# CONFIG_INET_XFRM_MODE_TUNNEL is not set
-# CONFIG_INET_XFRM_MODE_BEET is not set
# CONFIG_IPV6 is not set
CONFIG_DEVTMPFS=y
# CONFIG_STANDALONE is not set
CONFIG_NFS_V3_ACL=y
CONFIG_NLS_CODEPAGE_437=y
CONFIG_NLS_ISO8859_1=y
-# CONFIG_ENABLE_MUST_CHECK is not set
CONFIG_STRIP_ASM_SYMS=y
CONFIG_SOFTLOCKUP_DETECTOR=y
CONFIG_DEFAULT_HUNG_TASK_TIMEOUT=10
CONFIG_IP_PNP_DHCP=y
CONFIG_IP_PNP_BOOTP=y
CONFIG_IP_PNP_RARP=y
-# CONFIG_INET_XFRM_MODE_TRANSPORT is not set
-# CONFIG_INET_XFRM_MODE_TUNNEL is not set
-# CONFIG_INET_XFRM_MODE_BEET is not set
# CONFIG_IPV6 is not set
CONFIG_DEVTMPFS=y
# CONFIG_STANDALONE is not set
CONFIG_NFS_V3_ACL=y
CONFIG_NLS_CODEPAGE_437=y
CONFIG_NLS_ISO8859_1=y
-# CONFIG_ENABLE_MUST_CHECK is not set
CONFIG_STRIP_ASM_SYMS=y
CONFIG_SOFTLOCKUP_DETECTOR=y
CONFIG_DEFAULT_HUNG_TASK_TIMEOUT=10
CONFIG_TMPFS=y
# CONFIG_MISC_FILESYSTEMS is not set
CONFIG_NFS_FS=y
-# CONFIG_ENABLE_MUST_CHECK is not set
CONFIG_DEBUG_MEMORY_INIT=y
# CONFIG_DEBUG_PREEMPT is not set
CONFIG_TMPFS=y
# CONFIG_MISC_FILESYSTEMS is not set
CONFIG_NFS_FS=y
-# CONFIG_ENABLE_MUST_CHECK is not set
CONFIG_SOFTLOCKUP_DETECTOR=y
# CONFIG_DEBUG_PREEMPT is not set
CONFIG_NFS_V3_ACL=y
CONFIG_NLS_CODEPAGE_437=y
CONFIG_NLS_ISO8859_1=y
-# CONFIG_ENABLE_MUST_CHECK is not set
CONFIG_STRIP_ASM_SYMS=y
CONFIG_SOFTLOCKUP_DETECTOR=y
CONFIG_DEFAULT_HUNG_TASK_TIMEOUT=10
CONFIG_TMPFS=y
# CONFIG_MISC_FILESYSTEMS is not set
CONFIG_NFS_FS=y
-# CONFIG_ENABLE_MUST_CHECK is not set
# CONFIG_DEBUG_PREEMPT is not set
# CONFIG_MISC_FILESYSTEMS is not set
CONFIG_NFS_FS=y
CONFIG_NFS_V3_ACL=y
-# CONFIG_ENABLE_MUST_CHECK is not set
# CONFIG_MISC_FILESYSTEMS is not set
CONFIG_NFS_FS=y
CONFIG_NFS_V3_ACL=y
-# CONFIG_ENABLE_MUST_CHECK is not set
CONFIG_UNIX_DIAG=y
CONFIG_NET_KEY=y
CONFIG_INET=y
-# CONFIG_INET_XFRM_MODE_TRANSPORT is not set
-# CONFIG_INET_XFRM_MODE_TUNNEL is not set
-# CONFIG_INET_XFRM_MODE_BEET is not set
# CONFIG_IPV6 is not set
# CONFIG_WIRELESS is not set
CONFIG_DEVTMPFS=y
# CONFIG_BLK_DEV is not set
CONFIG_NETDEVICES=y
# CONFIG_NET_VENDOR_ARC is not set
-# CONFIG_NET_CADENCE is not set
# CONFIG_NET_VENDOR_BROADCOM is not set
CONFIG_EZCHIP_NPS_MANAGEMENT_ENET=y
# CONFIG_NET_VENDOR_INTEL is not set
# CONFIG_MISC_FILESYSTEMS is not set
CONFIG_NFS_FS=y
CONFIG_NFS_V3_ACL=y
-# CONFIG_ENABLE_MUST_CHECK is not set
CONFIG_FTRACE=y
+# CONFIG_NET_VENDOR_CADENCE is not set
CONFIG_UNIX=y
CONFIG_INET=y
CONFIG_IP_MULTICAST=y
-# CONFIG_INET_XFRM_MODE_TRANSPORT is not set
-# CONFIG_INET_XFRM_MODE_TUNNEL is not set
-# CONFIG_INET_XFRM_MODE_BEET is not set
# CONFIG_INET_DIAG is not set
# CONFIG_IPV6 is not set
# CONFIG_WIRELESS is not set
CONFIG_DEVTMPFS=y
CONFIG_NETDEVICES=y
-# CONFIG_NET_CADENCE is not set
# CONFIG_NET_VENDOR_BROADCOM is not set
# CONFIG_NET_VENDOR_INTEL is not set
# CONFIG_NET_VENDOR_MARVELL is not set
CONFIG_STRIP_ASM_SYMS=y
CONFIG_DEBUG_FS=y
CONFIG_HEADERS_INSTALL=y
-CONFIG_HEADERS_CHECK=y
CONFIG_DEBUG_SECTION_MISMATCH=y
CONFIG_MAGIC_SYSRQ=y
CONFIG_DEBUG_MEMORY_INIT=y
CONFIG_DEBUG_STACKOVERFLOW=y
CONFIG_DETECT_HUNG_TASK=y
CONFIG_SCHEDSTATS=y
-CONFIG_TIMER_STATS=y
# CONFIG_CRYPTO_HW is not set
+# CONFIG_NET_VENDOR_CADENCE is not set
# CONFIG_HW_RANDOM is not set
# CONFIG_HWMON is not set
CONFIG_FB=y
-CONFIG_ARCPGU_RGB888=y
-CONFIG_ARCPGU_DISPTYPE=0
# CONFIG_VGA_CONSOLE is not set
CONFIG_FRAMEBUFFER_CONSOLE=y
CONFIG_FRAMEBUFFER_CONSOLE_DETECT_PRIMARY=y
CONFIG_NFS_V3_ACL=y
CONFIG_NLS_CODEPAGE_437=y
CONFIG_NLS_ISO8859_1=y
-# CONFIG_ENABLE_MUST_CHECK is not set
CONFIG_STRIP_ASM_SYMS=y
CONFIG_DEBUG_SHIRQ=y
CONFIG_SOFTLOCKUP_DETECTOR=y
CONFIG_NFS_V3_ACL=y
CONFIG_NLS_CODEPAGE_437=y
CONFIG_NLS_ISO8859_1=y
-# CONFIG_ENABLE_MUST_CHECK is not set
CONFIG_STRIP_ASM_SYMS=y
CONFIG_DEBUG_SHIRQ=y
CONFIG_SOFTLOCKUP_DETECTOR=y
/*
* __fls: Similar to fls, but zero based (0-31)
*/
-static inline __attribute__ ((const)) int __fls(unsigned long x)
+static inline __attribute__ ((const)) unsigned long __fls(unsigned long x)
{
if (!x)
return 0;
/*
* __fls: Similar to fls, but zero based (0-31). Also 0 if no bit set
*/
-static inline __attribute__ ((const)) int __fls(unsigned long x)
+static inline __attribute__ ((const)) unsigned long __fls(unsigned long x)
{
/* FLS insn has exactly same semantics as the API */
return __builtin_arc_fls(x);
* r25 contains the kernel current task ptr
* - Defined Stack Switching Macro to be reused in all intr/excp hdlrs
* - Shaved off 11 instructions from RESTORE_ALL_INT1 by using the
- * address Write back load ld.ab instead of seperate ld/add instn
+ * address Write back load ld.ab instead of separate ld/add instn
*
* Amit Bhor, Sameer Dhavale: Codito Technologies 2004
*/
{
}
-extern void iounmap(const void __iomem *addr);
+extern void iounmap(const volatile void __iomem *addr);
/*
* io{read,write}{16,32}be() macros
#define pmd_pfn(pmd) ((pmd_val(pmd) & PAGE_MASK) >> PAGE_SHIFT)
#define pmd_page(pmd) virt_to_page(pmd_page_vaddr(pmd))
#define set_pmd(pmdp, pmd) (*(pmdp) = pmd)
-#define pmd_pgtable(pmd) ((pgtable_t) pmd_page_vaddr(pmd))
+#define pmd_pgtable(pmd) ((pgtable_t) pmd_page(pmd))
/*
* 4th level paging: pte
* API called by platform code to hookup arch-common ISR to their IPI IRQ
*
* Note: If IPI is provided by platform (vs. say ARC MCIP), their intc setup/map
- * function needs to call call irq_set_percpu_devid() for IPI IRQ, otherwise
+ * function needs to call irq_set_percpu_devid() for IPI IRQ, otherwise
* request_percpu_irq() below will fail
*/
static DEFINE_PER_CPU(int, ipi_dev);
* -In SMP, if hardware caches are coherent
*
* There's a corollary case, where kernel READs from a userspace mapped page.
- * If the U-mapping is not congruent to to K-mapping, former needs flushing.
+ * If the U-mapping is not congruent to K-mapping, former needs flushing.
*/
void flush_dcache_page(struct page *page)
{
* @vaddr is typically user vaddr (breakpoint) or kernel vaddr (vmalloc)
* However in one instance, when called by kprobe (for a breakpt in
* builtin kernel code) @vaddr will be paddr only, meaning CDU operation will
- * use a paddr to index the cache (despite VIPT). This is fine since since a
+ * use a paddr to index the cache (despite VIPT). This is fine since a
* builtin kernel page will not have any virtual mappings.
* kprobe on loadable module will be kernel vaddr.
*/
EXPORT_SYMBOL(ioremap_prot);
-void iounmap(const void __iomem *addr)
+void iounmap(const volatile void __iomem *addr)
{
/* weird double cast to handle phys_addr_t > 32 bits */
if (arc_uncached_addr_space((phys_addr_t)(u32)addr))
status = "okay";
};
+&reg_pu {
+ regulator-always-on;
+};
+
&reg_usb_h1_vbus {
status = "okay";
};
user-pb {
label = "user_pb";
- gpios = <&gsc_gpio 0 GPIO_ACTIVE_LOW>;
+ gpios = <&gsc_gpio 2 GPIO_ACTIVE_LOW>;
linux,code = <BTN_0>;
};
user-pb {
label = "user_pb";
- gpios = <&gsc_gpio 0 GPIO_ACTIVE_LOW>;
+ gpios = <&gsc_gpio 2 GPIO_ACTIVE_LOW>;
linux,code = <BTN_0>;
};
status = "okay";
};
+&reg_pu {
+ regulator-always-on;
+};
+
&reg_usb_h1_vbus {
status = "okay";
};
polling-delay = <0>;
polling-delay-passive = <0>;
thermal-sensors = <&bat_therm>;
+
+ trips {
+ battery-crit-hi {
+ temperature = <70000>;
+ hysteresis = <2000>;
+ type = "critical";
+ };
+ };
};
};
polling-delay = <0>;
polling-delay-passive = <0>;
thermal-sensors = <&bat_therm>;
+
+ trips {
+ battery-crit-hi {
+ temperature = <70000>;
+ hysteresis = <2000>;
+ type = "critical";
+ };
+ };
};
};
polling-delay = <0>;
polling-delay-passive = <0>;
thermal-sensors = <&bat_therm>;
+
+ trips {
+ battery-crit-hi {
+ temperature = <70000>;
+ hysteresis = <2000>;
+ type = "critical";
+ };
+ };
};
};
polling-delay = <0>;
polling-delay-passive = <0>;
thermal-sensors = <&bat_therm>;
+
+ trips {
+ battery-crit-hi {
+ temperature = <70000>;
+ hysteresis = <2000>;
+ type = "critical";
+ };
+ };
};
};
polling-delay = <0>;
polling-delay-passive = <0>;
thermal-sensors = <&bat_therm>;
+
+ trips {
+ battery-crit-hi {
+ temperature = <70000>;
+ hysteresis = <2000>;
+ type = "critical";
+ };
+ };
};
};
polling-delay = <0>;
polling-delay-passive = <0>;
thermal-sensors = <&bat_therm>;
+
+ trips {
+ battery-crit-hi {
+ temperature = <70000>;
+ hysteresis = <2000>;
+ type = "critical";
+ };
+ };
};
};
polling-delay = <0>;
polling-delay-passive = <0>;
thermal-sensors = <&bat_therm>;
+
+ trips {
+ battery-crit-hi {
+ temperature = <70000>;
+ hysteresis = <2000>;
+ type = "critical";
+ };
+ };
};
};
polling-delay = <0>;
polling-delay-passive = <0>;
thermal-sensors = <&bat_therm>;
+
+ trips {
+ battery-crit-hi {
+ temperature = <70000>;
+ hysteresis = <2000>;
+ type = "critical";
+ };
+ };
};
};
polling-delay = <0>;
polling-delay-passive = <0>;
thermal-sensors = <&bat_therm>;
+
+ trips {
+ battery-crit-hi {
+ temperature = <70000>;
+ hysteresis = <2000>;
+ type = "critical";
+ };
+ };
};
};
polling-delay = <1000>;
polling-delay-passive = <100>;
thermal-sensors = <&scpi_sensors0 0>;
+ trips {
+ pmic_crit0: trip0 {
+ temperature = <90000>;
+ hysteresis = <2000>;
+ type = "critical";
+ };
+ };
};
soc {
polling-delay = <1000>;
polling-delay-passive = <100>;
thermal-sensors = <&scpi_sensors0 3>;
+ trips {
+ soc_crit0: trip0 {
+ temperature = <80000>;
+ hysteresis = <2000>;
+ type = "critical";
+ };
+ };
};
big_cluster_thermal_zone: big-cluster {
little-endian;
#address-cells = <1>;
#size-cells = <0>;
+ clock-frequency = <2500000>;
+ clocks = <&clockgen QORIQ_CLK_PLATFORM_PLL
+ QORIQ_CLK_PLL_DIV(1)>;
status = "disabled";
};
little-endian;
#address-cells = <1>;
#size-cells = <0>;
+ clock-frequency = <2500000>;
+ clocks = <&clockgen QORIQ_CLK_PLATFORM_PLL
+ QORIQ_CLK_PLL_DIV(1)>;
status = "disabled";
};
little-endian;
#address-cells = <1>;
#size-cells = <0>;
+ clock-frequency = <2500000>;
+ clocks = <&clockgen QORIQ_CLK_PLATFORM_PLL
+ QORIQ_CLK_PLL_DIV(2)>;
status = "disabled";
};
little-endian;
#address-cells = <1>;
#size-cells = <0>;
+ clock-frequency = <2500000>;
+ clocks = <&clockgen QORIQ_CLK_PLATFORM_PLL
+ QORIQ_CLK_PLL_DIV(2)>;
status = "disabled";
};
#address-cells = <1>;
#size-cells = <0>;
little-endian;
+ clock-frequency = <2500000>;
+ clocks = <&clockgen QORIQ_CLK_PLATFORM_PLL
+ QORIQ_CLK_PLL_DIV(2)>;
status = "disabled";
};
little-endian;
#address-cells = <1>;
#size-cells = <0>;
+ clock-frequency = <2500000>;
+ clocks = <&clockgen QORIQ_CLK_PLATFORM_PLL
+ QORIQ_CLK_PLL_DIV(2)>;
status = "disabled";
};
interrupts = <GIC_SPI 232 IRQ_TYPE_LEVEL_HIGH>;
reg = <0x5b010000 0x10000>;
clocks = <&sdhc0_lpcg IMX_LPCG_CLK_4>,
- <&sdhc0_lpcg IMX_LPCG_CLK_5>,
- <&sdhc0_lpcg IMX_LPCG_CLK_0>;
- clock-names = "ipg", "per", "ahb";
+ <&sdhc0_lpcg IMX_LPCG_CLK_0>,
+ <&sdhc0_lpcg IMX_LPCG_CLK_5>;
+ clock-names = "ipg", "ahb", "per";
power-domains = <&pd IMX_SC_R_SDHC_0>;
status = "disabled";
};
interrupts = <GIC_SPI 233 IRQ_TYPE_LEVEL_HIGH>;
reg = <0x5b020000 0x10000>;
clocks = <&sdhc1_lpcg IMX_LPCG_CLK_4>,
- <&sdhc1_lpcg IMX_LPCG_CLK_5>,
- <&sdhc1_lpcg IMX_LPCG_CLK_0>;
- clock-names = "ipg", "per", "ahb";
+ <&sdhc1_lpcg IMX_LPCG_CLK_0>,
+ <&sdhc1_lpcg IMX_LPCG_CLK_5>;
+ clock-names = "ipg", "ahb", "per";
power-domains = <&pd IMX_SC_R_SDHC_1>;
fsl,tuning-start-tap = <20>;
fsl,tuning-step = <2>;
interrupts = <GIC_SPI 234 IRQ_TYPE_LEVEL_HIGH>;
reg = <0x5b030000 0x10000>;
clocks = <&sdhc2_lpcg IMX_LPCG_CLK_4>,
- <&sdhc2_lpcg IMX_LPCG_CLK_5>,
- <&sdhc2_lpcg IMX_LPCG_CLK_0>;
- clock-names = "ipg", "per", "ahb";
+ <&sdhc2_lpcg IMX_LPCG_CLK_0>,
+ <&sdhc2_lpcg IMX_LPCG_CLK_5>;
+ clock-names = "ipg", "ahb", "per";
power-domains = <&pd IMX_SC_R_SDHC_2>;
status = "disabled";
};
/* SODIMM 96 */
MX8MM_IOMUXC_SAI1_RXD2_GPIO4_IO4 0x1c4
/* CPLD_D[7] */
- MX8MM_IOMUXC_SAI1_RXD3_GPIO4_IO5 0x1c4
+ MX8MM_IOMUXC_SAI1_RXD3_GPIO4_IO5 0x184
/* CPLD_D[6] */
- MX8MM_IOMUXC_SAI1_RXFS_GPIO4_IO0 0x1c4
+ MX8MM_IOMUXC_SAI1_RXFS_GPIO4_IO0 0x184
/* CPLD_D[5] */
- MX8MM_IOMUXC_SAI1_TXC_GPIO4_IO11 0x1c4
+ MX8MM_IOMUXC_SAI1_TXC_GPIO4_IO11 0x184
/* CPLD_D[4] */
- MX8MM_IOMUXC_SAI1_TXD0_GPIO4_IO12 0x1c4
+ MX8MM_IOMUXC_SAI1_TXD0_GPIO4_IO12 0x184
/* CPLD_D[3] */
- MX8MM_IOMUXC_SAI1_TXD1_GPIO4_IO13 0x1c4
+ MX8MM_IOMUXC_SAI1_TXD1_GPIO4_IO13 0x184
/* CPLD_D[2] */
- MX8MM_IOMUXC_SAI1_TXD2_GPIO4_IO14 0x1c4
+ MX8MM_IOMUXC_SAI1_TXD2_GPIO4_IO14 0x184
/* CPLD_D[1] */
- MX8MM_IOMUXC_SAI1_TXD3_GPIO4_IO15 0x1c4
+ MX8MM_IOMUXC_SAI1_TXD3_GPIO4_IO15 0x184
/* CPLD_D[0] */
- MX8MM_IOMUXC_SAI1_TXD4_GPIO4_IO16 0x1c4
+ MX8MM_IOMUXC_SAI1_TXD4_GPIO4_IO16 0x184
/* KBD_intK */
MX8MM_IOMUXC_SAI2_MCLK_GPIO4_IO27 0x1c4
/* DISP_reset */
assigned-clocks = <&clk IMX8MM_CLK_USB_PHY_REF>;
assigned-clock-parents = <&clk IMX8MM_SYS_PLL1_100M>;
clock-names = "main_clk";
+ power-domains = <&pgc_otg1>;
};
usbphynop2: usbphynop2 {
assigned-clocks = <&clk IMX8MM_CLK_USB_PHY_REF>;
assigned-clock-parents = <&clk IMX8MM_SYS_PLL1_100M>;
clock-names = "main_clk";
+ power-domains = <&pgc_otg2>;
};
soc: soc@0 {
pgc_otg1: power-domain@2 {
#power-domain-cells = <0>;
reg = <IMX8MM_POWER_DOMAIN_OTG1>;
- power-domains = <&pgc_hsiomix>;
};
pgc_otg2: power-domain@3 {
#power-domain-cells = <0>;
reg = <IMX8MM_POWER_DOMAIN_OTG2>;
- power-domains = <&pgc_hsiomix>;
};
pgc_gpumix: power-domain@4 {
assigned-clock-parents = <&clk IMX8MM_SYS_PLL2_500M>;
phys = <&usbphynop1>;
fsl,usbmisc = <&usbmisc1 0>;
- power-domains = <&pgc_otg1>;
+ power-domains = <&pgc_hsiomix>;
status = "disabled";
};
assigned-clock-parents = <&clk IMX8MM_SYS_PLL2_500M>;
phys = <&usbphynop2>;
fsl,usbmisc = <&usbmisc2 0>;
- power-domains = <&pgc_otg2>;
+ power-domains = <&pgc_hsiomix>;
status = "disabled";
};
pgc_otg1: power-domain@1 {
#power-domain-cells = <0>;
reg = <IMX8MN_POWER_DOMAIN_OTG1>;
- power-domains = <&pgc_hsiomix>;
};
pgc_gpumix: power-domain@2 {
assigned-clock-parents = <&clk IMX8MN_SYS_PLL2_500M>;
phys = <&usbphynop1>;
fsl,usbmisc = <&usbmisc1 0>;
- power-domains = <&pgc_otg1>;
+ power-domains = <&pgc_hsiomix>;
status = "disabled";
};
assigned-clocks = <&clk IMX8MN_CLK_USB_PHY_REF>;
assigned-clock-parents = <&clk IMX8MN_SYS_PLL1_100M>;
clock-names = "main_clk";
+ power-domains = <&pgc_otg1>;
};
};
"SODIMM_82",
"SODIMM_70",
"SODIMM_72";
-
- ctrl-sleep-moci-hog {
- gpio-hog;
- /* Verdin CTRL_SLEEP_MOCI# (SODIMM 256) */
- gpios = <29 GPIO_ACTIVE_HIGH>;
- line-name = "CTRL_SLEEP_MOCI#";
- output-high;
- pinctrl-names = "default";
- pinctrl-0 = <&pinctrl_ctrl_sleep_moci>;
- };
};
&gpio3 {
"SODIMM_256",
"SODIMM_48",
"SODIMM_44";
+
+ ctrl-sleep-moci-hog {
+ gpio-hog;
+ /* Verdin CTRL_SLEEP_MOCI# (SODIMM 256) */
+ gpios = <29 GPIO_ACTIVE_HIGH>;
+ line-name = "CTRL_SLEEP_MOCI#";
+ output-high;
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_ctrl_sleep_moci>;
+ };
};
/* On-module I2C */
clocks = <&clk IMX93_CLK_GPIO2_GATE>,
<&clk IMX93_CLK_GPIO2_GATE>;
clock-names = "gpio", "port";
- gpio-ranges = <&iomuxc 0 32 32>;
+ gpio-ranges = <&iomuxc 0 4 30>;
};
gpio3: gpio@43820080 {
clocks = <&clk IMX93_CLK_GPIO3_GATE>,
<&clk IMX93_CLK_GPIO3_GATE>;
clock-names = "gpio", "port";
- gpio-ranges = <&iomuxc 0 64 32>;
+ gpio-ranges = <&iomuxc 0 84 8>, <&iomuxc 8 66 18>,
+ <&iomuxc 26 34 2>, <&iomuxc 28 0 4>;
};
gpio4: gpio@43830080 {
clocks = <&clk IMX93_CLK_GPIO4_GATE>,
<&clk IMX93_CLK_GPIO4_GATE>;
clock-names = "gpio", "port";
- gpio-ranges = <&iomuxc 0 96 32>;
+ gpio-ranges = <&iomuxc 0 38 28>, <&iomuxc 28 36 2>;
};
gpio1: gpio@47400080 {
clocks = <&clk IMX93_CLK_GPIO1_GATE>,
<&clk IMX93_CLK_GPIO1_GATE>;
clock-names = "gpio", "port";
- gpio-ranges = <&iomuxc 0 0 32>;
+ gpio-ranges = <&iomuxc 0 92 16>;
};
s4muap: mailbox@47520000 {
reg = <0x47520000 0x10000>;
interrupts = <GIC_SPI 31 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 30 IRQ_TYPE_LEVEL_HIGH>;
- interrupt-names = "txirq", "rxirq";
+ interrupt-names = "tx", "rx";
#mbox-cells = <2>;
};
#ifdef CONFIG_EFI
extern void efi_init(void);
+
+bool efi_runtime_fixup_exception(struct pt_regs *regs, const char *msg);
#else
#define efi_init()
+
+static inline
+bool efi_runtime_fixup_exception(struct pt_regs *regs, const char *msg)
+{
+ return false;
+}
#endif
int efi_create_mapping(struct mm_struct *mm, efi_memory_desc_t *md);
#define KVM_PGTABLE_MAX_LEVELS 4U
+/*
+ * The largest supported block sizes for KVM (no 52-bit PA support):
+ * - 4K (level 1): 1GB
+ * - 16K (level 2): 32MB
+ * - 64K (level 2): 512MB
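+ *
+ * (As a note on the arithmetic: a level-l block covers
+ * PAGE_SHIFT + (3 - l) * (PAGE_SHIFT - 3) bits of address,
+ * e.g. 4K at level 1: 12 + 2 * 9 = 30 bits -> 1GB.)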
+ */
+#ifdef CONFIG_ARM64_4K_PAGES
+#define KVM_PGTABLE_MIN_BLOCK_LEVEL 1U
+#else
+#define KVM_PGTABLE_MIN_BLOCK_LEVEL 2U
+#endif
+
static inline u64 kvm_get_parange(u64 mmfr0)
{
u64 parange = cpuid_feature_extract_unsigned_field(mmfr0,
static inline bool kvm_level_supports_block_mapping(u32 level)
{
- /*
- * Reject invalid block mappings and don't bother with 4TB mappings for
- * 52-bit PAs.
- */
- return !(level == 0 || (PAGE_SIZE != SZ_4K && level == 1));
+ return level >= KVM_PGTABLE_MIN_BLOCK_LEVEL;
}
/**
#include <linux/pgtable.h>
-/*
- * PGDIR_SHIFT determines the size a top-level page table entry can map
- * and depends on the number of levels in the page table. Compute the
- * PGDIR_SHIFT for a given number of levels.
- */
-#define pt_levels_pgdir_shift(lvls) ARM64_HW_PGTABLE_LEVEL_SHIFT(4 - (lvls))
-
/*
* The hardware supports concatenation of up to 16 tables at stage2 entry
* level and we use the feature whenever possible, which means we resolve 4
#define stage2_pgtable_levels(ipa) ARM64_HW_PGTABLE_LEVELS((ipa) - 4)
#define kvm_stage2_levels(kvm) VTCR_EL2_LVLS(kvm->arch.vtcr)
-/* stage2_pgdir_shift() is the size mapped by top-level stage2 entry for the VM */
-#define stage2_pgdir_shift(kvm) pt_levels_pgdir_shift(kvm_stage2_levels(kvm))
-#define stage2_pgdir_size(kvm) (1ULL << stage2_pgdir_shift(kvm))
-#define stage2_pgdir_mask(kvm) ~(stage2_pgdir_size(kvm) - 1)
-
/*
 * kvm_mmu_cache_min_pages() is the number of pages required to install
* a stage-2 translation. We pre-allocate the entry level page table at
*/
#define kvm_mmu_cache_min_pages(kvm) (kvm_stage2_levels(kvm) - 1)
-static inline phys_addr_t
-stage2_pgd_addr_end(struct kvm *kvm, phys_addr_t addr, phys_addr_t end)
-{
- phys_addr_t boundary = (addr + stage2_pgdir_size(kvm)) & stage2_pgdir_mask(kvm);
-
- return (boundary - 1 < end - 1) ? boundary : end;
-}
-
#endif /* __ARM64_S2_PGTABLE_H_ */
ARM64_FTR_END,
};
+static const struct arm64_ftr_bits ftr_mvfr0[] = {
+ ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, MVFR0_FPROUND_SHIFT, 4, 0),
+ ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, MVFR0_FPSHVEC_SHIFT, 4, 0),
+ ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, MVFR0_FPSQRT_SHIFT, 4, 0),
+ ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, MVFR0_FPDIVIDE_SHIFT, 4, 0),
+ ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, MVFR0_FPTRAP_SHIFT, 4, 0),
+ ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, MVFR0_FPDP_SHIFT, 4, 0),
+ ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, MVFR0_FPSP_SHIFT, 4, 0),
+ ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, MVFR0_SIMD_SHIFT, 4, 0),
+ ARM64_FTR_END,
+};
+
+static const struct arm64_ftr_bits ftr_mvfr1[] = {
+ ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, MVFR1_SIMDFMAC_SHIFT, 4, 0),
+ ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, MVFR1_FPHP_SHIFT, 4, 0),
+ ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, MVFR1_SIMDHP_SHIFT, 4, 0),
+ ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, MVFR1_SIMDSP_SHIFT, 4, 0),
+ ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, MVFR1_SIMDINT_SHIFT, 4, 0),
+ ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, MVFR1_SIMDLS_SHIFT, 4, 0),
+ ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, MVFR1_FPDNAN_SHIFT, 4, 0),
+ ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, MVFR1_FPFTZ_SHIFT, 4, 0),
+ ARM64_FTR_END,
+};
+
static const struct arm64_ftr_bits ftr_mvfr2[] = {
ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, MVFR2_FPMISC_SHIFT, 4, 0),
ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, MVFR2_SIMDMISC_SHIFT, 4, 0),
static const struct arm64_ftr_bits ftr_id_isar5[] = {
ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR5_RDM_SHIFT, 4, 0),
- ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR5_CRC32_SHIFT, 4, 0),
- ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR5_SHA2_SHIFT, 4, 0),
- ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR5_SHA1_SHIFT, 4, 0),
- ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR5_AES_SHIFT, 4, 0),
+ ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR5_CRC32_SHIFT, 4, 0),
+ ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR5_SHA2_SHIFT, 4, 0),
+ ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR5_SHA1_SHIFT, 4, 0),
+ ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR5_AES_SHIFT, 4, 0),
ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR5_SEVL_SHIFT, 4, 0),
ARM64_FTR_END,
};
* Common ftr bits for a 32bit register with all hidden, strict
* attributes, with 4bit feature fields and a default safe value of
* 0. Covers the following 32bit registers:
- * id_isar[1-4], id_mmfr[1-3], id_pfr1, mvfr[0-1]
+ * id_isar[1-3], id_mmfr[1-3]
*/
static const struct arm64_ftr_bits ftr_generic_32bits[] = {
ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, 28, 4, 0),
ARM64_FTR_REG(SYS_ID_ISAR6_EL1, ftr_id_isar6),
/* Op1 = 0, CRn = 0, CRm = 3 */
- ARM64_FTR_REG(SYS_MVFR0_EL1, ftr_generic_32bits),
- ARM64_FTR_REG(SYS_MVFR1_EL1, ftr_generic_32bits),
+ ARM64_FTR_REG(SYS_MVFR0_EL1, ftr_mvfr0),
+ ARM64_FTR_REG(SYS_MVFR1_EL1, ftr_mvfr1),
ARM64_FTR_REG(SYS_MVFR2_EL1, ftr_mvfr2),
ARM64_FTR_REG(SYS_ID_PFR2_EL1, ftr_id_pfr2),
ARM64_FTR_REG(SYS_ID_DFR1_EL1, ftr_id_dfr1),
/*
* We emulate only the following system register space.
- * Op0 = 0x3, CRn = 0x0, Op1 = 0x0, CRm = [0, 4 - 7]
+ * Op0 = 0x3, CRn = 0x0, Op1 = 0x0, CRm = [0, 2 - 7]
* See Table C5-6 System instruction encodings for System register accesses,
* ARMv8 ARM(ARM DDI 0487A.f) for more details.
*/
sys_reg_CRn(id) == 0x0 &&
sys_reg_Op1(id) == 0x0 &&
(sys_reg_CRm(id) == 0 ||
- ((sys_reg_CRm(id) >= 4) && (sys_reg_CRm(id) <= 7))));
+ ((sys_reg_CRm(id) >= 2) && (sys_reg_CRm(id) <= 7))));
}
/*
#include <linux/linkage.h>
SYM_FUNC_START(__efi_rt_asm_wrapper)
- stp x29, x30, [sp, #-32]!
+ stp x29, x30, [sp, #-112]!
mov x29, sp
/*
*/
stp x1, x18, [sp, #16]
+ /*
+ * Preserve all callee saved registers and record the stack pointer
+ * value in a per-CPU variable so we can recover from synchronous
+ * exceptions occurring while running the firmware routines.
+ */
+ stp x19, x20, [sp, #32]
+ stp x21, x22, [sp, #48]
+ stp x23, x24, [sp, #64]
+ stp x25, x26, [sp, #80]
+ stp x27, x28, [sp, #96]
+
+ adr_this_cpu x8, __efi_rt_asm_recover_sp, x9
+ str x29, [x8]
+
/*
* We are lucky enough that no EFI runtime services take more than
* 5 arguments, so all are passed in registers rather than via the
ldp x1, x2, [sp, #16]
cmp x2, x18
- ldp x29, x30, [sp], #32
+ ldp x29, x30, [sp], #112
b.ne 0f
ret
0:
mov x18, x2
b efi_handle_corrupted_x18 // tail call
SYM_FUNC_END(__efi_rt_asm_wrapper)
+
+SYM_FUNC_START(__efi_rt_asm_recover)
+ ldr_this_cpu x8, __efi_rt_asm_recover_sp, x9
+ mov sp, x8
+
+ ldp x0, x18, [sp, #16]
+ ldp x19, x20, [sp, #32]
+ ldp x21, x22, [sp, #48]
+ ldp x23, x24, [sp, #64]
+ ldp x25, x26, [sp, #80]
+ ldp x27, x28, [sp, #96]
+ ldp x29, x30, [sp], #112
+
+ b efi_handle_runtime_exception
+SYM_FUNC_END(__efi_rt_asm_recover)
#include <linux/efi.h>
#include <linux/init.h>
+#include <linux/percpu.h>
#include <asm/efi.h>
pr_err_ratelimited(FW_BUG "register x18 corrupted by EFI %s\n", f);
return s;
}
+
+asmlinkage DEFINE_PER_CPU(u64, __efi_rt_asm_recover_sp);
+
+asmlinkage efi_status_t __efi_rt_asm_recover(void);
+
+asmlinkage efi_status_t efi_handle_runtime_exception(const char *f)
+{
+ pr_err(FW_BUG "Synchronous exception occurred in EFI runtime service %s()\n", f);
+ clear_bit(EFI_RUNTIME_SERVICES, &efi.flags);
+ return EFI_ABORTED;
+}
+
+bool efi_runtime_fixup_exception(struct pt_regs *regs, const char *msg)
+{
+ /* Check whether the exception occurred while running the firmware */
+ if (current_work() != &efi_rts_work.work || regs->pc >= TASK_SIZE_64)
+ return false;
+
+ pr_err(FW_BUG "Unable to handle %s in EFI runtime service\n", msg);
+ add_taint(TAINT_FIRMWARE_WORKAROUND, LOCKDEP_STILL_OK);
+ dump_stack();
+
+ regs->pc = (u64)__efi_rt_asm_recover;
+ return true;
+}
__this_cpu_write(__in_cortex_a76_erratum_1463225_wa, 0);
}
-static bool cortex_a76_erratum_1463225_debug_handler(struct pt_regs *regs)
+static __always_inline bool
+cortex_a76_erratum_1463225_debug_handler(struct pt_regs *regs)
{
if (!__this_cpu_read(__in_cortex_a76_erratum_1463225_wa))
return false;
*/
#include <linux/linkage.h>
+#include <linux/cfi_types.h>
#include <asm/asm-offsets.h>
#include <asm/assembler.h>
#include <asm/ftrace.h>
#endif /* CONFIG_FUNCTION_GRAPH_TRACER */
#endif /* CONFIG_DYNAMIC_FTRACE_WITH_REGS */
-SYM_FUNC_START(ftrace_stub)
+SYM_TYPED_FUNC_START(ftrace_stub)
ret
SYM_FUNC_END(ftrace_stub)
+SYM_TYPED_FUNC_START(ftrace_stub_graph)
+ ret
+SYM_FUNC_END(ftrace_stub_graph)
+
#ifdef CONFIG_FUNCTION_GRAPH_TRACER
/*
* void return_to_handler(void)
incdir := $(srctree)/$(src)/include
subdir-asflags-y := -I$(incdir)
-subdir-ccflags-y := -I$(incdir) \
- -fno-stack-protector \
- -DDISABLE_BRANCH_PROFILING \
- $(DISABLE_STACKLEAK_PLUGIN)
+subdir-ccflags-y := -I$(incdir)
obj-$(CONFIG_KVM) += vhe/ nvhe/ pgtable.o
#include <hyp/adjust_pc.h>
#include <linux/kvm_host.h>
#include <asm/kvm_emulate.h>
+#include <asm/kvm_mmu.h>
#if !defined (__KVM_NVHE_HYPERVISOR__) && !defined (__KVM_VHE_HYPERVISOR__)
#error Hypervisor code only!
new |= (old & PSR_C_BIT);
new |= (old & PSR_V_BIT);
- if (kvm_has_mte(vcpu->kvm))
+ if (kvm_has_mte(kern_hyp_va(vcpu->kvm)))
new |= PSR_TCO_BIT;
new |= (old & PSR_DIT_BIT);
vcpu->arch.mdcr_el2_host = read_sysreg(mdcr_el2);
write_sysreg(vcpu->arch.mdcr_el2, mdcr_el2);
+
+ if (cpus_have_final_cap(ARM64_SME)) {
+ sysreg_clear_set_s(SYS_HFGRTR_EL2,
+ HFGxTR_EL2_nSMPRI_EL1_MASK |
+ HFGxTR_EL2_nTPIDR2_EL0_MASK,
+ 0);
+ sysreg_clear_set_s(SYS_HFGWTR_EL2,
+ HFGxTR_EL2_nSMPRI_EL1_MASK |
+ HFGxTR_EL2_nTPIDR2_EL0_MASK,
+ 0);
+ }
}
static inline void __deactivate_traps_common(struct kvm_vcpu *vcpu)
write_sysreg(0, hstr_el2);
if (kvm_arm_support_pmu_v3())
write_sysreg(0, pmuserenr_el0);
+
+ if (cpus_have_final_cap(ARM64_SME)) {
+ sysreg_clear_set_s(SYS_HFGRTR_EL2, 0,
+ HFGxTR_EL2_nSMPRI_EL1_MASK |
+ HFGxTR_EL2_nTPIDR2_EL0_MASK);
+ sysreg_clear_set_s(SYS_HFGWTR_EL2, 0,
+ HFGxTR_EL2_nSMPRI_EL1_MASK |
+ HFGxTR_EL2_nTPIDR2_EL0_MASK);
+ }
}
static inline void ___activate_traps(struct kvm_vcpu *vcpu)
# will explode instantly (Words of Marc Zyngier). So introduce a generic flag
# __DISABLE_TRACE_MMIO__ to disable MMIO tracing for nVHE KVM.
ccflags-y := -D__KVM_NVHE_HYPERVISOR__ -D__DISABLE_EXPORTS -D__DISABLE_TRACE_MMIO__
+ccflags-y += -fno-stack-protector \
+ -DDISABLE_BRANCH_PROFILING \
+ $(DISABLE_STACKLEAK_PLUGIN)
hostprogs := gen-hyprel
HOST_EXTRACFLAGS += -I$(objtree)/include
# Remove ftrace, Shadow Call Stack, and CFI CFLAGS.
# This is equivalent to the 'notrace', '__noscs', and '__nocfi' annotations.
KBUILD_CFLAGS := $(filter-out $(CC_FLAGS_FTRACE) $(CC_FLAGS_SCS) $(CC_FLAGS_CFI), $(KBUILD_CFLAGS))
+# Starting from 13.0.0, llvm emits an SHT_REL section '.llvm.call-graph-profile'
+# when profile optimization is applied. gen-hyprel does not support SHT_REL,
+# which causes a build failure. Remove profile optimization flags.
+KBUILD_CFLAGS := $(filter-out -fprofile-sample-use=% -fprofile-use=%, $(KBUILD_CFLAGS))
# KVM nVHE code is run at a different exception code with a different map, so
# compiler instrumentation that inserts callbacks or checks into the code may
if (!kvm_pte_valid(pte))
return PKVM_NOPAGE;
- return pkvm_getstate(kvm_pgtable_stage2_pte_prot(pte));
+ return pkvm_getstate(kvm_pgtable_hyp_pte_prot(pte));
}
static int __hyp_check_page_state_range(u64 addr, u64 size,
write_sysreg(val, cptr_el2);
write_sysreg(__this_cpu_read(kvm_hyp_vector), vbar_el2);
- if (cpus_have_final_cap(ARM64_SME)) {
- val = read_sysreg_s(SYS_HFGRTR_EL2);
- val &= ~(HFGxTR_EL2_nTPIDR2_EL0_MASK |
- HFGxTR_EL2_nSMPRI_EL1_MASK);
- write_sysreg_s(val, SYS_HFGRTR_EL2);
-
- val = read_sysreg_s(SYS_HFGWTR_EL2);
- val &= ~(HFGxTR_EL2_nTPIDR2_EL0_MASK |
- HFGxTR_EL2_nSMPRI_EL1_MASK);
- write_sysreg_s(val, SYS_HFGWTR_EL2);
- }
-
if (cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT)) {
struct kvm_cpu_context *ctxt = &vcpu->arch.ctxt;
write_sysreg(this_cpu_ptr(&kvm_init_params)->hcr_el2, hcr_el2);
- if (cpus_have_final_cap(ARM64_SME)) {
- u64 val;
-
- val = read_sysreg_s(SYS_HFGRTR_EL2);
- val |= HFGxTR_EL2_nTPIDR2_EL0_MASK |
- HFGxTR_EL2_nSMPRI_EL1_MASK;
- write_sysreg_s(val, SYS_HFGRTR_EL2);
-
- val = read_sysreg_s(SYS_HFGWTR_EL2);
- val |= HFGxTR_EL2_nTPIDR2_EL0_MASK |
- HFGxTR_EL2_nSMPRI_EL1_MASK;
- write_sysreg_s(val, SYS_HFGWTR_EL2);
- }
-
cptr = CPTR_EL2_DEFAULT;
if (vcpu_has_sve(vcpu) && (vcpu->arch.fp_state == FP_STATE_GUEST_OWNED))
cptr |= CPTR_EL2_TZ;
__activate_traps_fpsimd32(vcpu);
}
- if (cpus_have_final_cap(ARM64_SME))
- write_sysreg(read_sysreg(sctlr_el2) & ~SCTLR_ELx_ENTP2,
- sctlr_el2);
-
write_sysreg(val, cpacr_el1);
write_sysreg(__this_cpu_read(kvm_hyp_vector), vbar_el1);
*/
asm(ALTERNATIVE("nop", "isb", ARM64_WORKAROUND_SPECULATIVE_AT));
- if (cpus_have_final_cap(ARM64_SME))
- write_sysreg(read_sysreg(sctlr_el2) | SCTLR_ELx_ENTP2,
- sctlr_el2);
-
write_sysreg(CPACR_EL1_DEFAULT, cpacr_el1);
if (!arm64_kernel_unmapped_at_el0())
static unsigned long io_map_base;
+static phys_addr_t stage2_range_addr_end(phys_addr_t addr, phys_addr_t end)
+{
+ phys_addr_t size = kvm_granule_size(KVM_PGTABLE_MIN_BLOCK_LEVEL);
+ phys_addr_t boundary = ALIGN_DOWN(addr + size, size);
+
+ return (boundary - 1 < end - 1) ? boundary : end;
+}
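
The "boundary - 1 < end - 1" comparison deserves a note: it is the usual kernel trick for ranges whose exclusive end may have wrapped to 0, meaning "top of the address space". Subtracting 1 from both sides makes the compare inclusive, so end == 0 orders after every boundary. A standalone, hedged sketch:

/* Sketch: min(boundary, end), treating end == 0 as 2^64. */
static u64 range_addr_end_sketch(u64 addr, u64 size, u64 end)
{
	u64 boundary = ALIGN_DOWN(addr + size, size);	/* next block boundary */

	/* boundary - 1 and end - 1 are the last addresses of each range,
	 * so a wrapped end (0) becomes ~0 and correctly orders last. */
	return (boundary - 1 < end - 1) ? boundary : end;
}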
/*
* Release kvm_mmu_lock periodically if the memory region is large. Otherwise,
if (!pgt)
return -EINVAL;
- next = stage2_pgd_addr_end(kvm, addr, end);
+ next = stage2_range_addr_end(addr, end);
ret = fn(pgt, addr, next - addr);
if (ret)
break;
memset(entry, 0, esz);
- while (len > 0) {
+ while (true) {
int next_offset;
size_t byte_offset;
return next_offset;
byte_offset = next_offset * esz;
+ if (byte_offset >= len)
+ break;
+
id += next_offset;
gpa += byte_offset;
len -= byte_offset;
#include <asm/bug.h>
#include <asm/cmpxchg.h>
#include <asm/cpufeature.h>
+#include <asm/efi.h>
#include <asm/exception.h>
#include <asm/daifflags.h>
#include <asm/debug-monitors.h>
msg = "paging request";
}
+ if (efi_runtime_fixup_exception(regs, msg))
+ return;
+
die_kernel_fault(msg, addr, esr, regs);
}
unsigned long __get_wchan(struct task_struct *p);
#define __KSTK_TOS(tsk) ((unsigned long)task_stack_page(tsk) + \
- THREAD_SIZE - 32 - sizeof(struct pt_regs))
+ THREAD_SIZE - sizeof(struct pt_regs))
#define task_pt_regs(tsk) ((struct pt_regs *)__KSTK_TOS(tsk))
#define KSTK_EIP(tsk) (task_pt_regs(tsk)->csr_era)
#define KSTK_ESP(tsk) (task_pt_regs(tsk)->regs[3])
unsigned long csr_euen;
unsigned long csr_ecfg;
unsigned long csr_estat;
- unsigned long __last[0];
+ unsigned long __last[];
} __aligned(8);
static inline int regs_irqs_disabled(struct pt_regs *regs)
#define current_pt_regs() \
({ \
unsigned long sp = (unsigned long)__builtin_frame_address(0); \
- (struct pt_regs *)((sp | (THREAD_SIZE - 1)) + 1 - 32) - 1; \
+ (struct pt_regs *)((sp | (THREAD_SIZE - 1)) + 1) - 1; \
})
/* Helpers for working with the user stack pointer */
la.pcrel tp, init_thread_union
/* Set the SP after an empty pt_regs. */
- PTR_LI sp, (_THREAD_SIZE - 32 - PT_SIZE)
+ PTR_LI sp, (_THREAD_SIZE - PT_SIZE)
PTR_ADD sp, sp, tp
set_saved_sp sp, t0, t1
- PTR_ADDI sp, sp, -4 * SZREG # init stack pointer
bl start_kernel
ASM_BUG()
unsigned long clone_flags = args->flags;
struct pt_regs *childregs, *regs = current_pt_regs();
- childksp = (unsigned long)task_stack_page(p) + THREAD_SIZE - 32;
+ childksp = (unsigned long)task_stack_page(p) + THREAD_SIZE;
/* set up new TSS. */
childregs = (struct pt_regs *) childksp - 1;
struct stack_info *info)
{
unsigned long begin = (unsigned long)task_stack_page(task);
- unsigned long end = begin + THREAD_SIZE - 32;
+ unsigned long end = begin + THREAD_SIZE;
if (stack < begin || stack >= end)
return false;
move tp, a2
cpu_restore_nonscratch a1
- li.w t0, _THREAD_SIZE - 32
+ li.w t0, _THREAD_SIZE
PTR_ADD t0, t0, tp
set_saved_sp t0, t1, t2
const u8 t1 = LOONGARCH_GPR_T1;
const u8 t2 = LOONGARCH_GPR_T2;
const u8 t3 = LOONGARCH_GPR_T3;
+ const u8 r0 = regmap[BPF_REG_0];
const u8 src = regmap[insn->src_reg];
const u8 dst = regmap[insn->dst_reg];
const s16 off = insn->off;
break;
/* r0 = atomic_cmpxchg(dst + off, r0, src); */
case BPF_CMPXCHG:
- u8 r0 = regmap[BPF_REG_0];
-
move_reg(ctx, t2, r0);
if (isdw) {
emit_insn(ctx, lld, r0, t1, 0);
static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx, bool extra_pass)
{
- const bool is32 = BPF_CLASS(insn->code) == BPF_ALU ||
- BPF_CLASS(insn->code) == BPF_JMP32;
+ u8 tm = -1;
+ u64 func_addr;
+ bool func_addr_fixed;
+ int i = insn - ctx->prog->insnsi;
+ int ret, jmp_offset;
const u8 code = insn->code;
const u8 cond = BPF_OP(code);
const u8 t1 = LOONGARCH_GPR_T1;
const u8 dst = regmap[insn->dst_reg];
const s16 off = insn->off;
const s32 imm = insn->imm;
- int jmp_offset;
- int i = insn - ctx->prog->insnsi;
+ const u64 imm64 = (u64)(insn + 1)->imm << 32 | (u32)insn->imm;
+ const bool is32 = BPF_CLASS(insn->code) == BPF_ALU || BPF_CLASS(insn->code) == BPF_JMP32;
switch (code) {
/* dst = src */
case BPF_JMP32 | BPF_JSGE | BPF_K:
case BPF_JMP32 | BPF_JSLT | BPF_K:
case BPF_JMP32 | BPF_JSLE | BPF_K:
- u8 t7 = -1;
jmp_offset = bpf2la_offset(i, off, ctx);
if (imm) {
move_imm(ctx, t1, imm, false);
- t7 = t1;
+ tm = t1;
} else {
/* If imm is 0, simply use zero register. */
- t7 = LOONGARCH_GPR_ZERO;
+ tm = LOONGARCH_GPR_ZERO;
}
move_reg(ctx, t2, dst);
if (is_signed_bpf_cond(BPF_OP(code))) {
- emit_sext_32(ctx, t7, is32);
+ emit_sext_32(ctx, tm, is32);
emit_sext_32(ctx, t2, is32);
} else {
- emit_zext_32(ctx, t7, is32);
+ emit_zext_32(ctx, tm, is32);
emit_zext_32(ctx, t2, is32);
}
- if (emit_cond_jmp(ctx, cond, t2, t7, jmp_offset) < 0)
+ if (emit_cond_jmp(ctx, cond, t2, tm, jmp_offset) < 0)
goto toofar;
break;
/* function call */
case BPF_JMP | BPF_CALL:
- int ret;
- u64 func_addr;
- bool func_addr_fixed;
-
mark_call(ctx);
ret = bpf_jit_get_func_addr(ctx->prog, insn, extra_pass,
&func_addr, &func_addr_fixed);
/* dst = imm64 */
case BPF_LD | BPF_IMM | BPF_DW:
- u64 imm64 = (u64)(insn + 1)->imm << 32 | (u32)insn->imm;
-
move_imm(ctx, dst, imm64, is32);
return 1;
#define SVERSION_ANY_ID PA_SVERSION_ANY_ID
struct hp_hardware {
- unsigned short hw_type:5; /* HPHW_xxx */
- unsigned short hversion;
- unsigned long sversion:28;
- unsigned short opt;
- const char name[80]; /* The hardware description */
-};
+ unsigned int hw_type:8; /* HPHW_xxx */
+ unsigned int hversion:12;
+ unsigned int sversion:12;
+ unsigned char opt;
+ unsigned char name[59]; /* The hardware description */
+} __packed;
struct parisc_device;
#if !defined(__ASSEMBLY__)
-/* flags of the device_path */
+/* flags for hardware_path */
#define PF_AUTOBOOT 0x80
#define PF_AUTOSEARCH 0x40
#define PF_TIMER 0x0F
-struct device_path { /* page 1-69 */
- unsigned char flags; /* flags see above! */
- unsigned char bc[6]; /* bus converter routing info */
- unsigned char mod;
- unsigned int layers[6];/* device-specific layer-info */
-} __attribute__((aligned(8))) ;
+struct hardware_path {
+ unsigned char flags; /* see bit definitions below */
+ signed char bc[6]; /* Bus Converter routing info to a specific */
+ /* I/O adaptor (< 0 means none, > 63 resvd) */
+ signed char mod; /* fixed field of specified module */
+};
+
+struct pdc_module_path { /* page 1-69 */
+ struct hardware_path path;
+ unsigned int layers[6]; /* device-specific info (ctlr #, unit # ...) */
+} __attribute__((aligned(8)));
struct pz_device {
- struct device_path dp; /* see above */
+ struct pdc_module_path dp; /* see above */
/* struct iomod *hpa; */
unsigned int hpa; /* HPA base address */
/* char *spa; */
int mode;
};
-struct hardware_path {
- char flags; /* see bit definitions below */
- char bc[6]; /* Bus Converter routing info to a specific */
- /* I/O adaptor (< 0 means none, > 63 resvd) */
- char mod; /* fixed field of specified module */
-};
-
-/*
- * Device path specifications used by PDC.
- */
-struct pdc_module_path {
- struct hardware_path path;
- unsigned int layers[6]; /* device-specific info (ctlr #, unit # ...) */
-};
-
/* Only used on some pre-PA2.0 boxes */
struct pdc_memory_map { /* PDC_MEMORY_MAP */
unsigned long hpa; /* mod's register set address */
&root);
}
-static void print_parisc_device(struct parisc_device *dev)
+static __init void print_parisc_device(struct parisc_device *dev)
{
- char hw_path[64];
- static int count;
+ static int count __initdata;
- print_pa_hwpath(dev, hw_path);
- pr_info("%d. %s at %pap [%s] { %d, 0x%x, 0x%.3x, 0x%.5x }",
- ++count, dev->name, &(dev->hpa.start), hw_path, dev->id.hw_type,
- dev->id.hversion_rev, dev->id.hversion, dev->id.sversion);
+ pr_info("%d. %s at %pap { type:%d, hv:%#x, sv:%#x, rev:%#x }",
+ ++count, dev->name, &(dev->hpa.start), dev->id.hw_type,
+ dev->id.hversion, dev->id.sversion, dev->id.hversion_rev);
if (dev->num_addrs) {
int k;
-static int print_one_device(struct device * dev, void * data)
+static __init int print_one_device(struct device * dev, void * data)
{
struct parisc_device * pdev = to_parisc_device(dev);
select ARCH_MIGHT_HAVE_PC_SERIO
select ARCH_OPTIONAL_KERNEL_RWX if ARCH_HAS_STRICT_KERNEL_RWX
select ARCH_OPTIONAL_KERNEL_RWX_DEFAULT
+ select ARCH_SPLIT_ARG64 if PPC32
select ARCH_STACKWALK
select ARCH_SUPPORTS_ATOMIC_RMW
select ARCH_SUPPORTS_DEBUG_PAGEALLOC if PPC_BOOK3S || PPC_8xx || 40x
#
config PPC_LONG_DOUBLE_128
- depends on PPC64
+ depends on PPC64 && ALTIVEC
def_bool $(success,test "$(shell,echo __LONG_DOUBLE_128__ | $(CC) -E -P -)" = 1)
config PPC_BARRIER_NOSPEC
if (radix_enabled())
return;
+ /*
+	 * apply_to_page_range() can call us with preempt enabled when
+	 * operating on kernel page tables.
+ */
+ preempt_disable();
batch = this_cpu_ptr(&ppc64_tlb_batch);
batch->active = 1;
}
if (batch->index)
__flush_tlb_pending(batch);
batch->active = 0;
+ preempt_enable();
}
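
The preempt_disable()/preempt_enable() pair added above exists because this_cpu_ptr() is only meaningful while the task cannot migrate, and apply_to_page_range() may call these hooks with preemption enabled. A hedged sketch of the general rule (the per-CPU variable name is illustrative):

static DEFINE_PER_CPU(struct ppc64_tlb_batch, example_batch);	/* illustrative */

static void touch_per_cpu_batch(void)
{
	struct ppc64_tlb_batch *batch;

	preempt_disable();			/* pin to the current CPU...  */
	batch = this_cpu_ptr(&example_batch);	/* pointer stays valid        */
	batch->active = 1;
	preempt_enable();			/* ...until here              */
}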
#define arch_flush_lazy_mmu_mode() do {} while (0)
unsigned long len1, unsigned long len2);
long sys_ppc32_fadvise64(int fd, u32 unused, u32 offset1, u32 offset2,
size_t len, int advice);
+long sys_ppc_sync_file_range2(int fd, unsigned int flags,
+ unsigned int offset1,
+ unsigned int offset2,
+ unsigned int nbytes1,
+ unsigned int nbytes2);
+long sys_ppc_fallocate(int fd, int mode, u32 offset1, u32 offset2,
+ u32 len1, u32 len2);
#endif
#ifdef CONFIG_COMPAT
long compat_sys_mmap2(unsigned long addr, size_t len,
EXCEPTION_COMMON(0x260)
CHECK_NAPPING()
addi r3,r1,STACK_FRAME_OVERHEAD
+ /*
+ * XXX: Returning from performance_monitor_exception taken as a
+	 * soft-NMI (Linux irqs disabled) via interrupt_return may be risky
+	 * and could cause bugs in the return path or elsewhere. That case should just
+ * restore registers and return. There is a workaround for one known
+ * problem in interrupt_exit_kernel_prepare().
+ */
bl performance_monitor_exception
b interrupt_return
EXC_COMMON_BEGIN(performance_monitor_common)
GEN_COMMON performance_monitor
addi r3,r1,STACK_FRAME_OVERHEAD
- bl performance_monitor_exception
+ lbz r4,PACAIRQSOFTMASK(r13)
+ cmpdi r4,IRQS_ENABLED
+ bne 1f
+ bl performance_monitor_exception_async
b interrupt_return_srr
+1:
+ bl performance_monitor_exception_nmi
+ /* Clear MSR_RI before setting SRR0 and SRR1. */
+ li r9,0
+ mtmsrd r9,1
+ kuap_kernel_restore r9, r10
+
+ EXCEPTION_RESTORE_REGS hsrr=0
+ RFI_TO_KERNEL
/**
* Interrupt 0xf20 - Vector Unavailable Interrupt.
if (regs_is_unrecoverable(regs))
unrecoverable_exception(regs);
/*
- * CT_WARN_ON comes here via program_check_exception,
- * so avoid recursion.
+ * CT_WARN_ON comes here via program_check_exception, so avoid
+ * recursion.
+ *
+ * Skip the assertion on PMIs on 64e to work around a problem caused
+	 * by NMI PMIs incorrectly taking this interrupt return path; it's
+	 * possible for this to be hit after interrupt exit to user has
+	 * switched context to user. See also the comment in the performance monitor
+ * handler in exceptions-64e.S
*/
- if (TRAP(regs) != INTERRUPT_PROGRAM)
+ if (!IS_ENABLED(CONFIG_PPC_BOOK3E_64) &&
+ TRAP(regs) != INTERRUPT_PROGRAM &&
+ TRAP(regs) != INTERRUPT_PERFMON)
CT_WARN_ON(ct_state() == CONTEXT_USER);
kuap = kuap_get_and_assert_locked();
* Returning to soft-disabled context.
* Check if a MUST_HARD_MASK interrupt has become pending, in which
* case we need to disable MSR[EE] in the return context.
+ *
+ * The MSR[EE] check catches among other things the short incoherency
+ * in hard_irq_disable() between clearing MSR[EE] and setting
+ * PACA_IRQ_HARD_DIS.
*/
ld r12,_MSR(r1)
andi. r10,r12,MSR_EE
beq .Lfast_kernel_interrupt_return_\srr\() // EE already disabled
lbz r11,PACAIRQHAPPENED(r13)
andi. r10,r11,PACA_IRQ_MUST_HARD_MASK
- beq .Lfast_kernel_interrupt_return_\srr\() // No HARD_MASK pending
+ bne 1f // HARD_MASK is pending
+ // No HARD_MASK pending, clear possible HARD_DIS set by interrupt
+ andi. r11,r11,(~PACA_IRQ_HARD_DIS)@l
+ stb r11,PACAIRQHAPPENED(r13)
+ b .Lfast_kernel_interrupt_return_\srr\()
+
- /* Must clear MSR_EE from _MSR */
+1: /* Must clear MSR_EE from _MSR */
#ifdef CONFIG_PPC_BOOK3S
li r10,0
/* Clear valid before changing _MSR */
advice);
}
-COMPAT_SYSCALL_DEFINE6(ppc_sync_file_range2,
+PPC32_SYSCALL_DEFINE6(ppc_sync_file_range2,
int, fd, unsigned int, flags,
unsigned int, offset1, unsigned int, offset2,
unsigned int, nbytes1, unsigned int, nbytes2)
return ksys_sync_file_range(fd, offset, nbytes, flags);
}
+
+#ifdef CONFIG_PPC32
+SYSCALL_DEFINE6(ppc_fallocate,
+ int, fd, int, mode,
+ u32, offset1, u32, offset2, u32, len1, u32, len2)
+{
+ return ksys_fallocate(fd, mode,
+ merge_64(offset1, offset2),
+ merge_64(len1, len2));
+}
+#endif
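
The new PPC32 entry point exists because 32-bit userspace passes each 64-bit argument as two 32-bit register halves, which merge_64() glues back together. A hedged sketch of what such a helper does (powerpc's real macro also accounts for ABI register pairing):

/* Sketch only: reassemble a 64-bit value from two 32-bit halves. */
static inline u64 merge_64_sketch(u32 high, u32 low)
{
	return ((u64)high << 32) | low;
}

/* e.g. offset = merge_64_sketch(offset1, offset2) in the syscall body. */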
305 common signalfd sys_signalfd compat_sys_signalfd
306 common timerfd_create sys_timerfd_create
307 common eventfd sys_eventfd
-308 common sync_file_range2 sys_sync_file_range2 compat_sys_ppc_sync_file_range2
-309 nospu fallocate sys_fallocate compat_sys_fallocate
+308 32 sync_file_range2 sys_ppc_sync_file_range2 compat_sys_ppc_sync_file_range2
+308 64 sync_file_range2 sys_sync_file_range2
+308 spu sync_file_range2 sys_sync_file_range2
+309 32 fallocate sys_ppc_fallocate compat_sys_fallocate
+309 64 fallocate sys_fallocate
310 nospu subpage_prot sys_subpage_prot
311 32 timerfd_settime sys_timerfd_settime32
311 64 timerfd_settime sys_timerfd_settime
config KVM_BOOK3S_32
tristate "KVM support for PowerPC book3s_32 processors"
depends on PPC_BOOK3S_32 && !SMP && !PTE_64BIT
+ depends on !CONTEXT_TRACKING_USER
select KVM
select KVM_BOOK3S_32_HANDLER
select KVM_BOOK3S_PR_POSSIBLE
config KVM_BOOK3S_64_PR
tristate "KVM support without using hypervisor mode in host"
depends on KVM_BOOK3S_64
+ depends on !CONTEXT_TRACKING_USER
select KVM_BOOK3S_PR_POSSIBLE
help
Support running guest kernels in virtual machines on processors
config KVM_E500V2
bool "KVM support for PowerPC E500v2 processors"
depends on PPC_E500 && !PPC_E500MC
+ depends on !CONTEXT_TRACKING_USER
select KVM
select KVM_MMIO
select MMU_NOTIFIER
config KVM_E500MC
bool "KVM support for PowerPC E500MC/E5500/E6500 processors"
depends on PPC_E500MC
+ depends on !CONTEXT_TRACKING_USER
select KVM
select KVM_MMIO
select KVM_BOOKE_HV
{
disable_kernel_altivec();
pagefault_enable();
- preempt_enable();
+ preempt_enable_no_resched();
+ /*
+ * Must never explicitly call schedule (including preempt_enable())
+ * while in a kuap-unlocked user copy, because the AMR register will
+	 * not be saved and restored across a context switch. However, preempt
+	 * kernels need to be preempted as soon as possible if need_resched is
+ * set and we are preemptible. The hack here is to schedule a
+ * decrementer to fire here and reschedule for us if necessary.
+ */
+ if (IS_ENABLED(CONFIG_PREEMPT) && need_resched())
+ set_dec(1);
return 0;
}
static DEFINE_RAW_SPINLOCK(native_tlbie_lock);
+#ifdef CONFIG_LOCKDEP
+static struct lockdep_map hpte_lock_map =
+ STATIC_LOCKDEP_MAP_INIT("hpte_lock", &hpte_lock_map);
+
+static void acquire_hpte_lock(void)
+{
+ lock_map_acquire(&hpte_lock_map);
+}
+
+static void release_hpte_lock(void)
+{
+ lock_map_release(&hpte_lock_map);
+}
+#else
+static void acquire_hpte_lock(void)
+{
+}
+
+static void release_hpte_lock(void)
+{
+}
+#endif
+
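The hunk above is a common way to give lockdep visibility into a hand-rolled lock, here a bit-lock embedded in the HPTE itself: a static struct lockdep_map stands in for the lock, and lock_map_acquire()/lock_map_release() are called wherever the real lock is taken and dropped. A hedged sketch of the pattern with illustrative names:

#include <linux/lockdep.h>

static struct lockdep_map my_lock_map =
	STATIC_LOCKDEP_MAP_INIT("my_lock", &my_lock_map);

static void my_lock(unsigned long *word)
{
	lock_map_acquire(&my_lock_map);		/* lockdep records the acquire */
	while (test_and_set_bit_lock(0, word))	/* the real bit-lock           */
		cpu_relax();
}

static void my_unlock(unsigned long *word)
{
	lock_map_release(&my_lock_map);		/* annotate before releasing   */
	clear_bit_unlock(0, word);
}

Lockdep then checks this pseudo-lock against every other lock class, which is what lets the local_irq_save() additions below be validated around the HPTE lock.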
static inline unsigned long ___tlbie(unsigned long vpn, int psize,
int apsize, int ssize)
{
{
unsigned long *word = (unsigned long *)&hptep->v;
+ acquire_hpte_lock();
while (1) {
if (!test_and_set_bit_lock(HPTE_LOCK_BIT, word))
break;
{
unsigned long *word = (unsigned long *)&hptep->v;
+ release_hpte_lock();
clear_bit_unlock(HPTE_LOCK_BIT, word);
}
{
struct hash_pte *hptep = htab_address + hpte_group;
unsigned long hpte_v, hpte_r;
+ unsigned long flags;
int i;
+ local_irq_save(flags);
+
if (!(vflags & HPTE_V_BOLTED)) {
DBG_LOW(" insert(group=%lx, vpn=%016lx, pa=%016lx,"
" rflags=%lx, vflags=%lx, psize=%d)\n",
hptep++;
}
- if (i == HPTES_PER_GROUP)
+ if (i == HPTES_PER_GROUP) {
+ local_irq_restore(flags);
return -1;
+ }
hpte_v = hpte_encode_v(vpn, psize, apsize, ssize) | vflags | HPTE_V_VALID;
hpte_r = hpte_encode_r(pa, psize, apsize) | rflags;
* Now set the first dword including the valid bit
* NOTE: this also unlocks the hpte
*/
+ release_hpte_lock();
hptep->v = cpu_to_be64(hpte_v);
__asm__ __volatile__ ("ptesync" : : : "memory");
+ local_irq_restore(flags);
+
return i | (!!(vflags & HPTE_V_SECONDARY) << 3);
}
return -1;
/* Invalidate the hpte. NOTE: this also unlocks it */
+ release_hpte_lock();
hptep->v = 0;
return i;
struct hash_pte *hptep = htab_address + slot;
unsigned long hpte_v, want_v;
int ret = 0, local = 0;
+ unsigned long irqflags;
+
+ local_irq_save(irqflags);
want_v = hpte_encode_avpn(vpn, bpsize, ssize);
if (!(flags & HPTE_NOHPTE_UPDATE))
tlbie(vpn, bpsize, apsize, ssize, local);
+ local_irq_restore(irqflags);
+
return ret;
}
unsigned long vsid;
long slot;
struct hash_pte *hptep;
+ unsigned long flags;
+
+ local_irq_save(flags);
vsid = get_kernel_vsid(ea, ssize);
vpn = hpt_vpn(ea, vsid, ssize);
* actual page size will be same.
*/
tlbie(vpn, psize, psize, ssize, 0);
+
+ local_irq_restore(flags);
}
/*
unsigned long vsid;
long slot;
struct hash_pte *hptep;
+ unsigned long flags;
+
+ local_irq_save(flags);
vsid = get_kernel_vsid(ea, ssize);
vpn = hpt_vpn(ea, vsid, ssize);
/* Invalidate the TLB */
tlbie(vpn, psize, psize, ssize, 0);
+
+ local_irq_restore(flags);
+
return 0;
}
/* recheck with locks held */
hpte_v = hpte_get_old_v(hptep);
- if (HPTE_V_COMPARE(hpte_v, want_v) && (hpte_v & HPTE_V_VALID))
+ if (HPTE_V_COMPARE(hpte_v, want_v) && (hpte_v & HPTE_V_VALID)) {
/* Invalidate the hpte. NOTE: this also unlocks it */
+ release_hpte_lock();
hptep->v = 0;
- else
+ } else
native_unlock_hpte(hptep);
}
/*
hpte_v = hpte_get_old_v(hptep);
if (HPTE_V_COMPARE(hpte_v, want_v) && (hpte_v & HPTE_V_VALID)) {
- /*
- * Invalidate the hpte. NOTE: this also unlocks it
- */
-
+ /* Invalidate the hpte. NOTE: this also unlocks it */
+ release_hpte_lock();
hptep->v = 0;
} else
native_unlock_hpte(hptep);
if (!HPTE_V_COMPARE(hpte_v, want_v) || !(hpte_v & HPTE_V_VALID))
native_unlock_hpte(hptep);
- else
+ else {
+ release_hpte_lock();
hptep->v = 0;
+ }
} pte_iterate_hashed_end();
}
struct change_memory_parms {
unsigned long start, end, newpp;
- unsigned int step, nr_cpus, master_cpu;
+ unsigned int step, nr_cpus;
+ atomic_t master_cpu;
atomic_t cpu_counter;
};
{
struct change_memory_parms *parms = data;
- if (parms->master_cpu != smp_processor_id())
+ // First CPU goes through, all others wait.
+ if (atomic_xchg(&parms->master_cpu, 1) == 1)
return chmem_secondary_loop(parms);
// Wait for all but one CPU (this one) to call-in
chmem_parms.end = end;
chmem_parms.step = step;
chmem_parms.newpp = newpp;
- chmem_parms.master_cpu = smp_processor_id();
+ atomic_set(&chmem_parms.master_cpu, 0);
cpus_read_lock();
}
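
The atomic_xchg() election above replaces a racy "compare against a pre-recorded master CPU" scheme: every CPU entering the callback swaps 1 into master_cpu, and only the first swap observes the initial 0. A hedged, self-contained sketch (the loop names are hypothetical):

static atomic_t master = ATOMIC_INIT(0);

static int stop_fn_sketch(void *data)
{
	/* First CPU through wins; atomic_xchg() returns the old value. */
	if (atomic_xchg(&master, 1) == 1)
		return secondary_loop(data);	/* hypothetical: wait/assist */

	return master_loop(data);		/* hypothetical: do the work */
}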
#if defined(CONFIG_DEBUG_PAGEALLOC) || defined(CONFIG_KFENCE)
-static DEFINE_SPINLOCK(linear_map_hash_lock);
+static DEFINE_RAW_SPINLOCK(linear_map_hash_lock);
static void kernel_map_linear_page(unsigned long vaddr, unsigned long lmi)
{
mmu_linear_psize, mmu_kernel_ssize);
BUG_ON (ret < 0);
- spin_lock(&linear_map_hash_lock);
+ raw_spin_lock(&linear_map_hash_lock);
BUG_ON(linear_map_hash_slots[lmi] & 0x80);
linear_map_hash_slots[lmi] = ret | 0x80;
- spin_unlock(&linear_map_hash_lock);
+ raw_spin_unlock(&linear_map_hash_lock);
}
static void kernel_unmap_linear_page(unsigned long vaddr, unsigned long lmi)
unsigned long vpn = hpt_vpn(vaddr, vsid, mmu_kernel_ssize);
hash = hpt_hash(vpn, PAGE_SHIFT, mmu_kernel_ssize);
- spin_lock(&linear_map_hash_lock);
+ raw_spin_lock(&linear_map_hash_lock);
if (!(linear_map_hash_slots[lmi] & 0x80)) {
- spin_unlock(&linear_map_hash_lock);
+ raw_spin_unlock(&linear_map_hash_lock);
return;
}
hidx = linear_map_hash_slots[lmi] & 0x7f;
linear_map_hash_slots[lmi] = 0;
- spin_unlock(&linear_map_hash_lock);
+ raw_spin_unlock(&linear_map_hash_lock);
if (hidx & _PTEIDX_SECONDARY)
hash = ~hash;
slot = (hash & htab_hash_mask) * HPTES_PER_GROUP;
#include <asm/drmem.h>
#include "pseries.h"
+#include "vas.h" /* pseries_vas_dlpar_cpu() */
/*
* This isn't a module but we expose that to userspace
return -EINVAL;
retval = update_ppp(new_entitled_ptr, NULL);
+
+ if (retval == H_SUCCESS || retval == H_CONSTRAINED) {
+ /*
+ * The hypervisor assigns VAS resources based
+ * on entitled capacity for shared mode.
+ * Reconfig VAS windows based on DLPAR CPU events.
+ */
+ if (pseries_vas_dlpar_cpu() != 0)
+ retval = H_HARDWARE;
+ }
} else if (!strcmp(kbuf, "capacity_weight")) {
char *endp;
*new_weight_ptr = (u8) simple_strtoul(tmp, &endp, 10);
struct vas_user_win_ref *tsk_ref;
int rc;
- rc = h_get_nx_fault(txwin->vas_win.winid, (u64)virt_to_phys(&crb));
- if (!rc) {
- tsk_ref = &txwin->vas_win.task_ref;
- vas_dump_crb(&crb);
- vas_update_csb(&crb, tsk_ref);
+ while (atomic_read(&txwin->pending_faults)) {
+ rc = h_get_nx_fault(txwin->vas_win.winid, (u64)virt_to_phys(&crb));
+ if (!rc) {
+ tsk_ref = &txwin->vas_win.task_ref;
+ vas_dump_crb(&crb);
+ vas_update_csb(&crb, tsk_ref);
+ }
+ atomic_dec(&txwin->pending_faults);
}
return IRQ_HANDLED;
}
+/*
+ * irq_default_primary_handler() can be used only with IRQF_ONESHOT
+ * which disables the IRQ before executing the thread handler and
+ * enables it afterwards. But disabling the interrupt sets the VAS IRQ
+ * OFF state in the hypervisor. If the NX generates a fault interrupt
+ * during this window, the hypervisor will not deliver it to the LPAR.
+ * So use a VAS-specific IRQ handler instead of calling the default
+ * primary handler.
+ */
+static irqreturn_t pseries_vas_irq_handler(int irq, void *data)
+{
+ struct pseries_vas_window *txwin = data;
+
+ /*
+	 * The thread handler will process this interrupt if it is
+ * already running.
+ */
+ atomic_inc(&txwin->pending_faults);
+
+ return IRQ_WAKE_THREAD;
+}
+
/*
* Allocate window and setup IRQ mapping.
*/
goto out_irq;
}
- rc = request_threaded_irq(txwin->fault_virq, NULL,
- pseries_vas_fault_thread_fn, IRQF_ONESHOT,
+ rc = request_threaded_irq(txwin->fault_virq,
+ pseries_vas_irq_handler,
+ pseries_vas_fault_thread_fn, 0,
txwin->name, txwin);
if (rc) {
pr_err("VAS-Window[%d]: Request IRQ(%u) failed with %d\n",
mutex_unlock(&vas_pseries_mutex);
return rc;
}
+
+int pseries_vas_dlpar_cpu(void)
+{
+ int new_nr_creds, rc;
+
+ rc = h_query_vas_capabilities(H_QUERY_VAS_CAPABILITIES,
+ vascaps[VAS_GZIP_DEF_FEAT_TYPE].feat,
+ (u64)virt_to_phys(&hv_cop_caps));
+ if (!rc) {
+ new_nr_creds = be16_to_cpu(hv_cop_caps.target_lpar_creds);
+ rc = vas_reconfig_capabilties(VAS_GZIP_DEF_FEAT_TYPE, new_nr_creds);
+ }
+
+ if (rc)
+ pr_err("Failed reconfig VAS capabilities with DLPAR\n");
+
+ return rc;
+}
+
/*
* Total number of default credits available (target_credits)
* in LPAR depends on number of cores configured. It varies based on
struct of_reconfig_data *rd = data;
struct device_node *dn = rd->dn;
const __be32 *intserv = NULL;
- int new_nr_creds, len, rc = 0;
+ int len;
+
+ /*
+ * For shared CPU partition, the hypervisor assigns total credits
+ * based on entitled core capacity. So updating VAS windows will
+ * be called from lparcfg_write().
+ */
+ if (is_shared_processor())
+ return NOTIFY_OK;
if ((action == OF_RECONFIG_ATTACH_NODE) ||
(action == OF_RECONFIG_DETACH_NODE))
if (!intserv)
return NOTIFY_OK;
- rc = h_query_vas_capabilities(H_QUERY_VAS_CAPABILITIES,
- vascaps[VAS_GZIP_DEF_FEAT_TYPE].feat,
- (u64)virt_to_phys(&hv_cop_caps));
- if (!rc) {
- new_nr_creds = be16_to_cpu(hv_cop_caps.target_lpar_creds);
- rc = vas_reconfig_capabilties(VAS_GZIP_DEF_FEAT_TYPE,
- new_nr_creds);
- }
-
- if (rc)
- pr_err("Failed reconfig VAS capabilities with DLPAR\n");
-
- return rc;
+ return pseries_vas_dlpar_cpu();
}
static struct notifier_block pseries_vas_nb = {
u64 flags;
char *name;
int fault_virq;
+ atomic_t pending_faults; /* Number of pending faults */
};
int sysfs_add_vas_caps(struct vas_cop_feat_caps *caps);
#ifdef CONFIG_PPC_VAS
int vas_migration_handler(int action);
+int pseries_vas_dlpar_cpu(void);
#else
static inline int vas_migration_handler(int action)
{
return 0;
}
+static inline int pseries_vas_dlpar_cpu(void)
+{
+ return 0;
+}
#endif
#endif /* _VAS_H */
If you don't know what to do here, say Y.
-config CC_HAS_ZICBOM
+config TOOLCHAIN_HAS_ZICBOM
bool
- default y if 64BIT && $(cc-option,-mabi=lp64 -march=rv64ima_zicbom)
- default y if 32BIT && $(cc-option,-mabi=ilp32 -march=rv32ima_zicbom)
+ default y
+ depends on !64BIT || $(cc-option,-mabi=lp64 -march=rv64ima_zicbom)
+ depends on !32BIT || $(cc-option,-mabi=ilp32 -march=rv32ima_zicbom)
+ depends on LLD_VERSION >= 150000 || LD_VERSION >= 23800
config RISCV_ISA_ZICBOM
bool "Zicbom extension support for non-coherent DMA operation"
- depends on CC_HAS_ZICBOM
+ depends on TOOLCHAIN_HAS_ZICBOM
depends on !XIP_KERNEL && MMU
select RISCV_DMA_NONCOHERENT
select RISCV_ALTERNATIVE
If you don't know what to do here, say Y.
+config TOOLCHAIN_HAS_ZIHINTPAUSE
+ bool
+ default y
+ depends on !64BIT || $(cc-option,-mabi=lp64 -march=rv64ima_zihintpause)
+ depends on !32BIT || $(cc-option,-mabi=ilp32 -march=rv32ima_zihintpause)
+ depends on LLD_VERSION >= 150000 || LD_VERSION >= 23600
+
config FPU
bool "FPU support"
default y
riscv-march-$(toolchain-need-zicsr-zifencei) := $(riscv-march-y)_zicsr_zifencei
# Check if the toolchain supports Zicbom extension
-toolchain-supports-zicbom := $(call cc-option-yn, -march=$(riscv-march-y)_zicbom)
-riscv-march-$(toolchain-supports-zicbom) := $(riscv-march-y)_zicbom
+riscv-march-$(CONFIG_TOOLCHAIN_HAS_ZICBOM) := $(riscv-march-y)_zicbom
# Check if the toolchain supports Zihintpause extension
-toolchain-supports-zihintpause := $(call cc-option-yn, -march=$(riscv-march-y)_zihintpause)
-riscv-march-$(toolchain-supports-zihintpause) := $(riscv-march-y)_zihintpause
+riscv-march-$(CONFIG_TOOLCHAIN_HAS_ZIHINTPAUSE) := $(riscv-march-y)_zihintpause
KBUILD_CFLAGS += -march=$(subst fd,,$(riscv-march-y))
KBUILD_AFLAGS += -march=$(riscv-march-y)
#endif /* CONFIG_SMP */
-/*
- * The T-Head CMO errata internally probe the CBOM block size, but otherwise
- * don't depend on Zicbom.
- */
extern unsigned int riscv_cbom_block_size;
-#ifdef CONFIG_RISCV_ISA_ZICBOM
void riscv_init_cbom_blocksize(void);
-#else
-static inline void riscv_init_cbom_blocksize(void) { }
-#endif
#ifdef CONFIG_RISCV_DMA_NONCOHERENT
void riscv_noncoherent_supported(void);
#define JUMP_LABEL_NOP_SIZE 4
-static __always_inline bool arch_static_branch(struct static_key *key,
- bool branch)
+static __always_inline bool arch_static_branch(struct static_key * const key,
+ const bool branch)
{
asm_volatile_goto(
" .option push \n\t"
return true;
}
-static __always_inline bool arch_static_branch_jump(struct static_key *key,
- bool branch)
+static __always_inline bool arch_static_branch_jump(struct static_key * const key,
+ const bool branch)
{
asm_volatile_goto(
" .option push \n\t"
int kvm_riscv_vcpu_timer_reset(struct kvm_vcpu *vcpu);
void kvm_riscv_vcpu_timer_restore(struct kvm_vcpu *vcpu);
void kvm_riscv_guest_timer_init(struct kvm *kvm);
+void kvm_riscv_vcpu_timer_sync(struct kvm_vcpu *vcpu);
void kvm_riscv_vcpu_timer_save(struct kvm_vcpu *vcpu);
bool kvm_riscv_vcpu_timer_pending(struct kvm_vcpu *vcpu);
* Reduce instruction retirement.
* This assumes the PC changes.
*/
-#ifdef __riscv_zihintpause
+#ifdef CONFIG_TOOLCHAIN_HAS_ZIHINTPAUSE
__asm__ __volatile__ ("pause");
#else
/* Encoding of the pause instruction */
static void *c_start(struct seq_file *m, loff_t *pos)
{
+ if (*pos == nr_cpu_ids)
+ return NULL;
+
*pos = cpumask_next(*pos - 1, cpu_online_mask);
if ((*pos) < nr_cpu_ids)
return (void *)(uintptr_t)(1 + *pos);
clear_bit(IRQ_VS_SOFT, &v->irqs_pending);
}
}
+
+ /* Sync-up timer CSRs */
+ kvm_riscv_vcpu_timer_sync(vcpu);
}
int kvm_riscv_vcpu_set_interrupt(struct kvm_vcpu *vcpu, unsigned int irq)
kvm_riscv_vcpu_timer_unblocking(vcpu);
}
-void kvm_riscv_vcpu_timer_save(struct kvm_vcpu *vcpu)
+void kvm_riscv_vcpu_timer_sync(struct kvm_vcpu *vcpu)
{
struct kvm_vcpu_timer *t = &vcpu->arch.timer;
if (!t->sstc_enabled)
return;
- t = &vcpu->arch.timer;
#if defined(CONFIG_32BIT)
t->next_cycles = csr_read(CSR_VSTIMECMP);
t->next_cycles |= (u64)csr_read(CSR_VSTIMECMPH) << 32;
#else
t->next_cycles = csr_read(CSR_VSTIMECMP);
#endif
+}
+
+void kvm_riscv_vcpu_timer_save(struct kvm_vcpu *vcpu)
+{
+ struct kvm_vcpu_timer *t = &vcpu->arch.timer;
+
+ if (!t->sstc_enabled)
+ return;
+
+ /*
+ * The vstimecmp CSRs are saved by kvm_riscv_vcpu_timer_sync()
+	 * upon every VM exit, so there is no need to save them here.
+ */
+
/* timer should be enabled for the remaining operations */
if (unlikely(!t->init_done))
return;
* Copyright (C) 2017 SiFive
*/
+#include <linux/of.h>
#include <asm/cacheflush.h>
#ifdef CONFIG_SMP
flush_icache_all();
}
#endif /* CONFIG_MMU */
+
+unsigned int riscv_cbom_block_size;
+EXPORT_SYMBOL_GPL(riscv_cbom_block_size);
+
+void riscv_init_cbom_blocksize(void)
+{
+ struct device_node *node;
+ unsigned long cbom_hartid;
+ u32 val, probed_block_size;
+ int ret;
+
+ probed_block_size = 0;
+ for_each_of_cpu_node(node) {
+ unsigned long hartid;
+
+ ret = riscv_of_processor_hartid(node, &hartid);
+ if (ret)
+ continue;
+
+ /* set block-size for cbom extension if available */
+ ret = of_property_read_u32(node, "riscv,cbom-block-size", &val);
+ if (ret)
+ continue;
+
+ if (!probed_block_size) {
+ probed_block_size = val;
+ cbom_hartid = hartid;
+ } else {
+ if (probed_block_size != val)
+ pr_warn("cbom-block-size mismatched between harts %lu and %lu\n",
+ cbom_hartid, hartid);
+ }
+ }
+
+ if (probed_block_size)
+ riscv_cbom_block_size = probed_block_size;
+}
#include <linux/dma-direct.h>
#include <linux/dma-map-ops.h>
#include <linux/mm.h>
-#include <linux/of.h>
-#include <linux/of_device.h>
#include <asm/cacheflush.h>
-unsigned int riscv_cbom_block_size;
-EXPORT_SYMBOL_GPL(riscv_cbom_block_size);
-
static bool noncoherent_supported;
void arch_sync_dma_for_device(phys_addr_t paddr, size_t size,
dev->dma_coherent = coherent;
}
-#ifdef CONFIG_RISCV_ISA_ZICBOM
-void riscv_init_cbom_blocksize(void)
-{
- struct device_node *node;
- unsigned long cbom_hartid;
- u32 val, probed_block_size;
- int ret;
-
- probed_block_size = 0;
- for_each_of_cpu_node(node) {
- unsigned long hartid;
-
- ret = riscv_of_processor_hartid(node, &hartid);
- if (ret)
- continue;
-
- /* set block-size for cbom extension if available */
- ret = of_property_read_u32(node, "riscv,cbom-block-size", &val);
- if (ret)
- continue;
-
- if (!probed_block_size) {
- probed_block_size = val;
- cbom_hartid = hartid;
- } else {
- if (probed_block_size != val)
- pr_warn("cbom-block-size mismatched between harts %lu and %lu\n",
- cbom_hartid, hartid);
- }
- }
-
- if (probed_block_size)
- riscv_cbom_block_size = probed_block_size;
-}
-#endif
-
void riscv_noncoherent_supported(void)
{
WARN(!riscv_cbom_block_size,
base_pud = pt_ops.get_pud_virt(pfn_to_phys(_pgd_pfn(*pgd)));
} else if (pgd_none(*pgd)) {
base_pud = memblock_alloc(PTRS_PER_PUD * sizeof(pud_t), PAGE_SIZE);
+ memcpy(base_pud, (void *)kasan_early_shadow_pud,
+ sizeof(pud_t) * PTRS_PER_PUD);
} else {
base_pud = (pud_t *)pgd_page_vaddr(*pgd);
if (base_pud == lm_alias(kasan_early_shadow_pud)) {
base_p4d = pt_ops.get_p4d_virt(pfn_to_phys(_pgd_pfn(*pgd)));
} else {
base_p4d = (p4d_t *)pgd_page_vaddr(*pgd);
- if (base_p4d == lm_alias(kasan_early_shadow_p4d))
+ if (base_p4d == lm_alias(kasan_early_shadow_p4d)) {
base_p4d = memblock_alloc(PTRS_PER_PUD * sizeof(p4d_t), PAGE_SIZE);
+ memcpy(base_p4d, (void *)kasan_early_shadow_p4d,
+ sizeof(p4d_t) * PTRS_PER_P4D);
+ }
}
p4dp = base_p4d + p4d_index(vaddr);
_compressed_start = .;
*(.vmlinux.bin.compressed)
_compressed_end = .;
- FILL(0xff);
- . = ALIGN(4096);
+ }
+
+#define SB_TRAILER_SIZE 32
+ /* Trailer needed for Secure Boot */
+ . += SB_TRAILER_SIZE; /* make sure .sb.trailer does not overwrite the previous section */
+ . = ALIGN(4096) - SB_TRAILER_SIZE;
+ .sb.trailer : {
+ QUAD(0)
+ QUAD(0)
+ QUAD(0)
+ QUAD(0x000000207a49504c)
}
_end = .;
"3: jl 1b\n" \
" lhi %0,0\n" \
"4: sacf 768\n" \
- EX_TABLE(0b,4b) EX_TABLE(2b,4b) EX_TABLE(3b,4b) \
+ EX_TABLE(0b,4b) EX_TABLE(1b,4b) \
+ EX_TABLE(2b,4b) EX_TABLE(3b,4b) \
: "=d" (ret), "=&d" (oldval), "=&d" (newval), \
"=m" (*uaddr) \
: "0" (-EFAULT), "d" (oparg), "a" (uaddr), \
raw.frag.data = cpump->save;
raw.size = raw.frag.size;
data.raw = &raw;
+ data.sample_flags |= PERF_SAMPLE_RAW;
}
overflow = perf_event_overflow(event, &data, &regs);
asm volatile(
" lr 0,%[spec]\n"
"0: mvcos 0(%1),0(%4),%0\n"
- " jz 4f\n"
+ "6: jz 4f\n"
"1: algr %0,%2\n"
" slgr %1,%2\n"
" j 0b\n"
" clgr %0,%3\n" /* copy crosses next page boundary? */
" jnh 5f\n"
"3: mvcos 0(%1),0(%4),%3\n"
- " slgr %0,%3\n"
+ "7: slgr %0,%3\n"
" j 5f\n"
"4: slgr %0,%0\n"
"5:\n"
- EX_TABLE(0b,2b) EX_TABLE(3b,5b)
+ EX_TABLE(0b,2b) EX_TABLE(6b,2b) EX_TABLE(3b,5b) EX_TABLE(7b,5b)
: "+a" (size), "+a" (to), "+a" (tmp1), "=a" (tmp2)
: "a" (empty_zero_page), [spec] "d" (spec.val)
: "cc", "memory", "0");
asm volatile (
" sacf 256\n"
"0: llgc %[tmp],0(%[src])\n"
- " sllg %[val],%[val],8\n"
+ "4: sllg %[val],%[val],8\n"
" aghi %[src],1\n"
" ogr %[val],%[tmp]\n"
" brctg %[cnt],0b\n"
"2: ipm %[cc]\n"
" srl %[cc],28\n"
"3: sacf 768\n"
- EX_TABLE(0b, 3b) EX_TABLE(1b, 3b) EX_TABLE(2b, 3b)
+ EX_TABLE(0b, 3b) EX_TABLE(4b, 3b) EX_TABLE(1b, 3b) EX_TABLE(2b, 3b)
:
[src] "+a" (src), [cnt] "+d" (cnt),
[val] "+d" (val), [tmp] "=d" (tmp),
"2: ahi %[shift],-8\n"
" srlg %[tmp],%[val],0(%[shift])\n"
"3: stc %[tmp],0(%[dst])\n"
- " aghi %[dst],1\n"
+ "5: aghi %[dst],1\n"
" brctg %[cnt],2b\n"
"4: sacf 768\n"
- EX_TABLE(0b, 4b) EX_TABLE(1b, 4b) EX_TABLE(3b, 4b)
+ EX_TABLE(0b, 4b) EX_TABLE(1b, 4b) EX_TABLE(3b, 4b) EX_TABLE(5b, 4b)
:
[ioaddr_len] "+&d" (ioaddr_len.pair),
[cc] "+d" (cc), [val] "=d" (val),
config EFI_STUB
bool "EFI stub support"
depends on EFI
- depends on $(cc-option,-mabi=ms) || X86_32
select RELOCATABLE
help
This kernel feature allows a bzImage to be loaded directly
#define VE_GET_PORT_NUM(e) ((e) >> 16)
#define VE_IS_IO_STRING(e) ((e) & BIT(4))
+#define ATTR_SEPT_VE_DISABLE BIT(28)
+
/*
* Wrapper for standard use of __tdx_hypercall with no output aside from
* return code.
panic("TDCALL %lld failed (Buggy TDX module!)\n", fn);
}
-static u64 get_cc_mask(void)
+static void tdx_parse_tdinfo(u64 *cc_mask)
{
struct tdx_module_output out;
unsigned int gpa_width;
+ u64 td_attr;
/*
* TDINFO TDX module call is used to get the TD execution environment
* information, etc. More details about the ABI can be found in TDX
* Guest-Host-Communication Interface (GHCI), section 2.4.2 TDCALL
* [TDG.VP.INFO].
+ */
+ tdx_module_call(TDX_GET_INFO, 0, 0, 0, 0, &out);
+
+ /*
+ * The highest bit of a guest physical address is the "sharing" bit.
+ * Set it for shared pages and clear it for private pages.
*
* The GPA width that comes out of this call is critical. TDX guests
* can not meaningfully run without it.
*/
- tdx_module_call(TDX_GET_INFO, 0, 0, 0, 0, &out);
-
gpa_width = out.rcx & GENMASK(5, 0);
+ *cc_mask = BIT_ULL(gpa_width - 1);
/*
- * The highest bit of a guest physical address is the "sharing" bit.
- * Set it for shared pages and clear it for private pages.
+	 * The kernel cannot handle #VEs when accessing normal kernel
+ * memory. Ensure that no #VE will be delivered for accesses to
+ * TD-private memory. Only VMM-shared memory (MMIO) will #VE.
*/
- return BIT_ULL(gpa_width - 1);
+ td_attr = out.rdx;
+ if (!(td_attr & ATTR_SEPT_VE_DISABLE))
+ panic("TD misconfiguration: SEPT_VE_DISABLE attibute must be set.\n");
}
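
Worked example for the cc_mask computation above (the width value is hedged, not from the patch): TDINFO reports the guest-physical-address width, and the shared/private bit is the top GPA bit.

u64 gpa_width = 48;			/* example TDINFO value        */
u64 cc_mask = BIT_ULL(gpa_width - 1);	/* bit 47 = 0x0000800000000000 */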
/*
setup_force_cpu_cap(X86_FEATURE_TDX_GUEST);
cc_set_vendor(CC_VENDOR_INTEL);
- cc_mask = get_cc_mask();
+ tdx_parse_tdinfo(&cc_mask);
cc_set_mask(cc_mask);
/*
#include <asm/cpu_device_id.h>
#include <asm/simd.h>
+#define POLYVAL_ALIGN 16
+#define POLYVAL_ALIGN_ATTR __aligned(POLYVAL_ALIGN)
+#define POLYVAL_ALIGN_EXTRA ((POLYVAL_ALIGN - 1) & ~(CRYPTO_MINALIGN - 1))
+#define POLYVAL_CTX_SIZE (sizeof(struct polyval_tfm_ctx) + POLYVAL_ALIGN_EXTRA)
#define NUM_KEY_POWERS 8
struct polyval_tfm_ctx {
/*
* These powers must be in the order h^8, ..., h^1.
*/
- u8 key_powers[NUM_KEY_POWERS][POLYVAL_BLOCK_SIZE];
+ u8 key_powers[NUM_KEY_POWERS][POLYVAL_BLOCK_SIZE] POLYVAL_ALIGN_ATTR;
};
struct polyval_desc_ctx {
const u8 *in, size_t nblocks, u8 *accumulator);
asmlinkage void clmul_polyval_mul(u8 *op1, const u8 *op2);
+static inline struct polyval_tfm_ctx *polyval_tfm_ctx(struct crypto_shash *tfm)
+{
+ return PTR_ALIGN(crypto_shash_ctx(tfm), POLYVAL_ALIGN);
+}
+
static void internal_polyval_update(const struct polyval_tfm_ctx *keys,
const u8 *in, size_t nblocks, u8 *accumulator)
{
static int polyval_x86_setkey(struct crypto_shash *tfm,
const u8 *key, unsigned int keylen)
{
- struct polyval_tfm_ctx *tctx = crypto_shash_ctx(tfm);
+ struct polyval_tfm_ctx *tctx = polyval_tfm_ctx(tfm);
int i;
if (keylen != POLYVAL_BLOCK_SIZE)
const u8 *src, unsigned int srclen)
{
struct polyval_desc_ctx *dctx = shash_desc_ctx(desc);
- const struct polyval_tfm_ctx *tctx = crypto_shash_ctx(desc->tfm);
+ const struct polyval_tfm_ctx *tctx = polyval_tfm_ctx(desc->tfm);
u8 *pos;
unsigned int nblocks;
unsigned int n;
static int polyval_x86_final(struct shash_desc *desc, u8 *dst)
{
struct polyval_desc_ctx *dctx = shash_desc_ctx(desc);
- const struct polyval_tfm_ctx *tctx = crypto_shash_ctx(desc->tfm);
+ const struct polyval_tfm_ctx *tctx = polyval_tfm_ctx(desc->tfm);
if (dctx->bytes) {
internal_polyval_mul(dctx->buffer,
.cra_driver_name = "polyval-clmulni",
.cra_priority = 200,
.cra_blocksize = POLYVAL_BLOCK_SIZE,
- .cra_ctxsize = sizeof(struct polyval_tfm_ctx),
+ .cra_ctxsize = POLYVAL_CTX_SIZE,
.cra_module = THIS_MODULE,
},
};
/* Extension Memory */
if (ibs_caps & IBS_CAPS_ZEN4 &&
ibs_data_src == IBS_DATA_SRC_EXT_EXT_MEM) {
- data_src->mem_lvl_num = PERF_MEM_LVLNUM_EXTN_MEM;
+ data_src->mem_lvl_num = PERF_MEM_LVLNUM_CXL;
if (op_data2->rmt_node) {
data_src->mem_remote = PERF_MEM_REMOTE_REMOTE;
/* IBS doesn't provide Remote socket detail */
INTEL_CPU_DESC(INTEL_FAM6_SKYLAKE_X, 5, 0x00000000),
INTEL_CPU_DESC(INTEL_FAM6_SKYLAKE_X, 6, 0x00000000),
INTEL_CPU_DESC(INTEL_FAM6_SKYLAKE_X, 7, 0x00000000),
+ INTEL_CPU_DESC(INTEL_FAM6_SKYLAKE_X, 11, 0x00000000),
INTEL_CPU_DESC(INTEL_FAM6_SKYLAKE_L, 3, 0x0000007c),
INTEL_CPU_DESC(INTEL_FAM6_SKYLAKE, 3, 0x0000007c),
INTEL_CPU_DESC(INTEL_FAM6_KABYLAKE, 9, 0x0000004e),
INTEL_FLAGS_UEVENT_CONSTRAINT(0x0400, 0x800000000ULL), /* SLOTS */
INTEL_PLD_CONSTRAINT(0x1cd, 0xff), /* MEM_TRANS_RETIRED.LOAD_LATENCY */
- INTEL_FLAGS_UEVENT_CONSTRAINT_DATALA_LD(0x1d0, 0xf), /* MEM_INST_RETIRED.LOAD */
- INTEL_FLAGS_UEVENT_CONSTRAINT_DATALA_ST(0x2d0, 0xf), /* MEM_INST_RETIRED.STORE */
+ INTEL_FLAGS_UEVENT_CONSTRAINT_DATALA_LD(0x11d0, 0xf), /* MEM_INST_RETIRED.STLB_MISS_LOADS */
+ INTEL_FLAGS_UEVENT_CONSTRAINT_DATALA_ST(0x12d0, 0xf), /* MEM_INST_RETIRED.STLB_MISS_STORES */
+ INTEL_FLAGS_UEVENT_CONSTRAINT_DATALA_LD(0x21d0, 0xf), /* MEM_INST_RETIRED.LOCK_LOADS */
+ INTEL_FLAGS_UEVENT_CONSTRAINT_DATALA_LD(0x41d0, 0xf), /* MEM_INST_RETIRED.SPLIT_LOADS */
+ INTEL_FLAGS_UEVENT_CONSTRAINT_DATALA_ST(0x42d0, 0xf), /* MEM_INST_RETIRED.SPLIT_STORES */
+ INTEL_FLAGS_UEVENT_CONSTRAINT_DATALA_LD(0x81d0, 0xf), /* MEM_INST_RETIRED.ALL_LOADS */
+ INTEL_FLAGS_UEVENT_CONSTRAINT_DATALA_ST(0x82d0, 0xf), /* MEM_INST_RETIRED.ALL_STORES */
INTEL_FLAGS_EVENT_CONSTRAINT_DATALA_LD_RANGE(0xd1, 0xd4, 0xf), /* MEM_LOAD_*_RETIRED.* */
INTEL_FLAGS_EVENT_CONSTRAINT(0xc0, 0xfe),
INTEL_PLD_CONSTRAINT(0x1cd, 0xfe),
INTEL_PSD_CONSTRAINT(0x2cd, 0x1),
- INTEL_FLAGS_UEVENT_CONSTRAINT_DATALA_LD(0x1d0, 0xf),
- INTEL_FLAGS_UEVENT_CONSTRAINT_DATALA_ST(0x2d0, 0xf),
+ INTEL_FLAGS_UEVENT_CONSTRAINT_DATALA_LD(0x11d0, 0xf), /* MEM_INST_RETIRED.STLB_MISS_LOADS */
+ INTEL_FLAGS_UEVENT_CONSTRAINT_DATALA_ST(0x12d0, 0xf), /* MEM_INST_RETIRED.STLB_MISS_STORES */
+ INTEL_FLAGS_UEVENT_CONSTRAINT_DATALA_LD(0x21d0, 0xf), /* MEM_INST_RETIRED.LOCK_LOADS */
+ INTEL_FLAGS_UEVENT_CONSTRAINT_DATALA_LD(0x41d0, 0xf), /* MEM_INST_RETIRED.SPLIT_LOADS */
+ INTEL_FLAGS_UEVENT_CONSTRAINT_DATALA_ST(0x42d0, 0xf), /* MEM_INST_RETIRED.SPLIT_STORES */
+ INTEL_FLAGS_UEVENT_CONSTRAINT_DATALA_LD(0x81d0, 0xf), /* MEM_INST_RETIRED.ALL_LOADS */
+ INTEL_FLAGS_UEVENT_CONSTRAINT_DATALA_ST(0x82d0, 0xf), /* MEM_INST_RETIRED.ALL_STORES */
INTEL_FLAGS_EVENT_CONSTRAINT_DATALA_LD_RANGE(0xd1, 0xd4, 0xf),
return;
clear_arch_lbr:
- clear_cpu_cap(&boot_cpu_data, X86_FEATURE_ARCH_LBR);
+ setup_clear_cpu_cap(X86_FEATURE_ARCH_LBR);
}
/**
case RAPL_UNIT_QUIRK_INTEL_HSW:
rapl_hw_unit[PERF_RAPL_RAM] = 16;
break;
- /*
- * SPR shares the same DRAM domain energy unit as HSW, plus it
- * also has a fixed energy unit for Psys domain.
- */
+ /* SPR uses a fixed energy unit for Psys domain. */
case RAPL_UNIT_QUIRK_INTEL_SPR:
- rapl_hw_unit[PERF_RAPL_RAM] = 16;
rapl_hw_unit[PERF_RAPL_PSYS] = 0;
break;
default:
X86_MATCH_INTEL_FAM6_MODEL(COMETLAKE, &model_skl),
X86_MATCH_INTEL_FAM6_MODEL(ALDERLAKE, &model_skl),
X86_MATCH_INTEL_FAM6_MODEL(ALDERLAKE_L, &model_skl),
+ X86_MATCH_INTEL_FAM6_MODEL(ALDERLAKE_N, &model_skl),
X86_MATCH_INTEL_FAM6_MODEL(SAPPHIRERAPIDS_X, &model_spr),
+ X86_MATCH_INTEL_FAM6_MODEL(RAPTORLAKE, &model_skl),
+ X86_MATCH_INTEL_FAM6_MODEL(RAPTORLAKE_P, &model_skl),
+ X86_MATCH_INTEL_FAM6_MODEL(RAPTORLAKE_S, &model_skl),
{},
};
MODULE_DEVICE_TABLE(x86cpu, rapl_model_match);
#define INTEL_FAM6_SAPPHIRERAPIDS_X 0x8F /* Golden Cove */
+#define INTEL_FAM6_EMERALDRAPIDS_X 0xCF
+
+#define INTEL_FAM6_GRANITERAPIDS_X 0xAD
+#define INTEL_FAM6_GRANITERAPIDS_D 0xAE
+
#define INTEL_FAM6_ALDERLAKE 0x97 /* Golden Cove / Gracemont */
#define INTEL_FAM6_ALDERLAKE_L 0x9A /* Golden Cove / Gracemont */
#define INTEL_FAM6_ALDERLAKE_N 0xBE
#define INTEL_FAM6_METEORLAKE 0xAC
#define INTEL_FAM6_METEORLAKE_L 0xAA
-/* "Small Core" Processors (Atom) */
+/* "Small Core" Processors (Atom/E-Core) */
#define INTEL_FAM6_ATOM_BONNELL 0x1C /* Diamondville, Pineview */
#define INTEL_FAM6_ATOM_BONNELL_MID 0x26 /* Silverthorne, Lincroft */
#define INTEL_FAM6_ATOM_TREMONT 0x96 /* Elkhart Lake */
#define INTEL_FAM6_ATOM_TREMONT_L 0x9C /* Jasper Lake */
+#define INTEL_FAM6_SIERRAFOREST_X 0xAF
+
+#define INTEL_FAM6_GRANDRIDGE 0xB6
+
/* Xeon Phi */
#define INTEL_FAM6_XEON_PHI_KNL 0x57 /* Knights Landing */
{
u64 start = rmrr->base_address;
u64 end = rmrr->end_address + 1;
+ int entry_type;
- if (e820__mapped_all(start, end, E820_TYPE_RESERVED))
+ entry_type = e820__get_entry_type(start, end);
+ if (entry_type == E820_TYPE_RESERVED || entry_type == E820_TYPE_NVS)
return 0;
pr_err(FW_BUG "No firmware reserved region can cover this RMRR [%#018Lx-%#018Lx], contact BIOS vendor for fixes\n",
/* Even with __builtin_ the compiler may decide to use the out of line
function. */
+#if defined(__SANITIZE_MEMORY__) && defined(__NO_FORTIFY)
+#include <linux/kmsan_string.h>
+#endif
+
#define __HAVE_ARCH_MEMCPY 1
-#if defined(__SANITIZE_MEMORY__)
+#if defined(__SANITIZE_MEMORY__) && defined(__NO_FORTIFY)
#undef memcpy
-void *__msan_memcpy(void *dst, const void *src, size_t size);
#define memcpy __msan_memcpy
#else
extern void *memcpy(void *to, const void *from, size_t len);
extern void *__memcpy(void *to, const void *from, size_t len);
#define __HAVE_ARCH_MEMSET
-#if defined(__SANITIZE_MEMORY__)
+#if defined(__SANITIZE_MEMORY__) && defined(__NO_FORTIFY)
extern void *__msan_memset(void *s, int c, size_t n);
#undef memset
#define memset __msan_memset
}
#define __HAVE_ARCH_MEMMOVE
-#if defined(__SANITIZE_MEMORY__)
+#if defined(__SANITIZE_MEMORY__) && defined(__NO_FORTIFY)
#undef memmove
void *__msan_memmove(void *dest, const void *src, size_t len);
#define memmove __msan_memmove
#ifndef _ASM_X86_SYSCALL_WRAPPER_H
#define _ASM_X86_SYSCALL_WRAPPER_H
-struct pt_regs;
+#include <asm/ptrace.h>
extern long __x64_sys_ni_syscall(const struct pt_regs *regs);
extern long __ia32_sys_ni_syscall(const struct pt_regs *regs);
#define __put_user_size(x, ptr, size, label) \
do { \
__typeof__(*(ptr)) __x = (x); /* eval x once */ \
- __chk_user_ptr(ptr); \
+ __typeof__(ptr) __ptr = (ptr); /* eval ptr once */ \
+ __chk_user_ptr(__ptr); \
switch (size) { \
case 1: \
- __put_user_goto(__x, ptr, "b", "iq", label); \
+ __put_user_goto(__x, __ptr, "b", "iq", label); \
break; \
case 2: \
- __put_user_goto(__x, ptr, "w", "ir", label); \
+ __put_user_goto(__x, __ptr, "w", "ir", label); \
break; \
case 4: \
- __put_user_goto(__x, ptr, "l", "ir", label); \
+ __put_user_goto(__x, __ptr, "l", "ir", label); \
break; \
case 8: \
- __put_user_goto_u64(__x, ptr, label); \
+ __put_user_goto_u64(__x, __ptr, label); \
break; \
default: \
__put_user_bad(); \
} \
- instrument_put_user(__x, ptr, size); \
+ instrument_put_user(__x, __ptr, size); \
} while (0)
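
The switch to a local __ptr matters because macro arguments are re-expanded at every use site: an argument with side effects, say put_user(v, p++), would otherwise advance the pointer once per expansion. A hedged, standalone illustration (probe() and store() are placeholders, not kernel APIs):

#define STORE_TWICE(x, p)	do { probe(p); store(x, p); } while (0)
#define STORE_ONCE(x, p)					\
do {								\
	__typeof__(p) __p = (p);	/* side effects run once */	\
	probe(__p);						\
	store(x, __p);						\
} while (0)

/* STORE_TWICE(v, ptr++) advances ptr twice; STORE_ONCE advances it once. */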
#ifdef CONFIG_CC_HAS_ASM_GOTO_OUTPUT
return ret;
native_rdmsr(MSR_AMD64_PATCH_LEVEL, rev, dummy);
- if (rev >= mc->hdr.patch_id)
+
+ /*
+ * Allow application of the same revision to pick up SMT-specific
+ * changes even if the revision of the other SMT thread is already
+ * up-to-date.
+ */
+ if (rev > mc->hdr.patch_id)
return ret;
if (!__apply_microcode_amd(mc)) {
native_rdmsr(MSR_AMD64_PATCH_LEVEL, rev, dummy);
- /* Check whether we have saved a new patch already: */
- if (*new_rev && rev < mc->hdr.patch_id) {
+ /*
+ * Check whether a new patch has been saved already. Also, allow application of
+ * the same revision in order to pick up SMT-thread-specific configuration even
+ * if the sibling SMT thread already has an up-to-date revision.
+ */
+ if (*new_rev && rev <= mc->hdr.patch_id) {
if (!__apply_microcode_amd(mc)) {
*new_rev = mc->hdr.patch_id;
return;
.rid = RDT_RESOURCE_L3,
.name = "L3",
.cache_level = 3,
- .cache = {
- .min_cbm_bits = 1,
- },
.domains = domain_init(RDT_RESOURCE_L3),
.parse_ctrlval = parse_cbm,
.format_str = "%d=%0*x",
.rid = RDT_RESOURCE_L2,
.name = "L2",
.cache_level = 2,
- .cache = {
- .min_cbm_bits = 1,
- },
.domains = domain_init(RDT_RESOURCE_L2),
.parse_ctrlval = parse_cbm,
.format_str = "%d=%0*x",
r->cache.arch_has_sparse_bitmaps = false;
r->cache.arch_has_empty_bitmaps = false;
r->cache.arch_has_per_cpu_cfg = false;
+ r->cache.min_cbm_bits = 1;
} else if (r->rid == RDT_RESOURCE_MBA) {
hw_res->msr_base = MSR_IA32_MBA_THRTL_BASE;
hw_res->msr_update = mba_wrmsr_intel;
r->cache.arch_has_sparse_bitmaps = true;
r->cache.arch_has_empty_bitmaps = true;
r->cache.arch_has_per_cpu_cfg = true;
+ r->cache.min_cbm_bits = 0;
} else if (r->rid == RDT_RESOURCE_MBA) {
hw_res->msr_base = MSR_IA32_MBA_BW_BASE;
hw_res->msr_update = mba_wrmsr_amd;
unsigned int ht_mask_width, core_plus_mask_width, die_plus_mask_width;
unsigned int core_select_mask, core_level_siblings;
unsigned int die_select_mask, die_level_siblings;
+ unsigned int pkg_mask_width;
bool die_level_present = false;
int leaf;
core_level_siblings = smp_num_siblings = LEVEL_MAX_SIBLINGS(ebx);
core_plus_mask_width = ht_mask_width = BITS_SHIFT_NEXT_LEVEL(eax);
die_level_siblings = LEVEL_MAX_SIBLINGS(ebx);
- die_plus_mask_width = BITS_SHIFT_NEXT_LEVEL(eax);
+ pkg_mask_width = die_plus_mask_width = BITS_SHIFT_NEXT_LEVEL(eax);
sub_index = 1;
- do {
+ while (true) {
cpuid_count(leaf, sub_index, &eax, &ebx, &ecx, &edx);
/*
die_plus_mask_width = BITS_SHIFT_NEXT_LEVEL(eax);
}
+ if (LEAFB_SUBTYPE(ecx) != INVALID_TYPE)
+ pkg_mask_width = BITS_SHIFT_NEXT_LEVEL(eax);
+ else
+ break;
+
sub_index++;
- } while (LEAFB_SUBTYPE(ecx) != INVALID_TYPE);
+ }
- core_select_mask = (~(-1 << core_plus_mask_width)) >> ht_mask_width;
+ core_select_mask = (~(-1 << pkg_mask_width)) >> ht_mask_width;
die_select_mask = (~(-1 << die_plus_mask_width)) >>
core_plus_mask_width;
}
c->phys_proc_id = apic->phys_pkg_id(c->initial_apicid,
- die_plus_mask_width);
+ pkg_mask_width);
/*
* Reinit the apicid, now that we have extended initial_apicid.
*/
fpstate_reset(¤t->thread.fpu);
}
-static void __init fpu__init_init_fpstate(void)
-{
- /* Bring init_fpstate size and features up to date */
- init_fpstate.size = fpu_kernel_cfg.max_size;
- init_fpstate.xfeatures = fpu_kernel_cfg.max_features;
-}
-
/*
* Called on the boot CPU once per system bootup, to set up the initial
* FPU state that is later cloned into all processes:
fpu__init_system_xstate_size_legacy();
fpu__init_system_xstate(fpu_kernel_cfg.max_size);
fpu__init_task_struct_size();
- fpu__init_init_fpstate();
}
print_xstate_features();
- xstate_init_xcomp_bv(&init_fpstate.regs.xsave, fpu_kernel_cfg.max_features);
+ xstate_init_xcomp_bv(&init_fpstate.regs.xsave, init_fpstate.xfeatures);
/*
* Init all the features state with header.xfeatures being 0x0
return ebx;
}
-/*
- * Will the runtime-enumerated 'xstate_size' fit in the init
- * task's statically-allocated buffer?
- */
-static bool __init is_supported_xstate_size(unsigned int test_xstate_size)
-{
- if (test_xstate_size <= sizeof(init_fpstate.regs))
- return true;
-
- pr_warn("x86/fpu: xstate buffer too small (%zu < %d), disabling xsave\n",
- sizeof(init_fpstate.regs), test_xstate_size);
- return false;
-}
-
static int __init init_xstate_size(void)
{
/* Recompute the context size for enabled features: */
kernel_default_size =
xstate_calculate_size(fpu_kernel_cfg.default_features, compacted);
- /* Ensure we have the space to store all default enabled features. */
- if (!is_supported_xstate_size(kernel_default_size))
- return -EINVAL;
-
if (!paranoid_xstate_size_valid(kernel_size))
return -EINVAL;
update_regset_xstate_info(fpu_user_cfg.max_size,
fpu_user_cfg.max_features);
+ /*
+	 * init_fpstate excludes the dynamic states: they are large, but
+	 * their init state is all zeros.
+ */
+ init_fpstate.size = fpu_kernel_cfg.default_size;
+ init_fpstate.xfeatures = fpu_kernel_cfg.default_features;
+
+ if (init_fpstate.size > sizeof(init_fpstate.regs)) {
+ pr_warn("x86/fpu: init_fpstate buffer too small (%zu < %d), disabling XSAVE\n",
+ sizeof(init_fpstate.regs), init_fpstate.size);
+ goto out_disable;
+ }
+
setup_init_fpu_buf();
/*
*/
mask = fpstate->user_xfeatures;
+ /*
+ * Dynamic features are not present in init_fpstate. When they are
+ * in an all zeros init state, remove those from 'mask' to zero
+ * those features in the user buffer instead of retrieving them
+ * from init_fpstate.
+ */
+ if (fpu_state_size_dynamic())
+ mask &= (header.xfeatures | xinit->header.xcomp_bv);
+
for_each_extended_xfeature(i, mask) {
/*
* If there was a feature or alignment gap, zero the space
*/
#include <linux/linkage.h>
+#include <linux/cfi_types.h>
#include <asm/ptrace.h>
#include <asm/ftrace.h>
#include <asm/export.h>
.endm
+SYM_TYPED_FUNC_START(ftrace_stub)
+ RET
+SYM_FUNC_END(ftrace_stub)
+
+SYM_TYPED_FUNC_START(ftrace_stub_graph)
+ RET
+SYM_FUNC_END(ftrace_stub_graph)
+
#ifdef CONFIG_DYNAMIC_FTRACE
SYM_FUNC_START(__fentry__)
*/
SYM_INNER_LABEL(ftrace_caller_end, SYM_L_GLOBAL)
ANNOTATE_NOENDBR
-
- jmp ftrace_epilogue
+ RET
SYM_FUNC_END(ftrace_caller);
STACK_FRAME_NON_STANDARD_FP(ftrace_caller)
-SYM_FUNC_START(ftrace_epilogue)
-/*
- * This is weak to keep gas from relaxing the jumps.
- */
-SYM_INNER_LABEL_ALIGN(ftrace_stub, SYM_L_WEAK)
- UNWIND_HINT_FUNC
- ENDBR
- RET
-SYM_FUNC_END(ftrace_epilogue)
-
SYM_FUNC_START(ftrace_regs_caller)
/* Save the current flags before any operations that can change them */
pushfq
popfq
/*
- * As this jmp to ftrace_epilogue can be a short jump
- * it must not be copied into the trampoline.
- * The trampoline will add the code to jump
- * to the return.
+ * The trampoline will add the return.
*/
SYM_INNER_LABEL(ftrace_regs_caller_end, SYM_L_GLOBAL)
ANNOTATE_NOENDBR
- jmp ftrace_epilogue
+ RET
/* Swap the flags with orig_rax */
1: movq MCOUNT_REG_SIZE(%rsp), %rdi
/* Restore flags */
popfq
UNWIND_HINT_FUNC
- jmp ftrace_epilogue
+ RET
SYM_FUNC_END(ftrace_regs_caller)
STACK_FRAME_NON_STANDARD_FP(ftrace_regs_caller)
SYM_FUNC_START(__fentry__)
cmpq $ftrace_stub, ftrace_trace_function
jnz trace
-
-SYM_INNER_LABEL(ftrace_stub, SYM_L_GLOBAL)
- ENDBR
RET
trace:
/* Otherwise, skip ahead to the user-specified starting frame: */
while (!unwind_done(state) &&
(!on_stack(&state->stack_info, first_frame, sizeof(long)) ||
- state->sp < (unsigned long)first_frame))
+ state->sp <= (unsigned long)first_frame))
unwind_next_frame(state);
return;
entry->eax = max(entry->eax, 0x80000021);
break;
case 0x80000001:
+ entry->ebx &= ~GENMASK(27, 16);
cpuid_entry_override(entry, CPUID_8000_0001_EDX);
cpuid_entry_override(entry, CPUID_8000_0001_ECX);
break;
case 0x80000006:
- /* L2 cache and TLB: pass through host info. */
+ /* Drop reserved bits, pass host L2 cache and TLB info. */
+ entry->edx &= ~GENMASK(17, 16);
break;
case 0x80000007: /* Advanced power management */
/* invariant TSC is CPUID.80000007H:EDX[8] */
g_phys_as = phys_as;
entry->eax = g_phys_as | (virt_as << 8);
+ entry->ecx &= ~(GENMASK(31, 16) | GENMASK(11, 8));
entry->edx = 0;
cpuid_entry_override(entry, CPUID_8000_0008_EBX);
break;
entry->ecx = entry->edx = 0;
break;
case 0x8000001a:
+ entry->eax &= GENMASK(2, 0);
+ entry->ebx = entry->ecx = entry->edx = 0;
+ break;
case 0x8000001e:
break;
case 0x8000001F:
entry->eax = entry->ebx = entry->ecx = entry->edx = 0;
} else {
cpuid_entry_override(entry, CPUID_8000_001F_EAX);
-
+ /* Clear NumVMPL since KVM does not support VMPL. */
+ entry->ebx &= ~GENMASK(31, 12);
/*
* Enumerate '0' for "PA bits reduction", the adjusted
* MAXPHYADDR is enumerated directly (see 0x80000008).
if (sanity_check_entries(entries, cpuid->nent, type))
return -EINVAL;
- array.entries = kvcalloc(sizeof(struct kvm_cpuid_entry2), cpuid->nent, GFP_KERNEL);
+ array.entries = kvcalloc(cpuid->nent, sizeof(struct kvm_cpuid_entry2), GFP_KERNEL);
if (!array.entries)
return -ENOMEM;
static int kvm_mmu_rmaps_stat_open(struct inode *inode, struct file *file)
{
struct kvm *kvm = inode->i_private;
+ int r;
if (!kvm_get_kvm_safe(kvm))
return -ENOENT;
- return single_open(file, kvm_mmu_rmaps_stat_show, kvm);
+ r = single_open(file, kvm_mmu_rmaps_stat_show, kvm);
+ if (r < 0)
+ kvm_put_kvm(kvm);
+
+ return r;
}
static int kvm_mmu_rmaps_stat_release(struct inode *inode, struct file *file)
ctxt->mode, linear);
}
-static inline int assign_eip(struct x86_emulate_ctxt *ctxt, ulong dst,
- enum x86emul_mode mode)
+static inline int assign_eip(struct x86_emulate_ctxt *ctxt, ulong dst)
{
ulong linear;
int rc;
if (ctxt->op_bytes != sizeof(unsigned long))
addr.ea = dst & ((1UL << (ctxt->op_bytes << 3)) - 1);
- rc = __linearize(ctxt, addr, &max_size, 1, false, true, mode, &linear);
+ rc = __linearize(ctxt, addr, &max_size, 1, false, true, ctxt->mode, &linear);
if (rc == X86EMUL_CONTINUE)
ctxt->_eip = addr.ea;
return rc;
}
+static inline int emulator_recalc_and_set_mode(struct x86_emulate_ctxt *ctxt)
+{
+ u64 efer;
+ struct desc_struct cs;
+ u16 selector;
+ u32 base3;
+
+ ctxt->ops->get_msr(ctxt, MSR_EFER, &efer);
+
+ if (!(ctxt->ops->get_cr(ctxt, 0) & X86_CR0_PE)) {
+		/* Real mode. CPU must not have long mode active */
+ if (efer & EFER_LMA)
+ return X86EMUL_UNHANDLEABLE;
+ ctxt->mode = X86EMUL_MODE_REAL;
+ return X86EMUL_CONTINUE;
+ }
+
+ if (ctxt->eflags & X86_EFLAGS_VM) {
+		/* Protected/VM86 mode. CPU must not have long mode active */
+ if (efer & EFER_LMA)
+ return X86EMUL_UNHANDLEABLE;
+ ctxt->mode = X86EMUL_MODE_VM86;
+ return X86EMUL_CONTINUE;
+ }
+
+ if (!ctxt->ops->get_segment(ctxt, &selector, &cs, &base3, VCPU_SREG_CS))
+ return X86EMUL_UNHANDLEABLE;
+
+ if (efer & EFER_LMA) {
+ if (cs.l) {
+ /* Proper long mode */
+ ctxt->mode = X86EMUL_MODE_PROT64;
+ } else if (cs.d) {
+			/* 32 bit compatibility mode */
+ ctxt->mode = X86EMUL_MODE_PROT32;
+ } else {
+ ctxt->mode = X86EMUL_MODE_PROT16;
+ }
+ } else {
+ /* Legacy 32 bit / 16 bit mode */
+ ctxt->mode = cs.d ? X86EMUL_MODE_PROT32 : X86EMUL_MODE_PROT16;
+ }
+
+ return X86EMUL_CONTINUE;
+}
+
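
For reference, the mode selection that emulator_recalc_and_set_mode() implements above can be summarised as follows (dashes are don't-cares; the REAL and VM86 rows additionally require EFER.LMA to be clear, otherwise the emulator returns X86EMUL_UNHANDLEABLE)::

    CR0.PE  EFLAGS.VM  EFER.LMA  CS.L  CS.D  resulting mode
      0         -          0      -     -    X86EMUL_MODE_REAL
      1         1          0      -     -    X86EMUL_MODE_VM86
      1         0          1      1     -    X86EMUL_MODE_PROT64
      1         0          1      0     1    X86EMUL_MODE_PROT32
      1         0          1      0     0    X86EMUL_MODE_PROT16
      1         0          0      -     1    X86EMUL_MODE_PROT32
      1         0          0      -     0    X86EMUL_MODE_PROT16
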
static inline int assign_eip_near(struct x86_emulate_ctxt *ctxt, ulong dst)
{
- return assign_eip(ctxt, dst, ctxt->mode);
+ return assign_eip(ctxt, dst);
}
-static int assign_eip_far(struct x86_emulate_ctxt *ctxt, ulong dst,
- const struct desc_struct *cs_desc)
+static int assign_eip_far(struct x86_emulate_ctxt *ctxt, ulong dst)
{
- enum x86emul_mode mode = ctxt->mode;
- int rc;
+ int rc = emulator_recalc_and_set_mode(ctxt);
-#ifdef CONFIG_X86_64
- if (ctxt->mode >= X86EMUL_MODE_PROT16) {
- if (cs_desc->l) {
- u64 efer = 0;
+ if (rc != X86EMUL_CONTINUE)
+ return rc;
- ctxt->ops->get_msr(ctxt, MSR_EFER, &efer);
- if (efer & EFER_LMA)
- mode = X86EMUL_MODE_PROT64;
- } else
- mode = X86EMUL_MODE_PROT32; /* temporary value */
- }
-#endif
- if (mode == X86EMUL_MODE_PROT16 || mode == X86EMUL_MODE_PROT32)
- mode = cs_desc->d ? X86EMUL_MODE_PROT32 : X86EMUL_MODE_PROT16;
- rc = assign_eip(ctxt, dst, mode);
- if (rc == X86EMUL_CONTINUE)
- ctxt->mode = mode;
- return rc;
+ return assign_eip(ctxt, dst);
}
static inline int jmp_rel(struct x86_emulate_ctxt *ctxt, int rel)
if (rc != X86EMUL_CONTINUE)
return rc;
- rc = assign_eip_far(ctxt, ctxt->src.val, &new_desc);
+ rc = assign_eip_far(ctxt, ctxt->src.val);
/* Error handling is not implemented. */
if (rc != X86EMUL_CONTINUE)
return X86EMUL_UNHANDLEABLE;
&new_desc);
if (rc != X86EMUL_CONTINUE)
return rc;
- rc = assign_eip_far(ctxt, eip, &new_desc);
+ rc = assign_eip_far(ctxt, eip);
/* Error handling is not implemented. */
if (rc != X86EMUL_CONTINUE)
return X86EMUL_UNHANDLEABLE;
ctxt->eflags = GET_SMSTATE(u32, smstate, 0x7ff4) | X86_EFLAGS_FIXED;
ctxt->_eip = GET_SMSTATE(u32, smstate, 0x7ff0);
- for (i = 0; i < NR_EMULATOR_GPRS; i++)
+ for (i = 0; i < 8; i++)
*reg_write(ctxt, i) = GET_SMSTATE(u32, smstate, 0x7fd0 + i * 4);
val = GET_SMSTATE(u32, smstate, 0x7fcc);
u16 selector;
int i, r;
- for (i = 0; i < NR_EMULATOR_GPRS; i++)
+ for (i = 0; i < 16; i++)
*reg_write(ctxt, i) = GET_SMSTATE(u64, smstate, 0x7ff8 - i * 8);
ctxt->_eip = GET_SMSTATE(u64, smstate, 0x7f78);
* those side effects need to be explicitly handled for both success
* and shutdown.
*/
- return X86EMUL_CONTINUE;
+ return emulator_recalc_and_set_mode(ctxt);
emulate_shutdown:
ctxt->ops->triple_fault(ctxt);
ops->set_segment(ctxt, ss_sel, &ss, 0, VCPU_SREG_SS);
ctxt->_eip = rdx;
+ ctxt->mode = usermode;
*reg_write(ctxt, VCPU_REGS_RSP) = rcx;
return X86EMUL_CONTINUE;
if (rc != X86EMUL_CONTINUE)
return rc;
- rc = assign_eip_far(ctxt, ctxt->src.val, &new_desc);
+ rc = assign_eip_far(ctxt, ctxt->src.val);
if (rc != X86EMUL_CONTINUE)
goto fail;
static int em_cr_write(struct x86_emulate_ctxt *ctxt)
{
- if (ctxt->ops->set_cr(ctxt, ctxt->modrm_reg, ctxt->src.val))
+ int cr_num = ctxt->modrm_reg;
+ int r;
+
+ if (ctxt->ops->set_cr(ctxt, cr_num, ctxt->src.val))
return emulate_gp(ctxt, 0);
/* Disable writeback. */
ctxt->dst.type = OP_NONE;
+
+ if (cr_num == 0) {
+ /*
+	 * CR0 write might have updated CR0.PE and/or CR0.PG
+	 * which can affect the CPU's execution mode.
+ */
+ r = emulator_recalc_and_set_mode(ctxt);
+ if (r != X86EMUL_CONTINUE)
+ return r;
+ }
+
return X86EMUL_CONTINUE;
}
#define PMU_CAP_FW_WRITES (1ULL << 13)
#define PMU_CAP_LBR_FMT 0x3f
-#define DEBUGCTLMSR_LBR_MASK (DEBUGCTLMSR_LBR | DEBUGCTLMSR_FREEZE_LBRS_ON_PMI)
-
struct nested_vmx_msrs {
/*
* We only store the "true" versions of the VMX capability MSRs. We
static inline u64 vmx_get_perf_capabilities(void)
{
u64 perf_cap = PMU_CAP_FW_WRITES;
+ struct x86_pmu_lbr lbr;
u64 host_perf_cap = 0;
if (!enable_pmu)
if (boot_cpu_has(X86_FEATURE_PDCM))
rdmsrl(MSR_IA32_PERF_CAPABILITIES, host_perf_cap);
- perf_cap |= host_perf_cap & PMU_CAP_LBR_FMT;
+ if (x86_perf_get_lbr(&lbr) >= 0 && lbr.nr)
+ perf_cap |= host_perf_cap & PMU_CAP_LBR_FMT;
if (vmx_pebs_supported()) {
perf_cap |= host_perf_cap & PERF_CAP_PEBS_MASK;
return perf_cap;
}
-static inline u64 vmx_supported_debugctl(void)
-{
- u64 debugctl = 0;
-
- if (boot_cpu_has(X86_FEATURE_BUS_LOCK_DETECT))
- debugctl |= DEBUGCTLMSR_BUS_LOCK_DETECT;
-
- if (vmx_get_perf_capabilities() & PMU_CAP_LBR_FMT)
- debugctl |= DEBUGCTLMSR_LBR_MASK;
-
- return debugctl;
-}
-
static inline bool cpu_has_notify_vmexit(void)
{
return vmcs_config.cpu_based_2nd_exec_ctrl &
return (unsigned long)data;
}
-static u64 vcpu_supported_debugctl(struct kvm_vcpu *vcpu)
+static u64 vmx_get_supported_debugctl(struct kvm_vcpu *vcpu, bool host_initiated)
{
- u64 debugctl = vmx_supported_debugctl();
+ u64 debugctl = 0;
- if (!intel_pmu_lbr_is_enabled(vcpu))
- debugctl &= ~DEBUGCTLMSR_LBR_MASK;
+ if (boot_cpu_has(X86_FEATURE_BUS_LOCK_DETECT) &&
+ (host_initiated || guest_cpuid_has(vcpu, X86_FEATURE_BUS_LOCK_DETECT)))
+ debugctl |= DEBUGCTLMSR_BUS_LOCK_DETECT;
- if (!guest_cpuid_has(vcpu, X86_FEATURE_BUS_LOCK_DETECT))
- debugctl &= ~DEBUGCTLMSR_BUS_LOCK_DETECT;
+ if ((vmx_get_perf_capabilities() & PMU_CAP_LBR_FMT) &&
+ (host_initiated || intel_pmu_lbr_is_enabled(vcpu)))
+ debugctl |= DEBUGCTLMSR_LBR | DEBUGCTLMSR_FREEZE_LBRS_ON_PMI;
return debugctl;
}
vmcs_writel(GUEST_SYSENTER_ESP, data);
break;
case MSR_IA32_DEBUGCTLMSR: {
- u64 invalid = data & ~vcpu_supported_debugctl(vcpu);
+ u64 invalid;
+
+ invalid = data & ~vmx_get_supported_debugctl(vcpu, msr_info->host_initiated);
if (invalid & (DEBUGCTLMSR_BTF|DEBUGCTLMSR_LBR)) {
if (report_ignored_msrs)
vcpu_unimpl(vcpu, "%s: BTF|LBR in IA32_DEBUGCTLMSR 0x%llx, nop\n",
if (!cpu_has_virtual_nmis())
enable_vnmi = 0;
+#ifdef CONFIG_X86_SGX_KVM
+ if (!cpu_has_vmx_encls_vmexit())
+ enable_sgx = false;
+#endif
+
/*
* set_apic_access_page_addr() is used to reload apic access
* page upon invalidation. No need to do anything if not
/* we verify if the enable bit is set... */
if (system_time & 1) {
- kvm_gfn_to_pfn_cache_init(vcpu->kvm, &vcpu->arch.pv_time, vcpu,
- KVM_HOST_USES_PFN, system_time & ~1ULL,
- sizeof(struct pvclock_vcpu_time_info));
+ kvm_gpc_activate(vcpu->kvm, &vcpu->arch.pv_time, vcpu,
+ KVM_HOST_USES_PFN, system_time & ~1ULL,
+ sizeof(struct pvclock_vcpu_time_info));
} else {
- kvm_gfn_to_pfn_cache_destroy(vcpu->kvm, &vcpu->arch.pv_time);
+ kvm_gpc_deactivate(vcpu->kvm, &vcpu->arch.pv_time);
}
return;
static void kvmclock_reset(struct kvm_vcpu *vcpu)
{
- kvm_gfn_to_pfn_cache_destroy(vcpu->kvm, &vcpu->arch.pv_time);
+ kvm_gpc_deactivate(vcpu->kvm, &vcpu->arch.pv_time);
vcpu->arch.time = 0;
}
return 0;
}
-static int kvm_vm_ioctl_set_msr_filter(struct kvm *kvm, void __user *argp)
+static int kvm_vm_ioctl_set_msr_filter(struct kvm *kvm,
+ struct kvm_msr_filter *filter)
{
- struct kvm_msr_filter __user *user_msr_filter = argp;
struct kvm_x86_msr_filter *new_filter, *old_filter;
- struct kvm_msr_filter filter;
bool default_allow;
bool empty = true;
int r = 0;
u32 i;
- if (copy_from_user(&filter, user_msr_filter, sizeof(filter)))
- return -EFAULT;
-
- if (filter.flags & ~KVM_MSR_FILTER_DEFAULT_DENY)
+ if (filter->flags & ~KVM_MSR_FILTER_DEFAULT_DENY)
return -EINVAL;
- for (i = 0; i < ARRAY_SIZE(filter.ranges); i++)
- empty &= !filter.ranges[i].nmsrs;
+ for (i = 0; i < ARRAY_SIZE(filter->ranges); i++)
+ empty &= !filter->ranges[i].nmsrs;
- default_allow = !(filter.flags & KVM_MSR_FILTER_DEFAULT_DENY);
+ default_allow = !(filter->flags & KVM_MSR_FILTER_DEFAULT_DENY);
if (empty && !default_allow)
return -EINVAL;
if (!new_filter)
return -ENOMEM;
- for (i = 0; i < ARRAY_SIZE(filter.ranges); i++) {
- r = kvm_add_msr_filter(new_filter, &filter.ranges[i]);
+ for (i = 0; i < ARRAY_SIZE(filter->ranges); i++) {
+ r = kvm_add_msr_filter(new_filter, &filter->ranges[i]);
if (r) {
kvm_free_msr_filter(new_filter);
return r;
return 0;
}
+#ifdef CONFIG_KVM_COMPAT
+/* for KVM_X86_SET_MSR_FILTER */
+struct kvm_msr_filter_range_compat {
+ __u32 flags;
+ __u32 nmsrs;
+ __u32 base;
+ __u32 bitmap;
+};
+
+struct kvm_msr_filter_compat {
+ __u32 flags;
+ struct kvm_msr_filter_range_compat ranges[KVM_MSR_FILTER_MAX_RANGES];
+};
+
+#define KVM_X86_SET_MSR_FILTER_COMPAT _IOW(KVMIO, 0xc6, struct kvm_msr_filter_compat)
+
+long kvm_arch_vm_compat_ioctl(struct file *filp, unsigned int ioctl,
+ unsigned long arg)
+{
+ void __user *argp = (void __user *)arg;
+ struct kvm *kvm = filp->private_data;
+ long r = -ENOTTY;
+
+ switch (ioctl) {
+ case KVM_X86_SET_MSR_FILTER_COMPAT: {
+ struct kvm_msr_filter __user *user_msr_filter = argp;
+ struct kvm_msr_filter_compat filter_compat;
+ struct kvm_msr_filter filter;
+ int i;
+
+ if (copy_from_user(&filter_compat, user_msr_filter,
+ sizeof(filter_compat)))
+ return -EFAULT;
+
+ filter.flags = filter_compat.flags;
+ for (i = 0; i < ARRAY_SIZE(filter.ranges); i++) {
+ struct kvm_msr_filter_range_compat *cr;
+
+ cr = &filter_compat.ranges[i];
+ filter.ranges[i] = (struct kvm_msr_filter_range) {
+ .flags = cr->flags,
+ .nmsrs = cr->nmsrs,
+ .base = cr->base,
+ .bitmap = (__u8 *)(ulong)cr->bitmap,
+ };
+ }
+
+ r = kvm_vm_ioctl_set_msr_filter(kvm, &filter);
+ break;
+ }
+ }
+
+ return r;
+}
+#endif
+
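
The compat handler above is needed because struct kvm_msr_filter_range embeds a user pointer (__u8 *bitmap): on a 64-bit kernel that field takes 8 bytes, while 32-bit userspace lays it out as 4, so the structure size (and therefore the _IOW()-generated ioctl number) differs and the pointer has to be widened by hand. A sketch of the mismatch, with hypothetical struct names::

    #include <linux/types.h>

    struct range_native {       /* what a 64-bit kernel expects */
        __u32 flags;
        __u32 nmsrs;
        __u32 base;
        __u8 *bitmap;           /* 8 bytes on 64-bit */
    };

    struct range_compat {       /* what 32-bit userspace passes */
        __u32 flags;
        __u32 nmsrs;
        __u32 base;
        __u32 bitmap;           /* 4 bytes: different sizeof, different ioctl nr */
    };
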
#ifdef CONFIG_HAVE_KVM_PM_NOTIFIER
static int kvm_arch_suspend_notifier(struct kvm *kvm)
{
case KVM_SET_PMU_EVENT_FILTER:
r = kvm_vm_ioctl_set_pmu_event_filter(kvm, argp);
break;
- case KVM_X86_SET_MSR_FILTER:
- r = kvm_vm_ioctl_set_msr_filter(kvm, argp);
+ case KVM_X86_SET_MSR_FILTER: {
+ struct kvm_msr_filter __user *user_msr_filter = argp;
+ struct kvm_msr_filter filter;
+
+ if (copy_from_user(&filter, user_msr_filter, sizeof(filter)))
+ return -EFAULT;
+
+ r = kvm_vm_ioctl_set_msr_filter(kvm, &filter);
break;
+ }
default:
r = -ENOTTY;
}
kvm_x86_ops.nested_ops->has_events(vcpu))
*req_immediate_exit = true;
- WARN_ON(kvm_is_exception_pending(vcpu));
+ /*
+ * KVM must never queue a new exception while injecting an event; KVM
+ * is done emulating and should only propagate the to-be-injected event
+ * to the VMCS/VMCB. Queueing a new exception can put the vCPU into an
+ * infinite loop as KVM will bail from VM-Enter to inject the pending
+ * exception and start the cycle all over.
+ *
+ * Exempt triple faults as they have special handling and won't put the
+ * vCPU into an infinite loop. Triple fault can be queued when running
+ * VMX without unrestricted guest, as that requires KVM to emulate Real
+ * Mode events (see kvm_inject_realmode_interrupt()).
+ */
+ WARN_ON_ONCE(vcpu->arch.exception.pending ||
+ vcpu->arch.exception_vmexit.pending);
return 0;
out:
kvm->arch.apicv_inhibit_reasons = new;
if (new) {
unsigned long gfn = gpa_to_gfn(APIC_DEFAULT_PHYS_BASE);
+ int idx = srcu_read_lock(&kvm->srcu);
+
kvm_zap_gfn_range(kvm, gfn, gfn+1);
+ srcu_read_unlock(&kvm->srcu, idx);
}
} else {
kvm->arch.apicv_inhibit_reasons = new;
vcpu->arch.regs_avail = ~0;
vcpu->arch.regs_dirty = ~0;
+ kvm_gpc_init(&vcpu->arch.pv_time);
+
if (!irqchip_in_kernel(vcpu->kvm) || kvm_vcpu_is_reset_bsp(vcpu))
vcpu->arch.mp_state = KVM_MP_STATE_RUNNABLE;
else
int idx = srcu_read_lock(&kvm->srcu);
if (gfn == GPA_INVALID) {
- kvm_gfn_to_pfn_cache_destroy(kvm, gpc);
+ kvm_gpc_deactivate(kvm, gpc);
goto out;
}
do {
- ret = kvm_gfn_to_pfn_cache_init(kvm, gpc, NULL, KVM_HOST_USES_PFN,
- gpa, PAGE_SIZE);
+ ret = kvm_gpc_activate(kvm, gpc, NULL, KVM_HOST_USES_PFN, gpa,
+ PAGE_SIZE);
if (ret)
goto out;
offsetof(struct compat_vcpu_info, time));
if (data->u.gpa == GPA_INVALID) {
- kvm_gfn_to_pfn_cache_destroy(vcpu->kvm, &vcpu->arch.xen.vcpu_info_cache);
+ kvm_gpc_deactivate(vcpu->kvm, &vcpu->arch.xen.vcpu_info_cache);
r = 0;
break;
}
- r = kvm_gfn_to_pfn_cache_init(vcpu->kvm,
- &vcpu->arch.xen.vcpu_info_cache,
- NULL, KVM_HOST_USES_PFN, data->u.gpa,
- sizeof(struct vcpu_info));
+ r = kvm_gpc_activate(vcpu->kvm,
+ &vcpu->arch.xen.vcpu_info_cache, NULL,
+ KVM_HOST_USES_PFN, data->u.gpa,
+ sizeof(struct vcpu_info));
if (!r)
kvm_make_request(KVM_REQ_CLOCK_UPDATE, vcpu);
case KVM_XEN_VCPU_ATTR_TYPE_VCPU_TIME_INFO:
if (data->u.gpa == GPA_INVALID) {
- kvm_gfn_to_pfn_cache_destroy(vcpu->kvm,
- &vcpu->arch.xen.vcpu_time_info_cache);
+ kvm_gpc_deactivate(vcpu->kvm,
+ &vcpu->arch.xen.vcpu_time_info_cache);
r = 0;
break;
}
- r = kvm_gfn_to_pfn_cache_init(vcpu->kvm,
- &vcpu->arch.xen.vcpu_time_info_cache,
- NULL, KVM_HOST_USES_PFN, data->u.gpa,
- sizeof(struct pvclock_vcpu_time_info));
+ r = kvm_gpc_activate(vcpu->kvm,
+ &vcpu->arch.xen.vcpu_time_info_cache,
+ NULL, KVM_HOST_USES_PFN, data->u.gpa,
+ sizeof(struct pvclock_vcpu_time_info));
if (!r)
kvm_make_request(KVM_REQ_CLOCK_UPDATE, vcpu);
break;
break;
}
if (data->u.gpa == GPA_INVALID) {
- kvm_gfn_to_pfn_cache_destroy(vcpu->kvm,
- &vcpu->arch.xen.runstate_cache);
+ kvm_gpc_deactivate(vcpu->kvm,
+ &vcpu->arch.xen.runstate_cache);
r = 0;
break;
}
- r = kvm_gfn_to_pfn_cache_init(vcpu->kvm,
- &vcpu->arch.xen.runstate_cache,
- NULL, KVM_HOST_USES_PFN, data->u.gpa,
- sizeof(struct vcpu_runstate_info));
+ r = kvm_gpc_activate(vcpu->kvm, &vcpu->arch.xen.runstate_cache,
+ NULL, KVM_HOST_USES_PFN, data->u.gpa,
+ sizeof(struct vcpu_runstate_info));
break;
case KVM_XEN_VCPU_ATTR_TYPE_RUNSTATE_CURRENT:
case EVTCHNSTAT_ipi:
/* IPI must map back to the same port# */
if (data->u.evtchn.deliver.port.port != data->u.evtchn.send_port)
- goto out; /* -EINVAL */
+ goto out_noeventfd; /* -EINVAL */
break;
case EVTCHNSTAT_interdomain:
if (data->u.evtchn.deliver.port.port) {
if (data->u.evtchn.deliver.port.port >= max_evtchn_port(kvm))
- goto out; /* -EINVAL */
+ goto out_noeventfd; /* -EINVAL */
} else {
eventfd = eventfd_ctx_fdget(data->u.evtchn.deliver.eventfd.fd);
if (IS_ERR(eventfd)) {
ret = PTR_ERR(eventfd);
- goto out;
+ goto out_noeventfd;
}
}
break;
out:
if (eventfd)
eventfd_ctx_put(eventfd);
+out_noeventfd:
kfree(evtchnfd);
return ret;
}
{
vcpu->arch.xen.vcpu_id = vcpu->vcpu_idx;
vcpu->arch.xen.poll_evtchn = 0;
+
timer_setup(&vcpu->arch.xen.poll_timer, cancel_evtchn_poll, 0);
+
+ kvm_gpc_init(&vcpu->arch.xen.runstate_cache);
+ kvm_gpc_init(&vcpu->arch.xen.vcpu_info_cache);
+ kvm_gpc_init(&vcpu->arch.xen.vcpu_time_info_cache);
}
void kvm_xen_destroy_vcpu(struct kvm_vcpu *vcpu)
if (kvm_xen_timer_enabled(vcpu))
kvm_xen_stop_timer(vcpu);
- kvm_gfn_to_pfn_cache_destroy(vcpu->kvm,
- &vcpu->arch.xen.runstate_cache);
- kvm_gfn_to_pfn_cache_destroy(vcpu->kvm,
- &vcpu->arch.xen.vcpu_info_cache);
- kvm_gfn_to_pfn_cache_destroy(vcpu->kvm,
- &vcpu->arch.xen.vcpu_time_info_cache);
+ kvm_gpc_deactivate(vcpu->kvm, &vcpu->arch.xen.runstate_cache);
+ kvm_gpc_deactivate(vcpu->kvm, &vcpu->arch.xen.vcpu_info_cache);
+ kvm_gpc_deactivate(vcpu->kvm, &vcpu->arch.xen.vcpu_time_info_cache);
+
del_timer_sync(&vcpu->arch.xen.poll_timer);
}
void kvm_xen_init_vm(struct kvm *kvm)
{
idr_init(&kvm->arch.xen.evtchn_ports);
+ kvm_gpc_init(&kvm->arch.xen.shinfo_cache);
}
void kvm_xen_destroy_vm(struct kvm *kvm)
struct evtchnfd *evtchnfd;
int i;
- kvm_gfn_to_pfn_cache_destroy(kvm, &kvm->arch.xen.shinfo_cache);
+ kvm_gpc_deactivate(kvm, &kvm->arch.xen.shinfo_cache);
idr_for_each_entry(&kvm->arch.xen.evtchn_ports, evtchnfd, i) {
if (!evtchnfd->deliver.port.port)
{
unsigned long end;
+	/* Kernel text is read-write during boot */
+ if (system_state == SYSTEM_BOOTING)
+ return new;
+
/*
* 32-bit has some unfixable W+X issues, like EFI code
* and writeable data being in the same page. Disable
#include <linux/bpf.h>
#include <linux/memory.h>
#include <linux/sort.h>
+#include <linux/init.h>
#include <asm/extable.h>
#include <asm/set_memory.h>
#include <asm/nospec-branch.h>
return ret;
}
+int __init bpf_arch_init_dispatcher_early(void *ip)
+{
+ const u8 *nop_insn = x86_nops[5];
+
+ if (is_endbr(*(u32 *)ip))
+ ip += ENDBR_INSN_SIZE;
+
+ if (memcmp(ip, nop_insn, X86_PATCH_SIZE))
+ text_poke_early(ip, nop_insn, X86_PATCH_SIZE);
+ return 0;
+}
+
int bpf_arch_text_poke(void *ip, enum bpf_text_poke_type t,
void *old_addr, void *new_addr)
{
KASAN_SANITIZE := n
UBSAN_SANITIZE := n
KCSAN_SANITIZE := n
+KMSAN_SANITIZE := n
KCOV_INSTRUMENT := n
# These are adjustments to the compiler flags used for objects that
static bool pmu_msr_chk_emulated(unsigned int msr, uint64_t *val, bool is_read,
bool *emul)
{
- int type, index;
+ int type, index = 0;
if (is_amd_pmu_msr(msr))
*emul = xen_amd_pmu_emulate(msr, val, is_read);
void xen_enable_sysenter(void)
{
- int ret;
- unsigned sysenter_feature;
-
- sysenter_feature = X86_FEATURE_SYSENTER32;
-
- if (!boot_cpu_has(sysenter_feature))
- return;
-
- ret = register_callback(CALLBACKTYPE_sysenter, xen_entry_SYSENTER_compat);
- if(ret != 0)
- setup_clear_cpu_cap(sysenter_feature);
+ if (cpu_feature_enabled(X86_FEATURE_SYSENTER32) &&
+ register_callback(CALLBACKTYPE_sysenter, xen_entry_SYSENTER_compat))
+ setup_clear_cpu_cap(X86_FEATURE_SYSENTER32);
}
void xen_enable_syscall(void)
mechanism for syscalls. */
}
- if (boot_cpu_has(X86_FEATURE_SYSCALL32)) {
- ret = register_callback(CALLBACKTYPE_syscall32,
- xen_entry_SYSCALL_compat);
- if (ret != 0)
- setup_clear_cpu_cap(X86_FEATURE_SYSCALL32);
- }
+ if (cpu_feature_enabled(X86_FEATURE_SYSCALL32) &&
+ register_callback(CALLBACKTYPE_syscall32, xen_entry_SYSCALL_compat))
+ setup_clear_cpu_cap(X86_FEATURE_SYSCALL32);
}
static void __init xen_pvmmu_arch_setup(void)
unsigned long split_time; /* time of last split */
unsigned long first_IO_time; /* time of first I/O for this queue */
-
unsigned long creation_time; /* when this queue is created */
- /* max service rate measured so far */
- u32 max_service_rate;
-
/*
* Pointer to the waker queue for this queue, i.e., to the
* queue Q such that this queue happens to get new I/O right
return;
}
- if (bio->bi_opf & REQ_ALLOC_CACHE) {
+ if ((bio->bi_opf & REQ_ALLOC_CACHE) && !WARN_ON_ONCE(in_interrupt())) {
struct bio_alloc_cache *cache;
bio_uninit(bio);
.nr_tags = 1,
};
u64 alloc_time_ns = 0;
+ struct request *rq;
unsigned int cpu;
unsigned int tag;
int ret;
tag = blk_mq_get_tag(&data);
if (tag == BLK_MQ_NO_TAG)
goto out_queue_exit;
- return blk_mq_rq_ctx_init(&data, blk_mq_tags_from_data(&data), tag,
+ rq = blk_mq_rq_ctx_init(&data, blk_mq_tags_from_data(&data), tag,
alloc_time_ns);
+ rq->__data_len = 0;
+ rq->__sector = (sector_t) -1;
+ rq->bio = rq->biotail = NULL;
+ return rq;
out_queue_exit:
blk_queue_exit(q);
(!blk_queue_nomerges(rq->q) &&
blk_rq_bytes(last) >= BLK_PLUG_FLUSH_SIZE)) {
blk_mq_flush_plug_list(plug, false);
+ last = NULL;
trace_block_plug(rq->q);
}
struct page *page;
unsigned long flags;
- /* There is no need to clear a driver tags own mapping */
- if (drv_tags == tags)
+ /*
+	 * There is no need to clear the mapping if the driver tags are not
+	 * initialized or the mapping belongs to the driver tags.
+ */
+ if (!drv_tags || drv_tags == tags)
return;
list_for_each_entry(page, &tags->page_list, lru) {
return 0;
err_hctxs:
- xa_destroy(&q->hctx_table);
- q->nr_hw_queues = 0;
- blk_mq_sysfs_deinit(q);
+ blk_mq_release(q);
err_poll:
blk_stat_free_callback(q->poll_cb);
q->poll_cb = NULL;
* Otherwise just allocate the device numbers for both the whole device
* and all partitions from the extended dev_t space.
*/
+ ret = -EINVAL;
if (disk->major) {
if (WARN_ON(!disk->minors))
- return -EINVAL;
+ goto out_exit_elevator;
if (disk->minors > DISK_MAX_PARTS) {
pr_err("block: can't allocate more than %d partitions\n",
disk->minors = DISK_MAX_PARTS;
}
if (disk->first_minor + disk->minors > MINORMASK + 1)
- return -EINVAL;
+ goto out_exit_elevator;
} else {
if (WARN_ON(disk->minors))
- return -EINVAL;
+ goto out_exit_elevator;
ret = blk_alloc_ext_minor();
if (ret < 0)
- return ret;
+ goto out_exit_elevator;
disk->major = BLOCK_EXT_MAJOR;
disk->first_minor = ret;
}
bdi_unregister(disk->bdi);
out_unregister_queue:
blk_unregister_queue(disk);
+ rq_qos_exit(disk->queue);
out_put_slave_dir:
kobject_put(disk->slave_dir);
out_put_holder_dir:
out_free_ext_minor:
if (disk->major == BLOCK_EXT_MAJOR)
blk_free_ext_minor(disk->first_minor);
+out_exit_elevator:
+ if (disk->queue->elevator)
+ elevator_exit(disk->queue);
return ret;
}
EXPORT_SYMBOL(device_add_disk);
#include <linux/ratelimit.h>
#include <linux/edac.h>
#include <linux/ras.h>
+#include <acpi/ghes.h>
#include <asm/cpu.h>
#include <asm/mce.h>
int cpu = mce->extcpu;
struct acpi_hest_generic_status *estatus, *tmp;
struct acpi_hest_generic_data *gdata;
- const guid_t *fru_id = &guid_null;
- char *fru_text = "";
+ const guid_t *fru_id;
+ char *fru_text;
guid_t *sec_type;
static u32 err_seq;
/* log event via trace */
err_seq++;
- gdata = (struct acpi_hest_generic_data *)(tmp + 1);
- if (gdata->validation_bits & CPER_SEC_VALID_FRU_ID)
- fru_id = (guid_t *)gdata->fru_id;
- if (gdata->validation_bits & CPER_SEC_VALID_FRU_TEXT)
- fru_text = gdata->fru_text;
- sec_type = (guid_t *)gdata->section_type;
- if (guid_equal(sec_type, &CPER_SEC_PLATFORM_MEM)) {
- struct cper_sec_mem_err *mem = (void *)(gdata + 1);
- if (gdata->error_data_length >= sizeof(*mem))
- trace_extlog_mem_event(mem, err_seq, fru_id, fru_text,
- (u8)gdata->error_severity);
+ apei_estatus_for_each_section(tmp, gdata) {
+ if (gdata->validation_bits & CPER_SEC_VALID_FRU_ID)
+ fru_id = (guid_t *)gdata->fru_id;
+ else
+ fru_id = &guid_null;
+ if (gdata->validation_bits & CPER_SEC_VALID_FRU_TEXT)
+ fru_text = gdata->fru_text;
+ else
+ fru_text = "";
+ sec_type = (guid_t *)gdata->section_type;
+ if (guid_equal(sec_type, &CPER_SEC_PLATFORM_MEM)) {
+ struct cper_sec_mem_err *mem = (void *)(gdata + 1);
+
+ if (gdata->error_data_length >= sizeof(*mem))
+ trace_extlog_mem_event(mem, err_seq, fru_id, fru_text,
+ (u8)gdata->error_severity);
+ }
}
out:
* Arbitrary retries in case the remote processor is slow to respond
* to PCC commands
*/
-#define PCC_CMD_WAIT_RETRIES_NUM 500
+#define PCC_CMD_WAIT_RETRIES_NUM 500ULL
struct pcc_data {
struct pcc_mbox_chan *pcc_chan;
clear_fixmap(fixmap_idx);
}
-int ghes_estatus_pool_init(int num_ghes)
+int ghes_estatus_pool_init(unsigned int num_ghes)
{
unsigned long addr, len;
int rc;
struct iommu_resv_region *region;
region = iommu_alloc_resv_region(base + SZ_64K, SZ_64K,
- prot, IOMMU_RESV_MSI);
+ prot, IOMMU_RESV_MSI,
+ GFP_KERNEL);
if (region)
list_add_tail(®ion->list, head);
}
pr_warn("ACPI NUMA: Failed to add memblk for CFMWS node %d [mem %#llx-%#llx]\n",
node, start, end);
}
+ node_set(node, numa_nodes_parsed);
/* Set the next available fake_pxm value */
(*fake_pxm)++;
list_for_each_entry(pn, &adev->physical_node_list, node) {
if (dev_is_pci(pn->dev)) {
+ get_device(pn->dev);
pci_dev = to_pci_dev(pn->dev);
break;
}
DMI_MATCH(DMI_BOARD_NAME, "S5402ZA"),
},
},
+ {
+ .ident = "Asus Vivobook S5602ZA",
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
+ DMI_MATCH(DMI_BOARD_NAME, "S5602ZA"),
+ },
+ },
+ { }
+};
+
+static const struct dmi_system_id lenovo_82ra[] = {
+ {
+ .ident = "LENOVO IdeaPad Flex 5 16ALC7",
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
+ DMI_MATCH(DMI_PRODUCT_NAME, "82RA"),
+ },
+ },
{ }
};
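
The quirk tables above are consumed by dmi_check_system(), which walks the array until the { } terminator and returns the number of entries whose DMI_MATCH fields all match the firmware-provided DMI strings. A minimal sketch, with hypothetical vendor/board strings::

    #include <linux/dmi.h>

    static const struct dmi_system_id my_quirk[] = {
        {
            .ident = "Example board",
            .matches = {
                DMI_MATCH(DMI_SYS_VENDOR, "ExampleVendor"),
                DMI_MATCH(DMI_BOARD_NAME, "EX1234"),
            },
        },
        { }     /* terminator */
    };

    static bool needs_quirk(void)
    {
        return dmi_check_system(my_quirk) > 0;
    }
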
unsigned char triggering;
unsigned char polarity;
unsigned char shareable;
+ bool override;
};
-static const struct irq_override_cmp skip_override_table[] = {
- { medion_laptop, 1, ACPI_LEVEL_SENSITIVE, ACPI_ACTIVE_LOW, 0 },
- { asus_laptop, 1, ACPI_LEVEL_SENSITIVE, ACPI_ACTIVE_LOW, 0 },
+static const struct irq_override_cmp override_table[] = {
+ { medion_laptop, 1, ACPI_LEVEL_SENSITIVE, ACPI_ACTIVE_LOW, 0, false },
+ { asus_laptop, 1, ACPI_LEVEL_SENSITIVE, ACPI_ACTIVE_LOW, 0, false },
+ { lenovo_82ra, 6, ACPI_LEVEL_SENSITIVE, ACPI_ACTIVE_LOW, 0, true },
+ { lenovo_82ra, 10, ACPI_LEVEL_SENSITIVE, ACPI_ACTIVE_LOW, 0, true },
};
static bool acpi_dev_irq_override(u32 gsi, u8 triggering, u8 polarity,
{
int i;
+ for (i = 0; i < ARRAY_SIZE(override_table); i++) {
+ const struct irq_override_cmp *entry = &override_table[i];
+
+ if (dmi_check_system(entry->system) &&
+ entry->irq == gsi &&
+ entry->triggering == triggering &&
+ entry->polarity == polarity &&
+ entry->shareable == shareable)
+ return entry->override;
+ }
+
#ifdef CONFIG_X86
/*
* IRQ override isn't needed on modern AMD Zen systems and
return false;
#endif
- for (i = 0; i < ARRAY_SIZE(skip_override_table); i++) {
- const struct irq_override_cmp *entry = &skip_override_table[i];
-
- if (dmi_check_system(entry->system) &&
- entry->irq == gsi &&
- entry->triggering == triggering &&
- entry->polarity == polarity &&
- entry->shareable == shareable)
- return false;
- }
-
return true;
}
u8 pol = p ? ACPI_ACTIVE_LOW : ACPI_ACTIVE_HIGH;
if (triggering != trig || polarity != pol) {
- pr_warn("ACPI: IRQ %d override to %s, %s\n", gsi,
- t ? "level" : "edge", p ? "low" : "high");
+ pr_warn("ACPI: IRQ %d override to %s%s, %s%s\n", gsi,
+ t ? "level" : "edge",
+ trig == triggering ? "" : "(!)",
+ p ? "low" : "high",
+ pol == polarity ? "" : "(!)");
triggering = trig;
polarity = pol;
}
static const char * const acpi_ignore_dep_ids[] = {
"PNP0D80", /* Windows-compatible System Power Management Controller */
"INT33BD", /* Intel Baytrail Mailbox Device */
+ "LATT2021", /* Lattice FW Update Client Driver */
NULL
};
goto out;
}
+ *map = r;
+
list_for_each_entry(rentry, &list, node) {
if (rentry->res->start >= rentry->res->end) {
- kfree(r);
+ kfree(*map);
+ *map = NULL;
ret = -EINVAL;
dev_dbg(dma_dev, "Invalid DMA regions configuration\n");
goto out;
r->offset = rentry->offset;
r++;
}
-
- *map = r;
}
out:
acpi_dev_free_resource_list(&list);
{ },
};
+static bool google_cros_ec_present(void)
+{
+ return acpi_dev_found("GOOG0004");
+}
+
/*
* Determine which type of backlight interface to use on this system,
* First check cmdline, then dmi quirks, then do autodetect.
return acpi_backlight_video;
}
+ /*
+	 * Chromebooks that don't have a backlight handle in the ACPI table
+ * are supposed to use native backlight if it's available.
+ */
+ if (google_cros_ec_present() && native_available)
+ return acpi_backlight_native;
+
/* No ACPI video (old hw), use vendor specific fw methods. */
return acpi_backlight_vendor;
}
DMI_MATCH(DMI_PRODUCT_NAME, "Inspiron 14 7425 2-in-1"),
}
},
+ {
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
+ DMI_MATCH(DMI_PRODUCT_NAME, "Inspiron 16 5625"),
+ }
+ },
{}
};
PCS_7 = 0x94, /* 7+ port PCS (Denverton) */
/* em constants */
- EM_MAX_SLOTS = 8,
+ EM_MAX_SLOTS = SATA_PMP_MAX_PORTS,
EM_MAX_RETRY = 5,
/* em_ctl bits */
if (!of_id)
return -ENODEV;
- priv->version = (enum brcm_ahci_version)of_id->data;
+ priv->version = (unsigned long)of_id->data;
priv->dev = dev;
res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "top-ctrl");
imxpriv->ahci_pdev = pdev;
imxpriv->no_device = false;
imxpriv->first_time = true;
- imxpriv->type = (enum ahci_imx_type)of_id->data;
+ imxpriv->type = (unsigned long)of_id->data;
imxpriv->sata_clk = devm_clk_get(dev, "sata");
if (IS_ERR(imxpriv->sata_clk)) {
MODULE_DESCRIPTION("Freescale i.MX AHCI SATA platform driver");
MODULE_AUTHOR("Richard Zhu <Hong-Xing.Zhu@freescale.com>");
MODULE_LICENSE("GPL");
-MODULE_ALIAS("ahci:imx");
+MODULE_ALIAS("platform:" DRV_NAME);
return -ENOMEM;
if (of_id)
- qoriq_priv->type = (enum ahci_qoriq_type)of_id->data;
+ qoriq_priv->type = (unsigned long)of_id->data;
else
qoriq_priv->type = (enum ahci_qoriq_type)acpi_id->driver_data;
.driver = {
.name = DRV_NAME,
.pm = &st_ahci_pm_ops,
- .of_match_table = of_match_ptr(st_ahci_match),
+ .of_match_table = st_ahci_match,
},
.probe = st_ahci_probe,
.remove = ata_platform_remove_one,
of_devid = of_match_device(xgene_ahci_of_match, dev);
if (of_devid) {
if (of_devid->data)
- version = (enum xgene_ahci_version) of_devid->data;
+ version = (unsigned long) of_devid->data;
}
#ifdef CONFIG_ACPI
else {
outb(inb(0x1F4) & 0x07, 0x1F4);
rt = inb(0x1F3);
- rt &= 0x07 << (3 * adev->devno);
+ rt &= ~(0x07 << (3 * !adev->devno));
if (pio)
- rt |= (1 + 3 * pio) << (3 * adev->devno);
+ rt |= (1 + 3 * pio) << (3 * !adev->devno);
+ outb(rt, 0x1F3);
udelay(100);
outb(inb(0x1F2) | 0x01, 0x1F2);
/* remap drive's physical memory address */
mem = devm_platform_ioremap_resource(pdev, 0);
- if (!mem)
- return -ENOMEM;
+ if (IS_ERR(mem))
+ return PTR_ERR(mem);
/* request and activate power and reset GPIOs */
lda->power = devm_gpiod_get(dev, "power", GPIOD_OUT_HIGH);
if (!priv)
return -ENOMEM;
- priv->type = (enum sata_rcar_type)of_device_get_match_data(dev);
+ priv->type = (unsigned long)of_device_get_match_data(dev);
pm_runtime_enable(dev);
ret = pm_runtime_get_sync(dev);
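
The cast changes in the hunks above all follow one pattern: OF match data is stored as a void *, and converting that directly to an enum triggers warnings with recent clang (-Wvoid-pointer-to-enum-cast), so the value is laundered through unsigned long first. A minimal sketch with a hypothetical driver::

    #include <linux/of_device.h>
    #include <linux/platform_device.h>

    enum foo_type { FOO_A, FOO_B };

    static int foo_probe(struct platform_device *pdev)
    {
        /* match data is a void *; cast via unsigned long, not enum */
        enum foo_type type =
            (enum foo_type)(unsigned long)of_device_get_match_data(&pdev->dev);

        return type == FOO_B ? 0 : -ENODEV;
    }
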
np = it.node;
if (!of_match_node(idle_state_match, np))
continue;
+
+ if (!of_device_is_available(np))
+ continue;
+
if (states) {
ret = genpd_parse_state(&states[i], np);
if (ret) {
* Find a given string in a string array and if it is found return the
* index back.
*
- * Return: %0 if the property was found (success),
+ * Return: index, starting from %0, if the property was found (success),
* %-EINVAL if given arguments are not valid,
* %-ENODATA if the property does not have a value,
* %-EPROTO if the property is not an array of strings,
* Find a given string in a string array and if it is found return the
* index back.
*
- * Return: %0 if the property was found (success),
+ * Return: index, starting from %0, if the property was found (success),
* %-EINVAL if given arguments are not valid,
* %-ENODATA if the property does not have a value,
* %-EPROTO if the property is not an array of strings,
definition isn't finalized yet, and might change according to future
	  requirements, so mark it as experimental for now.
+	  Say Y if you want better performance: the IO path can then use
+	  task_work_add() instead of io_uring cmd, which becomes shared
+	  between the IO tasks and the ubq daemon, and task_work_add()
+	  handles batching more effectively. However, task_work_add() is not
+	  exported to modules, so ublk has to be built into the kernel.
+
source "drivers/block/rnbd/Kconfig"
endif # BLK_DEV
return NULL;
memset(req, 0, sizeof(*req));
- req->private_bio = bio_alloc_clone(device->ldev->backing_bdev, bio_src,
- GFP_NOIO, &drbd_io_bio_set);
- req->private_bio->bi_private = req;
- req->private_bio->bi_end_io = drbd_request_endio;
-
req->rq_state = (bio_data_dir(bio_src) == WRITE ? RQ_WRITE : 0)
| (bio_op(bio_src) == REQ_OP_WRITE_ZEROES ? RQ_ZEROES : 0)
| (bio_op(bio_src) == REQ_OP_DISCARD ? RQ_UNMAP : 0);
/* Update disk stats */
req->start_jif = bio_start_io_acct(req->master_bio);
- if (!get_ldev(device)) {
- bio_put(req->private_bio);
- req->private_bio = NULL;
+ if (get_ldev(device)) {
+ req->private_bio = bio_alloc_clone(device->ldev->backing_bdev,
+ bio, GFP_NOIO,
+ &drbd_io_bio_set);
+ req->private_bio->bi_private = req;
+ req->private_bio->bi_end_io = drbd_request_endio;
}
/* process discards always from our submitter thread */
int ret;
ret = device_register(&rbd_root_dev);
- if (ret < 0)
+ if (ret < 0) {
+ put_device(&rbd_root_dev);
return ret;
+ }
ret = bus_register(&rbd_bus_type);
if (ret < 0)
#define UBLK_PARAM_TYPE_ALL (UBLK_PARAM_TYPE_BASIC | UBLK_PARAM_TYPE_DISCARD)
struct ublk_rq_data {
- struct callback_head work;
+ union {
+ struct callback_head work;
+ struct llist_node node;
+ };
};
struct ublk_uring_cmd_pdu {
- struct request *req;
+ struct ublk_queue *ubq;
};
/*
struct task_struct *ubq_daemon;
char *io_cmd_buf;
+ struct llist_head io_cmds;
+
unsigned long io_addr; /* mapped vm address */
unsigned int max_io_sz;
bool force_abort;
unsigned short nr_io_ready; /* how many ios setup */
struct ublk_device *dev;
- struct ublk_io ios[0];
+ struct ublk_io ios[];
};
#define UBLK_DAEMON_MONITOR_PERIOD (5 * HZ)
static void ublk_rq_task_work_cb(struct io_uring_cmd *cmd)
{
struct ublk_uring_cmd_pdu *pdu = ublk_get_uring_cmd_pdu(cmd);
+ struct ublk_queue *ubq = pdu->ubq;
+ struct llist_node *io_cmds = llist_del_all(&ubq->io_cmds);
+ struct ublk_rq_data *data;
- __ublk_rq_task_work(pdu->req);
+ llist_for_each_entry(data, io_cmds, node)
+ __ublk_rq_task_work(blk_mq_rq_from_pdu(data));
}
static void ublk_rq_task_work_fn(struct callback_head *work)
__ublk_rq_task_work(req);
}
+static void ublk_submit_cmd(struct ublk_queue *ubq, const struct request *rq)
+{
+ struct ublk_io *io = &ubq->ios[rq->tag];
+
+ /*
+ * If the check pass, we know that this is a re-issued request aborted
+	 * If the check passes, we know that this is a re-issued request aborted
+	 * previously in monitor_work because the ubq_daemon (the cmd's task) is
+	 * PF_EXITING. We cannot call io_uring_cmd_complete_in_task() anymore
+	 * because this ioucmd's io_uring context may be freed now if no inflight
+	 * ioucmd exists. Otherwise we may cause null-deref in ctx->fallback_work.
+	 *
+	 * Note: monitor_work sets UBLK_IO_FLAG_ABORTED and ends this request
+	 * (releasing the tag). Then the request is re-started (allocating the
+	 * tag) and we are here. Since releasing/allocating a tag implies
+	 * smp_mb(), finding UBLK_IO_FLAG_ABORTED guarantees that this is a
+	 * re-issued request aborted previously.
+ */
+ if (unlikely(io->flags & UBLK_IO_FLAG_ABORTED)) {
+ struct llist_node *io_cmds = llist_del_all(&ubq->io_cmds);
+ struct ublk_rq_data *data;
+
+ llist_for_each_entry(data, io_cmds, node)
+ __ublk_abort_rq(ubq, blk_mq_rq_from_pdu(data));
+ } else {
+ struct io_uring_cmd *cmd = io->cmd;
+ struct ublk_uring_cmd_pdu *pdu = ublk_get_uring_cmd_pdu(cmd);
+
+ pdu->ubq = ubq;
+ io_uring_cmd_complete_in_task(cmd, ublk_rq_task_work_cb);
+ }
+}
+
+static void ublk_queue_cmd(struct ublk_queue *ubq, struct request *rq,
+ bool last)
+{
+ struct ublk_rq_data *data = blk_mq_rq_to_pdu(rq);
+
+ if (ublk_can_use_task_work(ubq)) {
+ enum task_work_notify_mode notify_mode = last ?
+ TWA_SIGNAL_NO_IPI : TWA_NONE;
+
+ if (task_work_add(ubq->ubq_daemon, &data->work, notify_mode))
+ __ublk_abort_rq(ubq, rq);
+ } else {
+ if (llist_add(&data->node, &ubq->io_cmds))
+ ublk_submit_cmd(ubq, rq);
+ }
+}
+
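
The queueing path above leans on a property of lock-free linked lists: llist_add() returns true only when the list was empty beforehand, so exactly one producer ends up scheduling the consumer, and the consumer drains the whole batch with llist_del_all(). A minimal sketch of the idiom (hypothetical item type; the real driver defers the drain to task context rather than doing it inline)::

    #include <linux/llist.h>
    #include <linux/printk.h>

    struct item {
        struct llist_node node;
        int payload;
    };

    static LLIST_HEAD(pending);

    static void drain(void)
    {
        struct llist_node *batch = llist_del_all(&pending);
        struct item *it;

        llist_for_each_entry(it, batch, node)
            pr_info("payload=%d\n", it->payload);
    }

    static void produce(struct item *it)
    {
        /* true iff the list was empty before this add */
        if (llist_add(&it->node, &pending))
            drain();    /* only the first producer kicks the drain */
    }
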
static blk_status_t ublk_queue_rq(struct blk_mq_hw_ctx *hctx,
const struct blk_mq_queue_data *bd)
{
res = ublk_setup_iod(ubq, rq);
if (unlikely(res != BLK_STS_OK))
return BLK_STS_IOERR;
+
/* With recovery feature enabled, force_abort is set in
* ublk_stop_dev() before calling del_gendisk(). We have to
* abort all requeued and new rqs here to let del_gendisk()
blk_mq_start_request(bd->rq);
if (unlikely(ubq_daemon_is_dying(ubq))) {
- fail:
__ublk_abort_rq(ubq, rq);
return BLK_STS_OK;
}
- if (ublk_can_use_task_work(ubq)) {
- struct ublk_rq_data *data = blk_mq_rq_to_pdu(rq);
- enum task_work_notify_mode notify_mode = bd->last ?
- TWA_SIGNAL_NO_IPI : TWA_NONE;
-
- if (task_work_add(ubq->ubq_daemon, &data->work, notify_mode))
- goto fail;
- } else {
- struct ublk_io *io = &ubq->ios[rq->tag];
- struct io_uring_cmd *cmd = io->cmd;
- struct ublk_uring_cmd_pdu *pdu = ublk_get_uring_cmd_pdu(cmd);
-
- /*
- * If the check pass, we know that this is a re-issued request aborted
- * previously in monitor_work because the ubq_daemon(cmd's task) is
- * PF_EXITING. We cannot call io_uring_cmd_complete_in_task() anymore
- * because this ioucmd's io_uring context may be freed now if no inflight
- * ioucmd exists. Otherwise we may cause null-deref in ctx->fallback_work.
- *
- * Note: monitor_work sets UBLK_IO_FLAG_ABORTED and ends this request(releasing
- * the tag). Then the request is re-started(allocating the tag) and we are here.
- * Since releasing/allocating a tag implies smp_mb(), finding UBLK_IO_FLAG_ABORTED
- * guarantees that here is a re-issued request aborted previously.
- */
- if ((io->flags & UBLK_IO_FLAG_ABORTED))
- goto fail;
-
- pdu->req = rq;
- io_uring_cmd_complete_in_task(cmd, ublk_rq_task_work_cb);
- }
+ ublk_queue_cmd(ubq, rq, bd->last);
return BLK_STS_OK;
}
}
static void ublk_handle_need_get_data(struct ublk_device *ub, int q_id,
- int tag, struct io_uring_cmd *cmd)
+ int tag)
{
struct ublk_queue *ubq = ublk_get_queue(ub, q_id);
struct request *req = blk_mq_tag_to_rq(ub->tag_set.tags[q_id], tag);
- if (ublk_can_use_task_work(ubq)) {
- struct ublk_rq_data *data = blk_mq_rq_to_pdu(req);
-
- /* should not fail since we call it just in ubq->ubq_daemon */
- task_work_add(ubq->ubq_daemon, &data->work, TWA_SIGNAL_NO_IPI);
- } else {
- struct ublk_uring_cmd_pdu *pdu = ublk_get_uring_cmd_pdu(cmd);
-
- pdu->req = req;
- io_uring_cmd_complete_in_task(cmd, ublk_rq_task_work_cb);
- }
+ ublk_queue_cmd(ubq, req, true);
}
static int ublk_ch_uring_cmd(struct io_uring_cmd *cmd, unsigned int issue_flags)
io->addr = ub_cmd->addr;
io->cmd = cmd;
io->flags |= UBLK_IO_FLAG_ACTIVE;
- ublk_handle_need_get_data(ub, ub_cmd->q_id, ub_cmd->tag, cmd);
+ ublk_handle_need_get_data(ub, ub_cmd->q_id, ub_cmd->tag);
break;
default:
goto out;
*/
ub->dev_info.flags &= UBLK_F_ALL;
+ if (!IS_BUILTIN(CONFIG_BLK_DEV_UBLK))
+ ub->dev_info.flags |= UBLK_F_URING_CMD_COMP_IN_TASK;
+
/* We are not ready to support zero copy */
ub->dev_info.flags &= ~UBLK_F_SUPPORT_ZERO_COPY;
if (!skb)
return;
- skb->len = len;
+ skb_put(skb, len);
virtbt_rx_handle(vbt, skb);
if (virtbt_add_inbuf(vbt) < 0)
while ((rng_readl(priv, RNG_STATUS) >> 24) == 0) {
if (!wait)
return 0;
- cpu_relax();
+ hwrng_msleep(rng, 1000);
}
num_words = rng_readl(priv, RNG_STATUS) >> 24;
#endif
for (i = 0, arch_bits = sizeof(entropy) * 8; i < ARRAY_SIZE(entropy);) {
- longs = arch_get_random_seed_longs(entropy, ARRAY_SIZE(entropy) - i);
+ longs = arch_get_random_seed_longs_early(entropy, ARRAY_SIZE(entropy) - i);
if (longs) {
_mix_pool_bytes(entropy, sizeof(*entropy) * longs);
i += longs;
continue;
}
- longs = arch_get_random_longs(entropy, ARRAY_SIZE(entropy) - i);
+ longs = arch_get_random_longs_early(entropy, ARRAY_SIZE(entropy) - i);
if (longs) {
_mix_pool_bytes(entropy, sizeof(*entropy) * longs);
i += longs;
.n_yes_ranges = ARRAY_SIZE(rs9_writeable_ranges),
};
+static int rs9_regmap_i2c_write(void *context,
+ unsigned int reg, unsigned int val)
+{
+ struct i2c_client *i2c = context;
+ const u8 data[3] = { reg, 1, val };
+ const int count = ARRAY_SIZE(data);
+ int ret;
+
+ ret = i2c_master_send(i2c, data, count);
+ if (ret == count)
+ return 0;
+ else if (ret < 0)
+ return ret;
+ else
+ return -EIO;
+}
+
+static int rs9_regmap_i2c_read(void *context,
+ unsigned int reg, unsigned int *val)
+{
+ struct i2c_client *i2c = context;
+ struct i2c_msg xfer[2];
+ u8 txdata = reg;
+ u8 rxdata[2];
+ int ret;
+
+ xfer[0].addr = i2c->addr;
+ xfer[0].flags = 0;
+ xfer[0].len = 1;
+ xfer[0].buf = (void *)&txdata;
+
+ xfer[1].addr = i2c->addr;
+ xfer[1].flags = I2C_M_RD;
+ xfer[1].len = 2;
+ xfer[1].buf = (void *)rxdata;
+
+ ret = i2c_transfer(i2c->adapter, xfer, 2);
+ if (ret < 0)
+ return ret;
+ if (ret != 2)
+ return -EIO;
+
+ /*
+	 * Byte 0 is the transfer length, which is always 1 because the
+	 * BCP register is programmed to 1 in rs9_probe(); ignore it and
+	 * use the data from Byte 1.
+ */
+ *val = rxdata[1];
+ return 0;
+}
+
static const struct regmap_config rs9_regmap_config = {
.reg_bits = 8,
.val_bits = 8,
- .cache_type = REGCACHE_FLAT,
- .max_register = 0x8,
+ .cache_type = REGCACHE_NONE,
+ .max_register = RS9_REG_BCP,
.rd_table = &rs9_readable_table,
.wr_table = &rs9_writeable_table,
+ .reg_write = rs9_regmap_i2c_write,
+ .reg_read = rs9_regmap_i2c_read,
};
static int rs9_get_output_config(struct rs9_driver_data *rs9, int idx)
return ret;
}
- rs9->regmap = devm_regmap_init_i2c(client, &rs9_regmap_config);
+ rs9->regmap = devm_regmap_init(&client->dev, NULL,
+ client, &rs9_regmap_config);
if (IS_ERR(rs9->regmap))
return dev_err_probe(&client->dev, PTR_ERR(rs9->regmap),
"Failed to allocate register map\n");
+ /* Always read back 1 Byte via I2C */
+ ret = regmap_write(rs9->regmap, RS9_REG_BCP, 1);
+ if (ret < 0)
+ return ret;
+
/* Register clock */
for (i = 0; i < rs9->chip_info->num_clks; i++) {
snprintf(name, 5, "DIF%d", i);
{
struct clk_core *parent;
- if (WARN_ON(!core || !req))
+ if (WARN_ON(!req))
return;
memset(req, 0, sizeof(*req));
+ req->max_rate = ULONG_MAX;
+
+ if (!core)
+ return;
req->rate = rate;
clk_core_get_boundaries(core, &req->min_rate, &req->max_rate);
hw = devm_clk_hw_register_mux(&pdev->dev, "mfg_ck_fast_ref", mfg_fast_parents,
ARRAY_SIZE(mfg_fast_parents), CLK_SET_RATE_PARENT,
(base + 0x250), 8, 1, 0, &mt8195_clk_lock);
- if (IS_ERR(hw))
+ if (IS_ERR(hw)) {
+ r = PTR_ERR(hw);
goto unregister_muxes;
+ }
top_clk_data->hws[CLK_TOP_MFG_CK_FAST_REF] = hw;
r = clk_mt8195_reg_mfg_mux_notifier(&pdev->dev,
regmap_update_bits(regmap, 0x28004, BIT(0), BIT(0));
regmap_update_bits(regmap, 0x28014, BIT(0), BIT(0));
regmap_update_bits(regmap, 0x71004, BIT(0), BIT(0));
+ regmap_update_bits(regmap, 0x7100C, BIT(13), BIT(13));
ret = qcom_cc_register_rcg_dfs(regmap, gcc_dfs_clocks,
ARRAY_SIZE(gcc_dfs_clocks));
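
The regmap_update_bits() calls added above perform a read-modify-write in which only the bits set in the mask argument can change. A minimal sketch, with the register offset taken from the hunk above::

    #include <linux/bits.h>
    #include <linux/regmap.h>

    static int set_bit13(struct regmap *map)
    {
        /* read 0x7100C, set bit 13, write back; other bits untouched */
        return regmap_update_bits(map, 0x7100C, BIT(13), BIT(13));
    }
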
*/
regmap_update_bits(regmap, 0x1170, BIT(0), BIT(0));
regmap_update_bits(regmap, 0x1098, BIT(0), BIT(0));
+ regmap_update_bits(regmap, 0x1098, BIT(13), BIT(13));
return qcom_cc_really_probe(pdev, &gpu_cc_sc7280_desc, regmap);
}
CLK_S0_VIO,
CLK_S0_VC,
CLK_S0_HSC,
+ CLK_SASYNCPER,
CLK_SV_VIP,
CLK_SV_IR,
CLK_SDSRC,
DEF_FIXED(".s0_vio", CLK_S0_VIO, CLK_PLL1_DIV2, 2, 1),
DEF_FIXED(".s0_vc", CLK_S0_VC, CLK_PLL1_DIV2, 2, 1),
DEF_FIXED(".s0_hsc", CLK_S0_HSC, CLK_PLL1_DIV2, 2, 1),
+ DEF_FIXED(".sasyncper", CLK_SASYNCPER, CLK_PLL5_DIV4, 3, 1),
DEF_FIXED(".sv_vip", CLK_SV_VIP, CLK_PLL1, 5, 1),
DEF_FIXED(".sv_ir", CLK_SV_IR, CLK_PLL1, 5, 1),
DEF_BASE(".sdsrc", CLK_SDSRC, CLK_TYPE_GEN4_SDSRC, CLK_PLL5),
DEF_FIXED("s0d4_hsc", R8A779G0_CLK_S0D4_HSC, CLK_S0_HSC, 4, 1),
DEF_FIXED("cl16m_hsc", R8A779G0_CLK_CL16M_HSC, CLK_S0_HSC, 48, 1),
DEF_FIXED("s0d2_cc", R8A779G0_CLK_S0D2_CC, CLK_S0, 2, 1),
+	DEF_FIXED("sasyncperd1", R8A779G0_CLK_SASYNCPERD1, CLK_SASYNCPER, 1, 1),
+	DEF_FIXED("sasyncperd2", R8A779G0_CLK_SASYNCPERD2, CLK_SASYNCPER, 2, 1),
+	DEF_FIXED("sasyncperd4", R8A779G0_CLK_SASYNCPERD4, CLK_SASYNCPER, 4, 1),
DEF_FIXED("svd1_ir", R8A779G0_CLK_SVD1_IR, CLK_SV_IR, 1, 1),
DEF_FIXED("svd2_ir", R8A779G0_CLK_SVD2_IR, CLK_SV_IR, 2, 1),
DEF_FIXED("svd1_vip", R8A779G0_CLK_SVD1_VIP, CLK_SV_VIP, 1, 1),
DEF_MOD("avb0", 211, R8A779G0_CLK_S0D4_HSC),
DEF_MOD("avb1", 212, R8A779G0_CLK_S0D4_HSC),
DEF_MOD("avb2", 213, R8A779G0_CLK_S0D4_HSC),
- DEF_MOD("hscif0", 514, R8A779G0_CLK_S0D3_PER),
- DEF_MOD("hscif1", 515, R8A779G0_CLK_S0D3_PER),
- DEF_MOD("hscif2", 516, R8A779G0_CLK_S0D3_PER),
- DEF_MOD("hscif3", 517, R8A779G0_CLK_S0D3_PER),
+ DEF_MOD("hscif0", 514, R8A779G0_CLK_SASYNCPERD1),
+ DEF_MOD("hscif1", 515, R8A779G0_CLK_SASYNCPERD1),
+ DEF_MOD("hscif2", 516, R8A779G0_CLK_SASYNCPERD1),
+ DEF_MOD("hscif3", 517, R8A779G0_CLK_SASYNCPERD1),
DEF_MOD("i2c0", 518, R8A779G0_CLK_S0D6_PER),
DEF_MOD("i2c1", 519, R8A779G0_CLK_S0D6_PER),
DEF_MOD("i2c2", 520, R8A779G0_CLK_S0D6_PER),
menuconfig CLK_SIFIVE
bool "SiFive SoC driver support"
- depends on RISCV || COMPILE_TEST
+ depends on SOC_SIFIVE || COMPILE_TEST
+ default SOC_SIFIVE
help
SoC drivers for SiFive Linux-capable SoCs.
config CLK_SIFIVE_PRCI
bool "PRCI driver for SiFive SoCs"
+ default SOC_SIFIVE
select RESET_CONTROLLER
select RESET_SIMPLE
select CLK_ANALOGBITS_WRPLL_CLN28HPC
COUNTER_FUNCTION_QUADRATURE_X4,
};
+static int quad8_function_get(const struct quad8 *const priv, const size_t id,
+ enum counter_function *const function)
+{
+ if (!priv->quadrature_mode[id]) {
+ *function = COUNTER_FUNCTION_PULSE_DIRECTION;
+ return 0;
+ }
+
+ switch (priv->quadrature_scale[id]) {
+ case 0:
+ *function = COUNTER_FUNCTION_QUADRATURE_X1_A;
+ return 0;
+ case 1:
+ *function = COUNTER_FUNCTION_QUADRATURE_X2_A;
+ return 0;
+ case 2:
+ *function = COUNTER_FUNCTION_QUADRATURE_X4;
+ return 0;
+ default:
+ /* should never reach this path */
+ return -EINVAL;
+ }
+}
+
static int quad8_function_read(struct counter_device *counter,
struct counter_count *count,
enum counter_function *function)
{
struct quad8 *const priv = counter_priv(counter);
- const int id = count->id;
unsigned long irqflags;
+ int retval;
spin_lock_irqsave(&priv->lock, irqflags);
- if (priv->quadrature_mode[id])
- switch (priv->quadrature_scale[id]) {
- case 0:
- *function = COUNTER_FUNCTION_QUADRATURE_X1_A;
- break;
- case 1:
- *function = COUNTER_FUNCTION_QUADRATURE_X2_A;
- break;
- case 2:
- *function = COUNTER_FUNCTION_QUADRATURE_X4;
- break;
- }
- else
- *function = COUNTER_FUNCTION_PULSE_DIRECTION;
+ retval = quad8_function_get(priv, count->id, function);
spin_unlock_irqrestore(&priv->lock, irqflags);
- return 0;
+ return retval;
}
static int quad8_function_write(struct counter_device *counter,
enum counter_synapse_action *action)
{
struct quad8 *const priv = counter_priv(counter);
+ unsigned long irqflags;
int err;
enum counter_function function;
const size_t signal_a_id = count->synapses[0].signal->id;
return 0;
}
- err = quad8_function_read(counter, count, &function);
- if (err)
+ spin_lock_irqsave(&priv->lock, irqflags);
+
+ /* Get Count function and direction atomically */
+ err = quad8_function_get(priv, count->id, &function);
+ if (err) {
+ spin_unlock_irqrestore(&priv->lock, irqflags);
+ return err;
+ }
+ err = quad8_direction_read(counter, count, &direction);
+ if (err) {
+ spin_unlock_irqrestore(&priv->lock, irqflags);
return err;
+ }
+
+ spin_unlock_irqrestore(&priv->lock, irqflags);
/* Default action mode */
*action = COUNTER_SYNAPSE_ACTION_NONE;
return 0;
case COUNTER_FUNCTION_QUADRATURE_X1_A:
if (synapse->signal->id == signal_a_id) {
- err = quad8_direction_read(counter, count, &direction);
- if (err)
- return err;
-
if (direction == COUNTER_COUNT_DIRECTION_FORWARD)
*action = COUNTER_SYNAPSE_ACTION_RISING_EDGE;
else
int qdec_mode;
int num_channels;
int channel[2];
- bool trig_inverted;
};
static const enum counter_function mchp_tc_count_functions[] = {
regmap_read(priv->regmap, ATMEL_TC_REG(priv->channel[0], SR), &sr);
- if (priv->trig_inverted)
+ if (signal->id == 1)
sigstatus = (sr & ATMEL_TC_MTIOB);
else
sigstatus = (sr & ATMEL_TC_MTIOA);
struct mchp_tc_data *const priv = counter_priv(counter);
u32 cmr;
+ if (priv->qdec_mode) {
+ *action = COUNTER_SYNAPSE_ACTION_BOTH_EDGES;
+ return 0;
+ }
+
+	/* Only the TIOA signal is evaluated in non-QDEC mode */
+ if (synapse->signal->id != 0) {
+ *action = COUNTER_SYNAPSE_ACTION_NONE;
+ return 0;
+ }
+
regmap_read(priv->regmap, ATMEL_TC_REG(priv->channel[0], CMR), &cmr);
switch (cmr & ATMEL_TC_ETRGEDG) {
struct mchp_tc_data *const priv = counter_priv(counter);
u32 edge = ATMEL_TC_ETRGEDG_NONE;
- /* QDEC mode is rising edge only */
- if (priv->qdec_mode)
+ /* QDEC mode is rising edge only; only TIOA handled in non-QDEC mode */
+ if (priv->qdec_mode || synapse->signal->id != 0)
return -EINVAL;
switch (action) {
COUNTER_SIGNAL_POLARITY_NEGATIVE,
};
-static DEFINE_COUNTER_ARRAY_POLARITY(ecap_cnt_pol_array, ecap_cnt_pol_avail, ECAP_NB_CEVT);
+static DEFINE_COUNTER_AVAILABLE(ecap_cnt_pol_available, ecap_cnt_pol_avail);
+static DEFINE_COUNTER_ARRAY_POLARITY(ecap_cnt_pol_array, ecap_cnt_pol_available, ECAP_NB_CEVT);
static struct counter_comp ecap_cnt_signal_ext[] = {
COUNTER_COMP_ARRAY_POLARITY(ecap_cnt_pol_read, ecap_cnt_pol_write, ecap_cnt_pol_array),
int ret;
counter_dev = devm_counter_alloc(dev, sizeof(*ecap_dev));
- if (IS_ERR(counter_dev))
- return PTR_ERR(counter_dev);
+ if (!counter_dev)
+ return -ENOMEM;
counter_dev->name = ECAP_DRV_NAME;
counter_dev->parent = dev;
if (reg_name[0]) {
priv->opp_token = dev_pm_opp_set_regulators(cpu_dev, reg_name);
if (priv->opp_token < 0) {
- ret = priv->opp_token;
- if (ret != -EPROBE_DEFER)
- dev_err(cpu_dev, "failed to set regulators: %d\n",
- ret);
+ ret = dev_err_probe(cpu_dev, priv->opp_token,
+ "failed to set regulators\n");
goto free_cpumask;
}
}
ret = imx6q_opp_check_speed_grading(cpu_dev);
}
if (ret) {
- if (ret != -EPROBE_DEFER)
- dev_err(cpu_dev, "failed to read ocotp: %d\n",
- ret);
+ dev_err_probe(cpu_dev, ret, "failed to read ocotp\n");
goto out_free_opp;
}
#include <linux/pm_qos.h>
#include <trace/events/power.h>
+#include <asm/cpu.h>
#include <asm/div64.h>
#include <asm/msr.h>
#include <asm/cpu_device_id.h>
* structure is used to store those callbacks.
*/
struct pstate_funcs {
- int (*get_max)(void);
- int (*get_max_physical)(void);
- int (*get_min)(void);
- int (*get_turbo)(void);
+ int (*get_max)(int cpu);
+ int (*get_max_physical)(int cpu);
+ int (*get_min)(int cpu);
+ int (*get_turbo)(int cpu);
int (*get_scaling)(void);
int (*get_cpu_scaling)(int cpu);
int (*get_aperf_mperf_shift)(void);
return cppc_perf.nominal_perf;
}
-
-static u32 intel_pstate_cppc_nominal(int cpu)
-{
- u64 nominal_perf;
-
- if (cppc_get_nominal_perf(cpu, &nominal_perf))
- return 0;
-
- return nominal_perf;
-}
#else /* CONFIG_ACPI_CPPC_LIB */
static inline void intel_pstate_set_itmt_prio(int cpu)
{
{
int perf_ctl_max_phys = cpu->pstate.max_pstate_physical;
int perf_ctl_scaling = cpu->pstate.perf_ctl_scaling;
- int perf_ctl_turbo = pstate_funcs.get_turbo();
- int turbo_freq = perf_ctl_turbo * perf_ctl_scaling;
+ int perf_ctl_turbo = pstate_funcs.get_turbo(cpu->cpu);
int scaling = cpu->pstate.scaling;
pr_debug("CPU%d: perf_ctl_max_phys = %d\n", cpu->cpu, perf_ctl_max_phys);
- pr_debug("CPU%d: perf_ctl_max = %d\n", cpu->cpu, pstate_funcs.get_max());
pr_debug("CPU%d: perf_ctl_turbo = %d\n", cpu->cpu, perf_ctl_turbo);
pr_debug("CPU%d: perf_ctl_scaling = %d\n", cpu->cpu, perf_ctl_scaling);
pr_debug("CPU%d: HWP_CAP guaranteed = %d\n", cpu->cpu, cpu->pstate.max_pstate);
pr_debug("CPU%d: HWP_CAP highest = %d\n", cpu->cpu, cpu->pstate.turbo_pstate);
pr_debug("CPU%d: HWP-to-frequency scaling factor: %d\n", cpu->cpu, scaling);
- /*
- * If the product of the HWP performance scaling factor and the HWP_CAP
- * highest performance is greater than the maximum turbo frequency
- * corresponding to the pstate_funcs.get_turbo() return value, the
- * scaling factor is too high, so recompute it to make the HWP_CAP
- * highest performance correspond to the maximum turbo frequency.
- */
- cpu->pstate.turbo_freq = cpu->pstate.turbo_pstate * scaling;
- if (turbo_freq < cpu->pstate.turbo_freq) {
- cpu->pstate.turbo_freq = turbo_freq;
- scaling = DIV_ROUND_UP(turbo_freq, cpu->pstate.turbo_pstate);
- cpu->pstate.scaling = scaling;
-
- pr_debug("CPU%d: refined HWP-to-frequency scaling factor: %d\n",
- cpu->cpu, scaling);
- }
-
+ cpu->pstate.turbo_freq = rounddown(cpu->pstate.turbo_pstate * scaling,
+ perf_ctl_scaling);
cpu->pstate.max_freq = rounddown(cpu->pstate.max_pstate * scaling,
perf_ctl_scaling);
intel_pstate_update_epp_defaults(cpudata);
}
-static int atom_get_min_pstate(void)
+static int atom_get_min_pstate(int not_used)
{
u64 value;
return (value >> 8) & 0x7F;
}
-static int atom_get_max_pstate(void)
+static int atom_get_max_pstate(int not_used)
{
u64 value;
return (value >> 16) & 0x7F;
}
-static int atom_get_turbo_pstate(void)
+static int atom_get_turbo_pstate(int not_used)
{
u64 value;
cpudata->vid.turbo = value & 0x7f;
}
-static int core_get_min_pstate(void)
+static int core_get_min_pstate(int cpu)
{
u64 value;
- rdmsrl(MSR_PLATFORM_INFO, value);
+ rdmsrl_on_cpu(cpu, MSR_PLATFORM_INFO, &value);
return (value >> 40) & 0xFF;
}
-static int core_get_max_pstate_physical(void)
+static int core_get_max_pstate_physical(int cpu)
{
u64 value;
- rdmsrl(MSR_PLATFORM_INFO, value);
+ rdmsrl_on_cpu(cpu, MSR_PLATFORM_INFO, &value);
return (value >> 8) & 0xFF;
}
-static int core_get_tdp_ratio(u64 plat_info)
+static int core_get_tdp_ratio(int cpu, u64 plat_info)
{
/* Check how many TDP levels present */
if (plat_info & 0x600000000) {
int err;
/* Get the TDP level (0, 1, 2) to get ratios */
- err = rdmsrl_safe(MSR_CONFIG_TDP_CONTROL, &tdp_ctrl);
+ err = rdmsrl_safe_on_cpu(cpu, MSR_CONFIG_TDP_CONTROL, &tdp_ctrl);
if (err)
return err;
/* TDP MSR are continuous starting at 0x648 */
tdp_msr = MSR_CONFIG_TDP_NOMINAL + (tdp_ctrl & 0x03);
- err = rdmsrl_safe(tdp_msr, &tdp_ratio);
+ err = rdmsrl_safe_on_cpu(cpu, tdp_msr, &tdp_ratio);
if (err)
return err;
return -ENXIO;
}
-static int core_get_max_pstate(void)
+static int core_get_max_pstate(int cpu)
{
u64 tar;
u64 plat_info;
int tdp_ratio;
int err;
- rdmsrl(MSR_PLATFORM_INFO, plat_info);
+ rdmsrl_on_cpu(cpu, MSR_PLATFORM_INFO, &plat_info);
max_pstate = (plat_info >> 8) & 0xFF;
- tdp_ratio = core_get_tdp_ratio(plat_info);
+ tdp_ratio = core_get_tdp_ratio(cpu, plat_info);
if (tdp_ratio <= 0)
return max_pstate;
return tdp_ratio;
}
- err = rdmsrl_safe(MSR_TURBO_ACTIVATION_RATIO, &tar);
+ err = rdmsrl_safe_on_cpu(cpu, MSR_TURBO_ACTIVATION_RATIO, &tar);
if (!err) {
int tar_levels;
return max_pstate;
}
-static int core_get_turbo_pstate(void)
+static int core_get_turbo_pstate(int cpu)
{
u64 value;
int nont, ret;
- rdmsrl(MSR_TURBO_RATIO_LIMIT, value);
- nont = core_get_max_pstate();
+ rdmsrl_on_cpu(cpu, MSR_TURBO_RATIO_LIMIT, &value);
+ nont = core_get_max_pstate(cpu);
ret = (value) & 255;
if (ret <= nont)
ret = nont;
return 10;
}
-static int knl_get_turbo_pstate(void)
+static int knl_get_turbo_pstate(int cpu)
{
u64 value;
int nont, ret;
- rdmsrl(MSR_TURBO_RATIO_LIMIT, value);
- nont = core_get_max_pstate();
+ rdmsrl_on_cpu(cpu, MSR_TURBO_RATIO_LIMIT, &value);
+ nont = core_get_max_pstate(cpu);
ret = (((value) >> 8) & 0xFF);
if (ret <= nont)
ret = nont;
return ret;
}
-#ifdef CONFIG_ACPI_CPPC_LIB
-static u32 hybrid_ref_perf;
-
-static int hybrid_get_cpu_scaling(int cpu)
+static void hybrid_get_type(void *data)
{
- return DIV_ROUND_UP(core_get_scaling() * hybrid_ref_perf,
- intel_pstate_cppc_nominal(cpu));
+ u8 *cpu_type = data;
+
+ *cpu_type = get_this_hybrid_cpu_type();
}
-static void intel_pstate_cppc_set_cpu_scaling(void)
+static int hybrid_get_cpu_scaling(int cpu)
{
- u32 min_nominal_perf = U32_MAX;
- int cpu;
+ u8 cpu_type = 0;
- for_each_present_cpu(cpu) {
- u32 nominal_perf = intel_pstate_cppc_nominal(cpu);
+ smp_call_function_single(cpu, hybrid_get_type, &cpu_type, 1);
+	/* P-cores have a smaller perf level-to-frequency scaling factor. */
+ if (cpu_type == 0x40)
+ return 78741;
- if (nominal_perf && nominal_perf < min_nominal_perf)
- min_nominal_perf = nominal_perf;
- }
-
- if (min_nominal_perf < U32_MAX) {
- hybrid_ref_perf = min_nominal_perf;
- pstate_funcs.get_cpu_scaling = hybrid_get_cpu_scaling;
- }
+ return core_get_scaling();
}
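
hybrid_get_cpu_scaling() above needs get_this_hybrid_cpu_type(), which reads a per-CPU register and therefore must execute on the CPU in question; smp_call_function_single() runs the callback there via IPI and, with wait set, blocks until it completes. A minimal sketch of the pattern, with a hypothetical callback::

    #include <linux/smp.h>

    static void read_local(void *info)
    {
        unsigned int *val = info;

        *val = smp_processor_id();      /* runs on the target CPU */
    }

    static unsigned int query_cpu(int cpu)
    {
        unsigned int val = 0;

        /* wait=1: return only after read_local() has run on 'cpu' */
        smp_call_function_single(cpu, read_local, &val, 1);
        return val;
    }
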
-#else
-static inline void intel_pstate_cppc_set_cpu_scaling(void)
-{
-}
-#endif /* CONFIG_ACPI_CPPC_LIB */
static void intel_pstate_set_pstate(struct cpudata *cpu, int pstate)
{
static void intel_pstate_get_cpu_pstates(struct cpudata *cpu)
{
- int perf_ctl_max_phys = pstate_funcs.get_max_physical();
+ int perf_ctl_max_phys = pstate_funcs.get_max_physical(cpu->cpu);
int perf_ctl_scaling = pstate_funcs.get_scaling();
- cpu->pstate.min_pstate = pstate_funcs.get_min();
+ cpu->pstate.min_pstate = pstate_funcs.get_min(cpu->cpu);
cpu->pstate.max_pstate_physical = perf_ctl_max_phys;
cpu->pstate.perf_ctl_scaling = perf_ctl_scaling;
}
} else {
cpu->pstate.scaling = perf_ctl_scaling;
- cpu->pstate.max_pstate = pstate_funcs.get_max();
- cpu->pstate.turbo_pstate = pstate_funcs.get_turbo();
+ cpu->pstate.max_pstate = pstate_funcs.get_max(cpu->cpu);
+ cpu->pstate.turbo_pstate = pstate_funcs.get_turbo(cpu->cpu);
}
if (cpu->pstate.scaling == perf_ctl_scaling) {
static int __init intel_pstate_msrs_not_valid(void)
{
- if (!pstate_funcs.get_max() ||
- !pstate_funcs.get_min() ||
- !pstate_funcs.get_turbo())
+ if (!pstate_funcs.get_max(0) ||
+ !pstate_funcs.get_min(0) ||
+ !pstate_funcs.get_turbo(0))
return -ENODEV;
return 0;
default_driver = &intel_pstate;
if (boot_cpu_has(X86_FEATURE_HYBRID_CPU))
- intel_pstate_cppc_set_cpu_scaling();
+ pstate_funcs.get_cpu_scaling = hybrid_get_cpu_scaling;
goto hwp_cpu_matched;
}
static void get_krait_bin_format_a(struct device *cpu_dev,
int *speed, int *pvs, int *pvs_ver,
- struct nvmem_cell *pvs_nvmem, u8 *buf)
+ u8 *buf)
{
u32 pte_efuse;
static void get_krait_bin_format_b(struct device *cpu_dev,
int *speed, int *pvs, int *pvs_ver,
- struct nvmem_cell *pvs_nvmem, u8 *buf)
+ u8 *buf)
{
u32 pte_efuse, redundant_sel;
int speed = 0, pvs = 0, pvs_ver = 0;
u8 *speedbin;
size_t len;
+ int ret = 0;
speedbin = nvmem_cell_read(speedbin_nvmem, &len);
switch (len) {
case 4:
get_krait_bin_format_a(cpu_dev, &speed, &pvs, &pvs_ver,
- speedbin_nvmem, speedbin);
+ speedbin);
break;
case 8:
get_krait_bin_format_b(cpu_dev, &speed, &pvs, &pvs_ver,
- speedbin_nvmem, speedbin);
+ speedbin);
break;
default:
dev_err(cpu_dev, "Unable to read nvmem data. Defaulting to 0!\n");
- return -ENODEV;
+ ret = -ENODEV;
+ goto len_error;
}
snprintf(*pvs_name, sizeof("speedXX-pvsXX-vXX"), "speed%d-pvs%d-v%d",
drv->versions = (1 << speed);
+len_error:
kfree(speedbin);
- return 0;
+ return ret;
}
static const struct qcom_cpufreq_match_data match_data_kryo = {
struct nvmem_cell *speedbin_nvmem;
struct device_node *np;
struct device *cpu_dev;
- char *pvs_name = "speedXX-pvsXX-vXX";
+ char pvs_name_buffer[] = "speedXX-pvsXX-vXX";
+ char *pvs_name = pvs_name_buffer;
unsigned cpu;
const struct of_device_id *match;
int ret;
if (drv->data->get_version) {
speedbin_nvmem = of_nvmem_cell_get(np, NULL);
if (IS_ERR(speedbin_nvmem)) {
- if (PTR_ERR(speedbin_nvmem) != -EPROBE_DEFER)
- dev_err(cpu_dev,
- "Could not get nvmem cell: %ld\n",
- PTR_ERR(speedbin_nvmem));
- ret = PTR_ERR(speedbin_nvmem);
+ ret = dev_err_probe(cpu_dev, PTR_ERR(speedbin_nvmem),
+ "Could not get nvmem cell\n");
goto free_drv;
}
speedbin_nvmem = of_nvmem_cell_get(np, NULL);
of_node_put(np);
- if (IS_ERR(speedbin_nvmem)) {
- if (PTR_ERR(speedbin_nvmem) != -EPROBE_DEFER)
- pr_err("Could not get nvmem cell: %ld\n",
- PTR_ERR(speedbin_nvmem));
- return PTR_ERR(speedbin_nvmem);
- }
+ if (IS_ERR(speedbin_nvmem))
+ return dev_err_probe(cpu_dev, PTR_ERR(speedbin_nvmem),
+ "Could not get nvmem cell\n");
speedbin = nvmem_cell_read(speedbin_nvmem, &len);
nvmem_cell_put(speedbin_nvmem);
{ .compatible = "nvidia,tegra239-ccplex-cluster", .data = &tegra239_cpufreq_soc },
{ /* sentinel */ }
};
+MODULE_DEVICE_TABLE(of, tegra194_cpufreq_of_match);
static struct platform_driver tegra194_ccplex_driver = {
.driver = {
};
int rc;
- if (out_size > cxlds->payload_size)
+ if (in_size > cxlds->payload_size || out_size > cxlds->payload_size)
return -E2BIG;
rc = cxlds->mbox_send(cxlds, &mbox_cmd);
{
struct cxl_nvdimm *cxl_nvd = to_cxl_nvdimm(dev);
+ xa_destroy(&cxl_nvd->pmem_regions);
kfree(cxl_nvd);
}
dev = &cxl_nvd->dev;
cxl_nvd->cxlmd = cxlmd;
+ xa_init(&cxl_nvd->pmem_regions);
device_initialize(dev);
lockdep_set_class(&dev->mutex, &cxl_nvdimm_key);
device_set_pm_not_required(dev);
static int add_dport(struct cxl_port *port, struct cxl_dport *new)
{
struct cxl_dport *dup;
+ int rc;
device_lock_assert(&port->dev);
dup = find_dport(port, new->port_id);
dev_name(dup->dport));
return -EBUSY;
}
- return xa_insert(&port->dports, (unsigned long)new->dport, new,
- GFP_KERNEL);
+
+ rc = xa_insert(&port->dports, (unsigned long)new->dport, new,
+ GFP_KERNEL);
+ if (rc)
+ return rc;
+
+ port->nr_dports++;
+ return 0;
}
/*
iter = to_cxl_port(iter->dev.parent)) {
cxl_rr = cxl_rr_load(iter, cxlr);
cxld = cxl_rr->decoder;
- rc = cxld->commit(cxld);
+ if (cxld->commit)
+ rc = cxld->commit(cxld);
if (rc)
break;
}
xa_for_each(&port->regions, index, iter) {
struct cxl_region_params *ip = &iter->region->params;
+ if (!ip->res)
+ continue;
+
if (ip->res->start > p->res->start) {
dev_dbg(&cxlr->dev,
"%s: HPA order violation %s:%pr vs %pr\n",
return cxl_rr;
}
-static void free_region_ref(struct cxl_region_ref *cxl_rr)
+static void cxl_rr_free_decoder(struct cxl_region_ref *cxl_rr)
{
- struct cxl_port *port = cxl_rr->port;
struct cxl_region *cxlr = cxl_rr->region;
struct cxl_decoder *cxld = cxl_rr->decoder;
+ if (!cxld)
+ return;
+
dev_WARN_ONCE(&cxlr->dev, cxld->region != cxlr, "region mismatch\n");
if (cxld->region == cxlr) {
cxld->region = NULL;
put_device(&cxlr->dev);
}
+}
+static void free_region_ref(struct cxl_region_ref *cxl_rr)
+{
+ struct cxl_port *port = cxl_rr->port;
+ struct cxl_region *cxlr = cxl_rr->region;
+
+ cxl_rr_free_decoder(cxl_rr);
xa_erase(&port->regions, (unsigned long)cxlr);
xa_destroy(&cxl_rr->endpoints);
kfree(cxl_rr);
return 0;
}
+static int cxl_rr_alloc_decoder(struct cxl_port *port, struct cxl_region *cxlr,
+ struct cxl_endpoint_decoder *cxled,
+ struct cxl_region_ref *cxl_rr)
+{
+ struct cxl_decoder *cxld;
+
+ if (port == cxled_to_port(cxled))
+ cxld = &cxled->cxld;
+ else
+ cxld = cxl_region_find_decoder(port, cxlr);
+ if (!cxld) {
+ dev_dbg(&cxlr->dev, "%s: no decoder available\n",
+ dev_name(&port->dev));
+ return -EBUSY;
+ }
+
+ if (cxld->region) {
+ dev_dbg(&cxlr->dev, "%s: %s already attached to %s\n",
+ dev_name(&port->dev), dev_name(&cxld->dev),
+ dev_name(&cxld->region->dev));
+ return -EBUSY;
+ }
+
+ cxl_rr->decoder = cxld;
+ return 0;
+}
+
/**
* cxl_port_attach_region() - track a region's interest in a port by endpoint
* @port: port to add a new region reference 'struct cxl_region_ref'
cxl_rr->nr_targets++;
nr_targets_inc = true;
}
-
- /*
- * The decoder for @cxlr was allocated when the region was first
- * attached to @port.
- */
- cxld = cxl_rr->decoder;
} else {
cxl_rr = alloc_region_ref(port, cxlr);
if (IS_ERR(cxl_rr)) {
}
nr_targets_inc = true;
- if (port == cxled_to_port(cxled))
- cxld = &cxled->cxld;
- else
- cxld = cxl_region_find_decoder(port, cxlr);
- if (!cxld) {
- dev_dbg(&cxlr->dev, "%s: no decoder available\n",
- dev_name(&port->dev));
- goto out_erase;
- }
-
- if (cxld->region) {
- dev_dbg(&cxlr->dev, "%s: %s already attached to %s\n",
- dev_name(&port->dev), dev_name(&cxld->dev),
- dev_name(&cxld->region->dev));
- rc = -EBUSY;
+ rc = cxl_rr_alloc_decoder(port, cxlr, cxled, cxl_rr);
+ if (rc)
goto out_erase;
- }
-
- cxl_rr->decoder = cxld;
}
+ cxld = cxl_rr->decoder;
rc = cxl_rr_ep_add(cxl_rr, cxled);
if (rc) {
if (cxl_rr->nr_targets_set) {
int i, distance;
- distance = p->nr_targets / cxl_rr->nr_targets;
+ /*
+ * Passthrough ports impose no distance requirements between
+ * peers
+ */
+ if (port->nr_dports == 1)
+ distance = 0;
+ else
+ distance = p->nr_targets / cxl_rr->nr_targets;
for (i = 0; i < cxl_rr->nr_targets_set; i++)
if (ep->dport == cxlsd->target[i]) {
rc = check_last_peer(cxled, ep, cxl_rr,
static void cxl_region_release(struct device *dev)
{
+ struct cxl_root_decoder *cxlrd = to_cxl_root_decoder(dev->parent);
struct cxl_region *cxlr = to_cxl_region(dev);
+ int id = atomic_read(&cxlrd->region_id);
+
+ /*
+ * Try to reuse the recently idled id rather than the cached
+ * next id to prevent the region id space from increasing
+ * unnecessarily.
+ */
+ if (cxlr->id < id)
+ if (atomic_try_cmpxchg(&cxlrd->region_id, &id, cxlr->id)) {
+ memregion_free(id);
+ goto out;
+ }
memregion_free(cxlr->id);
+out:
+ put_device(dev->parent);
kfree(cxlr);
}
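The id-reuse above is a single compare-and-swap: the cached next id only moves backwards when the id being released is lower, keeping the region id space dense. A minimal userspace sketch of the same idea, assuming C11 atomics in place of the kernel's atomic_try_cmpxchg():

    #include <stdatomic.h>
    #include <stdio.h>

    static atomic_int region_id = 3;    /* cached next region id */

    /* Reuse a just-freed id when it is lower than the cached next id. */
    static void release_region_id(int freed)
    {
        int expected = atomic_load(&region_id);

        if (freed < expected)
            atomic_compare_exchange_strong(&region_id, &expected, freed);
    }

    int main(void)
    {
        release_region_id(1);
        printf("next region id: %d\n", atomic_load(&region_id)); /* 1 */
        return 0;
    }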
static void unregister_region(void *dev)
{
struct cxl_region *cxlr = to_cxl_region(dev);
+ struct cxl_region_params *p = &cxlr->params;
+ int i;
device_del(dev);
+
+ /*
+	 * Now that region sysfs is shut down, the parameter block is
+	 * read-only, so there is no need to hold the region rwsem to
+	 * access the region parameters.
+ */
+ for (i = 0; i < p->interleave_ways; i++)
+ detach_target(cxlr, i);
+
cxl_region_iomem_release(cxlr);
put_device(dev);
}
device_initialize(dev);
lockdep_set_class(&dev->mutex, &cxl_region_key);
dev->parent = &cxlrd->cxlsd.cxld.dev;
+ /*
+ * Keep root decoder pinned through cxl_region_release to fixup
+ * region id allocations
+ */
+ get_device(dev->parent);
device_set_pm_not_required(dev);
dev->bus = &cxl_bus_type;
dev->type = &cxl_region_type;
struct device dev;
struct cxl_memdev *cxlmd;
struct cxl_nvdimm_bridge *bridge;
- struct cxl_pmem_region *region;
+ struct xarray pmem_regions;
};
struct cxl_pmem_region_mapping {
* @regions: cxl_region_ref instances, regions mapped by this port
* @parent_dport: dport that points to this port in the parent
* @decoder_ida: allocator for decoder ids
+ * @nr_dports: number of entries in @dports
* @hdm_end: track last allocated HDM decoder instance for allocation ordering
* @commit_end: cursor to track highest committed decoder for commit ordering
* @component_reg_phys: component register capability base address (optional)
struct xarray regions;
struct cxl_dport *parent_dport;
struct ida decoder_ida;
+ int nr_dports;
int hdm_end;
int commit_end;
resource_size_t component_reg_phys;
struct cxl_nvdimm *cxl_nvd = nvdimm_provider_data(nvdimm);
struct cxl_nvdimm_bridge *cxl_nvb = cxl_nvd->bridge;
struct cxl_pmem_region *cxlr_pmem;
+ unsigned long index;
device_lock(&cxl_nvb->dev);
- cxlr_pmem = cxl_nvd->region;
dev_set_drvdata(&cxl_nvd->dev, NULL);
- cxl_nvd->region = NULL;
- device_unlock(&cxl_nvb->dev);
+ xa_for_each(&cxl_nvd->pmem_regions, index, cxlr_pmem) {
+ get_device(&cxlr_pmem->dev);
+ device_unlock(&cxl_nvb->dev);
- if (cxlr_pmem) {
device_release_driver(&cxlr_pmem->dev);
put_device(&cxlr_pmem->dev);
+
+ device_lock(&cxl_nvb->dev);
}
+ device_unlock(&cxl_nvb->dev);
nvdimm_delete(nvdimm);
cxl_nvd->bridge = NULL;
*cmd = (struct nd_cmd_get_config_size) {
.config_size = cxlds->lsa_size,
- .max_xfer = cxlds->payload_size,
+ .max_xfer = cxlds->payload_size - sizeof(struct cxl_mbox_set_lsa),
};
return 0;
return -EINVAL;
/* 4-byte status follows the input data in the payload */
- if (struct_size(cmd, in_buf, cmd->in_length) + 4 > buf_len)
+ if (size_add(struct_size(cmd, in_buf, cmd->in_length), 4) > buf_len)
return -EINVAL;
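The switch to size_add() hardens the bounds check itself: with plain addition, a large cmd->in_length could wrap the sum past buf_len and slip through. A sketch of the saturating behaviour in plain C (the kernel's size_add() likewise saturates to SIZE_MAX on overflow):

    #include <stdint.h>
    #include <stddef.h>

    /* Saturating add: the result is never smaller than either operand,
     * so a following "> buf_len" check cannot be fooled by wraparound. */
    static size_t size_add_sat(size_t a, size_t b)
    {
        size_t r = a + b;

        return r < a ? SIZE_MAX : r;
    }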
set_lsa =
static void unregister_nvdimm_region(void *nd_region)
{
- struct cxl_nvdimm_bridge *cxl_nvb;
- struct cxl_pmem_region *cxlr_pmem;
+ nvdimm_region_delete(nd_region);
+}
+
+static int cxl_nvdimm_add_region(struct cxl_nvdimm *cxl_nvd,
+ struct cxl_pmem_region *cxlr_pmem)
+{
+ int rc;
+
+ rc = xa_insert(&cxl_nvd->pmem_regions, (unsigned long)cxlr_pmem,
+ cxlr_pmem, GFP_KERNEL);
+ if (rc)
+ return rc;
+
+ get_device(&cxlr_pmem->dev);
+ return 0;
+}
+
+static void cxl_nvdimm_del_region(struct cxl_nvdimm *cxl_nvd,
+ struct cxl_pmem_region *cxlr_pmem)
+{
+ /*
+ * It is possible this is called without a corresponding
+ * cxl_nvdimm_add_region for @cxlr_pmem
+ */
+ cxlr_pmem = xa_erase(&cxl_nvd->pmem_regions, (unsigned long)cxlr_pmem);
+ if (cxlr_pmem)
+ put_device(&cxlr_pmem->dev);
+}
+
+static void release_mappings(void *data)
+{
int i;
+ struct cxl_pmem_region *cxlr_pmem = data;
+ struct cxl_nvdimm_bridge *cxl_nvb = cxlr_pmem->bridge;
- cxlr_pmem = nd_region_provider_data(nd_region);
- cxl_nvb = cxlr_pmem->bridge;
device_lock(&cxl_nvb->dev);
for (i = 0; i < cxlr_pmem->nr_mappings; i++) {
struct cxl_pmem_region_mapping *m = &cxlr_pmem->mapping[i];
struct cxl_nvdimm *cxl_nvd = m->cxl_nvd;
- if (cxl_nvd->region) {
- put_device(&cxlr_pmem->dev);
- cxl_nvd->region = NULL;
- }
+ cxl_nvdimm_del_region(cxl_nvd, cxlr_pmem);
}
device_unlock(&cxl_nvb->dev);
-
- nvdimm_region_delete(nd_region);
}
static void cxlr_pmem_remove_resource(void *res)
if (!cxl_nvb->nvdimm_bus) {
dev_dbg(dev, "nvdimm bus not found\n");
rc = -ENXIO;
- goto err;
+ goto out_nvb;
}
memset(&mappings, 0, sizeof(mappings));
res = devm_kzalloc(dev, sizeof(*res), GFP_KERNEL);
if (!res) {
rc = -ENOMEM;
- goto err;
+ goto out_nvb;
}
res->name = "Persistent Memory";
rc = insert_resource(&iomem_resource, res);
if (rc)
- goto err;
+ goto out_nvb;
rc = devm_add_action_or_reset(dev, cxlr_pmem_remove_resource, res);
if (rc)
- goto err;
+ goto out_nvb;
ndr_desc.res = res;
ndr_desc.provider_data = cxlr_pmem;
nd_set = devm_kzalloc(dev, sizeof(*nd_set), GFP_KERNEL);
if (!nd_set) {
rc = -ENOMEM;
- goto err;
+ goto out_nvb;
}
ndr_desc.memregion = cxlr->id;
info = kmalloc_array(cxlr_pmem->nr_mappings, sizeof(*info), GFP_KERNEL);
if (!info) {
rc = -ENOMEM;
- goto err;
+ goto out_nvb;
}
+ rc = devm_add_action_or_reset(dev, release_mappings, cxlr_pmem);
+ if (rc)
+ goto out_nvd;
+
for (i = 0; i < cxlr_pmem->nr_mappings; i++) {
struct cxl_pmem_region_mapping *m = &cxlr_pmem->mapping[i];
struct cxl_memdev *cxlmd = m->cxlmd;
dev_dbg(dev, "[%d]: %s: no cxl_nvdimm found\n", i,
dev_name(&cxlmd->dev));
rc = -ENODEV;
- goto err;
+ goto out_nvd;
}
/* safe to drop ref now with bridge lock held */
dev_dbg(dev, "[%d]: %s: no nvdimm found\n", i,
dev_name(&cxlmd->dev));
rc = -ENODEV;
- goto err;
+ goto out_nvd;
}
- cxl_nvd->region = cxlr_pmem;
- get_device(&cxlr_pmem->dev);
+
+ /*
+ * Pin the region per nvdimm device as those may be released
+ * out-of-order with respect to the region, and a single nvdimm
+	 * may be associated with multiple regions
+ */
+ rc = cxl_nvdimm_add_region(cxl_nvd, cxlr_pmem);
+ if (rc)
+ goto out_nvd;
m->cxl_nvd = cxl_nvd;
mappings[i] = (struct nd_mapping_desc) {
.nvdimm = nvdimm,
nvdimm_pmem_region_create(cxl_nvb->nvdimm_bus, &ndr_desc);
if (!cxlr_pmem->nd_region) {
rc = -ENOMEM;
- goto err;
+ goto out_nvd;
}
rc = devm_add_action_or_reset(dev, unregister_nvdimm_region,
cxlr_pmem->nd_region);
-out:
+out_nvd:
kfree(info);
+out_nvb:
device_unlock(&cxl_nvb->dev);
put_device(&cxl_nvb->dev);
return rc;
-
-err:
- dev_dbg(dev, "failed to create nvdimm region\n");
- for (i--; i >= 0; i--) {
- nvdimm = mappings[i].nvdimm;
- cxl_nvd = nvdimm_provider_data(nvdimm);
- put_device(&cxl_nvd->region->dev);
- cxl_nvd->region = NULL;
- }
- goto out;
}
static struct cxl_driver cxl_pmem_region_driver = {
device_unregister(&scmi_dev->dev);
}
+void scmi_device_link_add(struct device *consumer, struct device *supplier)
+{
+ struct device_link *link;
+
+ link = device_link_add(consumer, supplier, DL_FLAG_AUTOREMOVE_CONSUMER);
+
+ WARN_ON(!link);
+}
+
void scmi_set_handle(struct scmi_device *scmi_dev)
{
scmi_dev->handle = scmi_handle_get(&scmi_dev->dev);
+ if (scmi_dev->handle)
+ scmi_device_link_add(&scmi_dev->dev, scmi_dev->handle->dev);
}
int scmi_protocol_register(const struct scmi_protocol *proto)
struct scmi_revision_info *
scmi_revision_area_get(const struct scmi_protocol_handle *ph);
int scmi_handle_put(const struct scmi_handle *handle);
+void scmi_device_link_add(struct device *consumer, struct device *supplier);
struct scmi_handle *scmi_handle_get(struct device *dev);
void scmi_set_handle(struct scmi_device *scmi_dev);
void scmi_setup_protocol_implemented(const struct scmi_protocol_handle *ph,
*
* @dev: Reference to device in the SCMI hierarchy corresponding to this
* channel
+ * @rx_timeout_ms: The configured RX timeout in milliseconds.
* @handle: Pointer to SCMI entity handle
* @no_completion_irq: Flag to indicate that this channel has no completion
* interrupt mechanism for synchronous commands.
*/
struct scmi_chan_info {
struct device *dev;
+ unsigned int rx_timeout_ms;
struct scmi_handle *handle;
bool no_completion_irq;
void *transport_info;
struct scmi_shared_mem;
void shmem_tx_prepare(struct scmi_shared_mem __iomem *shmem,
- struct scmi_xfer *xfer);
+ struct scmi_xfer *xfer, struct scmi_chan_info *cinfo);
u32 shmem_read_header(struct scmi_shared_mem __iomem *shmem);
void shmem_fetch_response(struct scmi_shared_mem __iomem *shmem,
struct scmi_xfer *xfer);
return -ENOMEM;
cinfo->dev = dev;
+ cinfo->rx_timeout_ms = info->desc->max_rx_timeout_ms;
ret = info->desc->ops->chan_setup(cinfo, info->dev, tx);
if (ret)
{
int ret = scmi_chan_setup(info, dev, prot_id, true);
- if (!ret) /* Rx is optional, hence no error check */
- scmi_chan_setup(info, dev, prot_id, false);
+ if (!ret) {
+ /* Rx is optional, report only memory errors */
+ ret = scmi_chan_setup(info, dev, prot_id, false);
+ if (ret && ret != -ENOMEM)
+ ret = 0;
+ }
return ret;
}
sdev = scmi_get_protocol_device(child, info,
id_table->protocol_id,
id_table->name);
- /* Set handle if not already set: device existed */
- if (sdev && !sdev->handle)
- sdev->handle =
- scmi_handle_get_from_info_unlocked(info);
+ if (sdev) {
+ /* Set handle if not already set: device existed */
+ if (!sdev->handle)
+ sdev->handle =
+ scmi_handle_get_from_info_unlocked(info);
+ /* Relink consumer and suppliers */
+ if (sdev->handle)
+ scmi_device_link_add(&sdev->dev,
+ sdev->handle->dev);
+ }
} else {
dev_err(info->dev,
"Failed. SCMI protocol %d not active.\n",
static int scmi_remove(struct platform_device *pdev)
{
- int ret = 0, id;
+ int ret, id;
struct scmi_info *info = platform_get_drvdata(pdev);
struct device_node *child;
mutex_lock(&scmi_list_mutex);
if (info->users)
- ret = -EBUSY;
- else
- list_del(&info->node);
+ dev_warn(&pdev->dev,
+ "Still active SCMI users will be forcibly unbound.\n");
+ list_del(&info->node);
mutex_unlock(&scmi_list_mutex);
- if (ret)
- return ret;
-
scmi_notification_exit(&info->handle);
mutex_lock(&info->protocols_mtx);
idr_destroy(&info->active_protocols);
/* Safe to free channels since no more users */
- return scmi_cleanup_txrx_channels(info);
+ ret = scmi_cleanup_txrx_channels(info);
+ if (ret)
+		dev_warn(&pdev->dev, "Failed to clean up SCMI channels.\n");
+
+ return 0;
}
static ssize_t protocol_version_show(struct device *dev,
static struct platform_driver scmi_driver = {
.driver = {
.name = "arm-scmi",
+ .suppress_bind_attrs = true,
.of_match_table = scmi_of_match,
.dev_groups = versions_groups,
},
{
struct scmi_mailbox *smbox = client_to_scmi_mailbox(cl);
- shmem_tx_prepare(smbox->shmem, m);
+ shmem_tx_prepare(smbox->shmem, m, smbox->cinfo);
}
static void rx_callback(struct mbox_client *cl, void *m)
msg_tx_prepare(channel->req.msg, xfer);
ret = invoke_process_msg_channel(channel, msg_command_size(xfer));
} else {
- shmem_tx_prepare(channel->req.shmem, xfer);
+ shmem_tx_prepare(channel->req.shmem, xfer, cinfo);
ret = invoke_process_smt_channel(channel);
}
* Copyright (C) 2019 ARM Ltd.
*/
+#include <linux/ktime.h>
#include <linux/io.h>
#include <linux/processor.h>
#include <linux/types.h>
+#include <asm-generic/bug.h>
+
#include "common.h"
/*
};
void shmem_tx_prepare(struct scmi_shared_mem __iomem *shmem,
- struct scmi_xfer *xfer)
+ struct scmi_xfer *xfer, struct scmi_chan_info *cinfo)
{
+ ktime_t stop;
+
/*
	 * Ideally the channel should be free by now, unless the OS timed out
	 * the last request while the platform continued to process it; wait
	 * until it releases the shared memory, otherwise we may end up
- * overwriting its response with new message payload or vice-versa
+ * overwriting its response with new message payload or vice-versa.
+ * Giving up anyway after twice the expected channel timeout so as
+ * not to bail-out on intermittent issues where the platform is
+ * occasionally a bit slower to answer.
+ *
+ * Note that after a timeout is detected we bail-out and carry on but
+ * the transport functionality is probably permanently compromised:
+ * this is just to ease debugging and avoid complete hangs on boot
+ * due to a misbehaving SCMI firmware.
*/
- spin_until_cond(ioread32(&shmem->channel_status) &
- SCMI_SHMEM_CHAN_STAT_CHANNEL_FREE);
+ stop = ktime_add_ms(ktime_get(), 2 * cinfo->rx_timeout_ms);
+ spin_until_cond((ioread32(&shmem->channel_status) &
+ SCMI_SHMEM_CHAN_STAT_CHANNEL_FREE) ||
+ ktime_after(ktime_get(), stop));
+ if (!(ioread32(&shmem->channel_status) &
+ SCMI_SHMEM_CHAN_STAT_CHANNEL_FREE)) {
+ WARN_ON_ONCE(1);
+ dev_err(cinfo->dev,
+ "Timeout waiting for a free TX channel !\n");
+ return;
+ }
+
/* Mark channel busy + clear error */
iowrite32(0x0, &shmem->channel_status);
iowrite32(xfer->hdr.poll_completion ? 0 : SCMI_SHMEM_FLAG_INTR_ENABLED,
*/
smc_channel_lock_acquire(scmi_info, xfer);
- shmem_tx_prepare(scmi_info->shmem, xfer);
+ shmem_tx_prepare(scmi_info->shmem, xfer, cinfo);
arm_smccc_1_1_invoke(scmi_info->func_id, 0, 0, 0, 0, 0, 0, 0, &res);
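Both shared-memory transports above now funnel through the bounded wait in shmem_tx_prepare(). The same deadline-bounded polling pattern in a self-contained userspace sketch (clock_gettime() standing in for ktime_get(), a volatile flag standing in for the channel status register):

    #include <stdbool.h>
    #include <time.h>

    static long long now_ms(void)
    {
        struct timespec ts;

        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec * 1000LL + ts.tv_nsec / 1000000;
    }

    /* Poll until the free bit is set or twice the expected timeout
     * elapses; report whether the channel actually became free. */
    static bool wait_channel_free(const volatile unsigned int *status,
                                  unsigned int free_bit,
                                  long long rx_timeout_ms)
    {
        long long stop = now_ms() + 2 * rx_timeout_ms;

        while (!(*status & free_bit) && now_ms() < stop)
            ;   /* spin, as spin_until_cond() does */
        return (*status & free_bit) != 0;
    }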
{
unsigned long flags;
DECLARE_COMPLETION_ONSTACK(vioch_shutdown_done);
- void *deferred_wq = NULL;
/*
* Prepare to wait for the last release if not already released
vioch->shutdown_done = &vioch_shutdown_done;
virtio_break_device(vioch->vqueue->vdev);
- if (!vioch->is_rx && vioch->deferred_tx_wq) {
- deferred_wq = vioch->deferred_tx_wq;
+ if (!vioch->is_rx && vioch->deferred_tx_wq)
/* Cannot be kicked anymore after this...*/
vioch->deferred_tx_wq = NULL;
- }
spin_unlock_irqrestore(&vioch->lock, flags);
- if (deferred_wq)
- destroy_workqueue(deferred_wq);
-
scmi_vio_channel_release(vioch);
/* Let any possibly concurrent RX path release the channel */
return vioch && !vioch->cinfo;
}
+static void scmi_destroy_tx_workqueue(void *deferred_tx_wq)
+{
+ destroy_workqueue(deferred_tx_wq);
+}
+
static int virtio_chan_setup(struct scmi_chan_info *cinfo, struct device *dev,
bool tx)
{
/* Setup a deferred worker for polling. */
if (tx && !vioch->deferred_tx_wq) {
+ int ret;
+
vioch->deferred_tx_wq =
alloc_workqueue(dev_name(&scmi_vdev->dev),
WQ_UNBOUND | WQ_FREEZABLE | WQ_SYSFS,
if (!vioch->deferred_tx_wq)
return -ENOMEM;
+ ret = devm_add_action_or_reset(dev, scmi_destroy_tx_workqueue,
+ vioch->deferred_tx_wq);
+ if (ret)
+ return ret;
+
INIT_WORK(&vioch->deferred_tx_work,
scmi_vio_deferred_tx_worker);
}
for (i = 0; i < vioch->max_msg; i++) {
struct scmi_vio_msg *msg;
- msg = devm_kzalloc(cinfo->dev, sizeof(*msg), GFP_KERNEL);
+ msg = devm_kzalloc(dev, sizeof(*msg), GFP_KERNEL);
if (!msg)
return -ENOMEM;
if (tx) {
- msg->request = devm_kzalloc(cinfo->dev,
+ msg->request = devm_kzalloc(dev,
VIRTIO_SCMI_MAX_PDU_SIZE,
GFP_KERNEL);
if (!msg->request)
refcount_set(&msg->users, 1);
}
- msg->input = devm_kzalloc(cinfo->dev, VIRTIO_SCMI_MAX_PDU_SIZE,
+ msg->input = devm_kzalloc(dev, VIRTIO_SCMI_MAX_PDU_SIZE,
GFP_KERNEL);
if (!msg->input)
return -ENOMEM;
is supported by the encapsulated image. (The compression algorithm
used is described in the zboot image header)
-config EFI_ZBOOT_SIGNED
- def_bool y
- depends on EFI_ZBOOT_SIGNING_CERT != ""
- depends on EFI_ZBOOT_SIGNING_KEY != ""
-
-config EFI_ZBOOT_SIGNING
- bool "Sign the EFI decompressor for UEFI secure boot"
- depends on EFI_ZBOOT
- help
- Use the 'sbsign' command line tool (which must exist on the host
- path) to sign both the EFI decompressor PE/COFF image, as well as the
- encapsulated PE/COFF image, which is subsequently compressed and
- wrapped by the former image.
-
-config EFI_ZBOOT_SIGNING_CERT
- string "Certificate to use for signing the compressed EFI boot image"
- depends on EFI_ZBOOT_SIGNING
-
-config EFI_ZBOOT_SIGNING_KEY
- string "Private key to use for signing the compressed EFI boot image"
- depends on EFI_ZBOOT_SIGNING
-
config EFI_ARMSTUB_DTB_LOADER
bool "Enable the DTB loader"
depends on EFI_GENERIC_STUB && !RISCV && !LOONGARCH
if (!(md->attribute & EFI_MEMORY_RUNTIME))
continue;
- if (md->virt_addr == 0)
+ if (md->virt_addr == U64_MAX)
return false;
ret = efi_create_mapping(&efi_mm, md);
acpi_status ret = acpi_load_table(data, NULL);
if (ret)
pr_err("failed to load table: %u\n", ret);
+ else
+ continue;
} else {
pr_err("failed to get var data: 0x%lx\n", status);
}
seed = early_memremap(efi_rng_seed, sizeof(*seed));
if (seed != NULL) {
- size = READ_ONCE(seed->size);
+ size = min(seed->size, EFI_RANDOM_SEED_SIZE);
early_memunmap(seed, sizeof(*seed));
} else {
pr_err("Could not map UEFI random seed!\n");
zboot-method-$(CONFIG_KERNEL_GZIP) := gzip
zboot-size-len-$(CONFIG_KERNEL_GZIP) := 0
-quiet_cmd_sbsign = SBSIGN $@
- cmd_sbsign = sbsign --out $@ $< \
- --key $(CONFIG_EFI_ZBOOT_SIGNING_KEY) \
- --cert $(CONFIG_EFI_ZBOOT_SIGNING_CERT)
-
-$(obj)/$(EFI_ZBOOT_PAYLOAD).signed: $(obj)/$(EFI_ZBOOT_PAYLOAD) FORCE
- $(call if_changed,sbsign)
-
-ZBOOT_PAYLOAD-y := $(EFI_ZBOOT_PAYLOAD)
-ZBOOT_PAYLOAD-$(CONFIG_EFI_ZBOOT_SIGNED) := $(EFI_ZBOOT_PAYLOAD).signed
-
-$(obj)/vmlinuz: $(obj)/$(ZBOOT_PAYLOAD-y) FORCE
+$(obj)/vmlinuz: $(obj)/$(EFI_ZBOOT_PAYLOAD) FORCE
$(call if_changed,$(zboot-method-y))
OBJCOPYFLAGS_vmlinuz.o := -I binary -O $(EFI_ZBOOT_BFD_TARGET) \
- --rename-section .data=.gzdata,load,alloc,readonly,contents
+ --rename-section .data=.gzdata,load,alloc,readonly,contents
$(obj)/vmlinuz.o: $(obj)/vmlinuz FORCE
$(call if_changed,objcopy)
$(obj)/vmlinuz.efi.elf: $(obj)/vmlinuz.o $(ZBOOT_DEPS) FORCE
$(call if_changed,ld)
-ZBOOT_EFI-y := vmlinuz.efi
-ZBOOT_EFI-$(CONFIG_EFI_ZBOOT_SIGNED) := vmlinuz.efi.unsigned
-
-OBJCOPYFLAGS_$(ZBOOT_EFI-y) := -O binary
-$(obj)/$(ZBOOT_EFI-y): $(obj)/vmlinuz.efi.elf FORCE
+OBJCOPYFLAGS_vmlinuz.efi := -O binary
+$(obj)/vmlinuz.efi: $(obj)/vmlinuz.efi.elf FORCE
$(call if_changed,objcopy)
targets += zboot-header.o vmlinuz vmlinuz.o vmlinuz.efi.elf vmlinuz.efi
-
-ifneq ($(CONFIG_EFI_ZBOOT_SIGNED),)
-$(obj)/vmlinuz.efi: $(obj)/vmlinuz.efi.unsigned FORCE
- $(call if_changed,sbsign)
-endif
-
-targets += $(EFI_ZBOOT_PAYLOAD).signed vmlinuz.efi.unsigned
/*
* Set the virtual address field of all
- * EFI_MEMORY_RUNTIME entries to 0. This will signal
- * the incoming kernel that no virtual translation has
- * been installed.
+ * EFI_MEMORY_RUNTIME entries to U64_MAX. This will
+ * signal the incoming kernel that no virtual
+ * translation has been installed.
*/
for (l = 0; l < priv.boot_memmap->map_size;
l += priv.boot_memmap->desc_size) {
p = (void *)priv.boot_memmap->map + l;
if (p->attribute & EFI_MEMORY_RUNTIME)
- p->virt_addr = 0;
+ p->virt_addr = U64_MAX;
}
}
return EFI_SUCCESS;
if (status != EFI_SUCCESS)
return status;
- status = efi_bs_call(allocate_pool, EFI_RUNTIME_SERVICES_DATA,
+ /*
+ * Use EFI_ACPI_RECLAIM_MEMORY here so that it is guaranteed that the
+ * allocation will survive a kexec reboot (although we refresh the seed
+ * beforehand)
+ */
+ status = efi_bs_call(allocate_pool, EFI_ACPI_RECLAIM_MEMORY,
sizeof(*seed) + EFI_RANDOM_SEED_SIZE,
(void **)&seed);
if (status != EFI_SUCCESS)
* relocated by efi_relocate_kernel.
* On failure, we exit to the firmware via efi_exit instead of returning.
*/
-unsigned long efi_main(efi_handle_t handle,
- efi_system_table_t *sys_table_arg,
- struct boot_params *boot_params)
+asmlinkage unsigned long efi_main(efi_handle_t handle,
+ efi_system_table_t *sys_table_arg,
+ struct boot_params *boot_params)
{
unsigned long bzimage_addr = (unsigned long)startup_32;
unsigned long buffer_start, buffer_end;
}
}
-PROVIDE(__efistub__gzdata_size = ABSOLUTE(. - __efistub__gzdata_start));
+PROVIDE(__efistub__gzdata_size =
+ ABSOLUTE(__efistub__gzdata_end - __efistub__gzdata_start));
PROVIDE(__data_rawsize = ABSOLUTE(_edata - _etext));
PROVIDE(__data_size = ABSOLUTE(_end - _etext));
if (!(md->attribute & EFI_MEMORY_RUNTIME))
continue;
- if (md->virt_addr == 0)
+ if (md->virt_addr == U64_MAX)
return false;
ret = efi_create_mapping(&efi_mm, md);
goto out_calc;
}
- memblock_reserve((unsigned long)final_tbl,
+ memblock_reserve(efi.tpm_final_log,
tbl_size + sizeof(*final_tbl));
efi_tpm_final_log_size = tbl_size;
*/
#include <linux/types.h>
+#include <linux/sizes.h>
#include <linux/errno.h>
#include <linux/init.h>
#include <linux/module.h>
static DEFINE_SEMAPHORE(efivars_lock);
-efi_status_t check_var_size(u32 attributes, unsigned long size)
-{
- const struct efivar_operations *fops;
-
- fops = __efivars->ops;
-
- if (!fops->query_variable_store)
- return EFI_UNSUPPORTED;
-
- return fops->query_variable_store(attributes, size, false);
-}
-EXPORT_SYMBOL_NS_GPL(check_var_size, EFIVAR);
-
-efi_status_t check_var_size_nonblocking(u32 attributes, unsigned long size)
+static efi_status_t check_var_size(bool nonblocking, u32 attributes,
+ unsigned long size)
{
const struct efivar_operations *fops;
+ efi_status_t status;
fops = __efivars->ops;
if (!fops->query_variable_store)
- return EFI_UNSUPPORTED;
-
- return fops->query_variable_store(attributes, size, true);
+ status = EFI_UNSUPPORTED;
+ else
+ status = fops->query_variable_store(attributes, size,
+ nonblocking);
+ if (status == EFI_UNSUPPORTED)
+ return (size <= SZ_64K) ? EFI_SUCCESS : EFI_OUT_OF_RESOURCES;
+ return status;
}
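Beyond the restructuring, the helper's policy changes: when the firmware offers no query_variable_store (or reports EFI_UNSUPPORTED), writes are now optimistically allowed up to 64 KiB instead of being refused. A condensed restatement with stand-in status codes (the real EFI_* constants live in the kernel headers):

    enum status { SUCCESS, UNSUPPORTED, OUT_OF_RESOURCES };

    static enum status check_var_size_sketch(enum status fw_status,
                                             unsigned long size)
    {
        if (fw_status != UNSUPPORTED)
            return fw_status;   /* firmware gave a real answer */
        /* No headroom info: assume writes up to 64 KiB fit. */
        return size <= 65536 ? SUCCESS : OUT_OF_RESOURCES;
    }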
-EXPORT_SYMBOL_NS_GPL(check_var_size_nonblocking, EFIVAR);
/**
* efivars_kobject - get the kobject for the registered efivars
}
EXPORT_SYMBOL_NS_GPL(efivar_get_next_variable, EFIVAR);
-/*
- * efivar_set_variable_blocking() - local helper function for set_variable
- *
- * Must be called with efivars_lock held.
- */
-static efi_status_t
-efivar_set_variable_blocking(efi_char16_t *name, efi_guid_t *vendor,
- u32 attr, unsigned long data_size, void *data)
-{
- efi_status_t status;
-
- if (data_size > 0) {
- status = check_var_size(attr, data_size +
- ucs2_strsize(name, 1024));
- if (status != EFI_SUCCESS)
- return status;
- }
- return __efivars->ops->set_variable(name, vendor, attr, data_size, data);
-}
-
/*
* efivar_set_variable_locked() - set a variable identified by name/vendor
*
efi_set_variable_t *setvar;
efi_status_t status;
- if (!nonblocking)
- return efivar_set_variable_blocking(name, vendor, attr,
- data_size, data);
+ if (data_size > 0) {
+ status = check_var_size(nonblocking, attr,
+ data_size + ucs2_strsize(name, 1024));
+ if (status != EFI_SUCCESS)
+ return status;
+ }
/*
* If no _nonblocking variant exists, the ordinary one
* is assumed to be non-blocking.
*/
- setvar = __efivars->ops->set_variable_nonblocking ?:
- __efivars->ops->set_variable;
+ setvar = __efivars->ops->set_variable_nonblocking;
+ if (!setvar || !nonblocking)
+ setvar = __efivars->ops->set_variable;
- if (data_size > 0) {
- status = check_var_size_nonblocking(attr, data_size +
- ucs2_strsize(name, 1024));
- if (status != EFI_SUCCESS)
- return status;
- }
return setvar(name, vendor, attr, data_size, data);
}
EXPORT_SYMBOL_NS_GPL(efivar_set_variable_locked, EFIVAR);
if (efivar_lock())
return EFI_ABORTED;
- status = efivar_set_variable_blocking(name, vendor, attr, data_size, data);
+ status = efivar_set_variable_locked(name, vendor, attr, data_size,
+ data, false);
efivar_unlock();
return status;
}
#include <linux/of_device.h>
#include <linux/platform_device.h>
#include <linux/module.h>
+#include <linux/seq_file.h>
#include <linux/irqdomain.h>
#include <linux/irqchip/chained_irq.h>
#include <linux/pinctrl/consumer.h>
struct tegra_gpio_bank *bank_info;
const struct tegra_gpio_soc_config *soc;
struct gpio_chip gc;
- struct irq_chip ic;
u32 bank_count;
unsigned int *irqs;
};
unsigned int gpio = d->hwirq;
tegra_gpio_mask_write(tgi, GPIO_MSK_INT_ENB(tgi, gpio), gpio, 0);
+ gpiochip_disable_irq(chip, gpio);
}
static void tegra_gpio_irq_unmask(struct irq_data *d)
struct tegra_gpio_info *tgi = gpiochip_get_data(chip);
unsigned int gpio = d->hwirq;
+ gpiochip_enable_irq(chip, gpio);
tegra_gpio_mask_write(tgi, GPIO_MSK_INT_ENB(tgi, gpio), gpio, 1);
}
tegra_gpio_enable(tgi, d->hwirq);
}
+static void tegra_gpio_irq_print_chip(struct irq_data *d, struct seq_file *s)
+{
+ struct gpio_chip *chip = irq_data_get_irq_chip_data(d);
+
+	seq_puts(s, dev_name(chip->parent));
+}
+
+static const struct irq_chip tegra_gpio_irq_chip = {
+ .irq_shutdown = tegra_gpio_irq_shutdown,
+ .irq_ack = tegra_gpio_irq_ack,
+ .irq_mask = tegra_gpio_irq_mask,
+ .irq_unmask = tegra_gpio_irq_unmask,
+ .irq_set_type = tegra_gpio_irq_set_type,
+#ifdef CONFIG_PM_SLEEP
+ .irq_set_wake = tegra_gpio_irq_set_wake,
+#endif
+ .irq_print_chip = tegra_gpio_irq_print_chip,
+ .irq_request_resources = tegra_gpio_irq_request_resources,
+ .irq_release_resources = tegra_gpio_irq_release_resources,
+ .flags = IRQCHIP_IMMUTABLE,
+};
+
+static const struct irq_chip tegra210_gpio_irq_chip = {
+ .irq_shutdown = tegra_gpio_irq_shutdown,
+ .irq_ack = tegra_gpio_irq_ack,
+ .irq_mask = tegra_gpio_irq_mask,
+ .irq_unmask = tegra_gpio_irq_unmask,
+ .irq_set_affinity = tegra_gpio_irq_set_affinity,
+ .irq_set_type = tegra_gpio_irq_set_type,
+#ifdef CONFIG_PM_SLEEP
+ .irq_set_wake = tegra_gpio_irq_set_wake,
+#endif
+ .irq_print_chip = tegra_gpio_irq_print_chip,
+ .irq_request_resources = tegra_gpio_irq_request_resources,
+ .irq_release_resources = tegra_gpio_irq_release_resources,
+ .flags = IRQCHIP_IMMUTABLE,
+};
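These const chips follow the immutable-irqchip contract: once a chip is flagged IRQCHIP_IMMUTABLE, gpiolib no longer wraps its mask/unmask callbacks, so the driver must call gpiochip_disable_irq()/gpiochip_enable_irq() itself (as the reworked mask/unmask hooks above do) and attach the chip with gpio_irq_chip_set_chip() rather than assigning irq->chip, as the probe hunk below does.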
+
#ifdef CONFIG_DEBUG_FS
#include <linux/debugfs.h>
-#include <linux/seq_file.h>
static int tegra_dbg_gpio_show(struct seq_file *s, void *unused)
{
tgi->gc.ngpio = tgi->bank_count * 32;
tgi->gc.parent = &pdev->dev;
- tgi->ic.name = "GPIO";
- tgi->ic.irq_ack = tegra_gpio_irq_ack;
- tgi->ic.irq_mask = tegra_gpio_irq_mask;
- tgi->ic.irq_unmask = tegra_gpio_irq_unmask;
- tgi->ic.irq_set_type = tegra_gpio_irq_set_type;
- tgi->ic.irq_shutdown = tegra_gpio_irq_shutdown;
-#ifdef CONFIG_PM_SLEEP
- tgi->ic.irq_set_wake = tegra_gpio_irq_set_wake;
-#endif
- tgi->ic.irq_request_resources = tegra_gpio_irq_request_resources;
- tgi->ic.irq_release_resources = tegra_gpio_irq_release_resources;
-
platform_set_drvdata(pdev, tgi);
if (tgi->soc->debounce_supported)
}
irq = &tgi->gc.irq;
- irq->chip = &tgi->ic;
irq->fwnode = of_node_to_fwnode(pdev->dev.of_node);
irq->child_to_parent_hwirq = tegra_gpio_child_to_parent_hwirq;
irq->populate_parent_alloc_arg = tegra_gpio_populate_parent_fwspec;
if (!irq->parent_domain)
return -EPROBE_DEFER;
- tgi->ic.irq_set_affinity = tegra_gpio_irq_set_affinity;
+ gpio_irq_chip_set_chip(irq, &tegra210_gpio_irq_chip);
+ } else {
+ gpio_irq_chip_set_chip(irq, &tegra_gpio_irq_chip);
}
tgi->regs = devm_platform_ioremap_resource(pdev, 0);
#define AMDGPU_RESET_VCE (1 << 13)
#define AMDGPU_RESET_VCE1 (1 << 14)
-#define AMDGPU_RESET_LEVEL_SOFT_RECOVERY (1 << 0)
-#define AMDGPU_RESET_LEVEL_MODE2 (1 << 1)
-
/* max cursor sizes (in pixels) */
#define CIK_CURSOR_WIDTH 128
#define CIK_CURSOR_HEIGHT 128
struct work_struct reset_work;
- uint32_t amdgpu_reset_level_mask;
bool job_hang;
};
reset_context.method = AMD_RESET_METHOD_NONE;
reset_context.reset_req_dev = adev;
clear_bit(AMDGPU_NEED_FULL_RESET, &reset_context.flags);
- clear_bit(AMDGPU_SKIP_MODE2_RESET, &reset_context.flags);
amdgpu_device_gpu_recover(adev, NULL, &reset_context);
}
void amdgpu_amdkfd_set_compute_idle(struct amdgpu_device *adev, bool idle)
{
+ /* Temporary workaround to fix issues observed in some
+ * compute applications when GFXOFF is enabled on GFX11.
+ */
+ if (IP_VERSION_MAJ(adev->ip_versions[GC_HWIP][0]) == 11) {
+ pr_debug("GFXOFF is %s\n", idle ? "enabled" : "disabled");
+ amdgpu_gfx_off_ctrl(adev, idle);
+ }
amdgpu_dpm_switch_power_profile(adev,
PP_SMC_POWER_PROFILE_COMPUTE,
!idle);
lock_srbm(adev, mec, pipe, 0, 0);
- WREG32(SOC15_REG_OFFSET(GC, 0, regCPC_INT_CNTL),
+ WREG32_SOC15(GC, 0, regCPC_INT_CNTL,
CP_INT_CNTL_RING0__TIME_STAMP_INT_ENABLE_MASK |
CP_INT_CNTL_RING0__OPCODE_ERROR_INT_ENABLE_MASK);
struct ttm_tt *ttm = bo->tbo.ttm;
int ret;
+ if (WARN_ON(ttm->num_pages != src_ttm->num_pages))
+ return -EINVAL;
+
ttm->sg = kmalloc(sizeof(*ttm->sg), GFP_KERNEL);
if (unlikely(!ttm->sg))
return -ENOMEM;
- if (WARN_ON(ttm->num_pages != src_ttm->num_pages))
- return -EINVAL;
-
/* Same sequence as in amdgpu_ttm_tt_pin_userptr */
ret = sg_alloc_table_from_pages(ttm->sg, src_ttm->pages,
ttm->num_pages, 0,
if (r)
return r;
- ctx->stable_pstate = current_stable_pstate;
+ if (mgr->adev->pm.stable_pstate_ctx)
+ ctx->stable_pstate = mgr->adev->pm.stable_pstate_ctx->stable_pstate;
+ else
+ ctx->stable_pstate = current_stable_pstate;
return 0;
}
return PTR_ERR(ent);
}
- debugfs_create_u32("amdgpu_reset_level", 0600, root, &adev->amdgpu_reset_level_mask);
-
/* Register debugfs entries for amdgpu_ttm */
amdgpu_ttm_debugfs_init(adev);
amdgpu_debugfs_pm_init(adev);
amdgpu_device_set_pg_state(adev, AMD_PG_STATE_UNGATE);
amdgpu_device_set_cg_state(adev, AMD_CG_STATE_UNGATE);
+ /*
+	 * Per the PMFW team's suggestion, the driver needs to disable gfxoff
+	 * and df cstate features for gpu reset (e.g. Mode1Reset)
+	 * scenarios. Add the missing df cstate disablement here.
+ */
+ if (amdgpu_dpm_set_df_cstate(adev, DF_CSTATE_DISALLOW))
+ dev_warn(adev->dev, "Failed to disallow df cstate");
+
for (i = adev->num_ip_blocks - 1; i >= 0; i--) {
if (!adev->ip_blocks[i].status.valid)
continue;
return r;
}
adev->ip_blocks[i].status.hw = true;
+
+ if (adev->in_s0ix && adev->ip_blocks[i].version->type == AMD_IP_BLOCK_TYPE_SMC) {
+			/* Disable gfxoff for IP resume; gfxoff will be re-enabled in
+			 * amdgpu_device_resume() after IP resume.
+ */
+ amdgpu_gfx_off_ctrl(adev, false);
+ DRM_DEBUG("will disable gfxoff for re-initializing other blocks\n");
+ }
+
}
return 0;
* at suspend time.
*
*/
-static void amdgpu_device_evict_resources(struct amdgpu_device *adev)
+static int amdgpu_device_evict_resources(struct amdgpu_device *adev)
{
+ int ret;
+
/* No need to evict vram on APUs for suspend to ram or s2idle */
if ((adev->in_s3 || adev->in_s0ix) && (adev->flags & AMD_IS_APU))
- return;
+ return 0;
- if (amdgpu_ttm_evict_resources(adev, TTM_PL_VRAM))
+ ret = amdgpu_ttm_evict_resources(adev, TTM_PL_VRAM);
+ if (ret)
DRM_WARN("evicting device resources failed\n");
-
+ return ret;
}
/*
if (!adev->in_s0ix)
amdgpu_amdkfd_suspend(adev, adev->in_runpm);
- amdgpu_device_evict_resources(adev);
+ r = amdgpu_device_evict_resources(adev);
+ if (r)
+ return r;
amdgpu_fence_driver_hw_fini(adev);
/* Make sure IB tests flushed */
flush_delayed_work(&adev->delayed_init_work);
+ if (adev->in_s0ix) {
+		/* Re-enable gfxoff, which was disabled for IP resume in
+		 * amdgpu_device_ip_resume_phase2().
+ */
+ amdgpu_gfx_off_ctrl(adev, true);
+ DRM_DEBUG("will enable gfxoff for the mission mode\n");
+ }
if (fbcon)
drm_fb_helper_set_suspend_unlocked(adev_to_drm(adev)->fb_helper, false);
reset_context->job = job;
reset_context->hive = hive;
-
/*
* Build list of devices to reset.
* In case we are in XGMI hive mode, resort the device list
amdgpu_ras_resume(adev);
} else {
r = amdgpu_do_asic_reset(device_list_handle, reset_context);
- if (r && r == -EAGAIN) {
- set_bit(AMDGPU_SKIP_MODE2_RESET, &reset_context->flags);
- adev->asic_reset_res = 0;
+ if (r && r == -EAGAIN)
goto retry;
- }
if (!r && gpu_reset_for_dev_remove)
goto recover_end;
drm_sched_start(&ring->sched, !tmp_adev->asic_reset_res);
}
- if (adev->enable_mes)
+ if (adev->enable_mes && adev->ip_versions[GC_HWIP][0] != IP_VERSION(11, 0, 3))
amdgpu_mes_self_test(tmp_adev);
if (!drm_drv_uses_atomic_modeset(adev_to_drm(tmp_adev)) && !job_signaled) {
reset_context.reset_req_dev = adev;
set_bit(AMDGPU_NEED_FULL_RESET, &reset_context.flags);
set_bit(AMDGPU_SKIP_HW_RESET, &reset_context.flags);
- set_bit(AMDGPU_SKIP_MODE2_RESET, &reset_context.flags);
adev->no_hw_access = true;
r = amdgpu_device_pre_asic_reset(adev, &reset_context);
pm_runtime_forbid(dev->dev);
}
- if (adev->ip_versions[MP1_HWIP][0] == IP_VERSION(13, 0, 2)) {
+ if (adev->ip_versions[MP1_HWIP][0] == IP_VERSION(13, 0, 2) &&
+ !amdgpu_sriov_vf(adev)) {
bool need_to_reset_gpu = false;
if (adev->gmc.xgmi.num_physical_nodes > 1) {
reset_context.method = AMD_RESET_METHOD_NONE;
reset_context.reset_req_dev = adev;
clear_bit(AMDGPU_NEED_FULL_RESET, &reset_context.flags);
- clear_bit(AMDGPU_SKIP_MODE2_RESET, &reset_context.flags);
r = amdgpu_device_gpu_recover(ring->adev, job, &reset_context);
if (r)
fw_info->feature = adev->psp.cap_feature_version;
break;
case AMDGPU_INFO_FW_MES_KIQ:
- fw_info->ver = adev->mes.ucode_fw_version[0];
- fw_info->feature = 0;
+ fw_info->ver = adev->mes.kiq_version & AMDGPU_MES_VERSION_MASK;
+ fw_info->feature = (adev->mes.kiq_version & AMDGPU_MES_FEAT_VERSION_MASK)
+ >> AMDGPU_MES_FEAT_VERSION_SHIFT;
break;
case AMDGPU_INFO_FW_MES:
- fw_info->ver = adev->mes.ucode_fw_version[1];
+ fw_info->ver = adev->mes.sched_version & AMDGPU_MES_VERSION_MASK;
+ fw_info->feature = (adev->mes.sched_version & AMDGPU_MES_FEAT_VERSION_MASK)
+ >> AMDGPU_MES_FEAT_VERSION_SHIFT;
+ break;
+ case AMDGPU_INFO_FW_IMU:
+ fw_info->ver = adev->gfx.imu_fw_version;
fw_info->feature = 0;
break;
default:
fw_info.feature, fw_info.ver);
}
+ /* IMU */
+ query_fw.fw_type = AMDGPU_INFO_FW_IMU;
+ query_fw.index = 0;
+ ret = amdgpu_firmware_info(&fw_info, &query_fw, adev);
+ if (ret)
+ return ret;
+ seq_printf(m, "IMU feature version: %u, firmware version: 0x%08x\n",
+ fw_info.feature, fw_info.ver);
+
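kiq_version and sched_version each pack a feature revision alongside the firmware version in one 32-bit word, which the hunks above unpack with the AMDGPU_MES_* masks and shift. A generic mask-and-shift sketch (mask widths and shift are placeholders, not the real AMDGPU_MES_* values):

    #include <stdint.h>
    #include <stdio.h>

    #define VER_MASK   0x00000fffu  /* placeholder */
    #define FEAT_MASK  0x0ff00000u  /* placeholder */
    #define FEAT_SHIFT 20           /* placeholder */

    int main(void)
    {
        uint32_t raw = 0x02300042;  /* made-up packed word */

        printf("version %u, feature %u\n",
               (unsigned)(raw & VER_MASK),
               (unsigned)((raw & FEAT_MASK) >> FEAT_SHIFT));
        return 0;
    }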
/* PSP SOS */
query_fw.fw_type = AMDGPU_INFO_FW_SOS;
ret = amdgpu_firmware_info(&fw_info, &query_fw, adev);
reset_context.method = AMD_RESET_METHOD_NONE;
reset_context.reset_req_dev = adev;
clear_bit(AMDGPU_NEED_FULL_RESET, &reset_context.flags);
- clear_bit(AMDGPU_SKIP_MODE2_RESET, &reset_context.flags);
amdgpu_device_gpu_recover(ras->adev, NULL, &reset_context);
}
static bool amdgpu_ras_asic_supported(struct amdgpu_device *adev)
{
+ if (amdgpu_sriov_vf(adev)) {
+ switch (adev->ip_versions[MP0_HWIP][0]) {
+ case IP_VERSION(13, 0, 2):
+ return true;
+ default:
+ return false;
+ }
+ }
+
+ if (adev->asic_type == CHIP_IP_DISCOVERY) {
+ switch (adev->ip_versions[MP0_HWIP][0]) {
+ case IP_VERSION(13, 0, 0):
+ case IP_VERSION(13, 0, 10):
+ return true;
+ default:
+ return false;
+ }
+ }
+
return adev->asic_type == CHIP_VEGA10 ||
adev->asic_type == CHIP_VEGA20 ||
adev->asic_type == CHIP_ARCTURUS ||
!amdgpu_ras_asic_supported(adev))
return;
- /* If driver run on sriov guest side, only enable ras for aldebaran */
- if (amdgpu_sriov_vf(adev) &&
- adev->ip_versions[MP1_HWIP][0] != IP_VERSION(13, 0, 2))
- return;
-
if (!adev->gmc.xgmi.connected_to_cpu) {
if (amdgpu_atomfirmware_mem_ecc_supported(adev)) {
dev_info(adev->dev, "MEM ECC is active.\n");
{
int ret = 0;
- adev->amdgpu_reset_level_mask = 0x1;
-
switch (adev->ip_versions[MP1_HWIP][0]) {
case IP_VERSION(13, 0, 2):
ret = aldebaran_reset_init(adev);
{
struct amdgpu_reset_handler *reset_handler = NULL;
- if (!(adev->amdgpu_reset_level_mask & AMDGPU_RESET_LEVEL_MODE2))
- return -ENOSYS;
-
- if (test_bit(AMDGPU_SKIP_MODE2_RESET, &reset_context->flags))
- return -ENOSYS;
-
if (adev->reset_cntl && adev->reset_cntl->get_reset_handler)
reset_handler = adev->reset_cntl->get_reset_handler(
adev->reset_cntl, reset_context);
int ret;
struct amdgpu_reset_handler *reset_handler = NULL;
- if (!(adev->amdgpu_reset_level_mask & AMDGPU_RESET_LEVEL_MODE2))
- return -ENOSYS;
-
- if (test_bit(AMDGPU_SKIP_MODE2_RESET, &reset_context->flags))
- return -ENOSYS;
-
if (adev->reset_cntl)
reset_handler = adev->reset_cntl->get_reset_handler(
adev->reset_cntl, reset_context);
AMDGPU_NEED_FULL_RESET = 0,
AMDGPU_SKIP_HW_RESET = 1,
- AMDGPU_SKIP_MODE2_RESET = 2,
- AMDGPU_RESET_FOR_DEVICE_REMOVE = 3,
+ AMDGPU_RESET_FOR_DEVICE_REMOVE = 2,
};
struct amdgpu_reset_context {
{
ktime_t deadline = ktime_add_us(ktime_get(), 10000);
- if (!(ring->adev->amdgpu_reset_level_mask & AMDGPU_RESET_LEVEL_SOFT_RECOVERY))
- return false;
-
if (amdgpu_sriov_vf(ring->adev) || !ring->funcs->soft_recovery || !fence)
return false;
while (cursor.remaining) {
amdgpu_res_next(&cursor, cursor.size);
+ if (!cursor.remaining)
+ break;
+
/* ttm_resource_ioremap only supports contiguous memory */
if (end != cursor.start)
return false;
FW_VERSION_ATTR(rlc_srls_fw_version, 0444, gfx.rlc_srls_fw_version);
FW_VERSION_ATTR(mec_fw_version, 0444, gfx.mec_fw_version);
FW_VERSION_ATTR(mec2_fw_version, 0444, gfx.mec2_fw_version);
+FW_VERSION_ATTR(imu_fw_version, 0444, gfx.imu_fw_version);
FW_VERSION_ATTR(sos_fw_version, 0444, psp.sos.fw_version);
FW_VERSION_ATTR(asd_fw_version, 0444, psp.asd_context.bin_desc.fw_version);
FW_VERSION_ATTR(ta_ras_fw_version, 0444, psp.ras_context.context.bin_desc.fw_version);
&dev_attr_ta_ras_fw_version.attr, &dev_attr_ta_xgmi_fw_version.attr,
&dev_attr_smc_fw_version.attr, &dev_attr_sdma_fw_version.attr,
&dev_attr_sdma2_fw_version.attr, &dev_attr_vcn_fw_version.attr,
- &dev_attr_dmcu_fw_version.attr, NULL
+ &dev_attr_dmcu_fw_version.attr, &dev_attr_imu_fw_version.attr,
+ NULL
};
static const struct attribute_group fw_attr_group = {
POPULATE_UCODE_INFO(vf2pf_info, AMD_SRIOV_UCODE_ID_RLC_SRLS, adev->gfx.rlc_srls_fw_version);
POPULATE_UCODE_INFO(vf2pf_info, AMD_SRIOV_UCODE_ID_MEC, adev->gfx.mec_fw_version);
POPULATE_UCODE_INFO(vf2pf_info, AMD_SRIOV_UCODE_ID_MEC2, adev->gfx.mec2_fw_version);
+ POPULATE_UCODE_INFO(vf2pf_info, AMD_SRIOV_UCODE_ID_IMU, adev->gfx.imu_fw_version);
POPULATE_UCODE_INFO(vf2pf_info, AMD_SRIOV_UCODE_ID_SOS, adev->psp.sos.fw_version);
POPULATE_UCODE_INFO(vf2pf_info, AMD_SRIOV_UCODE_ID_ASD,
adev->psp.asd_context.bin_desc.fw_version);
adev->virt.caps |= AMDGPU_PASSTHROUGH_MODE;
}
+ if (amdgpu_sriov_vf(adev) && adev->asic_type == CHIP_SIENNA_CICHLID)
+ /* VF MMIO access (except mailbox range) from CPU
+ * will be blocked during sriov runtime
+ */
+ adev->virt.caps |= AMDGPU_VF_MMIO_ACCESS_PROTECT;
+
/* we have the ability to check now */
if (amdgpu_sriov_vf(adev)) {
switch (adev->asic_type) {
#define AMDGPU_SRIOV_CAPS_IS_VF (1 << 2) /* this GPU is a virtual function */
#define AMDGPU_PASSTHROUGH_MODE        (1 << 3) /* the whole GPU is passed through to the VM */
#define AMDGPU_SRIOV_CAPS_RUNTIME (1 << 4) /* is out of full access mode */
+#define AMDGPU_VF_MMIO_ACCESS_PROTECT (1 << 5) /* MMIO write access is not allowed in sriov runtime */
/* flags for indirect register access path supported by rlcg for sriov */
#define AMDGPU_RLCG_GC_WRITE_LEGACY (0x8 << 28)
#define amdgpu_passthrough(adev) \
((adev)->virt.caps & AMDGPU_PASSTHROUGH_MODE)
+#define amdgpu_sriov_vf_mmio_access_protection(adev) \
+((adev)->virt.caps & AMDGPU_VF_MMIO_ACCESS_PROTECT)
+
static inline bool is_virtual_machine(void)
{
#if defined(CONFIG_X86)
adev_to_drm(adev)->mode_config.fb_base = adev->gmc.aper_base;
+ adev_to_drm(adev)->mode_config.fb_modifiers_not_supported = true;
+
r = amdgpu_display_modeset_create_props(adev);
if (r)
return r;
*/
#ifdef CONFIG_X86_64
if (amdgpu_vm_update_mode == -1) {
- if (amdgpu_gmc_vram_full_visible(&adev->gmc))
+ /* For asic with VF MMIO access protection
+ * avoid using CPU for VM table updates
+ */
+ if (amdgpu_gmc_vram_full_visible(&adev->gmc) &&
+ !amdgpu_sriov_vf_mmio_access_protection(adev))
adev->vm_manager.vm_update_mode =
AMDGPU_VM_USE_CPU_FOR_COMPUTE;
else
DMA_RESV_USAGE_BOOKKEEP);
}
- if (fence && !p->immediate)
+ if (fence && !p->immediate) {
+ /*
+ * Most hw generations now have a separate queue for page table
+ * updates, but when the queue is shared with userspace we need
+ * the extra CPU round trip to correctly flush the TLB.
+ */
+ set_bit(DRM_SCHED_FENCE_DONT_PIPELINE, &f->flags);
swap(*fence, f);
+ }
dma_fence_put(f);
return 0;
AMD_SRIOV_UCODE_ID_RLC_SRLS,
AMD_SRIOV_UCODE_ID_MEC,
AMD_SRIOV_UCODE_ID_MEC2,
+ AMD_SRIOV_UCODE_ID_IMU,
AMD_SRIOV_UCODE_ID_SOS,
AMD_SRIOV_UCODE_ID_ASD,
AMD_SRIOV_UCODE_ID_TA_RAS,
WREG32_SOC15(GC, 0, regSH_MEM_BASES, sh_mem_bases);
/* Enable trap for each kfd vmid. */
- data = RREG32(SOC15_REG_OFFSET(GC, 0, regSPI_GDBG_PER_VMID_CNTL));
+ data = RREG32_SOC15(GC, 0, regSPI_GDBG_PER_VMID_CNTL);
data = REG_SET_FIELD(data, SPI_GDBG_PER_VMID_CNTL, TRAP_EN, 1);
}
soc21_grbm_select(adev, 0, 0, 0, 0);
switch (adev->ip_versions[GC_HWIP][0]) {
case IP_VERSION(11, 0, 0):
case IP_VERSION(11, 0, 2):
+ case IP_VERSION(11, 0, 3):
amdgpu_gfx_off_ctrl(adev, enable);
break;
case IP_VERSION(11, 0, 1):
case IP_VERSION(11, 0, 0):
case IP_VERSION(11, 0, 1):
case IP_VERSION(11, 0, 2):
+ case IP_VERSION(11, 0, 3):
gfx_v11_0_update_gfx_clock_gating(adev,
state == AMD_CG_STATE_GATE);
break;
/* Use register 17 for GART */
const unsigned eng = 17;
unsigned int i;
+ unsigned char hub_ip = 0;
+
+ hub_ip = (vmhub == AMDGPU_GFXHUB_0) ?
+ GC_HWIP : MMHUB_HWIP;
spin_lock(&adev->gmc.invalidate_lock);
/*
if (use_semaphore) {
for (i = 0; i < adev->usec_timeout; i++) {
		/* a read return value of 1 means the semaphore was acquired */
- tmp = RREG32_NO_KIQ(hub->vm_inv_eng0_sem +
- hub->eng_distance * eng);
+ tmp = RREG32_RLC_NO_KIQ(hub->vm_inv_eng0_sem +
+ hub->eng_distance * eng, hub_ip);
if (tmp & 0x1)
break;
udelay(1);
DRM_ERROR("Timeout waiting for sem acquire in VM flush!\n");
}
- WREG32_NO_KIQ(hub->vm_inv_eng0_req + hub->eng_distance * eng, inv_req);
+ WREG32_RLC_NO_KIQ(hub->vm_inv_eng0_req + hub->eng_distance * eng, inv_req, hub_ip);
/* Wait for ACK with a delay.*/
for (i = 0; i < adev->usec_timeout; i++) {
- tmp = RREG32_NO_KIQ(hub->vm_inv_eng0_ack +
- hub->eng_distance * eng);
+ tmp = RREG32_RLC_NO_KIQ(hub->vm_inv_eng0_ack +
+ hub->eng_distance * eng, hub_ip);
tmp &= 1 << vmid;
if (tmp)
break;
* add semaphore release after invalidation,
* write with 0 means semaphore release
*/
- WREG32_NO_KIQ(hub->vm_inv_eng0_sem +
- hub->eng_distance * eng, 0);
+ WREG32_RLC_NO_KIQ(hub->vm_inv_eng0_sem +
+ hub->eng_distance * eng, 0, hub_ip);
/* Issue additional private vm invalidation to MMHUB */
if ((vmhub != AMDGPU_GFXHUB_0) &&
struct amdgpu_device *adev = mes->adev;
struct amdgpu_ring *ring = &mes->ring;
unsigned long flags;
+ signed long timeout = adev->usec_timeout;
+ if (amdgpu_emu_mode) {
+ timeout *= 100;
+ } else if (amdgpu_sriov_vf(adev)) {
+		/* Worst case in sriov: all other 15 VFs time out, and each VF needs about 600ms */
+ timeout = 15 * 600 * 1000;
+ }
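For reference, that SR-IOV budget works out to 15 x 600 ms = 9 s, written as 15 * 600 * 1000 because the timeout is counted in microseconds (the units of adev->usec_timeout).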
BUG_ON(size % 4 != 0);
spin_lock_irqsave(&mes->ring_lock, flags);
DRM_DEBUG("MES msg=%d was emitted\n", x_pkt->header.opcode);
r = amdgpu_fence_wait_polling(ring, ring->fence_drv.sync_seq,
- adev->usec_timeout * (amdgpu_emu_mode ? 100 : 1));
+ timeout);
if (r < 1) {
DRM_ERROR("MES failed to response msg=%d\n",
x_pkt->header.opcode);
return 0;
}
+static void mes_v11_0_kiq_dequeue_sched(struct amdgpu_device *adev)
+{
+ uint32_t data;
+ int i;
+
+ mutex_lock(&adev->srbm_mutex);
+ soc21_grbm_select(adev, 3, AMDGPU_MES_SCHED_PIPE, 0, 0);
+
+ /* disable the queue if it's active */
+ if (RREG32_SOC15(GC, 0, regCP_HQD_ACTIVE) & 1) {
+ WREG32_SOC15(GC, 0, regCP_HQD_DEQUEUE_REQUEST, 1);
+ for (i = 0; i < adev->usec_timeout; i++) {
+ if (!(RREG32_SOC15(GC, 0, regCP_HQD_ACTIVE) & 1))
+ break;
+ udelay(1);
+ }
+ }
+ data = RREG32_SOC15(GC, 0, regCP_HQD_PQ_DOORBELL_CONTROL);
+ data = REG_SET_FIELD(data, CP_HQD_PQ_DOORBELL_CONTROL,
+ DOORBELL_EN, 0);
+ data = REG_SET_FIELD(data, CP_HQD_PQ_DOORBELL_CONTROL,
+ DOORBELL_HIT, 1);
+ WREG32_SOC15(GC, 0, regCP_HQD_PQ_DOORBELL_CONTROL, data);
+
+ WREG32_SOC15(GC, 0, regCP_HQD_PQ_DOORBELL_CONTROL, 0);
+
+ WREG32_SOC15(GC, 0, regCP_HQD_PQ_WPTR_LO, 0);
+ WREG32_SOC15(GC, 0, regCP_HQD_PQ_WPTR_HI, 0);
+ WREG32_SOC15(GC, 0, regCP_HQD_PQ_RPTR, 0);
+
+ soc21_grbm_select(adev, 0, 0, 0, 0);
+ mutex_unlock(&adev->srbm_mutex);
+
+ adev->mes.ring.sched.ready = false;
+}
+
static void mes_v11_0_kiq_setting(struct amdgpu_ring *ring)
{
uint32_t tmp;
static int mes_v11_0_kiq_hw_fini(struct amdgpu_device *adev)
{
+ if (adev->mes.ring.sched.ready)
+ mes_v11_0_kiq_dequeue_sched(adev);
+
mes_v11_0_enable(adev, false);
return 0;
}
static int mes_v11_0_hw_fini(void *handle)
{
- struct amdgpu_device *adev = (struct amdgpu_device *)handle;
-
- adev->mes.ring.sched.ready = false;
return 0;
}
{
struct amdgpu_device *adev = (struct amdgpu_device *)handle;
- if (!amdgpu_in_reset(adev))
+ if (!amdgpu_in_reset(adev) &&
+ (adev->ip_versions[GC_HWIP][0] != IP_VERSION(11, 0, 3)))
amdgpu_mes_self_test(adev);
return 0;
#include "gc/gc_10_1_0_offset.h"
#include "soc15_common.h"
-#define mmMM_ATC_L2_MISC_CG_Sienna_Cichlid 0x064d
-#define mmMM_ATC_L2_MISC_CG_Sienna_Cichlid_BASE_IDX 0
#define mmDAGB0_CNTL_MISC2_Sienna_Cichlid 0x0070
#define mmDAGB0_CNTL_MISC2_Sienna_Cichlid_BASE_IDX 0
case IP_VERSION(2, 1, 0):
case IP_VERSION(2, 1, 1):
case IP_VERSION(2, 1, 2):
- def = data = RREG32_SOC15(MMHUB, 0, mmMM_ATC_L2_MISC_CG_Sienna_Cichlid);
def1 = data1 = RREG32_SOC15(MMHUB, 0, mmDAGB0_CNTL_MISC2_Sienna_Cichlid);
break;
default:
case IP_VERSION(2, 1, 0):
case IP_VERSION(2, 1, 1):
case IP_VERSION(2, 1, 2):
- if (def != data)
- WREG32_SOC15(MMHUB, 0, mmMM_ATC_L2_MISC_CG_Sienna_Cichlid, data);
if (def1 != data1)
WREG32_SOC15(MMHUB, 0, mmDAGB0_CNTL_MISC2_Sienna_Cichlid, data1);
break;
case IP_VERSION(2, 1, 0):
case IP_VERSION(2, 1, 1):
case IP_VERSION(2, 1, 2):
- def = data = RREG32_SOC15(MMHUB, 0, mmMM_ATC_L2_MISC_CG_Sienna_Cichlid);
- break;
+ /* There is no ATCL2 in MMHUB for 2.1.x */
+ return;
default:
def = data = RREG32_SOC15(MMHUB, 0, mmMM_ATC_L2_MISC_CG);
break;
else
data &= ~MM_ATC_L2_MISC_CG__MEM_LS_ENABLE_MASK;
- if (def != data) {
- switch (adev->ip_versions[MMHUB_HWIP][0]) {
- case IP_VERSION(2, 1, 0):
- case IP_VERSION(2, 1, 1):
- case IP_VERSION(2, 1, 2):
- WREG32_SOC15(MMHUB, 0, mmMM_ATC_L2_MISC_CG_Sienna_Cichlid, data);
- break;
- default:
- WREG32_SOC15(MMHUB, 0, mmMM_ATC_L2_MISC_CG, data);
- break;
- }
- }
+ if (def != data)
+ WREG32_SOC15(MMHUB, 0, mmMM_ATC_L2_MISC_CG, data);
}
static int mmhub_v2_0_set_clockgating(struct amdgpu_device *adev,
case IP_VERSION(2, 1, 0):
case IP_VERSION(2, 1, 1):
case IP_VERSION(2, 1, 2):
- data = RREG32_SOC15(MMHUB, 0, mmMM_ATC_L2_MISC_CG_Sienna_Cichlid);
+ /* There is no ATCL2 in MMHUB for 2.1.x. Keep the status
+ * based on DAGB
+ */
+ data = MM_ATC_L2_MISC_CG__ENABLE_MASK;
data1 = RREG32_SOC15(MMHUB, 0, mmDAGB0_CNTL_MISC2_Sienna_Cichlid);
break;
default:
reset_context.method = AMD_RESET_METHOD_NONE;
reset_context.reset_req_dev = adev;
clear_bit(AMDGPU_NEED_FULL_RESET, &reset_context.flags);
- clear_bit(AMDGPU_SKIP_MODE2_RESET, &reset_context.flags);
amdgpu_device_gpu_recover(adev, NULL, &reset_context);
}
reset_context.method = AMD_RESET_METHOD_NONE;
reset_context.reset_req_dev = adev;
clear_bit(AMDGPU_NEED_FULL_RESET, &reset_context.flags);
- clear_bit(AMDGPU_SKIP_MODE2_RESET, &reset_context.flags);
amdgpu_device_gpu_recover(adev, NULL, &reset_context);
}
reset_context.method = AMD_RESET_METHOD_NONE;
reset_context.reset_req_dev = adev;
clear_bit(AMDGPU_NEED_FULL_RESET, &reset_context.flags);
- clear_bit(AMDGPU_SKIP_MODE2_RESET, &reset_context.flags);
amdgpu_device_gpu_recover(adev, NULL, &reset_context);
}
WREG32_SDMA(i, mmSDMA0_CNTL, temp);
if (!amdgpu_sriov_vf(adev)) {
- ring = &adev->sdma.instance[i].ring;
- adev->nbio.funcs->sdma_doorbell_range(adev, i,
- ring->use_doorbell, ring->doorbell_index,
- adev->doorbell_index.sdma_doorbell_range);
-
/* unhalt engine */
temp = RREG32_SDMA(i, mmSDMA0_F32_CNTL);
temp = REG_SET_FIELD(temp, SDMA0_F32_CNTL, HALT, 0);
#include "amdgpu_psp.h"
#include "amdgpu_xgmi.h"
+static bool sienna_cichlid_is_mode2_default(struct amdgpu_reset_control *reset_ctl)
+{
+#if 0
+ struct amdgpu_device *adev = (struct amdgpu_device *)reset_ctl->handle;
+
+ if (adev->ip_versions[MP1_HWIP][0] == IP_VERSION(11, 0, 7) &&
+ adev->pm.fw_version >= 0x3a5500 && !amdgpu_sriov_vf(adev))
+ return true;
+#endif
+ return false;
+}
+
static struct amdgpu_reset_handler *
sienna_cichlid_get_reset_handler(struct amdgpu_reset_control *reset_ctl,
struct amdgpu_reset_context *reset_context)
{
struct amdgpu_reset_handler *handler;
- struct amdgpu_device *adev = (struct amdgpu_device *)reset_ctl->handle;
if (reset_context->method != AMD_RESET_METHOD_NONE) {
list_for_each_entry(handler, &reset_ctl->reset_handlers,
if (handler->reset_method == reset_context->method)
return handler;
}
- } else {
- list_for_each_entry(handler, &reset_ctl->reset_handlers,
+ }
+
+ if (sienna_cichlid_is_mode2_default(reset_ctl)) {
+		list_for_each_entry(handler, &reset_ctl->reset_handlers,
handler_list) {
- if (handler->reset_method == AMD_RESET_METHOD_MODE2 &&
- adev->pm.fw_version >= 0x3a5500 &&
- !amdgpu_sriov_vf(adev)) {
- reset_context->method = AMD_RESET_METHOD_MODE2;
+ if (handler->reset_method == AMD_RESET_METHOD_MODE2)
return handler;
- }
}
}
return 0;
}
+static void soc15_sdma_doorbell_range_init(struct amdgpu_device *adev)
+{
+ int i;
+
+	/* sdma doorbell range is programmed by the hypervisor */
+ if (!amdgpu_sriov_vf(adev)) {
+ for (i = 0; i < adev->sdma.num_instances; i++) {
+ adev->nbio.funcs->sdma_doorbell_range(adev, i,
+ true, adev->doorbell_index.sdma_engine[i] << 1,
+ adev->doorbell_index.sdma_doorbell_range);
+ }
+ }
+}
+
static int soc15_common_hw_init(void *handle)
{
struct amdgpu_device *adev = (struct amdgpu_device *)handle;
/* enable the doorbell aperture */
soc15_enable_doorbell_aperture(adev, true);
+	/* HW doorbell routing policy: doorbell writes not
+	 * in the SDMA/IH/MM/ACV ranges will be routed to CP. So
+ * we need to init SDMA doorbell range prior
+ * to CP ip block init and ring test. IH already
+ * happens before CP.
+ */
+ soc15_sdma_doorbell_range_init(adev);
return 0;
}
case IP_VERSION(11, 0, 0):
return amdgpu_ras_is_supported(adev, AMDGPU_RAS_BLOCK__UMC);
case IP_VERSION(11, 0, 2):
+ case IP_VERSION(11, 0, 3):
return false;
default:
return true;
break;
case IP_VERSION(11, 0, 3):
adev->cg_flags = AMD_CG_SUPPORT_VCN_MGCG |
- AMD_CG_SUPPORT_JPEG_MGCG;
+ AMD_CG_SUPPORT_JPEG_MGCG |
+ AMD_CG_SUPPORT_GFX_CGCG |
+ AMD_CG_SUPPORT_GFX_CGLS |
+ AMD_CG_SUPPORT_REPEATER_FGCG |
+ AMD_CG_SUPPORT_GFX_MGCG;
adev->pg_flags = AMD_PG_SUPPORT_VCN |
AMD_PG_SUPPORT_VCN_DPG |
AMD_PG_SUPPORT_JPEG;
0xbf9f0000, 0x00000000,
};
static const uint32_t cwsr_trap_gfx11_hex[] = {
- 0xbfa00001, 0xbfa0021e,
+ 0xbfa00001, 0xbfa00221,
0xb0804006, 0xb8f8f802,
0x9178ff78, 0x00020006,
- 0xb8fbf803, 0xbf0d9f6d,
- 0xbfa20006, 0x8b6eff78,
- 0x00002000, 0xbfa10009,
- 0x8b6eff6d, 0x00ff0000,
- 0xbfa2001e, 0x8b6eff7b,
- 0x00000400, 0xbfa20041,
- 0xbf830010, 0xb8fbf803,
- 0xbfa0fffa, 0x8b6eff7b,
- 0x00000900, 0xbfa20015,
- 0x8b6eff7b, 0x000071ff,
- 0xbfa10008, 0x8b6fff7b,
- 0x00007080, 0xbfa10001,
- 0xbeee1287, 0xb8eff801,
- 0x846e8c6e, 0x8b6e6f6e,
- 0xbfa2000a, 0x8b6eff6d,
- 0x00ff0000, 0xbfa20007,
- 0xb8eef801, 0x8b6eff6e,
- 0x00000800, 0xbfa20003,
+ 0xb8fbf803, 0xbf0d9e6d,
+ 0xbfa10001, 0xbfbd0000,
+ 0xbf0d9f6d, 0xbfa20006,
+ 0x8b6eff78, 0x00002000,
+ 0xbfa10009, 0x8b6eff6d,
+ 0x00ff0000, 0xbfa2001e,
0x8b6eff7b, 0x00000400,
- 0xbfa20026, 0xbefa4d82,
- 0xbf89fc07, 0x84fa887a,
- 0xf4005bbd, 0xf8000010,
- 0xbf89fc07, 0x846e976e,
- 0x9177ff77, 0x00800000,
- 0x8c776e77, 0xf4045bbd,
- 0xf8000000, 0xbf89fc07,
- 0xf4045ebd, 0xf8000008,
- 0xbf89fc07, 0x8bee6e6e,
- 0xbfa10001, 0xbe80486e,
- 0x8b6eff6d, 0x01ff0000,
- 0xbfa20005, 0x8c78ff78,
- 0x00002000, 0x80ec886c,
- 0x82ed806d, 0xbfa00005,
- 0x8b6eff6d, 0x01000000,
- 0xbfa20002, 0x806c846c,
- 0x826d806d, 0x8b6dff6d,
- 0x0000ffff, 0x8bfe7e7e,
- 0x8bea6a6a, 0xb978f802,
- 0xbe804a6c, 0x8b6dff6d,
- 0x0000ffff, 0xbefa0080,
- 0xb97a0283, 0xbeee007e,
- 0xbeef007f, 0xbefe0180,
- 0xbefe4d84, 0xbf89fc07,
- 0x8b7aff7f, 0x04000000,
- 0x847a857a, 0x8c6d7a6d,
- 0xbefa007e, 0x8b7bff7f,
- 0x0000ffff, 0xbefe00c1,
- 0xbeff00c1, 0xdca6c000,
- 0x007a0000, 0x7e000280,
- 0xbefe007a, 0xbeff007b,
- 0xb8fb02dc, 0x847b997b,
- 0xb8fa3b05, 0x807a817a,
- 0xbf0d997b, 0xbfa20002,
- 0x847a897a, 0xbfa00001,
- 0x847a8a7a, 0xb8fb1e06,
- 0x847b8a7b, 0x807a7b7a,
+ 0xbfa20041, 0xbf830010,
+ 0xb8fbf803, 0xbfa0fffa,
+ 0x8b6eff7b, 0x00000900,
+ 0xbfa20015, 0x8b6eff7b,
+ 0x000071ff, 0xbfa10008,
+ 0x8b6fff7b, 0x00007080,
+ 0xbfa10001, 0xbeee1287,
+ 0xb8eff801, 0x846e8c6e,
+ 0x8b6e6f6e, 0xbfa2000a,
+ 0x8b6eff6d, 0x00ff0000,
+ 0xbfa20007, 0xb8eef801,
+ 0x8b6eff6e, 0x00000800,
+ 0xbfa20003, 0x8b6eff7b,
+ 0x00000400, 0xbfa20026,
+ 0xbefa4d82, 0xbf89fc07,
+ 0x84fa887a, 0xf4005bbd,
+ 0xf8000010, 0xbf89fc07,
+ 0x846e976e, 0x9177ff77,
+ 0x00800000, 0x8c776e77,
+ 0xf4045bbd, 0xf8000000,
+ 0xbf89fc07, 0xf4045ebd,
+ 0xf8000008, 0xbf89fc07,
+ 0x8bee6e6e, 0xbfa10001,
+ 0xbe80486e, 0x8b6eff6d,
+ 0x01ff0000, 0xbfa20005,
+ 0x8c78ff78, 0x00002000,
+ 0x80ec886c, 0x82ed806d,
+ 0xbfa00005, 0x8b6eff6d,
+ 0x01000000, 0xbfa20002,
+ 0x806c846c, 0x826d806d,
+ 0x8b6dff6d, 0x0000ffff,
+ 0x8bfe7e7e, 0x8bea6a6a,
+ 0xb978f802, 0xbe804a6c,
+ 0x8b6dff6d, 0x0000ffff,
+ 0xbefa0080, 0xb97a0283,
+ 0xbeee007e, 0xbeef007f,
+ 0xbefe0180, 0xbefe4d84,
+ 0xbf89fc07, 0x8b7aff7f,
+ 0x04000000, 0x847a857a,
+ 0x8c6d7a6d, 0xbefa007e,
0x8b7bff7f, 0x0000ffff,
- 0x807aff7a, 0x00000200,
- 0x807a7e7a, 0x827b807b,
- 0xd7610000, 0x00010870,
- 0xd7610000, 0x00010a71,
- 0xd7610000, 0x00010c72,
- 0xd7610000, 0x00010e73,
- 0xd7610000, 0x00011074,
- 0xd7610000, 0x00011275,
- 0xd7610000, 0x00011476,
- 0xd7610000, 0x00011677,
- 0xd7610000, 0x00011a79,
- 0xd7610000, 0x00011c7e,
- 0xd7610000, 0x00011e7f,
- 0xbefe00ff, 0x00003fff,
- 0xbeff0080, 0xdca6c040,
- 0x007a0000, 0xd760007a,
- 0x00011d00, 0xd760007b,
- 0x00011f00, 0xbefe007a,
- 0xbeff007b, 0xbef4007e,
- 0x8b75ff7f, 0x0000ffff,
- 0x8c75ff75, 0x00040000,
- 0xbef60080, 0xbef700ff,
- 0x10807fac, 0xbef1007d,
- 0xbef00080, 0xb8f302dc,
- 0x84739973, 0xbefe00c1,
- 0x857d9973, 0x8b7d817d,
- 0xbf06817d, 0xbfa20002,
- 0xbeff0080, 0xbfa00002,
- 0xbeff00c1, 0xbfa00009,
+ 0xbefe00c1, 0xbeff00c1,
+ 0xdca6c000, 0x007a0000,
+ 0x7e000280, 0xbefe007a,
+ 0xbeff007b, 0xb8fb02dc,
+ 0x847b997b, 0xb8fa3b05,
+ 0x807a817a, 0xbf0d997b,
+ 0xbfa20002, 0x847a897a,
+ 0xbfa00001, 0x847a8a7a,
+ 0xb8fb1e06, 0x847b8a7b,
+ 0x807a7b7a, 0x8b7bff7f,
+ 0x0000ffff, 0x807aff7a,
+ 0x00000200, 0x807a7e7a,
+ 0x827b807b, 0xd7610000,
+ 0x00010870, 0xd7610000,
+ 0x00010a71, 0xd7610000,
+ 0x00010c72, 0xd7610000,
+ 0x00010e73, 0xd7610000,
+ 0x00011074, 0xd7610000,
+ 0x00011275, 0xd7610000,
+ 0x00011476, 0xd7610000,
+ 0x00011677, 0xd7610000,
+ 0x00011a79, 0xd7610000,
+ 0x00011c7e, 0xd7610000,
+ 0x00011e7f, 0xbefe00ff,
+ 0x00003fff, 0xbeff0080,
+ 0xdca6c040, 0x007a0000,
+ 0xd760007a, 0x00011d00,
+ 0xd760007b, 0x00011f00,
+ 0xbefe007a, 0xbeff007b,
+ 0xbef4007e, 0x8b75ff7f,
+ 0x0000ffff, 0x8c75ff75,
+ 0x00040000, 0xbef60080,
+ 0xbef700ff, 0x10807fac,
+ 0xbef1007d, 0xbef00080,
+ 0xb8f302dc, 0x84739973,
+ 0xbefe00c1, 0x857d9973,
+ 0x8b7d817d, 0xbf06817d,
+ 0xbfa20002, 0xbeff0080,
+ 0xbfa00002, 0xbeff00c1,
+ 0xbfa00009, 0xbef600ff,
+ 0x01000000, 0xe0685080,
+ 0x701d0100, 0xe0685100,
+ 0x701d0200, 0xe0685180,
+ 0x701d0300, 0xbfa00008,
0xbef600ff, 0x01000000,
- 0xe0685080, 0x701d0100,
- 0xe0685100, 0x701d0200,
- 0xe0685180, 0x701d0300,
- 0xbfa00008, 0xbef600ff,
- 0x01000000, 0xe0685100,
- 0x701d0100, 0xe0685200,
- 0x701d0200, 0xe0685300,
- 0x701d0300, 0xb8f03b05,
- 0x80708170, 0xbf0d9973,
- 0xbfa20002, 0x84708970,
- 0xbfa00001, 0x84708a70,
- 0xb8fa1e06, 0x847a8a7a,
- 0x80707a70, 0x8070ff70,
- 0x00000200, 0xbef600ff,
- 0x01000000, 0x7e000280,
- 0x7e020280, 0x7e040280,
- 0xbefd0080, 0xd7610002,
- 0x0000fa71, 0x807d817d,
- 0xd7610002, 0x0000fa6c,
- 0x807d817d, 0x917aff6d,
- 0x80000000, 0xd7610002,
- 0x0000fa7a, 0x807d817d,
- 0xd7610002, 0x0000fa6e,
- 0x807d817d, 0xd7610002,
- 0x0000fa6f, 0x807d817d,
- 0xd7610002, 0x0000fa78,
- 0x807d817d, 0xb8faf803,
- 0xd7610002, 0x0000fa7a,
- 0x807d817d, 0xd7610002,
- 0x0000fa7b, 0x807d817d,
- 0xb8f1f801, 0xd7610002,
- 0x0000fa71, 0x807d817d,
- 0xb8f1f814, 0xd7610002,
- 0x0000fa71, 0x807d817d,
- 0xb8f1f815, 0xd7610002,
- 0x0000fa71, 0x807d817d,
- 0xbefe00ff, 0x0000ffff,
- 0xbeff0080, 0xe0685000,
- 0x701d0200, 0xbefe00c1,
+ 0xe0685100, 0x701d0100,
+ 0xe0685200, 0x701d0200,
+ 0xe0685300, 0x701d0300,
0xb8f03b05, 0x80708170,
0xbf0d9973, 0xbfa20002,
0x84708970, 0xbfa00001,
0x84708a70, 0xb8fa1e06,
0x847a8a7a, 0x80707a70,
+ 0x8070ff70, 0x00000200,
0xbef600ff, 0x01000000,
- 0xbef90080, 0xbefd0080,
- 0xbf800000, 0xbe804100,
- 0xbe824102, 0xbe844104,
- 0xbe864106, 0xbe884108,
- 0xbe8a410a, 0xbe8c410c,
- 0xbe8e410e, 0xd7610002,
- 0x0000f200, 0x80798179,
- 0xd7610002, 0x0000f201,
+ 0x7e000280, 0x7e020280,
+ 0x7e040280, 0xbefd0080,
+ 0xd7610002, 0x0000fa71,
+ 0x807d817d, 0xd7610002,
+ 0x0000fa6c, 0x807d817d,
+ 0x917aff6d, 0x80000000,
+ 0xd7610002, 0x0000fa7a,
+ 0x807d817d, 0xd7610002,
+ 0x0000fa6e, 0x807d817d,
+ 0xd7610002, 0x0000fa6f,
+ 0x807d817d, 0xd7610002,
+ 0x0000fa78, 0x807d817d,
+ 0xb8faf803, 0xd7610002,
+ 0x0000fa7a, 0x807d817d,
+ 0xd7610002, 0x0000fa7b,
+ 0x807d817d, 0xb8f1f801,
+ 0xd7610002, 0x0000fa71,
+ 0x807d817d, 0xb8f1f814,
+ 0xd7610002, 0x0000fa71,
+ 0x807d817d, 0xb8f1f815,
+ 0xd7610002, 0x0000fa71,
+ 0x807d817d, 0xbefe00ff,
+ 0x0000ffff, 0xbeff0080,
+ 0xe0685000, 0x701d0200,
+ 0xbefe00c1, 0xb8f03b05,
+ 0x80708170, 0xbf0d9973,
+ 0xbfa20002, 0x84708970,
+ 0xbfa00001, 0x84708a70,
+ 0xb8fa1e06, 0x847a8a7a,
+ 0x80707a70, 0xbef600ff,
+ 0x01000000, 0xbef90080,
+ 0xbefd0080, 0xbf800000,
+ 0xbe804100, 0xbe824102,
+ 0xbe844104, 0xbe864106,
+ 0xbe884108, 0xbe8a410a,
+ 0xbe8c410c, 0xbe8e410e,
+ 0xd7610002, 0x0000f200,
0x80798179, 0xd7610002,
- 0x0000f202, 0x80798179,
- 0xd7610002, 0x0000f203,
+ 0x0000f201, 0x80798179,
+ 0xd7610002, 0x0000f202,
0x80798179, 0xd7610002,
- 0x0000f204, 0x80798179,
- 0xd7610002, 0x0000f205,
+ 0x0000f203, 0x80798179,
+ 0xd7610002, 0x0000f204,
0x80798179, 0xd7610002,
- 0x0000f206, 0x80798179,
- 0xd7610002, 0x0000f207,
+ 0x0000f205, 0x80798179,
+ 0xd7610002, 0x0000f206,
0x80798179, 0xd7610002,
- 0x0000f208, 0x80798179,
- 0xd7610002, 0x0000f209,
+ 0x0000f207, 0x80798179,
+ 0xd7610002, 0x0000f208,
0x80798179, 0xd7610002,
- 0x0000f20a, 0x80798179,
- 0xd7610002, 0x0000f20b,
+ 0x0000f209, 0x80798179,
+ 0xd7610002, 0x0000f20a,
0x80798179, 0xd7610002,
- 0x0000f20c, 0x80798179,
- 0xd7610002, 0x0000f20d,
+ 0x0000f20b, 0x80798179,
+ 0xd7610002, 0x0000f20c,
0x80798179, 0xd7610002,
- 0x0000f20e, 0x80798179,
- 0xd7610002, 0x0000f20f,
- 0x80798179, 0xbf06a079,
- 0xbfa10006, 0xe0685000,
- 0x701d0200, 0x8070ff70,
- 0x00000080, 0xbef90080,
- 0x7e040280, 0x807d907d,
- 0xbf0aff7d, 0x00000060,
- 0xbfa2ffbc, 0xbe804100,
- 0xbe824102, 0xbe844104,
- 0xbe864106, 0xbe884108,
- 0xbe8a410a, 0xd7610002,
- 0x0000f200, 0x80798179,
- 0xd7610002, 0x0000f201,
+ 0x0000f20d, 0x80798179,
+ 0xd7610002, 0x0000f20e,
0x80798179, 0xd7610002,
- 0x0000f202, 0x80798179,
- 0xd7610002, 0x0000f203,
+ 0x0000f20f, 0x80798179,
+ 0xbf06a079, 0xbfa10006,
+ 0xe0685000, 0x701d0200,
+ 0x8070ff70, 0x00000080,
+ 0xbef90080, 0x7e040280,
+ 0x807d907d, 0xbf0aff7d,
+ 0x00000060, 0xbfa2ffbc,
+ 0xbe804100, 0xbe824102,
+ 0xbe844104, 0xbe864106,
+ 0xbe884108, 0xbe8a410a,
+ 0xd7610002, 0x0000f200,
0x80798179, 0xd7610002,
- 0x0000f204, 0x80798179,
- 0xd7610002, 0x0000f205,
+ 0x0000f201, 0x80798179,
+ 0xd7610002, 0x0000f202,
0x80798179, 0xd7610002,
- 0x0000f206, 0x80798179,
- 0xd7610002, 0x0000f207,
+ 0x0000f203, 0x80798179,
+ 0xd7610002, 0x0000f204,
0x80798179, 0xd7610002,
- 0x0000f208, 0x80798179,
- 0xd7610002, 0x0000f209,
+ 0x0000f205, 0x80798179,
+ 0xd7610002, 0x0000f206,
0x80798179, 0xd7610002,
- 0x0000f20a, 0x80798179,
- 0xd7610002, 0x0000f20b,
- 0x80798179, 0xe0685000,
- 0x701d0200, 0xbefe00c1,
- 0x857d9973, 0x8b7d817d,
- 0xbf06817d, 0xbfa20002,
- 0xbeff0080, 0xbfa00001,
- 0xbeff00c1, 0xb8fb4306,
- 0x8b7bc17b, 0xbfa10044,
- 0xbfbd0000, 0x8b7aff6d,
- 0x80000000, 0xbfa10040,
- 0x847b867b, 0x847b827b,
- 0xbef6007b, 0xb8f03b05,
- 0x80708170, 0xbf0d9973,
- 0xbfa20002, 0x84708970,
- 0xbfa00001, 0x84708a70,
- 0xb8fa1e06, 0x847a8a7a,
- 0x80707a70, 0x8070ff70,
- 0x00000200, 0x8070ff70,
- 0x00000080, 0xbef600ff,
- 0x01000000, 0xd71f0000,
- 0x000100c1, 0xd7200000,
- 0x000200c1, 0x16000084,
- 0x857d9973, 0x8b7d817d,
- 0xbf06817d, 0xbefd0080,
- 0xbfa20012, 0xbe8300ff,
- 0x00000080, 0xbf800000,
- 0xbf800000, 0xbf800000,
- 0xd8d80000, 0x01000000,
- 0xbf890000, 0xe0685000,
- 0x701d0100, 0x807d037d,
- 0x80700370, 0xd5250000,
- 0x0001ff00, 0x00000080,
- 0xbf0a7b7d, 0xbfa2fff4,
- 0xbfa00011, 0xbe8300ff,
- 0x00000100, 0xbf800000,
- 0xbf800000, 0xbf800000,
- 0xd8d80000, 0x01000000,
- 0xbf890000, 0xe0685000,
- 0x701d0100, 0x807d037d,
- 0x80700370, 0xd5250000,
- 0x0001ff00, 0x00000100,
- 0xbf0a7b7d, 0xbfa2fff4,
+ 0x0000f207, 0x80798179,
+ 0xd7610002, 0x0000f208,
+ 0x80798179, 0xd7610002,
+ 0x0000f209, 0x80798179,
+ 0xd7610002, 0x0000f20a,
+ 0x80798179, 0xd7610002,
+ 0x0000f20b, 0x80798179,
+ 0xe0685000, 0x701d0200,
0xbefe00c1, 0x857d9973,
0x8b7d817d, 0xbf06817d,
- 0xbfa20004, 0xbef000ff,
- 0x00000200, 0xbeff0080,
- 0xbfa00003, 0xbef000ff,
- 0x00000400, 0xbeff00c1,
- 0xb8fb3b05, 0x807b817b,
- 0x847b827b, 0x857d9973,
+ 0xbfa20002, 0xbeff0080,
+ 0xbfa00001, 0xbeff00c1,
+ 0xb8fb4306, 0x8b7bc17b,
+ 0xbfa10044, 0xbfbd0000,
+ 0x8b7aff6d, 0x80000000,
+ 0xbfa10040, 0x847b867b,
+ 0x847b827b, 0xbef6007b,
+ 0xb8f03b05, 0x80708170,
+ 0xbf0d9973, 0xbfa20002,
+ 0x84708970, 0xbfa00001,
+ 0x84708a70, 0xb8fa1e06,
+ 0x847a8a7a, 0x80707a70,
+ 0x8070ff70, 0x00000200,
+ 0x8070ff70, 0x00000080,
+ 0xbef600ff, 0x01000000,
+ 0xd71f0000, 0x000100c1,
+ 0xd7200000, 0x000200c1,
+ 0x16000084, 0x857d9973,
0x8b7d817d, 0xbf06817d,
- 0xbfa20017, 0xbef600ff,
- 0x01000000, 0xbefd0084,
- 0xbf0a7b7d, 0xbfa10037,
- 0x7e008700, 0x7e028701,
- 0x7e048702, 0x7e068703,
- 0xe0685000, 0x701d0000,
- 0xe0685080, 0x701d0100,
- 0xe0685100, 0x701d0200,
- 0xe0685180, 0x701d0300,
- 0x807d847d, 0x8070ff70,
- 0x00000200, 0xbf0a7b7d,
- 0xbfa2ffef, 0xbfa00025,
+ 0xbefd0080, 0xbfa20012,
+ 0xbe8300ff, 0x00000080,
+ 0xbf800000, 0xbf800000,
+ 0xbf800000, 0xd8d80000,
+ 0x01000000, 0xbf890000,
+ 0xe0685000, 0x701d0100,
+ 0x807d037d, 0x80700370,
+ 0xd5250000, 0x0001ff00,
+ 0x00000080, 0xbf0a7b7d,
+ 0xbfa2fff4, 0xbfa00011,
+ 0xbe8300ff, 0x00000100,
+ 0xbf800000, 0xbf800000,
+ 0xbf800000, 0xd8d80000,
+ 0x01000000, 0xbf890000,
+ 0xe0685000, 0x701d0100,
+ 0x807d037d, 0x80700370,
+ 0xd5250000, 0x0001ff00,
+ 0x00000100, 0xbf0a7b7d,
+ 0xbfa2fff4, 0xbefe00c1,
+ 0x857d9973, 0x8b7d817d,
+ 0xbf06817d, 0xbfa20004,
+ 0xbef000ff, 0x00000200,
+ 0xbeff0080, 0xbfa00003,
+ 0xbef000ff, 0x00000400,
+ 0xbeff00c1, 0xb8fb3b05,
+ 0x807b817b, 0x847b827b,
+ 0x857d9973, 0x8b7d817d,
+ 0xbf06817d, 0xbfa20017,
0xbef600ff, 0x01000000,
0xbefd0084, 0xbf0a7b7d,
- 0xbfa10011, 0x7e008700,
+ 0xbfa10037, 0x7e008700,
0x7e028701, 0x7e048702,
0x7e068703, 0xe0685000,
- 0x701d0000, 0xe0685100,
- 0x701d0100, 0xe0685200,
- 0x701d0200, 0xe0685300,
+ 0x701d0000, 0xe0685080,
+ 0x701d0100, 0xe0685100,
+ 0x701d0200, 0xe0685180,
0x701d0300, 0x807d847d,
- 0x8070ff70, 0x00000400,
+ 0x8070ff70, 0x00000200,
0xbf0a7b7d, 0xbfa2ffef,
- 0xb8fb1e06, 0x8b7bc17b,
- 0xbfa1000c, 0x847b837b,
- 0x807b7d7b, 0xbefe00c1,
- 0xbeff0080, 0x7e008700,
+ 0xbfa00025, 0xbef600ff,
+ 0x01000000, 0xbefd0084,
+ 0xbf0a7b7d, 0xbfa10011,
+ 0x7e008700, 0x7e028701,
+ 0x7e048702, 0x7e068703,
0xe0685000, 0x701d0000,
- 0x807d817d, 0x8070ff70,
- 0x00000080, 0xbf0a7b7d,
- 0xbfa2fff8, 0xbfa00146,
- 0xbef4007e, 0x8b75ff7f,
- 0x0000ffff, 0x8c75ff75,
- 0x00040000, 0xbef60080,
- 0xbef700ff, 0x10807fac,
- 0xb8f202dc, 0x84729972,
- 0x8b6eff7f, 0x04000000,
- 0xbfa1003a, 0xbefe00c1,
- 0x857d9972, 0x8b7d817d,
- 0xbf06817d, 0xbfa20002,
- 0xbeff0080, 0xbfa00001,
- 0xbeff00c1, 0xb8ef4306,
- 0x8b6fc16f, 0xbfa1002f,
- 0x846f866f, 0x846f826f,
- 0xbef6006f, 0xb8f83b05,
- 0x80788178, 0xbf0d9972,
- 0xbfa20002, 0x84788978,
- 0xbfa00001, 0x84788a78,
- 0xb8ee1e06, 0x846e8a6e,
- 0x80786e78, 0x8078ff78,
- 0x00000200, 0x8078ff78,
- 0x00000080, 0xbef600ff,
- 0x01000000, 0x857d9972,
- 0x8b7d817d, 0xbf06817d,
- 0xbefd0080, 0xbfa2000c,
- 0xe0500000, 0x781d0000,
- 0xbf8903f7, 0xdac00000,
- 0x00000000, 0x807dff7d,
- 0x00000080, 0x8078ff78,
- 0x00000080, 0xbf0a6f7d,
- 0xbfa2fff5, 0xbfa0000b,
- 0xe0500000, 0x781d0000,
- 0xbf8903f7, 0xdac00000,
- 0x00000000, 0x807dff7d,
- 0x00000100, 0x8078ff78,
- 0x00000100, 0xbf0a6f7d,
- 0xbfa2fff5, 0xbef80080,
+ 0xe0685100, 0x701d0100,
+ 0xe0685200, 0x701d0200,
+ 0xe0685300, 0x701d0300,
+ 0x807d847d, 0x8070ff70,
+ 0x00000400, 0xbf0a7b7d,
+ 0xbfa2ffef, 0xb8fb1e06,
+ 0x8b7bc17b, 0xbfa1000c,
+ 0x847b837b, 0x807b7d7b,
+ 0xbefe00c1, 0xbeff0080,
+ 0x7e008700, 0xe0685000,
+ 0x701d0000, 0x807d817d,
+ 0x8070ff70, 0x00000080,
+ 0xbf0a7b7d, 0xbfa2fff8,
+ 0xbfa00146, 0xbef4007e,
+ 0x8b75ff7f, 0x0000ffff,
+ 0x8c75ff75, 0x00040000,
+ 0xbef60080, 0xbef700ff,
+ 0x10807fac, 0xb8f202dc,
+ 0x84729972, 0x8b6eff7f,
+ 0x04000000, 0xbfa1003a,
0xbefe00c1, 0x857d9972,
0x8b7d817d, 0xbf06817d,
0xbfa20002, 0xbeff0080,
0xbfa00001, 0xbeff00c1,
- 0xb8ef3b05, 0x806f816f,
- 0x846f826f, 0x857d9972,
- 0x8b7d817d, 0xbf06817d,
- 0xbfa20024, 0xbef600ff,
- 0x01000000, 0xbeee0078,
+ 0xb8ef4306, 0x8b6fc16f,
+ 0xbfa1002f, 0x846f866f,
+ 0x846f826f, 0xbef6006f,
+ 0xb8f83b05, 0x80788178,
+ 0xbf0d9972, 0xbfa20002,
+ 0x84788978, 0xbfa00001,
+ 0x84788a78, 0xb8ee1e06,
+ 0x846e8a6e, 0x80786e78,
0x8078ff78, 0x00000200,
- 0xbefd0084, 0xbf0a6f7d,
- 0xbfa10050, 0xe0505000,
- 0x781d0000, 0xe0505080,
- 0x781d0100, 0xe0505100,
- 0x781d0200, 0xe0505180,
- 0x781d0300, 0xbf8903f7,
- 0x7e008500, 0x7e028501,
- 0x7e048502, 0x7e068503,
- 0x807d847d, 0x8078ff78,
- 0x00000200, 0xbf0a6f7d,
- 0xbfa2ffee, 0xe0505000,
- 0x6e1d0000, 0xe0505080,
- 0x6e1d0100, 0xe0505100,
- 0x6e1d0200, 0xe0505180,
- 0x6e1d0300, 0xbf8903f7,
- 0xbfa00034, 0xbef600ff,
- 0x01000000, 0xbeee0078,
- 0x8078ff78, 0x00000400,
- 0xbefd0084, 0xbf0a6f7d,
- 0xbfa10012, 0xe0505000,
- 0x781d0000, 0xe0505100,
- 0x781d0100, 0xe0505200,
- 0x781d0200, 0xe0505300,
- 0x781d0300, 0xbf8903f7,
- 0x7e008500, 0x7e028501,
- 0x7e048502, 0x7e068503,
- 0x807d847d, 0x8078ff78,
- 0x00000400, 0xbf0a6f7d,
- 0xbfa2ffee, 0xb8ef1e06,
- 0x8b6fc16f, 0xbfa1000e,
- 0x846f836f, 0x806f7d6f,
- 0xbefe00c1, 0xbeff0080,
+ 0x8078ff78, 0x00000080,
+ 0xbef600ff, 0x01000000,
+ 0x857d9972, 0x8b7d817d,
+ 0xbf06817d, 0xbefd0080,
+ 0xbfa2000c, 0xe0500000,
+ 0x781d0000, 0xbf8903f7,
+ 0xdac00000, 0x00000000,
+ 0x807dff7d, 0x00000080,
+ 0x8078ff78, 0x00000080,
+ 0xbf0a6f7d, 0xbfa2fff5,
+ 0xbfa0000b, 0xe0500000,
+ 0x781d0000, 0xbf8903f7,
+ 0xdac00000, 0x00000000,
+ 0x807dff7d, 0x00000100,
+ 0x8078ff78, 0x00000100,
+ 0xbf0a6f7d, 0xbfa2fff5,
+ 0xbef80080, 0xbefe00c1,
+ 0x857d9972, 0x8b7d817d,
+ 0xbf06817d, 0xbfa20002,
+ 0xbeff0080, 0xbfa00001,
+ 0xbeff00c1, 0xb8ef3b05,
+ 0x806f816f, 0x846f826f,
+ 0x857d9972, 0x8b7d817d,
+ 0xbf06817d, 0xbfa20024,
+ 0xbef600ff, 0x01000000,
+ 0xbeee0078, 0x8078ff78,
+ 0x00000200, 0xbefd0084,
+ 0xbf0a6f7d, 0xbfa10050,
0xe0505000, 0x781d0000,
+ 0xe0505080, 0x781d0100,
+ 0xe0505100, 0x781d0200,
+ 0xe0505180, 0x781d0300,
0xbf8903f7, 0x7e008500,
- 0x807d817d, 0x8078ff78,
- 0x00000080, 0xbf0a6f7d,
- 0xbfa2fff7, 0xbeff00c1,
+ 0x7e028501, 0x7e048502,
+ 0x7e068503, 0x807d847d,
+ 0x8078ff78, 0x00000200,
+ 0xbf0a6f7d, 0xbfa2ffee,
0xe0505000, 0x6e1d0000,
- 0xe0505100, 0x6e1d0100,
- 0xe0505200, 0x6e1d0200,
- 0xe0505300, 0x6e1d0300,
- 0xbf8903f7, 0xb8f83b05,
- 0x80788178, 0xbf0d9972,
- 0xbfa20002, 0x84788978,
- 0xbfa00001, 0x84788a78,
- 0xb8ee1e06, 0x846e8a6e,
- 0x80786e78, 0x8078ff78,
- 0x00000200, 0x80f8ff78,
- 0x00000050, 0xbef600ff,
- 0x01000000, 0xbefd00ff,
- 0x0000006c, 0x80f89078,
- 0xf428403a, 0xf0000000,
- 0xbf89fc07, 0x80fd847d,
- 0xbf800000, 0xbe804300,
- 0xbe824302, 0x80f8a078,
- 0xf42c403a, 0xf0000000,
- 0xbf89fc07, 0x80fd887d,
- 0xbf800000, 0xbe804300,
- 0xbe824302, 0xbe844304,
- 0xbe864306, 0x80f8c078,
- 0xf430403a, 0xf0000000,
- 0xbf89fc07, 0x80fd907d,
- 0xbf800000, 0xbe804300,
- 0xbe824302, 0xbe844304,
- 0xbe864306, 0xbe884308,
- 0xbe8a430a, 0xbe8c430c,
- 0xbe8e430e, 0xbf06807d,
- 0xbfa1fff0, 0xb980f801,
- 0x00000000, 0xbfbd0000,
+ 0xe0505080, 0x6e1d0100,
+ 0xe0505100, 0x6e1d0200,
+ 0xe0505180, 0x6e1d0300,
+ 0xbf8903f7, 0xbfa00034,
+ 0xbef600ff, 0x01000000,
+ 0xbeee0078, 0x8078ff78,
+ 0x00000400, 0xbefd0084,
+ 0xbf0a6f7d, 0xbfa10012,
+ 0xe0505000, 0x781d0000,
+ 0xe0505100, 0x781d0100,
+ 0xe0505200, 0x781d0200,
+ 0xe0505300, 0x781d0300,
+ 0xbf8903f7, 0x7e008500,
+ 0x7e028501, 0x7e048502,
+ 0x7e068503, 0x807d847d,
+ 0x8078ff78, 0x00000400,
+ 0xbf0a6f7d, 0xbfa2ffee,
+ 0xb8ef1e06, 0x8b6fc16f,
+ 0xbfa1000e, 0x846f836f,
+ 0x806f7d6f, 0xbefe00c1,
+ 0xbeff0080, 0xe0505000,
+ 0x781d0000, 0xbf8903f7,
+ 0x7e008500, 0x807d817d,
+ 0x8078ff78, 0x00000080,
+ 0xbf0a6f7d, 0xbfa2fff7,
+ 0xbeff00c1, 0xe0505000,
+ 0x6e1d0000, 0xe0505100,
+ 0x6e1d0100, 0xe0505200,
+ 0x6e1d0200, 0xe0505300,
+ 0x6e1d0300, 0xbf8903f7,
0xb8f83b05, 0x80788178,
0xbf0d9972, 0xbfa20002,
0x84788978, 0xbfa00001,
0x84788a78, 0xb8ee1e06,
0x846e8a6e, 0x80786e78,
0x8078ff78, 0x00000200,
+ 0x80f8ff78, 0x00000050,
0xbef600ff, 0x01000000,
- 0xf4205bfa, 0xf0000000,
- 0x80788478, 0xf4205b3a,
+ 0xbefd00ff, 0x0000006c,
+ 0x80f89078, 0xf428403a,
+ 0xf0000000, 0xbf89fc07,
+ 0x80fd847d, 0xbf800000,
+ 0xbe804300, 0xbe824302,
+ 0x80f8a078, 0xf42c403a,
+ 0xf0000000, 0xbf89fc07,
+ 0x80fd887d, 0xbf800000,
+ 0xbe804300, 0xbe824302,
+ 0xbe844304, 0xbe864306,
+ 0x80f8c078, 0xf430403a,
+ 0xf0000000, 0xbf89fc07,
+ 0x80fd907d, 0xbf800000,
+ 0xbe804300, 0xbe824302,
+ 0xbe844304, 0xbe864306,
+ 0xbe884308, 0xbe8a430a,
+ 0xbe8c430c, 0xbe8e430e,
+ 0xbf06807d, 0xbfa1fff0,
+ 0xb980f801, 0x00000000,
+ 0xbfbd0000, 0xb8f83b05,
+ 0x80788178, 0xbf0d9972,
+ 0xbfa20002, 0x84788978,
+ 0xbfa00001, 0x84788a78,
+ 0xb8ee1e06, 0x846e8a6e,
+ 0x80786e78, 0x8078ff78,
+ 0x00000200, 0xbef600ff,
+ 0x01000000, 0xf4205bfa,
0xf0000000, 0x80788478,
- 0xf4205b7a, 0xf0000000,
- 0x80788478, 0xf4205c3a,
+ 0xf4205b3a, 0xf0000000,
+ 0x80788478, 0xf4205b7a,
0xf0000000, 0x80788478,
- 0xf4205c7a, 0xf0000000,
- 0x80788478, 0xf4205eba,
+ 0xf4205c3a, 0xf0000000,
+ 0x80788478, 0xf4205c7a,
0xf0000000, 0x80788478,
- 0xf4205efa, 0xf0000000,
- 0x80788478, 0xf4205e7a,
+ 0xf4205eba, 0xf0000000,
+ 0x80788478, 0xf4205efa,
0xf0000000, 0x80788478,
- 0xf4205cfa, 0xf0000000,
- 0x80788478, 0xf4205bba,
+ 0xf4205e7a, 0xf0000000,
+ 0x80788478, 0xf4205cfa,
0xf0000000, 0x80788478,
- 0xbf89fc07, 0xb96ef814,
0xf4205bba, 0xf0000000,
0x80788478, 0xbf89fc07,
- 0xb96ef815, 0xbefd006f,
- 0xbefe0070, 0xbeff0071,
- 0x8b6f7bff, 0x000003ff,
- 0xb96f4803, 0x8b6f7bff,
- 0xfffff800, 0x856f8b6f,
- 0xb96fa2c3, 0xb973f801,
- 0xb8ee3b05, 0x806e816e,
- 0xbf0d9972, 0xbfa20002,
- 0x846e896e, 0xbfa00001,
- 0x846e8a6e, 0xb8ef1e06,
- 0x846f8a6f, 0x806e6f6e,
- 0x806eff6e, 0x00000200,
- 0x806e746e, 0x826f8075,
- 0x8b6fff6f, 0x0000ffff,
- 0xf4085c37, 0xf8000050,
- 0xf4085d37, 0xf8000060,
- 0xf4005e77, 0xf8000074,
- 0xbf89fc07, 0x8b6dff6d,
- 0x0000ffff, 0x8bfe7e7e,
- 0x8bea6a6a, 0xb8eef802,
- 0xbf0d866e, 0xbfa20002,
- 0xb97af802, 0xbe80486c,
- 0xb97af802, 0xbe804a6c,
- 0xbfb00000, 0xbf9f0000,
+ 0xb96ef814, 0xf4205bba,
+ 0xf0000000, 0x80788478,
+ 0xbf89fc07, 0xb96ef815,
+ 0xbefd006f, 0xbefe0070,
+ 0xbeff0071, 0x8b6f7bff,
+ 0x000003ff, 0xb96f4803,
+ 0x8b6f7bff, 0xfffff800,
+ 0x856f8b6f, 0xb96fa2c3,
+ 0xb973f801, 0xb8ee3b05,
+ 0x806e816e, 0xbf0d9972,
+ 0xbfa20002, 0x846e896e,
+ 0xbfa00001, 0x846e8a6e,
+ 0xb8ef1e06, 0x846f8a6f,
+ 0x806e6f6e, 0x806eff6e,
+ 0x00000200, 0x806e746e,
+ 0x826f8075, 0x8b6fff6f,
+ 0x0000ffff, 0xf4085c37,
+ 0xf8000050, 0xf4085d37,
+ 0xf8000060, 0xf4005e77,
+ 0xf8000074, 0xbf89fc07,
+ 0x8b6dff6d, 0x0000ffff,
+ 0x8bfe7e7e, 0x8bea6a6a,
+ 0xb8eef802, 0xbf0d866e,
+ 0xbfa20002, 0xb97af802,
+ 0xbe80486c, 0xb97af802,
+ 0xbe804a6c, 0xbfb00000,
0xbf9f0000, 0xbf9f0000,
0xbf9f0000, 0xbf9f0000,
+ 0xbf9f0000, 0x00000000,
};
s_getreg_b32 s_save_trapsts, hwreg(HW_REG_TRAPSTS)
#if SW_SA_TRAP
+ // If ttmp1[30] is set then issue s_barrier to unblock dependent waves.
+ s_bitcmp1_b32 s_save_pc_hi, 30
+ s_cbranch_scc0 L_TRAP_NO_BARRIER
+ s_barrier
+
+L_TRAP_NO_BARRIER:
// If ttmp1[31] is set then trap may occur early.
// Spin wait until SAVECTX exception is raised.
s_bitcmp1_b32 s_save_pc_hi, 31
},
};
+static struct kfd_gpu_cache_info gfx1037_cache_info[] = {
+ {
+ /* TCP L1 Cache per CU */
+ .cache_size = 16,
+ .cache_level = 1,
+ .flags = (CRAT_CACHE_FLAGS_ENABLED |
+ CRAT_CACHE_FLAGS_DATA_CACHE |
+ CRAT_CACHE_FLAGS_SIMD_CACHE),
+ .num_cu_shared = 1,
+ },
+ {
+ /* Scalar L1 Instruction Cache per SQC */
+ .cache_size = 32,
+ .cache_level = 1,
+ .flags = (CRAT_CACHE_FLAGS_ENABLED |
+ CRAT_CACHE_FLAGS_INST_CACHE |
+ CRAT_CACHE_FLAGS_SIMD_CACHE),
+ .num_cu_shared = 2,
+ },
+ {
+ /* Scalar L1 Data Cache per SQC */
+ .cache_size = 16,
+ .cache_level = 1,
+ .flags = (CRAT_CACHE_FLAGS_ENABLED |
+ CRAT_CACHE_FLAGS_DATA_CACHE |
+ CRAT_CACHE_FLAGS_SIMD_CACHE),
+ .num_cu_shared = 2,
+ },
+ {
+ /* GL1 Data Cache per SA */
+ .cache_size = 128,
+ .cache_level = 1,
+ .flags = (CRAT_CACHE_FLAGS_ENABLED |
+ CRAT_CACHE_FLAGS_DATA_CACHE |
+ CRAT_CACHE_FLAGS_SIMD_CACHE),
+ .num_cu_shared = 2,
+ },
+ {
+ /* L2 Data Cache per GPU (Total Tex Cache) */
+ .cache_size = 256,
+ .cache_level = 2,
+ .flags = (CRAT_CACHE_FLAGS_ENABLED |
+ CRAT_CACHE_FLAGS_DATA_CACHE |
+ CRAT_CACHE_FLAGS_SIMD_CACHE),
+ .num_cu_shared = 2,
+ },
+};
+
+static struct kfd_gpu_cache_info gc_10_3_6_cache_info[] = {
+ {
+ /* TCP L1 Cache per CU */
+ .cache_size = 16,
+ .cache_level = 1,
+ .flags = (CRAT_CACHE_FLAGS_ENABLED |
+ CRAT_CACHE_FLAGS_DATA_CACHE |
+ CRAT_CACHE_FLAGS_SIMD_CACHE),
+ .num_cu_shared = 1,
+ },
+ {
+ /* Scalar L1 Instruction Cache per SQC */
+ .cache_size = 32,
+ .cache_level = 1,
+ .flags = (CRAT_CACHE_FLAGS_ENABLED |
+ CRAT_CACHE_FLAGS_INST_CACHE |
+ CRAT_CACHE_FLAGS_SIMD_CACHE),
+ .num_cu_shared = 2,
+ },
+ {
+ /* Scalar L1 Data Cache per SQC */
+ .cache_size = 16,
+ .cache_level = 1,
+ .flags = (CRAT_CACHE_FLAGS_ENABLED |
+ CRAT_CACHE_FLAGS_DATA_CACHE |
+ CRAT_CACHE_FLAGS_SIMD_CACHE),
+ .num_cu_shared = 2,
+ },
+ {
+ /* GL1 Data Cache per SA */
+ .cache_size = 128,
+ .cache_level = 1,
+ .flags = (CRAT_CACHE_FLAGS_ENABLED |
+ CRAT_CACHE_FLAGS_DATA_CACHE |
+ CRAT_CACHE_FLAGS_SIMD_CACHE),
+ .num_cu_shared = 2,
+ },
+ {
+ /* L2 Data Cache per GPU (Total Tex Cache) */
+ .cache_size = 256,
+ .cache_level = 2,
+ .flags = (CRAT_CACHE_FLAGS_ENABLED |
+ CRAT_CACHE_FLAGS_DATA_CACHE |
+ CRAT_CACHE_FLAGS_SIMD_CACHE),
+ .num_cu_shared = 2,
+ },
+};
+
static void kfd_populated_cu_info_cpu(struct kfd_topology_device *dev,
struct crat_subtype_computeunit *cu)
{
num_of_cache_types = ARRAY_SIZE(beige_goby_cache_info);
break;
case IP_VERSION(10, 3, 3):
- case IP_VERSION(10, 3, 6): /* TODO: Double check these on production silicon */
- case IP_VERSION(10, 3, 7): /* TODO: Double check these on production silicon */
pcache_info = yellow_carp_cache_info;
num_of_cache_types = ARRAY_SIZE(yellow_carp_cache_info);
break;
+ case IP_VERSION(10, 3, 6):
+ pcache_info = gc_10_3_6_cache_info;
+ num_of_cache_types = ARRAY_SIZE(gc_10_3_6_cache_info);
+ break;
+ case IP_VERSION(10, 3, 7):
+ pcache_info = gfx1037_cache_info;
+ num_of_cache_types = ARRAY_SIZE(gfx1037_cache_info);
+ break;
case IP_VERSION(11, 0, 0):
case IP_VERSION(11, 0, 1):
case IP_VERSION(11, 0, 2):
out_unlock_svms:
mutex_unlock(&p->svms.lock);
out_unref_process:
+ pr_debug("CPU fault svms 0x%p address 0x%lx done\n", &p->svms, addr);
kfd_unref_process(p);
out_mmput:
mmput(mm);
-
- pr_debug("CPU fault svms 0x%p address 0x%lx done\n", &p->svms, addr);
-
return r ? VM_FAULT_SIGBUS : 0;
}
adev->dm.dc->debug.visual_confirm = amdgpu_dc_visual_confirm;
+ /* TODO: Remove after the DP2 receiver gets proper support for the Cable ID feature */
+ adev->dm.dc->debug.ignore_cable_id = true;
+
r = dm_dmub_hw_init(adev);
if (r) {
DRM_ERROR("DMUB interface failed to initialize: status=%d\n", r);
{
struct amdgpu_device *adev = drm_to_adev(plane->dev);
const struct drm_format_info *info = drm_format_info(format);
- struct hw_asic_id asic_id = adev->dm.dc->ctx->asic_id;
+ int i;
enum dm_micro_swizzle microtile = modifier_gfx9_swizzle_mode(modifier) & 3;
return true;
}
- /* check if swizzle mode is supported by this version of DCN */
- switch (asic_id.chip_family) {
- case FAMILY_SI:
- case FAMILY_CI:
- case FAMILY_KV:
- case FAMILY_CZ:
- case FAMILY_VI:
- /* asics before AI does not have modifier support */
- return false;
- case FAMILY_AI:
- case FAMILY_RV:
- case FAMILY_NV:
- case FAMILY_VGH:
- case FAMILY_YELLOW_CARP:
- case AMDGPU_FAMILY_GC_10_3_6:
- case AMDGPU_FAMILY_GC_10_3_7:
- switch (AMD_FMT_MOD_GET(TILE, modifier)) {
- case AMD_FMT_MOD_TILE_GFX9_64K_R_X:
- case AMD_FMT_MOD_TILE_GFX9_64K_D_X:
- case AMD_FMT_MOD_TILE_GFX9_64K_S_X:
- case AMD_FMT_MOD_TILE_GFX9_64K_D:
- return true;
- default:
- return false;
- }
- break;
- case AMDGPU_FAMILY_GC_11_0_0:
- case AMDGPU_FAMILY_GC_11_0_1:
- switch (AMD_FMT_MOD_GET(TILE, modifier)) {
- case AMD_FMT_MOD_TILE_GFX11_256K_R_X:
- case AMD_FMT_MOD_TILE_GFX9_64K_R_X:
- case AMD_FMT_MOD_TILE_GFX9_64K_D_X:
- case AMD_FMT_MOD_TILE_GFX9_64K_S_X:
- case AMD_FMT_MOD_TILE_GFX9_64K_D:
- return true;
- default:
- return false;
- }
- break;
- default:
- ASSERT(0); /* Unknown asic */
- break;
+ /* Check that the modifier is on the list of the plane's supported modifiers. */
+ for (i = 0; i < plane->modifier_count; i++) {
+ if (modifier == plane->modifiers[i])
+ break;
}
+ if (i == plane->modifier_count)
+ return false;
/*
* For D swizzle the canonical modifier depends on the bpp, so check
struct clk_mgr_internal *clk_mgr = TO_CLK_MGR_INTERNAL(clk_mgr_base);
unsigned int num_levels;
struct clk_limit_num_entries *num_entries_per_clk = &clk_mgr_base->bw_params->clk_table.num_entries_per_clk;
+ unsigned int i;
memset(&(clk_mgr_base->clks), 0, sizeof(struct dc_clocks));
clk_mgr_base->clks.p_state_change_support = true;
clk_mgr->dpm_present = true;
if (clk_mgr_base->ctx->dc->debug.min_disp_clk_khz) {
- unsigned int i;
-
for (i = 0; i < num_levels; i++)
if (clk_mgr_base->bw_params->clk_table.entries[i].dispclk_mhz
< khz_to_mhz_ceil(clk_mgr_base->ctx->dc->debug.min_disp_clk_khz))
clk_mgr_base->bw_params->clk_table.entries[i].dispclk_mhz
= khz_to_mhz_ceil(clk_mgr_base->ctx->dc->debug.min_disp_clk_khz);
}
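+ /* hard-cap every DISPCLK level at 1950 MHz */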
+ for (i = 0; i < num_levels; i++)
+ if (clk_mgr_base->bw_params->clk_table.entries[i].dispclk_mhz > 1950)
+ clk_mgr_base->bw_params->clk_table.entries[i].dispclk_mhz = 1950;
if (clk_mgr_base->ctx->dc->debug.min_dpp_clk_khz) {
- unsigned int i;
-
for (i = 0; i < num_levels; i++)
if (clk_mgr_base->bw_params->clk_table.entries[i].dppclk_mhz
< khz_to_mhz_ceil(clk_mgr_base->ctx->dc->debug.min_dpp_clk_khz))
&clk_mgr_base->bw_params->clk_table.entries[0].memclk_mhz,
&num_entries_per_clk->num_memclk_levels);
+ /* memclk must have at least one level */
+ if (!num_entries_per_clk->num_memclk_levels)
+ num_entries_per_clk->num_memclk_levels = 1;
+
dcn32_init_single_clock(clk_mgr, PPCLK_FCLK,
&clk_mgr_base->bw_params->clk_table.entries[0].fclk_mhz,
&num_entries_per_clk->num_fclk_levels);
bool enable_double_buffered_dsc_pg_support;
bool enable_dp_dig_pixel_rate_div_policy;
enum lttpr_mode lttpr_mode_override;
+ unsigned int dsc_delay_factor_wa_x1000;
};
struct gpu_info_soc_bounding_box_v1_0;
hubp->att.size.bits.width = attr->width;
hubp->att.size.bits.height = attr->height;
hubp->att.cur_ctl.bits.mode = attr->color_format;
+
+ hubp->cur_rect.w = attr->width;
+ hubp->cur_rect.h = attr->height;
+
hubp->att.cur_ctl.bits.pitch = hw_pitch;
hubp->att.cur_ctl.bits.line_per_chunk = lpc;
hubp->att.cur_ctl.bits.cur_2x_magnify = attr->attribute_flags.bits.ENABLE_MAGNIFICATION;
lock,
&hw_locks,
&inst_flags);
- } else if (pipe->stream && pipe->stream->mall_stream_config.type == SUBVP_MAIN) {
- union dmub_inbox0_cmd_lock_hw hw_lock_cmd = { 0 };
- hw_lock_cmd.bits.command_code = DMUB_INBOX0_CMD__HW_LOCK;
- hw_lock_cmd.bits.hw_lock_client = HW_LOCK_CLIENT_DRIVER;
- hw_lock_cmd.bits.lock_pipe = 1;
- hw_lock_cmd.bits.otg_inst = pipe->stream_res.tg->inst;
- hw_lock_cmd.bits.lock = lock;
- if (!lock)
- hw_lock_cmd.bits.should_release = 1;
- dmub_hw_lock_mgr_inbox0_cmd(dc->ctx->dmub_srv, hw_lock_cmd);
} else if (pipe->plane_state != NULL && pipe->plane_state->triplebuffer_flips) {
if (lock)
pipe->stream_res.tg->funcs->triplebuffer_lock(pipe->stream_res.tg);
for (j = 0; j < TIMEOUT_FOR_PIPE_ENABLE_MS*1000
&& hubp->funcs->hubp_is_flip_pending(hubp); j++)
- mdelay(1);
+ udelay(1);
}
}
.num_ddc = 5,
.num_vmid = 16,
.num_mpc_3dlut = 2,
- .num_dsc = 3,
+ .num_dsc = 4,
};
static const struct dc_plane_cap plane_cap = {
struct pipe_ctx *pipe = &context->res_ctx.pipe_ctx[i];
if (!pipe->stream)
- return false;
+ continue;
if (!pipe->plane_state)
return false;
CFLAGS_$(AMDDALPATH)/dc/dml/dcn32/dcn32_fpu.o := $(dml_ccflags)
CFLAGS_$(AMDDALPATH)/dc/dml/dcn32/display_mode_vba_32.o := $(dml_ccflags) $(frame_warn_flag)
CFLAGS_$(AMDDALPATH)/dc/dml/dcn32/display_rq_dlg_calc_32.o := $(dml_ccflags)
-CFLAGS_$(AMDDALPATH)/dc/dml/dcn32/display_mode_vba_util_32.o := $(dml_ccflags)
+CFLAGS_$(AMDDALPATH)/dc/dml/dcn32/display_mode_vba_util_32.o := $(dml_ccflags) $(frame_warn_flag)
CFLAGS_$(AMDDALPATH)/dc/dml/dcn321/dcn321_fpu.o := $(dml_ccflags)
CFLAGS_$(AMDDALPATH)/dc/dml/dcn31/dcn31_fpu.o := $(dml_ccflags)
CFLAGS_$(AMDDALPATH)/dc/dml/dcn301/dcn301_fpu.o := $(dml_ccflags)
pipes[pipe_cnt].pipe.src.dcc = false;
pipes[pipe_cnt].pipe.src.dcc_rate = 1;
pipes[pipe_cnt].pipe.dest.synchronized_vblank_all_planes = synchronized_vblank;
+ pipes[pipe_cnt].pipe.dest.synchronize_timings = synchronized_vblank;
pipes[pipe_cnt].pipe.dest.hblank_start = timing->h_total - timing->h_front_porch;
pipes[pipe_cnt].pipe.dest.hblank_end = pipes[pipe_cnt].pipe.dest.hblank_start
- timing->h_addressable
if (dc->ctx->dc_bios->vram_info.dram_channel_width_bytes)
dcn3_2_soc.dram_channel_width_bytes = dc->ctx->dc_bios->vram_info.dram_channel_width_bytes;
-
}
+ /* DML DSC delay factor workaround */
+ dcn3_2_ip.dsc_delay_factor_wa = dc->debug.dsc_delay_factor_wa_x1000 / 1000.0;
+
/* Override dispclk_dppclk_vco_speed_mhz from Clk Mgr */
dcn3_2_soc.dispclk_dppclk_vco_speed_mhz = dc->clk_mgr->dentist_vco_freq_khz / 1000.0;
dc->dml.soc.dispclk_dppclk_vco_speed_mhz = dc->clk_mgr->dentist_vco_freq_khz / 1000.0;
for (k = 0; k < mode_lib->vba.NumberOfActiveSurfaces; ++k) {
v->DSCDelay[k] = dml32_DSCDelayRequirement(mode_lib->vba.DSCEnabled[k],
mode_lib->vba.ODMCombineEnabled[k], mode_lib->vba.DSCInputBitPerComponent[k],
- mode_lib->vba.OutputBpp[k], mode_lib->vba.HActive[k], mode_lib->vba.HTotal[k],
+ mode_lib->vba.OutputBppPerState[mode_lib->vba.VoltageLevel][k],
+ mode_lib->vba.HActive[k], mode_lib->vba.HTotal[k],
mode_lib->vba.NumberOfDSCSlices[k], mode_lib->vba.OutputFormat[k],
mode_lib->vba.Output[k], mode_lib->vba.PixelClock[k],
- mode_lib->vba.PixelClockBackEnd[k]);
+ mode_lib->vba.PixelClockBackEnd[k], mode_lib->vba.ip.dsc_delay_factor_wa);
}
for (k = 0; k < mode_lib->vba.NumberOfActiveSurfaces; ++k)
&& !mode_lib->vba.MSOOrODMSplitWithNonDPLink
&& !mode_lib->vba.NotEnoughLanesForMSO
&& mode_lib->vba.LinkCapacitySupport[i] == true && !mode_lib->vba.P2IWith420
- && !mode_lib->vba.DSCOnlyIfNecessaryWithBPP
+ //&& !mode_lib->vba.DSCOnlyIfNecessaryWithBPP
&& !mode_lib->vba.DSC422NativeNotSupported
&& !mode_lib->vba.MPCCombineMethodIncompatible
&& mode_lib->vba.ODMCombine2To1SupportCheckOK[i] == true
mode_lib->vba.OutputBppPerState[i][k], mode_lib->vba.HActive[k],
mode_lib->vba.HTotal[k], mode_lib->vba.NumberOfDSCSlices[k],
mode_lib->vba.OutputFormat[k], mode_lib->vba.Output[k],
- mode_lib->vba.PixelClock[k], mode_lib->vba.PixelClockBackEnd[k]);
+ mode_lib->vba.PixelClock[k], mode_lib->vba.PixelClockBackEnd[k],
+ mode_lib->vba.ip.dsc_delay_factor_wa);
}
for (k = 0; k <= mode_lib->vba.NumberOfActiveSurfaces - 1; k++) {
enum output_format_class OutputFormat,
enum output_encoder_class Output,
double PixelClock,
- double PixelClockBackEnd)
+ double PixelClockBackEnd,
+ double dsc_delay_factor_wa)
{
unsigned int DSCDelayRequirement_val;
}
DSCDelayRequirement_val = DSCDelayRequirement_val + (HTotal - HActive) *
- dml_ceil(DSCDelayRequirement_val / HActive, 1);
+ dml_ceil((double)DSCDelayRequirement_val / HActive, 1);
DSCDelayRequirement_val = DSCDelayRequirement_val * PixelClock / PixelClockBackEnd;
dml_print("DML::%s: DSCDelayRequirement_val = %d\n", __func__, DSCDelayRequirement_val);
#endif
- return DSCDelayRequirement_val;
+ return dml_ceil(DSCDelayRequirement_val * dsc_delay_factor_wa, 1);
}
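Worked example of the factor plumbing above (values illustrative, not taken from any ASIC table): setting dc->debug.dsc_delay_factor_wa_x1000 = 1200 yields dsc_delay_factor_wa = 1.2, so a computed DSCDelayRequirement_val of 100 pixels is returned as dml_ceil(100 * 1.2, 1) = 120.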
void dml32_CalculateSurfaceSizeInMall(
enum output_format_class OutputFormat,
enum output_encoder_class Output,
double PixelClock,
- double PixelClockBackEnd);
+ double PixelClockBackEnd,
+ double dsc_delay_factor_wa);
void dml32_CalculateSurfaceSizeInMall(
unsigned int NumberOfActiveSurfaces,
dml_print("DML_DLG: %s: vready_after_vcount0 = %d\n", __func__, dlg_regs->vready_after_vcount0);
- dst_x_after_scaler = get_dst_x_after_scaler(mode_lib, e2e_pipe_param, num_pipes, pipe_idx);
- dst_y_after_scaler = get_dst_y_after_scaler(mode_lib, e2e_pipe_param, num_pipes, pipe_idx);
+ dst_x_after_scaler = dml_ceil(get_dst_x_after_scaler(mode_lib, e2e_pipe_param, num_pipes, pipe_idx), 1);
+ dst_y_after_scaler = dml_ceil(get_dst_y_after_scaler(mode_lib, e2e_pipe_param, num_pipes, pipe_idx), 1);
// do some adjustment on the dst_after scaler to account for odm combine mode
dml_print("DML_DLG: %s: input dst_x_after_scaler = %d\n", __func__, dst_x_after_scaler);
#include "dcn321_fpu.h"
#include "dcn32/dcn32_resource.h"
#include "dcn321/dcn321_resource.h"
+#include "dml/dcn32/display_mode_vba_util_32.h"
#define DCN3_2_DEFAULT_DET_SIZE 256
},
},
.num_states = 1,
- .sr_exit_time_us = 12.36,
- .sr_enter_plus_exit_time_us = 16.72,
+ .sr_exit_time_us = 19.95,
+ .sr_enter_plus_exit_time_us = 24.36,
.sr_exit_z8_time_us = 285.0,
.sr_enter_plus_exit_z8_time_us = 320,
.writeback_latency_us = 12.0,
.round_trip_ping_latency_dcfclk_cycles = 263,
- .urgent_latency_pixel_data_only_us = 4.0,
- .urgent_latency_pixel_mixed_with_vm_data_us = 4.0,
- .urgent_latency_vm_data_only_us = 4.0,
+ .urgent_latency_pixel_data_only_us = 9.35,
+ .urgent_latency_pixel_mixed_with_vm_data_us = 9.35,
+ .urgent_latency_vm_data_only_us = 9.35,
.fclk_change_latency_us = 20,
.usr_retraining_latency_us = 2,
.smn_latency_us = 2,
if (dc->ctx->dc_bios->vram_info.dram_channel_width_bytes)
dcn3_21_soc.dram_channel_width_bytes = dc->ctx->dc_bios->vram_info.dram_channel_width_bytes;
-
}
+ /* DML DSC delay factor workaround */
+ dcn3_21_ip.dsc_delay_factor_wa = dc->debug.dsc_delay_factor_wa_x1000 / 1000.0;
+
/* Override dispclk_dppclk_vco_speed_mhz from Clk Mgr */
dcn3_21_soc.dispclk_dppclk_vco_speed_mhz = dc->clk_mgr->dentist_vco_freq_khz / 1000.0;
dc->dml.soc.dispclk_dppclk_vco_speed_mhz = dc->clk_mgr->dentist_vco_freq_khz / 1000.0;
unsigned int max_num_dp2p0_outputs;
unsigned int max_num_dp2p0_streams;
unsigned int VBlankNomDefaultUS;
+
+ /* DM workarounds */
+ double dsc_delay_factor_wa; // TODO: Remove after implementing root cause fix
};
struct _vcs_dpi_display_xfc_params_st {
mode_lib->vba.skip_dio_check[mode_lib->vba.NumberOfActivePlanes] =
dout->is_virtual;
- if (!dout->dsc_enable)
+ if (dout->dsc_enable)
mode_lib->vba.ForcedOutputLinkBPP[mode_lib->vba.NumberOfActivePlanes] = dout->output_bpp;
else
mode_lib->vba.ForcedOutputLinkBPP[mode_lib->vba.NumberOfActivePlanes] = 0.0;
uint32_t queue_id);
int (*hqd_destroy)(struct amdgpu_device *adev, void *mqd,
- uint32_t reset_type, unsigned int timeout,
- uint32_t pipe_id, uint32_t queue_id);
+ enum kfd_preempt_type reset_type,
+ unsigned int timeout, uint32_t pipe_id,
+ uint32_t queue_id);
bool (*hqd_sdma_is_occupied)(struct amdgpu_device *adev, void *mqd);
if (adev->pm.sysfs_initialized)
return 0;
+ INIT_LIST_HEAD(&adev->pm.pm_attr_list);
+
if (adev->pm.dpm_enabled == 0)
return 0;
- INIT_LIST_HEAD(&adev->pm.pm_attr_list);
-
adev->pm.int_hwmon_dev = hwmon_device_register_with_groups(adev->dev,
DRIVER_NAME, adev,
hwmon_groups);
int vega10_fan_ctrl_get_fan_speed_pwm(struct pp_hwmgr *hwmgr,
uint32_t *speed)
{
- struct amdgpu_device *adev = hwmgr->adev;
- uint32_t duty100, duty;
- uint64_t tmp64;
+ uint32_t current_rpm;
+ uint32_t percent = 0;
- duty100 = REG_GET_FIELD(RREG32_SOC15(THM, 0, mmCG_FDO_CTRL1),
- CG_FDO_CTRL1, FMAX_DUTY100);
- duty = REG_GET_FIELD(RREG32_SOC15(THM, 0, mmCG_THERMAL_STATUS),
- CG_THERMAL_STATUS, FDO_PWM_DUTY);
+ if (hwmgr->thermal_controller.fanInfo.bNoFan)
+ return 0;
- if (!duty100)
- return -EINVAL;
+ if (vega10_get_current_rpm(hwmgr, &current_rpm))
+ return -1;
+
+ if (hwmgr->thermal_controller.
+ advanceFanControlParameters.usMaxFanRPM != 0)
+ percent = current_rpm * 255 /
+ hwmgr->thermal_controller.
+ advanceFanControlParameters.usMaxFanRPM;
- tmp64 = (uint64_t)duty * 255;
- do_div(tmp64, duty100);
- *speed = MIN((uint32_t)tmp64, 255);
+ *speed = MIN(percent, 255);
return 0;
}
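The conversion above is a linear rescale of the measured RPM into the 0..255 PWM range; e.g. current_rpm = 1500 with usMaxFanRPM = 3000 reports 1500 * 255 / 3000 = 127. A minimal standalone sketch of the same math (the helper name is hypothetical, not part of the patch):

	/* Hypothetical standalone version of the RPM-to-PWM scaling above. */
	static uint32_t rpm_to_pwm255(uint32_t rpm, uint32_t max_rpm)
	{
		uint32_t pwm;

		if (!max_rpm)
			return 0;		/* no fan calibration data */

		pwm = rpm * 255 / max_rpm;	/* linear scale into 0..255 */
		return pwm > 255 ? 255 : pwm;	/* matches MIN(percent, 255) */
	}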
ret = smu_enable_thermal_alert(smu);
if (ret) {
- dev_err(adev->dev, "Failed to enable thermal alert!\n");
- return ret;
+ dev_err(adev->dev, "Failed to enable thermal alert!\n");
+ return ret;
}
ret = smu_notify_display_change(smu);
#define SMU13_DRIVER_IF_V13_0_0_H
//Increment this version if SkuTable_t or BoardTable_t change
-#define PPTABLE_VERSION 0x24
+#define PPTABLE_VERSION 0x26
#define NUM_GFXCLK_DPM_LEVELS 16
#define NUM_SOCCLK_DPM_LEVELS 8
#define FEATURE_SPARE_63_BIT 63
#define NUM_FEATURES 64
+#define ALLOWED_FEATURE_CTRL_DEFAULT 0xFFFFFFFFFFFFFFFFULL
+#define ALLOWED_FEATURE_CTRL_SCPM ((1 << FEATURE_DPM_GFXCLK_BIT) | \
+ (1 << FEATURE_DPM_GFX_POWER_OPTIMIZER_BIT) | \
+ (1 << FEATURE_DPM_UCLK_BIT) | \
+ (1 << FEATURE_DPM_FCLK_BIT) | \
+ (1 << FEATURE_DPM_SOCCLK_BIT) | \
+ (1 << FEATURE_DPM_MP0CLK_BIT) | \
+ (1 << FEATURE_DPM_LINK_BIT) | \
+ (1 << FEATURE_DPM_DCN_BIT) | \
+ (1 << FEATURE_DS_GFXCLK_BIT) | \
+ (1 << FEATURE_DS_SOCCLK_BIT) | \
+ (1 << FEATURE_DS_FCLK_BIT) | \
+ (1 << FEATURE_DS_LCLK_BIT) | \
+ (1 << FEATURE_DS_DCFCLK_BIT) | \
+ (1 << FEATURE_DS_UCLK_BIT))
+
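Presumably (the header itself does not spell this out) ALLOWED_FEATURE_CTRL_SCPM whitelists only the listed DPM and deep-sleep feature bits for the feature-control messages when SCPM is active, while ALLOWED_FEATURE_CTRL_DEFAULT leaves all 64 feature bits controllable.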
//For use with feature control messages
typedef enum {
FEATURE_PWR_ALL,
#define DEBUG_OVERRIDE_DISABLE_DFLL 0x00000200
#define DEBUG_OVERRIDE_ENABLE_RLC_VF_BRINGUP_MODE 0x00000400
#define DEBUG_OVERRIDE_DFLL_MASTER_MODE 0x00000800
+#define DEBUG_OVERRIDE_ENABLE_PROFILING_MODE 0x00001000
// VR Mapping Bit Defines
#define VR_MAPPING_VR_SELECT_MASK 0x01
} I2cControllerPort_e;
typedef enum {
- I2C_CONTROLLER_NAME_VR_GFX = 0,
- I2C_CONTROLLER_NAME_VR_SOC,
- I2C_CONTROLLER_NAME_VR_VMEMP,
- I2C_CONTROLLER_NAME_VR_VDDIO,
- I2C_CONTROLLER_NAME_LIQUID0,
- I2C_CONTROLLER_NAME_LIQUID1,
- I2C_CONTROLLER_NAME_PLX,
- I2C_CONTROLLER_NAME_OTHER,
- I2C_CONTROLLER_NAME_COUNT,
+ I2C_CONTROLLER_NAME_VR_GFX = 0,
+ I2C_CONTROLLER_NAME_VR_SOC,
+ I2C_CONTROLLER_NAME_VR_VMEMP,
+ I2C_CONTROLLER_NAME_VR_VDDIO,
+ I2C_CONTROLLER_NAME_LIQUID0,
+ I2C_CONTROLLER_NAME_LIQUID1,
+ I2C_CONTROLLER_NAME_PLX,
+ I2C_CONTROLLER_NAME_FAN_INTAKE,
+ I2C_CONTROLLER_NAME_COUNT,
} I2cControllerName_e;
typedef enum {
I2C_CONTROLLER_THROTTLER_LIQUID0,
I2C_CONTROLLER_THROTTLER_LIQUID1,
I2C_CONTROLLER_THROTTLER_PLX,
+ I2C_CONTROLLER_THROTTLER_FAN_INTAKE,
I2C_CONTROLLER_THROTTLER_INA3221,
I2C_CONTROLLER_THROTTLER_COUNT,
} I2cControllerThrottler_e;
typedef enum {
- I2C_CONTROLLER_PROTOCOL_VR_XPDE132G5,
- I2C_CONTROLLER_PROTOCOL_VR_IR35217,
- I2C_CONTROLLER_PROTOCOL_TMP_TMP102A,
- I2C_CONTROLLER_PROTOCOL_INA3221,
- I2C_CONTROLLER_PROTOCOL_COUNT,
+ I2C_CONTROLLER_PROTOCOL_VR_XPDE132G5,
+ I2C_CONTROLLER_PROTOCOL_VR_IR35217,
+ I2C_CONTROLLER_PROTOCOL_TMP_MAX31875,
+ I2C_CONTROLLER_PROTOCOL_INA3221,
+ I2C_CONTROLLER_PROTOCOL_COUNT,
} I2cControllerProtocol_e;
typedef struct {
#define PP_NUM_OD_VF_CURVE_POINTS PP_NUM_RTAVFS_PWL_ZONES + 1
+typedef enum {
+ FAN_MODE_AUTO = 0,
+ FAN_MODE_MANUAL_LINEAR,
+} FanMode_e;
typedef struct {
uint32_t FeatureCtrlMask;
//Voltage control
int16_t VoltageOffsetPerZoneBoundary[PP_NUM_OD_VF_CURVE_POINTS];
- uint16_t reserved[2];
+ uint16_t VddGfxVmax; // in mV
+
+ uint8_t IdlePwrSavingFeaturesCtrl;
+ uint8_t RuntimePwrSavingFeaturesCtrl;
//Frequency changes
int16_t GfxclkFmin; // MHz
//PPT
int16_t Ppt; // %
- int16_t reserved1;
+ int16_t Tdc;
//Fan control
uint8_t FanLinearPwmPoints[NUM_OD_FAN_MAX_POINTS];
uint32_t FeatureCtrlMask;
int16_t VoltageOffsetPerZoneBoundary;
- uint16_t reserved[2];
+ uint16_t VddGfxVmax; // in mV
+
+ uint8_t IdlePwrSavingFeaturesCtrl;
+ uint8_t RuntimePwrSavingFeaturesCtrl;
- uint16_t GfxclkFmin; // MHz
- uint16_t GfxclkFmax; // MHz
+ int16_t GfxclkFmin; // MHz
+ int16_t GfxclkFmax; // MHz
uint16_t UclkFmin; // MHz
uint16_t UclkFmax; // MHz
//PPT
int16_t Ppt; // %
- int16_t reserved1;
+ int16_t Tdc;
uint8_t FanLinearPwmPoints;
uint8_t FanLinearTempPoints;
uint16_t FanStartTempMin;
uint16_t FanStartTempMax;
- uint32_t Spare[12];
+ uint16_t PowerMinPpt0[POWER_SOURCE_COUNT];
+ uint32_t Spare[11];
} MsgLimits_t;
uint32_t GfxoffSpare[15];
// GFX GPO
- uint32_t GfxGpoSpare[16];
+ uint32_t DfllBtcMasterScalerM;
+ int32_t DfllBtcMasterScalerB;
+ uint32_t DfllBtcSlaveScalerM;
+ int32_t DfllBtcSlaveScalerB;
+
+ uint32_t DfllPccAsWaitCtrl; //GDFLL_AS_WAIT_CTRL_PCC register value to be passed to RLC msg
+ uint32_t DfllPccAsStepCtrl; //GDFLL_AS_STEP_CTRL_PCC register value to be passed to RLC msg
+
+ uint32_t DfllL2FrequencyBoostM; //Unitless (float)
+ uint32_t DfllL2FrequencyBoostB; //In MHz (integer)
+ uint32_t GfxGpoSpare[8];
// GFX DCS
uint16_t IntakeTempHighIntakeAcousticLimit;
uint16_t IntakeTempAcouticLimitReleaseRate;
- uint16_t FanStalledTempLimitOffset;
+ int16_t FanAbnormalTempLimitOffset;
uint16_t FanStalledTriggerRpm;
- uint16_t FanAbnormalTriggerRpm;
- uint16_t FanPadding;
+ uint16_t FanAbnormalTriggerRpmCoeff;
+ uint16_t FanAbnormalDetectionEnable;
- uint32_t FanSpare[14];
+ uint8_t FanIntakeSensorSupport;
+ uint8_t FanIntakePadding[3];
+ uint32_t FanSpare[13];
// SECTION: VDD_GFX AVFS
int16_t TotalBoardPowerM;
int16_t TotalBoardPowerB;
+ //PMFW-11158
+ QuadraticInt_t qFeffCoeffGameClock[POWER_SOURCE_COUNT];
+ QuadraticInt_t qFeffCoeffBaseClock[POWER_SOURCE_COUNT];
+ QuadraticInt_t qFeffCoeffBoostClock[POWER_SOURCE_COUNT];
+
// SECTION: Sku Reserved
- uint32_t Spare[61];
+ uint32_t Spare[43];
// Padding for MMHUB - do not modify this
uint32_t MmHubPadding[8];
uint32_t PostVoltageSetBacoDelay; // in microseconds. Amount of time FW will wait after power good is established or PSI0 command is issued
uint32_t BacoEntryDelay; // in milliseconds. Amount of time FW will wait to trigger BACO entry after receiving entry notification from OS
+ uint8_t FuseWritePowerMuxPresent;
+ uint8_t FuseWritePadding[3];
+
// SECTION: Board Reserved
- uint32_t BoardSpare[64];
+ uint32_t BoardSpare[63];
// SECTION: Structure Padding
uint16_t AverageTotalBoardPower;
uint16_t AvgTemperature[TEMP_COUNT];
- uint16_t TempPadding;
+ uint16_t AvgTemperatureFanIntake;
uint8_t PcieRate ;
uint8_t PcieWidth ;
#define IH_INTERRUPT_CONTEXT_ID_AUDIO_D0 0x5
#define IH_INTERRUPT_CONTEXT_ID_AUDIO_D3 0x6
#define IH_INTERRUPT_CONTEXT_ID_THERMAL_THROTTLING 0x7
+#define IH_INTERRUPT_CONTEXT_ID_FAN_ABNORMAL 0x8
+#define IH_INTERRUPT_CONTEXT_ID_FAN_RECOVERY 0x9
#endif
// *** IMPORTANT ***
// SMU TEAM: Always increment the interface version if
// any structure is changed in this file
-#define PMFW_DRIVER_IF_VERSION 5
+#define PMFW_DRIVER_IF_VERSION 7
typedef struct {
int32_t value;
uint16_t DclkFrequency; //[MHz]
uint16_t MemclkFrequency; //[MHz]
uint16_t spare; //[centi]
- uint16_t UvdActivity; //[centi]
uint16_t GfxActivity; //[centi]
+ uint16_t UvdActivity; //[centi]
uint16_t Voltage[2]; //[mV] indices: VDDCR_VDD, VDDCR_SOC
uint16_t Current[2]; //[mA] indices: VDDCR_VDD, VDDCR_SOC
uint16_t DeviceState;
uint16_t CurTemp; //[centi-Celsius]
uint16_t spare2;
+
+ uint16_t AverageGfxclkFrequency;
+ uint16_t AverageFclkFrequency;
+ uint16_t AverageGfxActivity;
+ uint16_t AverageSocclkFrequency;
+ uint16_t AverageVclkFrequency;
+ uint16_t AverageVcnActivity;
+ uint16_t AverageDRAMReads; //Filtered DF Bandwidth::DRAM Reads
+ uint16_t AverageDRAMWrites; //Filtered DF Bandwidth::DRAM Writes
+ uint16_t AverageSocketPower; //Filtered value of CurrentSocketPower
+ uint16_t AverageCorePower; //Filtered value of [sum of CorePower[8]]
+ uint16_t AverageCoreC0Residency[8]; //Filtered value of [average C0 residency % per core]
+ uint32_t MetricsCounter; //Counts the # of metrics table parameter reads per update to the
+ //metrics table, i.e. if the metrics table update happens every
+ //1 second, this value could be up to 1000 if the smu collected
+ //metrics data every cycle, or as low as 0 if the smu was asleep
+ //the whole time. Reset to 0 after writing.
} SmuMetrics_t;
typedef struct {
#define SMU13_DRIVER_IF_VERSION_INV 0xFFFFFFFF
#define SMU13_DRIVER_IF_VERSION_YELLOW_CARP 0x04
#define SMU13_DRIVER_IF_VERSION_ALDE 0x08
-#define SMU13_DRIVER_IF_VERSION_SMU_V13_0_4 0x05
+#define SMU13_DRIVER_IF_VERSION_SMU_V13_0_4 0x07
#define SMU13_DRIVER_IF_VERSION_SMU_V13_0_5 0x04
-#define SMU13_DRIVER_IF_VERSION_SMU_V13_0_0 0x30
+#define SMU13_DRIVER_IF_VERSION_SMU_V13_0_0_10 0x32
#define SMU13_DRIVER_IF_VERSION_SMU_V13_0_7 0x2C
#define SMU13_DRIVER_IF_VERSION_SMU_V13_0_10 0x1D
static int arcturus_set_df_cstate(struct smu_context *smu,
enum pp_df_cstate state)
{
+ struct amdgpu_device *adev = smu->adev;
uint32_t smu_version;
int ret;
+ /*
+ * Arcturus does not need the cstate disablement
+ * prerequisite for gpu reset.
+ */
+ if (amdgpu_in_reset(adev) || adev->in_suspend)
+ return 0;
+
ret = smu_cmn_get_smc_version(smu, NULL, &smu_version);
if (ret) {
dev_err(smu->adev->dev, "Failed to get smu version!\n");
static int aldebaran_set_df_cstate(struct smu_context *smu,
enum pp_df_cstate state)
{
+ struct amdgpu_device *adev = smu->adev;
+
+ /*
+ * Aldebaran does not need the cstate disablement
+ * prerequisite for gpu reset.
+ */
+ if (amdgpu_in_reset(adev) || adev->in_suspend)
+ return 0;
+
return smu_cmn_send_smc_msg_with_param(smu, SMU_MSG_DFCstateControl, state, NULL);
}
return 0;
if ((adev->ip_versions[MP1_HWIP][0] == IP_VERSION(13, 0, 7)) ||
- (adev->ip_versions[MP1_HWIP][0] == IP_VERSION(13, 0, 0)))
+ (adev->ip_versions[MP1_HWIP][0] == IP_VERSION(13, 0, 0)) ||
+ (adev->ip_versions[MP1_HWIP][0] == IP_VERSION(13, 0, 10)))
return 0;
/* override pptable_id from driver parameter */
smu->smc_driver_if_version = SMU13_DRIVER_IF_VERSION_ALDE;
break;
case IP_VERSION(13, 0, 0):
- smu->smc_driver_if_version = SMU13_DRIVER_IF_VERSION_SMU_V13_0_0;
+ case IP_VERSION(13, 0, 10):
+ smu->smc_driver_if_version = SMU13_DRIVER_IF_VERSION_SMU_V13_0_0_10;
break;
case IP_VERSION(13, 0, 7):
smu->smc_driver_if_version = SMU13_DRIVER_IF_VERSION_SMU_V13_0_7;
case IP_VERSION(13, 0, 5):
smu->smc_driver_if_version = SMU13_DRIVER_IF_VERSION_SMU_V13_0_5;
break;
- case IP_VERSION(13, 0, 10):
- smu->smc_driver_if_version = SMU13_DRIVER_IF_VERSION_SMU_V13_0_10;
- break;
default:
dev_err(adev->dev, "smu unsupported IP version: 0x%x.\n",
adev->ip_versions[MP1_HWIP][0]);
dev_info(adev->dev, "override pptable id %d\n", pptable_id);
} else {
pptable_id = smu->smu_table.boot_values.pp_table_id;
-
- if (adev->ip_versions[MP1_HWIP][0] == IP_VERSION(13, 0, 10))
- pptable_id = 6666;
}
/* force using vbios pptable in sriov mode */
case IP_VERSION(13, 0, 5):
case IP_VERSION(13, 0, 7):
case IP_VERSION(13, 0, 8):
+ case IP_VERSION(13, 0, 10):
if (!(adev->pm.pp_feature & PP_GFXOFF_MASK))
return 0;
if (enable)
MSG_MAP(NotifyPowerSource, PPSMC_MSG_NotifyPowerSource, 0),
MSG_MAP(Mode1Reset, PPSMC_MSG_Mode1Reset, 0),
MSG_MAP(PrepareMp1ForUnload, PPSMC_MSG_PrepareMp1ForUnload, 0),
+ MSG_MAP(DFCstateControl, PPSMC_MSG_SetExternalClientDfCstateAllow, 0),
};
static struct cmn2asic_mapping smu_v13_0_0_clk_map[SMU_CLK_COUNT] = {
return ret;
}
+static int smu_v13_0_0_set_df_cstate(struct smu_context *smu,
+ enum pp_df_cstate state)
+{
+ return smu_cmn_send_smc_msg_with_param(smu,
+ SMU_MSG_DFCstateControl,
+ state,
+ NULL);
+}
+
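This mirrors the smu_v13_0_7 variant added below: both route SMU_MSG_DFCstateControl to PPSMC_MSG_SetExternalClientDfCstateAllow via the new MSG_MAP entries, giving the driver a way to disallow DF C-states around GPU reset; the Arcturus and Aldebaran hunks above opt out of that prerequisite explicitly.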
static const struct pptable_funcs smu_v13_0_0_ppt_funcs = {
.get_allowed_feature_mask = smu_v13_0_0_get_allowed_feature_mask,
.set_default_dpm_table = smu_v13_0_0_set_default_dpm_table,
.mode1_reset_is_support = smu_v13_0_0_is_mode1_reset_supported,
.mode1_reset = smu_v13_0_mode1_reset,
.set_mp1_state = smu_v13_0_0_set_mp1_state,
+ .set_df_cstate = smu_v13_0_0_set_df_cstate,
};
void smu_v13_0_0_set_ppt_funcs(struct smu_context *smu)
MSG_MAP(Mode1Reset, PPSMC_MSG_Mode1Reset, 0),
MSG_MAP(PrepareMp1ForUnload, PPSMC_MSG_PrepareMp1ForUnload, 0),
MSG_MAP(SetMGpuFanBoostLimitRpm, PPSMC_MSG_SetMGpuFanBoostLimitRpm, 0),
+ MSG_MAP(DFCstateControl, PPSMC_MSG_SetExternalClientDfCstateAllow, 0),
};
static struct cmn2asic_mapping smu_v13_0_7_clk_map[SMU_CLK_COUNT] = {
return true;
}
+
+static int smu_v13_0_7_set_df_cstate(struct smu_context *smu,
+ enum pp_df_cstate state)
+{
+ return smu_cmn_send_smc_msg_with_param(smu,
+ SMU_MSG_DFCstateControl,
+ state,
+ NULL);
+}
+
static const struct pptable_funcs smu_v13_0_7_ppt_funcs = {
.get_allowed_feature_mask = smu_v13_0_7_get_allowed_feature_mask,
.set_default_dpm_table = smu_v13_0_7_set_default_dpm_table,
.mode1_reset_is_support = smu_v13_0_7_is_mode1_reset_supported,
.mode1_reset = smu_v13_0_mode1_reset,
.set_mp1_state = smu_v13_0_7_set_mp1_state,
+ .set_df_cstate = smu_v13_0_7_set_df_cstate,
};
void smu_v13_0_7_set_ppt_funcs(struct smu_context *smu)
struct gpio_desc *gpio_powerdown;
struct device_link *link;
bool pre_enabled;
+ bool need_post_hpd_delay;
};
static const struct regmap_config ps8640_regmap_config[] = {
{
struct regmap *map = ps_bridge->regmap[PAGE2_TOP_CNTL];
int status;
+ int ret;
/*
* Apparently something about the firmware in the chip signals that
* HPD goes high by reporting GPIO9 as high (even though HPD isn't
* actually connected to GPIO9).
*/
- return regmap_read_poll_timeout(map, PAGE2_GPIO_H, status,
- status & PS_GPIO9, wait_us / 10, wait_us);
+ ret = regmap_read_poll_timeout(map, PAGE2_GPIO_H, status,
+ status & PS_GPIO9, wait_us / 10, wait_us);
+
+ /*
+ * The first time we see HPD go high after a reset we delay an extra
+ * 50 ms. The best guess is that the MCU is doing "stuff" during this
+ * time (maybe talking to the panel) and we don't want to interrupt it.
+ *
+ * No locking is done around "need_post_hpd_delay". If we're here we
+ * know we're holding a PM Runtime reference and the only other place
+ * that touches this is PM Runtime resume.
+ */
+ if (!ret && ps_bridge->need_post_hpd_delay) {
+ ps_bridge->need_post_hpd_delay = false;
+ msleep(50);
+ }
+
+ return ret;
}
static int ps8640_wait_hpd_asserted(struct drm_dp_aux *aux, unsigned long wait_us)
msleep(50);
gpiod_set_value(ps_bridge->gpio_reset, 0);
+ /* We just reset things, so we need a delay after the first HPD */
+ ps_bridge->need_post_hpd_delay = true;
+
/*
* Mystery 200 ms delay for the "MCU to be ready". It's unclear if
* this is truly necessary since the MCU will already signal that
if (drm_WARN_ON(dev, funcs && funcs->destroy))
return -EINVAL;
- ret = __drm_connector_init(dev, connector, funcs, connector_type, NULL);
+ ret = __drm_connector_init(dev, connector, funcs, connector_type, ddc);
if (ret)
return ret;
return false;
}
+static const uint32_t conv_from_xrgb8888[] = {
+ DRM_FORMAT_XRGB8888,
+ DRM_FORMAT_ARGB8888,
+ DRM_FORMAT_XRGB2101010,
+ DRM_FORMAT_ARGB2101010,
+ DRM_FORMAT_RGB565,
+ DRM_FORMAT_RGB888,
+};
+
+static const uint32_t conv_from_rgb565_888[] = {
+ DRM_FORMAT_XRGB8888,
+ DRM_FORMAT_ARGB8888,
+};
+
+static bool is_conversion_supported(uint32_t from, uint32_t to)
+{
+ switch (from) {
+ case DRM_FORMAT_XRGB8888:
+ case DRM_FORMAT_ARGB8888:
+ return is_listed_fourcc(conv_from_xrgb8888, ARRAY_SIZE(conv_from_xrgb8888), to);
+ case DRM_FORMAT_RGB565:
+ case DRM_FORMAT_RGB888:
+ return is_listed_fourcc(conv_from_rgb565_888, ARRAY_SIZE(conv_from_rgb565_888), to);
+ case DRM_FORMAT_XRGB2101010:
+ return to == DRM_FORMAT_ARGB2101010;
+ case DRM_FORMAT_ARGB2101010:
+ return to == DRM_FORMAT_XRGB2101010;
+ default:
+ return false;
+ }
+}
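For instance, with the tables above, an emulated XRGB8888 framebuffer is supported on an RGB565-native device (is_conversion_supported(DRM_FORMAT_XRGB8888, DRM_FORMAT_RGB565) is true, since conv_from_xrgb8888 lists RGB565), while emulating RGB565 on an RGB888-native device is not, since conv_from_rgb565_888 only lists the XRGB8888/ARGB8888 targets.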
+
/**
* drm_fb_build_fourcc_list - Filters a list of supported color formats against
* the device's native formats
* be handed over to drm_universal_plane_init() et al. Native formats
* will go before emulated formats. Other heuristics might be applied
* to optimize the order. Formats near the beginning of the list are
- * usually preferred over formats near the end of the list.
+ * usually preferred over formats near the end of the list. Formats
+ * without conversion helpers will be skipped. New drivers should only
+ * pass in XRGB8888 and avoid exposing additional emulated formats.
*
* Returns:
* The number of color-formats 4CC codes returned in @fourccs_out.
{
u32 *fourccs = fourccs_out;
const u32 *fourccs_end = fourccs_out + nfourccs_out;
- bool found_native = false;
+ uint32_t native_format = 0;
size_t i;
/*
drm_dbg_kms(dev, "adding native format %p4cc\n", &fourcc);
- if (!found_native)
- found_native = is_listed_fourcc(driver_fourccs, driver_nfourccs, fourcc);
+ /*
+ * There should only be one native format with the current API.
+ * This API needs to be refactored to correctly support arbitrary
+ * sets of native formats, since it needs to report which native
+ * format to use for each emulated format.
+ */
+ if (!native_format)
+ native_format = fourcc;
*fourccs = fourcc;
++fourccs;
}
- /*
- * The plane's atomic_update helper converts the framebuffer's color format
- * to a native format when copying to device memory.
- *
- * If there is not a single format supported by both, device and
- * driver, the native formats are likely not supported by the conversion
- * helpers. Therefore *only* support the native formats and add a
- * conversion helper ASAP.
- */
- if (!found_native) {
- drm_warn(dev, "Format conversion helpers required to add extra formats.\n");
- goto out;
- }
-
/*
* The extra formats, emulated by the driver, go second.
*/
} else if (fourccs == fourccs_end) {
drm_warn(dev, "Ignoring emulated format %p4cc\n", &fourcc);
continue; /* end of available output buffer */
+ } else if (!is_conversion_supported(fourcc, native_format)) {
+ drm_dbg_kms(dev, "Unsupported emulated format %p4cc\n", &fourcc);
+ continue; /* format is not supported for conversion */
}
drm_dbg_kms(dev, "adding emulated format %p4cc\n", &fourcc);
++fourccs;
}
-out:
return fourccs - fourccs_out;
}
EXPORT_SYMBOL(drm_fb_build_fourcc_list);
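A hedged usage sketch of this helper in its current seven-parameter form (taking both the native and the driver-advertised format lists; every my_* name below is a hypothetical driver-side example, and the call would sit in the driver's plane-init path):

	static const u32 my_native_formats[] = {
		DRM_FORMAT_XRGB8888,	/* the device's wire format */
	};

	static const u32 my_driver_formats[] = {
		DRM_FORMAT_XRGB8888,	/* formats the driver wants to expose */
		DRM_FORMAT_RGB565,
	};

	u32 formats[8];
	size_t nformats;

	/* Native formats come first; emulated formats follow, and any
	 * entry lacking a conversion helper to the native format is
	 * now skipped instead of being advertised. */
	nformats = drm_fb_build_fourcc_list(dev,
					    my_native_formats, ARRAY_SIZE(my_native_formats),
					    my_driver_formats, ARRAY_SIZE(my_driver_formats),
					    formats, ARRAY_SIZE(formats));
	/* hand formats/nformats to drm_universal_plane_init() */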
display/intel_ddi.o \
display/intel_ddi_buf_trans.o \
display/intel_display_trace.o \
+ display/intel_dkl_phy.o \
display/intel_dp.o \
display/intel_dp_aux.o \
display/intel_dp_aux_backlight.o \
#include "intel_de.h"
#include "intel_display_power.h"
#include "intel_display_types.h"
+#include "intel_dkl_phy.h"
#include "intel_dp.h"
#include "intel_dp_link_training.h"
#include "intel_dp_mst.h"
for (ln = 0; ln < 2; ln++) {
int level;
- intel_de_write(dev_priv, HIP_INDEX_REG(tc_port),
- HIP_INDEX_VAL(tc_port, ln));
-
- intel_de_write(dev_priv, DKL_TX_PMD_LANE_SUS(tc_port), 0);
+ intel_dkl_phy_write(dev_priv, DKL_TX_PMD_LANE_SUS(tc_port), ln, 0);
level = intel_ddi_level(encoder, crtc_state, 2*ln+0);
- intel_de_rmw(dev_priv, DKL_TX_DPCNTL0(tc_port),
- DKL_TX_PRESHOOT_COEFF_MASK |
- DKL_TX_DE_EMPAHSIS_COEFF_MASK |
- DKL_TX_VSWING_CONTROL_MASK,
- DKL_TX_PRESHOOT_COEFF(trans->entries[level].dkl.preshoot) |
- DKL_TX_DE_EMPHASIS_COEFF(trans->entries[level].dkl.de_emphasis) |
- DKL_TX_VSWING_CONTROL(trans->entries[level].dkl.vswing));
+ intel_dkl_phy_rmw(dev_priv, DKL_TX_DPCNTL0(tc_port), ln,
+ DKL_TX_PRESHOOT_COEFF_MASK |
+ DKL_TX_DE_EMPAHSIS_COEFF_MASK |
+ DKL_TX_VSWING_CONTROL_MASK,
+ DKL_TX_PRESHOOT_COEFF(trans->entries[level].dkl.preshoot) |
+ DKL_TX_DE_EMPHASIS_COEFF(trans->entries[level].dkl.de_emphasis) |
+ DKL_TX_VSWING_CONTROL(trans->entries[level].dkl.vswing));
level = intel_ddi_level(encoder, crtc_state, 2*ln+1);
- intel_de_rmw(dev_priv, DKL_TX_DPCNTL1(tc_port),
- DKL_TX_PRESHOOT_COEFF_MASK |
- DKL_TX_DE_EMPAHSIS_COEFF_MASK |
- DKL_TX_VSWING_CONTROL_MASK,
- DKL_TX_PRESHOOT_COEFF(trans->entries[level].dkl.preshoot) |
- DKL_TX_DE_EMPHASIS_COEFF(trans->entries[level].dkl.de_emphasis) |
- DKL_TX_VSWING_CONTROL(trans->entries[level].dkl.vswing));
+ intel_dkl_phy_rmw(dev_priv, DKL_TX_DPCNTL1(tc_port), ln,
+ DKL_TX_PRESHOOT_COEFF_MASK |
+ DKL_TX_DE_EMPAHSIS_COEFF_MASK |
+ DKL_TX_VSWING_CONTROL_MASK,
+ DKL_TX_PRESHOOT_COEFF(trans->entries[level].dkl.preshoot) |
+ DKL_TX_DE_EMPHASIS_COEFF(trans->entries[level].dkl.de_emphasis) |
+ DKL_TX_VSWING_CONTROL(trans->entries[level].dkl.vswing));
- intel_de_rmw(dev_priv, DKL_TX_DPCNTL2(tc_port),
- DKL_TX_DP20BITMODE, 0);
+ intel_dkl_phy_rmw(dev_priv, DKL_TX_DPCNTL2(tc_port), ln,
+ DKL_TX_DP20BITMODE, 0);
if (IS_ALDERLAKE_P(dev_priv)) {
u32 val;
val |= DKL_TX_DPCNTL2_CFG_LOADGENSELECT_TX2(0);
}
- intel_de_rmw(dev_priv, DKL_TX_DPCNTL2(tc_port),
- DKL_TX_DPCNTL2_CFG_LOADGENSELECT_TX1_MASK |
- DKL_TX_DPCNTL2_CFG_LOADGENSELECT_TX2_MASK,
- val);
+ intel_dkl_phy_rmw(dev_priv, DKL_TX_DPCNTL2(tc_port), ln,
+ DKL_TX_DPCNTL2_CFG_LOADGENSELECT_TX1_MASK |
+ DKL_TX_DPCNTL2_CFG_LOADGENSELECT_TX2_MASK,
+ val);
}
}
}
return;
if (DISPLAY_VER(dev_priv) >= 12) {
- intel_de_write(dev_priv, HIP_INDEX_REG(tc_port),
- HIP_INDEX_VAL(tc_port, 0x0));
- ln0 = intel_de_read(dev_priv, DKL_DP_MODE(tc_port));
- intel_de_write(dev_priv, HIP_INDEX_REG(tc_port),
- HIP_INDEX_VAL(tc_port, 0x1));
- ln1 = intel_de_read(dev_priv, DKL_DP_MODE(tc_port));
+ ln0 = intel_dkl_phy_read(dev_priv, DKL_DP_MODE(tc_port), 0);
+ ln1 = intel_dkl_phy_read(dev_priv, DKL_DP_MODE(tc_port), 1);
} else {
ln0 = intel_de_read(dev_priv, MG_DP_MODE(0, tc_port));
ln1 = intel_de_read(dev_priv, MG_DP_MODE(1, tc_port));
}
if (DISPLAY_VER(dev_priv) >= 12) {
- intel_de_write(dev_priv, HIP_INDEX_REG(tc_port),
- HIP_INDEX_VAL(tc_port, 0x0));
- intel_de_write(dev_priv, DKL_DP_MODE(tc_port), ln0);
- intel_de_write(dev_priv, HIP_INDEX_REG(tc_port),
- HIP_INDEX_VAL(tc_port, 0x1));
- intel_de_write(dev_priv, DKL_DP_MODE(tc_port), ln1);
+ intel_dkl_phy_write(dev_priv, DKL_DP_MODE(tc_port), 0, ln0);
+ intel_dkl_phy_write(dev_priv, DKL_DP_MODE(tc_port), 1, ln1);
} else {
intel_de_write(dev_priv, MG_DP_MODE(0, tc_port), ln0);
intel_de_write(dev_priv, MG_DP_MODE(1, tc_port), ln1);
enum tc_port tc_port = intel_port_to_tc(i915, encoder->port);
int ln;
- for (ln = 0; ln < 2; ln++) {
- intel_de_write(i915, HIP_INDEX_REG(tc_port), HIP_INDEX_VAL(tc_port, ln));
- intel_de_rmw(i915, DKL_PCS_DW5(tc_port), DKL_PCS_DW5_CORE_SOFTRESET, 0);
- }
+ for (ln = 0; ln < 2; ln++)
+ intel_dkl_phy_rmw(i915, DKL_PCS_DW5(tc_port), ln, DKL_PCS_DW5_CORE_SOFTRESET, 0);
}
static void intel_ddi_prepare_link_retrain(struct intel_dp *intel_dp,
struct intel_global_obj obj;
} dbuf;
+ struct {
+ /*
+ * dkl.phy_lock protects against concurrent access of the
+ * Dekel TypeC PHYs.
+ */
+ spinlock_t phy_lock;
+ } dkl;
+
The new dkl.phy_lock matters because every Dekel PHY access is a two-step sequence: a write to HIP_INDEX_REG selects the lane bank, then the register itself is read or written. Without serialization, a concurrent accessor could retarget the index between the two steps. A minimal userspace sketch of the pattern, with a pthread mutex standing in for the spinlock (names here are illustrative, not the driver's API)::
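
    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t phy_lock = PTHREAD_MUTEX_INITIALIZER;
    static int selected_lane;                  /* stands in for HIP_INDEX_REG */
    static int lane_reg[2] = { 0x10, 0x20 };   /* one banked register, two lanes */

    static int phy_read(int lane)
    {
        int val;

        pthread_mutex_lock(&phy_lock);
        selected_lane = lane;                  /* step 1: select the bank */
        val = lane_reg[selected_lane];         /* step 2: indexed access */
        pthread_mutex_unlock(&phy_lock);
        /* without the lock, another thread could change selected_lane
         * between step 1 and step 2 and we would read the wrong lane */
        return val;
    }

    int main(void)
    {
        printf("lane 1 register = 0x%x\n", phy_read(1));
        return 0;
    }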
struct {
/* VLV/CHV/BXT/GLK DSI MMIO register base address */
u32 mmio_base;
#include "intel_de.h"
#include "intel_display_power_well.h"
#include "intel_display_types.h"
+#include "intel_dkl_phy.h"
#include "intel_dmc.h"
#include "intel_dpio_phy.h"
#include "intel_dpll.h"
enum tc_port tc_port;
tc_port = TGL_AUX_PW_TO_TC_PORT(i915_power_well_instance(power_well)->hsw.idx);
- intel_de_write(dev_priv, HIP_INDEX_REG(tc_port),
- HIP_INDEX_VAL(tc_port, 0x2));
- if (intel_de_wait_for_set(dev_priv, DKL_CMN_UC_DW_27(tc_port),
- DKL_CMN_UC_DW27_UC_HEALTH, 1))
+ if (wait_for(intel_dkl_phy_read(dev_priv, DKL_CMN_UC_DW_27(tc_port), 2) &
+ DKL_CMN_UC_DW27_UC_HEALTH, 1))
drm_warn(&dev_priv->drm,
"Timeout waiting TC uC health\n");
}
--- /dev/null
+// SPDX-License-Identifier: MIT
+/*
+ * Copyright © 2022 Intel Corporation
+ */
+
+#include "i915_drv.h"
+#include "i915_reg.h"
+
+#include "intel_de.h"
+#include "intel_display.h"
+#include "intel_dkl_phy.h"
+
+static void
+dkl_phy_set_hip_idx(struct drm_i915_private *i915, i915_reg_t reg, int idx)
+{
+ enum tc_port tc_port = DKL_REG_TC_PORT(reg);
+
+ drm_WARN_ON(&i915->drm, tc_port < TC_PORT_1 || tc_port >= I915_MAX_TC_PORTS);
+
+ intel_de_write(i915,
+ HIP_INDEX_REG(tc_port),
+ HIP_INDEX_VAL(tc_port, idx));
+}
+
+/**
+ * intel_dkl_phy_read - read a Dekel PHY register
+ * @i915: i915 device instance
+ * @reg: Dekel PHY register
+ * @ln: lane instance of @reg
+ *
+ * Read the @reg Dekel PHY register.
+ *
+ * Returns the read value.
+ */
+u32
+intel_dkl_phy_read(struct drm_i915_private *i915, i915_reg_t reg, int ln)
+{
+ u32 val;
+
+ spin_lock(&i915->display.dkl.phy_lock);
+
+ dkl_phy_set_hip_idx(i915, reg, ln);
+ val = intel_de_read(i915, reg);
+
+ spin_unlock(&i915->display.dkl.phy_lock);
+
+ return val;
+}
+
+/**
+ * intel_dkl_phy_write - write a Dekel PHY register
+ * @i915: i915 device instance
+ * @reg: Dekel PHY register
+ * @ln: lane instance of @reg
+ * @val: value to write
+ *
+ * Write @val to the @reg Dekel PHY register.
+ */
+void
+intel_dkl_phy_write(struct drm_i915_private *i915, i915_reg_t reg, int ln, u32 val)
+{
+ spin_lock(&i915->display.dkl.phy_lock);
+
+ dkl_phy_set_hip_idx(i915, reg, ln);
+ intel_de_write(i915, reg, val);
+
+ spin_unlock(&i915->display.dkl.phy_lock);
+}
+
+/**
+ * intel_dkl_phy_rmw - read-modify-write a Dekel PHY register
+ * @i915: i915 device instance
+ * @reg: Dekel PHY register
+ * @ln: lane instance of @reg
+ * @clear: mask to clear
+ * @set: mask to set
+ *
+ * Read the @reg Dekel PHY register, clearing then setting the @clear/@set bits in it, and writing
+ * this value back to the register if the value differs from the read one.
+ */
+void
+intel_dkl_phy_rmw(struct drm_i915_private *i915, i915_reg_t reg, int ln, u32 clear, u32 set)
+{
+ spin_lock(&i915->display.dkl.phy_lock);
+
+ dkl_phy_set_hip_idx(i915, reg, ln);
+ intel_de_rmw(i915, reg, clear, set);
+
+ spin_unlock(&i915->display.dkl.phy_lock);
+}
+
+/**
+ * intel_dkl_phy_posting_read - do a posting read from a Dekel PHY register
+ * @i915: i915 device instance
+ * @reg: Dekel PHY register
+ * @ln: lane instance of @reg
+ *
+ * Read the @reg Dekel PHY register without returning the read value.
+ */
+void
+intel_dkl_phy_posting_read(struct drm_i915_private *i915, i915_reg_t reg, int ln)
+{
+ spin_lock(&i915->display.dkl.phy_lock);
+
+ dkl_phy_set_hip_idx(i915, reg, ln);
+ intel_de_posting_read(i915, reg);
+
+ spin_unlock(&i915->display.dkl.phy_lock);
+}
--- /dev/null
+/* SPDX-License-Identifier: MIT */
+/*
+ * Copyright © 2022 Intel Corporation
+ */
+
+#ifndef __INTEL_DKL_PHY_H__
+#define __INTEL_DKL_PHY_H__
+
+#include <linux/types.h>
+
+#include "i915_reg_defs.h"
+
+struct drm_i915_private;
+
+u32
+intel_dkl_phy_read(struct drm_i915_private *i915, i915_reg_t reg, int ln);
+void
+intel_dkl_phy_write(struct drm_i915_private *i915, i915_reg_t reg, int ln, u32 val);
+void
+intel_dkl_phy_rmw(struct drm_i915_private *i915, i915_reg_t reg, int ln, u32 clear, u32 set);
+void
+intel_dkl_phy_posting_read(struct drm_i915_private *i915, i915_reg_t reg, int ln);
+
+#endif /* __INTEL_DKL_PHY_H__ */
drm_dp_pcon_hdmi_frl_link_error_count(&intel_dp->aux, &intel_dp->attached_connector->base);
+ intel_dp->frl.is_trained = false;
+
/* Restart FRL training or fall back to TMDS mode */
intel_dp_check_frl_training(intel_dp);
}
encoder->devdata, IS_ERR(edid) ? NULL : edid);
intel_panel_add_edid_fixed_modes(intel_connector,
- intel_connector->panel.vbt.drrs_type != DRRS_TYPE_NONE,
+ intel_connector->panel.vbt.drrs_type != DRRS_TYPE_NONE ||
intel_vrr_is_capable(intel_connector));
/* MSO requires information from the EDID */
#include "intel_de.h"
#include "intel_display_types.h"
+#include "intel_dkl_phy.h"
#include "intel_dpio_phy.h"
#include "intel_dpll.h"
#include "intel_dpll_mgr.h"
* All registers read here have the same HIP_INDEX_REG even though
* they are on different building blocks
*/
- intel_de_write(dev_priv, HIP_INDEX_REG(tc_port),
- HIP_INDEX_VAL(tc_port, 0x2));
-
- hw_state->mg_refclkin_ctl = intel_de_read(dev_priv,
- DKL_REFCLKIN_CTL(tc_port));
+ hw_state->mg_refclkin_ctl = intel_dkl_phy_read(dev_priv,
+ DKL_REFCLKIN_CTL(tc_port), 2);
hw_state->mg_refclkin_ctl &= MG_REFCLKIN_CTL_OD_2_MUX_MASK;
hw_state->mg_clktop2_hsclkctl =
- intel_de_read(dev_priv, DKL_CLKTOP2_HSCLKCTL(tc_port));
+ intel_dkl_phy_read(dev_priv, DKL_CLKTOP2_HSCLKCTL(tc_port), 2);
hw_state->mg_clktop2_hsclkctl &=
MG_CLKTOP2_HSCLKCTL_TLINEDRV_CLKSEL_MASK |
MG_CLKTOP2_HSCLKCTL_CORE_INPUTSEL_MASK |
MG_CLKTOP2_HSCLKCTL_DSDIV_RATIO_MASK;
hw_state->mg_clktop2_coreclkctl1 =
- intel_de_read(dev_priv, DKL_CLKTOP2_CORECLKCTL1(tc_port));
+ intel_dkl_phy_read(dev_priv, DKL_CLKTOP2_CORECLKCTL1(tc_port), 2);
hw_state->mg_clktop2_coreclkctl1 &=
MG_CLKTOP2_CORECLKCTL1_A_DIVRATIO_MASK;
- hw_state->mg_pll_div0 = intel_de_read(dev_priv, DKL_PLL_DIV0(tc_port));
+ hw_state->mg_pll_div0 = intel_dkl_phy_read(dev_priv, DKL_PLL_DIV0(tc_port), 2);
val = DKL_PLL_DIV0_MASK;
if (dev_priv->display.vbt.override_afc_startup)
val |= DKL_PLL_DIV0_AFC_STARTUP_MASK;
hw_state->mg_pll_div0 &= val;
- hw_state->mg_pll_div1 = intel_de_read(dev_priv, DKL_PLL_DIV1(tc_port));
+ hw_state->mg_pll_div1 = intel_dkl_phy_read(dev_priv, DKL_PLL_DIV1(tc_port), 2);
hw_state->mg_pll_div1 &= (DKL_PLL_DIV1_IREF_TRIM_MASK |
DKL_PLL_DIV1_TDC_TARGET_CNT_MASK);
- hw_state->mg_pll_ssc = intel_de_read(dev_priv, DKL_PLL_SSC(tc_port));
+ hw_state->mg_pll_ssc = intel_dkl_phy_read(dev_priv, DKL_PLL_SSC(tc_port), 2);
hw_state->mg_pll_ssc &= (DKL_PLL_SSC_IREF_NDIV_RATIO_MASK |
DKL_PLL_SSC_STEP_LEN_MASK |
DKL_PLL_SSC_STEP_NUM_MASK |
DKL_PLL_SSC_EN);
- hw_state->mg_pll_bias = intel_de_read(dev_priv, DKL_PLL_BIAS(tc_port));
+ hw_state->mg_pll_bias = intel_dkl_phy_read(dev_priv, DKL_PLL_BIAS(tc_port), 2);
hw_state->mg_pll_bias &= (DKL_PLL_BIAS_FRAC_EN_H |
DKL_PLL_BIAS_FBDIV_FRAC_MASK);
hw_state->mg_pll_tdc_coldst_bias =
- intel_de_read(dev_priv, DKL_PLL_TDC_COLDST_BIAS(tc_port));
+ intel_dkl_phy_read(dev_priv, DKL_PLL_TDC_COLDST_BIAS(tc_port), 2);
hw_state->mg_pll_tdc_coldst_bias &= (DKL_PLL_TDC_SSC_STEP_SIZE_MASK |
DKL_PLL_TDC_FEED_FWD_GAIN_MASK);
* All registers programmed here have the same HIP_INDEX_REG even
* though on different building block
*/
- intel_de_write(dev_priv, HIP_INDEX_REG(tc_port),
- HIP_INDEX_VAL(tc_port, 0x2));
-
/* All the registers are RMW */
- val = intel_de_read(dev_priv, DKL_REFCLKIN_CTL(tc_port));
+ val = intel_dkl_phy_read(dev_priv, DKL_REFCLKIN_CTL(tc_port), 2);
val &= ~MG_REFCLKIN_CTL_OD_2_MUX_MASK;
val |= hw_state->mg_refclkin_ctl;
- intel_de_write(dev_priv, DKL_REFCLKIN_CTL(tc_port), val);
+ intel_dkl_phy_write(dev_priv, DKL_REFCLKIN_CTL(tc_port), 2, val);
- val = intel_de_read(dev_priv, DKL_CLKTOP2_CORECLKCTL1(tc_port));
+ val = intel_dkl_phy_read(dev_priv, DKL_CLKTOP2_CORECLKCTL1(tc_port), 2);
val &= ~MG_CLKTOP2_CORECLKCTL1_A_DIVRATIO_MASK;
val |= hw_state->mg_clktop2_coreclkctl1;
- intel_de_write(dev_priv, DKL_CLKTOP2_CORECLKCTL1(tc_port), val);
+ intel_dkl_phy_write(dev_priv, DKL_CLKTOP2_CORECLKCTL1(tc_port), 2, val);
- val = intel_de_read(dev_priv, DKL_CLKTOP2_HSCLKCTL(tc_port));
+ val = intel_dkl_phy_read(dev_priv, DKL_CLKTOP2_HSCLKCTL(tc_port), 2);
val &= ~(MG_CLKTOP2_HSCLKCTL_TLINEDRV_CLKSEL_MASK |
MG_CLKTOP2_HSCLKCTL_CORE_INPUTSEL_MASK |
MG_CLKTOP2_HSCLKCTL_HSDIV_RATIO_MASK |
MG_CLKTOP2_HSCLKCTL_DSDIV_RATIO_MASK);
val |= hw_state->mg_clktop2_hsclkctl;
- intel_de_write(dev_priv, DKL_CLKTOP2_HSCLKCTL(tc_port), val);
+ intel_dkl_phy_write(dev_priv, DKL_CLKTOP2_HSCLKCTL(tc_port), 2, val);
val = DKL_PLL_DIV0_MASK;
if (dev_priv->display.vbt.override_afc_startup)
val |= DKL_PLL_DIV0_AFC_STARTUP_MASK;
- intel_de_rmw(dev_priv, DKL_PLL_DIV0(tc_port), val,
- hw_state->mg_pll_div0);
+ intel_dkl_phy_rmw(dev_priv, DKL_PLL_DIV0(tc_port), 2, val,
+ hw_state->mg_pll_div0);
- val = intel_de_read(dev_priv, DKL_PLL_DIV1(tc_port));
+ val = intel_dkl_phy_read(dev_priv, DKL_PLL_DIV1(tc_port), 2);
val &= ~(DKL_PLL_DIV1_IREF_TRIM_MASK |
DKL_PLL_DIV1_TDC_TARGET_CNT_MASK);
val |= hw_state->mg_pll_div1;
- intel_de_write(dev_priv, DKL_PLL_DIV1(tc_port), val);
+ intel_dkl_phy_write(dev_priv, DKL_PLL_DIV1(tc_port), 2, val);
- val = intel_de_read(dev_priv, DKL_PLL_SSC(tc_port));
+ val = intel_dkl_phy_read(dev_priv, DKL_PLL_SSC(tc_port), 2);
val &= ~(DKL_PLL_SSC_IREF_NDIV_RATIO_MASK |
DKL_PLL_SSC_STEP_LEN_MASK |
DKL_PLL_SSC_STEP_NUM_MASK |
DKL_PLL_SSC_EN);
val |= hw_state->mg_pll_ssc;
- intel_de_write(dev_priv, DKL_PLL_SSC(tc_port), val);
+ intel_dkl_phy_write(dev_priv, DKL_PLL_SSC(tc_port), 2, val);
- val = intel_de_read(dev_priv, DKL_PLL_BIAS(tc_port));
+ val = intel_dkl_phy_read(dev_priv, DKL_PLL_BIAS(tc_port), 2);
val &= ~(DKL_PLL_BIAS_FRAC_EN_H |
DKL_PLL_BIAS_FBDIV_FRAC_MASK);
val |= hw_state->mg_pll_bias;
- intel_de_write(dev_priv, DKL_PLL_BIAS(tc_port), val);
+ intel_dkl_phy_write(dev_priv, DKL_PLL_BIAS(tc_port), 2, val);
- val = intel_de_read(dev_priv, DKL_PLL_TDC_COLDST_BIAS(tc_port));
+ val = intel_dkl_phy_read(dev_priv, DKL_PLL_TDC_COLDST_BIAS(tc_port), 2);
val &= ~(DKL_PLL_TDC_SSC_STEP_SIZE_MASK |
DKL_PLL_TDC_FEED_FWD_GAIN_MASK);
val |= hw_state->mg_pll_tdc_coldst_bias;
- intel_de_write(dev_priv, DKL_PLL_TDC_COLDST_BIAS(tc_port), val);
+ intel_dkl_phy_write(dev_priv, DKL_PLL_TDC_COLDST_BIAS(tc_port), 2, val);
- intel_de_posting_read(dev_priv, DKL_PLL_TDC_COLDST_BIAS(tc_port));
+ intel_dkl_phy_posting_read(dev_priv, DKL_PLL_TDC_COLDST_BIAS(tc_port), 2);
}
static void icl_pll_power_enable(struct drm_i915_private *dev_priv,
/* Try EDID first */
intel_panel_add_edid_fixed_modes(intel_connector,
- intel_connector->panel.vbt.drrs_type != DRRS_TYPE_NONE,
- false);
+ intel_connector->panel.vbt.drrs_type != DRRS_TYPE_NONE);
/* Failed to get EDID, what about VBT? */
if (!intel_panel_preferred_fixed_mode(intel_connector))
}
void intel_panel_add_edid_fixed_modes(struct intel_connector *connector,
- bool has_drrs, bool has_vrr)
+ bool use_alt_fixed_modes)
{
intel_panel_add_edid_preferred_mode(connector);
- if (intel_panel_preferred_fixed_mode(connector) && (has_drrs || has_vrr))
+ if (intel_panel_preferred_fixed_mode(connector) && use_alt_fixed_modes)
intel_panel_add_edid_alt_fixed_modes(connector);
intel_panel_destroy_probed_modes(connector);
}
int intel_panel_compute_config(struct intel_connector *connector,
struct drm_display_mode *adjusted_mode);
void intel_panel_add_edid_fixed_modes(struct intel_connector *connector,
- bool has_drrs, bool has_vrr);
+ bool use_alt_fixed_modes);
void intel_panel_add_vbt_lfp_fixed_mode(struct intel_connector *connector);
void intel_panel_add_vbt_sdvo_fixed_mode(struct intel_connector *connector);
void intel_panel_add_encoder_fixed_mode(struct intel_connector *connector,
if (!intel_sdvo_connector)
return false;
- if (device == 0) {
- intel_sdvo->controlled_output |= SDVO_OUTPUT_TMDS0;
+ if (device == 0)
intel_sdvo_connector->output_flag = SDVO_OUTPUT_TMDS0;
- } else if (device == 1) {
- intel_sdvo->controlled_output |= SDVO_OUTPUT_TMDS1;
+ else if (device == 1)
intel_sdvo_connector->output_flag = SDVO_OUTPUT_TMDS1;
- }
intel_connector = &intel_sdvo_connector->base;
connector = &intel_connector->base;
encoder->encoder_type = DRM_MODE_ENCODER_TVDAC;
connector->connector_type = DRM_MODE_CONNECTOR_SVIDEO;
- intel_sdvo->controlled_output |= type;
intel_sdvo_connector->output_flag = type;
if (intel_sdvo_connector_init(intel_sdvo_connector, intel_sdvo) < 0) {
encoder->encoder_type = DRM_MODE_ENCODER_DAC;
connector->connector_type = DRM_MODE_CONNECTOR_VGA;
- if (device == 0) {
- intel_sdvo->controlled_output |= SDVO_OUTPUT_RGB0;
+ if (device == 0)
intel_sdvo_connector->output_flag = SDVO_OUTPUT_RGB0;
- } else if (device == 1) {
- intel_sdvo->controlled_output |= SDVO_OUTPUT_RGB1;
+ else if (device == 1)
intel_sdvo_connector->output_flag = SDVO_OUTPUT_RGB1;
- }
if (intel_sdvo_connector_init(intel_sdvo_connector, intel_sdvo) < 0) {
kfree(intel_sdvo_connector);
encoder->encoder_type = DRM_MODE_ENCODER_LVDS;
connector->connector_type = DRM_MODE_CONNECTOR_LVDS;
- if (device == 0) {
- intel_sdvo->controlled_output |= SDVO_OUTPUT_LVDS0;
+ if (device == 0)
intel_sdvo_connector->output_flag = SDVO_OUTPUT_LVDS0;
- } else if (device == 1) {
- intel_sdvo->controlled_output |= SDVO_OUTPUT_LVDS1;
+ else if (device == 1)
intel_sdvo_connector->output_flag = SDVO_OUTPUT_LVDS1;
- }
if (intel_sdvo_connector_init(intel_sdvo_connector, intel_sdvo) < 0) {
kfree(intel_sdvo_connector);
intel_panel_add_vbt_sdvo_fixed_mode(intel_connector);
if (!intel_panel_preferred_fixed_mode(intel_connector)) {
+ mutex_lock(&i915->drm.mode_config.mutex);
+
intel_ddc_get_modes(connector, &intel_sdvo->ddc);
- intel_panel_add_edid_fixed_modes(intel_connector, false, false);
+ intel_panel_add_edid_fixed_modes(intel_connector, false);
+
+ mutex_unlock(&i915->drm.mode_config.mutex);
}
intel_panel_init(intel_connector);
return false;
}
+static u16 intel_sdvo_filter_output_flags(u16 flags)
+{
+ flags &= SDVO_OUTPUT_MASK;
+
+	/* The SDVO spec says the XXX1 output may not exist unless the XXX0 output does. */
+ if (!(flags & SDVO_OUTPUT_TMDS0))
+ flags &= ~SDVO_OUTPUT_TMDS1;
+
+ if (!(flags & SDVO_OUTPUT_RGB0))
+ flags &= ~SDVO_OUTPUT_RGB1;
+
+ if (!(flags & SDVO_OUTPUT_LVDS0))
+ flags &= ~SDVO_OUTPUT_LVDS1;
+
+ return flags;
+}
+
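A compact illustration of the pairing rule enforced above: an XXX1 bit survives filtering only if the matching XXX0 bit is also set. The bit positions below are placeholders, not the real SDVO_OUTPUT_* values from intel_sdvo_regs.h::

    #include <stdio.h>
    #include <stdint.h>

    #define TMDS0 (1u << 0)   /* placeholder bit positions */
    #define TMDS1 (1u << 8)

    static uint16_t filter_output_flags(uint16_t flags)
    {
        if (!(flags & TMDS0))
            flags &= ~TMDS1;   /* orphan TMDS1 is dropped */
        return flags;
    }

    int main(void)
    {
        printf("0x%04x\n", filter_output_flags(TMDS1));         /* 0x0000 */
        printf("0x%04x\n", filter_output_flags(TMDS0 | TMDS1)); /* both kept */
        return 0;
    }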
static bool
intel_sdvo_output_setup(struct intel_sdvo *intel_sdvo, u16 flags)
{
- /* SDVO requires XXX1 function may not exist unless it has XXX0 function.*/
+ struct drm_i915_private *i915 = to_i915(intel_sdvo->base.base.dev);
+
+ flags = intel_sdvo_filter_output_flags(flags);
+
+ intel_sdvo->controlled_output = flags;
+
+ intel_sdvo_select_ddc_bus(i915, intel_sdvo);
if (flags & SDVO_OUTPUT_TMDS0)
if (!intel_sdvo_dvi_init(intel_sdvo, 0))
return false;
- if ((flags & SDVO_TMDS_MASK) == SDVO_TMDS_MASK)
+ if (flags & SDVO_OUTPUT_TMDS1)
if (!intel_sdvo_dvi_init(intel_sdvo, 1))
return false;
if (!intel_sdvo_analog_init(intel_sdvo, 0))
return false;
- if ((flags & SDVO_RGB_MASK) == SDVO_RGB_MASK)
+ if (flags & SDVO_OUTPUT_RGB1)
if (!intel_sdvo_analog_init(intel_sdvo, 1))
return false;
if (!intel_sdvo_lvds_init(intel_sdvo, 0))
return false;
- if ((flags & SDVO_LVDS_MASK) == SDVO_LVDS_MASK)
+ if (flags & SDVO_OUTPUT_LVDS1)
if (!intel_sdvo_lvds_init(intel_sdvo, 1))
return false;
- if ((flags & SDVO_OUTPUT_MASK) == 0) {
+ if (flags == 0) {
unsigned char bytes[2];
- intel_sdvo->controlled_output = 0;
memcpy(bytes, &intel_sdvo->caps.output_flags, 2);
DRM_DEBUG_KMS("%s: Unknown SDVO output type (0x%02x%02x)\n",
SDVO_NAME(intel_sdvo),
*/
intel_sdvo->base.cloneable = 0;
- intel_sdvo_select_ddc_bus(dev_priv, intel_sdvo);
-
/* Set the input timing to the screen. Assume always input 0. */
if (!intel_sdvo_set_target_input(intel_sdvo))
goto err_output;
#include <linux/scatterlist.h>
#include <linux/slab.h>
-#include <linux/swiotlb.h>
#include "i915_drv.h"
#include "i915_gem.h"
struct scatterlist *sg;
unsigned int sg_page_sizes;
unsigned int npages;
- int max_order;
+ int max_order = MAX_ORDER;
+ unsigned int max_segment;
gfp_t gfp;
- max_order = MAX_ORDER;
-#ifdef CONFIG_SWIOTLB
- if (is_swiotlb_active(obj->base.dev->dev)) {
- unsigned int max_segment;
-
- max_segment = swiotlb_max_segment();
- if (max_segment) {
- max_segment = max_t(unsigned int, max_segment,
- PAGE_SIZE) >> PAGE_SHIFT;
- max_order = min(max_order, ilog2(max_segment));
- }
- }
-#endif
+ max_segment = i915_sg_segment_size(i915->drm.dev) >> PAGE_SHIFT;
+ max_order = min(max_order, get_order(max_segment));
gfp = GFP_KERNEL | __GFP_HIGHMEM | __GFP_RECLAIMABLE;
if (IS_I965GM(i915) || IS_I965G(i915)) {
struct intel_memory_region *mem = obj->mm.region;
struct address_space *mapping = obj->base.filp->f_mapping;
const unsigned long page_count = obj->base.size / PAGE_SIZE;
- unsigned int max_segment = i915_sg_segment_size();
+ unsigned int max_segment = i915_sg_segment_size(i915->drm.dev);
struct sg_table *st;
struct sgt_iter sgt_iter;
struct page *page;
struct drm_i915_private *i915 = container_of(bdev, typeof(*i915), bdev);
struct intel_memory_region *mr = i915->mm.regions[INTEL_MEMORY_SYSTEM];
struct i915_ttm_tt *i915_tt = container_of(ttm, typeof(*i915_tt), ttm);
- const unsigned int max_segment = i915_sg_segment_size();
+ const unsigned int max_segment = i915_sg_segment_size(i915->drm.dev);
const size_t size = (size_t)ttm->num_pages << PAGE_SHIFT;
struct file *filp = i915_tt->filp;
struct sgt_iter sgt_iter;
ret = sg_alloc_table_from_pages_segment(st,
ttm->pages, ttm->num_pages,
0, (unsigned long)ttm->num_pages << PAGE_SHIFT,
- i915_sg_segment_size(), GFP_KERNEL);
+ i915_sg_segment_size(i915_tt->dev), GFP_KERNEL);
if (ret) {
st->sgl = NULL;
return ERR_PTR(ret);
static int i915_gem_userptr_get_pages(struct drm_i915_gem_object *obj)
{
const unsigned long num_pages = obj->base.size >> PAGE_SHIFT;
- unsigned int max_segment = i915_sg_segment_size();
+ unsigned int max_segment = i915_sg_segment_size(obj->base.dev->dev);
struct sg_table *st;
unsigned int sg_page_sizes;
struct page **pvec;
}
if (IS_DG1_GRAPHICS_STEP(i915, STEP_A0, STEP_B0) ||
- IS_ROCKETLAKE(i915) || IS_TIGERLAKE(i915)) {
+ IS_ROCKETLAKE(i915) || IS_TIGERLAKE(i915) || IS_ALDERLAKE_P(i915)) {
/*
* Wa_1607030317:tgl
* Wa_1607186500:tgl
- * Wa_1607297627:tgl,rkl,dg1[a0]
+ * Wa_1607297627:tgl,rkl,dg1[a0],adlp
*
* On TGL and RKL there are multiple entries for this WA in the
* BSpec; some indicate this is an A0-only WA, others indicate
mutex_init(&dev_priv->display.wm.wm_mutex);
mutex_init(&dev_priv->display.pps.mutex);
mutex_init(&dev_priv->display.hdcp.comp_mutex);
+ spin_lock_init(&dev_priv->display.dkl.phy_lock);
i915_memcpy_init_early(dev_priv);
intel_runtime_pm_init_early(&dev_priv->runtime_pm);
#define _DKL_PHY5_BASE 0x16C000
#define _DKL_PHY6_BASE 0x16D000
+#define DKL_REG_TC_PORT(__reg) \
+ (TC_PORT_1 + ((__reg).reg - _DKL_PHY1_BASE) / (_DKL_PHY2_BASE - _DKL_PHY1_BASE))
+
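DKL_REG_TC_PORT inverts the per-PHY address layout: the bases are spaced 0x1000 apart (PHY5 = 0x16C000 and PHY6 = 0x16D000 above), so integer-dividing a register's offset from the first base recovers the TC port. A standalone sketch of the arithmetic; the PHY1 base of 0x168000 is inferred from the stride and is an assumption here::

    #include <stdio.h>

    #define _DKL_PHY1_BASE 0x168000u   /* assumed from the 0x1000 stride */
    #define _DKL_PHY2_BASE 0x169000u
    #define TC_PORT_1      0           /* assumed first enum value */

    static int dkl_reg_tc_port(unsigned int reg)
    {
        return TC_PORT_1 +
               (reg - _DKL_PHY1_BASE) / (_DKL_PHY2_BASE - _DKL_PHY1_BASE);
    }

    int main(void)
    {
        /* an offset inside the PHY5 block maps to TC port index 4 */
        printf("0x16C014 -> TC port index %d\n", dkl_reg_tc_port(0x16C014));
        return 0;
    }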
/* DEKEL PHY MMIO Address = Phy base + (internal address & ~index_mask) */
#define _DKL_PCS_DW5 0x14
#define DKL_PCS_DW5(tc_port) _MMIO(_PORT(tc_port, _DKL_PHY1_BASE, \
#include <linux/pfn.h>
#include <linux/scatterlist.h>
-#include <linux/swiotlb.h>
+#include <linux/dma-mapping.h>
+#include <xen/xen.h>
#include "i915_gem.h"
return page_sizes;
}
-static inline unsigned int i915_sg_segment_size(void)
+static inline unsigned int i915_sg_segment_size(struct device *dev)
{
- unsigned int size = swiotlb_max_segment();
-
- if (size == 0)
- size = UINT_MAX;
-
- size = rounddown(size, PAGE_SIZE);
- /* swiotlb_max_segment_size can return 1 byte when it means one page. */
- if (size < PAGE_SIZE)
- size = PAGE_SIZE;
-
- return size;
+ size_t max = min_t(size_t, UINT_MAX, dma_max_mapping_size(dev));
+
+ /*
+ * For Xen PV guests pages aren't contiguous in DMA (machine) address
+ * space. The DMA API takes care of that both in dma_alloc_* (by
+ * calling into the hypervisor to make the pages contiguous) and in
+	 * dma_map_* (by bounce buffering). But i915 abuses and ignores the
+ * coherency aspects of the DMA API and thus can't cope with bounce
+ * buffering actually happening, so add a hack here to force small
+ * allocations and mappings when running in PV mode on Xen.
+ *
+ * Note this will still break if bounce buffering is required for other
+ * reasons, like confidential computing hypervisors or PCIe root ports
+ * with addressing limitations.
+ */
+ if (xen_pv_domain())
+ max = PAGE_SIZE;
+ return round_down(max, PAGE_SIZE);
}
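A standalone sketch of the clamp above: cap at UINT_MAX, force a single page on Xen PV, then round down to a page multiple. dma_max_mapping_size() is replaced with a caller-supplied sample value, which is an assumption for illustration::

    #include <stdio.h>
    #include <stdint.h>

    #define PAGE_SIZE 4096u

    static unsigned int sg_segment_size(uint64_t dma_max, int xen_pv)
    {
        uint64_t max = dma_max < UINT32_MAX ? dma_max : UINT32_MAX;

        if (xen_pv)
            max = PAGE_SIZE;            /* force page-sized segments */

        return (unsigned int)(max - (max % PAGE_SIZE));  /* round_down */
    }

    int main(void)
    {
        printf("unlimited DMA: %u\n", sg_segment_size(UINT64_MAX, 0));
        printf("256K bounce  : %u\n", sg_segment_size(262144, 0));
        printf("Xen PV guest : %u\n", sg_segment_size(UINT64_MAX, 1));
        return 0;
    }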
bool i915_sg_trim(struct sg_table *orig_st);
pm_runtime_use_autosuspend(kdev);
}
- /* Enable by default */
- pm_runtime_allow(kdev);
+ /*
+	 * FIXME: Temporary hammer to keep autosuspend disabled on lmem-capable platforms.
+	 * As per PCIe spec section 5.3.1.4.1, all iomem read/write requests to a PCIe
+	 * function are unsupported while the PCIe endpoint function is in D3.
+	 * Let's keep i915 autosuspend control 'on' until we fix all known issues
+ * with lmem access in D3.
+ */
+ if (!IS_DGFX(i915))
+ pm_runtime_allow(kdev);
/*
* The core calls the driver load handler with an RPM reference held.
select DRM_KMS_HELPER
select VIDEOMODE_HELPERS
select DRM_GEM_DMA_HELPER
- select DRM_KMS_HELPER
depends on DRM && (ARCH_MXC || ARCH_MULTIPLATFORM || COMPILE_TEST)
depends on IMX_IPUV3_CORE
help
return ret;
}
-static int imx_tve_connector_mode_valid(struct drm_connector *connector,
- struct drm_display_mode *mode)
+static enum drm_mode_status
+imx_tve_connector_mode_valid(struct drm_connector *connector,
+ struct drm_display_mode *mode)
{
struct imx_tve *tve = con_to_tve(connector);
unsigned long rate;
Compile in support for the HDMI output MSM DRM driver. It can
be a primary or a secondary display on device. Note that this is used
only for the direct HDMI output. If the device outputs HDMI data
- throught some kind of DSI-to-HDMI bridge, this option can be disabled.
+ through some kind of DSI-to-HDMI bridge, this option can be disabled.
config DRM_MSM_HDMI_HDCP
bool "Enable HDMI HDCP support in MSM DRM driver"
static void *state_kcalloc(struct a6xx_gpu_state *a6xx_state, int nr, size_t objsize)
{
struct a6xx_state_memobj *obj =
- kzalloc((nr * objsize) + sizeof(*obj), GFP_KERNEL);
+ kvzalloc((nr * objsize) + sizeof(*obj), GFP_KERNEL);
if (!obj)
return NULL;
{
struct msm_gpu_state_bo *snapshot;
+ if (!bo->size)
+ return NULL;
+
snapshot = state_kcalloc(a6xx_state, 1, sizeof(*snapshot));
if (!snapshot)
return NULL;
if (a6xx_state->gmu_hfi)
kvfree(a6xx_state->gmu_hfi->data);
- list_for_each_entry_safe(obj, tmp, &a6xx_state->objs, node)
- kfree(obj);
+ if (a6xx_state->gmu_debug)
+ kvfree(a6xx_state->gmu_debug->data);
+
+ list_for_each_entry_safe(obj, tmp, &a6xx_state->objs, node) {
+ list_del(&obj->node);
+ kvfree(obj);
+ }
adreno_gpu_state_destroy(state);
kfree(a6xx_state);
struct msm_gpu *gpu = dev_to_gpu(dev);
int remaining, ret;
+ if (!gpu)
+ return 0;
+
suspend_scheduler(gpu);
remaining = wait_event_timeout(gpu->retire_event,
static int adreno_system_resume(struct device *dev)
{
- resume_scheduler(dev_to_gpu(dev));
+ struct msm_gpu *gpu = dev_to_gpu(dev);
+
+ if (!gpu)
+ return 0;
+
+ resume_scheduler(gpu);
return pm_runtime_force_resume(dev);
}
return buf;
}
-/* len is expected to be in bytes */
+/* len is expected to be in bytes
+ *
+ * WARNING: *ptr should be allocated with kvmalloc or friends. It can be freed
+ * with kvfree() and replaced with a newly kvmalloc'd buffer on the first call
+ * when the unencoded raw data is encoded
+ */
void adreno_show_object(struct drm_printer *p, void **ptr, int len,
bool *encoded)
{
return ret;
}
-static int mdp4_lvds_connector_mode_valid(struct drm_connector *connector,
- struct drm_display_mode *mode)
+static enum drm_mode_status
+mdp4_lvds_connector_mode_valid(struct drm_connector *connector,
+ struct drm_display_mode *mode)
{
struct mdp4_lvds_connector *mdp4_lvds_connector =
to_mdp4_lvds_connector(connector);
{
int ret = 0;
const u8 *dpcd = ctrl->panel->dpcd;
- u8 encoding = DP_SET_ANSI_8B10B;
- u8 ssc;
+ u8 encoding[] = { 0, DP_SET_ANSI_8B10B };
u8 assr;
struct dp_link_info link_info = {0};
dp_aux_link_configure(ctrl->aux, &link_info);
- if (drm_dp_max_downspread(dpcd)) {
- ssc = DP_SPREAD_AMP_0_5;
- drm_dp_dpcd_write(ctrl->aux, DP_DOWNSPREAD_CTRL, &ssc, 1);
- }
+ if (drm_dp_max_downspread(dpcd))
+ encoding[0] |= DP_SPREAD_AMP_0_5;
- drm_dp_dpcd_write(ctrl->aux, DP_MAIN_LINK_CHANNEL_CODING_SET,
- &encoding, 1);
+	/* configure DOWNSPREAD_CTRL and MAIN_LINK_CHANNEL_CODING_SET (consecutive DPCD addresses) in one write */
+ drm_dp_dpcd_write(ctrl->aux, DP_DOWNSPREAD_CTRL, encoding, 2);
if (drm_dp_alternate_scrambler_reset_cap(dpcd)) {
assr = DP_ALTERNATE_SCRAMBLER_RESET_ENABLE;
return -EINVAL;
}
- rc = devm_request_irq(&dp->pdev->dev, dp->irq,
+ rc = devm_request_irq(dp_display->drm_dev->dev, dp->irq,
dp_display_irq_handler,
IRQF_TRIGGER_HIGH, "dp_display_isr", dp);
if (rc < 0) {
}
}
+static void of_dp_aux_depopulate_bus_void(void *data)
+{
+ of_dp_aux_depopulate_bus(data);
+}
+
static int dp_display_get_next_bridge(struct msm_dp *dp)
{
int rc;
* panel driver is probed asynchronously but is the best we
* can do without a bigger driver reorganization.
*/
- rc = devm_of_dp_aux_populate_ep_devices(dp_priv->aux);
+ rc = of_dp_aux_populate_bus(dp_priv->aux, NULL);
of_node_put(aux_bus);
if (rc)
goto error;
+
+ rc = devm_add_action_or_reset(dp->drm_dev->dev,
+ of_dp_aux_depopulate_bus_void,
+ dp_priv->aux);
+ if (rc)
+ goto error;
} else if (dp->is_edp) {
DRM_ERROR("eDP aux_bus not found\n");
return -ENODEV;
* For DisplayPort interfaces external bridges are optional, so
* silently ignore an error if one is not present (-ENODEV).
*/
- rc = dp_parser_find_next_bridge(dp_priv->parser);
+ rc = devm_dp_parser_find_next_bridge(dp->drm_dev->dev, dp_priv->parser);
if (!dp->is_edp && rc == -ENODEV)
return 0;
return -EINVAL;
priv = dev->dev_private;
+
+ if (priv->num_bridges == ARRAY_SIZE(priv->bridges)) {
+ DRM_DEV_ERROR(dev->dev, "too many bridges\n");
+ return -ENOSPC;
+ }
+
dp_display->drm_dev = dev;
dp_priv = container_of(dp_display, struct dp_display_private, dp_display);
connector_status_disconnected;
}
+static int dp_bridge_atomic_check(struct drm_bridge *bridge,
+ struct drm_bridge_state *bridge_state,
+ struct drm_crtc_state *crtc_state,
+ struct drm_connector_state *conn_state)
+{
+ struct msm_dp *dp;
+
+ dp = to_dp_bridge(bridge)->dp_display;
+
+ drm_dbg_dp(dp->drm_dev, "is_connected = %s\n",
+ (dp->is_connected) ? "true" : "false");
+
+ /*
+ * There is no protection in the DRM framework to check if the display
+	 * pipeline has already been disabled before trying to disable it again.
+	 * Hence if the sink is unplugged, the pipeline gets disabled, but
+	 * crtc->active is still true. Any attempt to set the mode or manually
+	 * disable this encoder will result in a crash.
+ *
+ * TODO: add support for telling the DRM subsystem that the pipeline is
+ * disabled by the hardware and thus all access to it should be forbidden.
+ * After that this piece of code can be removed.
+ */
+ if (bridge->ops & DRM_BRIDGE_OP_HPD)
+ return (dp->is_connected) ? 0 : -ENOTCONN;
+
+ return 0;
+}
+
/**
* dp_bridge_get_modes - callback to add drm modes via drm_mode_probed_add()
 * @bridge: Pointer to drm bridge
}
static const struct drm_bridge_funcs dp_bridge_ops = {
+ .atomic_duplicate_state = drm_atomic_helper_bridge_duplicate_state,
+ .atomic_destroy_state = drm_atomic_helper_bridge_destroy_state,
+ .atomic_reset = drm_atomic_helper_bridge_reset,
.enable = dp_bridge_enable,
.disable = dp_bridge_disable,
.post_disable = dp_bridge_post_disable,
.mode_valid = dp_bridge_mode_valid,
.get_modes = dp_bridge_get_modes,
.detect = dp_bridge_detect,
+ .atomic_check = dp_bridge_atomic_check,
};
struct drm_bridge *dp_bridge_init(struct msm_dp *dp_display, struct drm_device *dev,
return 0;
}
-int dp_parser_find_next_bridge(struct dp_parser *parser)
+int devm_dp_parser_find_next_bridge(struct device *dev, struct dp_parser *parser)
{
- struct device *dev = &parser->pdev->dev;
+ struct platform_device *pdev = parser->pdev;
struct drm_bridge *bridge;
- bridge = devm_drm_of_get_bridge(dev, dev->of_node, 1, 0);
+ bridge = devm_drm_of_get_bridge(dev, pdev->dev.of_node, 1, 0);
if (IS_ERR(bridge))
return PTR_ERR(bridge);
struct dp_parser *dp_parser_get(struct platform_device *pdev);
/**
- * dp_parser_find_next_bridge() - find an additional bridge to DP
+ * devm_dp_parser_find_next_bridge() - find an additional bridge to DP
*
+ * @dev: device to tie bridge lifetime to
* @parser: dp_parser data from client
*
* This function is used to find any additional bridge attached to
*
* Return: 0 if able to get the bridge, otherwise negative errno for failure.
*/
-int dp_parser_find_next_bridge(struct dp_parser *parser);
+int devm_dp_parser_find_next_bridge(struct device *dev, struct dp_parser *parser);
#endif
return -EINVAL;
priv = dev->dev_private;
+
+ if (priv->num_bridges == ARRAY_SIZE(priv->bridges)) {
+ DRM_DEV_ERROR(dev->dev, "too many bridges\n");
+ return -ENOSPC;
+ }
+
msm_dsi->dev = dev;
ret = msm_dsi_host_modeset_init(msm_dsi->host, dev);
struct platform_device *pdev = hdmi->pdev;
int ret;
+ if (priv->num_bridges == ARRAY_SIZE(priv->bridges)) {
+ DRM_DEV_ERROR(dev->dev, "too many bridges\n");
+ return -ENOSPC;
+ }
+
hdmi->dev = dev;
hdmi->encoder = encoder;
goto fail;
}
- ret = devm_request_irq(&pdev->dev, hdmi->irq,
+ ret = devm_request_irq(dev->dev, hdmi->irq,
msm_hdmi_irq, IRQF_TRIGGER_HIGH,
"hdmi_isr", hdmi);
if (ret < 0) {
for (i = 0; i < priv->num_bridges; i++)
drm_bridge_remove(priv->bridges[i]);
+ priv->num_bridges = 0;
pm_runtime_get_sync(dev);
msm_irq_uninstall(ddev);
*/
static void submit_cleanup(struct msm_gem_submit *submit, bool error)
{
- unsigned cleanup_flags = BO_LOCKED | BO_OBJ_PINNED;
+ unsigned cleanup_flags = BO_LOCKED;
unsigned i;
if (error)
- cleanup_flags |= BO_VMA_PINNED;
+ cleanup_flags |= BO_VMA_PINNED | BO_OBJ_PINNED;
for (i = 0; i < submit->nr_bos; i++) {
struct msm_gem_object *msm_obj = submit->bos[i].obj;
struct msm_drm_private *priv = dev->dev_private;
struct drm_msm_gem_submit *args = data;
struct msm_file_private *ctx = file->driver_priv;
- struct msm_gem_submit *submit = NULL;
+ struct msm_gem_submit *submit;
struct msm_gpu *gpu = priv->gpu;
struct msm_gpu_submitqueue *queue;
struct msm_ringbuffer *ring;
put_unused_fd(out_fence_fd);
mutex_unlock(&queue->lock);
out_post_unlock:
- if (submit)
- msm_gem_submit_put(submit);
+ msm_gem_submit_put(submit);
if (!IS_ERR_OR_NULL(post_deps)) {
for (i = 0; i < args->nr_out_syncobjs; ++i) {
kfree(post_deps[i].chain);
}
msm_devfreq_cleanup(gpu);
+
+ platform_set_drvdata(gpu->pdev, NULL);
}
static inline struct msm_gpu *dev_to_gpu(struct device *dev)
{
struct adreno_smmu_priv *adreno_smmu = dev_get_drvdata(dev);
+
+ if (!adreno_smmu)
+ return NULL;
+
return container_of(adreno_smmu, struct msm_gpu, adreno_smmu);
}
msm_gem_lock(obj);
msm_gem_unpin_vma_fenced(submit->bos[i].vma, fctx);
- submit->bos[i].flags &= ~BO_VMA_PINNED;
+ msm_gem_unpin_locked(obj);
+ submit->bos[i].flags &= ~(BO_VMA_PINNED | BO_OBJ_PINNED);
msm_gem_unlock(obj);
}
.src = &src,
.dst = &dst,
.pgmap_owner = drm->dev,
+ .fault_page = vmf->page,
.flags = MIGRATE_VMA_SELECT_DEVICE_PRIVATE,
};
{
struct panfrost_dump_object_header *hdr = iter->hdr;
- hdr->magic = cpu_to_le32(PANFROSTDUMP_MAGIC);
- hdr->type = cpu_to_le32(type);
- hdr->file_offset = cpu_to_le32(iter->data - iter->start);
- hdr->file_size = cpu_to_le32(data_end - iter->data);
+ hdr->magic = PANFROSTDUMP_MAGIC;
+ hdr->type = type;
+ hdr->file_offset = iter->data - iter->start;
+ hdr->file_size = data_end - iter->data;
iter->hdr++;
- iter->data += le32_to_cpu(hdr->file_size);
+ iter->data += hdr->file_size;
}
static void
reg = panfrost_dump_registers[i] + js_as_offset;
- dumpreg->reg = cpu_to_le32(reg);
- dumpreg->value = cpu_to_le32(gpu_read(pfdev, reg));
+ dumpreg->reg = reg;
+ dumpreg->value = gpu_read(pfdev, reg);
}
panfrost_core_dump_header(iter, PANFROSTDUMP_BUF_REG, dumpreg);
struct panfrost_dump_iterator iter;
struct drm_gem_object *dbo;
unsigned int n_obj, n_bomap_pages;
- __le64 *bomap, *bomap_start;
+ u64 *bomap, *bomap_start;
size_t file_size;
u32 as_nr;
int slot;
* For now, we write the job identifier in the register dump header,
* so that we can decode the entire dump later with pandecode
*/
- iter.hdr->reghdr.jc = cpu_to_le64(job->jc);
- iter.hdr->reghdr.major = cpu_to_le32(PANFROSTDUMP_MAJOR);
- iter.hdr->reghdr.minor = cpu_to_le32(PANFROSTDUMP_MINOR);
- iter.hdr->reghdr.gpu_id = cpu_to_le32(pfdev->features.id);
- iter.hdr->reghdr.nbos = cpu_to_le64(job->bo_count);
+ iter.hdr->reghdr.jc = job->jc;
+ iter.hdr->reghdr.major = PANFROSTDUMP_MAJOR;
+ iter.hdr->reghdr.minor = PANFROSTDUMP_MINOR;
+ iter.hdr->reghdr.gpu_id = pfdev->features.id;
+ iter.hdr->reghdr.nbos = job->bo_count;
panfrost_core_dump_registers(&iter, pfdev, as_nr, slot);
WARN_ON(!mapping->active);
- iter.hdr->bomap.data[0] = cpu_to_le32((bomap - bomap_start));
+ iter.hdr->bomap.data[0] = bomap - bomap_start;
for_each_sgtable_page(bo->base.sgt, &page_iter, 0) {
struct page *page = sg_page_iter_page(&page_iter);
if (!IS_ERR(page)) {
- *bomap++ = cpu_to_le64(page_to_phys(page));
+ *bomap++ = page_to_phys(page);
} else {
dev_err(pfdev->dev, "Panfrost Dump: wrong page\n");
- *bomap++ = ~cpu_to_le64(0);
+ *bomap++ = 0;
}
}
- iter.hdr->bomap.iova = cpu_to_le64(mapping->mmnode.start << PAGE_SHIFT);
+ iter.hdr->bomap.iova = mapping->mmnode.start << PAGE_SHIFT;
vaddr = map.vaddr;
memcpy(iter.data, vaddr, bo->base.base.size);
drm_gem_shmem_vunmap(&bo->base, &map);
- iter.hdr->bomap.valid = cpu_to_le32(1);
+ iter.hdr->bomap.valid = 1;
dump_header: panfrost_core_dump_header(&iter, PANFROSTDUMP_BUF_BO, iter.data +
bo->base.base.size);
static void dw_mipi_dsi_rockchip_set_lcdsel(struct dw_mipi_dsi_rockchip *dsi,
int mux)
{
- if (dsi->cdata->lcdsel_grf_reg < 0)
+ if (dsi->cdata->lcdsel_grf_reg)
regmap_write(dsi->grf_regmap, dsi->cdata->lcdsel_grf_reg,
mux ? dsi->cdata->lcdsel_lit : dsi->cdata->lcdsel_big);
}
if (ret) {
DRM_DEV_ERROR(dsi->dev, "Failed to register component: %d\n",
ret);
- return ret;
+ goto out;
}
second = dw_mipi_dsi_rockchip_find_second(dsi);
- if (IS_ERR(second))
- return PTR_ERR(second);
+ if (IS_ERR(second)) {
+ ret = PTR_ERR(second);
+ goto out;
+ }
if (second) {
ret = component_add(second, &dw_mipi_dsi_rockchip_ops);
if (ret) {
DRM_DEV_ERROR(second,
"Failed to register component: %d\n",
ret);
- return ret;
+ goto out;
}
}
return 0;
+
+out:
+ mutex_lock(&dsi->usage_mutex);
+ dsi->usage_mode = DW_DSI_USAGE_IDLE;
+ mutex_unlock(&dsi->usage_mutex);
+ return ret;
}
static int dw_mipi_dsi_rockchip_host_detach(void *priv_data,
static const struct rockchip_dw_dsi_chip_data rk3568_chip_data[] = {
{
.reg = 0xfe060000,
- .lcdsel_grf_reg = -1,
.lanecfg1_grf_reg = RK3568_GRF_VO_CON2,
.lanecfg1 = HIWORD_UPDATE(0, RK3568_DSI0_SKEWCALHS |
RK3568_DSI0_FORCETXSTOPMODE |
},
{
.reg = 0xfe070000,
- .lcdsel_grf_reg = -1,
.lanecfg1_grf_reg = RK3568_GRF_VO_CON3,
.lanecfg1 = HIWORD_UPDATE(0, RK3568_DSI1_SKEWCALHS |
RK3568_DSI1_FORCETXSTOPMODE |
.of_match_table = dw_mipi_dsi_rockchip_dt_ids,
.pm = &dw_mipi_dsi_rockchip_pm_ops,
.name = "dw-mipi-dsi-rockchip",
+ /*
+ * For dual-DSI display, one DSI pokes at the other DSI's
+ * drvdata in dw_mipi_dsi_rockchip_find_second(). This is not
+ * safe for asynchronous probe.
+ */
+ .probe_type = PROBE_FORCE_SYNCHRONOUS,
},
};
ret = rockchip_hdmi_parse_dt(hdmi);
if (ret) {
- DRM_DEV_ERROR(hdmi->dev, "Unable to parse OF data\n");
+ if (ret != -EPROBE_DEFER)
+ DRM_DEV_ERROR(hdmi->dev, "Unable to parse OF data\n");
return ret;
}
{
struct rockchip_gem_object *rk_obj;
struct drm_gem_object *obj;
+ bool is_framebuffer;
int ret;
- rk_obj = rockchip_gem_create_object(drm, size, false);
+ is_framebuffer = drm->fb_helper && file_priv == drm->fb_helper->client.file;
+
+ rk_obj = rockchip_gem_create_object(drm, size, is_framebuffer);
if (IS_ERR(rk_obj))
return ERR_CAST(rk_obj);
{
struct vop2_video_port *vp = to_vop2_video_port(crtc);
struct vop2 *vop2 = vp->vop2;
+ struct drm_crtc_state *old_crtc_state;
int ret;
vop2_lock(vop2);
+ old_crtc_state = drm_atomic_get_old_crtc_state(state, crtc);
+ drm_atomic_helper_disable_planes_on_crtc(old_crtc_state, false);
+
drm_crtc_vblank_off(crtc);
/*
static void vop2_plane_atomic_disable(struct drm_plane *plane,
struct drm_atomic_state *state)
{
- struct drm_plane_state *old_pstate = drm_atomic_get_old_plane_state(state, plane);
+ struct drm_plane_state *old_pstate = NULL;
struct vop2_win *win = to_vop2_win(plane);
struct vop2 *vop2 = win->vop2;
drm_dbg(vop2->drm, "%s disable\n", win->data->name);
- if (!old_pstate->crtc)
+ if (state)
+ old_pstate = drm_atomic_get_old_plane_state(state, plane);
+ if (old_pstate && !old_pstate->crtc)
return;
vop2_win_disable(win);
struct drm_sched_job *job = container_of(cb, struct drm_sched_job,
finish_cb);
+ dma_fence_put(f);
INIT_WORK(&job->work, drm_sched_entity_kill_jobs_work);
schedule_work(&job->work);
}
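The dma_fence_put() added here pairs with a reference the scheduler now takes explicitly before arming the callback: the fence must stay alive until the callback has run, so the arming site grabs a reference and the callback drops it. A minimal refcounting sketch of that ownership handoff (plain C, not the dma_fence API)::

    #include <stdio.h>

    struct fence { int refcount; };

    static void fence_get(struct fence *f) { f->refcount++; }

    static void fence_put(struct fence *f)
    {
        if (--f->refcount == 0)
            printf("fence freed\n");
    }

    static void finish_cb(struct fence *f)
    {
        fence_put(f);              /* mirrors dma_fence_put(f) above */
    }

    int main(void)
    {
        struct fence f = { .refcount = 1 };

        fence_get(&f);             /* taken before dma_fence_add_callback() */
        finish_cb(&f);             /* callback fires later, drops its ref */
        fence_put(&f);             /* original owner drops the last ref */
        return 0;
    }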
struct drm_sched_fence *s_fence = job->s_fence;
/* Wait for all dependencies to avoid data corruptions */
- while ((f = drm_sched_job_dependency(job, entity)))
+ while ((f = drm_sched_job_dependency(job, entity))) {
dma_fence_wait(f, false);
+ dma_fence_put(f);
+ }
drm_sched_fence_scheduled(s_fence);
dma_fence_set_error(&s_fence->finished, -ESRCH);
continue;
}
+ dma_fence_get(entity->last_scheduled);
r = dma_fence_add_callback(entity->last_scheduled,
&job->finish_cb,
drm_sched_entity_kill_jobs_cb);
}
s_fence = to_drm_sched_fence(fence);
- if (s_fence && s_fence->sched == sched) {
+ if (s_fence && s_fence->sched == sched &&
+ !test_bit(DRM_SCHED_FENCE_DONT_PIPELINE, &fence->flags)) {
/*
* Fence is from the same scheduler, only need to wait for
iosys_map_set_vaddr(&src, xrgb8888);
drm_fb_xrgb8888_to_xrgb2101010(&dst, &result->dst_pitch, &src, &fb, ¶ms->clip);
- buf = le32buf_to_cpu(test, buf, TEST_BUF_SIZE);
+ buf = le32buf_to_cpu(test, buf, dst_size / sizeof(u32));
KUNIT_EXPECT_EQ(test, memcmp(buf, result->expected, dst_size), 0);
}
module_exit(vc4_drm_unregister);
MODULE_ALIAS("platform:vc4-drm");
+MODULE_SOFTDEP("pre: snd-soc-hdmi-codec");
MODULE_DESCRIPTION("Broadcom VC4 DRM Driver");
MODULE_AUTHOR("Eric Anholt <eric@anholt.net>");
MODULE_LICENSE("GPL v2");
struct vc4_hdmi *vc4_hdmi = dev_get_drvdata(dev);
unsigned long __maybe_unused flags;
u32 __maybe_unused value;
+ unsigned long rate;
int ret;
+ /*
+ * The HSM clock is in the HDMI power domain, so we need to set
+ * its frequency while the power domain is active so that it
+ * keeps its rate.
+ */
+ ret = clk_set_min_rate(vc4_hdmi->hsm_clock, HSM_MIN_CLOCK_FREQ);
+ if (ret)
+ return ret;
+
ret = clk_prepare_enable(vc4_hdmi->hsm_clock);
if (ret)
return ret;
+ /*
+	 * Whenever the Raspberry Pi boots without an HDMI monitor
+ * plugged in, the firmware won't have initialized the HSM clock
+ * rate and it will be reported as 0.
+ *
+ * If we try to access a register of the controller in such a
+ * case, it will lead to a silent CPU stall. Let's make sure we
+ * prevent such a case.
+ */
+ rate = clk_get_rate(vc4_hdmi->hsm_clock);
+ if (!rate) {
+ ret = -EINVAL;
+ goto err_disable_clk;
+ }
+
if (vc4_hdmi->variant->reset)
vc4_hdmi->variant->reset(vc4_hdmi);
#endif
return 0;
+
+err_disable_clk:
+ clk_disable_unprepare(vc4_hdmi->hsm_clock);
+ return ret;
}
static void vc4_hdmi_put_ddc_device(void *ptr)
#define USB_DEVICE_ID_MADCATZ_BEATPAD 0x4540
#define USB_DEVICE_ID_MADCATZ_RAT5 0x1705
#define USB_DEVICE_ID_MADCATZ_RAT9 0x1709
+#define USB_DEVICE_ID_MADCATZ_MMO7 0x1713
#define USB_VENDOR_ID_MCC 0x09db
#define USB_DEVICE_ID_MCC_PMD1024LS 0x0076
#define USB_DEVICE_ID_SONY_PS4_CONTROLLER_2 0x09cc
#define USB_DEVICE_ID_SONY_PS4_CONTROLLER_DONGLE 0x0ba0
#define USB_DEVICE_ID_SONY_PS5_CONTROLLER 0x0ce6
+#define USB_DEVICE_ID_SONY_PS5_CONTROLLER_2 0x0df2
#define USB_DEVICE_ID_SONY_MOTION_CONTROLLER 0x03d5
#define USB_DEVICE_ID_SONY_NAVIGATION_CONTROLLER 0x042f
#define USB_DEVICE_ID_SONY_BUZZ_CONTROLLER 0x0002
struct device *dev = led_cdev->dev->parent;
struct hid_device *hdev = to_hid_device(dev);
struct lenovo_drvdata *data_pointer = hid_get_drvdata(hdev);
- u8 tp10ubkbd_led[] = { TP10UBKBD_MUTE_LED, TP10UBKBD_MICMUTE_LED };
+ static const u8 tp10ubkbd_led[] = { TP10UBKBD_MUTE_LED, TP10UBKBD_MICMUTE_LED };
int led_nr = 0;
int ret = 0;
magicmouse_raw_event(hdev, report, data + 2, data[1]);
magicmouse_raw_event(hdev, report, data + 2 + data[1],
size - 2 - data[1]);
- break;
+ return 0;
default:
return 0;
}
uint32_t fw_version;
int (*parse_report)(struct ps_device *dev, struct hid_report *report, u8 *data, int size);
+ void (*remove)(struct ps_device *dev);
};
/* Calibration data for playstation motion sensors. */
#define DS_STATUS_CHARGING GENMASK(7, 4)
#define DS_STATUS_CHARGING_SHIFT 4
+/* Feature version from DualSense Firmware Info report. */
+#define DS_FEATURE_VERSION(major, minor) ((major & 0xff) << 8 | (minor & 0xff))
+
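Packing the major version into the high byte and the minor into the low byte makes feature versions directly comparable as plain integers, which is what the ds->update_version >= DS_FEATURE_VERSION(2, 21) check below relies on. A standalone check of the arithmetic::

    #include <stdio.h>

    #define DS_FEATURE_VERSION(major, minor) ((major & 0xff) << 8 | (minor & 0xff))

    int main(void)
    {
        printf("2.21 packs to 0x%04x\n", DS_FEATURE_VERSION(2, 21));    /* 0x0215 */
        printf("2.20 >= 2.21? %d\n",
               DS_FEATURE_VERSION(2, 20) >= DS_FEATURE_VERSION(2, 21)); /* 0 */
        printf("3.0  >= 2.21? %d\n",
               DS_FEATURE_VERSION(3, 0) >= DS_FEATURE_VERSION(2, 21));  /* 1 */
        return 0;
    }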
/*
* Status of a DualSense touch point contact.
* Contact IDs, with highest bit set are 'inactive'
#define DS_OUTPUT_VALID_FLAG1_RELEASE_LEDS BIT(3)
#define DS_OUTPUT_VALID_FLAG1_PLAYER_INDICATOR_CONTROL_ENABLE BIT(4)
#define DS_OUTPUT_VALID_FLAG2_LIGHTBAR_SETUP_CONTROL_ENABLE BIT(1)
+#define DS_OUTPUT_VALID_FLAG2_COMPATIBLE_VIBRATION2 BIT(2)
#define DS_OUTPUT_POWER_SAVE_CONTROL_MIC_MUTE BIT(4)
#define DS_OUTPUT_LIGHTBAR_SETUP_LIGHT_OUT BIT(1)
struct input_dev *sensors;
struct input_dev *touchpad;
+ /* Update version is used as a feature/capability version. */
+ uint16_t update_version;
+
/* Calibration data for accelerometer and gyroscope. */
struct ps_calibration_data accel_calib_data[3];
struct ps_calibration_data gyro_calib_data[3];
uint32_t sensor_timestamp_us;
/* Compatible rumble state */
+ bool use_vibration_v2;
bool update_rumble;
uint8_t motor_left;
uint8_t motor_right;
struct led_classdev player_leds[5];
struct work_struct output_worker;
+ bool output_worker_initialized;
void *output_report_dmabuf;
uint8_t output_seq; /* Sequence number for output report. */
};
{0, 0},
};
+static inline void dualsense_schedule_work(struct dualsense *ds);
static void dualsense_set_lightbar(struct dualsense *ds, uint8_t red, uint8_t green, uint8_t blue);
/*
return ret;
}
+
static int dualsense_get_firmware_info(struct dualsense *ds)
{
uint8_t *buf;
ds->base.hw_version = get_unaligned_le32(&buf[24]);
ds->base.fw_version = get_unaligned_le32(&buf[28]);
+ /* Update version is some kind of feature version. It is distinct from
+ * the firmware version as there can be many different variations of a
+ * controller over time with the same physical shell, but with different
+ * PCBs and other internal changes. The update version (internal name) is
+ * used as a means to detect what features are available and change behavior.
+ * Note: the version is different between DualSense and DualSense Edge.
+ */
+ ds->update_version = get_unaligned_le16(&buf[44]);
+
err_free:
kfree(buf);
return ret;
ds->update_player_leds = true;
spin_unlock_irqrestore(&ds->base.lock, flags);
- schedule_work(&ds->output_worker);
+ dualsense_schedule_work(ds);
return 0;
}
}
}
+static inline void dualsense_schedule_work(struct dualsense *ds)
+{
+ unsigned long flags;
+
+ spin_lock_irqsave(&ds->base.lock, flags);
+ if (ds->output_worker_initialized)
+ schedule_work(&ds->output_worker);
+ spin_unlock_irqrestore(&ds->base.lock, flags);
+}
+
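dualsense_schedule_work() is the guarded replacement for the bare schedule_work() calls: output_worker_initialized is tested under the same lock that dualsense_remove() takes to clear it, so no new work can be queued once teardown starts, and cancel_work_sync() then flushes anything already pending. A userspace sketch of the pattern, with a pthread mutex standing in for the spinlock::

    #include <pthread.h>
    #include <stdbool.h>
    #include <stdio.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static bool worker_initialized = true;
    static int work_queued;

    static void schedule_output_work(void)
    {
        pthread_mutex_lock(&lock);
        if (worker_initialized)     /* mirrors ds->output_worker_initialized */
            work_queued++;          /* stands in for schedule_work() */
        pthread_mutex_unlock(&lock);
    }

    static void remove_device(void)
    {
        pthread_mutex_lock(&lock);
        worker_initialized = false; /* no new work can be queued past here */
        pthread_mutex_unlock(&lock);
        /* the real driver then calls cancel_work_sync(&ds->output_worker) */
    }

    int main(void)
    {
        schedule_output_work();
        remove_device();
        schedule_output_work();     /* silently ignored: device going away */
        printf("work queued %d time(s)\n", work_queued);  /* 1 */
        return 0;
    }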
/*
* Helper function to send DualSense output reports. Applies a CRC at the end of a report
* for Bluetooth reports.
if (ds->update_rumble) {
/* Select classic rumble style haptics and enable it. */
common->valid_flag0 |= DS_OUTPUT_VALID_FLAG0_HAPTICS_SELECT;
- common->valid_flag0 |= DS_OUTPUT_VALID_FLAG0_COMPATIBLE_VIBRATION;
+ if (ds->use_vibration_v2)
+ common->valid_flag2 |= DS_OUTPUT_VALID_FLAG2_COMPATIBLE_VIBRATION2;
+ else
+ common->valid_flag0 |= DS_OUTPUT_VALID_FLAG0_COMPATIBLE_VIBRATION;
common->motor_left = ds->motor_left;
common->motor_right = ds->motor_right;
ds->update_rumble = false;
spin_unlock_irqrestore(&ps_dev->lock, flags);
/* Schedule updating of microphone state at hardware level. */
- schedule_work(&ds->output_worker);
+ dualsense_schedule_work(ds);
}
ds->last_btn_mic_state = btn_mic_state;
ds->motor_right = effect->u.rumble.weak_magnitude / 256;
spin_unlock_irqrestore(&ds->base.lock, flags);
- schedule_work(&ds->output_worker);
+ dualsense_schedule_work(ds);
return 0;
}
+static void dualsense_remove(struct ps_device *ps_dev)
+{
+ struct dualsense *ds = container_of(ps_dev, struct dualsense, base);
+ unsigned long flags;
+
+ spin_lock_irqsave(&ds->base.lock, flags);
+ ds->output_worker_initialized = false;
+ spin_unlock_irqrestore(&ds->base.lock, flags);
+
+ cancel_work_sync(&ds->output_worker);
+}
+
static int dualsense_reset_leds(struct dualsense *ds)
{
struct dualsense_output_report report;
ds->lightbar_blue = blue;
spin_unlock_irqrestore(&ds->base.lock, flags);
- schedule_work(&ds->output_worker);
+ dualsense_schedule_work(ds);
}
static void dualsense_set_player_leds(struct dualsense *ds)
ds->update_player_leds = true;
ds->player_leds_state = player_ids[player_id];
- schedule_work(&ds->output_worker);
+ dualsense_schedule_work(ds);
}
static struct ps_device *dualsense_create(struct hid_device *hdev)
ps_dev->battery_capacity = 100; /* initial value until parse_report. */
ps_dev->battery_status = POWER_SUPPLY_STATUS_UNKNOWN;
ps_dev->parse_report = dualsense_parse_report;
+ ps_dev->remove = dualsense_remove;
INIT_WORK(&ds->output_worker, dualsense_output_worker);
+ ds->output_worker_initialized = true;
hid_set_drvdata(hdev, ds);
max_output_report_size = sizeof(struct dualsense_output_report_bt);
return ERR_PTR(ret);
}
+	/* The original DualSense firmware simulated classic controller rumble through
+	 * its new haptics hardware. It felt different from the classic rumble users
+	 * were used to. Newer firmware changed the behavior and made this 'v2'
+	 * behavior the default on PlayStation and other platforms.
+	 * The original DualSense requires firmware at least as new as that bundled
+	 * with the PS5 software released in 2021. The DualSense Edge supports it out
+	 * of the box. Both devices also support the old mode, but it is not really used.
+ */
+ if (hdev->product == USB_DEVICE_ID_SONY_PS5_CONTROLLER) {
+ /* Feature version 2.21 introduced new vibration method. */
+ ds->use_vibration_v2 = ds->update_version >= DS_FEATURE_VERSION(2, 21);
+ } else if (hdev->product == USB_DEVICE_ID_SONY_PS5_CONTROLLER_2) {
+ ds->use_vibration_v2 = true;
+ }
+
ret = ps_devices_list_add(ps_dev);
if (ret)
return ERR_PTR(ret);
goto err_stop;
}
- if (hdev->product == USB_DEVICE_ID_SONY_PS5_CONTROLLER) {
+ if (hdev->product == USB_DEVICE_ID_SONY_PS5_CONTROLLER ||
+ hdev->product == USB_DEVICE_ID_SONY_PS5_CONTROLLER_2) {
dev = dualsense_create(hdev);
if (IS_ERR(dev)) {
hid_err(hdev, "Failed to create dualsense.\n");
ps_devices_list_remove(dev);
ps_device_release_player_id(dev);
+ if (dev->remove)
+ dev->remove(dev);
+
hid_hw_close(hdev);
hid_hw_stop(hdev);
}
static const struct hid_device_id ps_devices[] = {
{ HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_SONY, USB_DEVICE_ID_SONY_PS5_CONTROLLER) },
{ HID_USB_DEVICE(USB_VENDOR_ID_SONY, USB_DEVICE_ID_SONY_PS5_CONTROLLER) },
+ { HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_SONY, USB_DEVICE_ID_SONY_PS5_CONTROLLER_2) },
+ { HID_USB_DEVICE(USB_VENDOR_ID_SONY, USB_DEVICE_ID_SONY_PS5_CONTROLLER_2) },
{ }
};
MODULE_DEVICE_TABLE(hid, ps_devices);
{ HID_USB_DEVICE(USB_VENDOR_ID_SAITEK, USB_DEVICE_ID_SAITEK_MMO7) },
{ HID_USB_DEVICE(USB_VENDOR_ID_MADCATZ, USB_DEVICE_ID_MADCATZ_RAT5) },
{ HID_USB_DEVICE(USB_VENDOR_ID_MADCATZ, USB_DEVICE_ID_MADCATZ_RAT9) },
+ { HID_USB_DEVICE(USB_VENDOR_ID_MADCATZ, USB_DEVICE_ID_MADCATZ_MMO7) },
#endif
#if IS_ENABLED(CONFIG_HID_SAMSUNG)
{ HID_USB_DEVICE(USB_VENDOR_ID_SAMSUNG, USB_DEVICE_ID_SAMSUNG_IR_REMOTE) },
.driver_data = SAITEK_RELEASE_MODE_RAT7 },
{ HID_USB_DEVICE(USB_VENDOR_ID_SAITEK, USB_DEVICE_ID_SAITEK_MMO7),
.driver_data = SAITEK_RELEASE_MODE_MMO7 },
+ { HID_USB_DEVICE(USB_VENDOR_ID_MADCATZ, USB_DEVICE_ID_MADCATZ_MMO7),
+ .driver_data = SAITEK_RELEASE_MODE_MMO7 },
{ }
};
#define TOTAL_ATTRS (MAX_CORE_ATTRS + 1)
#define MAX_CORE_DATA (NUM_REAL_CORES + BASE_SYSFS_ATTR_NO)
-#define TO_CORE_ID(cpu) (cpu_data(cpu).cpu_core_id)
-#define TO_ATTR_NO(cpu) (TO_CORE_ID(cpu) + BASE_SYSFS_ATTR_NO)
-
#ifdef CONFIG_SMP
#define for_each_sibling(i, cpu) \
for_each_cpu(i, topology_sibling_cpumask(cpu))
struct platform_data {
struct device *hwmon_dev;
u16 pkg_id;
+ u16 cpu_map[NUM_REAL_CORES];
+ struct ida ida;
struct cpumask cpumask;
struct temp_data *core_data[MAX_CORE_DATA];
struct device_attribute name_attr;
MSR_IA32_THERM_STATUS;
tdata->is_pkg_data = pkg_flag;
tdata->cpu = cpu;
- tdata->cpu_core_id = TO_CORE_ID(cpu);
+ tdata->cpu_core_id = topology_core_id(cpu);
tdata->attr_size = MAX_CORE_ATTRS;
mutex_init(&tdata->update_lock);
return tdata;
struct platform_data *pdata = platform_get_drvdata(pdev);
struct cpuinfo_x86 *c = &cpu_data(cpu);
u32 eax, edx;
- int err, attr_no;
+ int err, index, attr_no;
/*
* Find attr number for sysfs:
* The attr number is always core id + 2
* The Pkgtemp will always show up as temp1_*, if available
*/
- attr_no = pkg_flag ? PKG_SYSFS_ATTR_NO : TO_ATTR_NO(cpu);
+ if (pkg_flag) {
+ attr_no = PKG_SYSFS_ATTR_NO;
+ } else {
+ index = ida_alloc(&pdata->ida, GFP_KERNEL);
+ if (index < 0)
+ return index;
+ pdata->cpu_map[index] = topology_core_id(cpu);
+ attr_no = index + BASE_SYSFS_ATTR_NO;
+ }
- if (attr_no > MAX_CORE_DATA - 1)
- return -ERANGE;
+ if (attr_no > MAX_CORE_DATA - 1) {
+ err = -ERANGE;
+ goto ida_free;
+ }
tdata = init_temp_data(cpu, pkg_flag);
- if (!tdata)
- return -ENOMEM;
+ if (!tdata) {
+ err = -ENOMEM;
+ goto ida_free;
+ }
/* Test if we can access the status register */
err = rdmsr_safe_on_cpu(cpu, tdata->status_reg, &eax, &edx);
exit_free:
pdata->core_data[attr_no] = NULL;
kfree(tdata);
+ida_free:
+ if (!pkg_flag)
+ ida_free(&pdata->ida, index);
return err;
}
kfree(pdata->core_data[indx]);
pdata->core_data[indx] = NULL;
+
+ if (indx >= BASE_SYSFS_ATTR_NO)
+ ida_free(&pdata->ida, indx - BASE_SYSFS_ATTR_NO);
}
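With attribute slots now handed out by the IDA in CPU-online order, a core's id no longer doubles as its sysfs index; cpu_map[] records which core owns which slot so the offline path can search for it. A standalone sketch of that reverse lookup, with illustrative values::

    #include <stdio.h>

    #define NUM_REAL_CORES     4
    #define BASE_SYSFS_ATTR_NO 2

    static int find_attr_no(const unsigned short *cpu_map, unsigned short core_id)
    {
        for (int i = 0; i < NUM_REAL_CORES; i++)
            if (cpu_map[i] == core_id)
                return i + BASE_SYSFS_ATTR_NO;
        return -1;                 /* core was never populated */
    }

    int main(void)
    {
        /* cores came online out of order: slot 0 holds core 3, etc. */
        unsigned short cpu_map[NUM_REAL_CORES] = { 3, 0, 1, 2 };

        printf("core 0 -> attr %d\n", find_attr_no(cpu_map, 0));  /* 3 */
        printf("core 3 -> attr %d\n", find_attr_no(cpu_map, 3));  /* 2 */
        return 0;
    }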
static int coretemp_probe(struct platform_device *pdev)
return -ENOMEM;
pdata->pkg_id = pdev->id;
+ ida_init(&pdata->ida);
platform_set_drvdata(pdev, pdata);
pdata->hwmon_dev = devm_hwmon_device_register_with_groups(dev, DRVNAME,
if (pdata->core_data[i])
coretemp_remove_core(pdata, i);
+ ida_destroy(&pdata->ida);
return 0;
}
struct platform_device *pdev = coretemp_get_pdev(cpu);
struct platform_data *pd;
struct temp_data *tdata;
- int indx, target;
+ int i, indx = -1, target;
/*
* Don't execute this on suspend as the device remove locks
if (!pdev)
return 0;
- /* The core id is too big, just return */
- indx = TO_ATTR_NO(cpu);
- if (indx > MAX_CORE_DATA - 1)
+ pd = platform_get_drvdata(pdev);
+
+ for (i = 0; i < NUM_REAL_CORES; i++) {
+ if (pd->cpu_map[i] == topology_core_id(cpu)) {
+ indx = i + BASE_SYSFS_ATTR_NO;
+ break;
+ }
+ }
+
+	/* This core was never mapped to a slot (too many cores), just return */
+ if (indx < 0)
return 0;
- pd = platform_get_drvdata(pdev);
tdata = pd->core_data[indx];
cpumask_clear_cpu(cpu, &pd->cpumask);
{ HID_USB_DEVICE(0x1b1c, 0x1c0b) }, /* Corsair RM750i */
{ HID_USB_DEVICE(0x1b1c, 0x1c0c) }, /* Corsair RM850i */
{ HID_USB_DEVICE(0x1b1c, 0x1c0d) }, /* Corsair RM1000i */
- { HID_USB_DEVICE(0x1b1c, 0x1c1e) }, /* Corsaur HX1000i revision 2 */
+ { HID_USB_DEVICE(0x1b1c, 0x1c1e) }, /* Corsair HX1000i revision 2 */
+ { HID_USB_DEVICE(0x1b1c, 0x1c1f) }, /* Corsair HX1500i */
{ },
};
MODULE_DEVICE_TABLE(hid, corsairpsu_idtable);
#define PMBUS_REGULATOR_STEP(_name, _id, _voltages, _step) \
[_id] = { \
.name = (_name # _id), \
- .supply_name = "vin", \
.id = (_id), \
.of_match = of_match_ptr(_name # _id), \
.regulators_node = of_match_ptr("regulators"), \
if (val == 0) {
/* Disable pwm-fan unconditionally */
- ret = __set_pwm(ctx, 0);
+ if (ctx->enabled)
+ ret = __set_pwm(ctx, 0);
+ else
+ ret = pwm_fan_switch_power(ctx, false);
if (ret)
ctx->enable_mode = old_val;
pwm_fan_update_state(ctx, 0);
const struct scmi_sensor_info **info[hwmon_max];
};
+struct scmi_thermal_sensor {
+ const struct scmi_protocol_handle *ph;
+ const struct scmi_sensor_info *info;
+};
+
static inline u64 __pow10(u8 x)
{
u64 r = 1;
return 0;
}
-static int scmi_hwmon_read(struct device *dev, enum hwmon_sensor_types type,
- u32 attr, int channel, long *val)
+static int scmi_hwmon_read_scaled_value(const struct scmi_protocol_handle *ph,
+ const struct scmi_sensor_info *sensor,
+ long *val)
{
int ret;
u64 value;
- const struct scmi_sensor_info *sensor;
- struct scmi_sensors *scmi_sensors = dev_get_drvdata(dev);
- sensor = *(scmi_sensors->info[type] + channel);
- ret = sensor_ops->reading_get(scmi_sensors->ph, sensor->id, &value);
+ ret = sensor_ops->reading_get(ph, sensor->id, &value);
if (ret)
return ret;
return ret;
}
+static int scmi_hwmon_read(struct device *dev, enum hwmon_sensor_types type,
+ u32 attr, int channel, long *val)
+{
+ const struct scmi_sensor_info *sensor;
+ struct scmi_sensors *scmi_sensors = dev_get_drvdata(dev);
+
+ sensor = *(scmi_sensors->info[type] + channel);
+
+ return scmi_hwmon_read_scaled_value(scmi_sensors->ph, sensor, val);
+}
+
static int
scmi_hwmon_read_string(struct device *dev, enum hwmon_sensor_types type,
u32 attr, int channel, const char **str)
.info = NULL,
};
+static int scmi_hwmon_thermal_get_temp(struct thermal_zone_device *tz,
+ int *temp)
+{
+ int ret;
+ long value;
+ struct scmi_thermal_sensor *th_sensor = tz->devdata;
+
+ ret = scmi_hwmon_read_scaled_value(th_sensor->ph, th_sensor->info,
+ &value);
+ if (!ret)
+ *temp = value;
+
+ return ret;
+}
+
+static const struct thermal_zone_device_ops scmi_hwmon_thermal_ops = {
+ .get_temp = scmi_hwmon_thermal_get_temp,
+};
+
static int scmi_hwmon_add_chan_info(struct hwmon_channel_info *scmi_hwmon_chan,
struct device *dev, int num,
enum hwmon_sensor_types type, u32 config)
};
static u32 hwmon_attributes[hwmon_max] = {
- [hwmon_chip] = HWMON_C_REGISTER_TZ,
[hwmon_temp] = HWMON_T_INPUT | HWMON_T_LABEL,
[hwmon_in] = HWMON_I_INPUT | HWMON_I_LABEL,
[hwmon_curr] = HWMON_C_INPUT | HWMON_C_LABEL,
[hwmon_energy] = HWMON_E_INPUT | HWMON_E_LABEL,
};
+static int scmi_thermal_sensor_register(struct device *dev,
+ const struct scmi_protocol_handle *ph,
+ const struct scmi_sensor_info *sensor)
+{
+ struct scmi_thermal_sensor *th_sensor;
+ struct thermal_zone_device *tzd;
+
+ th_sensor = devm_kzalloc(dev, sizeof(*th_sensor), GFP_KERNEL);
+ if (!th_sensor)
+ return -ENOMEM;
+
+ th_sensor->ph = ph;
+ th_sensor->info = sensor;
+
+ /*
+ * Try to register a temperature sensor with the Thermal Framework:
+ * skip sensors not defined as part of any thermal zone (-ENODEV) but
+ * report any other errors related to misconfigured zones/sensors.
+ */
+ tzd = devm_thermal_of_zone_register(dev, th_sensor->info->id, th_sensor,
+ &scmi_hwmon_thermal_ops);
+ if (IS_ERR(tzd)) {
+ devm_kfree(dev, th_sensor);
+
+ if (PTR_ERR(tzd) != -ENODEV)
+ return PTR_ERR(tzd);
+
+ dev_dbg(dev, "Sensor '%s' not attached to any thermal zone.\n",
+ sensor->name);
+ } else {
+ dev_dbg(dev, "Sensor '%s' attached to thermal zone ID:%d\n",
+ sensor->name, tzd->id);
+ }
+
+ return 0;
+}
+
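The registration helper above deliberately treats -ENODEV from devm_thermal_of_zone_register() as "no thermal zone defined for this sensor" rather than a failure, while all other errors propagate. A standalone sketch of that error policy; register_zone() is a stand-in, not the real API::

    #include <stdio.h>
    #include <errno.h>

    /* stand-in: -ENODEV when no zone references this sensor id (assumed
     * behavior for the sketch), 0 on success */
    static int register_zone(int sensor_id)
    {
        return sensor_id == 7 ? -ENODEV : 0;
    }

    static int register_sensor(int sensor_id)
    {
        int ret = register_zone(sensor_id);

        if (ret == -ENODEV) {
            printf("sensor %d: no thermal zone, skipping\n", sensor_id);
            return 0;          /* not an error: zone simply not defined */
        }
        return ret;            /* misconfiguration: propagate */
    }

    int main(void)
    {
        register_sensor(3);    /* registers fine */
        register_sensor(7);    /* silently skipped */
        return 0;
    }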
static int scmi_hwmon_probe(struct scmi_device *sdev)
{
int i, idx;
enum hwmon_sensor_types type;
struct scmi_sensors *scmi_sensors;
const struct scmi_sensor_info *sensor;
- int nr_count[hwmon_max] = {0}, nr_types = 0;
+ int nr_count[hwmon_max] = {0}, nr_types = 0, nr_count_temp = 0;
const struct hwmon_chip_info *chip_info;
struct device *hwdev, *dev = &sdev->dev;
struct hwmon_channel_info *scmi_hwmon_chan;
}
}
- if (nr_count[hwmon_temp]) {
- nr_count[hwmon_chip]++;
- nr_types++;
- }
+ if (nr_count[hwmon_temp])
+ nr_count_temp = nr_count[hwmon_temp];
scmi_hwmon_chan = devm_kcalloc(dev, nr_types, sizeof(*scmi_hwmon_chan),
GFP_KERNEL);
hwdev = devm_hwmon_device_register_with_info(dev, "scmi_sensors",
scmi_sensors, chip_info,
NULL);
+ if (IS_ERR(hwdev))
+ return PTR_ERR(hwdev);
- return PTR_ERR_OR_ZERO(hwdev);
+ for (i = 0; i < nr_count_temp; i++) {
+ int ret;
+
+ sensor = *(scmi_sensors->info[hwmon_temp] + i);
+ if (!sensor)
+ continue;
+
+ /*
+ * Warn on any misconfiguration related to thermal zones but
+ * bail out of probing only on memory errors.
+ */
+ ret = scmi_thermal_sensor_register(dev, ph, sensor);
+ if (ret) {
+ if (ret == -ENOMEM)
+ return ret;
+ dev_warn(dev,
+ "Thermal zone misconfigured for %s. err=%d\n",
+ sensor->name, ret);
+ }
+ }
+
+ return 0;
}
static const struct scmi_device_id scmi_id_table[] = {
ret = coresight_fixup_device_conns(csdev);
if (!ret)
ret = coresight_fixup_orphan_conns(csdev);
- if (!ret && cti_assoc_ops && cti_assoc_ops->add)
- cti_assoc_ops->add(csdev);
out_unlock:
mutex_unlock(&coresight_mutex);
/* Success */
- if (!ret)
+ if (!ret) {
+ if (cti_assoc_ops && cti_assoc_ops->add)
+ cti_assoc_ops->add(csdev);
return csdev;
+ }
/* Unregister the device if needed */
if (registered) {
static int cti_enable_hw(struct cti_drvdata *drvdata)
{
struct cti_config *config = &drvdata->config;
- struct device *dev = &drvdata->csdev->dev;
unsigned long flags;
int rc = 0;
- pm_runtime_get_sync(dev->parent);
spin_lock_irqsave(&drvdata->spinlock, flags);
/* no need to do anything if enabled or unpowered*/
/* cannot enable due to error */
cti_err_not_enabled:
spin_unlock_irqrestore(&drvdata->spinlock, flags);
- pm_runtime_put(dev->parent);
return rc;
}
static int cti_disable_hw(struct cti_drvdata *drvdata)
{
struct cti_config *config = &drvdata->config;
- struct device *dev = &drvdata->csdev->dev;
struct coresight_device *csdev = drvdata->csdev;
spin_lock(&drvdata->spinlock);
coresight_disclaim_device_unlocked(csdev);
CS_LOCK(drvdata->base);
spin_unlock(&drvdata->spinlock);
- pm_runtime_put(dev->parent);
return 0;
/* not disabled this call */
/*
* Search the cti list to add an associated CTI into the supplied CS device
* This will set the association if CTI declared before the CS device.
- * (called from coresight_register() with coresight_mutex locked).
+ * (called from coresight_register() without coresight_mutex locked).
*/
static void cti_add_assoc_to_csdev(struct coresight_device *csdev)
{
* if we found a matching csdev then update the ECT
* association pointer for the device with this CTI.
*/
- csdev->ect_dev = ect_item->csdev;
+ coresight_set_assoc_ectdev_mutex(csdev->ect_dev,
+ ect_item->csdev);
break;
}
}
config I2C_MLXBF
tristate "Mellanox BlueField I2C controller"
depends on MELLANOX_PLATFORM && ARM64
+ depends on ACPI
select I2C_SLAVE
help
Enabling this option will add I2C SMBus support for Mellanox BlueField
*/
{ "Latitude 5480", 0x29 },
{ "Vostro V131", 0x1d },
+ { "Vostro 5568", 0x29 },
};
static void register_dell_lis3lv02d_i2c_device(struct i801_priv *priv)
.max_write_len = MLXBF_I2C_MASTER_DATA_W_LENGTH,
};
-#ifdef CONFIG_ACPI
static const struct acpi_device_id mlxbf_i2c_acpi_ids[] = {
{ "MLNXBF03", (kernel_ulong_t)&mlxbf_i2c_chip[MLXBF_I2C_CHIP_TYPE_1] },
{ "MLNXBF23", (kernel_ulong_t)&mlxbf_i2c_chip[MLXBF_I2C_CHIP_TYPE_2] },
return 0;
}
-#else
-static int mlxbf_i2c_acpi_probe(struct device *dev, struct mlxbf_i2c_priv *priv)
-{
- return -ENOENT;
-}
-#endif /* CONFIG_ACPI */
static int mlxbf_i2c_probe(struct platform_device *pdev)
{
.remove = mlxbf_i2c_remove,
.driver = {
.name = "i2c-mlxbf",
-#ifdef CONFIG_ACPI
.acpi_match_table = ACPI_PTR(mlxbf_i2c_acpi_ids),
-#endif /* CONFIG_ACPI */
},
};
#define MLXCPLD_LPCI2C_STATUS_REG 0x9
#define MLXCPLD_LPCI2C_DATA_REG 0xa
-/* LPC I2C masks and parametres */
+/* LPC I2C masks and parameters */
#define MLXCPLD_LPCI2C_RST_SEL_MASK 0x1
#define MLXCPLD_LPCI2C_TRANS_END 0x1
#define MLXCPLD_LPCI2C_STATUS_NACK 0x10
"", &piix4_main_adapters[0]);
if (retval < 0)
return retval;
+ piix4_adapter_count = 1;
}
/* Check for auxiliary SMBus on some AMD chipsets */
if (ret < 0)
goto error;
+ pm_runtime_set_autosuspend_delay(dev, MSEC_PER_SEC);
+ pm_runtime_use_autosuspend(dev);
+ pm_runtime_set_active(dev);
+ pm_runtime_enable(dev);
+
for (i = 0; i < cci->data->num_masters; i++) {
if (!cci->master[i].cci)
continue;
}
}
- pm_runtime_set_autosuspend_delay(dev, MSEC_PER_SEC);
- pm_runtime_use_autosuspend(dev);
- pm_runtime_set_active(dev);
- pm_runtime_enable(dev);
-
return 0;
error_i2c:
+ pm_runtime_disable(dev);
+ pm_runtime_dont_use_autosuspend(dev);
+
for (--i ; i >= 0; i--) {
if (cci->master[i].cci) {
i2c_del_adapter(&cci->master[i].adap);
module_param(force, bool, 0);
MODULE_PARM_DESC(force, "Forcibly enable the SIS630. DANGEROUS!");
-/* SMBus base adress */
+/* SMBus base address */
static unsigned short smbus_base;
/* supported chips */
struct dma_chan *tx_dma_chan;
struct dma_chan *rx_dma_chan;
unsigned int dma_buf_size;
+ struct device *dma_dev;
dma_addr_t dma_phys;
void *dma_buf;
static void tegra_i2c_release_dma(struct tegra_i2c_dev *i2c_dev)
{
if (i2c_dev->dma_buf) {
- dma_free_coherent(i2c_dev->dev, i2c_dev->dma_buf_size,
+ dma_free_coherent(i2c_dev->dma_dev, i2c_dev->dma_buf_size,
i2c_dev->dma_buf, i2c_dev->dma_phys);
i2c_dev->dma_buf = NULL;
}
i2c_dev->tx_dma_chan = chan;
+ WARN_ON(i2c_dev->tx_dma_chan->device != i2c_dev->rx_dma_chan->device);
+ i2c_dev->dma_dev = chan->device->dev;
+
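+ /*
+ * Note (sketch, not part of the fix as written): every DMA API call
+ * below — allocation, free and the sync operations — must target the
+ * DMA engine's device (dma_dev), not the I2C controller's.
+ */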
i2c_dev->dma_buf_size = i2c_dev->hw->quirks->max_write_len +
I2C_PACKET_HEADER_SIZE;
- dma_buf = dma_alloc_coherent(i2c_dev->dev, i2c_dev->dma_buf_size,
+ dma_buf = dma_alloc_coherent(i2c_dev->dma_dev, i2c_dev->dma_buf_size,
&dma_phys, GFP_KERNEL | __GFP_NOWARN);
if (!dma_buf) {
dev_err(i2c_dev->dev, "failed to allocate DMA buffer\n");
if (i2c_dev->dma_mode) {
if (i2c_dev->msg_read) {
- dma_sync_single_for_device(i2c_dev->dev,
+ dma_sync_single_for_device(i2c_dev->dma_dev,
i2c_dev->dma_phys,
xfer_size, DMA_FROM_DEVICE);
if (err)
return err;
} else {
- dma_sync_single_for_cpu(i2c_dev->dev,
+ dma_sync_single_for_cpu(i2c_dev->dma_dev,
i2c_dev->dma_phys,
xfer_size, DMA_TO_DEVICE);
}
memcpy(i2c_dev->dma_buf + I2C_PACKET_HEADER_SIZE,
msg->buf, msg->len);
- dma_sync_single_for_device(i2c_dev->dev,
+ dma_sync_single_for_device(i2c_dev->dma_dev,
i2c_dev->dma_phys,
xfer_size, DMA_TO_DEVICE);
}
if (i2c_dev->msg_read && i2c_dev->msg_err == I2C_ERR_NONE) {
- dma_sync_single_for_cpu(i2c_dev->dev,
+ dma_sync_single_for_cpu(i2c_dev->dma_dev,
i2c_dev->dma_phys,
xfer_size, DMA_FROM_DEVICE);
module_platform_driver(xiic_i2c_driver);
+MODULE_ALIAS("platform:" DRIVER_NAME);
MODULE_AUTHOR("info@mocean-labs.com");
MODULE_DESCRIPTION("Xilinx I2C bus driver");
MODULE_LICENSE("GPL v2");
return sysfs_emit(buf, "%d\n", fifo_watermark);
}
-static IIO_CONST_ATTR(hwfifo_watermark_min, "1");
-static IIO_CONST_ATTR(hwfifo_watermark_max,
- __stringify(ADXL367_FIFO_MAX_WATERMARK));
+static ssize_t hwfifo_watermark_min_show(struct device *dev,
+ struct device_attribute *attr,
+ char *buf)
+{
+ return sysfs_emit(buf, "%s\n", "1");
+}
+
+static ssize_t hwfifo_watermark_max_show(struct device *dev,
+ struct device_attribute *attr,
+ char *buf)
+{
+ return sysfs_emit(buf, "%s\n", __stringify(ADXL367_FIFO_MAX_WATERMARK));
+}
+
+static IIO_DEVICE_ATTR_RO(hwfifo_watermark_min, 0);
+static IIO_DEVICE_ATTR_RO(hwfifo_watermark_max, 0);
static IIO_DEVICE_ATTR(hwfifo_watermark, 0444,
adxl367_get_fifo_watermark, NULL, 0);
static IIO_DEVICE_ATTR(hwfifo_enabled, 0444,
adxl367_get_fifo_enabled, NULL, 0);
static const struct attribute *adxl367_fifo_attributes[] = {
- &iio_const_attr_hwfifo_watermark_min.dev_attr.attr,
- &iio_const_attr_hwfifo_watermark_max.dev_attr.attr,
+ &iio_dev_attr_hwfifo_watermark_min.dev_attr.attr,
+ &iio_dev_attr_hwfifo_watermark_max.dev_attr.attr,
&iio_dev_attr_hwfifo_watermark.dev_attr.attr,
&iio_dev_attr_hwfifo_enabled.dev_attr.attr,
NULL,
return sprintf(buf, "%d\n", st->watermark);
}
-static IIO_CONST_ATTR(hwfifo_watermark_min, "1");
-static IIO_CONST_ATTR(hwfifo_watermark_max,
- __stringify(ADXL372_FIFO_SIZE));
+static ssize_t hwfifo_watermark_min_show(struct device *dev,
+ struct device_attribute *attr,
+ char *buf)
+{
+ return sysfs_emit(buf, "%s\n", "1");
+}
+
+static ssize_t hwfifo_watermark_max_show(struct device *dev,
+ struct device_attribute *attr,
+ char *buf)
+{
+ return sysfs_emit(buf, "%s\n", __stringify(ADXL372_FIFO_SIZE));
+}
+
+static IIO_DEVICE_ATTR_RO(hwfifo_watermark_min, 0);
+static IIO_DEVICE_ATTR_RO(hwfifo_watermark_max, 0);
static IIO_DEVICE_ATTR(hwfifo_watermark, 0444,
adxl372_get_fifo_watermark, NULL, 0);
static IIO_DEVICE_ATTR(hwfifo_enabled, 0444,
adxl372_get_fifo_enabled, NULL, 0);
static const struct attribute *adxl372_fifo_attributes[] = {
- &iio_const_attr_hwfifo_watermark_min.dev_attr.attr,
- &iio_const_attr_hwfifo_watermark_max.dev_attr.attr,
+ &iio_dev_attr_hwfifo_watermark_min.dev_attr.attr,
+ &iio_dev_attr_hwfifo_watermark_max.dev_attr.attr,
&iio_dev_attr_hwfifo_watermark.dev_attr.attr,
&iio_dev_attr_hwfifo_enabled.dev_attr.attr,
NULL,
{ }
};
-static IIO_CONST_ATTR(hwfifo_watermark_min, "1");
-static IIO_CONST_ATTR(hwfifo_watermark_max,
- __stringify(BMC150_ACCEL_FIFO_LENGTH));
+static ssize_t hwfifo_watermark_min_show(struct device *dev,
+ struct device_attribute *attr,
+ char *buf)
+{
+ return sysfs_emit(buf, "%s\n", "1");
+}
+
+static ssize_t hwfifo_watermark_max_show(struct device *dev,
+ struct device_attribute *attr,
+ char *buf)
+{
+ return sysfs_emit(buf, "%s\n", __stringify(BMC150_ACCEL_FIFO_LENGTH));
+}
+
+static IIO_DEVICE_ATTR_RO(hwfifo_watermark_min, 0);
+static IIO_DEVICE_ATTR_RO(hwfifo_watermark_max, 0);
static IIO_DEVICE_ATTR(hwfifo_enabled, S_IRUGO,
bmc150_accel_get_fifo_state, NULL, 0);
static IIO_DEVICE_ATTR(hwfifo_watermark, S_IRUGO,
bmc150_accel_get_fifo_watermark, NULL, 0);
static const struct attribute *bmc150_accel_fifo_attributes[] = {
- &iio_const_attr_hwfifo_watermark_min.dev_attr.attr,
- &iio_const_attr_hwfifo_watermark_max.dev_attr.attr,
+ &iio_dev_attr_hwfifo_watermark_min.dev_attr.attr,
+ &iio_dev_attr_hwfifo_watermark_max.dev_attr.attr,
&iio_dev_attr_hwfifo_watermark.dev_attr.attr,
&iio_dev_attr_hwfifo_enabled.dev_attr.attr,
NULL,
return scnprintf(buf, PAGE_SIZE, "%d\n", st->dma_st.watermark);
}
+static ssize_t hwfifo_watermark_min_show(struct device *dev,
+ struct device_attribute *attr,
+ char *buf)
+{
+ return sysfs_emit(buf, "%s\n", "2");
+}
+
+static ssize_t hwfifo_watermark_max_show(struct device *dev,
+ struct device_attribute *attr,
+ char *buf)
+{
+ return sysfs_emit(buf, "%s\n", AT91_HWFIFO_MAX_SIZE_STR);
+}
+
static IIO_DEVICE_ATTR(hwfifo_enabled, 0444,
at91_adc_get_fifo_state, NULL, 0);
static IIO_DEVICE_ATTR(hwfifo_watermark, 0444,
at91_adc_get_watermark, NULL, 0);
-
-static IIO_CONST_ATTR(hwfifo_watermark_min, "2");
-static IIO_CONST_ATTR(hwfifo_watermark_max, AT91_HWFIFO_MAX_SIZE_STR);
+static IIO_DEVICE_ATTR_RO(hwfifo_watermark_min, 0);
+static IIO_DEVICE_ATTR_RO(hwfifo_watermark_max, 0);
static const struct attribute *at91_adc_fifo_attributes[] = {
- &iio_const_attr_hwfifo_watermark_min.dev_attr.attr,
- &iio_const_attr_hwfifo_watermark_max.dev_attr.attr,
+ &iio_dev_attr_hwfifo_watermark_min.dev_attr.attr,
+ &iio_dev_attr_hwfifo_watermark_max.dev_attr.attr,
&iio_dev_attr_hwfifo_watermark.dev_attr.attr,
&iio_dev_attr_hwfifo_enabled.dev_attr.attr,
NULL,
/* Internal voltage reference in mV */
#define MCP3911_INT_VREF_MV 1200
-#define MCP3911_REG_READ(reg, id) ((((reg) << 1) | ((id) << 5) | (1 << 0)) & 0xff)
-#define MCP3911_REG_WRITE(reg, id) ((((reg) << 1) | ((id) << 5) | (0 << 0)) & 0xff)
+#define MCP3911_REG_READ(reg, id) ((((reg) << 1) | ((id) << 6) | (1 << 0)) & 0xff)
+#define MCP3911_REG_WRITE(reg, id) ((((reg) << 1) | ((id) << 6) | (0 << 0)) & 0xff)
+#define MCP3911_REG_MASK GENMASK(4, 1)
#define MCP3911_NUM_CHANNELS 2
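+/*
+ * Worked example (not part of the fix): the device address now lands in
+ * bits [7:6] instead of [6:5]; e.g. MCP3911_REG_READ(0x5, 0) == 0x0b and
+ * MCP3911_REG_READ(0x5, 1) == 0x4b.
+ */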
be32_to_cpus(val);
*val >>= ((4 - len) * 8);
- dev_dbg(&adc->spi->dev, "reading 0x%x from register 0x%x\n", *val,
- reg >> 1);
+ dev_dbg(&adc->spi->dev, "reading 0x%x from register 0x%lx\n", *val,
+ FIELD_GET(MCP3911_REG_MASK, reg));
return ret;
}
break;
case IIO_CHAN_INFO_OVERSAMPLING_RATIO:
- for (int i = 0; i < sizeof(mcp3911_osr_table); i++) {
+ for (int i = 0; i < ARRAY_SIZE(mcp3911_osr_table); i++) {
if (val == mcp3911_osr_table[i]) {
val = FIELD_PREP(MCP3911_CONFIG_OSR, i);
ret = mcp3911_update(adc, MCP3911_REG_CONFIG, MCP3911_CONFIG_OSR,
indio_dev->name,
iio_device_id(indio_dev));
if (!adc->trig)
- return PTR_ERR(adc->trig);
+ return -ENOMEM;
adc->trig->ops = &mcp3911_trigger_ops;
iio_trigger_set_drvdata(adc->trig, adc);
stm32_adc_chan_init_one(indio_dev, &channels[scan_index], val,
vin[1], scan_index, differential);
+ val = 0;
ret = fwnode_property_read_u32(child, "st,min-sample-time-ns", &val);
/* st,min-sample-time-ns is optional */
- if (!ret) {
- stm32_adc_smpr_init(adc, channels[scan_index].channel, val);
- if (differential)
- stm32_adc_smpr_init(adc, vin[1], val);
- } else if (ret != -EINVAL) {
+ if (ret && ret != -EINVAL) {
dev_err(&indio_dev->dev, "Invalid st,min-sample-time-ns property %d\n",
ret);
goto err;
}
+ stm32_adc_smpr_init(adc, channels[scan_index].channel, val);
+ if (differential)
+ stm32_adc_smpr_init(adc, vin[1], val);
+
scan_index++;
}
TSL2583_POWER_OFF_DELAY_MS);
pm_runtime_use_autosuspend(&clientp->dev);
- ret = devm_iio_device_register(indio_dev->dev.parent, indio_dev);
+ ret = iio_device_register(indio_dev);
if (ret) {
dev_err(&clientp->dev, "%s: iio registration failed\n",
__func__);
return ret;
}
- st->iio_chan = devm_kzalloc(&st->spi->dev,
- st->iio_channels * sizeof(*st->iio_chan),
- GFP_KERNEL);
-
- if (!st->iio_chan)
- return -ENOMEM;
-
ret = regmap_update_bits(st->regmap, LTC2983_GLOBAL_CONFIG_REG,
LTC2983_NOTCH_FREQ_MASK,
LTC2983_NOTCH_FREQ(st->filter_notch_freq));
gpiod_set_value_cansleep(gpio, 0);
}
+ st->iio_chan = devm_kzalloc(&spi->dev,
+ st->iio_channels * sizeof(*st->iio_chan),
+ GFP_KERNEL);
+ if (!st->iio_chan)
+ return -ENOMEM;
+
ret = ltc2983_setup(st, true);
if (ret)
return ret;
return false;
memset(&fl4, 0, sizeof(fl4));
- fl4.flowi4_iif = net_dev->ifindex;
+ fl4.flowi4_oif = net_dev->ifindex;
fl4.daddr = daddr;
fl4.saddr = saddr;
nldev_init();
rdma_nl_register(RDMA_NL_LS, ibnl_ls_cb_table);
- roce_gid_mgmt_init();
+ ret = roce_gid_mgmt_init();
+ if (ret) {
+ pr_warn("Couldn't init RoCE GID management\n");
+ goto err_parent;
+ }
return 0;
+err_parent:
+ rdma_nl_unregister(RDMA_NL_LS);
+ nldev_exit();
+ unregister_pernet_device(&rdma_dev_net_ops);
err_compat:
unregister_blocking_lsm_notifier(&ibdev_lsm_nb);
err_sa:
rdma_nl_register(RDMA_NL_NLDEV, nldev_cb_table);
}
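+/* not __exit: nldev_exit() is now also called from the init error path */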
-void __exit nldev_exit(void)
+void nldev_exit(void)
{
rdma_nl_unregister(RDMA_NL_NLDEV);
}
// SPDX-License-Identifier: GPL-2.0 OR BSD-2-Clause
/*
- * Copyright 2018-2021 Amazon.com, Inc. or its affiliates. All rights reserved.
+ * Copyright 2018-2022 Amazon.com, Inc. or its affiliates. All rights reserved.
*/
#include <linux/module.h>
#define PCI_DEV_ID_EFA0_VF 0xefa0
#define PCI_DEV_ID_EFA1_VF 0xefa1
+#define PCI_DEV_ID_EFA2_VF 0xefa2
static const struct pci_device_id efa_pci_tbl[] = {
{ PCI_VDEVICE(AMAZON, PCI_DEV_ID_EFA0_VF) },
{ PCI_VDEVICE(AMAZON, PCI_DEV_ID_EFA1_VF) },
+ { PCI_VDEVICE(AMAZON, PCI_DEV_ID_EFA2_VF) },
{ }
};
spin_unlock(&sc->release_lock);
write_seqlock(&sc->waitlock);
- if (!list_empty(&sc->piowait))
- list_move(&sc->piowait, &wake_list);
+ list_splice_init(&sc->piowait, &wake_list);
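+ /* splice every waiter over; list_move() would only relink the list head */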
write_sequnlock(&sc->waitlock);
while (!list_empty(&wake_list)) {
struct iowait *wait;
HR_OPC_MAP(ATOMIC_CMP_AND_SWP, ATOM_CMP_AND_SWAP),
HR_OPC_MAP(ATOMIC_FETCH_AND_ADD, ATOM_FETCH_AND_ADD),
HR_OPC_MAP(SEND_WITH_INV, SEND_WITH_INV),
- HR_OPC_MAP(LOCAL_INV, LOCAL_INV),
HR_OPC_MAP(MASKED_ATOMIC_CMP_AND_SWP, ATOM_MSK_CMP_AND_SWAP),
HR_OPC_MAP(MASKED_ATOMIC_FETCH_AND_ADD, ATOM_MSK_FETCH_AND_ADD),
HR_OPC_MAP(REG_MR, FAST_REG_PMR),
else
ret = -EOPNOTSUPP;
break;
- case IB_WR_LOCAL_INV:
- hr_reg_enable(rc_sq_wqe, RC_SEND_WQE_SO);
- fallthrough;
case IB_WR_SEND_WITH_INV:
rc_sq_wqe->inv_key = cpu_to_le32(wr->ex.invalidate_rkey);
break;
static int free_mr_init(struct hns_roce_dev *hr_dev)
{
+ struct hns_roce_v2_priv *priv = hr_dev->priv;
+ struct hns_roce_v2_free_mr *free_mr = &priv->free_mr;
int ret;
+ mutex_init(&free_mr->mutex);
+
ret = free_mr_alloc_res(hr_dev);
if (ret)
return ret;
hr_reg_write(mpt_entry, MPT_ST, V2_MPT_ST_VALID);
hr_reg_write(mpt_entry, MPT_PD, mr->pd);
- hr_reg_enable(mpt_entry, MPT_L_INV_EN);
hr_reg_write_bool(mpt_entry, MPT_BIND_EN,
mr->access & IB_ACCESS_MW_BIND);
hr_reg_enable(mpt_entry, MPT_RA_EN);
hr_reg_enable(mpt_entry, MPT_R_INV_EN);
- hr_reg_enable(mpt_entry, MPT_L_INV_EN);
hr_reg_enable(mpt_entry, MPT_FRE);
hr_reg_clear(mpt_entry, MPT_MR_MW);
hr_reg_write(mpt_entry, MPT_PD, mw->pdn);
hr_reg_enable(mpt_entry, MPT_R_INV_EN);
- hr_reg_enable(mpt_entry, MPT_L_INV_EN);
hr_reg_enable(mpt_entry, MPT_LW_EN);
hr_reg_enable(mpt_entry, MPT_MR_MW);
HR_WC_OP_MAP(RDMA_READ, RDMA_READ),
HR_WC_OP_MAP(RDMA_WRITE, RDMA_WRITE),
HR_WC_OP_MAP(RDMA_WRITE_WITH_IMM, RDMA_WRITE),
- HR_WC_OP_MAP(LOCAL_INV, LOCAL_INV),
HR_WC_OP_MAP(ATOM_CMP_AND_SWAP, COMP_SWAP),
HR_WC_OP_MAP(ATOM_FETCH_AND_ADD, FETCH_ADD),
HR_WC_OP_MAP(ATOM_MSK_CMP_AND_SWAP, MASKED_COMP_SWAP),
case HNS_ROCE_V2_WQE_OP_RDMA_WRITE_WITH_IMM:
wc->wc_flags |= IB_WC_WITH_IMM;
break;
- case HNS_ROCE_V2_WQE_OP_LOCAL_INV:
- wc->wc_flags |= IB_WC_WITH_INVALIDATE;
- break;
case HNS_ROCE_V2_WQE_OP_ATOM_CMP_AND_SWAP:
case HNS_ROCE_V2_WQE_OP_ATOM_FETCH_AND_ADD:
case HNS_ROCE_V2_WQE_OP_ATOM_MSK_CMP_AND_SWAP:
HNS_ROCE_V2_WQE_OP_ATOM_MSK_CMP_AND_SWAP = 0x8,
HNS_ROCE_V2_WQE_OP_ATOM_MSK_FETCH_AND_ADD = 0x9,
HNS_ROCE_V2_WQE_OP_FAST_REG_PMR = 0xa,
- HNS_ROCE_V2_WQE_OP_LOCAL_INV = 0xb,
HNS_ROCE_V2_WQE_OP_BIND_MW = 0xc,
HNS_ROCE_V2_WQE_OP_MASK = 0x1f,
};
#define RC_SEND_WQE_OWNER RC_SEND_WQE_FIELD_LOC(7, 7)
#define RC_SEND_WQE_CQE RC_SEND_WQE_FIELD_LOC(8, 8)
#define RC_SEND_WQE_FENCE RC_SEND_WQE_FIELD_LOC(9, 9)
-#define RC_SEND_WQE_SO RC_SEND_WQE_FIELD_LOC(10, 10)
#define RC_SEND_WQE_SE RC_SEND_WQE_FIELD_LOC(11, 11)
#define RC_SEND_WQE_INLINE RC_SEND_WQE_FIELD_LOC(12, 12)
#define RC_SEND_WQE_WQE_INDEX RC_SEND_WQE_FIELD_LOC(30, 15)
if (IS_IWARP(dev)) {
xa_init(&dev->qps);
dev->iwarp_wq = create_singlethread_workqueue("qedr_iwarpq");
+ if (!dev->iwarp_wq) {
+ rc = -ENOMEM;
+ goto err1;
+ }
}
/* Allocate Status blocks for CNQ */
GFP_KERNEL);
if (!dev->sb_array) {
rc = -ENOMEM;
- goto err1;
+ goto err_destroy_wq;
}
dev->cnq_array = kcalloc(dev->num_cnq,
kfree(dev->cnq_array);
err2:
kfree(dev->sb_array);
+err_destroy_wq:
+ if (IS_IWARP(dev))
+ destroy_workqueue(dev->iwarp_wq);
err1:
kfree(dev->sgid_tbl);
return rc;
skb = prepare_ack_packet(qp, &ack_pkt, opcode, payload,
res->cur_psn, AETH_ACK_UNLIMITED);
- if (!skb)
+ if (!skb) {
+ rxe_put(mr);
return RESPST_ERR_RNR;
+ }
rxe_mr_copy(mr, res->read.va, payload_addr(&ack_pkt),
payload, RXE_FROM_MR_OBJ);
type = IOMMU_RESV_RESERVED;
region = iommu_alloc_resv_region(entry->address_start,
- length, prot, type);
+ length, prot, type,
+ GFP_KERNEL);
if (!region) {
dev_err(dev, "Out of memory allocating dm-regions\n");
return;
region = iommu_alloc_resv_region(MSI_RANGE_START,
MSI_RANGE_END - MSI_RANGE_START + 1,
- 0, IOMMU_RESV_MSI);
+ 0, IOMMU_RESV_MSI, GFP_KERNEL);
if (!region)
return;
list_add_tail(&region->list, head);
region = iommu_alloc_resv_region(HT_RANGE_START,
HT_RANGE_END - HT_RANGE_START + 1,
- 0, IOMMU_RESV_RESERVED);
+ 0, IOMMU_RESV_RESERVED, GFP_KERNEL);
if (!region)
return;
list_add_tail(&region->list, head);
region = iommu_alloc_resv_region(DOORBELL_ADDR,
PAGE_SIZE, prot,
- IOMMU_RESV_MSI);
+ IOMMU_RESV_MSI, GFP_KERNEL);
if (!region)
return;
int prot = IOMMU_WRITE | IOMMU_NOEXEC | IOMMU_MMIO;
region = iommu_alloc_resv_region(MSI_IOVA_BASE, MSI_IOVA_LENGTH,
- prot, IOMMU_RESV_SW_MSI);
+ prot, IOMMU_RESV_SW_MSI, GFP_KERNEL);
if (!region)
return;
int prot = IOMMU_WRITE | IOMMU_NOEXEC | IOMMU_MMIO;
region = iommu_alloc_resv_region(MSI_IOVA_BASE, MSI_IOVA_LENGTH,
- prot, IOMMU_RESV_SW_MSI);
+ prot, IOMMU_RESV_SW_MSI, GFP_KERNEL);
if (!region)
return;
if (md_domain_init(si_domain, DEFAULT_DOMAIN_ADDRESS_WIDTH)) {
domain_exit(si_domain);
+ si_domain = NULL;
return -EFAULT;
}
disable_dmar_iommu(iommu);
free_dmar_iommu(iommu);
}
+ if (si_domain) {
+ domain_exit(si_domain);
+ si_domain = NULL;
+ }
return ret;
}
struct device *i_dev;
int i;
- down_read(&dmar_global_lock);
+ rcu_read_lock();
for_each_rmrr_units(rmrr) {
for_each_active_dev_scope(rmrr->devices, rmrr->devices_cnt,
i, i_dev) {
IOMMU_RESV_DIRECT_RELAXABLE : IOMMU_RESV_DIRECT;
resv = iommu_alloc_resv_region(rmrr->base_address,
- length, prot, type);
+ length, prot, type,
+ GFP_ATOMIC);
if (!resv)
break;
list_add_tail(&resv->list, head);
}
}
- up_read(&dmar_global_lock);
+ rcu_read_unlock();
#ifdef CONFIG_INTEL_IOMMU_FLOPPY_WA
if (dev_is_pci(device)) {
if ((pdev->class >> 8) == PCI_CLASS_BRIDGE_ISA) {
reg = iommu_alloc_resv_region(0, 1UL << 24, prot,
- IOMMU_RESV_DIRECT_RELAXABLE);
+ IOMMU_RESV_DIRECT_RELAXABLE,
+ GFP_KERNEL);
if (reg)
list_add_tail(&reg->list, head);
}
reg = iommu_alloc_resv_region(IOAPIC_RANGE_START,
IOAPIC_RANGE_END - IOAPIC_RANGE_START + 1,
- 0, IOMMU_RESV_MSI);
+ 0, IOMMU_RESV_MSI, GFP_KERNEL);
if (!reg)
return;
list_add_tail(&reg->list, head);
LIST_HEAD(stack);
nr = iommu_alloc_resv_region(new->start, new->length,
- new->prot, new->type);
+ new->prot, new->type, GFP_KERNEL);
if (!nr)
return -ENOMEM;
struct iommu_resv_region *iommu_alloc_resv_region(phys_addr_t start,
size_t length, int prot,
- enum iommu_resv_type type)
+ enum iommu_resv_type type,
+ gfp_t gfp)
{
struct iommu_resv_region *region;
- region = kzalloc(sizeof(*region), GFP_KERNEL);
+ region = kzalloc(sizeof(*region), gfp);
if (!region)
return NULL;
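+/*
+ * Caller sketch (assumed names, not from this series): the new gfp
+ * argument lets atomic contexts, such as the RCU-protected RMRR walk
+ * above, allocate without sleeping:
+ *
+ *	rcu_read_lock();
+ *	resv = iommu_alloc_resv_region(base, length, prot, type, GFP_ATOMIC);
+ *	rcu_read_unlock();
+ */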
continue;
region = iommu_alloc_resv_region(resv->iova_base, resv->size,
- prot, IOMMU_RESV_RESERVED);
+ prot, IOMMU_RESV_RESERVED,
+ GFP_KERNEL);
if (!region)
return;
fallthrough;
case VIRTIO_IOMMU_RESV_MEM_T_RESERVED:
region = iommu_alloc_resv_region(start, size, 0,
- IOMMU_RESV_RESERVED);
+ IOMMU_RESV_RESERVED,
+ GFP_KERNEL);
break;
case VIRTIO_IOMMU_RESV_MEM_T_MSI:
region = iommu_alloc_resv_region(start, size, prot,
- IOMMU_RESV_MSI);
+ IOMMU_RESV_MSI,
+ GFP_KERNEL);
break;
}
if (!region)
*/
if (!msi) {
msi = iommu_alloc_resv_region(MSI_IOVA_BASE, MSI_IOVA_LENGTH,
- prot, IOMMU_RESV_SW_MSI);
+ prot, IOMMU_RESV_SW_MSI,
+ GFP_KERNEL);
if (!msi)
return;
}
if (card->irq > 0)
free_irq(card->irq, card);
- if (card->isac.dch.dev.dev.class)
+ if (device_is_registered(&card->isac.dch.dev.dev))
mISDN_unregister_device(&card->isac.dch.dev);
for (i = 0; i < 2; i++) {
if (debug & DEBUG_CORE)
printk(KERN_DEBUG "mISDN_register %s %d\n",
dev_name(&dev->dev), dev->id);
+ dev->dev.class = &mISDN_class;
+
err = create_stack(dev);
if (err)
goto error1;
- dev->dev.class = &mISDN_class;
dev->dev.platform_data = dev;
dev->dev.parent = parent;
dev_set_drvdata(&dev->dev, dev);
error3:
delete_stack(dev);
- return err;
error1:
+ put_device(&dev->dev);
return err;
}
static struct gpiod_lookup_table simatic_ipc_led_gpio_table_127e = {
.dev_id = "leds-gpio",
.table = {
- GPIO_LOOKUP_IDX("apollolake-pinctrl.0", 52, NULL, 1, GPIO_ACTIVE_LOW),
- GPIO_LOOKUP_IDX("apollolake-pinctrl.0", 53, NULL, 2, GPIO_ACTIVE_LOW),
- GPIO_LOOKUP_IDX("apollolake-pinctrl.0", 57, NULL, 3, GPIO_ACTIVE_LOW),
- GPIO_LOOKUP_IDX("apollolake-pinctrl.0", 58, NULL, 4, GPIO_ACTIVE_LOW),
- GPIO_LOOKUP_IDX("apollolake-pinctrl.0", 60, NULL, 5, GPIO_ACTIVE_LOW),
- GPIO_LOOKUP_IDX("apollolake-pinctrl.0", 51, NULL, 0, GPIO_ACTIVE_LOW),
+ GPIO_LOOKUP_IDX("apollolake-pinctrl.0", 52, NULL, 0, GPIO_ACTIVE_LOW),
+ GPIO_LOOKUP_IDX("apollolake-pinctrl.0", 53, NULL, 1, GPIO_ACTIVE_LOW),
+ GPIO_LOOKUP_IDX("apollolake-pinctrl.0", 57, NULL, 2, GPIO_ACTIVE_LOW),
+ GPIO_LOOKUP_IDX("apollolake-pinctrl.0", 58, NULL, 3, GPIO_ACTIVE_LOW),
+ GPIO_LOOKUP_IDX("apollolake-pinctrl.0", 60, NULL, 4, GPIO_ACTIVE_LOW),
+ GPIO_LOOKUP_IDX("apollolake-pinctrl.0", 51, NULL, 5, GPIO_ACTIVE_LOW),
GPIO_LOOKUP_IDX("apollolake-pinctrl.0", 56, NULL, 6, GPIO_ACTIVE_LOW),
GPIO_LOOKUP_IDX("apollolake-pinctrl.0", 59, NULL, 7, GPIO_ACTIVE_HIGH),
},
{
BUG_ON(b->hold_count);
- if (!b->state) /* fast case */
+ /* smp_load_acquire() pairs with read_endio()'s smp_mb__before_atomic() */
+ if (!smp_load_acquire(&b->state)) /* fast case */
return;
wait_on_bit_io(&b->state, B_READING, TASK_UNINTERRUPTIBLE);
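+/*
+ * Ordering sketch (assumed pairing, per the comment above):
+ *
+ *	read_endio() (writer)			fast path (reader)
+ *	b->read_error = ...;			if (!smp_load_acquire(&b->state))
+ *	smp_mb__before_atomic();		        return;  (sees read_error)
+ *	clear_bit(B_READING, &b->state);
+ */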
BUG_ON(test_bit(B_DIRTY, &b->state));
if (static_branch_unlikely(&no_sleep_enabled) && c->no_sleep &&
- unlikely(test_bit(B_READING, &b->state)))
+ unlikely(test_bit_acquire(B_READING, &b->state)))
continue;
if (!b->hold_count) {
* If the user called both dm_bufio_prefetch and dm_bufio_get on
* the same buffer, it would deadlock if we waited.
*/
- if (nf == NF_GET && unlikely(test_bit(B_READING, &b->state)))
+ if (nf == NF_GET && unlikely(test_bit_acquire(B_READING, &b->state)))
return NULL;
b->hold_count++;
* invalid buffer.
*/
if ((b->read_error || b->write_error) &&
- !test_bit(B_READING, &b->state) &&
+ !test_bit_acquire(B_READING, &b->state) &&
!test_bit(B_WRITING, &b->state) &&
!test_bit(B_DIRTY, &b->state)) {
__unlink_buffer(b);
static void forget_buffer_locked(struct dm_buffer *b)
{
- if (likely(!b->hold_count) && likely(!b->state)) {
+ if (likely(!b->hold_count) && likely(!smp_load_acquire(&b->state))) {
__unlink_buffer(b);
__free_buffer_wake(b);
}
{
if (!(gfp & __GFP_FS) ||
(static_branch_unlikely(&no_sleep_enabled) && b->c->no_sleep)) {
- if (test_bit(B_READING, &b->state) ||
+ if (test_bit_acquire(B_READING, &b->state) ||
test_bit(B_WRITING, &b->state) ||
test_bit(B_DIRTY, &b->state))
return false;
struct dm_cache_policy_type *real;
/*
- * Policies may store a hint for each each cache block.
+ * Policies may store a hint for each cache block.
* Currently the size of this hint must be 0 or 4 bytes but we
* expect to relax this in future.
*/
reason = "max discard sectors smaller than a region";
if (reason) {
- DMWARN("Destination device (%pd) %s: Disabling discard passdown.",
+ DMWARN("Destination device (%pg) %s: Disabling discard passdown.",
dest_dev, reason);
clear_bit(DM_CLONE_DISCARD_PASSDOWN, &clone->flags);
}
hc = __get_name_cell(new);
if (hc) {
- DMWARN("Unable to change %s on mapped device %s to one that "
- "already exists: %s",
- change_uuid ? "uuid" : "name",
- param->name, new);
+ DMERR("Unable to change %s on mapped device %s to one that "
+ "already exists: %s",
+ change_uuid ? "uuid" : "name",
+ param->name, new);
dm_put(hc->md);
up_write(&_hash_lock);
kfree(new_data);
*/
hc = __get_name_cell(param->name);
if (!hc) {
- DMWARN("Unable to rename non-existent device, %s to %s%s",
- param->name, change_uuid ? "uuid " : "", new);
+ DMERR("Unable to rename non-existent device, %s to %s%s",
+ param->name, change_uuid ? "uuid " : "", new);
up_write(&_hash_lock);
kfree(new_data);
return ERR_PTR(-ENXIO);
* Does this device already have a uuid?
*/
if (change_uuid && hc->uuid) {
- DMWARN("Unable to change uuid of mapped device %s to %s "
- "because uuid is already set to %s",
- param->name, new, hc->uuid);
+ DMERR("Unable to change uuid of mapped device %s to %s "
+ "because uuid is already set to %s",
+ param->name, new, hc->uuid);
dm_put(hc->md);
up_write(&_hash_lock);
kfree(new_data);
static int check_name(const char *name)
{
if (strchr(name, '/')) {
- DMWARN("invalid device name");
+ DMERR("invalid device name");
return -EINVAL;
}
down_read(&_hash_lock);
hc = dm_get_mdptr(md);
if (!hc || hc->md != md) {
- DMWARN("device has been removed from the dev hash table.");
+ DMERR("device has been removed from the dev hash table.");
goto out;
}
if (new_data < param->data ||
invalid_str(new_data, (void *) param + param_size) || !*new_data ||
strlen(new_data) > (change_uuid ? DM_UUID_LEN - 1 : DM_NAME_LEN - 1)) {
- DMWARN("Invalid new mapped device name or uuid string supplied.");
+ DMERR("Invalid new mapped device name or uuid string supplied.");
return -EINVAL;
}
if (geostr < param->data ||
invalid_str(geostr, (void *) param + param_size)) {
- DMWARN("Invalid geometry supplied.");
+ DMERR("Invalid geometry supplied.");
goto out;
}
indata + 1, indata + 2, indata + 3, &dummy);
if (x != 4) {
- DMWARN("Unable to interpret geometry settings.");
+ DMERR("Unable to interpret geometry settings.");
goto out;
}
if (indata[0] > 65535 || indata[1] > 255 ||
indata[2] > 255 || indata[3] > ULONG_MAX) {
- DMWARN("Geometry exceeds range limits.");
+ DMERR("Geometry exceeds range limits.");
goto out;
}
char *target_params;
if (!param->target_count) {
- DMWARN("populate_table: no targets specified");
+ DMERR("populate_table: no targets specified");
return -EINVAL;
}
r = next_target(spec, next, end, &spec, &target_params);
if (r) {
- DMWARN("unable to find target");
+ DMERR("unable to find target");
return r;
}
(sector_t) spec->length,
target_params);
if (r) {
- DMWARN("error adding target to table");
+ DMERR("error adding target to table");
return r;
}
if (immutable_target_type &&
(immutable_target_type != dm_table_get_immutable_target_type(t)) &&
!dm_table_get_wildcard_target(t)) {
- DMWARN("can't replace immutable target type %s",
- immutable_target_type->name);
+ DMERR("can't replace immutable target type %s",
+ immutable_target_type->name);
r = -EINVAL;
goto err_unlock_md_type;
}
/* setup md->queue to reflect md's type (may block) */
r = dm_setup_md_queue(md, t);
if (r) {
- DMWARN("unable to set up device queue for new table.");
+ DMERR("unable to set up device queue for new table.");
goto err_unlock_md_type;
}
} else if (!is_valid_type(dm_get_md_type(md), dm_table_get_type(t))) {
- DMWARN("can't change device type (old=%u vs new=%u) after initial table load.",
- dm_get_md_type(md), dm_table_get_type(t));
+ DMERR("can't change device type (old=%u vs new=%u) after initial table load.",
+ dm_get_md_type(md), dm_table_get_type(t));
r = -EINVAL;
goto err_unlock_md_type;
}
down_write(&_hash_lock);
hc = dm_get_mdptr(md);
if (!hc || hc->md != md) {
- DMWARN("device has been removed from the dev hash table.");
+ DMERR("device has been removed from the dev hash table.");
up_write(&_hash_lock);
r = -ENXIO;
goto err_destroy_table;
if (tmsg < (struct dm_target_msg *) param->data ||
invalid_str(tmsg->message, (void *) param + param_size)) {
- DMWARN("Invalid target message parameters.");
+ DMERR("Invalid target message parameters.");
r = -EINVAL;
goto out;
}
r = dm_split_args(&argc, &argv, tmsg->message);
if (r) {
- DMWARN("Failed to split target message parameters");
+ DMERR("Failed to split target message parameters");
goto out;
}
if (!argc) {
- DMWARN("Empty message received.");
+ DMERR("Empty message received.");
r = -EINVAL;
goto out_argv;
}
ti = dm_table_find_target(table, tmsg->sector);
if (!ti) {
- DMWARN("Target message sector outside device.");
+ DMERR("Target message sector outside device.");
r = -EINVAL;
} else if (ti->type->message)
r = ti->type->message(ti, argc, argv, result, maxlen);
else {
- DMWARN("Target type does not support messages");
+ DMERR("Target type does not support messages");
r = -EINVAL;
}
if ((DM_VERSION_MAJOR != version[0]) ||
(DM_VERSION_MINOR < version[1])) {
- DMWARN("ioctl interface mismatch: "
- "kernel(%u.%u.%u), user(%u.%u.%u), cmd(%d)",
- DM_VERSION_MAJOR, DM_VERSION_MINOR,
- DM_VERSION_PATCHLEVEL,
- version[0], version[1], version[2], cmd);
+ DMERR("ioctl interface mismatch: "
+ "kernel(%u.%u.%u), user(%u.%u.%u), cmd(%d)",
+ DM_VERSION_MAJOR, DM_VERSION_MINOR,
+ DM_VERSION_PATCHLEVEL,
+ version[0], version[1], version[2], cmd);
r = -EINVAL;
}
if (cmd == DM_DEV_CREATE_CMD) {
if (!*param->name) {
- DMWARN("name not supplied when creating device");
+ DMERR("name not supplied when creating device");
return -EINVAL;
}
} else if (*param->uuid && *param->name) {
- DMWARN("only supply one of name or uuid, cmd(%u)", cmd);
+ DMERR("only supply one of name or uuid, cmd(%u)", cmd);
return -EINVAL;
}
fn = lookup_ioctl(cmd, &ioctl_flags);
if (!fn) {
- DMWARN("dm_ctl_ioctl: unknown command 0x%x", command);
+ DMERR("dm_ctl_ioctl: unknown command 0x%x", command);
return -ENOTTY;
}
(sector_t) spec_array[i]->length,
target_params_array[i]);
if (r) {
- DMWARN("error adding target to table");
+ DMERR("error adding target to table");
goto err_destroy_table;
}
}
/* setup md->queue to reflect md's type (may block) */
r = dm_setup_md_queue(md, t);
if (r) {
- DMWARN("unable to set up device queue for new table.");
+ DMERR("unable to set up device queue for new table.");
goto err_destroy_table;
}
* of the "sync" directive.
*
* With reshaping capability added, we must ensure that
- * that the "sync" directive is disallowed during the reshape.
+ * the "sync" directive is disallowed during the reshape.
*/
if (test_bit(__CTR_FLAG_SYNC, &rs->ctr_flags))
continue;
/*
* Adjust data_offset and new_data_offset on all disk members of @rs
- * for out of place reshaping if requested by contructor
+ * for out of place reshaping if requested by constructor
*
* We need free space at the beginning of each raid disk for forward
* and at the end for backward reshapes which userspace has to provide
dm_requeue_original_request(tio, true);
break;
default:
- DMWARN("unimplemented target endio return value: %d", r);
+ DMCRIT("unimplemented target endio return value: %d", r);
BUG();
}
}
dm_kill_unmapped_request(rq, BLK_STS_IOERR);
break;
default:
- DMWARN("unimplemented target map return value: %d", r);
+ DMCRIT("unimplemented target map return value: %d", r);
BUG();
}
return 2; /* this wasn't a stats message */
if (r == -EINVAL)
- DMWARN("Invalid parameters for message %s", argv[0]);
+ DMCRIT("Invalid parameters for message %s", argv[0]);
return r;
}
return 0;
if ((start >= dev_size) || (start + len > dev_size)) {
- DMWARN("%s: %pg too small for target: "
- "start=%llu, len=%llu, dev_size=%llu",
- dm_device_name(ti->table->md), bdev,
- (unsigned long long)start,
- (unsigned long long)len,
- (unsigned long long)dev_size);
+ DMERR("%s: %pg too small for target: "
+ "start=%llu, len=%llu, dev_size=%llu",
+ dm_device_name(ti->table->md), bdev,
+ (unsigned long long)start,
+ (unsigned long long)len,
+ (unsigned long long)dev_size);
return 1;
}
unsigned int zone_sectors = bdev_zone_sectors(bdev);
if (start & (zone_sectors - 1)) {
- DMWARN("%s: start=%llu not aligned to h/w zone size %u of %pg",
- dm_device_name(ti->table->md),
- (unsigned long long)start,
- zone_sectors, bdev);
+ DMERR("%s: start=%llu not aligned to h/w zone size %u of %pg",
+ dm_device_name(ti->table->md),
+ (unsigned long long)start,
+ zone_sectors, bdev);
return 1;
}
* the sector range.
*/
if (len & (zone_sectors - 1)) {
- DMWARN("%s: len=%llu not aligned to h/w zone size %u of %pg",
- dm_device_name(ti->table->md),
- (unsigned long long)len,
- zone_sectors, bdev);
+ DMERR("%s: len=%llu not aligned to h/w zone size %u of %pg",
+ dm_device_name(ti->table->md),
+ (unsigned long long)len,
+ zone_sectors, bdev);
return 1;
}
}
return 0;
if (start & (logical_block_size_sectors - 1)) {
- DMWARN("%s: start=%llu not aligned to h/w "
- "logical block size %u of %pg",
- dm_device_name(ti->table->md),
- (unsigned long long)start,
- limits->logical_block_size, bdev);
+ DMERR("%s: start=%llu not aligned to h/w "
+ "logical block size %u of %pg",
+ dm_device_name(ti->table->md),
+ (unsigned long long)start,
+ limits->logical_block_size, bdev);
return 1;
}
if (len & (logical_block_size_sectors - 1)) {
- DMWARN("%s: len=%llu not aligned to h/w "
- "logical block size %u of %pg",
- dm_device_name(ti->table->md),
- (unsigned long long)len,
- limits->logical_block_size, bdev);
+ DMERR("%s: len=%llu not aligned to h/w "
+ "logical block size %u of %pg",
+ dm_device_name(ti->table->md),
+ (unsigned long long)len,
+ limits->logical_block_size, bdev);
return 1;
}
}
}
if (!found) {
- DMWARN("%s: device %s not in table devices list",
- dm_device_name(ti->table->md), d->name);
+ DMERR("%s: device %s not in table devices list",
+ dm_device_name(ti->table->md), d->name);
return;
}
if (refcount_dec_and_test(&dd->count)) {
}
if (remaining) {
- DMWARN("%s: table line %u (start sect %llu len %llu) "
- "not aligned to h/w logical block size %u",
- dm_device_name(t->md), i,
- (unsigned long long) ti->begin,
- (unsigned long long) ti->len,
- limits->logical_block_size);
+ DMERR("%s: table line %u (start sect %llu len %llu) "
+ "not aligned to h/w logical block size %u",
+ dm_device_name(t->md), i,
+ (unsigned long long) ti->begin,
+ (unsigned long long) ti->len,
+ limits->logical_block_size);
return -EINVAL;
}
struct dm_md_mempools *pools;
if (unlikely(type == DM_TYPE_NONE)) {
- DMWARN("no table type is set, can't allocate mempools");
+ DMERR("no table type is set, can't allocate mempools");
return -EINVAL;
}
* Get a disk whose integrity profile reflects the table's profile.
* Returns NULL if integrity support was inconsistent or unavailable.
*/
-static struct gendisk * dm_table_get_integrity_disk(struct dm_table *t)
+static struct gendisk *dm_table_get_integrity_disk(struct dm_table *t)
{
struct list_head *devices = dm_table_get_devices(t);
struct dm_dev_internal *dd = NULL;
* profile the new profile should not conflict.
*/
if (blk_integrity_compare(dm_disk(md), template_disk) < 0) {
- DMWARN("%s: conflict with existing integrity profile: "
- "%s profile mismatch",
- dm_device_name(t->md),
- template_disk->disk_name);
+ DMERR("%s: conflict with existing integrity profile: "
+ "%s profile mismatch",
+ dm_device_name(t->md),
+ template_disk->disk_name);
return 1;
}
if (t->md->queue &&
!blk_crypto_has_capabilities(profile,
t->md->queue->crypto_profile)) {
- DMWARN("Inline encryption capabilities of new DM table were more restrictive than the old table's. This is not supported!");
+ DMERR("Inline encryption capabilities of new DM table were more restrictive than the old table's. This is not supported!");
dm_destroy_crypto_profile(profile);
return -EINVAL;
}
/* WQ_UNBOUND greatly improves performance when running on ramdisk */
wq_flags = WQ_MEM_RECLAIM | WQ_UNBOUND;
- if (v->use_tasklet) {
- /*
- * Allow verify_wq to preempt softirq since verification in
- * tasklet will fall-back to using it for error handling
- * (or if the bufio cache doesn't have required hashes).
- */
- wq_flags |= WQ_HIGHPRI;
- }
+ /*
+ * Using WQ_HIGHPRI improves throughput and completion latency by
+ * reducing wait times when reading from a dm-verity device.
+ *
+ * Also as required for the "try_verify_in_tasklet" feature: WQ_HIGHPRI
+ * allows verify_wq to preempt softirq since verification in tasklet
+ * will fall-back to using it for error handling (or if the bufio cache
+ * doesn't have required hashes).
+ */
+ wq_flags |= WQ_HIGHPRI;
v->verify_wq = alloc_workqueue("kverityd", wq_flags, num_online_cpus());
if (!v->verify_wq) {
ti->error = "Cannot allocate workqueue";
sector_t sz = (sector_t)geo->cylinders * geo->heads * geo->sectors;
if (geo->start > sz) {
- DMWARN("Start sector is beyond the geometry limits.");
+ DMERR("Start sector is beyond the geometry limits.");
return -EINVAL;
}
/* The target will handle the io */
return;
default:
- DMWARN("unimplemented target endio return value: %d", r);
+ DMCRIT("unimplemented target endio return value: %d", r);
BUG();
}
}
dm_io_dec_pending(io, BLK_STS_DM_REQUEUE);
break;
default:
- DMWARN("unimplemented target map return value: %d", r);
+ DMCRIT("unimplemented target map return value: %d", r);
BUG();
}
}
md = kvzalloc_node(sizeof(*md), GFP_KERNEL, numa_node_id);
if (!md) {
- DMWARN("unable to allocate device, out of memory.");
+ DMERR("unable to allocate device, out of memory.");
return NULL;
}
md->disk->minors = 1;
md->disk->flags |= GENHD_FL_NO_PART;
md->disk->fops = &dm_blk_dops;
- md->disk->queue = md->queue;
md->disk->private_data = md;
sprintf(md->disk->disk_name, "dm-%d", minor);
config MEDIA_SUPPORT_FILTER
bool "Filter media drivers"
- default y if !EMBEDDED && !EXPERT
+ default y if !EXPERT
help
Configuring the media subsystem can be complex, as there are
hundreds of drivers and other config options.
[CEC_MSG_REPORT_SHORT_AUDIO_DESCRIPTOR] = 2 | DIRECTED,
[CEC_MSG_REQUEST_SHORT_AUDIO_DESCRIPTOR] = 2 | DIRECTED,
[CEC_MSG_SET_SYSTEM_AUDIO_MODE] = 3 | BOTH,
+ [CEC_MSG_SET_AUDIO_VOLUME_LEVEL] = 3 | DIRECTED,
[CEC_MSG_SYSTEM_AUDIO_MODE_REQUEST] = 2 | DIRECTED,
[CEC_MSG_SYSTEM_AUDIO_MODE_STATUS] = 3 | DIRECTED,
[CEC_MSG_SET_AUDIO_RATE] = 3 | DIRECTED,
uint8_t *cec_message = cros_ec->event_data.data.cec_message;
unsigned int len = cros_ec->event_size;
+ if (len > CEC_MAX_MSG_SIZE)
+ len = CEC_MAX_MSG_SIZE;
cros_ec_cec->rx_msg.len = len;
memcpy(cros_ec_cec->rx_msg.msg, cec_message, len);
{ "Google", "Moli", "0000:00:02.0", "Port B" },
/* Google Kinox */
{ "Google", "Kinox", "0000:00:02.0", "Port B" },
+ /* Google Kuldax */
+ { "Google", "Kuldax", "0000:00:02.0", "Port B" },
};
static struct device *cros_ec_cec_find_hdmi_dev(struct device *dev,
dev_dbg(cec->dev, "Buffer overrun (worker did not process previous message)\n");
cec->rx = STATE_BUSY;
cec->msg.len = status >> 24;
+ if (cec->msg.len > CEC_MAX_MSG_SIZE)
+ cec->msg.len = CEC_MAX_MSG_SIZE;
cec->msg.rx_status = CEC_RX_STATUS_OK;
s5p_cec_get_rx_buf(cec, cec->msg.len,
cec->msg.msg);
static int drxk_read_ucblocks(struct dvb_frontend *fe, u32 *ucblocks)
{
struct drxk_state *state = fe->demodulator_priv;
- u16 err;
+ u16 err = 0;
dprintk(1, "\n");
struct v4l2_subdev_format *format)
{
struct ar0521_dev *sensor = to_ar0521_dev(sd);
- int ret = 0;
ar0521_adj_fmt(&format->format);
}
mutex_unlock(&sensor->lock);
- return ret;
+ return 0;
}
static int ar0521_s_ctrl(struct v4l2_ctrl *ctrl)
gpiod_set_value(sensor->reset_gpio, 0);
usleep_range(4500, 5000); /* min 45000 clocks */
- for (cnt = 0; cnt < ARRAY_SIZE(initial_regs); cnt++)
- if (ar0521_write_regs(sensor, initial_regs[cnt].data,
- initial_regs[cnt].count))
+ for (cnt = 0; cnt < ARRAY_SIZE(initial_regs); cnt++) {
+ ret = ar0521_write_regs(sensor, initial_regs[cnt].data,
+ initial_regs[cnt].count);
+ if (ret)
goto off;
+ }
ret = ar0521_write_reg(sensor, AR0521_REG_SERIAL_FORMAT,
AR0521_REG_SERIAL_FORMAT_MIPI |
return 1;
}
+static int get_key_geniatech(struct IR_i2c *ir, enum rc_proto *protocol,
+ u32 *scancode, u8 *toggle)
+{
+ int i, rc;
+ unsigned char b;
+
+ /* poll IR chip */
+ for (i = 0; i < 4; i++) {
+ rc = i2c_master_recv(ir->c, &b, 1);
+ if (rc == 1)
+ break;
+ msleep(20);
+ }
+ if (rc != 1) {
+ dev_dbg(&ir->rc->dev, "read error\n");
+ if (rc < 0)
+ return rc;
+ return -EIO;
+ }
+
+ /* don't repeat the key */
+ if (ir->old == b)
+ return 0;
+ ir->old = b;
+
+ /* decode to RC5 */
+ b &= 0x7f;
+ b = (b - 1) / 2;
+
+ dev_dbg(&ir->rc->dev, "key %02x\n", b);
+
+ *protocol = RC_PROTO_RC5;
+ *scancode = b;
+ *toggle = ir->old >> 7;
+ return 1;
+}
+
static int get_key_avermedia_cardbus(struct IR_i2c *ir, enum rc_proto *protocol,
u32 *scancode, u8 *toggle)
{
rc_proto = RC_PROTO_BIT_OTHER;
ir_codes = RC_MAP_EMPTY;
break;
+ case 0x33:
+ name = "Geniatech";
+ ir->get_key = get_key_geniatech;
+ rc_proto = RC_PROTO_BIT_RC5;
+ ir_codes = RC_MAP_TOTAL_MEDIA_IN_HAND_02;
+ ir->old = 0xfc;
+ break;
case 0x6b:
name = "FusionHDTV";
ir->get_key = get_key_fusionhdtv;
case IR_KBD_GET_KEY_KNC1:
ir->get_key = get_key_knc1;
break;
+ case IR_KBD_GET_KEY_GENIATECH:
+ ir->get_key = get_key_geniatech;
+ break;
case IR_KBD_GET_KEY_FUSIONHDTV:
ir->get_key = get_key_fusionhdtv;
break;
#include <linux/bitfield.h>
#include <linux/delay.h>
-#include <linux/gpio.h>
+#include <linux/gpio/consumer.h>
#include <linux/i2c.h>
#include <linux/module.h>
#include <linux/of_graph.h>
/*
* Set pixel integration time to the whole frame time.
- * This value controls the the shutter delay when running with AE
+ * This value controls the shutter delay when running with AE
* disabled. If longer than frame time, it affects the output
* frame rate.
*/
#include <linux/init.h>
#include <linux/module.h>
#include <linux/of_device.h>
+#include <linux/pm_runtime.h>
#include <linux/regulator/consumer.h>
#include <linux/slab.h>
#include <linux/types.h>
/* lock to protect all members below */
struct mutex lock;
- int power_count;
-
struct v4l2_mbus_framefmt fmt;
bool pending_fmt_change;
return ret;
}
-/* --------------- Subdev Operations --------------- */
-
-static int ov5640_s_power(struct v4l2_subdev *sd, int on)
+static int ov5640_sensor_suspend(struct device *dev)
{
- struct ov5640_dev *sensor = to_ov5640_dev(sd);
- int ret = 0;
-
- mutex_lock(&sensor->lock);
-
- /*
- * If the power count is modified from 0 to != 0 or from != 0 to 0,
- * update the power state.
- */
- if (sensor->power_count == !on) {
- ret = ov5640_set_power(sensor, !!on);
- if (ret)
- goto out;
- }
+ struct v4l2_subdev *sd = dev_get_drvdata(dev);
+ struct ov5640_dev *ov5640 = to_ov5640_dev(sd);
- /* Update the power count. */
- sensor->power_count += on ? 1 : -1;
- WARN_ON(sensor->power_count < 0);
-out:
- mutex_unlock(&sensor->lock);
+ return ov5640_set_power(ov5640, false);
+}
- if (on && !ret && sensor->power_count == 1) {
- /* restore controls */
- ret = v4l2_ctrl_handler_setup(&sensor->ctrls.handler);
- }
+static int ov5640_sensor_resume(struct device *dev)
+{
+ struct v4l2_subdev *sd = dev_get_drvdata(dev);
+ struct ov5640_dev *ov5640 = to_ov5640_dev(sd);
- return ret;
+ return ov5640_set_power(ov5640, true);
}
+/* --------------- Subdev Operations --------------- */
+
static int ov5640_try_frame_interval(struct ov5640_dev *sensor,
struct v4l2_fract *fi,
u32 width, u32 height)
/* v4l2_ctrl_lock() locks our own mutex */
+ if (!pm_runtime_get_if_in_use(&sensor->i2c_client->dev))
+ return 0;
+
switch (ctrl->id) {
case V4L2_CID_AUTOGAIN:
val = ov5640_get_gain(sensor);
break;
}
+ pm_runtime_put_autosuspend(&sensor->i2c_client->dev);
+
return 0;
}
/*
* If the device is not powered up by the host driver do
* not apply any controls to H/W at this time. Instead
- * the controls will be restored right after power-up.
+ * the controls will be restored at start streaming time.
*/
- if (sensor->power_count == 0)
+ if (!pm_runtime_get_if_in_use(&sensor->i2c_client->dev))
return 0;
switch (ctrl->id) {
break;
}
+ pm_runtime_put_autosuspend(&sensor->i2c_client->dev);
+
return ret;
}
struct ov5640_dev *sensor = to_ov5640_dev(sd);
int ret = 0;
+ if (enable) {
+ ret = pm_runtime_resume_and_get(&sensor->i2c_client->dev);
+ if (ret < 0)
+ return ret;
+
+ ret = v4l2_ctrl_handler_setup(&sensor->ctrls.handler);
+ if (ret) {
+ pm_runtime_put(&sensor->i2c_client->dev);
+ return ret;
+ }
+ }
+
mutex_lock(&sensor->lock);
if (sensor->streaming == !enable) {
if (!ret)
sensor->streaming = enable;
}
+
out:
mutex_unlock(&sensor->lock);
+
+ if (!enable || ret)
+ pm_runtime_put_autosuspend(&sensor->i2c_client->dev);
+
return ret;
}
}
static const struct v4l2_subdev_core_ops ov5640_core_ops = {
- .s_power = ov5640_s_power,
.log_status = v4l2_ctrl_subdev_log_status,
.subscribe_event = v4l2_ctrl_subdev_subscribe_event,
.unsubscribe_event = v4l2_event_subdev_unsubscribe,
int ret = 0;
u16 chip_id;
- ret = ov5640_set_power_on(sensor);
- if (ret)
- return ret;
-
ret = ov5640_read_reg16(sensor, OV5640_REG_CHIP_ID, &chip_id);
if (ret) {
dev_err(&client->dev, "%s: failed to read chip identifier\n",
__func__);
- goto power_off;
+ return ret;
}
if (chip_id != 0x5640) {
dev_err(&client->dev, "%s: wrong chip identifier, expected 0x5640, got 0x%x\n",
__func__, chip_id);
- ret = -ENXIO;
+ return -ENXIO;
}
-power_off:
- ov5640_set_power_off(sensor);
- return ret;
+ return 0;
}
static int ov5640_probe(struct i2c_client *client)
ret = ov5640_get_regulators(sensor);
if (ret)
- return ret;
+ goto entity_cleanup;
mutex_init(&sensor->lock);
- ret = ov5640_check_chip_id(sensor);
+ ret = ov5640_init_controls(sensor);
if (ret)
goto entity_cleanup;
- ret = ov5640_init_controls(sensor);
- if (ret)
+ ret = ov5640_sensor_resume(dev);
+ if (ret) {
+ dev_err(dev, "failed to power on\n");
goto entity_cleanup;
+ }
+
+ pm_runtime_set_active(dev);
+ pm_runtime_get_noresume(dev);
+ pm_runtime_enable(dev);
+
+ ret = ov5640_check_chip_id(sensor);
+ if (ret)
+ goto err_pm_runtime;
ret = v4l2_async_register_subdev_sensor(&sensor->sd);
if (ret)
- goto free_ctrls;
+ goto err_pm_runtime;
+
+ pm_runtime_set_autosuspend_delay(dev, 1000);
+ pm_runtime_use_autosuspend(dev);
+ pm_runtime_put_autosuspend(dev);
return 0;
-free_ctrls:
+err_pm_runtime:
+ pm_runtime_put_noidle(dev);
+ pm_runtime_disable(dev);
v4l2_ctrl_handler_free(&sensor->ctrls.handler);
+ ov5640_sensor_suspend(dev);
entity_cleanup:
media_entity_cleanup(&sensor->sd.entity);
mutex_destroy(&sensor->lock);
{
struct v4l2_subdev *sd = i2c_get_clientdata(client);
struct ov5640_dev *sensor = to_ov5640_dev(sd);
+ struct device *dev = &client->dev;
+
+ pm_runtime_disable(dev);
+ if (!pm_runtime_status_suspended(dev))
+ ov5640_sensor_suspend(dev);
+ pm_runtime_set_suspended(dev);
v4l2_async_unregister_subdev(&sensor->sd);
media_entity_cleanup(&sensor->sd.entity);
mutex_destroy(&sensor->lock);
}
+static const struct dev_pm_ops ov5640_pm_ops = {
+ SET_RUNTIME_PM_OPS(ov5640_sensor_suspend, ov5640_sensor_resume, NULL)
+};
+
static const struct i2c_device_id ov5640_id[] = {
{"ov5640", 0},
{},
.driver = {
.name = "ov5640",
.of_match_table = ov5640_dt_ids,
+ .pm = &ov5640_pm_ops,
},
.id_table = ov5640_id,
.probe_new = ov5640_probe,
&rate);
if (!ret && sensor->extclk) {
ret = clk_set_rate(sensor->extclk, rate);
- if (ret)
- return dev_err_probe(dev, ret,
- "failed to set clock rate\n");
+ if (ret) {
+ dev_err_probe(dev, ret, "failed to set clock rate\n");
+ goto error_endpoint;
+ }
} else if (ret && !sensor->extclk) {
- return dev_err_probe(dev, ret, "invalid clock config\n");
+ dev_err_probe(dev, ret, "invalid clock config\n");
+ goto error_endpoint;
}
sensor->extclk_rate = rate ? rate : clk_get_rate(sensor->extclk);
struct media_device *mdev = entity->graph_obj.mdev;
struct media_link *link, *tmp;
struct media_interface *intf;
- unsigned int i;
+ struct media_pad *iter;
ida_free(&mdev->entity_internal_idx, entity->internal_idx);
__media_entity_remove_links(entity);
/* Remove all pads that belong to this entity */
- for (i = 0; i < entity->num_pads; i++)
- media_gobj_destroy(&entity->pads[i].graph_obj);
+ media_entity_for_each_pad(entity, iter)
+ media_gobj_destroy(&iter->graph_obj);
/* Remove the entity */
media_gobj_destroy(&entity->graph_obj);
struct media_entity *entity)
{
struct media_entity_notify *notify, *next;
- unsigned int i;
+ struct media_pad *iter;
int ret;
if (entity->function == MEDIA_ENT_F_V4L2_SUBDEV_UNKNOWN ||
media_gobj_create(mdev, MEDIA_GRAPH_ENTITY, &entity->graph_obj);
/* Initialize objects at the pads */
- for (i = 0; i < entity->num_pads; i++)
- media_gobj_create(mdev, MEDIA_GRAPH_PAD,
- &entity->pads[i].graph_obj);
+ media_entity_for_each_pad(entity, iter)
+ media_gobj_create(mdev, MEDIA_GRAPH_PAD, &iter->graph_obj);
/* invoke entity_notify callbacks */
list_for_each_entry_safe(notify, next, &mdev->entity_notify, list)
}
}
-__must_check int __media_entity_enum_init(struct media_entity_enum *ent_enum,
- int idx_max)
+__must_check int media_entity_enum_init(struct media_entity_enum *ent_enum,
+ struct media_device *mdev)
{
- idx_max = ALIGN(idx_max, BITS_PER_LONG);
+ int idx_max;
+
+ idx_max = ALIGN(mdev->entity_internal_idx_max + 1, BITS_PER_LONG);
ent_enum->bmap = bitmap_zalloc(idx_max, GFP_KERNEL);
if (!ent_enum->bmap)
return -ENOMEM;
return 0;
}
-EXPORT_SYMBOL_GPL(__media_entity_enum_init);
+EXPORT_SYMBOL_GPL(media_entity_enum_init);
void media_entity_enum_cleanup(struct media_entity_enum *ent_enum)
{
struct media_pad *pads)
{
struct media_device *mdev = entity->graph_obj.mdev;
- unsigned int i;
+ struct media_pad *iter;
+ unsigned int i = 0;
if (num_pads >= MEDIA_ENTITY_MAX_PADS)
return -E2BIG;
if (mdev)
mutex_lock(&mdev->graph_mutex);
- for (i = 0; i < num_pads; i++) {
- pads[i].entity = entity;
- pads[i].index = i;
+ media_entity_for_each_pad(entity, iter) {
+ iter->entity = entity;
+ iter->index = i++;
if (mdev)
media_gobj_create(mdev, MEDIA_GRAPH_PAD,
- &entity->pads[i].graph_obj);
+ &iter->graph_obj);
}
if (mdev)
* Graph traversal
*/
+/*
+ * This function checks the interdependency inside the entity between @pad0
+ * and @pad1. If two pads are interdependent they are part of the same pipeline
+ * and enabling one of the pads means that the other pad becomes "locked"
+ * and no longer allows configuration changes.
+ *
+ * This function uses the &media_entity_operations.has_pad_interdep() operation
+ * to check the dependency inside the entity between @pad0 and @pad1. If the
+ * has_pad_interdep operation is not implemented, all pads of the entity are
+ * considered to be interdependent.
+ */
+static bool media_entity_has_pad_interdep(struct media_entity *entity,
+ unsigned int pad0, unsigned int pad1)
+{
+ if (pad0 >= entity->num_pads || pad1 >= entity->num_pads)
+ return false;
+
+ if (entity->pads[pad0].flags & entity->pads[pad1].flags &
+ (MEDIA_PAD_FL_SINK | MEDIA_PAD_FL_SOURCE))
+ return false;
+
+ if (!entity->ops || !entity->ops->has_pad_interdep)
+ return true;
+
+ return entity->ops->has_pad_interdep(entity, pad0, pad1);
+}
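+/*
+ * Hypothetical driver sketch (not from this patch): an entity where only
+ * sink pad 0 and source pad 1 share a route could implement the op as:
+ *
+ *	static bool my_has_pad_interdep(struct media_entity *entity,
+ *					unsigned int pad0, unsigned int pad1)
+ *	{
+ *		return (pad0 == 0 && pad1 == 1) || (pad0 == 1 && pad1 == 0);
+ *	}
+ */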
+
static struct media_entity *
media_entity_other(struct media_entity *entity, struct media_link *link)
{
}
EXPORT_SYMBOL_GPL(media_graph_walk_next);
-int media_entity_get_fwnode_pad(struct media_entity *entity,
- struct fwnode_handle *fwnode,
- unsigned long direction_flags)
+/* -----------------------------------------------------------------------------
+ * Pipeline management
+ */
+
+/*
+ * The pipeline traversal stack stores pads that are reached during graph
+ * traversal, with a list of links to be visited to continue the traversal.
+ * When a new pad is reached, an entry is pushed on the top of the stack and
+ * points to the incoming pad and the first link of the entity.
+ *
+ * To find further pads in the pipeline, the traversal algorithm follows
+ * internal pad dependencies in the entity, and then links in the graph. It
+ * does so by iterating over all links of the entity, and following enabled
+ * links that originate from a pad that is internally connected to the incoming
+ * pad, as reported by the media_entity_has_pad_interdep() function.
+ */
+
+/**
+ * struct media_pipeline_walk_entry - Entry in the pipeline traversal stack
+ *
+ * @pad: The media pad being visited
+ * @links: Links left to be visited
+ */
+struct media_pipeline_walk_entry {
+ struct media_pad *pad;
+ struct list_head *links;
+};
+
+/**
+ * struct media_pipeline_walk - State used by the media pipeline traversal
+ * algorithm
+ *
+ * @mdev: The media device
+ * @stack: Depth-first search stack
+ * @stack.size: Number of allocated entries in @stack.entries
+ * @stack.top: Index of the top stack entry (-1 if the stack is empty)
+ * @stack.entries: Stack entries
+ */
+struct media_pipeline_walk {
+ struct media_device *mdev;
+
+ struct {
+ unsigned int size;
+ int top;
+ struct media_pipeline_walk_entry *entries;
+ } stack;
+};
+
+#define MEDIA_PIPELINE_STACK_GROW_STEP 16
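+/*
+ * Usage sketch (assumed; 'pipe', 'walk' and the starting 'pad' as in the
+ * helpers below): seed the stack, then explore links until it drains.
+ *
+ *	ret = media_pipeline_add_pad(pipe, &walk, pad);
+ *	while (!ret && !media_pipeline_walk_empty(&walk))
+ *		ret = media_pipeline_explore_next_link(pipe, &walk);
+ *	media_pipeline_walk_destroy(&walk);
+ */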
+
+static struct media_pipeline_walk_entry *
+media_pipeline_walk_top(struct media_pipeline_walk *walk)
{
- struct fwnode_endpoint endpoint;
- unsigned int i;
+ return &walk->stack.entries[walk->stack.top];
+}
+
+static bool media_pipeline_walk_empty(struct media_pipeline_walk *walk)
+{
+ return walk->stack.top == -1;
+}
+
+/* Increase the stack size by MEDIA_PIPELINE_STACK_GROW_STEP elements. */
+static int media_pipeline_walk_resize(struct media_pipeline_walk *walk)
+{
+ struct media_pipeline_walk_entry *entries;
+ unsigned int new_size;
+
+ /* Safety check, to avoid stack overflows in case of bugs. */
+ if (walk->stack.size >= 256)
+ return -E2BIG;
+
+ new_size = walk->stack.size + MEDIA_PIPELINE_STACK_GROW_STEP;
+
+ entries = krealloc(walk->stack.entries,
+ new_size * sizeof(*walk->stack.entries),
+ GFP_KERNEL);
+ if (!entries)
+ return -ENOMEM;
+
+ walk->stack.entries = entries;
+ walk->stack.size = new_size;
+
+ return 0;
+}
+
+/* Push a new entry on the stack. */
+static int media_pipeline_walk_push(struct media_pipeline_walk *walk,
+ struct media_pad *pad)
+{
+ struct media_pipeline_walk_entry *entry;
int ret;
- if (!entity->ops || !entity->ops->get_fwnode_pad) {
- for (i = 0; i < entity->num_pads; i++) {
- if (entity->pads[i].flags & direction_flags)
- return i;
+ if (walk->stack.top + 1 >= walk->stack.size) {
+ ret = media_pipeline_walk_resize(walk);
+ if (ret)
+ return ret;
+ }
+
+ walk->stack.top++;
+ entry = media_pipeline_walk_top(walk);
+ entry->pad = pad;
+ entry->links = pad->entity->links.next;
+
+ dev_dbg(walk->mdev->dev,
+ "media pipeline: pushed entry %u: '%s':%u\n",
+ walk->stack.top, pad->entity->name, pad->index);
+
+ return 0;
+}
+
+/*
+ * Move the top entry link cursor to the next link. If all links of the entry
+ * have been visited, pop the entry itself.
+ */
+static void media_pipeline_walk_pop(struct media_pipeline_walk *walk)
+{
+ struct media_pipeline_walk_entry *entry;
+
+ if (WARN_ON(walk->stack.top < 0))
+ return;
+
+ entry = media_pipeline_walk_top(walk);
+
+ if (entry->links->next == &entry->pad->entity->links) {
+ dev_dbg(walk->mdev->dev,
+ "media pipeline: entry %u has no more links, popping\n",
+ walk->stack.top);
+
+ walk->stack.top--;
+ return;
+ }
+
+ entry->links = entry->links->next;
+
+ dev_dbg(walk->mdev->dev,
+ "media pipeline: moved entry %u to next link\n",
+ walk->stack.top);
+}
+
+/* Free all memory allocated while walking the pipeline. */
+static void media_pipeline_walk_destroy(struct media_pipeline_walk *walk)
+{
+ kfree(walk->stack.entries);
+}
+
+/* Add a pad to the pipeline and push it to the stack. */
+static int media_pipeline_add_pad(struct media_pipeline *pipe,
+ struct media_pipeline_walk *walk,
+ struct media_pad *pad)
+{
+ struct media_pipeline_pad *ppad;
+
+ list_for_each_entry(ppad, &pipe->pads, list) {
+ if (ppad->pad == pad) {
+ dev_dbg(pad->graph_obj.mdev->dev,
+ "media pipeline: already contains pad '%s':%u\n",
+ pad->entity->name, pad->index);
+ return 0;
}
+ }
- return -ENXIO;
+ ppad = kzalloc(sizeof(*ppad), GFP_KERNEL);
+ if (!ppad)
+ return -ENOMEM;
+
+ ppad->pipe = pipe;
+ ppad->pad = pad;
+
+ list_add_tail(&ppad->list, &pipe->pads);
+
+ dev_dbg(pad->graph_obj.mdev->dev,
+ "media pipeline: added pad '%s':%u\n",
+ pad->entity->name, pad->index);
+
+ return media_pipeline_walk_push(walk, pad);
+}
+
+/* Explore the next link of the entity at the top of the stack. */
+static int media_pipeline_explore_next_link(struct media_pipeline *pipe,
+ struct media_pipeline_walk *walk)
+{
+ struct media_pipeline_walk_entry *entry = media_pipeline_walk_top(walk);
+ struct media_pad *pad;
+ struct media_link *link;
+ struct media_pad *local;
+ struct media_pad *remote;
+ int ret;
+
+ pad = entry->pad;
+ link = list_entry(entry->links, typeof(*link), list);
+ media_pipeline_walk_pop(walk);
+
+ dev_dbg(walk->mdev->dev,
+ "media pipeline: exploring link '%s':%u -> '%s':%u\n",
+ link->source->entity->name, link->source->index,
+ link->sink->entity->name, link->sink->index);
+
+ /* Skip links that are not enabled. */
+ if (!(link->flags & MEDIA_LNK_FL_ENABLED)) {
+ dev_dbg(walk->mdev->dev,
+ "media pipeline: skipping link (disabled)\n");
+ return 0;
}
- ret = fwnode_graph_parse_endpoint(fwnode, &endpoint);
+ /* Get the local pad and remote pad. */
+ if (link->source->entity == pad->entity) {
+ local = link->source;
+ remote = link->sink;
+ } else {
+ local = link->sink;
+ remote = link->source;
+ }
+
+	/*
+	 * Skip links whose local pad is neither the incoming pad itself nor a
+	 * pad internally connected to the incoming pad within the entity.
+	 */
+ if (pad != local &&
+ !media_entity_has_pad_interdep(pad->entity, pad->index, local->index)) {
+ dev_dbg(walk->mdev->dev,
+ "media pipeline: skipping link (no route)\n");
+ return 0;
+ }
+
+ /*
+ * Add the local and remote pads of the link to the pipeline and push
+ * them to the stack, if they're not already present.
+ */
+ ret = media_pipeline_add_pad(pipe, walk, local);
if (ret)
return ret;
- ret = entity->ops->get_fwnode_pad(entity, &endpoint);
- if (ret < 0)
+ ret = media_pipeline_add_pad(pipe, walk, remote);
+ if (ret)
return ret;
- if (ret >= entity->num_pads)
- return -ENXIO;
+ return 0;
+}
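
Editorial worked example (hypothetical two-pad ISP between a sensor and a video node, all links enabled and internally routed): starting the walk from video:0,

    explore video:0         isp:1 -> video:0 adds isp:1; video:0 pops (no more links)
    explore isp:1           isp:1 -> video:0 adds nothing (both pads already known)
                            sensor:0 -> isp:0 adds isp:0 and sensor:0 via interdep(1, 0)
    explore sensor:0, isp:0 all links lead to known pads; the remaining entries pop

leaving the four pads {video:0, isp:1, isp:0, sensor:0} in the pipeline and an empty stack, at which point media_pipeline_populate() below terminates.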
- if (!(entity->pads[ret].flags & direction_flags))
- return -ENXIO;
+static void media_pipeline_cleanup(struct media_pipeline *pipe)
+{
+ while (!list_empty(&pipe->pads)) {
+ struct media_pipeline_pad *ppad;
- return ret;
+ ppad = list_first_entry(&pipe->pads, typeof(*ppad), list);
+ list_del(&ppad->list);
+ kfree(ppad);
+ }
}
-EXPORT_SYMBOL_GPL(media_entity_get_fwnode_pad);
-/* -----------------------------------------------------------------------------
- * Pipeline management
- */
+static int media_pipeline_populate(struct media_pipeline *pipe,
+ struct media_pad *pad)
+{
+ struct media_pipeline_walk walk = { };
+ struct media_pipeline_pad *ppad;
+ int ret;
+
+ /*
+ * Populate the media pipeline by walking the media graph, starting
+ * from @pad.
+ */
+ INIT_LIST_HEAD(&pipe->pads);
+ pipe->mdev = pad->graph_obj.mdev;
+
+ walk.mdev = pipe->mdev;
+ walk.stack.top = -1;
+ ret = media_pipeline_add_pad(pipe, &walk, pad);
+ if (ret)
+ goto done;
+
+ /*
+ * Use a depth-first search algorithm: as long as the stack is not
+ * empty, explore the next link of the top entry. The
+ * media_pipeline_explore_next_link() function will either move to the
+ * next link, pop the entry if fully visited, or add new entries on
+ * top.
+ */
+ while (!media_pipeline_walk_empty(&walk)) {
+ ret = media_pipeline_explore_next_link(pipe, &walk);
+ if (ret)
+ goto done;
+ }
+
+ dev_dbg(pad->graph_obj.mdev->dev,
+ "media pipeline populated, found pads:\n");
+
+ list_for_each_entry(ppad, &pipe->pads, list)
+ dev_dbg(pad->graph_obj.mdev->dev, "- '%s':%u\n",
+ ppad->pad->entity->name, ppad->pad->index);
+
+ WARN_ON(walk.stack.top != -1);
-__must_check int __media_pipeline_start(struct media_entity *entity,
+ ret = 0;
+
+done:
+ media_pipeline_walk_destroy(&walk);
+
+ if (ret)
+ media_pipeline_cleanup(pipe);
+
+ return ret;
+}
+
+__must_check int __media_pipeline_start(struct media_pad *pad,
struct media_pipeline *pipe)
{
- struct media_device *mdev = entity->graph_obj.mdev;
- struct media_graph *graph = &pipe->graph;
- struct media_entity *entity_err = entity;
- struct media_link *link;
+ struct media_device *mdev = pad->entity->graph_obj.mdev;
+ struct media_pipeline_pad *err_ppad;
+ struct media_pipeline_pad *ppad;
int ret;
- if (pipe->streaming_count) {
- pipe->streaming_count++;
+ lockdep_assert_held(&mdev->graph_mutex);
+
+ /*
+	 * If the pad is already part of a pipeline, that pipeline must be the
+	 * same as the pipe given to media_pipeline_start().
+ */
+ if (WARN_ON(pad->pipe && pad->pipe != pipe))
+ return -EINVAL;
+
+ /*
+ * If the pipeline has already been started, it is guaranteed to be
+ * valid, so just increase the start count.
+ */
+ if (pipe->start_count) {
+ pipe->start_count++;
return 0;
}
- ret = media_graph_walk_init(&pipe->graph, mdev);
+ /*
+	 * Populate the pipeline: build the media_pipeline pads list with one
+	 * media_pipeline_pad instance for each pad found during the graph
+	 * walk.
+ */
+ ret = media_pipeline_populate(pipe, pad);
if (ret)
return ret;
- media_graph_walk_start(&pipe->graph, entity);
+ /*
+ * Now that all the pads in the pipeline have been gathered, perform
+ * the validation steps.
+ */
+
+ list_for_each_entry(ppad, &pipe->pads, list) {
+ struct media_pad *pad = ppad->pad;
+ struct media_entity *entity = pad->entity;
+ bool has_enabled_link = false;
+ bool has_link = false;
+ struct media_link *link;
- while ((entity = media_graph_walk_next(graph))) {
- DECLARE_BITMAP(active, MEDIA_ENTITY_MAX_PADS);
- DECLARE_BITMAP(has_no_links, MEDIA_ENTITY_MAX_PADS);
+ dev_dbg(mdev->dev, "Validating pad '%s':%u\n", pad->entity->name,
+ pad->index);
- if (entity->pipe && entity->pipe != pipe) {
- pr_err("Pipe active for %s. Can't start for %s\n",
- entity->name,
- entity_err->name);
+ /*
+ * 1. Ensure that the pad doesn't already belong to a different
+ * pipeline.
+ */
+ if (pad->pipe) {
+ dev_dbg(mdev->dev, "Failed to start pipeline: pad '%s':%u busy\n",
+ pad->entity->name, pad->index);
ret = -EBUSY;
goto error;
}
- /* Already streaming --- no need to check. */
- if (entity->pipe)
- continue;
-
- entity->pipe = pipe;
-
- if (!entity->ops || !entity->ops->link_validate)
- continue;
-
- bitmap_zero(active, entity->num_pads);
- bitmap_fill(has_no_links, entity->num_pads);
-
+ /*
+ * 2. Validate all active links whose sink is the current pad.
+ * Validation of the source pads is performed in the context of
+ * the connected sink pad to avoid duplicating checks.
+ */
for_each_media_entity_data_link(entity, link) {
- struct media_pad *pad = link->sink->entity == entity
- ? link->sink : link->source;
+ /* Skip links unrelated to the current pad. */
+ if (link->sink != pad && link->source != pad)
+ continue;
- /* Mark that a pad is connected by a link. */
- bitmap_clear(has_no_links, pad->index, 1);
+ /* Record if the pad has links and enabled links. */
+ if (link->flags & MEDIA_LNK_FL_ENABLED)
+ has_enabled_link = true;
+ has_link = true;
/*
- * Pads that either do not need to connect or
- * are connected through an enabled link are
- * fine.
+ * Validate the link if it's enabled and has the
+ * current pad as its sink.
*/
- if (!(pad->flags & MEDIA_PAD_FL_MUST_CONNECT) ||
- link->flags & MEDIA_LNK_FL_ENABLED)
- bitmap_set(active, pad->index, 1);
+ if (!(link->flags & MEDIA_LNK_FL_ENABLED))
+ continue;
- /*
- * Link validation will only take place for
- * sink ends of the link that are enabled.
- */
- if (link->sink != pad ||
- !(link->flags & MEDIA_LNK_FL_ENABLED))
+ if (link->sink != pad)
+ continue;
+
+ if (!entity->ops || !entity->ops->link_validate)
continue;
ret = entity->ops->link_validate(link);
- if (ret < 0 && ret != -ENOIOCTLCMD) {
- dev_dbg(entity->graph_obj.mdev->dev,
- "link validation failed for '%s':%u -> '%s':%u, error %d\n",
+ if (ret) {
+ dev_dbg(mdev->dev,
+ "Link '%s':%u -> '%s':%u failed validation: %d\n",
link->source->entity->name,
link->source->index,
- entity->name, link->sink->index, ret);
+ link->sink->entity->name,
+ link->sink->index, ret);
goto error;
}
- }
- /* Either no links or validated links are fine. */
- bitmap_or(active, active, has_no_links, entity->num_pads);
+ dev_dbg(mdev->dev,
+ "Link '%s':%u -> '%s':%u is valid\n",
+ link->source->entity->name,
+ link->source->index,
+ link->sink->entity->name,
+ link->sink->index);
+ }
- if (!bitmap_full(active, entity->num_pads)) {
+ /*
+ * 3. If the pad has the MEDIA_PAD_FL_MUST_CONNECT flag set,
+ * ensure that it has either no link or an enabled link.
+ */
+ if ((pad->flags & MEDIA_PAD_FL_MUST_CONNECT) && has_link &&
+ !has_enabled_link) {
+ dev_dbg(mdev->dev,
+ "Pad '%s':%u must be connected by an enabled link\n",
+ pad->entity->name, pad->index);
ret = -ENOLINK;
- dev_dbg(entity->graph_obj.mdev->dev,
- "'%s':%u must be connected by an enabled link\n",
- entity->name,
- (unsigned)find_first_zero_bit(
- active, entity->num_pads));
goto error;
}
+
+ /* Validation passed, store the pipe pointer in the pad. */
+ pad->pipe = pipe;
}
- pipe->streaming_count++;
+ pipe->start_count++;
return 0;
	 * Link validation on the graph failed. We revert what we did and
* return the error.
*/
- media_graph_walk_start(graph, entity_err);
- while ((entity_err = media_graph_walk_next(graph))) {
- entity_err->pipe = NULL;
-
- /*
- * We haven't started entities further than this so we quit
- * here.
- */
- if (entity_err == entity)
+ list_for_each_entry(err_ppad, &pipe->pads, list) {
+ if (err_ppad == ppad)
break;
+
+ err_ppad->pad->pipe = NULL;
}
- media_graph_walk_cleanup(graph);
+ media_pipeline_cleanup(pipe);
return ret;
}
EXPORT_SYMBOL_GPL(__media_pipeline_start);
-__must_check int media_pipeline_start(struct media_entity *entity,
+__must_check int media_pipeline_start(struct media_pad *pad,
struct media_pipeline *pipe)
{
- struct media_device *mdev = entity->graph_obj.mdev;
+ struct media_device *mdev = pad->entity->graph_obj.mdev;
int ret;
mutex_lock(&mdev->graph_mutex);
- ret = __media_pipeline_start(entity, pipe);
+ ret = __media_pipeline_start(pad, pipe);
mutex_unlock(&mdev->graph_mutex);
return ret;
}
EXPORT_SYMBOL_GPL(media_pipeline_start);
-void __media_pipeline_stop(struct media_entity *entity)
+void __media_pipeline_stop(struct media_pad *pad)
{
- struct media_graph *graph = &entity->pipe->graph;
- struct media_pipeline *pipe = entity->pipe;
+ struct media_pipeline *pipe = pad->pipe;
+ struct media_pipeline_pad *ppad;
/*
* If the following check fails, the driver has performed an
if (WARN_ON(!pipe))
return;
- if (--pipe->streaming_count)
+ if (--pipe->start_count)
return;
- media_graph_walk_start(graph, entity);
-
- while ((entity = media_graph_walk_next(graph)))
- entity->pipe = NULL;
+ list_for_each_entry(ppad, &pipe->pads, list)
+ ppad->pad->pipe = NULL;
- media_graph_walk_cleanup(graph);
+ media_pipeline_cleanup(pipe);
+ if (pipe->allocated)
+ kfree(pipe);
}
EXPORT_SYMBOL_GPL(__media_pipeline_stop);
-void media_pipeline_stop(struct media_entity *entity)
+void media_pipeline_stop(struct media_pad *pad)
{
- struct media_device *mdev = entity->graph_obj.mdev;
+ struct media_device *mdev = pad->entity->graph_obj.mdev;
mutex_lock(&mdev->graph_mutex);
- __media_pipeline_stop(entity);
+ __media_pipeline_stop(pad);
mutex_unlock(&mdev->graph_mutex);
}
EXPORT_SYMBOL_GPL(media_pipeline_stop);
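
Editorial note: the user-visible change for drivers is that media_pipeline_start() and media_pipeline_stop() now take a pad instead of an entity, with pipe->start_count providing the nesting. A minimal sketch of the expected usage in a vb2 driver; struct my_drv, my_drv_hw_start() and my_drv_hw_stop() are hypothetical and assume the video node exposes a single pad at index 0 — only the media_pipeline_*() calls come from this patch:

	static int my_drv_start_streaming(struct vb2_queue *q, unsigned int count)
	{
		struct my_drv *drv = vb2_get_drv_priv(q);
		int ret;

		/* Start the pipeline from the video node's pad. */
		ret = media_pipeline_start(&drv->vdev.entity.pads[0], &drv->pipe);
		if (ret)
			return ret;

		ret = my_drv_hw_start(drv);
		if (ret)
			media_pipeline_stop(&drv->vdev.entity.pads[0]);

		return ret;
	}

	static void my_drv_stop_streaming(struct vb2_queue *q)
	{
		struct my_drv *drv = vb2_get_drv_priv(q);

		my_drv_hw_stop(drv);

		/* Drops start_count; the last stop tears the pipeline down. */
		media_pipeline_stop(&drv->vdev.entity.pads[0]);
	}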
+__must_check int media_pipeline_alloc_start(struct media_pad *pad)
+{
+ struct media_device *mdev = pad->entity->graph_obj.mdev;
+ struct media_pipeline *new_pipe = NULL;
+ struct media_pipeline *pipe;
+ int ret;
+
+ mutex_lock(&mdev->graph_mutex);
+
+ /*
+	 * Is the pad already part of a pipeline? If not, we need to allocate
+	 * a pipe.
+ */
+ pipe = media_pad_pipeline(pad);
+ if (!pipe) {
+ new_pipe = kzalloc(sizeof(*new_pipe), GFP_KERNEL);
+ if (!new_pipe) {
+ ret = -ENOMEM;
+ goto out;
+ }
+
+ pipe = new_pipe;
+ pipe->allocated = true;
+ }
+
+ ret = __media_pipeline_start(pad, pipe);
+ if (ret)
+ kfree(new_pipe);
+
+out:
+ mutex_unlock(&mdev->graph_mutex);
+
+ return ret;
+}
+EXPORT_SYMBOL_GPL(media_pipeline_alloc_start);
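
media_pipeline_alloc_start() serves drivers that have no natural owner for the struct media_pipeline: it reuses the pad's current pipeline if there is one, otherwise it allocates the pipe on demand and marks it with pipe->allocated so that the final __media_pipeline_stop() frees it. A hedged sketch of the pairing (vdev being a hypothetical struct video_device with a single pad):

	/* Hypothetical helper: start streaming without owning a media_pipeline. */
	static int my_start(struct video_device *vdev)
	{
		return media_pipeline_alloc_start(&vdev->entity.pads[0]);
	}

	static void my_stop(struct video_device *vdev)
	{
		/* The last stop frees the pipe allocated by alloc_start(). */
		media_pipeline_stop(&vdev->entity.pads[0]);
	}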
+
/* -----------------------------------------------------------------------------
* Links management
*/
{
const u32 mask = MEDIA_LNK_FL_ENABLED;
struct media_device *mdev;
- struct media_entity *source, *sink;
+ struct media_pad *source, *sink;
int ret = -EBUSY;
if (link == NULL)
if (link->flags == flags)
return 0;
- source = link->source->entity;
- sink = link->sink->entity;
+ source = link->source;
+ sink = link->sink;
if (!(link->flags & MEDIA_LNK_FL_DYNAMIC) &&
- (media_entity_is_streaming(source) ||
- media_entity_is_streaming(sink)))
+ (media_pad_is_streaming(source) || media_pad_is_streaming(sink)))
return -EBUSY;
mdev = source->graph_obj.mdev;
}
EXPORT_SYMBOL_GPL(media_pad_remote_pad_unique);
+int media_entity_get_fwnode_pad(struct media_entity *entity,
+ struct fwnode_handle *fwnode,
+ unsigned long direction_flags)
+{
+ struct fwnode_endpoint endpoint;
+ unsigned int i;
+ int ret;
+
+ if (!entity->ops || !entity->ops->get_fwnode_pad) {
+ for (i = 0; i < entity->num_pads; i++) {
+ if (entity->pads[i].flags & direction_flags)
+ return i;
+ }
+
+ return -ENXIO;
+ }
+
+ ret = fwnode_graph_parse_endpoint(fwnode, &endpoint);
+ if (ret)
+ return ret;
+
+ ret = entity->ops->get_fwnode_pad(entity, &endpoint);
+ if (ret < 0)
+ return ret;
+
+ if (ret >= entity->num_pads)
+ return -ENXIO;
+
+ if (!(entity->pads[ret].flags & direction_flags))
+ return -ENXIO;
+
+ return ret;
+}
+EXPORT_SYMBOL_GPL(media_entity_get_fwnode_pad);
+
+struct media_pipeline *media_entity_pipeline(struct media_entity *entity)
+{
+ struct media_pad *pad;
+
+ media_entity_for_each_pad(entity, pad) {
+ if (pad->pipe)
+ return pad->pipe;
+ }
+
+ return NULL;
+}
+EXPORT_SYMBOL_GPL(media_entity_pipeline);
+
+struct media_pipeline *media_pad_pipeline(struct media_pad *pad)
+{
+ return pad->pipe;
+}
+EXPORT_SYMBOL_GPL(media_pad_pipeline);
+
static void media_interface_init(struct media_device *mdev,
struct media_interface *intf,
u32 gobj_type,
/*
* For a 13.5 Mpps clock and 15,625 Hz line rate, a line is
- * is 864 pixels = 720 active + 144 blanking. ITU-R BT.601
+ * 864 pixels = 720 active + 144 blanking. ITU-R BT.601
* specifies 12 luma clock periods or ~ 0.9 * 13.5 Mpps after
* the end of active video to start a horizontal line, so that
* leaves 132 pixels of hblank to ignore.
/*
* For a 13.5 Mpps clock and 15,734.26 Hz line rate, a line is
- * is 858 pixels = 720 active + 138 blanking. The Hsync leading
+ * 858 pixels = 720 active + 138 blanking. The Hsync leading
* edge should happen 1.2 us * 13.5 Mpps ~= 16 pixels after the
* end of active video, leaving 122 pixels of hblank to ignore
* before active video starts.
{
struct i2c_board_info info;
static const unsigned short default_addr_list[] = {
- 0x18, 0x6b, 0x71,
+ 0x18, 0x33, 0x6b, 0x71,
I2C_CLIENT_END
};
static const unsigned short pvr2000_addr_list[] = {
}
fallthrough;
case CX88_BOARD_DVICO_FUSIONHDTV_5_PCI_NANO:
+ case CX88_BOARD_NOTONLYTV_LV3H:
request_module("ir-kbd-i2c");
}
return r;
}
- r = media_pipeline_start(&q->vdev.entity, &q->pipe);
+ r = video_device_pipeline_start(&q->vdev, &q->pipe);
if (r)
goto fail_pipeline;
fail_csi2_subdev:
cio2_hw_exit(cio2, q);
fail_hw:
- media_pipeline_stop(&q->vdev.entity);
+ video_device_pipeline_stop(&q->vdev);
fail_pipeline:
dev_dbg(dev, "failed to start streaming (%d)\n", r);
cio2_vb2_return_all_buffers(q, VB2_BUF_STATE_QUEUED);
cio2_hw_exit(cio2, q);
synchronize_irq(cio2->pci_dev->irq);
cio2_vb2_return_all_buffers(q, VB2_BUF_STATE_ERROR);
- media_pipeline_stop(&q->vdev.entity);
+ video_device_pipeline_stop(&q->vdev);
pm_runtime_put(dev);
cio2->streaming = false;
}
inst->workqueue = NULL;
}
+ if (inst->fh.m2m_ctx) {
+ v4l2_m2m_ctx_release(inst->fh.m2m_ctx);
+ inst->fh.m2m_ctx = NULL;
+ }
v4l2_ctrl_handler_free(&inst->ctrl_handler);
mutex_destroy(&inst->lock);
v4l2_fh_del(&inst->fh);
vpu_trace(vpu->dev, "tgid = %d, pid = %d, inst = %p\n", inst->tgid, inst->pid, inst);
- vpu_inst_lock(inst);
- if (inst->fh.m2m_ctx) {
- v4l2_m2m_ctx_release(inst->fh.m2m_ctx);
- inst->fh.m2m_ctx = NULL;
- }
- vpu_inst_unlock(inst);
-
call_void_vop(inst, release);
vpu_inst_unregister(inst);
vpu_inst_put(inst);
coda_write(dev, (s32)values[i], CODA9_REG_JPEG_HUFF_DATA);
}
-static int coda9_jpeg_dec_huff_setup(struct coda_ctx *ctx)
+static void coda9_jpeg_dec_huff_setup(struct coda_ctx *ctx)
{
struct coda_huff_tab *huff_tab = ctx->params.jpeg_huff_tab;
struct coda_dev *dev = ctx->dev;
coda9_jpeg_write_huff_values(dev, huff_tab->luma_ac, 162);
coda9_jpeg_write_huff_values(dev, huff_tab->chroma_ac, 162);
coda_write(dev, 0x000, CODA9_REG_JPEG_HUFF_CTRL);
- return 0;
}
static inline void coda9_jpeg_write_qmat_tab(struct coda_dev *dev,
coda_write(dev, ctx->params.jpeg_restart_interval,
CODA9_REG_JPEG_RST_INTVAL);
- if (ctx->params.jpeg_huff_tab) {
- ret = coda9_jpeg_dec_huff_setup(ctx);
- if (ret < 0) {
- v4l2_err(&dev->v4l2_dev,
- "failed to set up Huffman tables: %d\n", ret);
- return ret;
- }
- }
+ if (ctx->params.jpeg_huff_tab)
+ coda9_jpeg_dec_huff_setup(ctx);
coda9_jpeg_qmat_setup(ctx);
kfree(path);
atomic_dec(&mdp->job_count);
wake_up(&mdp->callback_wq);
- if (cmd->pkt.buf_size > 0)
+ if (cmd && cmd->pkt.buf_size > 0)
mdp_cmdq_pkt_destroy(&cmd->pkt);
kfree(comps);
kfree(cmd);
int i, ret;
if (comp->comp_dev) {
- ret = pm_runtime_get_sync(comp->comp_dev);
+ ret = pm_runtime_resume_and_get(comp->comp_dev);
if (ret < 0) {
dev_err(dev,
"Failed to get power, err %d. type:%d id:%d\n",
dev_err(dev,
"Failed to enable clk %d. type:%d id:%d\n",
i, comp->type, comp->id);
+ pm_runtime_put(comp->comp_dev);
return ret;
}
}
ret = mdp_comp_init(mdp, node, comp, id);
if (ret) {
- kfree(comp);
+ devm_kfree(dev, comp);
return ERR_PTR(ret);
}
mdp->comp[id] = comp;
if (mdp->comp[i]) {
pm_runtime_disable(mdp->comp[i]->comp_dev);
mdp_comp_deinit(mdp->comp[i]);
- kfree(mdp->comp[i]);
+ devm_kfree(mdp->comp[i]->comp_dev, mdp->comp[i]);
mdp->comp[i] = NULL;
}
}
mdp_comp_destroy(mdp);
err_return:
for (i = 0; i < MDP_PIPE_MAX; i++)
- mtk_mutex_put(mdp->mdp_mutex[i]);
+ if (mdp)
+ mtk_mutex_put(mdp->mdp_mutex[i]);
kfree(mdp);
dev_dbg(dev, "Errno %d\n", ret);
return ret;
/* vpu work_size was set in mdp_vpu_ipi_handle_init_ack */
mem_size = vpu_alloc_size;
- if (mdp_vpu_shared_mem_alloc(vpu)) {
+ err = mdp_vpu_shared_mem_alloc(vpu);
+ if (err) {
		dev_err(&mdp->pdev->dev, "VPU memory allocation failed\n");
goto err_mem_alloc;
}
* The coordinates are saved in UQ12.4 fixed point format.
*/
static void dw100_ctrl_dewarping_map_init(const struct v4l2_ctrl *ctrl,
- u32 from_idx, u32 elems,
+ u32 from_idx,
union v4l2_ctrl_ptr ptr)
{
struct dw100_ctx *ctx =
ctx->map_height = mh;
ctx->map_size = mh * mw * sizeof(u32);
- for (idx = from_idx; idx < elems; idx++) {
+ for (idx = from_idx; idx < ctrl->elems; idx++) {
qy = min_t(u32, (idx / mw) * qdy, qsh);
qx = min_t(u32, (idx % mw) * qdx, qsw);
map[idx] = dw100_map_format_coordinates(qx, qy);
struct v4l2_subdev *subdev;
int ret;
- ret = media_pipeline_start(&vdev->entity, &video->pipe);
+ ret = video_device_pipeline_start(vdev, &video->pipe);
if (ret < 0)
return ret;
return 0;
error:
- media_pipeline_stop(&vdev->entity);
+ video_device_pipeline_stop(vdev);
video->ops->flush_buffers(video, VB2_BUF_STATE_QUEUED);
v4l2_subdev_call(subdev, video, s_stream, 0);
}
- media_pipeline_stop(&vdev->entity);
+ video_device_pipeline_stop(vdev);
video->ops->flush_buffers(video, VB2_BUF_STATE_ERROR);
}
struct venus_core *core = inst->core;
u32 fmt = to_hfi_raw_fmt(v4l2_pixfmt);
struct hfi_plat_caps *caps;
- u32 buftype;
+ bool found;
if (!fmt)
return false;
if (!caps)
return false;
- if (inst->session_type == VIDC_SESSION_TYPE_DEC)
- buftype = HFI_BUFFER_OUTPUT2;
- else
- buftype = HFI_BUFFER_OUTPUT;
+ found = find_fmt_from_caps(caps, HFI_BUFFER_OUTPUT, fmt);
+ if (found)
+ goto done;
- return find_fmt_from_caps(caps, buftype, fmt);
+ found = find_fmt_from_caps(caps, HFI_BUFFER_OUTPUT2, fmt);
+done:
+ return found;
}
EXPORT_SYMBOL_GPL(venus_helper_check_format);
int hfi_create(struct venus_core *core, const struct hfi_core_ops *ops)
{
- int ret;
-
if (!ops)
return -EINVAL;
core->state = CORE_UNINIT;
init_completion(&core->done);
pkt_set_version(core->res->hfi_version);
- ret = venus_hfi_create(core);
- return ret;
+ return venus_hfi_create(core);
}
void hfi_destroy(struct venus_core *core)
else
return NULL;
fmt = find_format(inst, pixmp->pixelformat, f->type);
+ if (!fmt)
+ return NULL;
}
pixmp->width = clamp(pixmp->width, frame_width_min(inst),
pixmp->height = clamp(pixmp->height, frame_height_min(inst),
frame_height_max(inst));
- if (f->type == V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE) {
- pixmp->width = ALIGN(pixmp->width, 128);
- pixmp->height = ALIGN(pixmp->height, 32);
- }
+ pixmp->width = ALIGN(pixmp->width, 128);
+ pixmp->height = ALIGN(pixmp->height, 32);
pixmp->width = ALIGN(pixmp->width, 2);
pixmp->height = ALIGN(pixmp->height, 2);
struct v4l2_fract *timeperframe = &out->timeperframe;
u64 us_per_frame, fps;
- if (a->type != V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE &&
+ if (a->type != V4L2_BUF_TYPE_VIDEO_OUTPUT &&
a->type != V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE)
return -EINVAL;
{
struct venus_inst *inst = to_inst(file);
- if (a->type != V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE &&
+ if (a->type != V4L2_BUF_TYPE_VIDEO_OUTPUT &&
a->type != V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE)
return -EINVAL;
return 0;
}
+static int venc_subscribe_event(struct v4l2_fh *fh,
+ const struct v4l2_event_subscription *sub)
+{
+ switch (sub->type) {
+ case V4L2_EVENT_EOS:
+ return v4l2_event_subscribe(fh, sub, 2, NULL);
+ case V4L2_EVENT_CTRL:
+ return v4l2_ctrl_subscribe_event(fh, sub);
+ default:
+ return -EINVAL;
+ }
+}
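
Editorial illustration of what the new handler enables from userspace: an encoder client can now subscribe to V4L2_EVENT_EOS and block until the stream drains. A self-contained sketch in userspace C (fd is an assumed open encoder video node in blocking mode):

	#include <string.h>
	#include <sys/ioctl.h>
	#include <linux/videodev2.h>

	static int wait_for_encoder_eos(int fd)
	{
		struct v4l2_event_subscription sub;
		struct v4l2_event ev;

		memset(&sub, 0, sizeof(sub));
		sub.type = V4L2_EVENT_EOS;
		if (ioctl(fd, VIDIOC_SUBSCRIBE_EVENT, &sub) < 0)
			return -1;

		/* Blocks (on a blocking fd) until an event is queued. */
		if (ioctl(fd, VIDIOC_DQEVENT, &ev) < 0)
			return -1;

		return ev.type == V4L2_EVENT_EOS ? 0 : -1;
	}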
+
static const struct v4l2_ioctl_ops venc_ioctl_ops = {
.vidioc_querycap = venc_querycap,
.vidioc_enum_fmt_vid_cap = venc_enum_fmt,
.vidioc_g_parm = venc_g_parm,
.vidioc_enum_framesizes = venc_enum_framesizes,
.vidioc_enum_frameintervals = venc_enum_frameintervals,
- .vidioc_subscribe_event = v4l2_ctrl_subscribe_event,
+ .vidioc_subscribe_event = venc_subscribe_event,
.vidioc_unsubscribe_event = v4l2_event_unsubscribe,
+ .vidioc_try_encoder_cmd = v4l2_m2m_ioctl_try_encoder_cmd,
};
static int venc_pm_get(struct venus_inst *inst)
return ret;
}
- if (inst->fmt_cap->pixfmt == V4L2_PIX_FMT_HEVC) {
+ if (inst->fmt_cap->pixfmt == V4L2_PIX_FMT_HEVC &&
+ ctr->profile.hevc == V4L2_MPEG_VIDEO_HEVC_PROFILE_MAIN_10) {
struct hfi_hdr10_pq_sei hdr10;
unsigned int c;
#include "core.h"
#include "venc.h"
+#include "helpers.h"
#define BITRATE_MIN 32000
#define BITRATE_MAX 160000000
* if we disable 8x8 transform for HP.
*/
- if (ctrl->val == 0)
- return -EINVAL;
ctr->h264_8x8_transform = ctrl->val;
break;
return 0;
}
+static int venc_op_g_volatile_ctrl(struct v4l2_ctrl *ctrl)
+{
+ struct venus_inst *inst = ctrl_to_inst(ctrl);
+ struct hfi_buffer_requirements bufreq;
+ enum hfi_version ver = inst->core->res->hfi_version;
+ int ret;
+
+ switch (ctrl->id) {
+ case V4L2_CID_MIN_BUFFERS_FOR_OUTPUT:
+ ret = venus_helper_get_bufreq(inst, HFI_BUFFER_INPUT, &bufreq);
+ if (!ret)
+ ctrl->val = HFI_BUFREQ_COUNT_MIN(&bufreq, ver);
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
static const struct v4l2_ctrl_ops venc_ctrl_ops = {
.s_ctrl = venc_op_s_ctrl,
+ .g_volatile_ctrl = venc_op_g_volatile_ctrl,
};
int venc_ctrl_init(struct venus_inst *inst)
{
int ret;
+ struct v4l2_ctrl_hdr10_mastering_display p_hdr10_mastering = {
+ { 34000, 13250, 7500 },
+ { 16000, 34500, 3000 }, 15635, 16450, 10000000, 500,
+ };
+ struct v4l2_ctrl_hdr10_cll_info p_hdr10_cll = { 1000, 400 };
- ret = v4l2_ctrl_handler_init(&inst->ctrl_handler, 58);
+ ret = v4l2_ctrl_handler_init(&inst->ctrl_handler, 59);
if (ret)
return ret;
V4L2_MPEG_VIDEO_VP8_PROFILE_3,
0, V4L2_MPEG_VIDEO_VP8_PROFILE_0);
+ v4l2_ctrl_new_std(&inst->ctrl_handler, &venc_ctrl_ops,
+ V4L2_CID_MIN_BUFFERS_FOR_OUTPUT, 4, 11, 1, 4);
+
v4l2_ctrl_new_std(&inst->ctrl_handler, &venc_ctrl_ops,
V4L2_CID_MPEG_VIDEO_BITRATE, BITRATE_MIN, BITRATE_MAX,
BITRATE_STEP, BITRATE_DEFAULT);
v4l2_ctrl_new_std_compound(&inst->ctrl_handler, &venc_ctrl_ops,
V4L2_CID_COLORIMETRY_HDR10_CLL_INFO,
- v4l2_ctrl_ptr_create(NULL));
+ v4l2_ctrl_ptr_create(&p_hdr10_cll));
v4l2_ctrl_new_std_compound(&inst->ctrl_handler, &venc_ctrl_ops,
V4L2_CID_COLORIMETRY_HDR10_MASTERING_DISPLAY,
- v4l2_ctrl_ptr_create(NULL));
+ v4l2_ctrl_ptr_create((void *)&p_hdr10_mastering));
v4l2_ctrl_new_std_menu(&inst->ctrl_handler, &venc_ctrl_ops,
V4L2_CID_MPEG_VIDEO_INTRA_REFRESH_PERIOD_TYPE,
return 0;
/*
- * Don't allow link changes if any entity in the graph is
- * streaming, modifying the CHSEL register fields can disrupt
- * running streams.
+ * Don't allow link changes if any stream in the graph is active as
+ * modifying the CHSEL register fields can disrupt running streams.
*/
media_device_for_each_entity(entity, &group->mdev)
if (media_entity_is_streaming(entity))
static int rvin_set_stream(struct rvin_dev *vin, int on)
{
- struct media_pipeline *pipe;
- struct media_device *mdev;
struct v4l2_subdev *sd;
struct media_pad *pad;
int ret;
sd = media_entity_to_v4l2_subdev(pad->entity);
if (!on) {
- media_pipeline_stop(&vin->vdev.entity);
+ video_device_pipeline_stop(&vin->vdev);
return v4l2_subdev_call(sd, video, s_stream, 0);
}
if (ret)
return ret;
- /*
- * The graph lock needs to be taken to protect concurrent
- * starts of multiple VIN instances as they might share
- * a common subdevice down the line and then should use
- * the same pipe.
- */
- mdev = vin->vdev.entity.graph_obj.mdev;
- mutex_lock(&mdev->graph_mutex);
- pipe = sd->entity.pipe ? sd->entity.pipe : &vin->vdev.pipe;
- ret = __media_pipeline_start(&vin->vdev.entity, pipe);
- mutex_unlock(&mdev->graph_mutex);
+ ret = video_device_pipeline_alloc_start(&vin->vdev);
if (ret)
return ret;
if (ret == -ENOIOCTLCMD)
ret = 0;
if (ret)
- media_pipeline_stop(&vin->vdev.entity);
+ video_device_pipeline_stop(&vin->vdev);
return ret;
}
}
mutex_unlock(&pipe->lock);
- media_pipeline_stop(&video->video.entity);
+ video_device_pipeline_stop(&video->video);
vsp1_video_release_buffers(video);
vsp1_video_pipeline_put(pipe);
}
return PTR_ERR(pipe);
}
- ret = __media_pipeline_start(&video->video.entity, &pipe->pipe);
+ ret = __video_device_pipeline_start(&video->video, &pipe->pipe);
if (ret < 0) {
mutex_unlock(&mdev->graph_mutex);
goto err_pipe;
return 0;
err_stop:
- media_pipeline_stop(&video->video.entity);
+ video_device_pipeline_stop(&video->video);
err_pipe:
vsp1_video_pipeline_put(pipe);
return ret;
*
* Call s_stream(false) in the reverse order from
* rkisp1_pipeline_stream_enable() and disable the DMA engine.
- * Should be called before media_pipeline_stop()
+ * Should be called before video_device_pipeline_stop()
*/
static void rkisp1_pipeline_stream_disable(struct rkisp1_capture *cap)
__must_hold(&cap->rkisp1->stream_lock)
* If the other capture is streaming, isp and sensor nodes shouldn't
* be disabled, skip them.
*/
- if (rkisp1->pipe.streaming_count < 2)
+ if (rkisp1->pipe.start_count < 2)
v4l2_subdev_call(&rkisp1->isp.sd, video, s_stream, false);
v4l2_subdev_call(&rkisp1->resizer_devs[cap->id].sd, video, s_stream,
* rkisp1_pipeline_stream_enable - enable nodes in the pipeline
*
* Enable the DMA Engine and call s_stream(true) through the pipeline.
- * Should be called after media_pipeline_start()
+ * Should be called after video_device_pipeline_start()
*/
static int rkisp1_pipeline_stream_enable(struct rkisp1_capture *cap)
__must_hold(&cap->rkisp1->stream_lock)
* If the other capture is streaming, isp and sensor nodes are already
* enabled, skip them.
*/
- if (rkisp1->pipe.streaming_count > 1)
+ if (rkisp1->pipe.start_count > 1)
return 0;
ret = v4l2_subdev_call(&rkisp1->isp.sd, video, s_stream, true);
rkisp1_dummy_buf_destroy(cap);
- media_pipeline_stop(&node->vdev.entity);
+ video_device_pipeline_stop(&node->vdev);
mutex_unlock(&cap->rkisp1->stream_lock);
}
mutex_lock(&cap->rkisp1->stream_lock);
- ret = media_pipeline_start(entity, &cap->rkisp1->pipe);
+ ret = video_device_pipeline_start(&cap->vnode.vdev, &cap->rkisp1->pipe);
if (ret) {
dev_err(cap->rkisp1->dev, "start pipeline failed %d\n", ret);
goto err_ret_buffers;
err_destroy_dummy:
rkisp1_dummy_buf_destroy(cap);
err_pipeline_stop:
- media_pipeline_stop(entity);
+ video_device_pipeline_stop(&cap->vnode.vdev);
err_ret_buffers:
rkisp1_return_all_buffers(cap, VB2_BUF_STATE_QUEUED);
mutex_unlock(&cap->rkisp1->stream_lock);
struct rkisp1_capture *cap = video_get_drvdata(vdev);
const struct rkisp1_capture_fmt_cfg *fmt =
rkisp1_find_fmt_cfg(cap, cap->pix.fmt.pixelformat);
- struct v4l2_subdev_format sd_fmt;
+ struct v4l2_subdev_format sd_fmt = {
+ .which = V4L2_SUBDEV_FORMAT_ACTIVE,
+ .pad = link->source->index,
+ };
int ret;
- sd_fmt.which = V4L2_SUBDEV_FORMAT_ACTIVE;
- sd_fmt.pad = link->source->index;
ret = v4l2_subdev_call(sd, pad, get_fmt, NULL, &sd_fmt);
if (ret)
return ret;
struct v4l2_format vdev_fmt;
enum v4l2_quantization quantization;
+ enum v4l2_ycbcr_encoding ycbcr_encoding;
enum rkisp1_fmt_raw_pat_type raw_type;
};
*/
const struct rkisp1_mbus_info *rkisp1_mbus_info_get_by_code(u32 mbus_code);
-/* rkisp1_params_configure - configure the params when stream starts.
- * This function is called by the isp entity upon stream starts.
- * The function applies the initial configuration of the parameters.
+/*
+ * rkisp1_params_pre_configure - Configure the params before stream start
*
- * @params: pointer to rkisp1_params.
+ * @params: pointer to rkisp1_params
* @bayer_pat: the bayer pattern on the isp video sink pad
* @quantization: the quantization configured on the isp's src pad
+ * @ycbcr_encoding: the ycbcr_encoding configured on the isp's src pad
+ *
+ * This function is called by the ISP entity just before the ISP gets started.
+ * It applies the initial ISP parameters from the first params buffer, but
+ * skips LSC as it needs to be configured after the ISP is started.
+ */
+void rkisp1_params_pre_configure(struct rkisp1_params *params,
+ enum rkisp1_fmt_raw_pat_type bayer_pat,
+ enum v4l2_quantization quantization,
+ enum v4l2_ycbcr_encoding ycbcr_encoding);
+
+/*
+ * rkisp1_params_post_configure - Configure the params after stream start
+ *
+ * @params: pointer to rkisp1_params
+ *
+ * This function is called by the ISP entity just after the ISP gets started.
+ * It applies the initial ISP LSC parameters from the first params buffer.
*/
-void rkisp1_params_configure(struct rkisp1_params *params,
- enum rkisp1_fmt_raw_pat_type bayer_pat,
- enum v4l2_quantization quantization);
+void rkisp1_params_post_configure(struct rkisp1_params *params);
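
Editorial sketch of the intended call order, assembled from the two declarations above; rkisp1_my_enable_isp() is a made-up stand-in for the register writes that actually start the ISP:

	/* Before the ISP starts: apply everything except LSC. */
	rkisp1_params_pre_configure(&rkisp1->params, sink_fmt->bayer_pat,
				    src_fmt->quantization,
				    src_fmt->ycbcr_enc);

	rkisp1_my_enable_isp(rkisp1);		/* hypothetical */

	/* Once the ISP is running: apply the initial LSC parameters. */
	rkisp1_params_post_configure(&rkisp1->params);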
/* rkisp1_params_disable - disable all parameters.
* This function is called by the isp entity upon stream start
struct v4l2_mbus_framefmt *src_frm;
src_frm = rkisp1_isp_get_pad_fmt(isp, NULL,
- RKISP1_ISP_PAD_SINK_VIDEO,
+ RKISP1_ISP_PAD_SOURCE_VIDEO,
V4L2_SUBDEV_FORMAT_ACTIVE);
- rkisp1_params_configure(&rkisp1->params, sink_fmt->bayer_pat,
- src_frm->quantization);
+ rkisp1_params_pre_configure(&rkisp1->params, sink_fmt->bayer_pat,
+ src_frm->quantization,
+ src_frm->ycbcr_enc);
}
return 0;
RKISP1_CIF_ISP_CTRL_ISP_ENABLE |
RKISP1_CIF_ISP_CTRL_ISP_INFORM_ENABLE;
rkisp1_write(rkisp1, RKISP1_CIF_ISP_CTRL, val);
+
+ if (isp->src_fmt->pixel_enc != V4L2_PIXEL_ENC_BAYER)
+ rkisp1_params_post_configure(&rkisp1->params);
}
/* ----------------------------------------------------------------------------
struct v4l2_mbus_framefmt *sink_fmt, *src_fmt;
struct v4l2_rect *sink_crop, *src_crop;
+ /* Video. */
sink_fmt = v4l2_subdev_get_try_format(sd, sd_state,
RKISP1_ISP_PAD_SINK_VIDEO);
sink_fmt->width = RKISP1_DEFAULT_WIDTH;
sink_fmt->height = RKISP1_DEFAULT_HEIGHT;
sink_fmt->field = V4L2_FIELD_NONE;
sink_fmt->code = RKISP1_DEF_SINK_PAD_FMT;
+ sink_fmt->colorspace = V4L2_COLORSPACE_RAW;
+ sink_fmt->xfer_func = V4L2_XFER_FUNC_NONE;
+ sink_fmt->ycbcr_enc = V4L2_YCBCR_ENC_601;
+ sink_fmt->quantization = V4L2_QUANTIZATION_FULL_RANGE;
sink_crop = v4l2_subdev_get_try_crop(sd, sd_state,
RKISP1_ISP_PAD_SINK_VIDEO);
RKISP1_ISP_PAD_SOURCE_VIDEO);
*src_fmt = *sink_fmt;
src_fmt->code = RKISP1_DEF_SRC_PAD_FMT;
+ src_fmt->colorspace = V4L2_COLORSPACE_SRGB;
+ src_fmt->xfer_func = V4L2_XFER_FUNC_SRGB;
+ src_fmt->ycbcr_enc = V4L2_YCBCR_ENC_601;
+ src_fmt->quantization = V4L2_QUANTIZATION_LIM_RANGE;
src_crop = v4l2_subdev_get_try_crop(sd, sd_state,
RKISP1_ISP_PAD_SOURCE_VIDEO);
*src_crop = *sink_crop;
+ /* Parameters and statistics. */
sink_fmt = v4l2_subdev_get_try_format(sd, sd_state,
RKISP1_ISP_PAD_SINK_PARAMS);
src_fmt = v4l2_subdev_get_try_format(sd, sd_state,
struct v4l2_mbus_framefmt *format,
unsigned int which)
{
- const struct rkisp1_mbus_info *mbus_info;
+ const struct rkisp1_mbus_info *sink_info;
+ const struct rkisp1_mbus_info *src_info;
+ struct v4l2_mbus_framefmt *sink_fmt;
struct v4l2_mbus_framefmt *src_fmt;
const struct v4l2_rect *src_crop;
+ bool set_csc;
+ sink_fmt = rkisp1_isp_get_pad_fmt(isp, sd_state,
+ RKISP1_ISP_PAD_SINK_VIDEO, which);
src_fmt = rkisp1_isp_get_pad_fmt(isp, sd_state,
RKISP1_ISP_PAD_SOURCE_VIDEO, which);
src_crop = rkisp1_isp_get_pad_crop(isp, sd_state,
RKISP1_ISP_PAD_SOURCE_VIDEO, which);
+ /*
+ * Media bus code. The ISP can operate in pass-through mode (Bayer in,
+ * Bayer out or YUV in, YUV out) or process Bayer data to YUV, but
+ * can't convert from YUV to Bayer.
+ */
+ sink_info = rkisp1_mbus_info_get_by_code(sink_fmt->code);
+
src_fmt->code = format->code;
- mbus_info = rkisp1_mbus_info_get_by_code(src_fmt->code);
- if (!mbus_info || !(mbus_info->direction & RKISP1_ISP_SD_SRC)) {
+ src_info = rkisp1_mbus_info_get_by_code(src_fmt->code);
+ if (!src_info || !(src_info->direction & RKISP1_ISP_SD_SRC)) {
src_fmt->code = RKISP1_DEF_SRC_PAD_FMT;
- mbus_info = rkisp1_mbus_info_get_by_code(src_fmt->code);
+ src_info = rkisp1_mbus_info_get_by_code(src_fmt->code);
}
- if (which == V4L2_SUBDEV_FORMAT_ACTIVE)
- isp->src_fmt = mbus_info;
+
+ if (sink_info->pixel_enc == V4L2_PIXEL_ENC_YUV &&
+ src_info->pixel_enc == V4L2_PIXEL_ENC_BAYER) {
+ src_fmt->code = sink_fmt->code;
+ src_info = sink_info;
+ }
+
+ /*
+ * The source width and height must be identical to the source crop
+ * size.
+ */
src_fmt->width = src_crop->width;
src_fmt->height = src_crop->height;
/*
- * The CSC API is used to allow userspace to force full
- * quantization on YUV formats.
+ * Copy the color space for the sink pad. When converting from Bayer to
+ * YUV, default to a limited quantization range.
*/
- if (format->flags & V4L2_MBUS_FRAMEFMT_SET_CSC &&
- format->quantization == V4L2_QUANTIZATION_FULL_RANGE &&
- mbus_info->pixel_enc == V4L2_PIXEL_ENC_YUV)
- src_fmt->quantization = V4L2_QUANTIZATION_FULL_RANGE;
- else if (mbus_info->pixel_enc == V4L2_PIXEL_ENC_YUV)
+ src_fmt->colorspace = sink_fmt->colorspace;
+ src_fmt->xfer_func = sink_fmt->xfer_func;
+ src_fmt->ycbcr_enc = sink_fmt->ycbcr_enc;
+
+ if (sink_info->pixel_enc == V4L2_PIXEL_ENC_BAYER &&
+ src_info->pixel_enc == V4L2_PIXEL_ENC_YUV)
src_fmt->quantization = V4L2_QUANTIZATION_LIM_RANGE;
else
- src_fmt->quantization = V4L2_QUANTIZATION_FULL_RANGE;
+ src_fmt->quantization = sink_fmt->quantization;
+
+ /*
+ * Allow setting the source color space fields when the SET_CSC flag is
+ * set and the source format is YUV. If the sink format is YUV, don't
+ * set the color primaries, transfer function or YCbCr encoding as the
+ * ISP is bypassed in that case and passes YUV data through without
+ * modifications.
+ *
+ * The color primaries and transfer function are configured through the
+ * cross-talk matrix and tone curve respectively. Settings for those
+ * hardware blocks are conveyed through the ISP parameters buffer, as
+ * they need to combine color space information with other image tuning
+	 * characteristics, and thus can't be computed by the kernel based on
+	 * the color space. The source pad colorspace and xfer_func fields are
+ * ignored by the driver, but can be set by userspace to propagate
+ * accurate color space information down the pipeline.
+ */
+ set_csc = format->flags & V4L2_MBUS_FRAMEFMT_SET_CSC;
+
+ if (set_csc && src_info->pixel_enc == V4L2_PIXEL_ENC_YUV) {
+ if (sink_info->pixel_enc == V4L2_PIXEL_ENC_BAYER) {
+ if (format->colorspace != V4L2_COLORSPACE_DEFAULT)
+ src_fmt->colorspace = format->colorspace;
+ if (format->xfer_func != V4L2_XFER_FUNC_DEFAULT)
+ src_fmt->xfer_func = format->xfer_func;
+ if (format->ycbcr_enc != V4L2_YCBCR_ENC_DEFAULT)
+ src_fmt->ycbcr_enc = format->ycbcr_enc;
+ }
+
+ if (format->quantization != V4L2_QUANTIZATION_DEFAULT)
+ src_fmt->quantization = format->quantization;
+ }
*format = *src_fmt;
+
+ /*
+ * Restore the SET_CSC flag if it was set to indicate support for the
+ * CSC setting API.
+ */
+ if (set_csc)
+ format->flags |= V4L2_MBUS_FRAMEFMT_SET_CSC;
+
+ /* Store the source format info when setting the active format. */
+ if (which == V4L2_SUBDEV_FORMAT_ACTIVE)
+ isp->src_fmt = src_info;
}
static void rkisp1_isp_set_src_crop(struct rkisp1_isp *isp,
const struct rkisp1_mbus_info *mbus_info;
struct v4l2_mbus_framefmt *sink_fmt;
struct v4l2_rect *sink_crop;
+ bool is_yuv;
sink_fmt = rkisp1_isp_get_pad_fmt(isp, sd_state,
RKISP1_ISP_PAD_SINK_VIDEO,
RKISP1_ISP_MIN_HEIGHT,
RKISP1_ISP_MAX_HEIGHT);
+ /*
+ * Adjust the color space fields. Accept any color primaries and
+ * transfer function for both YUV and Bayer. For YUV any YCbCr encoding
+ * and quantization range is also accepted. For Bayer formats, the YCbCr
+ * encoding isn't applicable, and the quantization range can only be
+ * full.
+ */
+ is_yuv = mbus_info->pixel_enc == V4L2_PIXEL_ENC_YUV;
+
+ sink_fmt->colorspace = format->colorspace ? :
+ (is_yuv ? V4L2_COLORSPACE_SRGB :
+ V4L2_COLORSPACE_RAW);
+ sink_fmt->xfer_func = format->xfer_func ? :
+ V4L2_MAP_XFER_FUNC_DEFAULT(sink_fmt->colorspace);
+ if (is_yuv) {
+ sink_fmt->ycbcr_enc = format->ycbcr_enc ? :
+ V4L2_MAP_YCBCR_ENC_DEFAULT(sink_fmt->colorspace);
+ sink_fmt->quantization = format->quantization ? :
+ V4L2_MAP_QUANTIZATION_DEFAULT(false, sink_fmt->colorspace,
+ sink_fmt->ycbcr_enc);
+ } else {
+ /*
+ * The YCbCr encoding isn't applicable for non-YUV formats, but
+ * V4L2 has no "no encoding" value. Hardcode it to Rec. 601, it
+ * should be ignored by userspace.
+ */
+ sink_fmt->ycbcr_enc = V4L2_YCBCR_ENC_601;
+ sink_fmt->quantization = V4L2_QUANTIZATION_FULL_RANGE;
+ }
+
*format = *sink_fmt;
/* Propagate to in crop */
#define RKISP1_ISP_PARAMS_REQ_BUFS_MIN 2
#define RKISP1_ISP_PARAMS_REQ_BUFS_MAX 8
+#define RKISP1_ISP_DPCC_METHODS_SET(n) \
+ (RKISP1_CIF_ISP_DPCC_METHODS_SET_1 + 0x4 * (n))
#define RKISP1_ISP_DPCC_LINE_THRESH(n) \
(RKISP1_CIF_ISP_DPCC_LINE_THRESH_1 + 0x14 * (n))
#define RKISP1_ISP_DPCC_LINE_MAD_FAC(n) \
unsigned int i;
u32 mode;
- /* avoid to override the old enable value */
+ /*
+ * The enable bit is controlled in rkisp1_isp_isr_other_config() and
+ * must be preserved. The grayscale mode should be configured
+ * automatically based on the media bus code on the ISP sink pad, so
+ * only the STAGE1_ENABLE bit can be set by userspace.
+ */
mode = rkisp1_read(params->rkisp1, RKISP1_CIF_ISP_DPCC_MODE);
- mode &= RKISP1_CIF_ISP_DPCC_ENA;
- mode |= arg->mode & ~RKISP1_CIF_ISP_DPCC_ENA;
+ mode &= RKISP1_CIF_ISP_DPCC_MODE_DPCC_ENABLE;
+ mode |= arg->mode & RKISP1_CIF_ISP_DPCC_MODE_STAGE1_ENABLE;
rkisp1_write(params->rkisp1, RKISP1_CIF_ISP_DPCC_MODE, mode);
+
rkisp1_write(params->rkisp1, RKISP1_CIF_ISP_DPCC_OUTPUT_MODE,
- arg->output_mode);
+ arg->output_mode & RKISP1_CIF_ISP_DPCC_OUTPUT_MODE_MASK);
rkisp1_write(params->rkisp1, RKISP1_CIF_ISP_DPCC_SET_USE,
- arg->set_use);
-
- rkisp1_write(params->rkisp1, RKISP1_CIF_ISP_DPCC_METHODS_SET_1,
- arg->methods[0].method);
- rkisp1_write(params->rkisp1, RKISP1_CIF_ISP_DPCC_METHODS_SET_2,
- arg->methods[1].method);
- rkisp1_write(params->rkisp1, RKISP1_CIF_ISP_DPCC_METHODS_SET_3,
- arg->methods[2].method);
+ arg->set_use & RKISP1_CIF_ISP_DPCC_SET_USE_MASK);
+
for (i = 0; i < RKISP1_CIF_ISP_DPCC_METHODS_MAX; i++) {
+ rkisp1_write(params->rkisp1, RKISP1_ISP_DPCC_METHODS_SET(i),
+ arg->methods[i].method &
+ RKISP1_CIF_ISP_DPCC_METHODS_SET_MASK);
rkisp1_write(params->rkisp1, RKISP1_ISP_DPCC_LINE_THRESH(i),
- arg->methods[i].line_thresh);
+ arg->methods[i].line_thresh &
+ RKISP1_CIF_ISP_DPCC_LINE_THRESH_MASK);
rkisp1_write(params->rkisp1, RKISP1_ISP_DPCC_LINE_MAD_FAC(i),
- arg->methods[i].line_mad_fac);
+ arg->methods[i].line_mad_fac &
+ RKISP1_CIF_ISP_DPCC_LINE_MAD_FAC_MASK);
rkisp1_write(params->rkisp1, RKISP1_ISP_DPCC_PG_FAC(i),
- arg->methods[i].pg_fac);
+ arg->methods[i].pg_fac &
+ RKISP1_CIF_ISP_DPCC_PG_FAC_MASK);
rkisp1_write(params->rkisp1, RKISP1_ISP_DPCC_RND_THRESH(i),
- arg->methods[i].rnd_thresh);
+ arg->methods[i].rnd_thresh &
+ RKISP1_CIF_ISP_DPCC_RND_THRESH_MASK);
rkisp1_write(params->rkisp1, RKISP1_ISP_DPCC_RG_FAC(i),
- arg->methods[i].rg_fac);
+ arg->methods[i].rg_fac &
+ RKISP1_CIF_ISP_DPCC_RG_FAC_MASK);
}
rkisp1_write(params->rkisp1, RKISP1_CIF_ISP_DPCC_RND_OFFS,
- arg->rnd_offs);
+ arg->rnd_offs & RKISP1_CIF_ISP_DPCC_RND_OFFS_MASK);
rkisp1_write(params->rkisp1, RKISP1_CIF_ISP_DPCC_RO_LIMITS,
- arg->ro_limits);
+ arg->ro_limits & RKISP1_CIF_ISP_DPCC_RO_LIMIT_MASK);
}
/* ISP black level subtraction interface function */
rkisp1_lsc_matrix_config_v10(struct rkisp1_params *params,
const struct rkisp1_cif_isp_lsc_config *pconfig)
{
- unsigned int isp_lsc_status, sram_addr, isp_lsc_table_sel, i, j, data;
+ struct rkisp1_device *rkisp1 = params->rkisp1;
+ u32 lsc_status, sram_addr, lsc_table_sel;
+ unsigned int i, j;
- isp_lsc_status = rkisp1_read(params->rkisp1, RKISP1_CIF_ISP_LSC_STATUS);
+ lsc_status = rkisp1_read(rkisp1, RKISP1_CIF_ISP_LSC_STATUS);
/* RKISP1_CIF_ISP_LSC_TABLE_ADDRESS_153 = ( 17 * 18 ) >> 1 */
- sram_addr = (isp_lsc_status & RKISP1_CIF_ISP_LSC_ACTIVE_TABLE) ?
+ sram_addr = lsc_status & RKISP1_CIF_ISP_LSC_ACTIVE_TABLE ?
RKISP1_CIF_ISP_LSC_TABLE_ADDRESS_0 :
RKISP1_CIF_ISP_LSC_TABLE_ADDRESS_153;
- rkisp1_write(params->rkisp1, RKISP1_CIF_ISP_LSC_R_TABLE_ADDR, sram_addr);
- rkisp1_write(params->rkisp1, RKISP1_CIF_ISP_LSC_GR_TABLE_ADDR, sram_addr);
- rkisp1_write(params->rkisp1, RKISP1_CIF_ISP_LSC_GB_TABLE_ADDR, sram_addr);
- rkisp1_write(params->rkisp1, RKISP1_CIF_ISP_LSC_B_TABLE_ADDR, sram_addr);
+ rkisp1_write(rkisp1, RKISP1_CIF_ISP_LSC_R_TABLE_ADDR, sram_addr);
+ rkisp1_write(rkisp1, RKISP1_CIF_ISP_LSC_GR_TABLE_ADDR, sram_addr);
+ rkisp1_write(rkisp1, RKISP1_CIF_ISP_LSC_GB_TABLE_ADDR, sram_addr);
+ rkisp1_write(rkisp1, RKISP1_CIF_ISP_LSC_B_TABLE_ADDR, sram_addr);
/* program data tables (table size is 9 * 17 = 153) */
for (i = 0; i < RKISP1_CIF_ISP_LSC_SAMPLES_MAX; i++) {
+ const __u16 *r_tbl = pconfig->r_data_tbl[i];
+ const __u16 *gr_tbl = pconfig->gr_data_tbl[i];
+ const __u16 *gb_tbl = pconfig->gb_data_tbl[i];
+ const __u16 *b_tbl = pconfig->b_data_tbl[i];
+
/*
* 17 sectors with 2 values in one DWORD = 9
* DWORDs (2nd value of last DWORD unused)
*/
for (j = 0; j < RKISP1_CIF_ISP_LSC_SAMPLES_MAX - 1; j += 2) {
- data = RKISP1_CIF_ISP_LSC_TABLE_DATA_V10(pconfig->r_data_tbl[i][j],
- pconfig->r_data_tbl[i][j + 1]);
- rkisp1_write(params->rkisp1,
- RKISP1_CIF_ISP_LSC_R_TABLE_DATA, data);
-
- data = RKISP1_CIF_ISP_LSC_TABLE_DATA_V10(pconfig->gr_data_tbl[i][j],
- pconfig->gr_data_tbl[i][j + 1]);
- rkisp1_write(params->rkisp1,
- RKISP1_CIF_ISP_LSC_GR_TABLE_DATA, data);
-
- data = RKISP1_CIF_ISP_LSC_TABLE_DATA_V10(pconfig->gb_data_tbl[i][j],
- pconfig->gb_data_tbl[i][j + 1]);
- rkisp1_write(params->rkisp1,
- RKISP1_CIF_ISP_LSC_GB_TABLE_DATA, data);
-
- data = RKISP1_CIF_ISP_LSC_TABLE_DATA_V10(pconfig->b_data_tbl[i][j],
- pconfig->b_data_tbl[i][j + 1]);
- rkisp1_write(params->rkisp1,
- RKISP1_CIF_ISP_LSC_B_TABLE_DATA, data);
+ rkisp1_write(rkisp1, RKISP1_CIF_ISP_LSC_R_TABLE_DATA,
+ RKISP1_CIF_ISP_LSC_TABLE_DATA_V10(
+ r_tbl[j], r_tbl[j + 1]));
+ rkisp1_write(rkisp1, RKISP1_CIF_ISP_LSC_GR_TABLE_DATA,
+ RKISP1_CIF_ISP_LSC_TABLE_DATA_V10(
+ gr_tbl[j], gr_tbl[j + 1]));
+ rkisp1_write(rkisp1, RKISP1_CIF_ISP_LSC_GB_TABLE_DATA,
+ RKISP1_CIF_ISP_LSC_TABLE_DATA_V10(
+ gb_tbl[j], gb_tbl[j + 1]));
+ rkisp1_write(rkisp1, RKISP1_CIF_ISP_LSC_B_TABLE_DATA,
+ RKISP1_CIF_ISP_LSC_TABLE_DATA_V10(
+ b_tbl[j], b_tbl[j + 1]));
}
- data = RKISP1_CIF_ISP_LSC_TABLE_DATA_V10(pconfig->r_data_tbl[i][j], 0);
- rkisp1_write(params->rkisp1, RKISP1_CIF_ISP_LSC_R_TABLE_DATA,
- data);
- data = RKISP1_CIF_ISP_LSC_TABLE_DATA_V10(pconfig->gr_data_tbl[i][j], 0);
- rkisp1_write(params->rkisp1, RKISP1_CIF_ISP_LSC_GR_TABLE_DATA,
- data);
-
- data = RKISP1_CIF_ISP_LSC_TABLE_DATA_V10(pconfig->gb_data_tbl[i][j], 0);
- rkisp1_write(params->rkisp1, RKISP1_CIF_ISP_LSC_GB_TABLE_DATA,
- data);
-
- data = RKISP1_CIF_ISP_LSC_TABLE_DATA_V10(pconfig->b_data_tbl[i][j], 0);
- rkisp1_write(params->rkisp1, RKISP1_CIF_ISP_LSC_B_TABLE_DATA,
- data);
+ rkisp1_write(rkisp1, RKISP1_CIF_ISP_LSC_R_TABLE_DATA,
+ RKISP1_CIF_ISP_LSC_TABLE_DATA_V10(r_tbl[j], 0));
+ rkisp1_write(rkisp1, RKISP1_CIF_ISP_LSC_GR_TABLE_DATA,
+ RKISP1_CIF_ISP_LSC_TABLE_DATA_V10(gr_tbl[j], 0));
+ rkisp1_write(rkisp1, RKISP1_CIF_ISP_LSC_GB_TABLE_DATA,
+ RKISP1_CIF_ISP_LSC_TABLE_DATA_V10(gb_tbl[j], 0));
+ rkisp1_write(rkisp1, RKISP1_CIF_ISP_LSC_B_TABLE_DATA,
+ RKISP1_CIF_ISP_LSC_TABLE_DATA_V10(b_tbl[j], 0));
}
- isp_lsc_table_sel = (isp_lsc_status & RKISP1_CIF_ISP_LSC_ACTIVE_TABLE) ?
- RKISP1_CIF_ISP_LSC_TABLE_0 :
- RKISP1_CIF_ISP_LSC_TABLE_1;
- rkisp1_write(params->rkisp1, RKISP1_CIF_ISP_LSC_TABLE_SEL,
- isp_lsc_table_sel);
+
+ lsc_table_sel = lsc_status & RKISP1_CIF_ISP_LSC_ACTIVE_TABLE ?
+ RKISP1_CIF_ISP_LSC_TABLE_0 : RKISP1_CIF_ISP_LSC_TABLE_1;
+ rkisp1_write(rkisp1, RKISP1_CIF_ISP_LSC_TABLE_SEL, lsc_table_sel);
}
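
Editorial note: both matrix helpers implement a double-buffering ("ping-pong") scheme: LSC_STATUS reports which of the two SRAM tables the hardware is currently reading, the idle copy (starting at word address 0 or 153) is programmed, and TABLE_SEL then flips the roles so the update takes effect cleanly. A toy model of the pattern, with made-up names:

	#include <string.h>

	struct toy_lsc {
		unsigned short table[2][153];	/* two copies in "SRAM" */
		unsigned int active;		/* copy the "hardware" reads now */
	};

	static void toy_lsc_update(struct toy_lsc *lsc, const unsigned short *data)
	{
		unsigned int idle = lsc->active ^ 1;

		/* Program the copy the hardware is not reading... */
		memcpy(lsc->table[idle], data, sizeof(lsc->table[idle]));

		/* ...then flip the select so it becomes active. */
		lsc->active = idle;
	}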
static void
rkisp1_lsc_matrix_config_v12(struct rkisp1_params *params,
const struct rkisp1_cif_isp_lsc_config *pconfig)
{
- unsigned int isp_lsc_status, sram_addr, isp_lsc_table_sel, i, j, data;
+ struct rkisp1_device *rkisp1 = params->rkisp1;
+ u32 lsc_status, sram_addr, lsc_table_sel;
+ unsigned int i, j;
- isp_lsc_status = rkisp1_read(params->rkisp1, RKISP1_CIF_ISP_LSC_STATUS);
+ lsc_status = rkisp1_read(rkisp1, RKISP1_CIF_ISP_LSC_STATUS);
/* RKISP1_CIF_ISP_LSC_TABLE_ADDRESS_153 = ( 17 * 18 ) >> 1 */
- sram_addr = (isp_lsc_status & RKISP1_CIF_ISP_LSC_ACTIVE_TABLE) ?
- RKISP1_CIF_ISP_LSC_TABLE_ADDRESS_0 :
- RKISP1_CIF_ISP_LSC_TABLE_ADDRESS_153;
- rkisp1_write(params->rkisp1, RKISP1_CIF_ISP_LSC_R_TABLE_ADDR, sram_addr);
- rkisp1_write(params->rkisp1, RKISP1_CIF_ISP_LSC_GR_TABLE_ADDR, sram_addr);
- rkisp1_write(params->rkisp1, RKISP1_CIF_ISP_LSC_GB_TABLE_ADDR, sram_addr);
- rkisp1_write(params->rkisp1, RKISP1_CIF_ISP_LSC_B_TABLE_ADDR, sram_addr);
+ sram_addr = lsc_status & RKISP1_CIF_ISP_LSC_ACTIVE_TABLE ?
+ RKISP1_CIF_ISP_LSC_TABLE_ADDRESS_0 :
+ RKISP1_CIF_ISP_LSC_TABLE_ADDRESS_153;
+ rkisp1_write(rkisp1, RKISP1_CIF_ISP_LSC_R_TABLE_ADDR, sram_addr);
+ rkisp1_write(rkisp1, RKISP1_CIF_ISP_LSC_GR_TABLE_ADDR, sram_addr);
+ rkisp1_write(rkisp1, RKISP1_CIF_ISP_LSC_GB_TABLE_ADDR, sram_addr);
+ rkisp1_write(rkisp1, RKISP1_CIF_ISP_LSC_B_TABLE_ADDR, sram_addr);
/* program data tables (table size is 9 * 17 = 153) */
for (i = 0; i < RKISP1_CIF_ISP_LSC_SAMPLES_MAX; i++) {
+ const __u16 *r_tbl = pconfig->r_data_tbl[i];
+ const __u16 *gr_tbl = pconfig->gr_data_tbl[i];
+ const __u16 *gb_tbl = pconfig->gb_data_tbl[i];
+ const __u16 *b_tbl = pconfig->b_data_tbl[i];
+
/*
* 17 sectors with 2 values in one DWORD = 9
* DWORDs (2nd value of last DWORD unused)
*/
for (j = 0; j < RKISP1_CIF_ISP_LSC_SAMPLES_MAX - 1; j += 2) {
- data = RKISP1_CIF_ISP_LSC_TABLE_DATA_V12(
- pconfig->r_data_tbl[i][j],
- pconfig->r_data_tbl[i][j + 1]);
- rkisp1_write(params->rkisp1,
- RKISP1_CIF_ISP_LSC_R_TABLE_DATA, data);
-
- data = RKISP1_CIF_ISP_LSC_TABLE_DATA_V12(
- pconfig->gr_data_tbl[i][j],
- pconfig->gr_data_tbl[i][j + 1]);
- rkisp1_write(params->rkisp1,
- RKISP1_CIF_ISP_LSC_GR_TABLE_DATA, data);
-
- data = RKISP1_CIF_ISP_LSC_TABLE_DATA_V12(
- pconfig->gb_data_tbl[i][j],
- pconfig->gb_data_tbl[i][j + 1]);
- rkisp1_write(params->rkisp1,
- RKISP1_CIF_ISP_LSC_GB_TABLE_DATA, data);
-
- data = RKISP1_CIF_ISP_LSC_TABLE_DATA_V12(
- pconfig->b_data_tbl[i][j],
- pconfig->b_data_tbl[i][j + 1]);
- rkisp1_write(params->rkisp1,
- RKISP1_CIF_ISP_LSC_B_TABLE_DATA, data);
+ rkisp1_write(rkisp1, RKISP1_CIF_ISP_LSC_R_TABLE_DATA,
+ RKISP1_CIF_ISP_LSC_TABLE_DATA_V12(
+ r_tbl[j], r_tbl[j + 1]));
+ rkisp1_write(rkisp1, RKISP1_CIF_ISP_LSC_GR_TABLE_DATA,
+ RKISP1_CIF_ISP_LSC_TABLE_DATA_V12(
+ gr_tbl[j], gr_tbl[j + 1]));
+ rkisp1_write(rkisp1, RKISP1_CIF_ISP_LSC_GB_TABLE_DATA,
+ RKISP1_CIF_ISP_LSC_TABLE_DATA_V12(
+ gb_tbl[j], gb_tbl[j + 1]));
+ rkisp1_write(rkisp1, RKISP1_CIF_ISP_LSC_B_TABLE_DATA,
+ RKISP1_CIF_ISP_LSC_TABLE_DATA_V12(
+ b_tbl[j], b_tbl[j + 1]));
}
- data = RKISP1_CIF_ISP_LSC_TABLE_DATA_V12(pconfig->r_data_tbl[i][j], 0);
- rkisp1_write(params->rkisp1, RKISP1_CIF_ISP_LSC_R_TABLE_DATA,
- data);
-
- data = RKISP1_CIF_ISP_LSC_TABLE_DATA_V12(pconfig->gr_data_tbl[i][j], 0);
- rkisp1_write(params->rkisp1, RKISP1_CIF_ISP_LSC_GR_TABLE_DATA,
- data);
-
- data = RKISP1_CIF_ISP_LSC_TABLE_DATA_V12(pconfig->gb_data_tbl[i][j], 0);
- rkisp1_write(params->rkisp1, RKISP1_CIF_ISP_LSC_GB_TABLE_DATA,
- data);
-
- data = RKISP1_CIF_ISP_LSC_TABLE_DATA_V12(pconfig->b_data_tbl[i][j], 0);
- rkisp1_write(params->rkisp1, RKISP1_CIF_ISP_LSC_B_TABLE_DATA,
- data);
+ rkisp1_write(rkisp1, RKISP1_CIF_ISP_LSC_R_TABLE_DATA,
+ RKISP1_CIF_ISP_LSC_TABLE_DATA_V12(r_tbl[j], 0));
+ rkisp1_write(rkisp1, RKISP1_CIF_ISP_LSC_GR_TABLE_DATA,
+ RKISP1_CIF_ISP_LSC_TABLE_DATA_V12(gr_tbl[j], 0));
+ rkisp1_write(rkisp1, RKISP1_CIF_ISP_LSC_GB_TABLE_DATA,
+ RKISP1_CIF_ISP_LSC_TABLE_DATA_V12(gb_tbl[j], 0));
+ rkisp1_write(rkisp1, RKISP1_CIF_ISP_LSC_B_TABLE_DATA,
+ RKISP1_CIF_ISP_LSC_TABLE_DATA_V12(b_tbl[j], 0));
}
- isp_lsc_table_sel = (isp_lsc_status & RKISP1_CIF_ISP_LSC_ACTIVE_TABLE) ?
- RKISP1_CIF_ISP_LSC_TABLE_0 :
- RKISP1_CIF_ISP_LSC_TABLE_1;
- rkisp1_write(params->rkisp1, RKISP1_CIF_ISP_LSC_TABLE_SEL,
- isp_lsc_table_sel);
+
+ lsc_table_sel = lsc_status & RKISP1_CIF_ISP_LSC_ACTIVE_TABLE ?
+ RKISP1_CIF_ISP_LSC_TABLE_0 : RKISP1_CIF_ISP_LSC_TABLE_1;
+ rkisp1_write(rkisp1, RKISP1_CIF_ISP_LSC_TABLE_SEL, lsc_table_sel);
}
static void rkisp1_lsc_config(struct rkisp1_params *params,
const struct rkisp1_cif_isp_lsc_config *arg)
{
- unsigned int i, data;
- u32 lsc_ctrl;
+ struct rkisp1_device *rkisp1 = params->rkisp1;
+ u32 lsc_ctrl, data;
+ unsigned int i;
	/* The LSC block must be disabled while being configured; store its current state first. */
- lsc_ctrl = rkisp1_read(params->rkisp1, RKISP1_CIF_ISP_LSC_CTRL);
+ lsc_ctrl = rkisp1_read(rkisp1, RKISP1_CIF_ISP_LSC_CTRL);
rkisp1_param_clear_bits(params, RKISP1_CIF_ISP_LSC_CTRL,
RKISP1_CIF_ISP_LSC_CTRL_ENA);
params->ops->lsc_matrix_config(params, arg);
/* program x size tables */
data = RKISP1_CIF_ISP_LSC_SECT_SIZE(arg->x_size_tbl[i * 2],
arg->x_size_tbl[i * 2 + 1]);
- rkisp1_write(params->rkisp1,
- RKISP1_CIF_ISP_LSC_XSIZE_01 + i * 4, data);
+ rkisp1_write(rkisp1, RKISP1_CIF_ISP_LSC_XSIZE(i), data);
/* program x grad tables */
- data = RKISP1_CIF_ISP_LSC_SECT_SIZE(arg->x_grad_tbl[i * 2],
+ data = RKISP1_CIF_ISP_LSC_SECT_GRAD(arg->x_grad_tbl[i * 2],
arg->x_grad_tbl[i * 2 + 1]);
- rkisp1_write(params->rkisp1,
- RKISP1_CIF_ISP_LSC_XGRAD_01 + i * 4, data);
+ rkisp1_write(rkisp1, RKISP1_CIF_ISP_LSC_XGRAD(i), data);
/* program y size tables */
data = RKISP1_CIF_ISP_LSC_SECT_SIZE(arg->y_size_tbl[i * 2],
arg->y_size_tbl[i * 2 + 1]);
- rkisp1_write(params->rkisp1,
- RKISP1_CIF_ISP_LSC_YSIZE_01 + i * 4, data);
+ rkisp1_write(rkisp1, RKISP1_CIF_ISP_LSC_YSIZE(i), data);
/* program y grad tables */
- data = RKISP1_CIF_ISP_LSC_SECT_SIZE(arg->y_grad_tbl[i * 2],
+ data = RKISP1_CIF_ISP_LSC_SECT_GRAD(arg->y_grad_tbl[i * 2],
arg->y_grad_tbl[i * 2 + 1]);
- rkisp1_write(params->rkisp1,
- RKISP1_CIF_ISP_LSC_YGRAD_01 + i * 4, data);
+ rkisp1_write(rkisp1, RKISP1_CIF_ISP_LSC_YGRAD(i), data);
}
/* restore the lsc ctrl status */
- if (lsc_ctrl & RKISP1_CIF_ISP_LSC_CTRL_ENA) {
- rkisp1_param_set_bits(params,
- RKISP1_CIF_ISP_LSC_CTRL,
+ if (lsc_ctrl & RKISP1_CIF_ISP_LSC_CTRL_ENA)
+ rkisp1_param_set_bits(params, RKISP1_CIF_ISP_LSC_CTRL,
RKISP1_CIF_ISP_LSC_CTRL_ENA);
- } else {
- rkisp1_param_clear_bits(params,
- RKISP1_CIF_ISP_LSC_CTRL,
+ else
+ rkisp1_param_clear_bits(params, RKISP1_CIF_ISP_LSC_CTRL,
RKISP1_CIF_ISP_LSC_CTRL_ENA);
- }
}
/* ISP Filtering function */
}
}
-static void rkisp1_csm_config(struct rkisp1_params *params, bool full_range)
+static void rkisp1_csm_config(struct rkisp1_params *params)
{
- static const u16 full_range_coeff[] = {
- 0x0026, 0x004b, 0x000f,
- 0x01ea, 0x01d6, 0x0040,
- 0x0040, 0x01ca, 0x01f6
+ struct csm_coeffs {
+ u16 limited[9];
+ u16 full[9];
+ };
+ static const struct csm_coeffs rec601_coeffs = {
+ .limited = {
+ 0x0021, 0x0042, 0x000d,
+ 0x01ed, 0x01db, 0x0038,
+ 0x0038, 0x01d1, 0x01f7,
+ },
+ .full = {
+ 0x0026, 0x004b, 0x000f,
+ 0x01ea, 0x01d6, 0x0040,
+ 0x0040, 0x01ca, 0x01f6,
+ },
};
- static const u16 limited_range_coeff[] = {
- 0x0021, 0x0040, 0x000d,
- 0x01ed, 0x01db, 0x0038,
- 0x0038, 0x01d1, 0x01f7,
+ static const struct csm_coeffs rec709_coeffs = {
+ .limited = {
+ 0x0018, 0x0050, 0x0008,
+ 0x01f3, 0x01d5, 0x0038,
+ 0x0038, 0x01cd, 0x01fb,
+ },
+ .full = {
+ 0x001b, 0x005c, 0x0009,
+ 0x01f1, 0x01cf, 0x0040,
+ 0x0040, 0x01c6, 0x01fa,
+ },
};
+ static const struct csm_coeffs rec2020_coeffs = {
+ .limited = {
+ 0x001d, 0x004c, 0x0007,
+ 0x01f0, 0x01d8, 0x0038,
+ 0x0038, 0x01cd, 0x01fb,
+ },
+ .full = {
+ 0x0022, 0x0057, 0x0008,
+ 0x01ee, 0x01d2, 0x0040,
+ 0x0040, 0x01c5, 0x01fb,
+ },
+ };
+ static const struct csm_coeffs smpte240m_coeffs = {
+ .limited = {
+ 0x0018, 0x004f, 0x000a,
+ 0x01f3, 0x01d5, 0x0038,
+ 0x0038, 0x01ce, 0x01fa,
+ },
+ .full = {
+ 0x001b, 0x005a, 0x000b,
+ 0x01f1, 0x01cf, 0x0040,
+ 0x0040, 0x01c7, 0x01f9,
+ },
+ };
+
+ const struct csm_coeffs *coeffs;
+ const u16 *csm;
unsigned int i;
- if (full_range) {
- for (i = 0; i < ARRAY_SIZE(full_range_coeff); i++)
- rkisp1_write(params->rkisp1,
- RKISP1_CIF_ISP_CC_COEFF_0 + i * 4,
- full_range_coeff[i]);
+ switch (params->ycbcr_encoding) {
+ case V4L2_YCBCR_ENC_601:
+ default:
+ coeffs = &rec601_coeffs;
+ break;
+ case V4L2_YCBCR_ENC_709:
+ coeffs = &rec709_coeffs;
+ break;
+ case V4L2_YCBCR_ENC_BT2020:
+ coeffs = &rec2020_coeffs;
+ break;
+ case V4L2_YCBCR_ENC_SMPTE240M:
+ coeffs = &smpte240m_coeffs;
+ break;
+ }
+ if (params->quantization == V4L2_QUANTIZATION_FULL_RANGE) {
+ csm = coeffs->full;
rkisp1_param_set_bits(params, RKISP1_CIF_ISP_CTRL,
RKISP1_CIF_ISP_CTRL_ISP_CSM_Y_FULL_ENA |
RKISP1_CIF_ISP_CTRL_ISP_CSM_C_FULL_ENA);
} else {
- for (i = 0; i < ARRAY_SIZE(limited_range_coeff); i++)
- rkisp1_write(params->rkisp1,
- RKISP1_CIF_ISP_CC_COEFF_0 + i * 4,
- limited_range_coeff[i]);
-
+ csm = coeffs->limited;
rkisp1_param_clear_bits(params, RKISP1_CIF_ISP_CTRL,
RKISP1_CIF_ISP_CTRL_ISP_CSM_Y_FULL_ENA |
RKISP1_CIF_ISP_CTRL_ISP_CSM_C_FULL_ENA);
}
+
+ for (i = 0; i < 9; i++)
+ rkisp1_write(params->rkisp1, RKISP1_CIF_ISP_CC_COEFF_0 + i * 4,
+ csm[i]);
}
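
Editorial note: the coefficient tables are easier to sanity-check once decoded. The register values appear to be 9-bit two's-complement fixed-point numbers with 7 fractional bits (an assumption inferred from the values, not stated in the patch): 0x026 = 38/128 ≈ 0.297 and 0x1ea = (490 - 512)/128 ≈ -0.172, matching the Rec. 601 full-range coefficients 0.299 and -0.169 to within hardware rounding. A hypothetical helper, not part of the patch, to decode one coefficient to thousandths:

	#include <linux/bitops.h>
	#include <linux/types.h>

	/* Hypothetical: decode a CSM coefficient, assuming s9 with 7 fractional bits. */
	static int rkisp1_csm_coeff_to_milli(u16 coeff)
	{
		int val = sign_extend32(coeff, 8);	/* sign bit is bit 8 */

		return val * 1000 / 128;
	}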
/* ISP De-noise Pre-Filter(DPF) function */
if (module_ens & RKISP1_CIF_ISP_MODULE_DPCC)
rkisp1_param_set_bits(params,
RKISP1_CIF_ISP_DPCC_MODE,
- RKISP1_CIF_ISP_DPCC_ENA);
+ RKISP1_CIF_ISP_DPCC_MODE_DPCC_ENABLE);
else
rkisp1_param_clear_bits(params,
RKISP1_CIF_ISP_DPCC_MODE,
- RKISP1_CIF_ISP_DPCC_ENA);
+ RKISP1_CIF_ISP_DPCC_MODE_DPCC_ENABLE);
}
/* update bls config */
RKISP1_CIF_ISP_CTRL_ISP_GAMMA_IN_ENA);
}
- /* update lsc config */
- if (module_cfg_update & RKISP1_CIF_ISP_MODULE_LSC)
- rkisp1_lsc_config(params,
- &new_params->others.lsc_config);
-
- if (module_en_update & RKISP1_CIF_ISP_MODULE_LSC) {
- if (module_ens & RKISP1_CIF_ISP_MODULE_LSC)
- rkisp1_param_set_bits(params,
- RKISP1_CIF_ISP_LSC_CTRL,
- RKISP1_CIF_ISP_LSC_CTRL_ENA);
- else
- rkisp1_param_clear_bits(params,
- RKISP1_CIF_ISP_LSC_CTRL,
- RKISP1_CIF_ISP_LSC_CTRL_ENA);
- }
-
/* update awb gains */
if (module_cfg_update & RKISP1_CIF_ISP_MODULE_AWB_GAIN)
params->ops->awb_gain_config(params, &new_params->others.awb_gain_config);
}
}
+static void
+rkisp1_isp_isr_lsc_config(struct rkisp1_params *params,
+ const struct rkisp1_params_cfg *new_params)
+{
+ unsigned int module_en_update, module_cfg_update, module_ens;
+
+ module_en_update = new_params->module_en_update;
+ module_cfg_update = new_params->module_cfg_update;
+ module_ens = new_params->module_ens;
+
+ /* update lsc config */
+ if (module_cfg_update & RKISP1_CIF_ISP_MODULE_LSC)
+ rkisp1_lsc_config(params,
+ &new_params->others.lsc_config);
+
+ if (module_en_update & RKISP1_CIF_ISP_MODULE_LSC) {
+ if (module_ens & RKISP1_CIF_ISP_MODULE_LSC)
+ rkisp1_param_set_bits(params,
+ RKISP1_CIF_ISP_LSC_CTRL,
+ RKISP1_CIF_ISP_LSC_CTRL_ENA);
+ else
+ rkisp1_param_clear_bits(params,
+ RKISP1_CIF_ISP_LSC_CTRL,
+ RKISP1_CIF_ISP_LSC_CTRL_ENA);
+ }
+}
+
static void rkisp1_isp_isr_meas_config(struct rkisp1_params *params,
struct rkisp1_params_cfg *new_params)
{
}
}
-static void rkisp1_params_apply_params_cfg(struct rkisp1_params *params,
- unsigned int frame_sequence)
+static bool rkisp1_params_get_buffer(struct rkisp1_params *params,
+ struct rkisp1_buffer **buf,
+ struct rkisp1_params_cfg **cfg)
{
- struct rkisp1_params_cfg *new_params;
- struct rkisp1_buffer *cur_buf = NULL;
-
if (list_empty(&params->params))
- return;
-
- cur_buf = list_first_entry(&params->params,
- struct rkisp1_buffer, queue);
+ return false;
- new_params = (struct rkisp1_params_cfg *)vb2_plane_vaddr(&cur_buf->vb.vb2_buf, 0);
+ *buf = list_first_entry(&params->params, struct rkisp1_buffer, queue);
+ *cfg = vb2_plane_vaddr(&(*buf)->vb.vb2_buf, 0);
- rkisp1_isp_isr_other_config(params, new_params);
- rkisp1_isp_isr_meas_config(params, new_params);
-
- /* update shadow register immediately */
- rkisp1_param_set_bits(params, RKISP1_CIF_ISP_CTRL, RKISP1_CIF_ISP_CTRL_ISP_CFG_UPD);
+ return true;
+}
- list_del(&cur_buf->queue);
+static void rkisp1_params_complete_buffer(struct rkisp1_params *params,
+ struct rkisp1_buffer *buf,
+ unsigned int frame_sequence)
+{
+ list_del(&buf->queue);
- cur_buf->vb.sequence = frame_sequence;
- vb2_buffer_done(&cur_buf->vb.vb2_buf, VB2_BUF_STATE_DONE);
+ buf->vb.sequence = frame_sequence;
+ vb2_buffer_done(&buf->vb.vb2_buf, VB2_BUF_STATE_DONE);
}
void rkisp1_params_isr(struct rkisp1_device *rkisp1)
{
- /*
- * This isr is called when the ISR finishes processing a frame (RKISP1_CIF_ISP_FRAME).
- * Configurations performed here will be applied on the next frame.
- * Since frame_sequence is updated on the vertical sync signal, we should use
- * frame_sequence + 1 here to indicate to userspace on which frame these parameters
- * are being applied.
- */
- unsigned int frame_sequence = rkisp1->isp.frame_sequence + 1;
struct rkisp1_params *params = &rkisp1->params;
+ struct rkisp1_params_cfg *new_params;
+ struct rkisp1_buffer *cur_buf;
spin_lock(&params->config_lock);
- rkisp1_params_apply_params_cfg(params, frame_sequence);
+ if (!rkisp1_params_get_buffer(params, &cur_buf, &new_params))
+ goto unlock;
+
+ rkisp1_isp_isr_other_config(params, new_params);
+ rkisp1_isp_isr_lsc_config(params, new_params);
+ rkisp1_isp_isr_meas_config(params, new_params);
+
+ /* update shadow register immediately */
+ rkisp1_param_set_bits(params, RKISP1_CIF_ISP_CTRL,
+ RKISP1_CIF_ISP_CTRL_ISP_CFG_UPD);
+
+ /*
+ * This handler is called when the ISP finishes processing a frame
+ * (RKISP1_CIF_ISP_FRAME). Configurations performed here will be
+ * applied on the next frame. Since frame_sequence is updated on the
+ * vertical sync signal, we should use frame_sequence + 1 here to
+ * indicate to userspace on which frame these parameters are being
+ * applied.
+ */
+ rkisp1_params_complete_buffer(params, cur_buf,
+ rkisp1->isp.frame_sequence + 1);
+
+unlock:
spin_unlock(&params->config_lock);
}
};
-static void rkisp1_params_config_parameter(struct rkisp1_params *params)
+void rkisp1_params_pre_configure(struct rkisp1_params *params,
+ enum rkisp1_fmt_raw_pat_type bayer_pat,
+ enum v4l2_quantization quantization,
+ enum v4l2_ycbcr_encoding ycbcr_encoding)
{
struct rkisp1_cif_isp_hst_config hst = rkisp1_hst_params_default_config;
+ struct rkisp1_params_cfg *new_params;
+ struct rkisp1_buffer *cur_buf;
+
+ params->quantization = quantization;
+ params->ycbcr_encoding = ycbcr_encoding;
+ params->raw_type = bayer_pat;
params->ops->awb_meas_config(params, &rkisp1_awb_params_default_config);
params->ops->awb_meas_enable(params, &rkisp1_awb_params_default_config,
rkisp1_param_set_bits(params, RKISP1_CIF_ISP_HIST_PROP_V10,
rkisp1_hst_params_default_config.mode);
- /* set the range */
- if (params->quantization == V4L2_QUANTIZATION_FULL_RANGE)
- rkisp1_csm_config(params, true);
- else
- rkisp1_csm_config(params, false);
+ rkisp1_csm_config(params);
spin_lock_irq(&params->config_lock);
/* apply the first buffer if there is one already */
- rkisp1_params_apply_params_cfg(params, 0);
+ if (!rkisp1_params_get_buffer(params, &cur_buf, &new_params))
+ goto unlock;
+
+ rkisp1_isp_isr_other_config(params, new_params);
+ rkisp1_isp_isr_meas_config(params, new_params);
+
+ /* update shadow register immediately */
+ rkisp1_param_set_bits(params, RKISP1_CIF_ISP_CTRL,
+ RKISP1_CIF_ISP_CTRL_ISP_CFG_UPD);
+
+unlock:
spin_unlock_irq(&params->config_lock);
}
-void rkisp1_params_configure(struct rkisp1_params *params,
- enum rkisp1_fmt_raw_pat_type bayer_pat,
- enum v4l2_quantization quantization)
+void rkisp1_params_post_configure(struct rkisp1_params *params)
{
- params->quantization = quantization;
- params->raw_type = bayer_pat;
- rkisp1_params_config_parameter(params);
+ struct rkisp1_params_cfg *new_params;
+ struct rkisp1_buffer *cur_buf;
+
+ spin_lock_irq(&params->config_lock);
+
+ /*
+ * Apply LSC parameters from the first buffer (if any is already
+ * available). This must be done after the ISP gets started in the
+ * ISP8000Nano v18.02 (found in the i.MX8MP) as access to the LSC RAM
+ * is gated by the ISP_CTRL.ISP_ENABLE bit. As this initialization
+ * ordering doesn't affect other ISP versions negatively, do so
+ * unconditionally.
+ */
+
+ if (!rkisp1_params_get_buffer(params, &cur_buf, &new_params))
+ goto unlock;
+
+ rkisp1_isp_isr_lsc_config(params, new_params);
+
+ /* update shadow register immediately */
+ rkisp1_param_set_bits(params, RKISP1_CIF_ISP_CTRL,
+ RKISP1_CIF_ISP_CTRL_ISP_CFG_UPD);
+
+ rkisp1_params_complete_buffer(params, cur_buf, 0);
+
+unlock:
+ spin_unlock_irq(&params->config_lock);
}
/*
void rkisp1_params_disable(struct rkisp1_params *params)
{
rkisp1_param_clear_bits(params, RKISP1_CIF_ISP_DPCC_MODE,
- RKISP1_CIF_ISP_DPCC_ENA);
+ RKISP1_CIF_ISP_DPCC_MODE_DPCC_ENABLE);
rkisp1_param_clear_bits(params, RKISP1_CIF_ISP_LSC_CTRL,
RKISP1_CIF_ISP_LSC_CTRL_ENA);
rkisp1_param_clear_bits(params, RKISP1_CIF_ISP_BLS_CTRL,
(((v0) & 0x1FFF) | (((v1) & 0x1FFF) << 13))
#define RKISP1_CIF_ISP_LSC_SECT_SIZE(v0, v1) \
(((v0) & 0xFFF) | (((v1) & 0xFFF) << 16))
-#define RKISP1_CIF_ISP_LSC_GRAD_SIZE(v0, v1) \
+#define RKISP1_CIF_ISP_LSC_SECT_GRAD(v0, v1) \
(((v0) & 0xFFF) | (((v1) & 0xFFF) << 16))
/* LSC: ISP_LSC_TABLE_SEL */
#define RKISP1_CIF_ISP_CTRL_ISP_GAMMA_OUT_ENA_READ(x) (((x) >> 11) & 1)
/* DPCC */
-/* ISP_DPCC_MODE */
-#define RKISP1_CIF_ISP_DPCC_ENA BIT(0)
-#define RKISP1_CIF_ISP_DPCC_MODE_MAX 0x07
-#define RKISP1_CIF_ISP_DPCC_OUTPUTMODE_MAX 0x0F
-#define RKISP1_CIF_ISP_DPCC_SETUSE_MAX 0x0F
-#define RKISP1_CIF_ISP_DPCC_METHODS_SET_RESERVED 0xFFFFE000
-#define RKISP1_CIF_ISP_DPCC_LINE_THRESH_RESERVED 0xFFFF0000
-#define RKISP1_CIF_ISP_DPCC_LINE_MAD_FAC_RESERVED 0xFFFFC0C0
-#define RKISP1_CIF_ISP_DPCC_PG_FAC_RESERVED 0xFFFFC0C0
-#define RKISP1_CIF_ISP_DPCC_RND_THRESH_RESERVED 0xFFFF0000
-#define RKISP1_CIF_ISP_DPCC_RG_FAC_RESERVED 0xFFFFC0C0
-#define RKISP1_CIF_ISP_DPCC_RO_LIMIT_RESERVED 0xFFFFF000
-#define RKISP1_CIF_ISP_DPCC_RND_OFFS_RESERVED 0xFFFFF000
+#define RKISP1_CIF_ISP_DPCC_MODE_DPCC_ENABLE BIT(0)
+#define RKISP1_CIF_ISP_DPCC_MODE_GRAYSCALE_MODE BIT(1)
+#define RKISP1_CIF_ISP_DPCC_OUTPUT_MODE_MASK GENMASK(3, 0)
+#define RKISP1_CIF_ISP_DPCC_SET_USE_MASK GENMASK(3, 0)
+#define RKISP1_CIF_ISP_DPCC_METHODS_SET_MASK 0x00001f1f
+#define RKISP1_CIF_ISP_DPCC_LINE_THRESH_MASK 0x0000ffff
+#define RKISP1_CIF_ISP_DPCC_LINE_MAD_FAC_MASK 0x00003f3f
+#define RKISP1_CIF_ISP_DPCC_PG_FAC_MASK 0x00003f3f
+#define RKISP1_CIF_ISP_DPCC_RND_THRESH_MASK 0x0000ffff
+#define RKISP1_CIF_ISP_DPCC_RG_FAC_MASK 0x00003f3f
+#define RKISP1_CIF_ISP_DPCC_RO_LIMIT_MASK 0x00000fff
+#define RKISP1_CIF_ISP_DPCC_RND_OFFS_MASK 0x00000fff
/* BLS */
/* ISP_BLS_CTRL */
#define RKISP1_CIF_ISP_LSC_GR_TABLE_DATA (RKISP1_CIF_ISP_LSC_BASE + 0x00000018)
#define RKISP1_CIF_ISP_LSC_B_TABLE_DATA (RKISP1_CIF_ISP_LSC_BASE + 0x0000001C)
#define RKISP1_CIF_ISP_LSC_GB_TABLE_DATA (RKISP1_CIF_ISP_LSC_BASE + 0x00000020)
-#define RKISP1_CIF_ISP_LSC_XGRAD_01 (RKISP1_CIF_ISP_LSC_BASE + 0x00000024)
-#define RKISP1_CIF_ISP_LSC_XGRAD_23 (RKISP1_CIF_ISP_LSC_BASE + 0x00000028)
-#define RKISP1_CIF_ISP_LSC_XGRAD_45 (RKISP1_CIF_ISP_LSC_BASE + 0x0000002C)
-#define RKISP1_CIF_ISP_LSC_XGRAD_67 (RKISP1_CIF_ISP_LSC_BASE + 0x00000030)
-#define RKISP1_CIF_ISP_LSC_YGRAD_01 (RKISP1_CIF_ISP_LSC_BASE + 0x00000034)
-#define RKISP1_CIF_ISP_LSC_YGRAD_23 (RKISP1_CIF_ISP_LSC_BASE + 0x00000038)
-#define RKISP1_CIF_ISP_LSC_YGRAD_45 (RKISP1_CIF_ISP_LSC_BASE + 0x0000003C)
-#define RKISP1_CIF_ISP_LSC_YGRAD_67 (RKISP1_CIF_ISP_LSC_BASE + 0x00000040)
-#define RKISP1_CIF_ISP_LSC_XSIZE_01 (RKISP1_CIF_ISP_LSC_BASE + 0x00000044)
-#define RKISP1_CIF_ISP_LSC_XSIZE_23 (RKISP1_CIF_ISP_LSC_BASE + 0x00000048)
-#define RKISP1_CIF_ISP_LSC_XSIZE_45 (RKISP1_CIF_ISP_LSC_BASE + 0x0000004C)
-#define RKISP1_CIF_ISP_LSC_XSIZE_67 (RKISP1_CIF_ISP_LSC_BASE + 0x00000050)
-#define RKISP1_CIF_ISP_LSC_YSIZE_01 (RKISP1_CIF_ISP_LSC_BASE + 0x00000054)
-#define RKISP1_CIF_ISP_LSC_YSIZE_23 (RKISP1_CIF_ISP_LSC_BASE + 0x00000058)
-#define RKISP1_CIF_ISP_LSC_YSIZE_45 (RKISP1_CIF_ISP_LSC_BASE + 0x0000005C)
-#define RKISP1_CIF_ISP_LSC_YSIZE_67 (RKISP1_CIF_ISP_LSC_BASE + 0x00000060)
+#define RKISP1_CIF_ISP_LSC_XGRAD(n) (RKISP1_CIF_ISP_LSC_BASE + 0x00000024 + (n) * 4)
+#define RKISP1_CIF_ISP_LSC_YGRAD(n) (RKISP1_CIF_ISP_LSC_BASE + 0x00000034 + (n) * 4)
+#define RKISP1_CIF_ISP_LSC_XSIZE(n) (RKISP1_CIF_ISP_LSC_BASE + 0x00000044 + (n) * 4)
+#define RKISP1_CIF_ISP_LSC_YSIZE(n) (RKISP1_CIF_ISP_LSC_BASE + 0x00000054 + (n) * 4)
#define RKISP1_CIF_ISP_LSC_TABLE_SEL (RKISP1_CIF_ISP_LSC_BASE + 0x00000064)
#define RKISP1_CIF_ISP_LSC_STATUS (RKISP1_CIF_ISP_LSC_BASE + 0x00000068)
sink_fmt->height = RKISP1_DEFAULT_HEIGHT;
sink_fmt->field = V4L2_FIELD_NONE;
sink_fmt->code = RKISP1_DEF_FMT;
+ sink_fmt->colorspace = V4L2_COLORSPACE_SRGB;
+ sink_fmt->xfer_func = V4L2_XFER_FUNC_SRGB;
+ sink_fmt->ycbcr_enc = V4L2_YCBCR_ENC_601;
+ sink_fmt->quantization = V4L2_QUANTIZATION_LIM_RANGE;
sink_crop = v4l2_subdev_get_try_crop(sd, sd_state,
RKISP1_RSZ_PAD_SINK);
const struct rkisp1_mbus_info *mbus_info;
struct v4l2_mbus_framefmt *sink_fmt, *src_fmt;
struct v4l2_rect *sink_crop;
+ bool is_yuv;
sink_fmt = rkisp1_rsz_get_pad_fmt(rsz, sd_state, RKISP1_RSZ_PAD_SINK,
which);
if (which == V4L2_SUBDEV_FORMAT_ACTIVE)
rsz->pixel_enc = mbus_info->pixel_enc;
- /* Propagete to source pad */
- src_fmt->code = sink_fmt->code;
-
sink_fmt->width = clamp_t(u32, format->width,
RKISP1_ISP_MIN_WIDTH,
RKISP1_ISP_MAX_WIDTH);
RKISP1_ISP_MIN_HEIGHT,
RKISP1_ISP_MAX_HEIGHT);
+ /*
+ * Adjust the color space fields. Accept any color primaries and
+ * transfer function for both YUV and Bayer. For YUV any YCbCr encoding
+ * and quantization range are also accepted. For Bayer formats, the YCbCr
+ * encoding isn't applicable, and the quantization range can only be
+ * full.
+ */
+ is_yuv = mbus_info->pixel_enc == V4L2_PIXEL_ENC_YUV;
+
+ sink_fmt->colorspace = format->colorspace ? :
+ (is_yuv ? V4L2_COLORSPACE_SRGB :
+ V4L2_COLORSPACE_RAW);
+ sink_fmt->xfer_func = format->xfer_func ? :
+ V4L2_MAP_XFER_FUNC_DEFAULT(sink_fmt->colorspace);
+ if (is_yuv) {
+ sink_fmt->ycbcr_enc = format->ycbcr_enc ? :
+ V4L2_MAP_YCBCR_ENC_DEFAULT(sink_fmt->colorspace);
+ sink_fmt->quantization = format->quantization ? :
+ V4L2_MAP_QUANTIZATION_DEFAULT(false, sink_fmt->colorspace,
+ sink_fmt->ycbcr_enc);
+ } else {
+ /*
+ * The YCbCr encoding isn't applicable for non-YUV formats, but
+ * V4L2 has no "no encoding" value. Hardcode it to Rec. 601, it
+ * should be ignored by userspace.
+ */
+ sink_fmt->ycbcr_enc = V4L2_YCBCR_ENC_601;
+ sink_fmt->quantization = V4L2_QUANTIZATION_FULL_RANGE;
+ }
+
*format = *sink_fmt;
+ /* Propagate the media bus code and color space to the source pad. */
+ src_fmt->code = sink_fmt->code;
+ src_fmt->colorspace = sink_fmt->colorspace;
+ src_fmt->xfer_func = sink_fmt->xfer_func;
+ src_fmt->ycbcr_enc = sink_fmt->ycbcr_enc;
+ src_fmt->quantization = sink_fmt->quantization;
+
/* Update sink crop */
rkisp1_rsz_set_sink_crop(rsz, sd_state, sink_crop, which);
}
mutex_lock(&fimc->lock);
if (close && vc->streaming) {
- media_pipeline_stop(&vc->ve.vdev.entity);
+ video_device_pipeline_stop(&vc->ve.vdev);
vc->streaming = false;
}
{
struct fimc_dev *fimc = video_drvdata(file);
struct fimc_vid_cap *vc = &fimc->vid_cap;
- struct media_entity *entity = &vc->ve.vdev.entity;
struct fimc_source_info *si = NULL;
struct v4l2_subdev *sd;
int ret;
if (fimc_capture_active(fimc))
return -EBUSY;
- ret = media_pipeline_start(entity, &vc->ve.pipe->mp);
+ ret = video_device_pipeline_start(&vc->ve.vdev, &vc->ve.pipe->mp);
if (ret < 0)
return ret;
}
err_p_stop:
- media_pipeline_stop(entity);
+ video_device_pipeline_stop(&vc->ve.vdev);
return ret;
}
return ret;
if (vc->streaming) {
- media_pipeline_stop(&vc->ve.vdev.entity);
+ video_device_pipeline_stop(&vc->ve.vdev);
vc->streaming = false;
}
is_singular_file = v4l2_fh_is_singular_file(file);
if (is_singular_file && ivc->streaming) {
- media_pipeline_stop(entity);
+ video_device_pipeline_stop(&ivc->ve.vdev);
ivc->streaming = 0;
}
{
struct fimc_isp *isp = video_drvdata(file);
struct exynos_video_entity *ve = &isp->video_capture.ve;
- struct media_entity *me = &ve->vdev.entity;
int ret;
- ret = media_pipeline_start(me, &ve->pipe->mp);
+ ret = video_device_pipeline_start(&ve->vdev, &ve->pipe->mp);
if (ret < 0)
return ret;
isp->video_capture.streaming = 1;
return 0;
p_stop:
- media_pipeline_stop(me);
+ video_device_pipeline_stop(&ve->vdev);
return ret;
}
if (ret < 0)
return ret;
- media_pipeline_stop(&video->ve.vdev.entity);
+ video_device_pipeline_stop(&video->ve.vdev);
video->streaming = 0;
return 0;
}
if (v4l2_fh_is_singular_file(file) &&
atomic_read(&fimc->out_path) == FIMC_IO_DMA) {
if (fimc->streaming) {
- media_pipeline_stop(entity);
+ video_device_pipeline_stop(&fimc->ve.vdev);
fimc->streaming = false;
}
fimc_lite_stop_capture(fimc, false);
enum v4l2_buf_type type)
{
struct fimc_lite *fimc = video_drvdata(file);
- struct media_entity *entity = &fimc->ve.vdev.entity;
int ret;
if (fimc_lite_active(fimc))
return -EBUSY;
- ret = media_pipeline_start(entity, &fimc->ve.pipe->mp);
+ ret = video_device_pipeline_start(&fimc->ve.vdev, &fimc->ve.pipe->mp);
if (ret < 0)
return ret;
}
err_p_stop:
- media_pipeline_stop(entity);
+ video_device_pipeline_stop(&fimc->ve.vdev);
return 0;
}
if (ret < 0)
return ret;
- media_pipeline_stop(&fimc->ve.vdev.entity);
+ video_device_pipeline_stop(&fimc->ve.vdev);
fimc->streaming = false;
return 0;
}
if (s3c_vp_active(vp))
return 0;
- ret = media_pipeline_start(sensor, camif->m_pipeline);
+ ret = media_pipeline_start(sensor->pads, camif->m_pipeline);
if (ret < 0)
return ret;
ret = camif_pipeline_validate(camif);
if (ret < 0) {
- media_pipeline_stop(sensor);
+ media_pipeline_stop(sensor->pads);
return ret;
}
ret = vb2_streamoff(&vp->vb_queue, type);
if (ret == 0)
- media_pipeline_stop(&camif->sensor.sd->entity);
+ media_pipeline_stop(camif->sensor.sd->entity.pads);
return ret;
}
goto err_unlocked;
}
- ret = media_pipeline_start(&dcmi->vdev->entity, &dcmi->pipeline);
+ ret = video_device_pipeline_start(dcmi->vdev, &dcmi->pipeline);
if (ret < 0) {
dev_err(dcmi->dev, "%s: Failed to start streaming, media pipeline start error (%d)\n",
__func__, ret);
dcmi_pipeline_stop(dcmi);
err_media_pipeline_stop:
- media_pipeline_stop(&dcmi->vdev->entity);
+ video_device_pipeline_stop(dcmi->vdev);
err_pm_put:
pm_runtime_put(dcmi->dev);
dcmi_pipeline_stop(dcmi);
- media_pipeline_stop(&dcmi->vdev->entity);
+ video_device_pipeline_stop(dcmi->vdev);
spin_lock_irq(&dcmi->irqlock);
config VIDEO_SUN4I_CSI
tristate "Allwinner A10 CMOS Sensor Interface Support"
depends on V4L_PLATFORM_DRIVERS
- depends on VIDEO_DEV && COMMON_CLK && HAS_DMA
+ depends on VIDEO_DEV && COMMON_CLK && RESET_CONTROLLER && HAS_DMA
depends on ARCH_SUNXI || COMPILE_TEST
select MEDIA_CONTROLLER
select VIDEO_V4L2_SUBDEV_API
goto err_clear_dma_queue;
}
- ret = media_pipeline_start(&csi->vdev.entity, &csi->vdev.pipe);
+ ret = video_device_pipeline_alloc_start(&csi->vdev);
if (ret < 0)
goto err_free_scratch_buffer;
sun4i_csi_capture_stop(csi);
err_disable_pipeline:
- media_pipeline_stop(&csi->vdev.entity);
+ video_device_pipeline_stop(&csi->vdev);
err_free_scratch_buffer:
dma_free_coherent(csi->dev, csi->scratch.size, csi->scratch.vaddr,
return_all_buffers(csi, VB2_BUF_STATE_ERROR);
spin_unlock_irqrestore(&csi->qlock, flags);
- media_pipeline_stop(&csi->vdev.entity);
+ video_device_pipeline_stop(&csi->vdev);
dma_free_coherent(csi->dev, csi->scratch.size, csi->scratch.vaddr,
csi->scratch.paddr);
# SPDX-License-Identifier: GPL-2.0-only
config VIDEO_SUN6I_CSI
- tristate "Allwinner V3s Camera Sensor Interface driver"
- depends on V4L_PLATFORM_DRIVERS
- depends on VIDEO_DEV && COMMON_CLK && HAS_DMA
+ tristate "Allwinner A31 Camera Sensor Interface (CSI) Driver"
+ depends on V4L_PLATFORM_DRIVERS && VIDEO_DEV
depends on ARCH_SUNXI || COMPILE_TEST
+ depends on PM && COMMON_CLK && RESET_CONTROLLER && HAS_DMA
select MEDIA_CONTROLLER
select VIDEO_V4L2_SUBDEV_API
select VIDEOBUF2_DMA_CONTIG
- select REGMAP_MMIO
select V4L2_FWNODE
+ select REGMAP_MMIO
help
- Support for the Allwinner Camera Sensor Interface Controller on V3s.
+ Support for the Allwinner A31 Camera Sensor Interface (CSI)
+ controller, also found on other platforms such as the A83T, H3,
+ V3/V3s or A64.
#include <linux/sched.h>
#include <linux/sizes.h>
#include <linux/slab.h>
+#include <media/v4l2-mc.h>
#include "sun6i_csi.h"
#include "sun6i_csi_reg.h"
-#define MODULE_NAME "sun6i-csi"
-
-struct sun6i_csi_dev {
- struct sun6i_csi csi;
- struct device *dev;
-
- struct regmap *regmap;
- struct clk *clk_mod;
- struct clk *clk_ram;
- struct reset_control *rstc_bus;
-
- int planar_offset[3];
-};
-
-static inline struct sun6i_csi_dev *sun6i_csi_to_dev(struct sun6i_csi *csi)
-{
- return container_of(csi, struct sun6i_csi_dev, csi);
-}
+/* Helpers */
/* TODO add 10&12 bit YUV, RGB support */
-bool sun6i_csi_is_format_supported(struct sun6i_csi *csi,
+bool sun6i_csi_is_format_supported(struct sun6i_csi_device *csi_dev,
u32 pixformat, u32 mbus_code)
{
- struct sun6i_csi_dev *sdev = sun6i_csi_to_dev(csi);
+ struct sun6i_csi_v4l2 *v4l2 = &csi_dev->v4l2;
/*
* Some video receivers have the ability to be compatible with
* 8bit and 16bit bus width.
* Identify the media bus format from device tree.
*/
- if ((sdev->csi.v4l2_ep.bus_type == V4L2_MBUS_PARALLEL
- || sdev->csi.v4l2_ep.bus_type == V4L2_MBUS_BT656)
- && sdev->csi.v4l2_ep.bus.parallel.bus_width == 16) {
+ if ((v4l2->v4l2_ep.bus_type == V4L2_MBUS_PARALLEL
+ || v4l2->v4l2_ep.bus_type == V4L2_MBUS_BT656)
+ && v4l2->v4l2_ep.bus.parallel.bus_width == 16) {
switch (pixformat) {
case V4L2_PIX_FMT_NV12_16L16:
case V4L2_PIX_FMT_NV12:
case MEDIA_BUS_FMT_YVYU8_1X16:
return true;
default:
- dev_dbg(sdev->dev, "Unsupported mbus code: 0x%x\n",
+ dev_dbg(csi_dev->dev,
+ "Unsupported mbus code: 0x%x\n",
mbus_code);
break;
}
break;
default:
- dev_dbg(sdev->dev, "Unsupported pixformat: 0x%x\n",
+ dev_dbg(csi_dev->dev, "Unsupported pixformat: 0x%x\n",
pixformat);
break;
}
case MEDIA_BUS_FMT_YVYU8_2X8:
return true;
default:
- dev_dbg(sdev->dev, "Unsupported mbus code: 0x%x\n",
+ dev_dbg(csi_dev->dev, "Unsupported mbus code: 0x%x\n",
mbus_code);
break;
}
return (mbus_code == MEDIA_BUS_FMT_JPEG_1X8);
default:
- dev_dbg(sdev->dev, "Unsupported pixformat: 0x%x\n", pixformat);
+ dev_dbg(csi_dev->dev, "Unsupported pixformat: 0x%x\n",
+ pixformat);
break;
}
return false;
}
-int sun6i_csi_set_power(struct sun6i_csi *csi, bool enable)
+int sun6i_csi_set_power(struct sun6i_csi_device *csi_dev, bool enable)
{
- struct sun6i_csi_dev *sdev = sun6i_csi_to_dev(csi);
- struct device *dev = sdev->dev;
- struct regmap *regmap = sdev->regmap;
+ struct device *dev = csi_dev->dev;
+ struct regmap *regmap = csi_dev->regmap;
int ret;
if (!enable) {
regmap_update_bits(regmap, CSI_EN_REG, CSI_EN_CSI_EN, 0);
+ pm_runtime_put(dev);
- clk_disable_unprepare(sdev->clk_ram);
- if (of_device_is_compatible(dev->of_node,
- "allwinner,sun50i-a64-csi"))
- clk_rate_exclusive_put(sdev->clk_mod);
- clk_disable_unprepare(sdev->clk_mod);
- reset_control_assert(sdev->rstc_bus);
return 0;
}
- ret = clk_prepare_enable(sdev->clk_mod);
- if (ret) {
- dev_err(sdev->dev, "Enable csi clk err %d\n", ret);
+ ret = pm_runtime_resume_and_get(dev);
+ if (ret < 0)
return ret;
- }
-
- if (of_device_is_compatible(dev->of_node, "allwinner,sun50i-a64-csi"))
- clk_set_rate_exclusive(sdev->clk_mod, 300000000);
-
- ret = clk_prepare_enable(sdev->clk_ram);
- if (ret) {
- dev_err(sdev->dev, "Enable clk_dram_csi clk err %d\n", ret);
- goto clk_mod_disable;
- }
-
- ret = reset_control_deassert(sdev->rstc_bus);
- if (ret) {
- dev_err(sdev->dev, "reset err %d\n", ret);
- goto clk_ram_disable;
- }
regmap_update_bits(regmap, CSI_EN_REG, CSI_EN_CSI_EN, CSI_EN_CSI_EN);
return 0;
-
-clk_ram_disable:
- clk_disable_unprepare(sdev->clk_ram);
-clk_mod_disable:
- if (of_device_is_compatible(dev->of_node, "allwinner,sun50i-a64-csi"))
- clk_rate_exclusive_put(sdev->clk_mod);
- clk_disable_unprepare(sdev->clk_mod);
- return ret;
}
-static enum csi_input_fmt get_csi_input_format(struct sun6i_csi_dev *sdev,
+static enum csi_input_fmt get_csi_input_format(struct sun6i_csi_device *csi_dev,
u32 mbus_code, u32 pixformat)
{
/* non-YUV */
}
/* not support YUV420 input format yet */
- dev_dbg(sdev->dev, "Select YUV422 as default input format of CSI.\n");
+ dev_dbg(csi_dev->dev, "Select YUV422 as default input format of CSI.\n");
return CSI_INPUT_FORMAT_YUV422;
}
-static enum csi_output_fmt get_csi_output_format(struct sun6i_csi_dev *sdev,
- u32 pixformat, u32 field)
+static enum csi_output_fmt
+get_csi_output_format(struct sun6i_csi_device *csi_dev, u32 pixformat,
+ u32 field)
{
bool buf_interlaced = false;
return buf_interlaced ? CSI_FRAME_RAW_8 : CSI_FIELD_RAW_8;
default:
- dev_warn(sdev->dev, "Unsupported pixformat: 0x%x\n", pixformat);
+ dev_warn(csi_dev->dev, "Unsupported pixformat: 0x%x\n", pixformat);
break;
}
return CSI_FIELD_RAW_8;
}
-static enum csi_input_seq get_csi_input_seq(struct sun6i_csi_dev *sdev,
+static enum csi_input_seq get_csi_input_seq(struct sun6i_csi_device *csi_dev,
u32 mbus_code, u32 pixformat)
{
/* Input sequence does not apply to non-YUV formats */
case MEDIA_BUS_FMT_YVYU8_2X8:
return CSI_INPUT_SEQ_YVYU;
default:
- dev_warn(sdev->dev, "Unsupported mbus code: 0x%x\n",
+ dev_warn(csi_dev->dev, "Unsupported mbus code: 0x%x\n",
mbus_code);
break;
}
case MEDIA_BUS_FMT_YVYU8_2X8:
return CSI_INPUT_SEQ_YUYV;
default:
- dev_warn(sdev->dev, "Unsupported mbus code: 0x%x\n",
+ dev_warn(csi_dev->dev, "Unsupported mbus code: 0x%x\n",
mbus_code);
break;
}
return CSI_INPUT_SEQ_YUYV;
default:
- dev_warn(sdev->dev, "Unsupported pixformat: 0x%x, defaulting to YUYV\n",
+ dev_warn(csi_dev->dev, "Unsupported pixformat: 0x%x, defaulting to YUYV\n",
pixformat);
break;
}
return CSI_INPUT_SEQ_YUYV;
}
-static void sun6i_csi_setup_bus(struct sun6i_csi_dev *sdev)
+static void sun6i_csi_setup_bus(struct sun6i_csi_device *csi_dev)
{
- struct v4l2_fwnode_endpoint *endpoint = &sdev->csi.v4l2_ep;
- struct sun6i_csi *csi = &sdev->csi;
+ struct v4l2_fwnode_endpoint *endpoint = &csi_dev->v4l2.v4l2_ep;
+ struct sun6i_csi_config *config = &csi_dev->config;
unsigned char bus_width;
u32 flags;
u32 cfg;
bool input_interlaced = false;
- if (csi->config.field == V4L2_FIELD_INTERLACED
- || csi->config.field == V4L2_FIELD_INTERLACED_TB
- || csi->config.field == V4L2_FIELD_INTERLACED_BT)
+ if (config->field == V4L2_FIELD_INTERLACED
+ || config->field == V4L2_FIELD_INTERLACED_TB
+ || config->field == V4L2_FIELD_INTERLACED_BT)
input_interlaced = true;
bus_width = endpoint->bus.parallel.bus_width;
- regmap_read(sdev->regmap, CSI_IF_CFG_REG, &cfg);
+ regmap_read(csi_dev->regmap, CSI_IF_CFG_REG, &cfg);
cfg &= ~(CSI_IF_CFG_CSI_IF_MASK | CSI_IF_CFG_MIPI_IF_MASK |
CSI_IF_CFG_IF_DATA_WIDTH_MASK |
cfg |= CSI_IF_CFG_CLK_POL_FALLING_EDGE;
break;
default:
- dev_warn(sdev->dev, "Unsupported bus type: %d\n",
+ dev_warn(csi_dev->dev, "Unsupported bus type: %d\n",
endpoint->bus_type);
break;
}
case 16: /* No need to configure DATA_WIDTH for 16bit */
break;
default:
- dev_warn(sdev->dev, "Unsupported bus width: %u\n", bus_width);
+ dev_warn(csi_dev->dev, "Unsupported bus width: %u\n", bus_width);
break;
}
- regmap_write(sdev->regmap, CSI_IF_CFG_REG, cfg);
+ regmap_write(csi_dev->regmap, CSI_IF_CFG_REG, cfg);
}
-static void sun6i_csi_set_format(struct sun6i_csi_dev *sdev)
+static void sun6i_csi_set_format(struct sun6i_csi_device *csi_dev)
{
- struct sun6i_csi *csi = &sdev->csi;
+ struct sun6i_csi_config *config = &csi_dev->config;
u32 cfg;
u32 val;
- regmap_read(sdev->regmap, CSI_CH_CFG_REG, &cfg);
+ regmap_read(csi_dev->regmap, CSI_CH_CFG_REG, &cfg);
cfg &= ~(CSI_CH_CFG_INPUT_FMT_MASK |
CSI_CH_CFG_OUTPUT_FMT_MASK | CSI_CH_CFG_VFLIP_EN |
CSI_CH_CFG_HFLIP_EN | CSI_CH_CFG_FIELD_SEL_MASK |
CSI_CH_CFG_INPUT_SEQ_MASK);
- val = get_csi_input_format(sdev, csi->config.code,
- csi->config.pixelformat);
+ val = get_csi_input_format(csi_dev, config->code,
+ config->pixelformat);
cfg |= CSI_CH_CFG_INPUT_FMT(val);
- val = get_csi_output_format(sdev, csi->config.pixelformat,
- csi->config.field);
+ val = get_csi_output_format(csi_dev, config->pixelformat,
+ config->field);
cfg |= CSI_CH_CFG_OUTPUT_FMT(val);
- val = get_csi_input_seq(sdev, csi->config.code,
- csi->config.pixelformat);
+ val = get_csi_input_seq(csi_dev, config->code,
+ config->pixelformat);
cfg |= CSI_CH_CFG_INPUT_SEQ(val);
- if (csi->config.field == V4L2_FIELD_TOP)
+ if (config->field == V4L2_FIELD_TOP)
cfg |= CSI_CH_CFG_FIELD_SEL_FIELD0;
- else if (csi->config.field == V4L2_FIELD_BOTTOM)
+ else if (config->field == V4L2_FIELD_BOTTOM)
cfg |= CSI_CH_CFG_FIELD_SEL_FIELD1;
else
cfg |= CSI_CH_CFG_FIELD_SEL_BOTH;
- regmap_write(sdev->regmap, CSI_CH_CFG_REG, cfg);
+ regmap_write(csi_dev->regmap, CSI_CH_CFG_REG, cfg);
}
-static void sun6i_csi_set_window(struct sun6i_csi_dev *sdev)
+static void sun6i_csi_set_window(struct sun6i_csi_device *csi_dev)
{
- struct sun6i_csi_config *config = &sdev->csi.config;
+ struct sun6i_csi_config *config = &csi_dev->config;
u32 bytesperline_y;
u32 bytesperline_c;
- int *planar_offset = sdev->planar_offset;
+ int *planar_offset = csi_dev->planar_offset;
u32 width = config->width;
u32 height = config->height;
u32 hor_len = width;
case V4L2_PIX_FMT_YVYU:
case V4L2_PIX_FMT_UYVY:
case V4L2_PIX_FMT_VYUY:
- dev_dbg(sdev->dev,
+ dev_dbg(csi_dev->dev,
"Horizontal length should be 2 times of width for packed YUV formats!\n");
hor_len = width * 2;
break;
break;
}
- regmap_write(sdev->regmap, CSI_CH_HSIZE_REG,
+ regmap_write(csi_dev->regmap, CSI_CH_HSIZE_REG,
CSI_CH_HSIZE_HOR_LEN(hor_len) |
CSI_CH_HSIZE_HOR_START(0));
- regmap_write(sdev->regmap, CSI_CH_VSIZE_REG,
+ regmap_write(csi_dev->regmap, CSI_CH_VSIZE_REG,
CSI_CH_VSIZE_VER_LEN(height) |
CSI_CH_VSIZE_VER_START(0));
bytesperline_c * height;
break;
default: /* raw */
- dev_dbg(sdev->dev,
+ dev_dbg(csi_dev->dev,
"Calculating pixelformat(0x%x)'s bytesperline as a packed format\n",
config->pixelformat);
bytesperline_y = (sun6i_csi_get_bpp(config->pixelformat) *
break;
}
- regmap_write(sdev->regmap, CSI_CH_BUF_LEN_REG,
+ regmap_write(csi_dev->regmap, CSI_CH_BUF_LEN_REG,
CSI_CH_BUF_LEN_BUF_LEN_C(bytesperline_c) |
CSI_CH_BUF_LEN_BUF_LEN_Y(bytesperline_y));
}
-int sun6i_csi_update_config(struct sun6i_csi *csi,
+int sun6i_csi_update_config(struct sun6i_csi_device *csi_dev,
struct sun6i_csi_config *config)
{
- struct sun6i_csi_dev *sdev = sun6i_csi_to_dev(csi);
-
if (!config)
return -EINVAL;
- memcpy(&csi->config, config, sizeof(csi->config));
+ memcpy(&csi_dev->config, config, sizeof(csi_dev->config));
- sun6i_csi_setup_bus(sdev);
- sun6i_csi_set_format(sdev);
- sun6i_csi_set_window(sdev);
+ sun6i_csi_setup_bus(csi_dev);
+ sun6i_csi_set_format(csi_dev);
+ sun6i_csi_set_window(csi_dev);
return 0;
}
-void sun6i_csi_update_buf_addr(struct sun6i_csi *csi, dma_addr_t addr)
+void sun6i_csi_update_buf_addr(struct sun6i_csi_device *csi_dev,
+ dma_addr_t addr)
{
- struct sun6i_csi_dev *sdev = sun6i_csi_to_dev(csi);
-
- regmap_write(sdev->regmap, CSI_CH_F0_BUFA_REG,
- (addr + sdev->planar_offset[0]) >> 2);
- if (sdev->planar_offset[1] != -1)
- regmap_write(sdev->regmap, CSI_CH_F1_BUFA_REG,
- (addr + sdev->planar_offset[1]) >> 2);
- if (sdev->planar_offset[2] != -1)
- regmap_write(sdev->regmap, CSI_CH_F2_BUFA_REG,
- (addr + sdev->planar_offset[2]) >> 2);
+ regmap_write(csi_dev->regmap, CSI_CH_F0_BUFA_REG,
+ (addr + csi_dev->planar_offset[0]) >> 2);
+ if (csi_dev->planar_offset[1] != -1)
+ regmap_write(csi_dev->regmap, CSI_CH_F1_BUFA_REG,
+ (addr + csi_dev->planar_offset[1]) >> 2);
+ if (csi_dev->planar_offset[2] != -1)
+ regmap_write(csi_dev->regmap, CSI_CH_F2_BUFA_REG,
+ (addr + csi_dev->planar_offset[2]) >> 2);
}
-void sun6i_csi_set_stream(struct sun6i_csi *csi, bool enable)
+void sun6i_csi_set_stream(struct sun6i_csi_device *csi_dev, bool enable)
{
- struct sun6i_csi_dev *sdev = sun6i_csi_to_dev(csi);
- struct regmap *regmap = sdev->regmap;
+ struct regmap *regmap = csi_dev->regmap;
if (!enable) {
regmap_update_bits(regmap, CSI_CAP_REG, CSI_CAP_CH0_VCAP_ON, 0);
CSI_CAP_CH0_VCAP_ON);
}
-/* -----------------------------------------------------------------------------
- * Media Controller and V4L2
- */
-static int sun6i_csi_link_entity(struct sun6i_csi *csi,
+/* Media */
+
+static const struct media_device_ops sun6i_csi_media_ops = {
+ .link_notify = v4l2_pipeline_link_notify,
+};
+
+/* V4L2 */
+
+static int sun6i_csi_link_entity(struct sun6i_csi_device *csi_dev,
struct media_entity *entity,
struct fwnode_handle *fwnode)
{
ret = media_entity_get_fwnode_pad(entity, fwnode, MEDIA_PAD_FL_SOURCE);
if (ret < 0) {
- dev_err(csi->dev, "%s: no source pad in external entity %s\n",
- __func__, entity->name);
+ dev_err(csi_dev->dev,
+ "%s: no source pad in external entity %s\n", __func__,
+ entity->name);
return -EINVAL;
}
src_pad_index = ret;
- sink = &csi->video.vdev.entity;
- sink_pad = &csi->video.pad;
+ sink = &csi_dev->video.video_dev.entity;
+ sink_pad = &csi_dev->video.pad;
- dev_dbg(csi->dev, "creating %s:%u -> %s:%u link\n",
+ dev_dbg(csi_dev->dev, "creating %s:%u -> %s:%u link\n",
entity->name, src_pad_index, sink->name, sink_pad->index);
ret = media_create_pad_link(entity, src_pad_index, sink,
sink_pad->index,
MEDIA_LNK_FL_ENABLED |
MEDIA_LNK_FL_IMMUTABLE);
if (ret < 0) {
- dev_err(csi->dev, "failed to create %s:%u -> %s:%u link\n",
+ dev_err(csi_dev->dev, "failed to create %s:%u -> %s:%u link\n",
entity->name, src_pad_index,
sink->name, sink_pad->index);
return ret;
static int sun6i_subdev_notify_complete(struct v4l2_async_notifier *notifier)
{
- struct sun6i_csi *csi = container_of(notifier, struct sun6i_csi,
- notifier);
- struct v4l2_device *v4l2_dev = &csi->v4l2_dev;
+ struct sun6i_csi_device *csi_dev =
+ container_of(notifier, struct sun6i_csi_device,
+ v4l2.notifier);
+ struct sun6i_csi_v4l2 *v4l2 = &csi_dev->v4l2;
+ struct v4l2_device *v4l2_dev = &v4l2->v4l2_dev;
struct v4l2_subdev *sd;
int ret;
- dev_dbg(csi->dev, "notify complete, all subdevs registered\n");
+ dev_dbg(csi_dev->dev, "notify complete, all subdevs registered\n");
sd = list_first_entry(&v4l2_dev->subdevs, struct v4l2_subdev, list);
if (!sd)
return -EINVAL;
- ret = sun6i_csi_link_entity(csi, &sd->entity, sd->fwnode);
+ ret = sun6i_csi_link_entity(csi_dev, &sd->entity, sd->fwnode);
if (ret < 0)
return ret;
- ret = v4l2_device_register_subdev_nodes(&csi->v4l2_dev);
+ ret = v4l2_device_register_subdev_nodes(v4l2_dev);
if (ret < 0)
return ret;
- return media_device_register(&csi->media_dev);
+ return 0;
}
static const struct v4l2_async_notifier_operations sun6i_csi_async_ops = {
struct v4l2_fwnode_endpoint *vep,
struct v4l2_async_subdev *asd)
{
- struct sun6i_csi *csi = dev_get_drvdata(dev);
+ struct sun6i_csi_device *csi_dev = dev_get_drvdata(dev);
if (vep->base.port || vep->base.id) {
dev_warn(dev, "Only support a single port with one endpoint\n");
switch (vep->bus_type) {
case V4L2_MBUS_PARALLEL:
case V4L2_MBUS_BT656:
- csi->v4l2_ep = *vep;
+ csi_dev->v4l2.v4l2_ep = *vep;
return 0;
default:
dev_err(dev, "Unsupported media bus type\n");
}
}
-static void sun6i_csi_v4l2_cleanup(struct sun6i_csi *csi)
-{
- media_device_unregister(&csi->media_dev);
- v4l2_async_nf_unregister(&csi->notifier);
- v4l2_async_nf_cleanup(&csi->notifier);
- sun6i_video_cleanup(&csi->video);
- v4l2_device_unregister(&csi->v4l2_dev);
- v4l2_ctrl_handler_free(&csi->ctrl_handler);
- media_device_cleanup(&csi->media_dev);
-}
-
-static int sun6i_csi_v4l2_init(struct sun6i_csi *csi)
+static int sun6i_csi_v4l2_setup(struct sun6i_csi_device *csi_dev)
{
+ struct sun6i_csi_v4l2 *v4l2 = &csi_dev->v4l2;
+ struct media_device *media_dev = &v4l2->media_dev;
+ struct v4l2_device *v4l2_dev = &v4l2->v4l2_dev;
+ struct v4l2_async_notifier *notifier = &v4l2->notifier;
+ struct device *dev = csi_dev->dev;
int ret;
- csi->media_dev.dev = csi->dev;
- strscpy(csi->media_dev.model, "Allwinner Video Capture Device",
- sizeof(csi->media_dev.model));
- csi->media_dev.hw_revision = 0;
+ /* Media Device */
+
+ strscpy(media_dev->model, SUN6I_CSI_DESCRIPTION,
+ sizeof(media_dev->model));
+ media_dev->hw_revision = 0;
+ media_dev->ops = &sun6i_csi_media_ops;
+ media_dev->dev = dev;
- media_device_init(&csi->media_dev);
- v4l2_async_nf_init(&csi->notifier);
+ media_device_init(media_dev);
- ret = v4l2_ctrl_handler_init(&csi->ctrl_handler, 0);
+ ret = media_device_register(media_dev);
if (ret) {
- dev_err(csi->dev, "V4L2 controls handler init failed (%d)\n",
- ret);
- goto clean_media;
+ dev_err(dev, "failed to register media device: %d\n", ret);
+ goto error_media;
}
- csi->v4l2_dev.mdev = &csi->media_dev;
- csi->v4l2_dev.ctrl_handler = &csi->ctrl_handler;
- ret = v4l2_device_register(csi->dev, &csi->v4l2_dev);
+ /* V4L2 Device */
+
+ v4l2_dev->mdev = media_dev;
+
+ ret = v4l2_device_register(dev, v4l2_dev);
if (ret) {
- dev_err(csi->dev, "V4L2 device registration failed (%d)\n",
- ret);
- goto free_ctrl;
+ dev_err(dev, "failed to register v4l2 device: %d\n", ret);
+ goto error_media;
}
- ret = sun6i_video_init(&csi->video, csi, "sun6i-csi");
+ /* Video */
+
+ ret = sun6i_video_setup(csi_dev);
if (ret)
- goto unreg_v4l2;
+ goto error_v4l2_device;
- ret = v4l2_async_nf_parse_fwnode_endpoints(csi->dev,
- &csi->notifier,
+ /* V4L2 Async */
+
+ v4l2_async_nf_init(notifier);
+ notifier->ops = &sun6i_csi_async_ops;
+
+ ret = v4l2_async_nf_parse_fwnode_endpoints(dev, notifier,
sizeof(struct
v4l2_async_subdev),
sun6i_csi_fwnode_parse);
if (ret)
- goto clean_video;
+ goto error_video;
- csi->notifier.ops = &sun6i_csi_async_ops;
-
- ret = v4l2_async_nf_register(&csi->v4l2_dev, &csi->notifier);
+ ret = v4l2_async_nf_register(v4l2_dev, notifier);
if (ret) {
- dev_err(csi->dev, "notifier registration failed\n");
- goto clean_video;
+ dev_err(dev, "failed to register v4l2 async notifier: %d\n",
+ ret);
+ goto error_v4l2_async_notifier;
}
return 0;
-clean_video:
- sun6i_video_cleanup(&csi->video);
-unreg_v4l2:
- v4l2_device_unregister(&csi->v4l2_dev);
-free_ctrl:
- v4l2_ctrl_handler_free(&csi->ctrl_handler);
-clean_media:
- v4l2_async_nf_cleanup(&csi->notifier);
- media_device_cleanup(&csi->media_dev);
+error_v4l2_async_notifier:
+ v4l2_async_nf_cleanup(notifier);
+
+error_video:
+ sun6i_video_cleanup(csi_dev);
+
+error_v4l2_device:
+ v4l2_device_unregister(&v4l2->v4l2_dev);
+
+error_media:
+ media_device_unregister(media_dev);
+ media_device_cleanup(media_dev);
return ret;
}
-/* -----------------------------------------------------------------------------
- * Resources and IRQ
- */
-static irqreturn_t sun6i_csi_isr(int irq, void *dev_id)
+static void sun6i_csi_v4l2_cleanup(struct sun6i_csi_device *csi_dev)
{
- struct sun6i_csi_dev *sdev = (struct sun6i_csi_dev *)dev_id;
- struct regmap *regmap = sdev->regmap;
+ struct sun6i_csi_v4l2 *v4l2 = &csi_dev->v4l2;
+
+ media_device_unregister(&v4l2->media_dev);
+ v4l2_async_nf_unregister(&v4l2->notifier);
+ v4l2_async_nf_cleanup(&v4l2->notifier);
+ sun6i_video_cleanup(csi_dev);
+ v4l2_device_unregister(&v4l2->v4l2_dev);
+ media_device_cleanup(&v4l2->media_dev);
+}
+
+/* Platform */
+
+static irqreturn_t sun6i_csi_interrupt(int irq, void *private)
+{
+ struct sun6i_csi_device *csi_dev = private;
+ struct regmap *regmap = csi_dev->regmap;
u32 status;
regmap_read(regmap, CSI_CH_INT_STA_REG, &status);
}
if (status & CSI_CH_INT_STA_FD_PD)
- sun6i_video_frame_done(&sdev->csi.video);
+ sun6i_video_frame_done(csi_dev);
regmap_write(regmap, CSI_CH_INT_STA_REG, status);
return IRQ_HANDLED;
}
+static int sun6i_csi_suspend(struct device *dev)
+{
+ struct sun6i_csi_device *csi_dev = dev_get_drvdata(dev);
+
+ reset_control_assert(csi_dev->reset);
+ clk_disable_unprepare(csi_dev->clock_ram);
+ clk_disable_unprepare(csi_dev->clock_mod);
+
+ return 0;
+}
+
+static int sun6i_csi_resume(struct device *dev)
+{
+ struct sun6i_csi_device *csi_dev = dev_get_drvdata(dev);
+ int ret;
+
+ ret = reset_control_deassert(csi_dev->reset);
+ if (ret) {
+ dev_err(dev, "failed to deassert reset\n");
+ return ret;
+ }
+
+ ret = clk_prepare_enable(csi_dev->clock_mod);
+ if (ret) {
+ dev_err(dev, "failed to enable module clock\n");
+ goto error_reset;
+ }
+
+ ret = clk_prepare_enable(csi_dev->clock_ram);
+ if (ret) {
+ dev_err(dev, "failed to enable ram clock\n");
+ goto error_clock_mod;
+ }
+
+ return 0;
+
+error_clock_mod:
+ clk_disable_unprepare(csi_dev->clock_mod);
+
+error_reset:
+ reset_control_assert(csi_dev->reset);
+
+ return ret;
+}
+
+static const struct dev_pm_ops sun6i_csi_pm_ops = {
+ .runtime_suspend = sun6i_csi_suspend,
+ .runtime_resume = sun6i_csi_resume,
+};
+
static const struct regmap_config sun6i_csi_regmap_config = {
.reg_bits = 32,
.reg_stride = 4,
.max_register = 0x9c,
};
-static int sun6i_csi_resource_request(struct sun6i_csi_dev *sdev,
- struct platform_device *pdev)
+static int sun6i_csi_resources_setup(struct sun6i_csi_device *csi_dev,
+ struct platform_device *platform_dev)
{
+ struct device *dev = csi_dev->dev;
+ const struct sun6i_csi_variant *variant;
void __iomem *io_base;
int ret;
int irq;
- io_base = devm_platform_ioremap_resource(pdev, 0);
+ variant = of_device_get_match_data(dev);
+ if (!variant)
+ return -EINVAL;
+
+ /* Registers */
+
+ io_base = devm_platform_ioremap_resource(platform_dev, 0);
if (IS_ERR(io_base))
return PTR_ERR(io_base);
- sdev->regmap = devm_regmap_init_mmio_clk(&pdev->dev, "bus", io_base,
- &sun6i_csi_regmap_config);
- if (IS_ERR(sdev->regmap)) {
- dev_err(&pdev->dev, "Failed to init register map\n");
- return PTR_ERR(sdev->regmap);
+ csi_dev->regmap = devm_regmap_init_mmio_clk(dev, "bus", io_base,
+ &sun6i_csi_regmap_config);
+ if (IS_ERR(csi_dev->regmap)) {
+ dev_err(dev, "failed to init register map\n");
+ return PTR_ERR(csi_dev->regmap);
}
- sdev->clk_mod = devm_clk_get(&pdev->dev, "mod");
- if (IS_ERR(sdev->clk_mod)) {
- dev_err(&pdev->dev, "Unable to acquire csi clock\n");
- return PTR_ERR(sdev->clk_mod);
+ /* Clocks */
+
+ csi_dev->clock_mod = devm_clk_get(dev, "mod");
+ if (IS_ERR(csi_dev->clock_mod)) {
+ dev_err(dev, "failed to acquire module clock\n");
+ return PTR_ERR(csi_dev->clock_mod);
}
- sdev->clk_ram = devm_clk_get(&pdev->dev, "ram");
- if (IS_ERR(sdev->clk_ram)) {
- dev_err(&pdev->dev, "Unable to acquire dram-csi clock\n");
- return PTR_ERR(sdev->clk_ram);
+ csi_dev->clock_ram = devm_clk_get(dev, "ram");
+ if (IS_ERR(csi_dev->clock_ram)) {
+ dev_err(dev, "failed to acquire ram clock\n");
+ return PTR_ERR(csi_dev->clock_ram);
}
- sdev->rstc_bus = devm_reset_control_get_shared(&pdev->dev, NULL);
- if (IS_ERR(sdev->rstc_bus)) {
- dev_err(&pdev->dev, "Cannot get reset controller\n");
- return PTR_ERR(sdev->rstc_bus);
+ ret = clk_set_rate_exclusive(csi_dev->clock_mod,
+ variant->clock_mod_rate);
+ if (ret) {
+ dev_err(dev, "failed to set mod clock rate\n");
+ return ret;
+ }
+
+ /* Reset */
+
+ csi_dev->reset = devm_reset_control_get_shared(dev, NULL);
+ if (IS_ERR(csi_dev->reset)) {
+ dev_err(dev, "failed to acquire reset\n");
+ ret = PTR_ERR(csi_dev->reset);
+ goto error_clock_rate_exclusive;
}
- irq = platform_get_irq(pdev, 0);
- if (irq < 0)
- return -ENXIO;
+ /* Interrupt */
- ret = devm_request_irq(&pdev->dev, irq, sun6i_csi_isr, 0, MODULE_NAME,
- sdev);
+ irq = platform_get_irq(platform_dev, 0);
+ if (irq < 0) {
+ dev_err(dev, "failed to get interrupt\n");
+ ret = -ENXIO;
+ goto error_clock_rate_exclusive;
+ }
+
+ ret = devm_request_irq(dev, irq, sun6i_csi_interrupt, 0, SUN6I_CSI_NAME,
+ csi_dev);
if (ret) {
- dev_err(&pdev->dev, "Cannot request csi IRQ\n");
- return ret;
+ dev_err(dev, "failed to request interrupt\n");
+ goto error_clock_rate_exclusive;
}
+ /* Runtime PM */
+
+ pm_runtime_enable(dev);
+
return 0;
+
+error_clock_rate_exclusive:
+ clk_rate_exclusive_put(csi_dev->clock_mod);
+
+ return ret;
+}
+
+static void sun6i_csi_resources_cleanup(struct sun6i_csi_device *csi_dev)
+{
+ pm_runtime_disable(csi_dev->dev);
+ clk_rate_exclusive_put(csi_dev->clock_mod);
}
-static int sun6i_csi_probe(struct platform_device *pdev)
+static int sun6i_csi_probe(struct platform_device *platform_dev)
{
- struct sun6i_csi_dev *sdev;
+ struct sun6i_csi_device *csi_dev;
+ struct device *dev = &platform_dev->dev;
int ret;
- sdev = devm_kzalloc(&pdev->dev, sizeof(*sdev), GFP_KERNEL);
- if (!sdev)
+ csi_dev = devm_kzalloc(dev, sizeof(*csi_dev), GFP_KERNEL);
+ if (!csi_dev)
return -ENOMEM;
- sdev->dev = &pdev->dev;
+ csi_dev->dev = &platform_dev->dev;
+ platform_set_drvdata(platform_dev, csi_dev);
- ret = sun6i_csi_resource_request(sdev, pdev);
+ ret = sun6i_csi_resources_setup(csi_dev, platform_dev);
if (ret)
return ret;
- platform_set_drvdata(pdev, sdev);
+ ret = sun6i_csi_v4l2_setup(csi_dev);
+ if (ret)
+ goto error_resources;
+
+ return 0;
- sdev->csi.dev = &pdev->dev;
- return sun6i_csi_v4l2_init(&sdev->csi);
+error_resources:
+ sun6i_csi_resources_cleanup(csi_dev);
+
+ return ret;
}
static int sun6i_csi_remove(struct platform_device *pdev)
{
- struct sun6i_csi_dev *sdev = platform_get_drvdata(pdev);
+ struct sun6i_csi_device *csi_dev = platform_get_drvdata(pdev);
- sun6i_csi_v4l2_cleanup(&sdev->csi);
+ sun6i_csi_v4l2_cleanup(csi_dev);
+ sun6i_csi_resources_cleanup(csi_dev);
return 0;
}
+static const struct sun6i_csi_variant sun6i_a31_csi_variant = {
+ .clock_mod_rate = 297000000,
+};
+
+static const struct sun6i_csi_variant sun50i_a64_csi_variant = {
+ .clock_mod_rate = 300000000,
+};
+
static const struct of_device_id sun6i_csi_of_match[] = {
- { .compatible = "allwinner,sun6i-a31-csi", },
- { .compatible = "allwinner,sun8i-a83t-csi", },
- { .compatible = "allwinner,sun8i-h3-csi", },
- { .compatible = "allwinner,sun8i-v3s-csi", },
- { .compatible = "allwinner,sun50i-a64-csi", },
+ {
+ .compatible = "allwinner,sun6i-a31-csi",
+ .data = &sun6i_a31_csi_variant,
+ },
+ {
+ .compatible = "allwinner,sun8i-a83t-csi",
+ .data = &sun6i_a31_csi_variant,
+ },
+ {
+ .compatible = "allwinner,sun8i-h3-csi",
+ .data = &sun6i_a31_csi_variant,
+ },
+ {
+ .compatible = "allwinner,sun8i-v3s-csi",
+ .data = &sun6i_a31_csi_variant,
+ },
+ {
+ .compatible = "allwinner,sun50i-a64-csi",
+ .data = &sun50i_a64_csi_variant,
+ },
{},
};
+
MODULE_DEVICE_TABLE(of, sun6i_csi_of_match);
static struct platform_driver sun6i_csi_platform_driver = {
- .probe = sun6i_csi_probe,
- .remove = sun6i_csi_remove,
- .driver = {
- .name = MODULE_NAME,
- .of_match_table = of_match_ptr(sun6i_csi_of_match),
+ .probe = sun6i_csi_probe,
+ .remove = sun6i_csi_remove,
+ .driver = {
+ .name = SUN6I_CSI_NAME,
+ .of_match_table = of_match_ptr(sun6i_csi_of_match),
+ .pm = &sun6i_csi_pm_ops,
},
};
+
module_platform_driver(sun6i_csi_platform_driver);
-MODULE_DESCRIPTION("Allwinner V3s Camera Sensor Interface driver");
+MODULE_DESCRIPTION("Allwinner A31 Camera Sensor Interface driver");
MODULE_AUTHOR("Yong Deng <yong.deng@magewell.com>");
MODULE_LICENSE("GPL");
#ifndef __SUN6I_CSI_H__
#define __SUN6I_CSI_H__
-#include <media/v4l2-ctrls.h>
#include <media/v4l2-device.h>
#include <media/v4l2-fwnode.h>
+#include <media/videobuf2-v4l2.h>
#include "sun6i_video.h"
-struct sun6i_csi;
+#define SUN6I_CSI_NAME "sun6i-csi"
+#define SUN6I_CSI_DESCRIPTION "Allwinner A31 CSI Device"
+
+struct sun6i_csi_buffer {
+ struct vb2_v4l2_buffer v4l2_buffer;
+ struct list_head list;
+
+ dma_addr_t dma_addr;
+ bool queued_to_csi;
+};
/**
* struct sun6i_csi_config - configs for sun6i csi
u32 height;
};
-struct sun6i_csi {
- struct device *dev;
- struct v4l2_ctrl_handler ctrl_handler;
+struct sun6i_csi_v4l2 {
struct v4l2_device v4l2_dev;
struct media_device media_dev;
struct v4l2_async_notifier notifier;
-
/* video port settings */
struct v4l2_fwnode_endpoint v4l2_ep;
+};
- struct sun6i_csi_config config;
+struct sun6i_csi_device {
+ struct device *dev;
+ struct sun6i_csi_config config;
+ struct sun6i_csi_v4l2 v4l2;
struct sun6i_video video;
+
+ struct regmap *regmap;
+ struct clk *clock_mod;
+ struct clk *clock_ram;
+ struct reset_control *reset;
+
+ int planar_offset[3];
+};
+
+struct sun6i_csi_variant {
+ unsigned long clock_mod_rate;
};
/**
* sun6i_csi_is_format_supported() - check if the format supported by csi
- * @csi: pointer to the csi
+ * @csi_dev: pointer to the csi device
* @pixformat: v4l2 pixel format (V4L2_PIX_FMT_*)
* @mbus_code: media bus format code (MEDIA_BUS_FMT_*)
+ *
+ * Return: true if format is supported, false otherwise.
*/
-bool sun6i_csi_is_format_supported(struct sun6i_csi *csi, u32 pixformat,
- u32 mbus_code);
+bool sun6i_csi_is_format_supported(struct sun6i_csi_device *csi_dev,
+ u32 pixformat, u32 mbus_code);
/**
* sun6i_csi_set_power() - power on/off the csi
- * @csi: pointer to the csi
+ * @csi_dev: pointer to the csi device
* @enable: on/off
+ *
+ * Return: 0 if successful, error code otherwise.
*/
-int sun6i_csi_set_power(struct sun6i_csi *csi, bool enable);
+int sun6i_csi_set_power(struct sun6i_csi_device *csi_dev, bool enable);
/**
* sun6i_csi_update_config() - update the csi register settings
- * @csi: pointer to the csi
+ * @csi_dev: pointer to the csi device
* @config: see struct sun6i_csi_config
+ *
+ * Return: 0 if successful, error code otherwise.
*/
-int sun6i_csi_update_config(struct sun6i_csi *csi,
+int sun6i_csi_update_config(struct sun6i_csi_device *csi_dev,
struct sun6i_csi_config *config);
/**
* sun6i_csi_update_buf_addr() - update the csi frame buffer address
- * @csi: pointer to the csi
+ * @csi_dev: pointer to the csi device
* @addr: frame buffer's physical address
*/
-void sun6i_csi_update_buf_addr(struct sun6i_csi *csi, dma_addr_t addr);
+void sun6i_csi_update_buf_addr(struct sun6i_csi_device *csi_dev,
+ dma_addr_t addr);
/**
* sun6i_csi_set_stream() - start/stop csi streaming
- * @csi: pointer to the csi
+ * @csi_dev: pointer to the csi device
* @enable: start/stop
*/
-void sun6i_csi_set_stream(struct sun6i_csi *csi, bool enable);
+void sun6i_csi_set_stream(struct sun6i_csi_device *csi_dev, bool enable);
/* get bpp form v4l2 pixformat */
static inline int sun6i_csi_get_bpp(unsigned int pixformat)
#define MAX_WIDTH (4800)
#define MAX_HEIGHT (4800)
-struct sun6i_csi_buffer {
- struct vb2_v4l2_buffer vb;
- struct list_head list;
+/* Helpers */
- dma_addr_t dma_addr;
- bool queued_to_csi;
-};
+static struct v4l2_subdev *
+sun6i_video_remote_subdev(struct sun6i_video *video, u32 *pad)
+{
+ struct media_pad *remote;
+
+ remote = media_pad_remote_pad_first(&video->pad);
+
+ if (!remote || !is_media_entity_v4l2_subdev(remote->entity))
+ return NULL;
+
+ if (pad)
+ *pad = remote->index;
-static const u32 supported_pixformats[] = {
+ return media_entity_to_v4l2_subdev(remote->entity);
+}
+
+/* Format */
+
+static const u32 sun6i_video_formats[] = {
V4L2_PIX_FMT_SBGGR8,
V4L2_PIX_FMT_SGBRG8,
V4L2_PIX_FMT_SGRBG8,
V4L2_PIX_FMT_JPEG,
};
-static bool is_pixformat_valid(unsigned int pixformat)
+static bool sun6i_video_format_check(u32 format)
{
unsigned int i;
- for (i = 0; i < ARRAY_SIZE(supported_pixformats); i++)
- if (supported_pixformats[i] == pixformat)
+ for (i = 0; i < ARRAY_SIZE(sun6i_video_formats); i++)
+ if (sun6i_video_formats[i] == format)
return true;
return false;
}
-static struct v4l2_subdev *
-sun6i_video_remote_subdev(struct sun6i_video *video, u32 *pad)
-{
- struct media_pad *remote;
+/* Video */
- remote = media_pad_remote_pad_first(&video->pad);
+static void sun6i_video_buffer_configure(struct sun6i_csi_device *csi_dev,
+ struct sun6i_csi_buffer *csi_buffer)
+{
+ csi_buffer->queued_to_csi = true;
+ sun6i_csi_update_buf_addr(csi_dev, csi_buffer->dma_addr);
+}
- if (!remote || !is_media_entity_v4l2_subdev(remote->entity))
- return NULL;
+static void sun6i_video_configure(struct sun6i_csi_device *csi_dev)
+{
+ struct sun6i_video *video = &csi_dev->video;
+ struct sun6i_csi_config config = { 0 };
- if (pad)
- *pad = remote->index;
+ config.pixelformat = video->format.fmt.pix.pixelformat;
+ config.code = video->mbus_code;
+ config.field = video->format.fmt.pix.field;
+ config.width = video->format.fmt.pix.width;
+ config.height = video->format.fmt.pix.height;
- return media_entity_to_v4l2_subdev(remote->entity);
+ sun6i_csi_update_config(csi_dev, &config);
}
-static int sun6i_video_queue_setup(struct vb2_queue *vq,
- unsigned int *nbuffers,
- unsigned int *nplanes,
+/* Queue */
+
+static int sun6i_video_queue_setup(struct vb2_queue *queue,
+ unsigned int *buffers_count,
+ unsigned int *planes_count,
unsigned int sizes[],
struct device *alloc_devs[])
{
- struct sun6i_video *video = vb2_get_drv_priv(vq);
- unsigned int size = video->fmt.fmt.pix.sizeimage;
+ struct sun6i_csi_device *csi_dev = vb2_get_drv_priv(queue);
+ struct sun6i_video *video = &csi_dev->video;
+ unsigned int size = video->format.fmt.pix.sizeimage;
- if (*nplanes)
+ if (*planes_count)
return sizes[0] < size ? -EINVAL : 0;
- *nplanes = 1;
+ *planes_count = 1;
sizes[0] = size;
return 0;
}
-static int sun6i_video_buffer_prepare(struct vb2_buffer *vb)
+static int sun6i_video_buffer_prepare(struct vb2_buffer *buffer)
{
- struct vb2_v4l2_buffer *vbuf = to_vb2_v4l2_buffer(vb);
- struct sun6i_csi_buffer *buf =
- container_of(vbuf, struct sun6i_csi_buffer, vb);
- struct sun6i_video *video = vb2_get_drv_priv(vb->vb2_queue);
- unsigned long size = video->fmt.fmt.pix.sizeimage;
-
- if (vb2_plane_size(vb, 0) < size) {
- v4l2_err(video->vdev.v4l2_dev, "buffer too small (%lu < %lu)\n",
- vb2_plane_size(vb, 0), size);
+ struct sun6i_csi_device *csi_dev = vb2_get_drv_priv(buffer->vb2_queue);
+ struct sun6i_video *video = &csi_dev->video;
+ struct v4l2_device *v4l2_dev = &csi_dev->v4l2.v4l2_dev;
+ struct vb2_v4l2_buffer *v4l2_buffer = to_vb2_v4l2_buffer(buffer);
+ struct sun6i_csi_buffer *csi_buffer =
+ container_of(v4l2_buffer, struct sun6i_csi_buffer, v4l2_buffer);
+ unsigned long size = video->format.fmt.pix.sizeimage;
+
+ if (vb2_plane_size(buffer, 0) < size) {
+ v4l2_err(v4l2_dev, "buffer too small (%lu < %lu)\n",
+ vb2_plane_size(buffer, 0), size);
return -EINVAL;
}
- vb2_set_plane_payload(vb, 0, size);
-
- buf->dma_addr = vb2_dma_contig_plane_dma_addr(vb, 0);
+ vb2_set_plane_payload(buffer, 0, size);
- vbuf->field = video->fmt.fmt.pix.field;
+ csi_buffer->dma_addr = vb2_dma_contig_plane_dma_addr(buffer, 0);
+ v4l2_buffer->field = video->format.fmt.pix.field;
return 0;
}
-static int sun6i_video_start_streaming(struct vb2_queue *vq, unsigned int count)
+static void sun6i_video_buffer_queue(struct vb2_buffer *buffer)
+{
+ struct sun6i_csi_device *csi_dev = vb2_get_drv_priv(buffer->vb2_queue);
+ struct sun6i_video *video = &csi_dev->video;
+ struct vb2_v4l2_buffer *v4l2_buffer = to_vb2_v4l2_buffer(buffer);
+ struct sun6i_csi_buffer *csi_buffer =
+ container_of(v4l2_buffer, struct sun6i_csi_buffer, v4l2_buffer);
+ unsigned long flags;
+
+ spin_lock_irqsave(&video->dma_queue_lock, flags);
+ csi_buffer->queued_to_csi = false;
+ list_add_tail(&csi_buffer->list, &video->dma_queue);
+ spin_unlock_irqrestore(&video->dma_queue_lock, flags);
+}
+
+static int sun6i_video_start_streaming(struct vb2_queue *queue,
+ unsigned int count)
{
- struct sun6i_video *video = vb2_get_drv_priv(vq);
+ struct sun6i_csi_device *csi_dev = vb2_get_drv_priv(queue);
+ struct sun6i_video *video = &csi_dev->video;
+ struct video_device *video_dev = &video->video_dev;
struct sun6i_csi_buffer *buf;
struct sun6i_csi_buffer *next_buf;
- struct sun6i_csi_config config;
struct v4l2_subdev *subdev;
unsigned long flags;
int ret;
video->sequence = 0;
- ret = media_pipeline_start(&video->vdev.entity, &video->vdev.pipe);
+ ret = video_device_pipeline_alloc_start(video_dev);
if (ret < 0)
- goto clear_dma_queue;
+ goto error_dma_queue_flush;
if (video->mbus_code == 0) {
ret = -EINVAL;
- goto stop_media_pipeline;
+ goto error_media_pipeline;
}
subdev = sun6i_video_remote_subdev(video, NULL);
if (!subdev) {
ret = -EINVAL;
- goto stop_media_pipeline;
+ goto error_media_pipeline;
}
- config.pixelformat = video->fmt.fmt.pix.pixelformat;
- config.code = video->mbus_code;
- config.field = video->fmt.fmt.pix.field;
- config.width = video->fmt.fmt.pix.width;
- config.height = video->fmt.fmt.pix.height;
-
- ret = sun6i_csi_update_config(video->csi, &config);
- if (ret < 0)
- goto stop_media_pipeline;
+ sun6i_video_configure(csi_dev);
spin_lock_irqsave(&video->dma_queue_lock, flags);
buf = list_first_entry(&video->dma_queue,
struct sun6i_csi_buffer, list);
- buf->queued_to_csi = true;
- sun6i_csi_update_buf_addr(video->csi, buf->dma_addr);
+ sun6i_video_buffer_configure(csi_dev, buf);
- sun6i_csi_set_stream(video->csi, true);
+ sun6i_csi_set_stream(csi_dev, true);
/*
* CSI will lookup the next dma buffer for next frame before the
* would also drop frame when lacking of queued buffer.
*/
next_buf = list_next_entry(buf, list);
- next_buf->queued_to_csi = true;
- sun6i_csi_update_buf_addr(video->csi, next_buf->dma_addr);
+ sun6i_video_buffer_configure(csi_dev, next_buf);
spin_unlock_irqrestore(&video->dma_queue_lock, flags);
ret = v4l2_subdev_call(subdev, video, s_stream, 1);
if (ret && ret != -ENOIOCTLCMD)
- goto stop_csi_stream;
+ goto error_stream;
return 0;
-stop_csi_stream:
- sun6i_csi_set_stream(video->csi, false);
-stop_media_pipeline:
- media_pipeline_stop(&video->vdev.entity);
-clear_dma_queue:
+error_stream:
+ sun6i_csi_set_stream(csi_dev, false);
+
+error_media_pipeline:
+ video_device_pipeline_stop(video_dev);
+
+error_dma_queue_flush:
spin_lock_irqsave(&video->dma_queue_lock, flags);
list_for_each_entry(buf, &video->dma_queue, list)
- vb2_buffer_done(&buf->vb.vb2_buf, VB2_BUF_STATE_QUEUED);
+ vb2_buffer_done(&buf->v4l2_buffer.vb2_buf,
+ VB2_BUF_STATE_QUEUED);
INIT_LIST_HEAD(&video->dma_queue);
spin_unlock_irqrestore(&video->dma_queue_lock, flags);
return ret;
}
-static void sun6i_video_stop_streaming(struct vb2_queue *vq)
+static void sun6i_video_stop_streaming(struct vb2_queue *queue)
{
- struct sun6i_video *video = vb2_get_drv_priv(vq);
+ struct sun6i_csi_device *csi_dev = vb2_get_drv_priv(queue);
+ struct sun6i_video *video = &csi_dev->video;
struct v4l2_subdev *subdev;
unsigned long flags;
struct sun6i_csi_buffer *buf;
if (subdev)
v4l2_subdev_call(subdev, video, s_stream, 0);
- sun6i_csi_set_stream(video->csi, false);
+ sun6i_csi_set_stream(csi_dev, false);
- media_pipeline_stop(&video->vdev.entity);
+ video_device_pipeline_stop(&video->video_dev);
/* Release all active buffers */
spin_lock_irqsave(&video->dma_queue_lock, flags);
list_for_each_entry(buf, &video->dma_queue, list)
- vb2_buffer_done(&buf->vb.vb2_buf, VB2_BUF_STATE_ERROR);
+ vb2_buffer_done(&buf->v4l2_buffer.vb2_buf, VB2_BUF_STATE_ERROR);
INIT_LIST_HEAD(&video->dma_queue);
spin_unlock_irqrestore(&video->dma_queue_lock, flags);
}
-static void sun6i_video_buffer_queue(struct vb2_buffer *vb)
-{
- struct vb2_v4l2_buffer *vbuf = to_vb2_v4l2_buffer(vb);
- struct sun6i_csi_buffer *buf =
- container_of(vbuf, struct sun6i_csi_buffer, vb);
- struct sun6i_video *video = vb2_get_drv_priv(vb->vb2_queue);
- unsigned long flags;
-
- spin_lock_irqsave(&video->dma_queue_lock, flags);
- buf->queued_to_csi = false;
- list_add_tail(&buf->list, &video->dma_queue);
- spin_unlock_irqrestore(&video->dma_queue_lock, flags);
-}
-
-void sun6i_video_frame_done(struct sun6i_video *video)
+void sun6i_video_frame_done(struct sun6i_csi_device *csi_dev)
{
+ struct sun6i_video *video = &csi_dev->video;
struct sun6i_csi_buffer *buf;
struct sun6i_csi_buffer *next_buf;
- struct vb2_v4l2_buffer *vbuf;
+ struct vb2_v4l2_buffer *v4l2_buffer;
spin_lock(&video->dma_queue_lock);
buf = list_first_entry(&video->dma_queue,
struct sun6i_csi_buffer, list);
if (list_is_last(&buf->list, &video->dma_queue)) {
- dev_dbg(video->csi->dev, "Frame dropped!\n");
- goto unlock;
+ dev_dbg(csi_dev->dev, "Frame dropped!\n");
+ goto complete;
}
next_buf = list_next_entry(buf, list);
* for next ISR call.
*/
if (!next_buf->queued_to_csi) {
- next_buf->queued_to_csi = true;
- sun6i_csi_update_buf_addr(video->csi, next_buf->dma_addr);
- dev_dbg(video->csi->dev, "Frame dropped!\n");
- goto unlock;
+ sun6i_video_buffer_configure(csi_dev, next_buf);
+ dev_dbg(csi_dev->dev, "Frame dropped!\n");
+ goto complete;
}
list_del(&buf->list);
- vbuf = &buf->vb;
- vbuf->vb2_buf.timestamp = ktime_get_ns();
- vbuf->sequence = video->sequence;
- vb2_buffer_done(&vbuf->vb2_buf, VB2_BUF_STATE_DONE);
+ v4l2_buffer = &buf->v4l2_buffer;
+ v4l2_buffer->vb2_buf.timestamp = ktime_get_ns();
+ v4l2_buffer->sequence = video->sequence;
+ vb2_buffer_done(&v4l2_buffer->vb2_buf, VB2_BUF_STATE_DONE);
/* Prepare buffer for next frame but one. */
if (!list_is_last(&next_buf->list, &video->dma_queue)) {
next_buf = list_next_entry(next_buf, list);
- next_buf->queued_to_csi = true;
- sun6i_csi_update_buf_addr(video->csi, next_buf->dma_addr);
+ sun6i_video_buffer_configure(csi_dev, next_buf);
} else {
- dev_dbg(video->csi->dev, "Next frame will be dropped!\n");
+ dev_dbg(csi_dev->dev, "Next frame will be dropped!\n");
}
-unlock:
+complete:
video->sequence++;
spin_unlock(&video->dma_queue_lock);
}
-static const struct vb2_ops sun6i_csi_vb2_ops = {
+static const struct vb2_ops sun6i_video_queue_ops = {
.queue_setup = sun6i_video_queue_setup,
- .wait_prepare = vb2_ops_wait_prepare,
- .wait_finish = vb2_ops_wait_finish,
.buf_prepare = sun6i_video_buffer_prepare,
+ .buf_queue = sun6i_video_buffer_queue,
.start_streaming = sun6i_video_start_streaming,
.stop_streaming = sun6i_video_stop_streaming,
- .buf_queue = sun6i_video_buffer_queue,
+ .wait_prepare = vb2_ops_wait_prepare,
+ .wait_finish = vb2_ops_wait_finish,
};
-static int vidioc_querycap(struct file *file, void *priv,
- struct v4l2_capability *cap)
+/* V4L2 Device */
+
+static int sun6i_video_querycap(struct file *file, void *private,
+ struct v4l2_capability *capability)
{
- struct sun6i_video *video = video_drvdata(file);
+ struct sun6i_csi_device *csi_dev = video_drvdata(file);
+ struct video_device *video_dev = &csi_dev->video.video_dev;
- strscpy(cap->driver, "sun6i-video", sizeof(cap->driver));
- strscpy(cap->card, video->vdev.name, sizeof(cap->card));
- snprintf(cap->bus_info, sizeof(cap->bus_info), "platform:%s",
- video->csi->dev->of_node->name);
+ strscpy(capability->driver, SUN6I_CSI_NAME, sizeof(capability->driver));
+ strscpy(capability->card, video_dev->name, sizeof(capability->card));
+ snprintf(capability->bus_info, sizeof(capability->bus_info),
+ "platform:%s", dev_name(csi_dev->dev));
return 0;
}
-static int vidioc_enum_fmt_vid_cap(struct file *file, void *priv,
- struct v4l2_fmtdesc *f)
+static int sun6i_video_enum_fmt(struct file *file, void *private,
+ struct v4l2_fmtdesc *fmtdesc)
{
- u32 index = f->index;
+ u32 index = fmtdesc->index;
- if (index >= ARRAY_SIZE(supported_pixformats))
+ if (index >= ARRAY_SIZE(sun6i_video_formats))
return -EINVAL;
- f->pixelformat = supported_pixformats[index];
+ fmtdesc->pixelformat = sun6i_video_formats[index];
return 0;
}
-static int vidioc_g_fmt_vid_cap(struct file *file, void *priv,
- struct v4l2_format *fmt)
+static int sun6i_video_g_fmt(struct file *file, void *private,
+ struct v4l2_format *format)
{
- struct sun6i_video *video = video_drvdata(file);
+ struct sun6i_csi_device *csi_dev = video_drvdata(file);
+ struct sun6i_video *video = &csi_dev->video;
- *fmt = video->fmt;
+ *format = video->format;
return 0;
}
-static int sun6i_video_try_fmt(struct sun6i_video *video,
- struct v4l2_format *f)
+static int sun6i_video_format_try(struct sun6i_video *video,
+ struct v4l2_format *format)
{
- struct v4l2_pix_format *pixfmt = &f->fmt.pix;
+ struct v4l2_pix_format *pix_format = &format->fmt.pix;
int bpp;
- if (!is_pixformat_valid(pixfmt->pixelformat))
- pixfmt->pixelformat = supported_pixformats[0];
+ if (!sun6i_video_format_check(pix_format->pixelformat))
+ pix_format->pixelformat = sun6i_video_formats[0];
- v4l_bound_align_image(&pixfmt->width, MIN_WIDTH, MAX_WIDTH, 1,
- &pixfmt->height, MIN_HEIGHT, MAX_WIDTH, 1, 1);
+ v4l_bound_align_image(&pix_format->width, MIN_WIDTH, MAX_WIDTH, 1,
+ &pix_format->height, MIN_HEIGHT, MAX_WIDTH, 1, 1);
- bpp = sun6i_csi_get_bpp(pixfmt->pixelformat);
- pixfmt->bytesperline = (pixfmt->width * bpp) >> 3;
- pixfmt->sizeimage = pixfmt->bytesperline * pixfmt->height;
+ bpp = sun6i_csi_get_bpp(pix_format->pixelformat);
+ pix_format->bytesperline = (pix_format->width * bpp) >> 3;
+ pix_format->sizeimage = pix_format->bytesperline * pix_format->height;
- if (pixfmt->field == V4L2_FIELD_ANY)
- pixfmt->field = V4L2_FIELD_NONE;
+ if (pix_format->field == V4L2_FIELD_ANY)
+ pix_format->field = V4L2_FIELD_NONE;
- if (pixfmt->pixelformat == V4L2_PIX_FMT_JPEG)
- pixfmt->colorspace = V4L2_COLORSPACE_JPEG;
+ if (pix_format->pixelformat == V4L2_PIX_FMT_JPEG)
+ pix_format->colorspace = V4L2_COLORSPACE_JPEG;
else
- pixfmt->colorspace = V4L2_COLORSPACE_SRGB;
+ pix_format->colorspace = V4L2_COLORSPACE_SRGB;
- pixfmt->ycbcr_enc = V4L2_YCBCR_ENC_DEFAULT;
- pixfmt->quantization = V4L2_QUANTIZATION_DEFAULT;
- pixfmt->xfer_func = V4L2_XFER_FUNC_DEFAULT;
+ pix_format->ycbcr_enc = V4L2_YCBCR_ENC_DEFAULT;
+ pix_format->quantization = V4L2_QUANTIZATION_DEFAULT;
+ pix_format->xfer_func = V4L2_XFER_FUNC_DEFAULT;
return 0;
}
-static int sun6i_video_set_fmt(struct sun6i_video *video, struct v4l2_format *f)
+static int sun6i_video_format_set(struct sun6i_video *video,
+ struct v4l2_format *format)
{
int ret;
- ret = sun6i_video_try_fmt(video, f);
+ ret = sun6i_video_format_try(video, format);
if (ret)
return ret;
- video->fmt = *f;
+ video->format = *format;
return 0;
}
-static int vidioc_s_fmt_vid_cap(struct file *file, void *priv,
- struct v4l2_format *f)
+static int sun6i_video_s_fmt(struct file *file, void *private,
+ struct v4l2_format *format)
{
- struct sun6i_video *video = video_drvdata(file);
+ struct sun6i_csi_device *csi_dev = video_drvdata(file);
+ struct sun6i_video *video = &csi_dev->video;
- if (vb2_is_busy(&video->vb2_vidq))
+ if (vb2_is_busy(&video->queue))
return -EBUSY;
- return sun6i_video_set_fmt(video, f);
+ return sun6i_video_format_set(video, format);
}
-static int vidioc_try_fmt_vid_cap(struct file *file, void *priv,
- struct v4l2_format *f)
+static int sun6i_video_try_fmt(struct file *file, void *private,
+ struct v4l2_format *format)
{
- struct sun6i_video *video = video_drvdata(file);
+ struct sun6i_csi_device *csi_dev = video_drvdata(file);
+ struct sun6i_video *video = &csi_dev->video;
- return sun6i_video_try_fmt(video, f);
+ return sun6i_video_format_try(video, format);
}
-static int vidioc_enum_input(struct file *file, void *fh,
- struct v4l2_input *inp)
+static int sun6i_video_enum_input(struct file *file, void *private,
+ struct v4l2_input *input)
{
- if (inp->index != 0)
+ if (input->index != 0)
return -EINVAL;
- strscpy(inp->name, "camera", sizeof(inp->name));
- inp->type = V4L2_INPUT_TYPE_CAMERA;
+ input->type = V4L2_INPUT_TYPE_CAMERA;
+ strscpy(input->name, "Camera", sizeof(input->name));
return 0;
}
-static int vidioc_g_input(struct file *file, void *fh, unsigned int *i)
+static int sun6i_video_g_input(struct file *file, void *private,
+ unsigned int *index)
{
- *i = 0;
+ *index = 0;
return 0;
}
-static int vidioc_s_input(struct file *file, void *fh, unsigned int i)
+static int sun6i_video_s_input(struct file *file, void *private,
+ unsigned int index)
{
- if (i != 0)
+ if (index != 0)
return -EINVAL;
return 0;
}
static const struct v4l2_ioctl_ops sun6i_video_ioctl_ops = {
- .vidioc_querycap = vidioc_querycap,
- .vidioc_enum_fmt_vid_cap = vidioc_enum_fmt_vid_cap,
- .vidioc_g_fmt_vid_cap = vidioc_g_fmt_vid_cap,
- .vidioc_s_fmt_vid_cap = vidioc_s_fmt_vid_cap,
- .vidioc_try_fmt_vid_cap = vidioc_try_fmt_vid_cap,
+ .vidioc_querycap = sun6i_video_querycap,
+
+ .vidioc_enum_fmt_vid_cap = sun6i_video_enum_fmt,
+ .vidioc_g_fmt_vid_cap = sun6i_video_g_fmt,
+ .vidioc_s_fmt_vid_cap = sun6i_video_s_fmt,
+ .vidioc_try_fmt_vid_cap = sun6i_video_try_fmt,
- .vidioc_enum_input = vidioc_enum_input,
- .vidioc_s_input = vidioc_s_input,
- .vidioc_g_input = vidioc_g_input,
+ .vidioc_enum_input = sun6i_video_enum_input,
+ .vidioc_g_input = sun6i_video_g_input,
+ .vidioc_s_input = sun6i_video_s_input,
+ .vidioc_create_bufs = vb2_ioctl_create_bufs,
+ .vidioc_prepare_buf = vb2_ioctl_prepare_buf,
.vidioc_reqbufs = vb2_ioctl_reqbufs,
.vidioc_querybuf = vb2_ioctl_querybuf,
- .vidioc_qbuf = vb2_ioctl_qbuf,
.vidioc_expbuf = vb2_ioctl_expbuf,
+ .vidioc_qbuf = vb2_ioctl_qbuf,
.vidioc_dqbuf = vb2_ioctl_dqbuf,
- .vidioc_create_bufs = vb2_ioctl_create_bufs,
- .vidioc_prepare_buf = vb2_ioctl_prepare_buf,
.vidioc_streamon = vb2_ioctl_streamon,
.vidioc_streamoff = vb2_ioctl_streamoff,
-
- .vidioc_log_status = v4l2_ctrl_log_status,
- .vidioc_subscribe_event = v4l2_ctrl_subscribe_event,
- .vidioc_unsubscribe_event = v4l2_event_unsubscribe,
};
-/* -----------------------------------------------------------------------------
- * V4L2 file operations
- */
+/* V4L2 File */
+
static int sun6i_video_open(struct file *file)
{
- struct sun6i_video *video = video_drvdata(file);
+ struct sun6i_csi_device *csi_dev = video_drvdata(file);
+ struct sun6i_video *video = &csi_dev->video;
int ret = 0;
if (mutex_lock_interruptible(&video->lock))
ret = v4l2_fh_open(file);
if (ret < 0)
- goto unlock;
+ goto error_lock;
- ret = v4l2_pipeline_pm_get(&video->vdev.entity);
+ ret = v4l2_pipeline_pm_get(&video->video_dev.entity);
if (ret < 0)
- goto fh_release;
-
- /* check if already powered */
- if (!v4l2_fh_is_singular_file(file))
- goto unlock;
+ goto error_v4l2_fh;
- ret = sun6i_csi_set_power(video->csi, true);
- if (ret < 0)
- goto fh_release;
+ /* Power on at first open. */
+ if (v4l2_fh_is_singular_file(file)) {
+ ret = sun6i_csi_set_power(csi_dev, true);
+ if (ret < 0)
+ goto error_v4l2_fh;
+ }
mutex_unlock(&video->lock);
+
return 0;
-fh_release:
+error_v4l2_fh:
v4l2_fh_release(file);
-unlock:
+
+error_lock:
mutex_unlock(&video->lock);
+
return ret;
}
static int sun6i_video_close(struct file *file)
{
- struct sun6i_video *video = video_drvdata(file);
- bool last_fh;
+ struct sun6i_csi_device *csi_dev = video_drvdata(file);
+ struct sun6i_video *video = &csi_dev->video;
+ bool last_close;
mutex_lock(&video->lock);
- last_fh = v4l2_fh_is_singular_file(file);
+ last_close = v4l2_fh_is_singular_file(file);
_vb2_fop_release(file, NULL);
+ v4l2_pipeline_pm_put(&video->video_dev.entity);
- v4l2_pipeline_pm_put(&video->vdev.entity);
-
- if (last_fh)
- sun6i_csi_set_power(video->csi, false);
+ /* Power off at last close. */
+ if (last_close)
+ sun6i_csi_set_power(csi_dev, false);
mutex_unlock(&video->lock);
.poll = vb2_fop_poll
};
-/* -----------------------------------------------------------------------------
- * Media Operations
- */
+/* Media Entity */
+
static int sun6i_video_link_validate_get_format(struct media_pad *pad,
struct v4l2_subdev_format *fmt)
{
{
struct video_device *vdev = container_of(link->sink->entity,
struct video_device, entity);
- struct sun6i_video *video = video_get_drvdata(vdev);
+ struct sun6i_csi_device *csi_dev = video_get_drvdata(vdev);
+ struct sun6i_video *video = &csi_dev->video;
struct v4l2_subdev_format source_fmt;
int ret;
video->mbus_code = 0;
if (!media_pad_remote_pad_first(link->sink->entity->pads)) {
- dev_info(video->csi->dev,
- "video node %s pad not connected\n", vdev->name);
+ dev_info(csi_dev->dev, "video node %s pad not connected\n",
+ vdev->name);
return -ENOLINK;
}
if (ret < 0)
return ret;
- if (!sun6i_csi_is_format_supported(video->csi,
- video->fmt.fmt.pix.pixelformat,
+ if (!sun6i_csi_is_format_supported(csi_dev,
+ video->format.fmt.pix.pixelformat,
source_fmt.format.code)) {
- dev_err(video->csi->dev,
+ dev_err(csi_dev->dev,
"Unsupported pixformat: 0x%x with mbus code: 0x%x!\n",
- video->fmt.fmt.pix.pixelformat,
+ video->format.fmt.pix.pixelformat,
source_fmt.format.code);
return -EPIPE;
}
- if (source_fmt.format.width != video->fmt.fmt.pix.width ||
- source_fmt.format.height != video->fmt.fmt.pix.height) {
- dev_err(video->csi->dev,
+ if (source_fmt.format.width != video->format.fmt.pix.width ||
+ source_fmt.format.height != video->format.fmt.pix.height) {
+ dev_err(csi_dev->dev,
"Wrong width or height %ux%u (%ux%u expected)\n",
- video->fmt.fmt.pix.width, video->fmt.fmt.pix.height,
+ video->format.fmt.pix.width, video->format.fmt.pix.height,
source_fmt.format.width, source_fmt.format.height);
return -EPIPE;
}
.link_validate = sun6i_video_link_validate
};
-int sun6i_video_init(struct sun6i_video *video, struct sun6i_csi *csi,
- const char *name)
+/* Video */
+
+int sun6i_video_setup(struct sun6i_csi_device *csi_dev)
{
- struct video_device *vdev = &video->vdev;
- struct vb2_queue *vidq = &video->vb2_vidq;
- struct v4l2_format fmt = { 0 };
+ struct sun6i_video *video = &csi_dev->video;
+ struct v4l2_device *v4l2_dev = &csi_dev->v4l2.v4l2_dev;
+ struct video_device *video_dev = &video->video_dev;
+ struct vb2_queue *queue = &video->queue;
+ struct media_pad *pad = &video->pad;
+ struct v4l2_format format = { 0 };
+ struct v4l2_pix_format *pix_format = &format.fmt.pix;
int ret;
- video->csi = csi;
+ /* Media Entity */
- /* Initialize the media entity... */
- video->pad.flags = MEDIA_PAD_FL_SINK | MEDIA_PAD_FL_MUST_CONNECT;
- vdev->entity.ops = &sun6i_video_media_ops;
- ret = media_entity_pads_init(&vdev->entity, 1, &video->pad);
+ video_dev->entity.ops = &sun6i_video_media_ops;
+
+ /* Media Pad */
+
+ pad->flags = MEDIA_PAD_FL_SINK | MEDIA_PAD_FL_MUST_CONNECT;
+
+ ret = media_entity_pads_init(&video_dev->entity, 1, pad);
if (ret < 0)
return ret;
- mutex_init(&video->lock);
+ /* DMA queue */
INIT_LIST_HEAD(&video->dma_queue);
spin_lock_init(&video->dma_queue_lock);
video->sequence = 0;
- /* Setup default format */
- fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
- fmt.fmt.pix.pixelformat = supported_pixformats[0];
- fmt.fmt.pix.width = 1280;
- fmt.fmt.pix.height = 720;
- fmt.fmt.pix.field = V4L2_FIELD_NONE;
- sun6i_video_set_fmt(video, &fmt);
-
- /* Initialize videobuf2 queue */
- vidq->type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
- vidq->io_modes = VB2_MMAP | VB2_DMABUF;
- vidq->drv_priv = video;
- vidq->buf_struct_size = sizeof(struct sun6i_csi_buffer);
- vidq->ops = &sun6i_csi_vb2_ops;
- vidq->mem_ops = &vb2_dma_contig_memops;
- vidq->timestamp_flags = V4L2_BUF_FLAG_TIMESTAMP_MONOTONIC;
- vidq->lock = &video->lock;
- /* Make sure non-dropped frame */
- vidq->min_buffers_needed = 3;
- vidq->dev = csi->dev;
-
- ret = vb2_queue_init(vidq);
+ /* Queue */
+
+ mutex_init(&video->lock);
+
+ queue->type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
+ queue->io_modes = VB2_MMAP | VB2_DMABUF;
+ queue->buf_struct_size = sizeof(struct sun6i_csi_buffer);
+ queue->ops = &sun6i_video_queue_ops;
+ queue->mem_ops = &vb2_dma_contig_memops;
+ queue->timestamp_flags = V4L2_BUF_FLAG_TIMESTAMP_MONOTONIC;
+ queue->lock = &video->lock;
+ queue->dev = csi_dev->dev;
+ queue->drv_priv = csi_dev;
+
+	/* Make sure frames are not dropped. */
+ queue->min_buffers_needed = 3;
+
+ ret = vb2_queue_init(queue);
if (ret) {
- v4l2_err(&csi->v4l2_dev, "vb2_queue_init failed: %d\n", ret);
- goto clean_entity;
+ v4l2_err(v4l2_dev, "failed to initialize vb2 queue: %d\n", ret);
+ goto error_media_entity;
}
- /* Register video device */
- strscpy(vdev->name, name, sizeof(vdev->name));
- vdev->release = video_device_release_empty;
- vdev->fops = &sun6i_video_fops;
- vdev->ioctl_ops = &sun6i_video_ioctl_ops;
- vdev->vfl_type = VFL_TYPE_VIDEO;
- vdev->vfl_dir = VFL_DIR_RX;
- vdev->v4l2_dev = &csi->v4l2_dev;
- vdev->queue = vidq;
- vdev->lock = &video->lock;
- vdev->device_caps = V4L2_CAP_STREAMING | V4L2_CAP_VIDEO_CAPTURE;
- video_set_drvdata(vdev, video);
-
- ret = video_register_device(vdev, VFL_TYPE_VIDEO, -1);
+ /* V4L2 Format */
+
+ format.type = queue->type;
+ pix_format->pixelformat = sun6i_video_formats[0];
+ pix_format->width = 1280;
+ pix_format->height = 720;
+ pix_format->field = V4L2_FIELD_NONE;
+
+ sun6i_video_format_set(video, &format);
+
+ /* Video Device */
+
+ strscpy(video_dev->name, SUN6I_CSI_NAME, sizeof(video_dev->name));
+ video_dev->device_caps = V4L2_CAP_VIDEO_CAPTURE | V4L2_CAP_STREAMING;
+ video_dev->vfl_dir = VFL_DIR_RX;
+ video_dev->release = video_device_release_empty;
+ video_dev->fops = &sun6i_video_fops;
+ video_dev->ioctl_ops = &sun6i_video_ioctl_ops;
+ video_dev->v4l2_dev = v4l2_dev;
+ video_dev->queue = queue;
+ video_dev->lock = &video->lock;
+
+ video_set_drvdata(video_dev, csi_dev);
+
+ ret = video_register_device(video_dev, VFL_TYPE_VIDEO, -1);
if (ret < 0) {
- v4l2_err(&csi->v4l2_dev,
- "video_register_device failed: %d\n", ret);
- goto clean_entity;
+ v4l2_err(v4l2_dev, "failed to register video device: %d\n",
+ ret);
+ goto error_media_entity;
}
return 0;
-clean_entity:
- media_entity_cleanup(&video->vdev.entity);
+error_media_entity:
+ media_entity_cleanup(&video_dev->entity);
+
mutex_destroy(&video->lock);
+
return ret;
}
-void sun6i_video_cleanup(struct sun6i_video *video)
+void sun6i_video_cleanup(struct sun6i_csi_device *csi_dev)
{
- vb2_video_unregister_device(&video->vdev);
- media_entity_cleanup(&video->vdev.entity);
+ struct sun6i_video *video = &csi_dev->video;
+ struct video_device *video_dev = &video->video_dev;
+
+ vb2_video_unregister_device(video_dev);
+ media_entity_cleanup(&video_dev->entity);
mutex_destroy(&video->lock);
}
#include <media/v4l2-dev.h>
#include <media/videobuf2-core.h>
-struct sun6i_csi;
+struct sun6i_csi_device;
struct sun6i_video {
- struct video_device vdev;
+ struct video_device video_dev;
+ struct vb2_queue queue;
+ struct mutex lock; /* Queue lock. */
struct media_pad pad;
- struct sun6i_csi *csi;
- struct mutex lock;
-
- struct vb2_queue vb2_vidq;
- spinlock_t dma_queue_lock;
struct list_head dma_queue;
+ spinlock_t dma_queue_lock; /* DMA queue lock. */
- unsigned int sequence;
- struct v4l2_format fmt;
+ struct v4l2_format format;
u32 mbus_code;
+ unsigned int sequence;
};
-int sun6i_video_init(struct sun6i_video *video, struct sun6i_csi *csi,
- const char *name);
-void sun6i_video_cleanup(struct sun6i_video *video);
+int sun6i_video_setup(struct sun6i_csi_device *csi_dev);
+void sun6i_video_cleanup(struct sun6i_csi_device *csi_dev);
-void sun6i_video_frame_done(struct sun6i_video *video);
+void sun6i_video_frame_done(struct sun6i_csi_device *csi_dev);
#endif /* __SUN6I_VIDEO_H__ */
tristate "Allwinner A31 MIPI CSI-2 Controller Driver"
depends on V4L_PLATFORM_DRIVERS && VIDEO_DEV
depends on ARCH_SUNXI || COMPILE_TEST
- depends on PM && COMMON_CLK
+ depends on PM && COMMON_CLK && RESET_CONTROLLER
+ depends on PHY_SUN6I_MIPI_DPHY
select MEDIA_CONTROLLER
select VIDEO_V4L2_SUBDEV_API
select V4L2_FWNODE
- select PHY_SUN6I_MIPI_DPHY
select GENERIC_PHY_MIPI_DPHY
select REGMAP_MMIO
help
csi2_dev->reset = devm_reset_control_get_shared(dev, NULL);
if (IS_ERR(csi2_dev->reset)) {
dev_err(dev, "failed to get reset controller\n");
- return PTR_ERR(csi2_dev->reset);
+ ret = PTR_ERR(csi2_dev->reset);
+ goto error_clock_rate_exclusive;
}
/* D-PHY */
csi2_dev->dphy = devm_phy_get(dev, "dphy");
if (IS_ERR(csi2_dev->dphy)) {
dev_err(dev, "failed to get MIPI D-PHY\n");
- return PTR_ERR(csi2_dev->dphy);
+ ret = PTR_ERR(csi2_dev->dphy);
+ goto error_clock_rate_exclusive;
}
ret = phy_init(csi2_dev->dphy);
if (ret) {
dev_err(dev, "failed to initialize MIPI D-PHY\n");
- return ret;
+ goto error_clock_rate_exclusive;
}
/* Runtime PM */
pm_runtime_enable(dev);
return 0;
+
+error_clock_rate_exclusive:
+ clk_rate_exclusive_put(csi2_dev->clock_mod);
+
+ return ret;
}
static void
ret = sun6i_mipi_csi2_bridge_setup(csi2_dev);
if (ret)
- return ret;
+ goto error_resources;
return 0;
+
+error_resources:
+ sun6i_mipi_csi2_resources_cleanup(csi2_dev);
+
+ return ret;
}
static int sun6i_mipi_csi2_remove(struct platform_device *platform_dev)
tristate "Allwinner A83T MIPI CSI-2 Controller and D-PHY Driver"
depends on V4L_PLATFORM_DRIVERS && VIDEO_DEV
depends on ARCH_SUNXI || COMPILE_TEST
- depends on PM && COMMON_CLK
+ depends on PM && COMMON_CLK && RESET_CONTROLLER
select MEDIA_CONTROLLER
select VIDEO_V4L2_SUBDEV_API
select V4L2_FWNODE
csi2_dev->clock_mipi = devm_clk_get(dev, "mipi");
if (IS_ERR(csi2_dev->clock_mipi)) {
dev_err(dev, "failed to acquire mipi clock\n");
- return PTR_ERR(csi2_dev->clock_mipi);
+ ret = PTR_ERR(csi2_dev->clock_mipi);
+ goto error_clock_rate_exclusive;
}
csi2_dev->clock_misc = devm_clk_get(dev, "misc");
if (IS_ERR(csi2_dev->clock_misc)) {
dev_err(dev, "failed to acquire misc clock\n");
- return PTR_ERR(csi2_dev->clock_misc);
+ ret = PTR_ERR(csi2_dev->clock_misc);
+ goto error_clock_rate_exclusive;
}
/* Reset */
csi2_dev->reset = devm_reset_control_get_shared(dev, NULL);
if (IS_ERR(csi2_dev->reset)) {
dev_err(dev, "failed to get reset controller\n");
- return PTR_ERR(csi2_dev->reset);
+ ret = PTR_ERR(csi2_dev->reset);
+ goto error_clock_rate_exclusive;
}
/* D-PHY */
ret = sun8i_a83t_dphy_register(csi2_dev);
if (ret) {
dev_err(dev, "failed to initialize MIPI D-PHY\n");
- return ret;
+ goto error_clock_rate_exclusive;
}
/* Runtime PM */
pm_runtime_enable(dev);
return 0;
+
+error_clock_rate_exclusive:
+ clk_rate_exclusive_put(csi2_dev->clock_mod);
+
+ return ret;
}
static void
ret = sun8i_a83t_mipi_csi2_bridge_setup(csi2_dev);
if (ret)
- return ret;
+ goto error_resources;
return 0;
+
+error_resources:
+ sun8i_a83t_mipi_csi2_resources_cleanup(csi2_dev);
+
+ return ret;
}
static int sun8i_a83t_mipi_csi2_remove(struct platform_device *platform_dev)
depends on V4L_MEM2MEM_DRIVERS
depends on VIDEO_DEV
depends on ARCH_SUNXI || COMPILE_TEST
- depends on COMMON_CLK && OF
+ depends on COMMON_CLK && RESET_CONTROLLER && OF
depends on PM
select VIDEOBUF2_DMA_CONTIG
select V4L2_MEM2MEM_DEV
depends on V4L_MEM2MEM_DRIVERS
depends on VIDEO_DEV
depends on ARCH_SUNXI || COMPILE_TEST
- depends on COMMON_CLK && OF
+ depends on COMMON_CLK && RESET_CONTROLLER && OF
depends on PM
select VIDEOBUF2_DMA_CONTIG
select V4L2_MEM2MEM_DEV
dma_addr_t addr;
int ret;
- ret = media_pipeline_start(&ctx->vdev.entity, &ctx->phy->pipe);
+ ret = video_device_pipeline_alloc_start(&ctx->vdev);
if (ret < 0) {
ctx_err(ctx, "Failed to start media pipeline: %d\n", ret);
goto error_release_buffers;
cal_ctx_unprepare(ctx);
error_pipeline:
- media_pipeline_stop(&ctx->vdev.entity);
+ video_device_pipeline_stop(&ctx->vdev);
error_release_buffers:
cal_release_buffers(ctx, VB2_BUF_STATE_QUEUED);
cal_release_buffers(ctx, VB2_BUF_STATE_ERROR);
- media_pipeline_stop(&ctx->vdev.entity);
+ video_device_pipeline_stop(&ctx->vdev);
}
static const struct vb2_ops cal_video_qops = {
struct device_node *source_ep_node;
struct device_node *source_node;
struct v4l2_subdev *source;
- struct media_pipeline pipe;
struct v4l2_subdev subdev;
struct media_pad pads[CAL_CAMERARX_NUM_PADS];
struct isp_pipeline *pipe;
struct media_pad *pad;
- if (!me->pipe)
- return 0;
pipe = to_isp_pipeline(me);
- if (pipe->stream_state == ISP_PIPELINE_STREAM_STOPPED)
+ if (!pipe || pipe->stream_state == ISP_PIPELINE_STREAM_STOPPED)
return 0;
pad = media_pad_remote_pad_first(&pipe->output->pad);
return pad->entity == me;
/* Start streaming on the pipeline. No link touching an entity in the
* pipeline can be activated or deactivated once streaming is started.
*/
- pipe = video->video.entity.pipe
- ? to_isp_pipeline(&video->video.entity) : &video->pipe;
+ pipe = to_isp_pipeline(&video->video.entity) ? : &video->pipe;
ret = media_entity_enum_init(&pipe->ent_enum, &video->isp->media_dev);
if (ret)
pipe->l3_ick = clk_get_rate(video->isp->clock[ISP_CLK_L3_ICK]);
pipe->max_rate = pipe->l3_ick;
- ret = media_pipeline_start(&video->video.entity, &pipe->pipe);
+ ret = video_device_pipeline_start(&video->video, &pipe->pipe);
if (ret < 0)
goto err_pipeline_start;
return 0;
err_check_format:
- media_pipeline_stop(&video->video.entity);
+ video_device_pipeline_stop(&video->video);
err_pipeline_start:
/* TODO: Implement PM QoS */
/* The DMA queue must be emptied here, otherwise CCDC interrupts that
video->error = false;
/* TODO: Implement PM QoS */
- media_pipeline_stop(&video->video.entity);
+ video_device_pipeline_stop(&video->video);
media_entity_enum_cleanup(&pipe->ent_enum);
unsigned int external_width;
};
-#define to_isp_pipeline(__e) \
- container_of((__e)->pipe, struct isp_pipeline, pipe)
+static inline struct isp_pipeline *to_isp_pipeline(struct media_entity *entity)
+{
+ struct media_pipeline *pipe = media_entity_pipeline(entity);
+
+ if (!pipe)
+ return NULL;
+
+ return container_of(pipe, struct isp_pipeline, pipe);
+}
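With the macro replaced by a NULL-returning inline, callers test the result instead of peeking at the removed entity->pipe field, matching the stream-state check reworked earlier in this section. A caller-side sketch:

	struct isp_pipeline *pipe = to_isp_pipeline(&video->video.entity);

	if (!pipe || pipe->stream_state == ISP_PIPELINE_STREAM_STOPPED)
		return 0;	/* nothing is streaming through this entity */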
static inline int isp_pipeline_ready(struct isp_pipeline *pipe)
{
static int hantro_try_ctrl(struct v4l2_ctrl *ctrl)
{
+ struct hantro_ctx *ctx;
+
+ ctx = container_of(ctrl->handler,
+ struct hantro_ctx, ctrl_handler);
+
if (ctrl->id == V4L2_CID_STATELESS_H264_SPS) {
const struct v4l2_ctrl_h264_sps *sps = ctrl->p_new.p_h264_sps;
} else if (ctrl->id == V4L2_CID_STATELESS_HEVC_SPS) {
const struct v4l2_ctrl_hevc_sps *sps = ctrl->p_new.p_hevc_sps;
- if (sps->bit_depth_luma_minus8 != sps->bit_depth_chroma_minus8)
- /* Luma and chroma bit depth mismatch */
- return -EINVAL;
- if (sps->bit_depth_luma_minus8 != 0)
- /* Only 8-bit is supported */
+ if (sps->bit_depth_luma_minus8 != 0 && sps->bit_depth_luma_minus8 != 2)
+ /* Only 8-bit and 10-bit are supported */
return -EINVAL;
+
+ ctx->bit_depth = sps->bit_depth_luma_minus8 + 8;
} else if (ctrl->id == V4L2_CID_STATELESS_VP9_FRAME) {
const struct v4l2_ctrl_vp9_frame *dec_params = ctrl->p_new.p_vp9_frame;
static size_t hantro_hevc_chroma_offset(struct hantro_ctx *ctx)
{
- return ctx->dst_fmt.width * ctx->dst_fmt.height;
+ return ctx->dst_fmt.width * ctx->dst_fmt.height * ctx->bit_depth / 8;
}
static size_t hantro_hevc_motion_vectors_offset(struct hantro_ctx *ctx)
hantro_reg_write(vpu, &g2_bit_depth_y_minus8, sps->bit_depth_luma_minus8);
hantro_reg_write(vpu, &g2_bit_depth_c_minus8, sps->bit_depth_chroma_minus8);
- hantro_reg_write(vpu, &g2_output_8_bits, 0);
-
hantro_reg_write(vpu, &g2_hdr_skip_length, compute_header_skip_length(ctx));
min_log2_cb_size = sps->log2_min_luma_coding_block_size_minus3 + 3;
hevc_dec->tile_bsd.cpu = NULL;
}
- size = VERT_FILTER_RAM_SIZE * height64 * (num_tile_cols - 1);
+ size = (VERT_FILTER_RAM_SIZE * height64 * (num_tile_cols - 1) * ctx->bit_depth) / 8;
hevc_dec->tile_filter.cpu = dma_alloc_coherent(vpu->dev, size,
&hevc_dec->tile_filter.dma,
GFP_KERNEL);
goto err_free_tile_buffers;
hevc_dec->tile_filter.size = size;
- size = VERT_SAO_RAM_SIZE * height64 * (num_tile_cols - 1);
+ size = (VERT_SAO_RAM_SIZE * height64 * (num_tile_cols - 1) * ctx->bit_depth) / 8;
hevc_dec->tile_sao.cpu = dma_alloc_coherent(vpu->dev, size,
&hevc_dec->tile_sao.dma,
GFP_KERNEL);
struct hantro_dev *vpu = ctx->dev;
struct vb2_v4l2_buffer *dst_buf;
int down_scale = down_scale_factor(ctx);
+ int out_depth;
size_t chroma_offset;
dma_addr_t dst_dma;
hantro_write_addr(vpu, G2_RS_OUT_LUMA_ADDR, dst_dma);
hantro_write_addr(vpu, G2_RS_OUT_CHROMA_ADDR, dst_dma + chroma_offset);
}
+
+ out_depth = hantro_get_format_depth(ctx->dst_fmt.pixelformat);
if (ctx->dev->variant->legacy_regs) {
- int out_depth = hantro_get_format_depth(ctx->dst_fmt.pixelformat);
u8 pp_shift = 0;
if (out_depth > 8)
hantro_reg_write(ctx->dev, &g2_rs_out_bit_depth, out_depth);
hantro_reg_write(ctx->dev, &g2_pp_pix_shift, pp_shift);
+ } else {
+ hantro_reg_write(vpu, &g2_output_8_bits, out_depth > 8 ? 0 : 1);
+ hantro_reg_write(vpu, &g2_output_format, out_depth > 8 ? 1 : 0);
}
hantro_reg_write(vpu, &g2_out_rs_e, 1);
}
.step_height = MB_DIM,
},
},
+ {
+ .fourcc = V4L2_PIX_FMT_P010,
+ .codec_mode = HANTRO_MODE_NONE,
+ .postprocessed = true,
+ .frmsize = {
+ .min_width = FMT_MIN_WIDTH,
+ .max_width = FMT_UHD_WIDTH,
+ .step_width = MB_DIM,
+ .min_height = FMT_MIN_HEIGHT,
+ .max_height = FMT_UHD_HEIGHT,
+ .step_height = MB_DIM,
+ },
+ },
};
static const struct hantro_fmt imx8m_vpu_g2_dec_fmts[] = {
{
.fourcc = V4L2_PIX_FMT_NV12_4L4,
.codec_mode = HANTRO_MODE_NONE,
+ .match_depth = true,
+ .frmsize = {
+ .min_width = FMT_MIN_WIDTH,
+ .max_width = FMT_UHD_WIDTH,
+ .step_width = TILE_MB_DIM,
+ .min_height = FMT_MIN_HEIGHT,
+ .max_height = FMT_UHD_HEIGHT,
+ .step_height = TILE_MB_DIM,
+ },
+ },
+ {
+ .fourcc = V4L2_PIX_FMT_P010_4L4,
+ .codec_mode = HANTRO_MODE_NONE,
+ .match_depth = true,
.frmsize = {
.min_width = FMT_MIN_WIDTH,
.max_width = FMT_UHD_WIDTH,
* Use the pipeline object embedded in the first DMA object that starts
* streaming.
*/
- pipe = dma->video.entity.pipe
- ? to_xvip_pipeline(&dma->video.entity) : &dma->pipe;
+ pipe = to_xvip_pipeline(&dma->video) ? : &dma->pipe;
- ret = media_pipeline_start(&dma->video.entity, &pipe->pipe);
+ ret = video_device_pipeline_start(&dma->video, &pipe->pipe);
if (ret < 0)
goto error;
return 0;
error_stop:
- media_pipeline_stop(&dma->video.entity);
+ video_device_pipeline_stop(&dma->video);
error:
/* Give back all queued buffers to videobuf2. */
static void xvip_dma_stop_streaming(struct vb2_queue *vq)
{
struct xvip_dma *dma = vb2_get_drv_priv(vq);
- struct xvip_pipeline *pipe = to_xvip_pipeline(&dma->video.entity);
+ struct xvip_pipeline *pipe = to_xvip_pipeline(&dma->video);
struct xvip_dma_buffer *buf, *nbuf;
/* Stop the pipeline. */
/* Cleanup the pipeline and mark it as being stopped. */
xvip_pipeline_cleanup(pipe);
- media_pipeline_stop(&dma->video.entity);
+ video_device_pipeline_stop(&dma->video);
/* Give back all queued buffers to videobuf2. */
spin_lock_irq(&dma->queued_lock);
struct xvip_dma *output;
};
-static inline struct xvip_pipeline *to_xvip_pipeline(struct media_entity *e)
+static inline struct xvip_pipeline *to_xvip_pipeline(struct video_device *vdev)
{
- return container_of(e->pipe, struct xvip_pipeline, pipe);
+ struct media_pipeline *pipe = video_device_pipeline(vdev);
+
+ if (!pipe)
+ return NULL;
+
+ return container_of(pipe, struct xvip_pipeline, pipe);
}
/**
static int si476x_radio_fops_release(struct file *file)
{
- int err;
struct si476x_radio *radio = video_drvdata(file);
if (v4l2_fh_is_singular_file(file) &&
si476x_core_set_power_state(radio->core,
SI476X_POWER_DOWN);
- err = v4l2_fh_release(file);
-
- return err;
+ return v4l2_fh_release(file);
}
static ssize_t si476x_radio_fops_read(struct file *file, char __user *buf,
#include <linux/interrupt.h>
#include <linux/i2c.h>
#include <linux/slab.h>
-#include <linux/gpio.h>
+#include <linux/gpio/consumer.h>
#include <linux/module.h>
#include <media/v4l2-device.h>
#include <media/v4l2-ioctl.h>
*/
static int send_associate_24g(struct imon_context *ictx)
{
- int retval;
const unsigned char packet[8] = { 0x01, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x20 };
}
memcpy(ictx->usb_tx_buf, packet, sizeof(packet));
- retval = send_packet(ictx);
- return retval;
+ return send_packet(ictx);
}
/*
struct mceusb_dev *ir = dev->priv;
unsigned int units;
- units = DIV_ROUND_CLOSEST(timeout, MCE_TIME_UNIT);
+ units = DIV_ROUND_UP(timeout, MCE_TIME_UNIT);
cmdbuf[2] = units >> 8;
cmdbuf[3] = units;
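Rounding up instead of to-nearest guarantees the programmed timeout is never shorter than the one requested; rounding to the closest unit could cut it short. A quick illustration, assuming the driver's unit (MCE_TIME_UNIT) is 50:

/* DIV_ROUND_UP(n, d) is ((n) + (d) - 1) / (d) in the kernel headers. */

/* Requested timeout of 51, hardware unit of 50:
 *   DIV_ROUND_CLOSEST(51, 50) == 1  ->  50: shorter than requested
 *   DIV_ROUND_UP(51, 50)      == 2  -> 100: a safe upper bound
 */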
static int vimc_capture_start_streaming(struct vb2_queue *vq, unsigned int count)
{
struct vimc_capture_device *vcapture = vb2_get_drv_priv(vq);
- struct media_entity *entity = &vcapture->vdev.entity;
int ret;
vcapture->sequence = 0;
/* Start the media pipeline */
- ret = media_pipeline_start(entity, &vcapture->stream.pipe);
+ ret = video_device_pipeline_start(&vcapture->vdev, &vcapture->stream.pipe);
if (ret) {
vimc_capture_return_all_buffers(vcapture, VB2_BUF_STATE_QUEUED);
return ret;
ret = vimc_streamer_s_stream(&vcapture->stream, &vcapture->ved, 1);
if (ret) {
- media_pipeline_stop(entity);
+ video_device_pipeline_stop(&vcapture->vdev);
vimc_capture_return_all_buffers(vcapture, VB2_BUF_STATE_QUEUED);
return ret;
}
vimc_streamer_s_stream(&vcapture->stream, &vcapture->ved, 0);
/* Stop the media pipeline */
- media_pipeline_stop(&vcapture->vdev.entity);
+ video_device_pipeline_stop(&vcapture->vdev);
/* Release all active buffers */
vimc_capture_return_all_buffers(vcapture, VB2_BUF_STATE_ERROR);
return vivid_vid_out_g_fbuf(file, fh, a);
}
+/*
+ * Only support the framebuffer of one of the vivid instances.
+ * Anything else is rejected.
+ */
+bool vivid_validate_fb(const struct v4l2_framebuffer *a)
+{
+ struct vivid_dev *dev;
+ int i;
+
+ for (i = 0; i < n_devs; i++) {
+ dev = vivid_devs[i];
+ if (!dev || !dev->video_pbase)
+ continue;
+ if ((unsigned long)a->base == dev->video_pbase &&
+ a->fmt.width <= dev->display_width &&
+ a->fmt.height <= dev->display_height &&
+ a->fmt.bytesperline <= dev->display_byte_stride)
+ return true;
+ }
+ return false;
+}
+
static int vidioc_s_fbuf(struct file *file, void *fh, const struct v4l2_framebuffer *a)
{
struct video_device *vdev = video_devdata(file);
/* how many inputs do we have and of what type? */
dev->num_inputs = num_inputs[inst];
- if (dev->num_inputs < 1)
- dev->num_inputs = 1;
+ if (node_type & 0x20007) {
+ if (dev->num_inputs < 1)
+ dev->num_inputs = 1;
+ } else {
+ dev->num_inputs = 0;
+ }
if (dev->num_inputs >= MAX_INPUTS)
dev->num_inputs = MAX_INPUTS;
for (i = 0; i < dev->num_inputs; i++) {
/* how many outputs do we have and of what type? */
dev->num_outputs = num_outputs[inst];
- if (dev->num_outputs < 1)
- dev->num_outputs = 1;
+ if (node_type & 0x40300) {
+ if (dev->num_outputs < 1)
+ dev->num_outputs = 1;
+ } else {
+ dev->num_outputs = 0;
+ }
if (dev->num_outputs >= MAX_OUTPUTS)
dev->num_outputs = MAX_OUTPUTS;
for (i = 0; i < dev->num_outputs; i++) {
return dev->output_type[dev->output] == HDMI;
}
+bool vivid_validate_fb(const struct v4l2_framebuffer *a);
+
#endif
int ret;
dev->video_buffer_size = MAX_OSD_HEIGHT * MAX_OSD_WIDTH * 2;
- dev->video_vbase = kzalloc(dev->video_buffer_size, GFP_KERNEL | GFP_DMA32);
+ dev->video_vbase = kzalloc(dev->video_buffer_size, GFP_KERNEL);
if (dev->video_vbase == NULL)
return -ENOMEM;
dev->video_pbase = virt_to_phys(dev->video_vbase);
tpg_reset_source(&dev->tpg, dev->src_rect.width, dev->src_rect.height, dev->field_cap);
dev->crop_cap = dev->src_rect;
dev->crop_bounds_cap = dev->src_rect;
+ if (dev->bitmap_cap &&
+ (dev->compose_cap.width != dev->crop_cap.width ||
+ dev->compose_cap.height != dev->crop_cap.height)) {
+ vfree(dev->bitmap_cap);
+ dev->bitmap_cap = NULL;
+ }
dev->compose_cap = dev->crop_cap;
if (V4L2_FIELD_HAS_T_OR_B(dev->field_cap))
dev->compose_cap.height /= 2;
tpg_s_video_aspect(&dev->tpg, vivid_get_video_aspect(dev));
tpg_s_pixel_aspect(&dev->tpg, vivid_get_pixel_aspect(dev));
tpg_update_mv_step(&dev->tpg);
+
+ /*
+	 * We can be called from within s_ctrl; in that case we can't
+ * modify controls. Luckily we don't need to in that case.
+ */
+ if (keep_controls)
+ return;
+
dims[0] = roundup(dev->src_rect.width, PIXEL_ARRAY_DIV);
dims[1] = roundup(dev->src_rect.height, PIXEL_ARRAY_DIV);
v4l2_ctrl_modify_dimensions(dev->pixel_array, dims);
struct vivid_dev *dev = video_drvdata(file);
struct v4l2_rect *crop = &dev->crop_cap;
struct v4l2_rect *compose = &dev->compose_cap;
+ unsigned orig_compose_w = compose->width;
+ unsigned orig_compose_h = compose->height;
unsigned factor = V4L2_FIELD_HAS_T_OR_B(dev->field_cap) ? 2 : 1;
int ret;
s->r.height /= factor;
}
v4l2_rect_map_inside(&s->r, &dev->fmt_cap_rect);
- if (dev->bitmap_cap && (compose->width != s->r.width ||
- compose->height != s->r.height)) {
- vfree(dev->bitmap_cap);
- dev->bitmap_cap = NULL;
- }
*compose = s->r;
break;
default:
return -EINVAL;
}
+ if (dev->bitmap_cap && (compose->width != orig_compose_w ||
+ compose->height != orig_compose_h)) {
+ vfree(dev->bitmap_cap);
+ dev->bitmap_cap = NULL;
+ }
tpg_s_crop_compose(&dev->tpg, crop, compose);
return 0;
}
return -EINVAL;
if (a->fmt.bytesperline < (a->fmt.width * fmt->bit_depth[0]) / 8)
return -EINVAL;
- if (a->fmt.height * a->fmt.bytesperline < a->fmt.sizeimage)
+ if (a->fmt.bytesperline > a->fmt.sizeimage / a->fmt.height)
+ return -EINVAL;
+
+ /*
+ * Only support the framebuffer of one of the vivid instances.
+ * Anything else is rejected.
+ */
+ if (!vivid_validate_fb(a))
return -EINVAL;
dev->fb_vbase_cap = phys_to_virt((unsigned long)a->base);
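The rewritten size check avoids a u32 multiply that userspace-supplied framebuffer dimensions could overflow: for b > 0, "a <= c / b" with integer division is exactly equivalent to "a * b <= c", but no intermediate value can wrap. A sketch of the idiom:

/*
 * Overflow-safe "a * b <= c" for u32 values; requires b != 0, which the
 * earlier height validation is assumed to guarantee here.
 */
static bool mul_fits(u32 a, u32 b, u32 c)
{
	return a <= c / b;
}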
static int xc_write_reg(struct xc4000_priv *priv, u16 regAddr, u16 i2cData)
{
u8 buf[4];
- int result;
buf[0] = (regAddr >> 8) & 0xFF;
buf[1] = regAddr & 0xFF;
buf[2] = (i2cData >> 8) & 0xFF;
buf[3] = i2cData & 0xFF;
- result = xc_send_i2c_data(priv, buf, 4);
- return result;
+ return xc_send_i2c_data(priv, buf, 4);
}
static int xc_load_i2c_sequence(struct dvb_frontend *fe, const u8 *i2c_sequence)
goto end;
}
- ret = __media_pipeline_start(entity, pipe);
+ ret = __media_pipeline_start(entity->pads, pipe);
if (ret) {
pr_err("Start Pipeline: %s->%s Error %d\n",
source->name, entity->name, ret);
return;
/* stop pipeline */
- __media_pipeline_stop(dev->active_link_owner);
+ __media_pipeline_stop(dev->active_link_owner->pads);
pr_debug("Pipeline stop for %s\n",
dev->active_link_owner->name);
ret = __media_pipeline_start(
- dev->active_link_user,
+ dev->active_link_user->pads,
dev->active_link_user_pipe);
if (ret) {
pr_err("Start Pipeline: %s->%s %d\n",
return;
/* stop pipeline */
- __media_pipeline_stop(dev->active_link_owner);
+ __media_pipeline_stop(dev->active_link_owner->pads);
pr_debug("Pipeline stop for %s\n",
dev->active_link_owner->name);
/*
* AF9035 gpiot2 = FC0012 enable
* XXX: there seems to be something on gpioh8 too, but on my
- * my test I didn't find any difference.
+ * test I didn't find any difference.
*/
if (adap->id == 0) {
*
 * Control bits for previous samples form a 32-bit field, containing 16 x 2-bit
 * numbers. This results in one 2-bit number for 8 samples. It is likely used for
- * for bit shifting sample by given bits, increasing actual sampling resolution.
+ * bit shifting sample by given bits, increasing actual sampling resolution.
* Number 2 (0b10) was never seen.
*
* 6 * 16 * 2 * 4 = 768 samples. 768 * 4 = 3072 bytes
/* Helper function: copy the initial control value back to the caller */
static int def_to_user(struct v4l2_ext_control *c, struct v4l2_ctrl *ctrl)
{
- ctrl->type_ops->init(ctrl, 0, ctrl->elems, ctrl->p_new);
+ ctrl->type_ops->init(ctrl, 0, ctrl->p_new);
return ptr_to_user(c, ctrl, ctrl->p_new);
}
if (ctrl->is_dyn_array)
ctrl->new_elems = elems;
else if (ctrl->is_array)
- ctrl->type_ops->init(ctrl, elems, ctrl->elems, ctrl->p_new);
+ ctrl->type_ops->init(ctrl, elems, ctrl->p_new);
return 0;
}
/* Validate a new control */
static int validate_new(const struct v4l2_ctrl *ctrl, union v4l2_ctrl_ptr p_new)
{
- return ctrl->type_ops->validate(ctrl, ctrl->new_elems, p_new);
+ return ctrl->type_ops->validate(ctrl, p_new);
}
/* Validate controls. */
ctrl->p_cur.p = p_array + elems * ctrl->elem_size;
for (i = 0; i < ctrl->nr_of_dims; i++)
ctrl->dims[i] = dims[i];
- ctrl->type_ops->init(ctrl, 0, elems, ctrl->p_cur);
+ ctrl->type_ops->init(ctrl, 0, ctrl->p_cur);
cur_to_new(ctrl);
send_event(NULL, ctrl, V4L2_EVENT_CTRL_CH_VALUE |
V4L2_EVENT_CTRL_CH_DIMENSIONS);
v4l2_event_queue_fh(sev->fh, &ev);
}
-bool v4l2_ctrl_type_op_equal(const struct v4l2_ctrl *ctrl, u32 elems,
+bool v4l2_ctrl_type_op_equal(const struct v4l2_ctrl *ctrl,
union v4l2_ctrl_ptr ptr1, union v4l2_ctrl_ptr ptr2)
{
unsigned int i;
case V4L2_CTRL_TYPE_BUTTON:
return false;
case V4L2_CTRL_TYPE_STRING:
- for (i = 0; i < elems; i++) {
+ for (i = 0; i < ctrl->elems; i++) {
unsigned int idx = i * ctrl->elem_size;
/* strings are always 0-terminated */
return true;
default:
return !memcmp(ptr1.p_const, ptr2.p_const,
- elems * ctrl->elem_size);
+ ctrl->elems * ctrl->elem_size);
}
}
EXPORT_SYMBOL(v4l2_ctrl_type_op_equal);
}
void v4l2_ctrl_type_op_init(const struct v4l2_ctrl *ctrl, u32 from_idx,
- u32 tot_elems, union v4l2_ctrl_ptr ptr)
+ union v4l2_ctrl_ptr ptr)
{
unsigned int i;
+ u32 tot_elems = ctrl->elems;
u32 elems = tot_elems - from_idx;
if (from_idx >= tot_elems)
}
}
-int v4l2_ctrl_type_op_validate(const struct v4l2_ctrl *ctrl, u32 elems,
+int v4l2_ctrl_type_op_validate(const struct v4l2_ctrl *ctrl,
union v4l2_ctrl_ptr ptr)
{
unsigned int i;
case V4L2_CTRL_TYPE_BUTTON:
case V4L2_CTRL_TYPE_CTRL_CLASS:
- memset(ptr.p_s32, 0, elems * sizeof(s32));
+ memset(ptr.p_s32, 0, ctrl->new_elems * sizeof(s32));
return 0;
}
- for (i = 0; !ret && i < elems; i++)
+ for (i = 0; !ret && i < ctrl->new_elems; i++)
ret = std_validate_elem(ctrl, i, ptr);
return ret;
}
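For reference, the type_ops changes in this series drop the explicit element-count parameters: the operations now read ctrl->elems (or ctrl->new_elems when validating) themselves. The resulting operation signatures, sketched rather than quoted from the header (the .log member is assumed unchanged):

struct v4l2_ctrl_type_ops {
	bool (*equal)(const struct v4l2_ctrl *ctrl,
		      union v4l2_ctrl_ptr ptr1, union v4l2_ctrl_ptr ptr2);
	void (*init)(const struct v4l2_ctrl *ctrl, u32 from_idx,
		     union v4l2_ctrl_ptr ptr);
	void (*log)(const struct v4l2_ctrl *ctrl);
	int (*validate)(const struct v4l2_ctrl *ctrl, union v4l2_ctrl_ptr ptr);
};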
memcpy(ctrl->p_def.p, p_def.p_const, elem_size);
}
- ctrl->type_ops->init(ctrl, 0, elems, ctrl->p_cur);
+ ctrl->type_ops->init(ctrl, 0, ctrl->p_cur);
cur_to_new(ctrl);
if (handler_new_ref(hdl, ctrl, NULL, false, false)) {
ctrl_changed = true;
if (!ctrl_changed)
ctrl_changed = !ctrl->type_ops->equal(ctrl,
- ctrl->elems, ctrl->p_cur, ctrl->p_new);
+ ctrl->p_cur, ctrl->p_new);
ctrl->has_changed = ctrl_changed;
changed |= ctrl->has_changed;
}
}
EXPORT_SYMBOL(video_unregister_device);
+#if defined(CONFIG_MEDIA_CONTROLLER)
+
+__must_check int video_device_pipeline_start(struct video_device *vdev,
+ struct media_pipeline *pipe)
+{
+ struct media_entity *entity = &vdev->entity;
+
+ if (entity->num_pads != 1)
+ return -ENODEV;
+
+ return media_pipeline_start(&entity->pads[0], pipe);
+}
+EXPORT_SYMBOL_GPL(video_device_pipeline_start);
+
+__must_check int __video_device_pipeline_start(struct video_device *vdev,
+ struct media_pipeline *pipe)
+{
+ struct media_entity *entity = &vdev->entity;
+
+ if (entity->num_pads != 1)
+ return -ENODEV;
+
+ return __media_pipeline_start(&entity->pads[0], pipe);
+}
+EXPORT_SYMBOL_GPL(__video_device_pipeline_start);
+
+void video_device_pipeline_stop(struct video_device *vdev)
+{
+ struct media_entity *entity = &vdev->entity;
+
+ if (WARN_ON(entity->num_pads != 1))
+ return;
+
+ return media_pipeline_stop(&entity->pads[0]);
+}
+EXPORT_SYMBOL_GPL(video_device_pipeline_stop);
+
+void __video_device_pipeline_stop(struct video_device *vdev)
+{
+ struct media_entity *entity = &vdev->entity;
+
+ if (WARN_ON(entity->num_pads != 1))
+ return;
+
+ return __media_pipeline_stop(&entity->pads[0]);
+}
+EXPORT_SYMBOL_GPL(__video_device_pipeline_stop);
+
+__must_check int video_device_pipeline_alloc_start(struct video_device *vdev)
+{
+ struct media_entity *entity = &vdev->entity;
+
+ if (entity->num_pads != 1)
+ return -ENODEV;
+
+ return media_pipeline_alloc_start(&entity->pads[0]);
+}
+EXPORT_SYMBOL_GPL(video_device_pipeline_alloc_start);
+
+struct media_pipeline *video_device_pipeline(struct video_device *vdev)
+{
+ struct media_entity *entity = &vdev->entity;
+
+ if (WARN_ON(entity->num_pads != 1))
+ return NULL;
+
+ return media_pad_pipeline(&entity->pads[0]);
+}
+EXPORT_SYMBOL_GPL(video_device_pipeline);
+
+#endif /* CONFIG_MEDIA_CONTROLLER */
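These wrappers let a driver with a single-pad video device manage its pipeline without reaching into media_entity internals, as the sun6i-csi, omap3isp, xilinx, cal and vimc conversions above demonstrate. A minimal vb2 usage sketch with hypothetical my_* names:

static int my_start_streaming(struct vb2_queue *vq, unsigned int count)
{
	struct my_device *mydev = vb2_get_drv_priv(vq);
	int ret;

	/* Allocates a pipeline and starts it on the vdev's only pad. */
	ret = video_device_pipeline_alloc_start(&mydev->vdev);
	if (ret < 0)
		return ret;

	/* ... program the hardware and start DMA ... */

	return 0;
}

static void my_stop_streaming(struct vb2_queue *vq)
{
	struct my_device *mydev = vb2_get_drv_priv(vq);

	/* ... stop DMA ... */

	video_device_pipeline_stop(&mydev->vdev);
}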
+
/*
* Initialise video for linux
*/
(bt->interlaced && !(caps & V4L2_DV_BT_CAP_INTERLACED)) ||
(!bt->interlaced && !(caps & V4L2_DV_BT_CAP_PROGRESSIVE)))
return false;
+
+ /* sanity checks for the blanking timings */
+ if (!bt->interlaced &&
+ (bt->il_vbackporch || bt->il_vsync || bt->il_vfrontporch))
+ return false;
+ if (bt->hfrontporch > 2 * bt->width ||
+ bt->hsync > 1024 || bt->hbackporch > 1024)
+ return false;
+ if (bt->vfrontporch > 4096 ||
+ bt->vsync > 128 || bt->vbackporch > 4096)
+ return false;
+ if (bt->interlaced && (bt->il_vfrontporch > 4096 ||
+ bt->il_vsync > 128 || bt->il_vbackporch > 4096))
+ return false;
return fnc == NULL || fnc(t, fnc_handle);
}
EXPORT_SYMBOL_GPL(v4l2_valid_dv_timings);
goto err_map;
}
+ /* Parse the device's DT node for an endianness specification */
+ if (of_property_read_bool(np, "big-endian"))
+ syscon_config.val_format_endian = REGMAP_ENDIAN_BIG;
+ else if (of_property_read_bool(np, "little-endian"))
+ syscon_config.val_format_endian = REGMAP_ENDIAN_LITTLE;
+ else if (of_property_read_bool(np, "native-endian"))
+ syscon_config.val_format_endian = REGMAP_ENDIAN_NATIVE;
+
/*
* search for reg-io-width property in DT. If it is not provided,
* default to 4 bytes. regmap_init_mmio will return an error if values
* Optionally, build an array of chars that contain the bit numbers allocated.
*/
static unsigned long reserve_resources(unsigned long *p, int n, int mmax,
- char *idx)
+ signed char *idx)
{
unsigned long bits = 0;
int i;
}
unsigned long gru_reserve_cb_resources(struct gru_state *gru, int cbr_au_count,
- char *cbmap)
+ signed char *cbmap)
{
return reserve_resources(&gru->gs_cbr_map, cbr_au_count, GRU_CBR_AU,
cbmap);
}
unsigned long gru_reserve_ds_resources(struct gru_state *gru, int dsr_au_count,
- char *dsmap)
+ signed char *dsmap)
{
return reserve_resources(&gru->gs_dsr_map, dsr_au_count, GRU_DSR_AU,
dsmap);
pid_t ts_tgid_owner; /* task that is using the
context - for migration */
short ts_user_blade_id;/* user selected blade */
- char ts_user_chiplet_id;/* user selected chiplet */
+ signed char ts_user_chiplet_id;/* user selected chiplet */
unsigned short ts_sizeavail; /* Pagesizes in use */
int ts_tsid; /* thread that owns the
structure */
required for context */
unsigned char ts_cbr_au_count;/* Number of CBR resources
required for context */
- char ts_cch_req_slice;/* CCH packet slice */
- char ts_blade; /* If >= 0, migrate context if
+ signed char ts_cch_req_slice;/* CCH packet slice */
+ signed char ts_blade; /* If >= 0, migrate context if
ref from different blade */
- char ts_force_cch_reload;
- char ts_cbr_idx[GRU_CBR_AU];/* CBR numbers of each
+ signed char ts_force_cch_reload;
+ signed char ts_cbr_idx[GRU_CBR_AU];/* CBR numbers of each
allocated CB */
int ts_data_valid; /* Indicates if ts_gdata has
valid data */
int cbr_au_count, int dsr_au_count,
unsigned char tlb_preload_count, int options, int tsid);
extern unsigned long gru_reserve_cb_resources(struct gru_state *gru,
- int cbr_au_count, char *cbmap);
+ int cbr_au_count, signed char *cbmap);
extern unsigned long gru_reserve_ds_resources(struct gru_state *gru,
- int dsr_au_count, char *dsmap);
+ int dsr_au_count, signed char *dsmap);
extern vm_fault_t gru_fault(struct vm_fault *vmf);
extern struct gru_mm_struct *gru_register_mmu_notifier(void);
extern void gru_drop_mmu_notifier(struct gru_mm_struct *gms);
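The char-to-signed-char conversions matter because the signedness of plain char is implementation-defined; it is unsigned on arm, arm64, s390 and powerpc, so a -1 sentinel (as relied on by the ts_blade ">= 0" comment above) reads back as 255 and the test silently misfires. A small illustration:

	char c = -1;		/* 255 on unsigned-plain-char ABIs */
	signed char sc = -1;	/* always -1 */

	if (c >= 0)		/* true on arm64: the sentinel is lost */
		do_migrate();	/* hypothetical */
	if (sc >= 0)		/* never taken here: correct */
		do_migrate();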
* track of the current selected device partition.
*/
unsigned int part_curr;
+#define MMC_BLK_PART_INVALID UINT_MAX /* Unknown partition active */
int area_type;
/* debugfs files (only in main mmc_blk_data) */
return ms;
}
+/*
+ * Attempts to reset the card and get back to the requested partition.
+ * Therefore any error here must result in cancelling the block layer
+ * request; it must not be reattempted without going through the mmc_blk
+ * partition sanity checks.
+ */
static int mmc_blk_reset(struct mmc_blk_data *md, struct mmc_host *host,
int type)
{
int err;
+ struct mmc_blk_data *main_md = dev_get_drvdata(&host->card->dev);
if (md->reset_done & type)
return -EEXIST;
md->reset_done |= type;
err = mmc_hw_reset(host->card);
+ /*
+ * A successful reset will leave the card in the main partition, but
+ * upon failure it might not be, so set it to MMC_BLK_PART_INVALID
+ * in that case.
+ */
+ main_md->part_curr = err ? MMC_BLK_PART_INVALID : main_md->part_type;
+ if (err)
+ return err;
/* Ensure we switch back to the correct partition */
- if (err) {
- struct mmc_blk_data *main_md =
- dev_get_drvdata(&host->card->dev);
- int part_err;
-
- main_md->part_curr = main_md->part_type;
- part_err = mmc_blk_part_switch(host->card, md->part_type);
- if (part_err) {
- /*
- * We have failed to get back into the correct
- * partition, so we need to abort the whole request.
- */
- return -ENODEV;
- }
- }
- return err;
+ if (mmc_blk_part_switch(host->card, md->part_type))
+ /*
+ * We have failed to get back into the correct
+ * partition, so we need to abort the whole request.
+ */
+ return -ENODEV;
+ return 0;
}
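Parking part_curr on an out-of-range sentinel is what makes the "must not be reattempted" rule enforceable: the next partition switch can never early-out on a stale match. A sketch of the effect (the early-out shown is an assumption about mmc_blk_part_switch's fast path, not a quote of it):

	/* After a failed reset, the active partition is unknown: */
	main_md->part_curr = MMC_BLK_PART_INVALID;	/* UINT_MAX */

	/* ...so a later switch cannot skip the real switching work: */
	if (main_md->part_curr == md->part_type)	/* never true now */
		return 0;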
static inline void mmc_blk_reset_success(struct mmc_blk_data *md, int type)
return;
/* Reset before last retry */
- if (mqrq->retries + 1 == MMC_MAX_RETRIES)
- mmc_blk_reset(md, card->host, type);
+ if (mqrq->retries + 1 == MMC_MAX_RETRIES &&
+ mmc_blk_reset(md, card->host, type))
+ return;
/* Command errors fail fast, so use all MMC_MAX_RETRIES */
if (brq->sbc.error || brq->cmd.error)
case REQ_OP_DRV_OUT:
case REQ_OP_DISCARD:
case REQ_OP_SECURE_ERASE:
+ case REQ_OP_WRITE_ZEROES:
return MMC_ISSUE_SYNC;
case REQ_OP_FLUSH:
return mmc_cqe_can_dcmd(host) ? MMC_ISSUE_DCMD : MMC_ISSUE_SYNC;
if (blk_queue_quiesced(q))
blk_mq_unquiesce_queue(q);
+ /*
+ * If the recovery completes the last (and only remaining) request in
+ * the queue, and the card has been removed, we could end up here with
+ * the recovery not quite finished yet, so cancel it.
+ */
+ cancel_work_sync(&mq->recovery_work);
+
blk_mq_free_tag_set(&mq->tag_set);
/*
{
struct sdio_func *func = dev_to_sdio_func(dev);
- sdio_free_func_cis(func);
+ if (!(func->card->quirks & MMC_QUIRK_NONSTD_SDIO))
+ sdio_free_func_cis(func);
kfree(func->info);
kfree(func->tmpbuf);
config MMC_SDHCI_AM654
tristate "Support for the SDHCI Controller in TI's AM654 SOCs"
- depends on MMC_SDHCI_PLTFM && OF && REGMAP_MMIO
+ depends on MMC_SDHCI_PLTFM && OF
select MMC_SDHCI_IO_ACCESSORS
select MMC_CQHCI
+ select REGMAP_MMIO
help
This selects the Secure Digital Host Controller Interface (SDHCI)
support present in TI's AM654 SOCs. The controller supports
host->mmc_host_ops.execute_tuning = usdhc_execute_tuning;
}
+ err = sdhci_esdhc_imx_probe_dt(pdev, host, imx_data);
+ if (err)
+ goto disable_ahb_clk;
+
if (imx_data->socdata->flags & ESDHC_FLAG_MAN_TUNING)
sdhci_esdhc_ops.platform_execute_tuning =
esdhc_executing_tuning;
if (imx_data->socdata->flags & ESDHC_FLAG_ERR004536)
host->quirks |= SDHCI_QUIRK_BROKEN_ADMA;
- if (imx_data->socdata->flags & ESDHC_FLAG_HS400)
+	if (host->mmc->caps & MMC_CAP_8_BIT_DATA &&
+ imx_data->socdata->flags & ESDHC_FLAG_HS400)
host->mmc->caps2 |= MMC_CAP2_HS400;
if (imx_data->socdata->flags & ESDHC_FLAG_BROKEN_AUTO_CMD23)
host->quirks2 |= SDHCI_QUIRK2_ACMD23_BROKEN;
- if (imx_data->socdata->flags & ESDHC_FLAG_HS400_ES) {
+	if (host->mmc->caps & MMC_CAP_8_BIT_DATA &&
+ imx_data->socdata->flags & ESDHC_FLAG_HS400_ES) {
host->mmc->caps2 |= MMC_CAP2_HS400_ES;
host->mmc_host_ops.hs400_enhanced_strobe =
esdhc_hs400_enhanced_strobe;
goto disable_ahb_clk;
}
- err = sdhci_esdhc_imx_probe_dt(pdev, host, imx_data);
- if (err)
- goto disable_ahb_clk;
-
sdhci_esdhc_imx_hwinit(host);
err = sdhci_add_host(host);
dmi_match(DMI_SYS_VENDOR, "IRBIS"));
}
+static bool jsl_broken_hs400es(struct sdhci_pci_slot *slot)
+{
+ return slot->chip->pdev->device == PCI_DEVICE_ID_INTEL_JSL_EMMC &&
+ dmi_match(DMI_BIOS_VENDOR, "ASUSTeK COMPUTER INC.");
+}
+
static int glk_emmc_probe_slot(struct sdhci_pci_slot *slot)
{
int ret = byt_emmc_probe_slot(slot);
slot->host->mmc->caps2 |= MMC_CAP2_CQE;
if (slot->chip->pdev->device != PCI_DEVICE_ID_INTEL_GLK_EMMC) {
- slot->host->mmc->caps2 |= MMC_CAP2_HS400_ES;
- slot->host->mmc_host_ops.hs400_enhanced_strobe =
- intel_hs400_enhanced_strobe;
+ if (!jsl_broken_hs400es(slot)) {
+ slot->host->mmc->caps2 |= MMC_CAP2_HS400_ES;
+ slot->host->mmc_host_ops.hs400_enhanced_strobe =
+ intel_hs400_enhanced_strobe;
+ }
slot->host->mmc->caps2 |= MMC_CAP2_CQE_DCMD;
}
if (!mtd_is_partition(mtd))
return;
parent = mtd->parent;
- parent_dn = dev_of_node(&parent->dev);
+ parent_dn = of_node_get(dev_of_node(&parent->dev));
if (!parent_dn)
return;
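dev_of_node() returns the node without taking a reference, so code that later drops a reference with of_node_put() must take its own via of_node_get() first; of_node_get() also tolerates a NULL argument, keeping the error path simple. The general pairing, as a sketch:

	struct device_node *np = of_node_get(dev_of_node(dev));

	if (!np)
		return;

	/* ... walk children, read properties ... */

	of_node_put(np);	/* balances the of_node_get() above */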
ret = of_property_read_u32(chip_np, "reg", &cs);
if (ret) {
dev_err(dev, "failed to get chip select: %d\n", ret);
- return ret;
+ goto err_of_node_put;
}
if (cs >= MAX_CS) {
dev_err(dev, "got invalid chip select: %d\n", cs);
- return -EINVAL;
+ ret = -EINVAL;
+ goto err_of_node_put;
}
ebu_host->cs_num = cs;
resname = devm_kasprintf(dev, GFP_KERNEL, "nand_cs%d", cs);
ebu_host->cs[cs].chipaddr = devm_platform_ioremap_resource_byname(pdev,
resname);
- if (IS_ERR(ebu_host->cs[cs].chipaddr))
- return PTR_ERR(ebu_host->cs[cs].chipaddr);
+ if (IS_ERR(ebu_host->cs[cs].chipaddr)) {
+ ret = PTR_ERR(ebu_host->cs[cs].chipaddr);
+ goto err_of_node_put;
+ }
ebu_host->clk = devm_clk_get(dev, NULL);
- if (IS_ERR(ebu_host->clk))
- return dev_err_probe(dev, PTR_ERR(ebu_host->clk),
- "failed to get clock\n");
+ if (IS_ERR(ebu_host->clk)) {
+ ret = dev_err_probe(dev, PTR_ERR(ebu_host->clk),
+ "failed to get clock\n");
+ goto err_of_node_put;
+ }
ret = clk_prepare_enable(ebu_host->clk);
if (ret) {
dev_err(dev, "failed to enable clock: %d\n", ret);
- return ret;
+ goto err_of_node_put;
}
ebu_host->dma_tx = dma_request_chan(dev, "tx");
ebu_dma_cleanup(ebu_host);
err_disable_unprepare_clk:
clk_disable_unprepare(ebu_host->clk);
+err_of_node_put:
+ of_node_put(chip_np);
return ret;
}
chip->controller = &nfc->controller;
nand_set_flash_node(chip, np);
- if (!of_property_read_bool(np, "marvell,nand-keep-config"))
+ if (of_property_read_bool(np, "marvell,nand-keep-config"))
chip->options |= NAND_KEEP_TIMINGS;
mtd = nand_to_mtd(chip);
pm_runtime_enable(&pdev->dev);
err = pm_runtime_resume_and_get(&pdev->dev);
if (err)
- return err;
+ goto err_dis_pm;
err = reset_control_reset(rst);
if (err) {
err_put_pm:
pm_runtime_put_sync_suspend(ctrl->dev);
pm_runtime_force_suspend(ctrl->dev);
+err_dis_pm:
+ pm_runtime_disable(&pdev->dev);
return err;
}
}
/* Read middle of the block */
- err = mtd_read(master, offset + 0x8000, 0x4, &bytes_read,
+ err = mtd_read(master, offset + (blocksize / 2), 0x4, &bytes_read,
(uint8_t *)buf);
if (err && !mtd_is_bitflip(err)) {
pr_err("mtd_read error while parsing (offset: 0x%X): %d\n",
- offset + 0x8000, err);
+ offset + (blocksize / 2), err);
continue;
}
*/
WARN_ONCE(nor->flags & SNOR_F_BROKEN_RESET,
"enabling reset hack; may not recover from unexpected reboots\n");
- return nor->params->set_4byte_addr_mode(nor, true);
+ err = nor->params->set_4byte_addr_mode(nor, true);
+ if (err && err != -ENOTSUPP)
+ return err;
}
return 0;
&mscan_clksrc);
if (!priv->can.clock.freq) {
dev_err(&ofdev->dev, "couldn't get MSCAN clock properties\n");
- goto exit_free_mscan;
+ goto exit_put_clock;
}
err = register_mscandev(dev, mscan_clksrc);
if (err) {
dev_err(&ofdev->dev, "registering %s failed (err=%d)\n",
DRV_NAME, err);
- goto exit_free_mscan;
+ goto exit_put_clock;
}
dev_info(&ofdev->dev, "MSCAN at 0x%p, irq %d, clock %d Hz\n",
return 0;
-exit_free_mscan:
+exit_put_clock:
+ if (data->put_clock)
+ data->put_clock(ofdev);
free_candev(dev);
exit_dispose_irq:
irq_dispose_mapping(irq);
{
struct rcar_canfd_channel *priv = gpriv->ch[ch];
u32 ridx = ch + RCANFD_RFFIFO_IDX;
- u32 sts;
+ u32 sts, cc;
/* Handle Rx interrupts */
sts = rcar_canfd_read(priv->base, RCANFD_RFSTS(gpriv, ridx));
- if (likely(sts & RCANFD_RFSTS_RFIF)) {
+ cc = rcar_canfd_read(priv->base, RCANFD_RFCC(gpriv, ridx));
+ if (likely(sts & RCANFD_RFSTS_RFIF &&
+ cc & RCANFD_RFCC_RFIE)) {
if (napi_schedule_prep(&priv->napi)) {
/* Disable Rx FIFO interrupts */
rcar_canfd_clear_bit(priv->base,
static irqreturn_t rcar_canfd_channel_tx_interrupt(int irq, void *dev_id)
{
- struct rcar_canfd_global *gpriv = dev_id;
- u32 ch;
+ struct rcar_canfd_channel *priv = dev_id;
- for_each_set_bit(ch, &gpriv->channels_mask, gpriv->max_channels)
- rcar_canfd_handle_channel_tx(gpriv, ch);
+ rcar_canfd_handle_channel_tx(priv->gpriv, priv->channel);
return IRQ_HANDLED;
}
static irqreturn_t rcar_canfd_channel_err_interrupt(int irq, void *dev_id)
{
- struct rcar_canfd_global *gpriv = dev_id;
- u32 ch;
+ struct rcar_canfd_channel *priv = dev_id;
- for_each_set_bit(ch, &gpriv->channels_mask, gpriv->max_channels)
- rcar_canfd_handle_channel_err(gpriv, ch);
+ rcar_canfd_handle_channel_err(priv->gpriv, priv->channel);
return IRQ_HANDLED;
}
priv->ndev = ndev;
priv->base = gpriv->base;
priv->channel = ch;
+ priv->gpriv = gpriv;
priv->can.clock.freq = fcan_freq;
dev_info(&pdev->dev, "can_clk rate is %u\n", priv->can.clock.freq);
}
err = devm_request_irq(&pdev->dev, err_irq,
rcar_canfd_channel_err_interrupt, 0,
- irq_name, gpriv);
+ irq_name, priv);
if (err) {
dev_err(&pdev->dev, "devm_request_irq CH Err(%d) failed, error %d\n",
err_irq, err);
}
err = devm_request_irq(&pdev->dev, tx_irq,
rcar_canfd_channel_tx_interrupt, 0,
- irq_name, gpriv);
+ irq_name, priv);
if (err) {
dev_err(&pdev->dev, "devm_request_irq Tx (%d) failed, error %d\n",
tx_irq, err);
priv->can.do_set_mode = rcar_canfd_do_set_mode;
priv->can.do_get_berr_counter = rcar_canfd_get_berr_counter;
- priv->gpriv = gpriv;
SET_NETDEV_DEV(ndev, &pdev->dev);
netif_napi_add_weight(ndev, &priv->napi, rcar_canfd_rx_poll,
ret = mcp251x_gpio_setup(priv);
if (ret)
- goto error_probe;
+ goto out_unregister_candev;
netdev_info(net, "MCP%x successfully initialized.\n", priv->model);
return 0;
+out_unregister_candev:
+ unregister_candev(net);
+
error_probe:
destroy_workqueue(priv->wq);
priv->wq = NULL;
{
int err;
- init_completion(&priv->start_comp);
+ reinit_completion(&priv->start_comp);
err = kvaser_usb_hydra_send_simple_cmd(priv->dev, CMD_START_CHIP_REQ,
priv->channel);
{
int err;
- init_completion(&priv->stop_comp);
+ reinit_completion(&priv->stop_comp);
/* Make sure we do not report invalid BUS_OFF from CMD_CHIP_STATE_EVENT
* see comment in kvaser_usb_hydra_update_state()
{
int err;
- init_completion(&priv->start_comp);
+ reinit_completion(&priv->start_comp);
err = kvaser_usb_leaf_send_simple_cmd(priv->dev, CMD_START_CHIP,
priv->channel);
{
int err;
- init_completion(&priv->stop_comp);
+ reinit_completion(&priv->stop_comp);
err = kvaser_usb_leaf_send_simple_cmd(priv->dev, CMD_STOP_CHIP,
priv->channel);
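
The four kvaser_usb hunks replace init_completion() with reinit_completion(): these completions live in a long-lived priv structure and are reused on every start/stop cycle, and re-running the full initializer on a completion that may still have waiters corrupts its wait-queue state. A minimal sketch of the intended split, assuming a reusable start_comp field and a hypothetical send_start_cmd():

	/* once, at allocation time (e.g. probe) */
	init_completion(&priv->start_comp);

	/* on every later start request: only reset the "done" count */
	reinit_completion(&priv->start_comp);
	send_start_cmd(priv);				/* hypothetical */
	if (!wait_for_completion_timeout(&priv->start_comp, HZ))
		return -ETIMEDOUT;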
#define NUM_FIXED_PHYS (DSA_LOOP_NUM_PORTS - 2)
+static void dsa_loop_phydevs_unregister(void)
+{
+ unsigned int i;
+
+ for (i = 0; i < NUM_FIXED_PHYS; i++)
+ if (!IS_ERR(phydevs[i])) {
+ fixed_phy_unregister(phydevs[i]);
+ phy_device_free(phydevs[i]);
+ }
+}
+
static int __init dsa_loop_init(void)
{
struct fixed_phy_status status = {
.speed = SPEED_100,
.duplex = DUPLEX_FULL,
};
- unsigned int i;
+ unsigned int i, ret;
for (i = 0; i < NUM_FIXED_PHYS; i++)
phydevs[i] = fixed_phy_register(PHY_POLL, &status, NULL);
- return mdio_driver_register(&dsa_loop_drv);
+ ret = mdio_driver_register(&dsa_loop_drv);
+ if (ret)
+ dsa_loop_phydevs_unregister();
+
+ return ret;
}
module_init(dsa_loop_init);
static void __exit dsa_loop_exit(void)
{
- unsigned int i;
-
mdio_driver_unregister(&dsa_loop_drv);
- for (i = 0; i < NUM_FIXED_PHYS; i++)
- if (!IS_ERR(phydevs[i]))
- fixed_phy_unregister(phydevs[i]);
+ dsa_loop_phydevs_unregister();
}
module_exit(dsa_loop_exit);
struct qca8k_mgmt_eth_data *mgmt_eth_data;
struct qca8k_priv *priv = ds->priv;
struct qca_mgmt_ethhdr *mgmt_ethhdr;
+ u32 command;
u8 len, cmd;
+ int i;
mgmt_ethhdr = (struct qca_mgmt_ethhdr *)skb_mac_header(skb);
mgmt_eth_data = &priv->mgmt_eth_data;
- cmd = FIELD_GET(QCA_HDR_MGMT_CMD, mgmt_ethhdr->command);
- len = FIELD_GET(QCA_HDR_MGMT_LENGTH, mgmt_ethhdr->command);
+ command = get_unaligned_le32(&mgmt_ethhdr->command);
+ cmd = FIELD_GET(QCA_HDR_MGMT_CMD, command);
+ len = FIELD_GET(QCA_HDR_MGMT_LENGTH, command);
/* Make sure the seq matches the requested packet */
- if (mgmt_ethhdr->seq == mgmt_eth_data->seq)
+ if (get_unaligned_le32(&mgmt_ethhdr->seq) == mgmt_eth_data->seq)
mgmt_eth_data->ack = true;
if (cmd == MDIO_READ) {
- mgmt_eth_data->data[0] = mgmt_ethhdr->mdio_data;
+ u32 *val = mgmt_eth_data->data;
+
+ *val = get_unaligned_le32(&mgmt_ethhdr->mdio_data);
/* Get the rest of the 12 bytes of data.
* The read/write function will extract the requested data.
*/
- if (len > QCA_HDR_MGMT_DATA1_LEN)
- memcpy(mgmt_eth_data->data + 1, skb->data,
- QCA_HDR_MGMT_DATA2_LEN);
+ if (len > QCA_HDR_MGMT_DATA1_LEN) {
+ __le32 *data2 = (__le32 *)skb->data;
+ int data_len = min_t(int, QCA_HDR_MGMT_DATA2_LEN,
+ len - QCA_HDR_MGMT_DATA1_LEN);
+
+ val++;
+
+ for (i = sizeof(u32); i <= data_len; i += sizeof(u32)) {
+ *val = get_unaligned_le32(data2);
+ val++;
+ data2++;
+ }
+ }
}
complete(&mgmt_eth_data->rw_done);
struct qca_mgmt_ethhdr *mgmt_ethhdr;
unsigned int real_len;
struct sk_buff *skb;
- u32 *data2;
+ __le32 *data2;
+ u32 command;
u16 hdr;
+ int i;
skb = dev_alloc_skb(QCA_HDR_MGMT_PKT_LEN);
if (!skb)
hdr |= FIELD_PREP(QCA_HDR_XMIT_DP_BIT, BIT(0));
hdr |= FIELD_PREP(QCA_HDR_XMIT_CONTROL, QCA_HDR_XMIT_TYPE_RW_REG);
- mgmt_ethhdr->command = FIELD_PREP(QCA_HDR_MGMT_ADDR, reg);
- mgmt_ethhdr->command |= FIELD_PREP(QCA_HDR_MGMT_LENGTH, real_len);
- mgmt_ethhdr->command |= FIELD_PREP(QCA_HDR_MGMT_CMD, cmd);
- mgmt_ethhdr->command |= FIELD_PREP(QCA_HDR_MGMT_CHECK_CODE,
+ command = FIELD_PREP(QCA_HDR_MGMT_ADDR, reg);
+ command |= FIELD_PREP(QCA_HDR_MGMT_LENGTH, real_len);
+ command |= FIELD_PREP(QCA_HDR_MGMT_CMD, cmd);
+ command |= FIELD_PREP(QCA_HDR_MGMT_CHECK_CODE,
QCA_HDR_MGMT_CHECK_CODE_VAL);
+ put_unaligned_le32(command, &mgmt_ethhdr->command);
+
if (cmd == MDIO_WRITE)
- mgmt_ethhdr->mdio_data = *val;
+ put_unaligned_le32(*val, &mgmt_ethhdr->mdio_data);
mgmt_ethhdr->hdr = htons(hdr);
data2 = skb_put_zero(skb, QCA_HDR_MGMT_DATA2_LEN + QCA_HDR_MGMT_PADDING_LEN);
- if (cmd == MDIO_WRITE && len > QCA_HDR_MGMT_DATA1_LEN)
- memcpy(data2, val + 1, len - QCA_HDR_MGMT_DATA1_LEN);
+ if (cmd == MDIO_WRITE && len > QCA_HDR_MGMT_DATA1_LEN) {
+ int data_len = min_t(int, QCA_HDR_MGMT_DATA2_LEN,
+ len - QCA_HDR_MGMT_DATA1_LEN);
+
+ val++;
+
+ for (i = sizeof(u32); i <= data_len; i += sizeof(u32)) {
+ put_unaligned_le32(*val, data2);
+ data2++;
+ val++;
+ }
+ }
return skb;
}
static void qca8k_mdio_header_fill_seq_num(struct sk_buff *skb, u32 seq_num)
{
struct qca_mgmt_ethhdr *mgmt_ethhdr;
+ u32 seq;
+ seq = FIELD_PREP(QCA_HDR_MGMT_SEQ_NUM, seq_num);
mgmt_ethhdr = (struct qca_mgmt_ethhdr *)skb->data;
- mgmt_ethhdr->seq = FIELD_PREP(QCA_HDR_MGMT_SEQ_NUM, seq_num);
+ put_unaligned_le32(seq, &mgmt_ethhdr->seq);
}
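
All of the qca8k header accesses above switch to get_unaligned_le32()/put_unaligned_le32() because the management header lives in packet data: its u32 fields are little-endian on the wire and not guaranteed to be naturally aligned on the host. The accessor pair in isolation (a sketch; the offset is arbitrary):

	#include <asm/unaligned.h>

	__le32 *field = (__le32 *)(skb->data + 4);	/* may be unaligned */
	u32 v;

	v = get_unaligned_le32(field);	/* byte-safe load, LE -> CPU */
	v |= 0x1;
	put_unaligned_le32(v, field);	/* CPU -> LE, byte-safe store */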
static int qca8k_read_eth(struct qca8k_priv *priv, u32 reg, u32 *val, int len)
struct qca8k_priv *priv = ds->priv;
const struct qca8k_mib_desc *mib;
struct mib_ethhdr *mib_ethhdr;
- int i, mib_len, offset = 0;
- u64 *data;
+ __le32 *data2;
u8 port;
+ int i;
mib_ethhdr = (struct mib_ethhdr *)skb_mac_header(skb);
mib_eth_data = &priv->mib_eth_data;
if (port != mib_eth_data->req_port)
goto exit;
- data = mib_eth_data->data;
+ data2 = (__le32 *)skb->data;
for (i = 0; i < priv->info->mib_count; i++) {
mib = &ar8327_mib[i];
/* First 3 mib are present in the skb head */
if (i < 3) {
- data[i] = mib_ethhdr->data[i];
+ mib_eth_data->data[i] = get_unaligned_le32(mib_ethhdr->data + i);
continue;
}
- mib_len = sizeof(uint32_t);
-
/* Some MIBs are 64 bits wide */
if (mib->size == 2)
- mib_len = sizeof(uint64_t);
-
- /* Copy the mib value from packet to the */
- memcpy(data + i, skb->data + offset, mib_len);
+ mib_eth_data->data[i] = get_unaligned_le64((__le64 *)data2);
+ else
+ mib_eth_data->data[i] = get_unaligned_le32(data2);
- /* Set the offset for the next mib */
- offset += mib_len;
+ data2 += mib->size;
}
exit:
.notifier_call = adin1110_switchdev_event,
};
-static void adin1110_unregister_notifiers(void *data)
+static void adin1110_unregister_notifiers(void)
{
unregister_switchdev_blocking_notifier(&adin1110_switchdev_blocking_notifier);
unregister_switchdev_notifier(&adin1110_switchdev_notifier);
unregister_netdevice_notifier(&adin1110_netdevice_nb);
}
-static int adin1110_setup_notifiers(struct adin1110_priv *priv)
+static int adin1110_setup_notifiers(void)
{
- struct device *dev = &priv->spidev->dev;
int ret;
ret = register_netdevice_notifier(&adin1110_netdevice_nb);
if (ret < 0)
goto err_sdev;
- return devm_add_action_or_reset(dev, adin1110_unregister_notifiers, NULL);
+ return 0;
err_sdev:
unregister_switchdev_notifier(&adin1110_switchdev_notifier);
err_netdev:
unregister_netdevice_notifier(&adin1110_netdevice_nb);
+
return ret;
}
if (ret < 0)
return ret;
- ret = adin1110_setup_notifiers(priv);
- if (ret < 0)
- return ret;
-
for (i = 0; i < priv->cfg->ports_nr; i++) {
ret = devm_register_netdev(dev, priv->ports[i]->netdev);
if (ret < 0) {
.probe = adin1110_probe,
.id_table = adin1110_spi_id,
};
-module_spi_driver(adin1110_driver);
+
+static int __init adin1110_driver_init(void)
+{
+ int ret;
+
+ ret = adin1110_setup_notifiers();
+ if (ret < 0)
+ return ret;
+
+ ret = spi_register_driver(&adin1110_driver);
+ if (ret < 0) {
+ adin1110_unregister_notifiers();
+ return ret;
+ }
+
+ return 0;
+}
+
+static void __exit adin1110_exit(void)
+{
+ adin1110_unregister_notifiers();
+ spi_unregister_driver(&adin1110_driver);
+}
+module_init(adin1110_driver_init);
+module_exit(adin1110_exit);
MODULE_DESCRIPTION("ADIN1110 Network driver");
MODULE_AUTHOR("Alexandru Tachici <alexandru.tachici@analog.com>");
/* Yellow Carp devices do not need cdr workaround */
pdata->vdata->an_cdr_workaround = 0;
+
+ /* Yellow Carp devices do not need rrc */
+ pdata->vdata->enable_rrc = 0;
} else {
pdata->xpcs_window_def_reg = PCS_V2_WINDOW_DEF;
pdata->xpcs_window_sel_reg = PCS_V2_WINDOW_SELECT;
.tx_desc_prefetch = 5,
.rx_desc_prefetch = 5,
.an_cdr_workaround = 1,
+ .enable_rrc = 1,
};
static struct xgbe_version_data xgbe_v2b = {
.tx_desc_prefetch = 5,
.rx_desc_prefetch = 5,
.an_cdr_workaround = 1,
+ .enable_rrc = 1,
};
static const struct pci_device_id xgbe_pci_table[] = {
#define XGBE_SFP_BASE_BR_1GBE_MAX 0x0d
#define XGBE_SFP_BASE_BR_10GBE_MIN 0x64
#define XGBE_SFP_BASE_BR_10GBE_MAX 0x68
+#define XGBE_MOLEX_SFP_BASE_BR_10GBE_MAX 0x78
#define XGBE_SFP_BASE_CU_CABLE_LEN 18
#define XGBE_BEL_FUSE_VENDOR "BEL-FUSE "
#define XGBE_BEL_FUSE_PARTNO "1GBT-SFP06 "
+#define XGBE_MOLEX_VENDOR "Molex Inc. "
+
struct xgbe_sfp_ascii {
union {
char vendor[XGBE_SFP_BASE_VENDOR_NAME_LEN + 1];
break;
case XGBE_SFP_SPEED_10000:
min = XGBE_SFP_BASE_BR_10GBE_MIN;
- max = XGBE_SFP_BASE_BR_10GBE_MAX;
+ if (memcmp(&sfp_eeprom->base[XGBE_SFP_BASE_VENDOR_NAME],
+ XGBE_MOLEX_VENDOR, XGBE_SFP_BASE_VENDOR_NAME_LEN) == 0)
+ max = XGBE_MOLEX_SFP_BASE_BR_10GBE_MAX;
+ else
+ max = XGBE_SFP_BASE_BR_10GBE_MAX;
break;
default:
return false;
}
/* Determine the type of SFP */
- if (sfp_base[XGBE_SFP_BASE_10GBE_CC] & XGBE_SFP_BASE_10GBE_CC_SR)
+ if (phy_data->sfp_cable == XGBE_SFP_CABLE_PASSIVE &&
+ xgbe_phy_sfp_bit_rate(sfp_eeprom, XGBE_SFP_SPEED_10000))
+ phy_data->sfp_base = XGBE_SFP_BASE_10000_CR;
+ else if (sfp_base[XGBE_SFP_BASE_10GBE_CC] & XGBE_SFP_BASE_10GBE_CC_SR)
phy_data->sfp_base = XGBE_SFP_BASE_10000_SR;
else if (sfp_base[XGBE_SFP_BASE_10GBE_CC] & XGBE_SFP_BASE_10GBE_CC_LR)
phy_data->sfp_base = XGBE_SFP_BASE_10000_LR;
phy_data->sfp_base = XGBE_SFP_BASE_1000_CX;
else if (sfp_base[XGBE_SFP_BASE_1GBE_CC] & XGBE_SFP_BASE_1GBE_CC_T)
phy_data->sfp_base = XGBE_SFP_BASE_1000_T;
- else if ((phy_data->sfp_cable == XGBE_SFP_CABLE_PASSIVE) &&
- xgbe_phy_sfp_bit_rate(sfp_eeprom, XGBE_SFP_SPEED_10000))
- phy_data->sfp_base = XGBE_SFP_BASE_10000_CR;
switch (phy_data->sfp_base) {
case XGBE_SFP_BASE_1000_T:
static void xgbe_phy_pll_ctrl(struct xgbe_prv_data *pdata, bool enable)
{
+ /* PLL_CTRL feature needs to be enabled for fixed PHY modes (Non-Autoneg) only */
+ if (pdata->phy.autoneg != AUTONEG_DISABLE)
+ return;
+
XMDIO_WRITE_BITS(pdata, MDIO_MMD_PMAPMD, MDIO_VEND2_PMA_MISC_CTRL0,
XGBE_PMA_PLL_CTRL_MASK,
enable ? XGBE_PMA_PLL_CTRL_ENABLE
}
static void xgbe_phy_perform_ratechange(struct xgbe_prv_data *pdata,
- unsigned int cmd, unsigned int sub_cmd)
+ enum xgbe_mb_cmd cmd, enum xgbe_mb_subcmd sub_cmd)
{
unsigned int s0 = 0;
unsigned int wait;
xgbe_phy_rx_reset(pdata);
reenable_pll:
- /* Enable PLL re-initialization */
- xgbe_phy_pll_ctrl(pdata, true);
+ /* Enable PLL re-initialization, not needed for PHY Power Off and RRC cmds */
+ if (cmd != XGBE_MB_CMD_POWER_OFF &&
+ cmd != XGBE_MB_CMD_RRC)
+ xgbe_phy_pll_ctrl(pdata, true);
}
static void xgbe_phy_rrc(struct xgbe_prv_data *pdata)
{
/* Receiver Reset Cycle */
- xgbe_phy_perform_ratechange(pdata, 5, 0);
+ xgbe_phy_perform_ratechange(pdata, XGBE_MB_CMD_RRC, XGBE_MB_SUBCMD_NONE);
netif_dbg(pdata, link, pdata->netdev, "receiver reset complete\n");
}
struct xgbe_phy_data *phy_data = pdata->phy_data;
/* Power off */
- xgbe_phy_perform_ratechange(pdata, 0, 0);
+ xgbe_phy_perform_ratechange(pdata, XGBE_MB_CMD_POWER_OFF, XGBE_MB_SUBCMD_NONE);
phy_data->cur_mode = XGBE_MODE_UNKNOWN;
/* 10G/SFI */
if (phy_data->sfp_cable != XGBE_SFP_CABLE_PASSIVE) {
- xgbe_phy_perform_ratechange(pdata, 3, 0);
+ xgbe_phy_perform_ratechange(pdata, XGBE_MB_CMD_SET_10G_SFI, XGBE_MB_SUBCMD_ACTIVE);
} else {
if (phy_data->sfp_cable_len <= 1)
- xgbe_phy_perform_ratechange(pdata, 3, 1);
+ xgbe_phy_perform_ratechange(pdata, XGBE_MB_CMD_SET_10G_SFI,
+ XGBE_MB_SUBCMD_PASSIVE_1M);
else if (phy_data->sfp_cable_len <= 3)
- xgbe_phy_perform_ratechange(pdata, 3, 2);
+ xgbe_phy_perform_ratechange(pdata, XGBE_MB_CMD_SET_10G_SFI,
+ XGBE_MB_SUBCMD_PASSIVE_3M);
else
- xgbe_phy_perform_ratechange(pdata, 3, 3);
+ xgbe_phy_perform_ratechange(pdata, XGBE_MB_CMD_SET_10G_SFI,
+ XGBE_MB_SUBCMD_PASSIVE_OTHER);
}
phy_data->cur_mode = XGBE_MODE_SFI;
xgbe_phy_set_redrv_mode(pdata);
/* 1G/X */
- xgbe_phy_perform_ratechange(pdata, 1, 3);
+ xgbe_phy_perform_ratechange(pdata, XGBE_MB_CMD_SET_1G, XGBE_MB_SUBCMD_1G_KX);
phy_data->cur_mode = XGBE_MODE_X;
xgbe_phy_set_redrv_mode(pdata);
/* 1G/SGMII */
- xgbe_phy_perform_ratechange(pdata, 1, 2);
+ xgbe_phy_perform_ratechange(pdata, XGBE_MB_CMD_SET_1G, XGBE_MB_SUBCMD_1G_SGMII);
phy_data->cur_mode = XGBE_MODE_SGMII_1000;
xgbe_phy_set_redrv_mode(pdata);
/* 100M/SGMII */
- xgbe_phy_perform_ratechange(pdata, 1, 1);
+ xgbe_phy_perform_ratechange(pdata, XGBE_MB_CMD_SET_1G, XGBE_MB_SUBCMD_100MBITS);
phy_data->cur_mode = XGBE_MODE_SGMII_100;
xgbe_phy_set_redrv_mode(pdata);
/* 10G/KR */
- xgbe_phy_perform_ratechange(pdata, 4, 0);
+ xgbe_phy_perform_ratechange(pdata, XGBE_MB_CMD_SET_10G_KR, XGBE_MB_SUBCMD_NONE);
phy_data->cur_mode = XGBE_MODE_KR;
xgbe_phy_set_redrv_mode(pdata);
/* 2.5G/KX */
- xgbe_phy_perform_ratechange(pdata, 2, 0);
+ xgbe_phy_perform_ratechange(pdata, XGBE_MB_CMD_SET_2_5G, XGBE_MB_SUBCMD_NONE);
phy_data->cur_mode = XGBE_MODE_KX_2500;
xgbe_phy_set_redrv_mode(pdata);
/* 1G/KX */
- xgbe_phy_perform_ratechange(pdata, 1, 3);
+ xgbe_phy_perform_ratechange(pdata, XGBE_MB_CMD_SET_1G, XGBE_MB_SUBCMD_1G_KX);
phy_data->cur_mode = XGBE_MODE_KX_1000;
}
/* No link, attempt a receiver reset cycle */
- if (phy_data->rrc_count++ > XGBE_RRC_FREQUENCY) {
+ if (pdata->vdata->enable_rrc && phy_data->rrc_count++ > XGBE_RRC_FREQUENCY) {
phy_data->rrc_count = 0;
xgbe_phy_rrc(pdata);
}
XGBE_MDIO_MODE_CL45,
};
+enum xgbe_mb_cmd {
+ XGBE_MB_CMD_POWER_OFF = 0,
+ XGBE_MB_CMD_SET_1G,
+ XGBE_MB_CMD_SET_2_5G,
+ XGBE_MB_CMD_SET_10G_SFI,
+ XGBE_MB_CMD_SET_10G_KR,
+ XGBE_MB_CMD_RRC
+};
+
+enum xgbe_mb_subcmd {
+ XGBE_MB_SUBCMD_NONE = 0,
+
+ /* 10GbE SFP subcommands */
+ XGBE_MB_SUBCMD_ACTIVE = 0,
+ XGBE_MB_SUBCMD_PASSIVE_1M,
+ XGBE_MB_SUBCMD_PASSIVE_3M,
+ XGBE_MB_SUBCMD_PASSIVE_OTHER,
+
+ /* 1GbE Mode subcommands */
+ XGBE_MB_SUBCMD_10MBITS = 0,
+ XGBE_MB_SUBCMD_100MBITS,
+ XGBE_MB_SUBCMD_1G_SGMII,
+ XGBE_MB_SUBCMD_1G_KX
+};
+
struct xgbe_phy {
struct ethtool_link_ksettings lks;
unsigned int tx_desc_prefetch;
unsigned int rx_desc_prefetch;
unsigned int an_cdr_workaround;
+ unsigned int enable_rrc;
};
struct xgbe_prv_data {
egress_sa_threshold_expired);
}
+#define AQ_LOCKED_MDO_DEF(mdo) \
+static int aq_locked_mdo_##mdo(struct macsec_context *ctx) \
+{ \
+ struct aq_nic_s *nic = netdev_priv(ctx->netdev); \
+ int ret; \
+ mutex_lock(&nic->macsec_mutex); \
+ ret = aq_mdo_##mdo(ctx); \
+ mutex_unlock(&nic->macsec_mutex); \
+ return ret; \
+}
+
+AQ_LOCKED_MDO_DEF(dev_open)
+AQ_LOCKED_MDO_DEF(dev_stop)
+AQ_LOCKED_MDO_DEF(add_secy)
+AQ_LOCKED_MDO_DEF(upd_secy)
+AQ_LOCKED_MDO_DEF(del_secy)
+AQ_LOCKED_MDO_DEF(add_rxsc)
+AQ_LOCKED_MDO_DEF(upd_rxsc)
+AQ_LOCKED_MDO_DEF(del_rxsc)
+AQ_LOCKED_MDO_DEF(add_rxsa)
+AQ_LOCKED_MDO_DEF(upd_rxsa)
+AQ_LOCKED_MDO_DEF(del_rxsa)
+AQ_LOCKED_MDO_DEF(add_txsa)
+AQ_LOCKED_MDO_DEF(upd_txsa)
+AQ_LOCKED_MDO_DEF(del_txsa)
+AQ_LOCKED_MDO_DEF(get_dev_stats)
+AQ_LOCKED_MDO_DEF(get_tx_sc_stats)
+AQ_LOCKED_MDO_DEF(get_tx_sa_stats)
+AQ_LOCKED_MDO_DEF(get_rx_sc_stats)
+AQ_LOCKED_MDO_DEF(get_rx_sa_stats)
+
const struct macsec_ops aq_macsec_ops = {
- .mdo_dev_open = aq_mdo_dev_open,
- .mdo_dev_stop = aq_mdo_dev_stop,
- .mdo_add_secy = aq_mdo_add_secy,
- .mdo_upd_secy = aq_mdo_upd_secy,
- .mdo_del_secy = aq_mdo_del_secy,
- .mdo_add_rxsc = aq_mdo_add_rxsc,
- .mdo_upd_rxsc = aq_mdo_upd_rxsc,
- .mdo_del_rxsc = aq_mdo_del_rxsc,
- .mdo_add_rxsa = aq_mdo_add_rxsa,
- .mdo_upd_rxsa = aq_mdo_upd_rxsa,
- .mdo_del_rxsa = aq_mdo_del_rxsa,
- .mdo_add_txsa = aq_mdo_add_txsa,
- .mdo_upd_txsa = aq_mdo_upd_txsa,
- .mdo_del_txsa = aq_mdo_del_txsa,
- .mdo_get_dev_stats = aq_mdo_get_dev_stats,
- .mdo_get_tx_sc_stats = aq_mdo_get_tx_sc_stats,
- .mdo_get_tx_sa_stats = aq_mdo_get_tx_sa_stats,
- .mdo_get_rx_sc_stats = aq_mdo_get_rx_sc_stats,
- .mdo_get_rx_sa_stats = aq_mdo_get_rx_sa_stats,
+ .mdo_dev_open = aq_locked_mdo_dev_open,
+ .mdo_dev_stop = aq_locked_mdo_dev_stop,
+ .mdo_add_secy = aq_locked_mdo_add_secy,
+ .mdo_upd_secy = aq_locked_mdo_upd_secy,
+ .mdo_del_secy = aq_locked_mdo_del_secy,
+ .mdo_add_rxsc = aq_locked_mdo_add_rxsc,
+ .mdo_upd_rxsc = aq_locked_mdo_upd_rxsc,
+ .mdo_del_rxsc = aq_locked_mdo_del_rxsc,
+ .mdo_add_rxsa = aq_locked_mdo_add_rxsa,
+ .mdo_upd_rxsa = aq_locked_mdo_upd_rxsa,
+ .mdo_del_rxsa = aq_locked_mdo_del_rxsa,
+ .mdo_add_txsa = aq_locked_mdo_add_txsa,
+ .mdo_upd_txsa = aq_locked_mdo_upd_txsa,
+ .mdo_del_txsa = aq_locked_mdo_del_txsa,
+ .mdo_get_dev_stats = aq_locked_mdo_get_dev_stats,
+ .mdo_get_tx_sc_stats = aq_locked_mdo_get_tx_sc_stats,
+ .mdo_get_tx_sa_stats = aq_locked_mdo_get_tx_sa_stats,
+ .mdo_get_rx_sc_stats = aq_locked_mdo_get_rx_sc_stats,
+ .mdo_get_rx_sa_stats = aq_locked_mdo_get_rx_sa_stats,
};
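
AQ_LOCKED_MDO_DEF() above stamps out one mutex-guarded wrapper per MACsec op so that every entry in aq_macsec_ops can be swapped for its serialized variant without hand-writing nineteen nearly identical functions. The generation trick in miniature, with hypothetical names (struct ctx, do_open):

	#define LOCKED_OP_DEF(op)					\
	static int locked_##op(struct ctx *c)				\
	{								\
		int ret;						\
		mutex_lock(&c->lock);					\
		ret = do_##op(c);					\
		mutex_unlock(&c->lock);					\
		return ret;						\
	}

	LOCKED_OP_DEF(open)	/* expands to locked_open() */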
int aq_macsec_init(struct aq_nic_s *nic)
nic->ndev->features |= NETIF_F_HW_MACSEC;
nic->ndev->macsec_ops = &aq_macsec_ops;
+ mutex_init(&nic->macsec_mutex);
return 0;
}
if (!nic->macsec_cfg)
return 0;
- rtnl_lock();
+ mutex_lock(&nic->macsec_mutex);
if (nic->aq_fw_ops->send_macsec_req) {
struct macsec_cfg_request cfg = { 0 };
ret = aq_apply_macsec_cfg(nic);
unlock:
- rtnl_unlock();
+ mutex_unlock(&nic->macsec_mutex);
return ret;
}
if (!netif_carrier_ok(nic->ndev))
return;
- rtnl_lock();
+ mutex_lock(&nic->macsec_mutex);
aq_check_txsa_expiration(nic);
- rtnl_unlock();
+ mutex_unlock(&nic->macsec_mutex);
}
int aq_macsec_rx_sa_cnt(struct aq_nic_s *nic)
if (!cfg)
return 0;
+ mutex_lock(&nic->macsec_mutex);
+
for (i = 0; i < AQ_MACSEC_MAX_SC; i++) {
if (!test_bit(i, &cfg->rxsc_idx_busy))
continue;
cnt += hweight_long(cfg->aq_rxsc[i].rx_sa_idx_busy);
}
+ mutex_unlock(&nic->macsec_mutex);
return cnt;
}
int aq_macsec_tx_sc_cnt(struct aq_nic_s *nic)
{
+ int cnt;
+
if (!nic->macsec_cfg)
return 0;
- return hweight_long(nic->macsec_cfg->txsc_idx_busy);
+ mutex_lock(&nic->macsec_mutex);
+ cnt = hweight_long(nic->macsec_cfg->txsc_idx_busy);
+ mutex_unlock(&nic->macsec_mutex);
+
+ return cnt;
}
int aq_macsec_tx_sa_cnt(struct aq_nic_s *nic)
if (!cfg)
return 0;
+ mutex_lock(&nic->macsec_mutex);
+
for (i = 0; i < AQ_MACSEC_MAX_SC; i++) {
if (!test_bit(i, &cfg->txsc_idx_busy))
continue;
cnt += hweight_long(cfg->aq_txsc[i].tx_sa_idx_busy);
}
+ mutex_unlock(&nic->macsec_mutex);
return cnt;
}
if (!cfg)
return data;
+ mutex_lock(&nic->macsec_mutex);
+
aq_macsec_update_stats(nic);
common_stats = &cfg->stats;
data += i;
+ mutex_unlock(&nic->macsec_mutex);
+
return data;
}
struct mutex fwreq_mutex;
#if IS_ENABLED(CONFIG_MACSEC)
struct aq_macsec_cfg *macsec_cfg;
+ /* mutex to protect data in macsec_cfg */
+ struct mutex macsec_mutex;
#endif
/* PTP support */
struct aq_ptp_s *aq_ptp;
if (++ring->write_idx == ring->length - 1)
ring->write_idx = 0;
- enet->netdev->stats.tx_bytes += skb->len;
- enet->netdev->stats.tx_packets++;
return NETDEV_TX_OK;
}
struct bcm4908_enet_dma_ring_bd *buf_desc;
struct bcm4908_enet_dma_ring_slot *slot;
struct device *dev = enet->dev;
+ unsigned int bytes = 0;
int handled = 0;
while (handled < weight && tx_ring->read_idx != tx_ring->write_idx) {
dma_unmap_single(dev, slot->dma_addr, slot->len, DMA_TO_DEVICE);
dev_kfree_skb(slot->skb);
- if (++tx_ring->read_idx == tx_ring->length)
- tx_ring->read_idx = 0;
handled++;
+ bytes += slot->len;
+
+ if (++tx_ring->read_idx == tx_ring->length)
+ tx_ring->read_idx = 0;
}
+ enet->netdev->stats.tx_packets += handled;
+ enet->netdev->stats.tx_bytes += bytes;
+
if (handled < weight) {
napi_complete_done(napi, handled);
bcm4908_enet_dma_ring_intrs_on(enet, tx_ring);
goto out_clk_disable;
}
+ /* Indicate that the MAC is responsible for PHY PM */
+ phydev->mac_managed_pm = true;
+
/* Reset house keeping link status */
priv->old_duplex = -1;
priv->old_link = -1;
static bool bnxt_nvm_test(struct bnxt *bp, struct netlink_ext_ack *extack)
{
+ bool rc = false;
u32 datalen;
u16 index;
u8 *buf;
if (bnxt_get_nvram_item(bp->dev, index, 0, datalen, buf)) {
NL_SET_ERR_MSG_MOD(extack, "nvm test vpd read error");
- goto err;
+ goto done;
}
if (bnxt_flash_nvram(bp->dev, BNX_DIR_TYPE_VPD, BNX_DIR_ORDINAL_FIRST,
BNX_DIR_EXT_NONE, 0, 0, buf, datalen)) {
NL_SET_ERR_MSG_MOD(extack, "nvm test vpd write error");
- goto err;
+ goto done;
}
- return true;
+ rc = true;
-err:
+done:
kfree(buf);
- return false;
+ return rc;
}
static bool bnxt_dl_selftest_check(struct devlink *dl, unsigned int id,
bp->phylink_config.dev = &dev->dev;
bp->phylink_config.type = PHYLINK_NETDEV;
+ bp->phylink_config.mac_managed_pm = true;
if (bp->phy_interface == PHY_INTERFACE_MODE_SGMII) {
bp->phylink_config.poll_fixed_state = true;
net_dev->netdev_ops = dpaa_ops;
mac_addr = mac_dev->addr;
- net_dev->mem_start = (unsigned long)mac_dev->vaddr;
- net_dev->mem_end = (unsigned long)mac_dev->vaddr_end;
+ net_dev->mem_start = (unsigned long)priv->mac_dev->res->start;
+ net_dev->mem_end = (unsigned long)priv->mac_dev->res->end;
net_dev->min_mtu = ETH_MIN_MTU;
net_dev->max_mtu = dpaa_get_max_mtu();
if (mac_dev)
return sprintf(buf, "%llx",
- (unsigned long long)mac_dev->vaddr);
+ (unsigned long long)mac_dev->res->start);
else
return sprintf(buf, "none");
}
else
enetc_rxbdr_wr(hw, idx, ENETC_RBBSR, ENETC_RXB_DMA_SIZE);
+ /* Also prepare the consumer index in case page allocation never
+ * succeeds. In that case, hardware will never advance producer index
+ * to match consumer index, and will drop all frames.
+ */
enetc_rxbdr_wr(hw, idx, ENETC_RBPIR, 0);
+ enetc_rxbdr_wr(hw, idx, ENETC_RBCIR, 1);
/* enable Rx ints by setting pkt thr to 1 */
enetc_rxbdr_wr(hw, idx, ENETC_RBICR0, ENETC_RBICR0_ICEN | 0x1);
dev_kfree_skb_any(skb);
if (net_ratelimit())
netdev_err(ndev, "Tx DMA memory map failed\n");
- return NETDEV_TX_BUSY;
+ return NETDEV_TX_OK;
}
bdp->cbd_datlen = cpu_to_fec16(size);
dev_kfree_skb_any(skb);
if (net_ratelimit())
netdev_err(ndev, "Tx DMA memory map failed\n");
- return NETDEV_TX_BUSY;
+ return NETDEV_TX_OK;
}
}
IEEE_R_DROP, IEEE_R_FRAME_OK, IEEE_R_CRC, IEEE_R_ALIGN, IEEE_R_MACERR,
IEEE_R_FDXFC, IEEE_R_OCTETS_OK
};
+/* for i.MX6UL */
+static u32 fec_enet_register_offset_6ul[] = {
+ FEC_IEVENT, FEC_IMASK, FEC_R_DES_ACTIVE_0, FEC_X_DES_ACTIVE_0,
+ FEC_ECNTRL, FEC_MII_DATA, FEC_MII_SPEED, FEC_MIB_CTRLSTAT, FEC_R_CNTRL,
+ FEC_X_CNTRL, FEC_ADDR_LOW, FEC_ADDR_HIGH, FEC_OPD, FEC_TXIC0, FEC_RXIC0,
+ FEC_HASH_TABLE_HIGH, FEC_HASH_TABLE_LOW, FEC_GRP_HASH_TABLE_HIGH,
+ FEC_GRP_HASH_TABLE_LOW, FEC_X_WMRK, FEC_R_DES_START_0,
+ FEC_X_DES_START_0, FEC_R_BUFF_SIZE_0, FEC_R_FIFO_RSFL, FEC_R_FIFO_RSEM,
+ FEC_R_FIFO_RAEM, FEC_R_FIFO_RAFL, FEC_RACC,
+ RMON_T_DROP, RMON_T_PACKETS, RMON_T_BC_PKT, RMON_T_MC_PKT,
+ RMON_T_CRC_ALIGN, RMON_T_UNDERSIZE, RMON_T_OVERSIZE, RMON_T_FRAG,
+ RMON_T_JAB, RMON_T_COL, RMON_T_P64, RMON_T_P65TO127, RMON_T_P128TO255,
+ RMON_T_P256TO511, RMON_T_P512TO1023, RMON_T_P1024TO2047,
+ RMON_T_P_GTE2048, RMON_T_OCTETS,
+ IEEE_T_DROP, IEEE_T_FRAME_OK, IEEE_T_1COL, IEEE_T_MCOL, IEEE_T_DEF,
+ IEEE_T_LCOL, IEEE_T_EXCOL, IEEE_T_MACERR, IEEE_T_CSERR, IEEE_T_SQE,
+ IEEE_T_FDXFC, IEEE_T_OCTETS_OK,
+ RMON_R_PACKETS, RMON_R_BC_PKT, RMON_R_MC_PKT, RMON_R_CRC_ALIGN,
+ RMON_R_UNDERSIZE, RMON_R_OVERSIZE, RMON_R_FRAG, RMON_R_JAB,
+ RMON_R_RESVD_O, RMON_R_P64, RMON_R_P65TO127, RMON_R_P128TO255,
+ RMON_R_P256TO511, RMON_R_P512TO1023, RMON_R_P1024TO2047,
+ RMON_R_P_GTE2048, RMON_R_OCTETS,
+ IEEE_R_DROP, IEEE_R_FRAME_OK, IEEE_R_CRC, IEEE_R_ALIGN, IEEE_R_MACERR,
+ IEEE_R_FDXFC, IEEE_R_OCTETS_OK
+};
#else
static __u32 fec_enet_register_version = 1;
static u32 fec_enet_register_offset[] = {
u32 *buf = (u32 *)regbuf;
u32 i, off;
int ret;
+#if defined(CONFIG_M523x) || defined(CONFIG_M527x) || defined(CONFIG_M528x) || \
+ defined(CONFIG_M520x) || defined(CONFIG_M532x) || defined(CONFIG_ARM) || \
+ defined(CONFIG_ARM64) || defined(CONFIG_COMPILE_TEST)
+ u32 *reg_list;
+ u32 reg_cnt;
+ if (!of_machine_is_compatible("fsl,imx6ul")) {
+ reg_list = fec_enet_register_offset;
+ reg_cnt = ARRAY_SIZE(fec_enet_register_offset);
+ } else {
+ reg_list = fec_enet_register_offset_6ul;
+ reg_cnt = ARRAY_SIZE(fec_enet_register_offset_6ul);
+ }
+#else
+ /* ColdFire */
+ static u32 *reg_list = fec_enet_register_offset;
+ static const u32 reg_cnt = ARRAY_SIZE(fec_enet_register_offset);
+#endif
ret = pm_runtime_resume_and_get(dev);
if (ret < 0)
return;
memset(buf, 0, regs->len);
- for (i = 0; i < ARRAY_SIZE(fec_enet_register_offset); i++) {
- off = fec_enet_register_offset[i];
+ for (i = 0; i < reg_cnt; i++) {
+ off = reg_list[i];
if ((off == FEC_R_BOUND || off == FEC_R_FSTART) &&
!(fep->quirks & FEC_QUIRK_HAS_FRREG))
struct device_node *mac_node, *dev_node;
struct mac_device *mac_dev;
struct platform_device *of_dev;
- struct resource *res;
struct mac_priv_s *priv;
struct fman_mac_params params;
u32 val;
of_node_put(dev_node);
/* Get the address of the memory mapped registers */
- res = platform_get_mem_or_io(_of_dev, 0);
- if (!res) {
+ mac_dev->res = platform_get_mem_or_io(_of_dev, 0);
+ if (!mac_dev->res) {
dev_err(dev, "could not get registers\n");
return -EINVAL;
}
- err = devm_request_resource(dev, fman_get_mem_region(priv->fman), res);
+ err = devm_request_resource(dev, fman_get_mem_region(priv->fman),
+ mac_dev->res);
if (err) {
dev_err_probe(dev, err, "could not request resource\n");
return err;
}
- mac_dev->vaddr = devm_ioremap(dev, res->start, resource_size(res));
+ mac_dev->vaddr = devm_ioremap(dev, mac_dev->res->start,
+ resource_size(mac_dev->res));
if (!mac_dev->vaddr) {
dev_err(dev, "devm_ioremap() failed\n");
return -EIO;
}
- mac_dev->vaddr_end = mac_dev->vaddr + resource_size(res);
if (!of_device_is_available(mac_node))
return -ENODEV;
struct mac_device {
void __iomem *vaddr;
- void __iomem *vaddr_end;
struct device *dev;
+ struct resource *res;
u8 addr[ETH_ALEN];
struct fman_port *port[2];
u32 if_support;
hdev->cls_dev.release = hnae_release;
(void)dev_set_name(&hdev->cls_dev, "hnae%d", hdev->id);
ret = device_register(&hdev->cls_dev);
- if (ret)
+ if (ret) {
+ put_device(&hdev->cls_dev);
return ret;
+ }
__module_get(THIS_MODULE);
struct tag_sml_funcfg_tbl *funcfg_table_elem;
struct hinic_cmd_lt_rd *read_data;
u16 out_size = sizeof(*read_data);
+ int ret = ~0;
int err;
read_data = kzalloc(sizeof(*read_data), GFP_KERNEL);
switch (idx) {
case VALID:
- return funcfg_table_elem->dw0.bs.valid;
+ ret = funcfg_table_elem->dw0.bs.valid;
+ break;
case RX_MODE:
- return funcfg_table_elem->dw0.bs.nic_rx_mode;
+ ret = funcfg_table_elem->dw0.bs.nic_rx_mode;
+ break;
case MTU:
- return funcfg_table_elem->dw1.bs.mtu;
+ ret = funcfg_table_elem->dw1.bs.mtu;
+ break;
case RQ_DEPTH:
- return funcfg_table_elem->dw13.bs.cfg_rq_depth;
+ ret = funcfg_table_elem->dw13.bs.cfg_rq_depth;
+ break;
case QUEUE_NUM:
- return funcfg_table_elem->dw13.bs.cfg_q_num;
+ ret = funcfg_table_elem->dw13.bs.cfg_q_num;
+ break;
}
kfree(read_data);
- return ~0;
+ return ret;
}
static ssize_t hinic_dbg_cmd_read(struct file *filp, char __user *buffer, size_t count,
err_set_cmdq_depth:
hinic_ceq_unregister_cb(&func_to_io->ceqs, HINIC_CEQ_CMDQ);
-
+ free_cmdq(&cmdqs->cmdq[HINIC_CMDQ_SYNC]);
err_cmdq_ctxt:
hinic_wqs_cmdq_free(&cmdqs->cmdq_pages, cmdqs->saved_wqs,
HINIC_MAX_CMDQ_TYPES);
if (err)
return -EINVAL;
- interrupt_info->lli_credit_cnt = temp_info.lli_timer_cnt;
+ interrupt_info->lli_credit_cnt = temp_info.lli_credit_cnt;
interrupt_info->lli_timer_cnt = temp_info.lli_timer_cnt;
err = hinic_msg_to_mgmt(&pfhwdev->pf_to_mgmt, HINIC_MOD_COMM,
dev_err(&hwdev->hwif->pdev->dev,
"Failed to register VF, err: %d, status: 0x%x, out size: 0x%x\n",
err, register_info.status, out_size);
- hinic_unregister_vf_mbox_cb(hwdev, HINIC_MOD_L2NIC);
return -EIO;
}
} else {
ret = of_device_register(&port->ofdev);
if (ret) {
pr_err("failed to register device. ret=%d\n", ret);
+ put_device(&port->ofdev.dev);
goto out;
}
rwi = get_next_rwi(adapter);
/*
- * If there is another reset queued, free the previous rwi
- * and process the new reset even if previous reset failed
- * (the previous reset could have failed because of a fail
- * over for instance, so process the fail over).
- *
* If there are no resets queued and the previous reset failed,
* the adapter would be in an undefined state. So retry the
* previous reset as a hard reset.
+ *
+ * Else, free the previous rwi and, if there is another reset
+ * queued, process the new reset even if previous reset failed
+ * (the previous reset could have failed because of a fail
+ * over for instance, so process the fail over).
*/
- if (rwi)
- kfree(tmprwi);
- else if (rc)
+ if (!rwi && rc)
rwi = tmprwi;
+ else
+ kfree(tmprwi);
if (rwi && (rwi->reset_reason == VNIC_RESET_FAILOVER ||
rwi->reset_reason == VNIC_RESET_MOBILITY || rc))
*/
rx_rings[i].tail = hw->hw_addr + I40E_PRTGEN_STATUS;
err = i40e_setup_rx_descriptors(&rx_rings[i]);
- if (err)
- goto rx_unwind;
- err = i40e_alloc_rx_bi(&rx_rings[i]);
if (err)
goto rx_unwind;
if (cmd->flow_type == TCP_V4_FLOW ||
cmd->flow_type == UDP_V4_FLOW) {
- if (i_set & I40E_L3_SRC_MASK)
- cmd->data |= RXH_IP_SRC;
- if (i_set & I40E_L3_DST_MASK)
- cmd->data |= RXH_IP_DST;
+ if (hw->mac.type == I40E_MAC_X722) {
+ if (i_set & I40E_X722_L3_SRC_MASK)
+ cmd->data |= RXH_IP_SRC;
+ if (i_set & I40E_X722_L3_DST_MASK)
+ cmd->data |= RXH_IP_DST;
+ } else {
+ if (i_set & I40E_L3_SRC_MASK)
+ cmd->data |= RXH_IP_SRC;
+ if (i_set & I40E_L3_DST_MASK)
+ cmd->data |= RXH_IP_DST;
+ }
} else if (cmd->flow_type == TCP_V6_FLOW ||
cmd->flow_type == UDP_V6_FLOW) {
if (i_set & I40E_L3_V6_SRC_MASK)
/**
* i40e_get_rss_hash_bits - Read RSS Hash bits from register
+ * @hw: hw structure
* @nfc: pointer to user request
* @i_setc: bits currently set
*
* Returns value of bits to be set per user request
**/
-static u64 i40e_get_rss_hash_bits(struct ethtool_rxnfc *nfc, u64 i_setc)
+static u64 i40e_get_rss_hash_bits(struct i40e_hw *hw,
+ struct ethtool_rxnfc *nfc,
+ u64 i_setc)
{
u64 i_set = i_setc;
u64 src_l3 = 0, dst_l3 = 0;
dst_l3 = I40E_L3_V6_DST_MASK;
} else if (nfc->flow_type == TCP_V4_FLOW ||
nfc->flow_type == UDP_V4_FLOW) {
- src_l3 = I40E_L3_SRC_MASK;
- dst_l3 = I40E_L3_DST_MASK;
+ if (hw->mac.type == I40E_MAC_X722) {
+ src_l3 = I40E_X722_L3_SRC_MASK;
+ dst_l3 = I40E_X722_L3_DST_MASK;
+ } else {
+ src_l3 = I40E_L3_SRC_MASK;
+ dst_l3 = I40E_L3_DST_MASK;
+ }
} else {
/* Any other flow type are not supported here */
return i_set;
return i_set;
}
+#define FLOW_PCTYPES_SIZE 64
/**
* i40e_set_rss_hash_opt - Enable/Disable flow types for RSS hash
* @pf: pointer to the physical function struct
struct i40e_hw *hw = &pf->hw;
u64 hena = (u64)i40e_read_rx_ctl(hw, I40E_PFQF_HENA(0)) |
((u64)i40e_read_rx_ctl(hw, I40E_PFQF_HENA(1)) << 32);
- u8 flow_pctype = 0;
+ DECLARE_BITMAP(flow_pctypes, FLOW_PCTYPES_SIZE);
u64 i_set, i_setc;
+ bitmap_zero(flow_pctypes, FLOW_PCTYPES_SIZE);
+
if (pf->flags & I40E_FLAG_MFP_ENABLED) {
dev_err(&pf->pdev->dev,
"Change of RSS hash input set is not supported when MFP mode is enabled\n");
switch (nfc->flow_type) {
case TCP_V4_FLOW:
- flow_pctype = I40E_FILTER_PCTYPE_NONF_IPV4_TCP;
+ set_bit(I40E_FILTER_PCTYPE_NONF_IPV4_TCP, flow_pctypes);
if (pf->hw_features & I40E_HW_MULTIPLE_TCP_UDP_RSS_PCTYPE)
- hena |=
- BIT_ULL(I40E_FILTER_PCTYPE_NONF_IPV4_TCP_SYN_NO_ACK);
+ set_bit(I40E_FILTER_PCTYPE_NONF_IPV4_TCP_SYN_NO_ACK,
+ flow_pctypes);
break;
case TCP_V6_FLOW:
- flow_pctype = I40E_FILTER_PCTYPE_NONF_IPV6_TCP;
- if (pf->hw_features & I40E_HW_MULTIPLE_TCP_UDP_RSS_PCTYPE)
- hena |=
- BIT_ULL(I40E_FILTER_PCTYPE_NONF_IPV4_TCP_SYN_NO_ACK);
+ set_bit(I40E_FILTER_PCTYPE_NONF_IPV6_TCP, flow_pctypes);
if (pf->hw_features & I40E_HW_MULTIPLE_TCP_UDP_RSS_PCTYPE)
- hena |=
- BIT_ULL(I40E_FILTER_PCTYPE_NONF_IPV6_TCP_SYN_NO_ACK);
+ set_bit(I40E_FILTER_PCTYPE_NONF_IPV6_TCP_SYN_NO_ACK,
+ flow_pctypes);
break;
case UDP_V4_FLOW:
- flow_pctype = I40E_FILTER_PCTYPE_NONF_IPV4_UDP;
- if (pf->hw_features & I40E_HW_MULTIPLE_TCP_UDP_RSS_PCTYPE)
- hena |=
- BIT_ULL(I40E_FILTER_PCTYPE_NONF_UNICAST_IPV4_UDP) |
- BIT_ULL(I40E_FILTER_PCTYPE_NONF_MULTICAST_IPV4_UDP);
-
+ set_bit(I40E_FILTER_PCTYPE_NONF_IPV4_UDP, flow_pctypes);
+ if (pf->hw_features & I40E_HW_MULTIPLE_TCP_UDP_RSS_PCTYPE) {
+ set_bit(I40E_FILTER_PCTYPE_NONF_UNICAST_IPV4_UDP,
+ flow_pctypes);
+ set_bit(I40E_FILTER_PCTYPE_NONF_MULTICAST_IPV4_UDP,
+ flow_pctypes);
+ }
hena |= BIT_ULL(I40E_FILTER_PCTYPE_FRAG_IPV4);
break;
case UDP_V6_FLOW:
- flow_pctype = I40E_FILTER_PCTYPE_NONF_IPV6_UDP;
- if (pf->hw_features & I40E_HW_MULTIPLE_TCP_UDP_RSS_PCTYPE)
- hena |=
- BIT_ULL(I40E_FILTER_PCTYPE_NONF_UNICAST_IPV6_UDP) |
- BIT_ULL(I40E_FILTER_PCTYPE_NONF_MULTICAST_IPV6_UDP);
-
+ set_bit(I40E_FILTER_PCTYPE_NONF_IPV6_UDP, flow_pctypes);
+ if (pf->hw_features & I40E_HW_MULTIPLE_TCP_UDP_RSS_PCTYPE) {
+ set_bit(I40E_FILTER_PCTYPE_NONF_UNICAST_IPV6_UDP,
+ flow_pctypes);
+ set_bit(I40E_FILTER_PCTYPE_NONF_MULTICAST_IPV6_UDP,
+ flow_pctypes);
+ }
hena |= BIT_ULL(I40E_FILTER_PCTYPE_FRAG_IPV6);
break;
case AH_ESP_V4_FLOW:
return -EINVAL;
}
- if (flow_pctype) {
- i_setc = (u64)i40e_read_rx_ctl(hw, I40E_GLQF_HASH_INSET(0,
- flow_pctype)) |
- ((u64)i40e_read_rx_ctl(hw, I40E_GLQF_HASH_INSET(1,
- flow_pctype)) << 32);
- i_set = i40e_get_rss_hash_bits(nfc, i_setc);
- i40e_write_rx_ctl(hw, I40E_GLQF_HASH_INSET(0, flow_pctype),
- (u32)i_set);
- i40e_write_rx_ctl(hw, I40E_GLQF_HASH_INSET(1, flow_pctype),
- (u32)(i_set >> 32));
- hena |= BIT_ULL(flow_pctype);
+ if (bitmap_weight(flow_pctypes, FLOW_PCTYPES_SIZE)) {
+ u8 flow_id;
+
+ for_each_set_bit(flow_id, flow_pctypes, FLOW_PCTYPES_SIZE) {
+ i_setc = (u64)i40e_read_rx_ctl(hw, I40E_GLQF_HASH_INSET(0, flow_id)) |
+ ((u64)i40e_read_rx_ctl(hw, I40E_GLQF_HASH_INSET(1, flow_id)) << 32);
+ i_set = i40e_get_rss_hash_bits(&pf->hw, nfc, i_setc);
+
+ i40e_write_rx_ctl(hw, I40E_GLQF_HASH_INSET(0, flow_id),
+ (u32)i_set);
+ i40e_write_rx_ctl(hw, I40E_GLQF_HASH_INSET(1, flow_id),
+ (u32)(i_set >> 32));
+ hena |= BIT_ULL(flow_id);
+ }
}
i40e_write_rx_ctl(hw, I40E_PFQF_HENA(0), (u32)hena);
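
The i40e_set_rss_hash_opt() rewrite above accumulates every affected packet classifier type in a bitmap instead of a single u8, so the INSET registers get programmed for each flow type rather than only the last one assigned. The bitmap idiom it relies on, sketched with arbitrary bit numbers:

	#include <linux/bitmap.h>
	#include <linux/bitops.h>

	DECLARE_BITMAP(types, 64);	/* one unsigned long of 64 bits */
	unsigned int bit;

	bitmap_zero(types, 64);
	set_bit(3, types);
	set_bit(17, types);

	for_each_set_bit(bit, types, 64)
		pr_info("programming pctype %u\n", bit);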
if (ring->vsi->type == I40E_VSI_MAIN)
xdp_rxq_info_unreg_mem_model(&ring->xdp_rxq);
- kfree(ring->rx_bi);
ring->xsk_pool = i40e_xsk_pool(ring);
if (ring->xsk_pool) {
- ret = i40e_alloc_rx_bi_zc(ring);
- if (ret)
- return ret;
ring->rx_buf_len =
xsk_pool_get_rx_frame_size(ring->xsk_pool);
/* For AF_XDP ZC, we disallow packets to span on
ring->queue_index);
} else {
- ret = i40e_alloc_rx_bi(ring);
- if (ret)
- return ret;
ring->rx_buf_len = vsi->rx_buf_len;
if (ring->vsi->type == I40E_VSI_MAIN) {
ret = xdp_rxq_info_reg_mem_model(&ring->xdp_rxq,
i40e_reset_and_rebuild(pf, true, true);
}
+ if (!i40e_enabled_xdp_vsi(vsi) && prog) {
+ if (i40e_realloc_rx_bi_zc(vsi, true))
+ return -ENOMEM;
+ } else if (i40e_enabled_xdp_vsi(vsi) && !prog) {
+ if (i40e_realloc_rx_bi_zc(vsi, false))
+ return -ENOMEM;
+ }
+
for (i = 0; i < vsi->num_queue_pairs; i++)
WRITE_ONCE(vsi->rx_rings[i]->xdp_prog, vsi->xdp_prog);
i40e_queue_pair_disable_irq(vsi, queue_pair);
err = i40e_queue_pair_toggle_rings(vsi, queue_pair, false /* off */);
+ i40e_clean_rx_ring(vsi->rx_rings[queue_pair]);
i40e_queue_pair_toggle_napi(vsi, queue_pair, false /* off */);
i40e_queue_pair_clean_rings(vsi, queue_pair);
i40e_queue_pair_reset_stats(vsi, queue_pair);
return -ENOMEM;
}
-int i40e_alloc_rx_bi(struct i40e_ring *rx_ring)
-{
- unsigned long sz = sizeof(*rx_ring->rx_bi) * rx_ring->count;
-
- rx_ring->rx_bi = kzalloc(sz, GFP_KERNEL);
- return rx_ring->rx_bi ? 0 : -ENOMEM;
-}
-
static void i40e_clear_rx_bi(struct i40e_ring *rx_ring)
{
memset(rx_ring->rx_bi, 0, sizeof(*rx_ring->rx_bi) * rx_ring->count);
rx_ring->xdp_prog = rx_ring->vsi->xdp_prog;
+ rx_ring->rx_bi =
+ kcalloc(rx_ring->count, sizeof(*rx_ring->rx_bi), GFP_KERNEL);
+ if (!rx_ring->rx_bi)
+ return -ENOMEM;
+
return 0;
}
bool __i40e_chk_linearize(struct sk_buff *skb);
int i40e_xdp_xmit(struct net_device *dev, int n, struct xdp_frame **frames,
u32 flags);
-int i40e_alloc_rx_bi(struct i40e_ring *rx_ring);
/**
* i40e_get_head - Retrieve head from head writeback
#define I40E_PFQF_CTL_0_HASHLUTSIZE_512 0x00010000
/* INPUT SET MASK for RSS, flow director, and flexible payload */
+#define I40E_X722_L3_SRC_SHIFT 49
+#define I40E_X722_L3_SRC_MASK (0x3ULL << I40E_X722_L3_SRC_SHIFT)
+#define I40E_X722_L3_DST_SHIFT 41
+#define I40E_X722_L3_DST_MASK (0x3ULL << I40E_X722_L3_DST_SHIFT)
#define I40E_L3_SRC_SHIFT 47
#define I40E_L3_SRC_MASK (0x3ULL << I40E_L3_SRC_SHIFT)
#define I40E_L3_V6_SRC_SHIFT 43
if (test_bit(__I40E_VF_RESETS_DISABLED, pf->state))
return true;
- /* If the VFs have been disabled, this means something else is
- * resetting the VF, so we shouldn't continue.
- */
- if (test_and_set_bit(__I40E_VF_DISABLE, pf->state))
+ /* Bail out if VFs are disabled. */
+ if (test_bit(__I40E_VF_DISABLE, pf->state))
+ return true;
+
+ /* If VF is being reset already we don't need to continue. */
+ if (test_and_set_bit(I40E_VF_STATE_RESETTING, &vf->vf_states))
return true;
i40e_trigger_vf_reset(vf, flr);
i40e_cleanup_reset_vf(vf);
i40e_flush(hw);
- clear_bit(__I40E_VF_DISABLE, pf->state);
+ clear_bit(I40E_VF_STATE_RESETTING, &vf->vf_states);
return true;
}
return false;
/* Begin reset on all VFs at once */
- for (v = 0; v < pf->num_alloc_vfs; v++)
- i40e_trigger_vf_reset(&pf->vf[v], flr);
+ for (v = 0; v < pf->num_alloc_vfs; v++) {
+ vf = &pf->vf[v];
+ /* If VF is being reset, no need to trigger reset again */
+ if (!test_bit(I40E_VF_STATE_RESETTING, &vf->vf_states))
+ i40e_trigger_vf_reset(&pf->vf[v], flr);
+ }
/* HW requires some time to make sure it can flush the FIFO for a VF
* when it resets it. Poll the VPGEN_VFRSTAT register for each VF in
*/
while (v < pf->num_alloc_vfs) {
vf = &pf->vf[v];
- reg = rd32(hw, I40E_VPGEN_VFRSTAT(vf->vf_id));
- if (!(reg & I40E_VPGEN_VFRSTAT_VFRD_MASK))
- break;
+ if (!test_bit(I40E_VF_STATE_RESETTING, &vf->vf_states)) {
+ reg = rd32(hw, I40E_VPGEN_VFRSTAT(vf->vf_id));
+ if (!(reg & I40E_VPGEN_VFRSTAT_VFRD_MASK))
+ break;
+ }
/* If the current VF has finished resetting, move on
* to the next VF in sequence.
if (pf->vf[v].lan_vsi_idx == 0)
continue;
+ /* If VF is reset in another thread, just continue */
+ if (test_bit(I40E_VF_STATE_RESETTING, &vf->vf_states))
+ continue;
+
i40e_vsi_stop_rings_no_wait(pf->vsi[pf->vf[v].lan_vsi_idx]);
}
if (pf->vf[v].lan_vsi_idx == 0)
continue;
+ /* If VF is reset in another thread, just continue */
+ if (test_bit(I40E_VF_STATE_RESETTING, &vf->vf_states))
+ continue;
+
i40e_vsi_wait_queues_disabled(pf->vsi[pf->vf[v].lan_vsi_idx]);
}
mdelay(50);
/* Finish the reset on each VF */
- for (v = 0; v < pf->num_alloc_vfs; v++)
+ for (v = 0; v < pf->num_alloc_vfs; v++) {
+ /* If VF is reset in another thread, just continue */
+ if (test_bit(I40E_VF_STATE_RESETTING, &vf->vf_states))
+ continue;
+
i40e_cleanup_reset_vf(&pf->vf[v]);
+ }
i40e_flush(hw);
clear_bit(__I40E_VF_DISABLE, pf->state);
I40E_VF_STATE_MC_PROMISC,
I40E_VF_STATE_UC_PROMISC,
I40E_VF_STATE_PRE_ENABLE,
+ I40E_VF_STATE_RESETTING
};
/* VF capabilities */
#include "i40e_txrx_common.h"
#include "i40e_xsk.h"
-int i40e_alloc_rx_bi_zc(struct i40e_ring *rx_ring)
-{
- unsigned long sz = sizeof(*rx_ring->rx_bi_zc) * rx_ring->count;
-
- rx_ring->rx_bi_zc = kzalloc(sz, GFP_KERNEL);
- return rx_ring->rx_bi_zc ? 0 : -ENOMEM;
-}
-
void i40e_clear_rx_bi_zc(struct i40e_ring *rx_ring)
{
memset(rx_ring->rx_bi_zc, 0,
return &rx_ring->rx_bi_zc[idx];
}
+/**
+ * i40e_realloc_rx_xdp_bi - reallocate SW ring for either XSK or normal buffer
+ * @rx_ring: Current rx ring
+ * @pool_present: is pool for XSK present
+ *
+ * Try to allocate the new buffer and return -ENOMEM if the allocation fails.
+ * On success, substitute the buffer with the newly allocated one.
+ * Returns 0 on success, negative on failure
+ */
+static int i40e_realloc_rx_xdp_bi(struct i40e_ring *rx_ring, bool pool_present)
+{
+ size_t elem_size = pool_present ? sizeof(*rx_ring->rx_bi_zc) :
+ sizeof(*rx_ring->rx_bi);
+ void *sw_ring = kcalloc(rx_ring->count, elem_size, GFP_KERNEL);
+
+ if (!sw_ring)
+ return -ENOMEM;
+
+ if (pool_present) {
+ kfree(rx_ring->rx_bi);
+ rx_ring->rx_bi = NULL;
+ rx_ring->rx_bi_zc = sw_ring;
+ } else {
+ kfree(rx_ring->rx_bi_zc);
+ rx_ring->rx_bi_zc = NULL;
+ rx_ring->rx_bi = sw_ring;
+ }
+ return 0;
+}
+
+/**
+ * i40e_realloc_rx_bi_zc - reallocate rx SW rings
+ * @vsi: Current VSI
+ * @zc: is zero copy set
+ *
+ * Reallocate buffer for rx_rings that might be used by XSK.
+ * XDP requires more memory than rx_buf provides.
+ * Returns 0 on success, negative on failure
+ */
+int i40e_realloc_rx_bi_zc(struct i40e_vsi *vsi, bool zc)
+{
+ struct i40e_ring *rx_ring;
+ unsigned long q;
+
+ for_each_set_bit(q, vsi->af_xdp_zc_qps, vsi->alloc_queue_pairs) {
+ rx_ring = vsi->rx_rings[q];
+ if (i40e_realloc_rx_xdp_bi(rx_ring, zc))
+ return -ENOMEM;
+ }
+ return 0;
+}
+
/**
* i40e_xsk_pool_enable - Enable/associate an AF_XDP buffer pool to a
* certain ring/qid
if (err)
return err;
+ err = i40e_realloc_rx_xdp_bi(vsi->rx_rings[qid], true);
+ if (err)
+ return err;
+
err = i40e_queue_pair_enable(vsi, qid);
if (err)
return err;
xsk_pool_dma_unmap(pool, I40E_RX_DMA_ATTR);
if (if_running) {
+ err = i40e_realloc_rx_xdp_bi(vsi->rx_rings[qid], false);
+ if (err)
+ return err;
err = i40e_queue_pair_enable(vsi, qid);
if (err)
return err;
bool i40e_clean_xdp_tx_irq(struct i40e_vsi *vsi, struct i40e_ring *tx_ring);
int i40e_xsk_wakeup(struct net_device *dev, u32 queue_id, u32 flags);
-int i40e_alloc_rx_bi_zc(struct i40e_ring *rx_ring);
+int i40e_realloc_rx_bi_zc(struct i40e_vsi *vsi, bool zc);
void i40e_clear_rx_bi_zc(struct i40e_ring *rx_ring);
#endif /* _I40E_XSK_H_ */
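
i40e_realloc_rx_xdp_bi() above allocates the replacement software ring before freeing the old one, so a failed kcalloc() leaves the ring fully usable. The allocate-then-swap idiom in isolation, with hypothetical ring/buf names:

	/* Sketch: the old array is only freed once the new one exists. */
	void *new_buf = kcalloc(count, elem_size, GFP_KERNEL);

	if (!new_buf)
		return -ENOMEM;		/* old state untouched */

	kfree(ring->buf);		/* hypothetical field */
	ring->buf = new_buf;
	return 0;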
len = skb->len < ETH_ZLEN ? ETH_ZLEN : skb->len;
if ((desc->ctl & (LTQ_DMA_OWN | LTQ_DMA_C)) || ch->skb[ch->dma.desc]) {
- dev_kfree_skb_any(skb);
netdev_err(dev, "tx ring full\n");
netif_tx_stop_queue(txq);
return NETDEV_TX_BUSY;
cn10k_mcs_free_rsrc(pfvf, MCS_TX, MCS_RSRC_TYPE_FLOWID,
txsc->hw_flow_id, false);
fail:
+ kfree(txsc);
return ERR_PTR(ret);
}
cn10k_mcs_free_rsrc(pfvf, MCS_RX, MCS_RSRC_TYPE_FLOWID,
rxsc->hw_flow_id, false);
fail:
+ kfree(rxsc);
return ERR_PTR(ret);
}
eth->irq[i] = platform_get_irq(pdev, i);
if (eth->irq[i] < 0) {
dev_err(&pdev->dev, "no IRQ%d resource found\n", i);
- return -ENXIO;
+ err = -ENXIO;
+ goto err_wed_exit;
}
}
for (i = 0; i < ARRAY_SIZE(eth->clks); i++) {
eth->clks[i] = devm_clk_get(eth->dev,
mtk_clks_source_name[i]);
if (IS_ERR(eth->clks[i])) {
- if (PTR_ERR(eth->clks[i]) == -EPROBE_DEFER)
- return -EPROBE_DEFER;
+ if (PTR_ERR(eth->clks[i]) == -EPROBE_DEFER) {
+ err = -EPROBE_DEFER;
+ goto err_wed_exit;
+ }
if (eth->soc->required_clks & BIT(i)) {
dev_err(&pdev->dev, "clock %s not found\n",
mtk_clks_source_name[i]);
- return -EINVAL;
+ err = -EINVAL;
+ goto err_wed_exit;
}
eth->clks[i] = NULL;
}
err = mtk_hw_init(eth);
if (err)
- return err;
+ goto err_wed_exit;
eth->hwlro = MTK_HAS_CAPS(eth->soc->caps, MTK_HWLRO);
mtk_free_dev(eth);
err_deinit_hw:
mtk_hw_deinit(eth);
+err_wed_exit:
+ mtk_wed_exit();
return err;
}
phylink_disconnect_phy(mac->phylink);
}
+ mtk_wed_exit();
mtk_hw_deinit(eth);
netif_napi_del(&eth->tx_napi);
return 0;
}
-static inline bool mtk_foe_entry_usable(struct mtk_foe_entry *entry)
-{
- return !(entry->ib1 & MTK_FOE_IB1_STATIC) &&
- FIELD_GET(MTK_FOE_IB1_STATE, entry->ib1) != MTK_FOE_STATE_BIND;
-}
-
static bool
mtk_flow_entry_match(struct mtk_eth *eth, struct mtk_flow_entry *entry,
struct mtk_foe_entry *data)
pdev = of_find_device_by_node(np);
if (!pdev)
- return;
+ goto err_of_node_put;
get_device(&pdev->dev);
irq = platform_get_irq(pdev, 0);
if (irq < 0)
- return;
+ goto err_put_device;
regs = syscon_regmap_lookup_by_phandle(np, NULL);
if (IS_ERR(regs))
- return;
+ goto err_put_device;
rcu_assign_pointer(mtk_soc_wed_ops, &wed_ops);
hw_list[index] = hw;
+ mutex_unlock(&hw_lock);
+
+ return;
+
unlock:
mutex_unlock(&hw_lock);
+err_put_device:
+ put_device(&pdev->dev);
+err_of_node_put:
+ of_node_put(np);
}
void mtk_wed_exit(void)
hw_list[i] = NULL;
debugfs_remove(hw->debugfs_dir);
put_device(hw->dev);
+ of_node_put(hw->node);
kfree(hw);
}
}
ctx->dev = dev;
/* Starts at 1 to avoid doing wake_up if we are not cleaning up */
atomic_set(&ctx->num_inflight, 1);
- init_waitqueue_head(&ctx->wait);
+ init_completion(&ctx->inflight_done);
}
EXPORT_SYMBOL(mlx5_cmd_init_async_ctx);
*/
void mlx5_cmd_cleanup_async_ctx(struct mlx5_async_ctx *ctx)
{
- atomic_dec(&ctx->num_inflight);
- wait_event(ctx->wait, atomic_read(&ctx->num_inflight) == 0);
+ if (!atomic_dec_and_test(&ctx->num_inflight))
+ wait_for_completion(&ctx->inflight_done);
}
EXPORT_SYMBOL(mlx5_cmd_cleanup_async_ctx);
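
The mlx5 hunks above convert the async-context teardown from wait_event() on an atomic counter to a completion. The counter is pre-biased to 1 so callbacks never fire the completion while the context is live; teardown drops that bias and waits only if callbacks are still in flight. The whole pattern, condensed (ctx fields as in the driver):

	/* init: bias the count; teardown owns this reference */
	atomic_set(&ctx->num_inflight, 1);
	init_completion(&ctx->inflight_done);

	/* each finishing async callback: */
	if (atomic_dec_and_test(&ctx->num_inflight))
		complete(&ctx->inflight_done);

	/* teardown: drop the bias, wait only when work remains */
	if (!atomic_dec_and_test(&ctx->num_inflight))
		wait_for_completion(&ctx->inflight_done);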
status = cmd_status_err(ctx->dev, status, work->opcode, work->out);
work->user_callback(status, work);
if (atomic_dec_and_test(&ctx->num_inflight))
- wake_up(&ctx->wait);
+ complete(&ctx->inflight_done);
}
int mlx5_cmd_exec_cb(struct mlx5_async_ctx *ctx, void *in, int in_size,
ret = cmd_exec(ctx->dev, in, in_size, out, out_size,
mlx5_cmd_exec_cb_handler, work, false);
if (ret && atomic_dec_and_test(&ctx->num_inflight))
- wake_up(&ctx->wait);
+ complete(&ctx->inflight_done);
return ret;
}
#include "en.h"
#include "en_stats.h"
+#include "en/txrx.h"
#include <linux/ptp_classify.h>
#define MLX5E_PTP_CHANNEL_IX 0
fk.ports.dst == htons(PTP_EV_PORT));
}
+static inline bool mlx5e_ptpsq_fifo_has_room(struct mlx5e_txqsq *sq)
+{
+ if (!sq->ptpsq)
+ return true;
+
+ return mlx5e_skb_fifo_has_room(&sq->ptpsq->skb_fifo);
+}
+
int mlx5e_ptp_open(struct mlx5e_priv *priv, struct mlx5e_params *params,
u8 lag_port, struct mlx5e_ptp **cp);
void mlx5e_ptp_close(struct mlx5e_ptp *c);
struct encap_flow_item encaps[MLX5_MAX_FLOW_FWD_VPORTS];
struct mlx5e_tc_flow *peer_flow;
struct mlx5e_mod_hdr_handle *mh; /* attached mod header instance */
+ struct mlx5e_mod_hdr_handle *slow_mh; /* attached mod header instance for slow path */
struct mlx5e_hairpin_entry *hpe; /* attached hairpin instance */
struct list_head hairpin; /* flows sharing the same hairpin */
struct list_head peer; /* flows with peer flow */
struct completion del_hw_done;
struct mlx5_flow_attr *attr;
struct list_head attrs;
+ u32 chain_mapping;
};
struct mlx5_flow_handle *
bool mlx5e_poll_tx_cq(struct mlx5e_cq *cq, int napi_budget);
void mlx5e_free_txqsq_descs(struct mlx5e_txqsq *sq);
+static inline bool
+mlx5e_skb_fifo_has_room(struct mlx5e_skb_fifo *fifo)
+{
+ return (*fifo->pc - *fifo->cc) < fifo->mask;
+}
+
static inline bool
mlx5e_wqc_has_room_for(struct mlx5_wq_cyc *wq, u16 cc, u16 pc, u16 n)
{
struct xfrm_replay_state_esn *replay_esn;
u32 seq_bottom = 0;
u8 overlap;
- u32 *esn;
if (!(sa_entry->x->props.flags & XFRM_STATE_ESN)) {
sa_entry->esn_state.trigger = 0;
sa_entry->esn_state.esn = xfrm_replay_seqhi(sa_entry->x,
htonl(seq_bottom));
- esn = &sa_entry->esn_state.esn;
sa_entry->esn_state.trigger = 1;
if (unlikely(overlap && seq_bottom < MLX5E_IPSEC_ESN_SCOPE_MID)) {
- ++(*esn);
sa_entry->esn_state.overlap = 0;
return true;
} else if (unlikely(!overlap &&
bool active)
{
struct mlx5_core_dev *mdev = macsec->mdev;
- struct mlx5_macsec_obj_attrs attrs;
+ struct mlx5_macsec_obj_attrs attrs = {};
int err = 0;
if (rx_sa->active != active)
return 0;
}
- attrs.sci = rx_sa->sci;
+ attrs.sci = cpu_to_be64((__force u64)rx_sa->sci);
attrs.enc_key_id = rx_sa->enc_key_id;
err = mlx5e_macsec_create_object(mdev, &attrs, false, &rx_sa->macsec_obj_id);
if (err)
}
rx_sa = rx_sc->rx_sa[assoc_num];
- if (rx_sa) {
+ if (!rx_sa) {
netdev_err(ctx->netdev,
- "MACsec offload rx_sc sci %lld rx_sa %d already exist\n",
+ "MACsec offload rx_sc sci %lld rx_sa %d doesn't exist\n",
sci, assoc_num);
- err = -EEXIST;
+ err = -EINVAL;
goto out;
}
}
rx_sa = rx_sc->rx_sa[assoc_num];
- if (rx_sa) {
+ if (!rx_sa) {
netdev_err(ctx->netdev,
- "MACsec offload rx_sc sci %lld rx_sa %d already exist\n",
+ "MACsec offload rx_sc sci %lld rx_sa %d doesn't exist\n",
sci, assoc_num);
- err = -EEXIST;
+ err = -EINVAL;
goto out;
}
void mlx5e_macsec_cleanup(struct mlx5e_priv *priv)
{
struct mlx5e_macsec *macsec = priv->macsec;
- struct mlx5_core_dev *mdev = macsec->mdev;
+ struct mlx5_core_dev *mdev = priv->mdev;
if (!macsec)
return;
mlx5_notifier_unregister(mdev, &macsec->nb);
-
mlx5e_macsec_fs_cleanup(macsec->macsec_fs);
-
- /* Cleanup workqueue */
destroy_workqueue(macsec->wq);
-
mlx5e_macsec_aso_cleanup(&macsec->aso, mdev);
-
- priv->macsec = NULL;
-
rhashtable_destroy(&macsec->sci_hash);
-
mutex_destroy(&macsec->lock);
-
kfree(macsec);
}
rx_rule->rule[0] = rule;
/* Rx crypto table without SCI rule */
- if (cpu_to_be64((__force u64)attrs->sci) & ntohs(MACSEC_PORT_ES)) {
+ if ((cpu_to_be64((__force u64)attrs->sci) & 0xFFFF) == ntohs(MACSEC_PORT_ES)) {
memset(spec, 0, sizeof(struct mlx5_flow_spec));
memset(&dest, 0, sizeof(struct mlx5_flow_destination));
memset(&flow_act, 0, sizeof(flow_act));
struct mlx5e_tc_flow *flow,
struct mlx5_flow_spec *spec)
{
+ struct mlx5e_tc_mod_hdr_acts mod_acts = {};
+ struct mlx5e_mod_hdr_handle *mh = NULL;
struct mlx5_flow_attr *slow_attr;
struct mlx5_flow_handle *rule;
+ bool fwd_and_modify_cap;
+ u32 chain_mapping = 0;
+ int err;
slow_attr = mlx5_alloc_flow_attr(MLX5_FLOW_NAMESPACE_FDB);
if (!slow_attr)
slow_attr->esw_attr->split_count = 0;
slow_attr->flags |= MLX5_ATTR_FLAG_SLOW_PATH;
+ fwd_and_modify_cap = MLX5_CAP_ESW_FLOWTABLE((esw)->dev, fdb_modify_header_fwd_to_table);
+ if (!fwd_and_modify_cap)
+ goto skip_restore;
+
+ err = mlx5_chains_get_chain_mapping(esw_chains(esw), flow->attr->chain, &chain_mapping);
+ if (err)
+ goto err_get_chain;
+
+ err = mlx5e_tc_match_to_reg_set(esw->dev, &mod_acts, MLX5_FLOW_NAMESPACE_FDB,
+ CHAIN_TO_REG, chain_mapping);
+ if (err)
+ goto err_reg_set;
+
+ mh = mlx5e_mod_hdr_attach(esw->dev, get_mod_hdr_table(flow->priv, flow),
+ MLX5_FLOW_NAMESPACE_FDB, &mod_acts);
+ if (IS_ERR(mh)) {
+ err = PTR_ERR(mh);
+ goto err_attach;
+ }
+
+ slow_attr->action |= MLX5_FLOW_CONTEXT_ACTION_MOD_HDR;
+ slow_attr->modify_hdr = mlx5e_mod_hdr_get(mh);
+
+skip_restore:
rule = mlx5e_tc_offload_fdb_rules(esw, flow, spec, slow_attr);
- if (!IS_ERR(rule))
- flow_flag_set(flow, SLOW);
+ if (IS_ERR(rule)) {
+ err = PTR_ERR(rule);
+ goto err_offload;
+ }
+ flow->slow_mh = mh;
+ flow->chain_mapping = chain_mapping;
+ flow_flag_set(flow, SLOW);
+
+ mlx5e_mod_hdr_dealloc(&mod_acts);
kfree(slow_attr);
return rule;
+
+err_offload:
+ if (fwd_and_modify_cap)
+ mlx5e_mod_hdr_detach(esw->dev, get_mod_hdr_table(flow->priv, flow), mh);
+err_attach:
+err_reg_set:
+ if (fwd_and_modify_cap)
+ mlx5_chains_put_chain_mapping(esw_chains(esw), chain_mapping);
+err_get_chain:
+ mlx5e_mod_hdr_dealloc(&mod_acts);
+ kfree(slow_attr);
+ return ERR_PTR(err);
}
void mlx5e_tc_unoffload_from_slow_path(struct mlx5_eswitch *esw,
slow_attr->action = MLX5_FLOW_CONTEXT_ACTION_FWD_DEST;
slow_attr->esw_attr->split_count = 0;
slow_attr->flags |= MLX5_ATTR_FLAG_SLOW_PATH;
+ if (flow->slow_mh) {
+ slow_attr->action |= MLX5_FLOW_CONTEXT_ACTION_MOD_HDR;
+ slow_attr->modify_hdr = mlx5e_mod_hdr_get(flow->slow_mh);
+ }
mlx5e_tc_unoffload_fdb_rules(esw, flow, slow_attr);
+ if (flow->slow_mh) {
+ mlx5e_mod_hdr_detach(esw->dev, get_mod_hdr_table(flow->priv, flow), flow->slow_mh);
+ mlx5_chains_put_chain_mapping(esw_chains(esw), flow->chain_mapping);
+ flow->chain_mapping = 0;
+ flow->slow_mh = NULL;
+ }
flow_flag_clear(flow, SLOW);
kfree(slow_attr);
}
attr2->action = 0;
attr2->flags = 0;
attr2->parse_attr = parse_attr;
+ attr2->esw_attr->out_count = 0;
+ attr2->esw_attr->split_count = 0;
+ attr2->dest_chain = 0;
+ attr2->dest_ft = NULL;
return attr2;
}
struct mlx5e_tc_flow_parse_attr *parse_attr;
struct mlx5_flow_attr *attr = flow->attr;
struct mlx5_esw_flow_attr *esw_attr;
+ struct net_device *filter_dev;
int err;
err = flow_action_supported(flow_action, extack);
esw_attr = attr->esw_attr;
parse_attr = attr->parse_attr;
+ filter_dev = parse_attr->filter_dev;
parse_state = &parse_attr->parse_state;
mlx5e_tc_act_init_parse_state(parse_state, flow, flow_action, extack);
parse_state->ct_priv = get_ct_priv(priv);
return err;
/* Forward to/from internal port can only have 1 dest */
- if ((netif_is_ovs_master(parse_attr->filter_dev) || esw_attr->dest_int_port) &&
+ if ((netif_is_ovs_master(filter_dev) || esw_attr->dest_int_port) &&
esw_attr->out_count > 1) {
NL_SET_ERR_MSG_MOD(extack,
"Rules with internal port can have only one destination");
return -EOPNOTSUPP;
}
+ /* Forward from tunnel/internal port to internal port is not supported */
+ if ((mlx5e_get_tc_tun(filter_dev) || netif_is_ovs_master(filter_dev)) &&
+ esw_attr->dest_int_port) {
+ NL_SET_ERR_MSG_MOD(extack,
+ "Forwarding from tunnel/internal port to internal port is not supported");
+ return -EOPNOTSUPP;
+ }
+
err = actions_prepare_mod_hdr_actions(priv, flow, attr, extack);
if (err)
return err;
if (unlikely(sq->ptpsq)) {
mlx5e_skb_cb_hwtstamp_init(skb);
mlx5e_skb_fifo_push(&sq->ptpsq->skb_fifo, skb);
+ if (!netif_tx_queue_stopped(sq->txq) &&
+ !mlx5e_skb_fifo_has_room(&sq->ptpsq->skb_fifo)) {
+ netif_tx_stop_queue(sq->txq);
+ sq->stats->stopped++;
+ }
skb_get(skb);
}
if (netif_tx_queue_stopped(sq->txq) &&
mlx5e_wqc_has_room_for(&sq->wq, sq->cc, sq->pc, sq->stop_room) &&
+ mlx5e_ptpsq_fifo_has_room(sq) &&
!test_bit(MLX5E_SQ_STATE_RECOVERING, &sq->state)) {
netif_tx_wake_queue(sq->txq);
stats->wake++;
err = -ETIMEDOUT;
}
+ do {
+ err = pci_read_config_word(dev->pdev, PCI_DEVICE_ID, &reg16);
+ if (err)
+ return err;
+ if (reg16 == dev_id)
+ break;
+ msleep(20);
+ } while (!time_after(jiffies, timeout));
+
+ if (reg16 == dev_id) {
+ mlx5_core_info(dev, "Firmware responds to PCI config cycles again\n");
+ } else {
+ mlx5_core_err(dev, "Firmware is not responsive (0x%04x) after %llu ms\n",
+ reg16, mlx5_tout_ms(dev, PCI_TOGGLE));
+ err = -ETIMEDOUT;
+ }
+
restore:
list_for_each_entry(sdev, &bridge_bus->devices, bus_list) {
pci_cfg_access_unlock(sdev);
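
The added mlx5 loop polls PCI config space until the device ID reads back or a deadline expires, then re-checks the result once more to decide between success and -ETIMEDOUT. Its jiffies-based skeleton, with a hypothetical device_ready() predicate and an arbitrary interval:

	unsigned long timeout = jiffies + msecs_to_jiffies(1000);
	bool ok;

	do {
		ok = device_ready(dev);		/* hypothetical */
		if (ok)
			break;
		msleep(20);
	} while (!time_after(jiffies, timeout));

	return ok ? 0 : -ETIMEDOUT;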
#include <linux/mlx5/device.h>
#include <linux/mlx5/transobj.h>
+#include "clock.h"
#include "aso.h"
#include "wq.h"
{
void *in, *sqc, *wq;
int inlen, err;
+ u8 ts_format;
inlen = MLX5_ST_SZ_BYTES(create_sq_in) +
sizeof(u64) * sq->wq_ctrl.buf.npages;
MLX5_SET(sqc, sqc, state, MLX5_SQC_STATE_RST);
MLX5_SET(sqc, sqc, flush_in_error_en, 1);
+ ts_format = mlx5_is_real_time_sq(mdev) ?
+ MLX5_TIMESTAMP_FORMAT_REAL_TIME :
+ MLX5_TIMESTAMP_FORMAT_FREE_RUNNING;
+ MLX5_SET(sqc, sqc, ts_format, ts_format);
+
MLX5_SET(wq, wq, wq_type, MLX5_WQ_TYPE_CYCLIC);
MLX5_SET(wq, wq, uar_page, mdev->mlx5e_res.hw_objs.bfreg.index);
MLX5_SET(wq, wq, log_wq_pg_sz, sq->wq_ctrl.buf.page_shift -
{
struct mlx5_mpfs *mpfs = dev->priv.mpfs;
- if (!MLX5_ESWITCH_MANAGER(dev))
+ if (!mpfs)
return;
WARN_ON(!hlist_empty(mpfs->hash));
int err = 0;
u32 index;
- if (!MLX5_ESWITCH_MANAGER(dev))
+ if (!mpfs)
return 0;
mutex_lock(&mpfs->lock);
int err = 0;
u32 index;
- if (!MLX5_ESWITCH_MANAGER(dev))
+ if (!mpfs)
return 0;
mutex_lock(&mpfs->lock);
err = mlx5_load_one(dev, false);
+ if (!err)
+ devlink_health_reporter_state_update(dev->priv.health.fw_fatal_reporter,
+ DEVLINK_HEALTH_REPORTER_STATE_HEALTHY);
+
mlx5_pci_trace(dev, "Done, err = %d, device %s\n", err,
!err ? "recovered" : "Failed");
}
}
remove_from_nic_tbl:
- mlx5dr_matcher_remove_from_tbl_nic(dmn, nic_matcher);
+ if (!nic_matcher->rules)
+ mlx5dr_matcher_remove_from_tbl_nic(dmn, nic_matcher);
free_hw_ste:
mlx5dr_domain_nic_unlock(nic_dmn);
char banner[sizeof(version)];
struct ksz_switch *sw = NULL;
- result = pci_enable_device(pdev);
+ result = pcim_enable_device(pdev);
if (result)
return result;
stats->rx_dropped = dev->stats.rx_dropped +
lan966x->stats[idx + SYS_COUNT_RX_LONG] +
lan966x->stats[idx + SYS_COUNT_DR_LOCAL] +
- lan966x->stats[idx + SYS_COUNT_DR_TAIL];
+ lan966x->stats[idx + SYS_COUNT_DR_TAIL] +
+ lan966x->stats[idx + SYS_COUNT_RX_RED_PRIO_0] +
+ lan966x->stats[idx + SYS_COUNT_RX_RED_PRIO_1] +
+ lan966x->stats[idx + SYS_COUNT_RX_RED_PRIO_2] +
+ lan966x->stats[idx + SYS_COUNT_RX_RED_PRIO_3] +
+ lan966x->stats[idx + SYS_COUNT_RX_RED_PRIO_4] +
+ lan966x->stats[idx + SYS_COUNT_RX_RED_PRIO_5] +
+ lan966x->stats[idx + SYS_COUNT_RX_RED_PRIO_6] +
+ lan966x->stats[idx + SYS_COUNT_RX_RED_PRIO_7];
for (i = 0; i < LAN966X_NUM_TC; i++) {
stats->rx_dropped +=
lan966x, FDMA_CH_DB_DISCARD);
tx->activated = false;
+ tx->last_in_use = -1;
}
static void lan966x_fdma_tx_reload(struct lan966x_tx *tx)
/* Get the received frame and unmap it */
db = &rx->dcbs[rx->dcb_index].db[rx->db_index];
page = rx->page[rx->dcb_index][rx->db_index];
+
+ dma_sync_single_for_cpu(lan966x->dev, (dma_addr_t)db->dataptr,
+ FDMA_DCB_STATUS_BLOCKL(db->status),
+ DMA_FROM_DEVICE);
+
skb = build_skb(page_address(page), PAGE_SIZE << rx->page_order);
if (unlikely(!skb))
goto unmap_page;
- dma_unmap_single(lan966x->dev, (dma_addr_t)db->dataptr,
- FDMA_DCB_STATUS_BLOCKL(db->status),
- DMA_FROM_DEVICE);
skb_put(skb, FDMA_DCB_STATUS_BLOCKL(db->status));
lan966x_ifh_get_src_port(skb->data, &src_port);
if (WARN_ON(src_port >= lan966x->num_phys_ports))
goto free_skb;
+ dma_unmap_single_attrs(lan966x->dev, (dma_addr_t)db->dataptr,
+ PAGE_SIZE << rx->page_order, DMA_FROM_DEVICE,
+ DMA_ATTR_SKIP_CPU_SYNC);
+
skb->dev = lan966x->ports[src_port]->dev;
skb_pull(skb, IFH_LEN * sizeof(u32));
free_skb:
kfree_skb(skb);
unmap_page:
- dma_unmap_page(lan966x->dev, (dma_addr_t)db->dataptr,
- FDMA_DCB_STATUS_BLOCKL(db->status),
- DMA_FROM_DEVICE);
+ dma_unmap_single_attrs(lan966x->dev, (dma_addr_t)db->dataptr,
+ PAGE_SIZE << rx->page_order, DMA_FROM_DEVICE,
+ DMA_ATTR_SKIP_CPU_SYNC);
__free_pages(page, rx->page_order);
return NULL;
int i;
for (i = 0; i < lan966x->num_phys_ports; ++i) {
+ struct lan966x_port *port;
int mtu;
- if (!lan966x->ports[i])
+ port = lan966x->ports[i];
+ if (!port)
continue;
- mtu = lan966x->ports[i]->dev->mtu;
+ mtu = lan_rd(lan966x, DEV_MAC_MAXLEN_CFG(port->chip_port));
if (mtu > max_mtu)
max_mtu = mtu;
}
static int lan966x_fdma_reload(struct lan966x *lan966x, int new_mtu)
{
- void *rx_dcbs, *tx_dcbs, *tx_dcbs_buf;
- dma_addr_t rx_dma, tx_dma;
+ dma_addr_t rx_dma;
+ void *rx_dcbs;
u32 size;
int err;
/* Store these for later to free them */
rx_dma = lan966x->rx.dma;
- tx_dma = lan966x->tx.dma;
rx_dcbs = lan966x->rx.dcbs;
- tx_dcbs = lan966x->tx.dcbs;
- tx_dcbs_buf = lan966x->tx.dcbs_buf;
napi_synchronize(&lan966x->napi);
napi_disable(&lan966x->napi);
size = ALIGN(size, PAGE_SIZE);
dma_free_coherent(lan966x->dev, size, rx_dcbs, rx_dma);
- lan966x_fdma_tx_disable(&lan966x->tx);
- err = lan966x_fdma_tx_alloc(&lan966x->tx);
- if (err)
- goto restore_tx;
-
- size = sizeof(struct lan966x_tx_dcb) * FDMA_DCB_MAX;
- size = ALIGN(size, PAGE_SIZE);
- dma_free_coherent(lan966x->dev, size, tx_dcbs, tx_dma);
-
- kfree(tx_dcbs_buf);
-
lan966x_fdma_wakeup_netdev(lan966x);
napi_enable(&lan966x->napi);
lan966x->rx.dcbs = rx_dcbs;
lan966x_fdma_rx_start(&lan966x->rx);
-restore_tx:
- lan966x->tx.dma = tx_dma;
- lan966x->tx.dcbs = tx_dcbs;
- lan966x->tx.dcbs_buf = tx_dcbs_buf;
-
return err;
}
max_mtu = lan966x_fdma_get_max_mtu(lan966x);
max_mtu += IFH_LEN * sizeof(u32);
+ max_mtu += SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
+ max_mtu += VLAN_HLEN * 2;
if (round_up(max_mtu, PAGE_SIZE) / PAGE_SIZE - 1 ==
lan966x->rx.page_order)
int old_mtu = dev->mtu;
int err;
- lan_wr(DEV_MAC_MAXLEN_CFG_MAX_LEN_SET(new_mtu),
+ lan_wr(DEV_MAC_MAXLEN_CFG_MAX_LEN_SET(LAN966X_HW_MTU(new_mtu)),
lan966x, DEV_MAC_MAXLEN_CFG(port->chip_port));
dev->mtu = new_mtu;
err = lan966x_fdma_change_mtu(lan966x);
if (err) {
- lan_wr(DEV_MAC_MAXLEN_CFG_MAX_LEN_SET(old_mtu),
+ lan_wr(DEV_MAC_MAXLEN_CFG_MAX_LEN_SET(LAN966X_HW_MTU(old_mtu)),
lan966x, DEV_MAC_MAXLEN_CFG(port->chip_port));
dev->mtu = old_mtu;
}
#define LAN966X_BUFFER_MEMORY (160 * 1024)
#define LAN966X_BUFFER_MIN_SZ 60
+#define LAN966X_HW_MTU(mtu) ((mtu) + ETH_HLEN + ETH_FCS_LEN)
+
#define PGID_AGGR 64
#define PGID_SRC 80
#define PGID_ENTRIES 89
#define DEV_MAC_MAXLEN_CFG_MAX_LEN_GET(x)\
FIELD_GET(DEV_MAC_MAXLEN_CFG_MAX_LEN, x)
+/* DEV:MAC_CFG_STATUS:MAC_TAGS_CFG */
+#define DEV_MAC_TAGS_CFG(t) __REG(TARGET_DEV, t, 8, 28, 0, 1, 44, 12, 0, 1, 4)
+
+#define DEV_MAC_TAGS_CFG_VLAN_DBL_AWR_ENA BIT(1)
+#define DEV_MAC_TAGS_CFG_VLAN_DBL_AWR_ENA_SET(x)\
+ FIELD_PREP(DEV_MAC_TAGS_CFG_VLAN_DBL_AWR_ENA, x)
+#define DEV_MAC_TAGS_CFG_VLAN_DBL_AWR_ENA_GET(x)\
+ FIELD_GET(DEV_MAC_TAGS_CFG_VLAN_DBL_AWR_ENA, x)
+
+#define DEV_MAC_TAGS_CFG_VLAN_AWR_ENA BIT(0)
+#define DEV_MAC_TAGS_CFG_VLAN_AWR_ENA_SET(x)\
+ FIELD_PREP(DEV_MAC_TAGS_CFG_VLAN_AWR_ENA, x)
+#define DEV_MAC_TAGS_CFG_VLAN_AWR_ENA_GET(x)\
+ FIELD_GET(DEV_MAC_TAGS_CFG_VLAN_AWR_ENA, x)
+
/* DEV:MAC_CFG_STATUS:MAC_IFG_CFG */
#define DEV_MAC_IFG_CFG(t) __REG(TARGET_DEV, t, 8, 28, 0, 1, 44, 20, 0, 1, 4)
ANA_VLAN_CFG_VLAN_POP_CNT,
lan966x, ANA_VLAN_CFG(port->chip_port));
+ lan_rmw(DEV_MAC_TAGS_CFG_VLAN_AWR_ENA_SET(port->vlan_aware) |
+ DEV_MAC_TAGS_CFG_VLAN_DBL_AWR_ENA_SET(port->vlan_aware),
+ DEV_MAC_TAGS_CFG_VLAN_AWR_ENA |
+ DEV_MAC_TAGS_CFG_VLAN_DBL_AWR_ENA,
+ lan966x, DEV_MAC_TAGS_CFG(port->chip_port));
+
/* Drop frames with multicast source address */
val = ANA_DROP_CFG_DROP_MC_SMAC_ENA_SET(1);
if (port->vlan_aware && !pvid)
return val;
}
-static int nfp_pf_cfg_hwinfo(struct nfp_pf *pf, bool sp_indiff)
+static void nfp_pf_cfg_hwinfo(struct nfp_pf *pf)
{
struct nfp_nsp *nsp;
char hwinfo[32];
+ bool sp_indiff;
int err;
nsp = nfp_nsp_open(pf->cpp);
if (IS_ERR(nsp))
- return PTR_ERR(nsp);
+ return;
+
+ if (!nfp_nsp_has_hwinfo_set(nsp))
+ goto end;
+ sp_indiff = (nfp_net_pf_get_app_id(pf) == NFP_APP_FLOWER_NIC) ||
+ (nfp_net_pf_get_app_cap(pf) & NFP_NET_APP_CAP_SP_INDIFF);
+
+	/* No need to clean `sp_indiff` in the driver; the management firmware
+	 * will do it when the application firmware is unloaded.
+	 */
snprintf(hwinfo, sizeof(hwinfo), "sp_indiff=%d", sp_indiff);
err = nfp_nsp_hwinfo_set(nsp, hwinfo, sizeof(hwinfo));
/* Not a fatal error, no need to return error to stop driver from loading */
pf->eth_tbl = __nfp_eth_read_ports(pf->cpp, nsp);
}
+end:
nfp_nsp_close(nsp);
- return 0;
-}
-
-static int nfp_pf_nsp_cfg(struct nfp_pf *pf)
-{
- bool sp_indiff = (nfp_net_pf_get_app_id(pf) == NFP_APP_FLOWER_NIC) ||
- (nfp_net_pf_get_app_cap(pf) & NFP_NET_APP_CAP_SP_INDIFF);
-
- return nfp_pf_cfg_hwinfo(pf, sp_indiff);
-}
-
-static void nfp_pf_nsp_clean(struct nfp_pf *pf)
-{
- nfp_pf_cfg_hwinfo(pf, false);
}
static int nfp_pci_probe(struct pci_dev *pdev,
goto err_fw_unload;
}
- err = nfp_pf_nsp_cfg(pf);
- if (err)
- goto err_fw_unload;
+ nfp_pf_cfg_hwinfo(pf);
err = nfp_net_pci_probe(pf);
if (err)
- goto err_nsp_clean;
+ goto err_fw_unload;
err = nfp_hwmon_register(pf);
if (err) {
err_net_remove:
nfp_net_pci_remove(pf);
-err_nsp_clean:
- nfp_pf_nsp_clean(pf);
err_fw_unload:
kfree(pf->rtbl);
nfp_mip_close(pf->mip);
nfp_net_pci_remove(pf);
- nfp_pf_nsp_clean(pf);
vfree(pf->dumpspec);
kfree(pf->rtbl);
nfp_mip_close(pf->mip);
* than the full array, but leave the qcq shells in place
*/
for (i = lif->nxqs; i < lif->ionic->ntxqs_per_lif; i++) {
- lif->txqcqs[i]->flags &= ~IONIC_QCQ_F_INTR;
- ionic_qcq_free(lif, lif->txqcqs[i]);
+ if (lif->txqcqs && lif->txqcqs[i]) {
+ lif->txqcqs[i]->flags &= ~IONIC_QCQ_F_INTR;
+ ionic_qcq_free(lif, lif->txqcqs[i]);
+ }
- lif->rxqcqs[i]->flags &= ~IONIC_QCQ_F_INTR;
- ionic_qcq_free(lif, lif->rxqcqs[i]);
+ if (lif->rxqcqs && lif->rxqcqs[i]) {
+ lif->rxqcqs[i]->flags &= ~IONIC_QCQ_F_INTR;
+ ionic_qcq_free(lif, lif->rxqcqs[i]);
+ }
}
if (err)
bool was_enabled = efx->port_enabled;
int rc;
+#ifdef CONFIG_SFC_SRIOV
+ /* If this function is a VF and we have access to the parent PF,
+ * then use the PF control path to attempt to change the VF MAC address.
+ */
+ if (efx->pci_dev->is_virtfn && efx->pci_dev->physfn) {
+ struct efx_nic *efx_pf = pci_get_drvdata(efx->pci_dev->physfn);
+ struct efx_ef10_nic_data *nic_data = efx->nic_data;
+ u8 mac[ETH_ALEN];
+
+ /* net_dev->dev_addr can be zeroed by efx_net_stop in
+ * efx_ef10_sriov_set_vf_mac, so pass in a copy.
+ */
+ ether_addr_copy(mac, efx->net_dev->dev_addr);
+
+ rc = efx_ef10_sriov_set_vf_mac(efx_pf, nic_data->vf_index, mac);
+ if (!rc)
+ return 0;
+
+ netif_dbg(efx, drv, efx->net_dev,
+ "Updating VF mac via PF failed (%d), setting directly\n",
+ rc);
+ }
+#endif
+
efx_device_detach_sync(efx);
efx_net_stop(efx->net_dev);
efx_net_open(efx->net_dev);
efx_device_attach_if_not_resetting(efx);
-#ifdef CONFIG_SFC_SRIOV
- if (efx->pci_dev->is_virtfn && efx->pci_dev->physfn) {
- struct efx_ef10_nic_data *nic_data = efx->nic_data;
- struct pci_dev *pci_dev_pf = efx->pci_dev->physfn;
-
- if (rc == -EPERM) {
- struct efx_nic *efx_pf;
-
- /* Switch to PF and change MAC address on vport */
- efx_pf = pci_get_drvdata(pci_dev_pf);
-
- rc = efx_ef10_sriov_set_vf_mac(efx_pf,
- nic_data->vf_index,
- efx->net_dev->dev_addr);
- } else if (!rc) {
- struct efx_nic *efx_pf = pci_get_drvdata(pci_dev_pf);
- struct efx_ef10_nic_data *nic_data = efx_pf->nic_data;
- unsigned int i;
-
- /* MAC address successfully changed by VF (with MAC
- * spoofing) so update the parent PF if possible.
- */
- for (i = 0; i < efx_pf->vf_count; ++i) {
- struct ef10_vf *vf = nic_data->vf + i;
-
- if (vf->efx == efx) {
- ether_addr_copy(vf->mac,
- efx->net_dev->dev_addr);
- return 0;
- }
- }
- }
- } else
-#endif
if (rc == -EPERM) {
netif_err(efx, drv, efx->net_dev,
"Cannot change MAC address; use sfboot to enable"
/* Allocate and initialise a struct net_device */
net_dev = alloc_etherdev_mq(sizeof(probe_data), EFX_MAX_CORE_TX_QUEUES);
- if (!net_dev)
- return -ENOMEM;
+ if (!net_dev) {
+ rc = -ENOMEM;
+ goto fail0;
+ }
probe_ptr = netdev_priv(net_dev);
*probe_ptr = probe_data;
efx->net_dev = net_dev;
WARN_ON(rc > 0);
netif_dbg(efx, drv, efx->net_dev, "initialisation failed. rc=%d\n", rc);
free_netdev(net_dev);
+ fail0:
+ kfree(probe_data);
return rc;
}
u32 priority:2;
u32 flags:6;
u32 dmaq_id:12;
- u32 vport_id;
u32 rss_context;
- __be16 outer_vid __aligned(4); /* allow jhash2() of match values */
+ u32 vport_id;
+ __be16 outer_vid;
__be16 inner_vid;
u8 loc_mac[ETH_ALEN];
u8 rem_mac[ETH_ALEN];
(EFX_FILTER_FLAG_RX | EFX_FILTER_FLAG_TX)))
return false;
- return memcmp(&left->outer_vid, &right->outer_vid,
+ return memcmp(&left->vport_id, &right->vport_id,
sizeof(struct efx_filter_spec) -
- offsetof(struct efx_filter_spec, outer_vid)) == 0;
+ offsetof(struct efx_filter_spec, vport_id)) == 0;
}
u32 efx_filter_spec_hash(const struct efx_filter_spec *spec)
{
- BUILD_BUG_ON(offsetof(struct efx_filter_spec, outer_vid) & 3);
- return jhash2((const u32 *)&spec->outer_vid,
+ BUILD_BUG_ON(offsetof(struct efx_filter_spec, vport_id) & 3);
+ return jhash2((const u32 *)&spec->vport_id,
(sizeof(struct efx_filter_spec) -
- offsetof(struct efx_filter_spec, outer_vid)) / 4,
+ offsetof(struct efx_filter_spec, vport_id)) / 4,
0);
}
ret = PTR_ERR(priv->phydev);
dev_err(priv->dev, "get_phy_device err(%d)\n", ret);
priv->phydev = NULL;
+ mdiobus_unregister(bus);
return -ENODEV;
}
ret = phy_device_register(priv->phydev);
if (ret) {
+ phy_device_free(priv->phydev);
mdiobus_unregister(bus);
dev_err(priv->dev,
"phy_device_register err(%d)\n", ret);
phy_support_asym_pause(phydev);
+ phydev->mac_managed_pm = true;
+
phy_attached_info(phydev);
return 0;
ave_global_reset(ndev);
+ ret = phy_init_hw(ndev->phydev);
+ if (ret)
+ return ret;
+
ave_ethtool_get_wol(ndev, &wol);
wol.wolopts = priv->wolopts;
__ave_ethtool_set_wol(ndev, &wol);
struct stmmac_resources res;
struct device_node *np;
int ret, i, phy_mode;
- bool mdio = false;
np = dev_of_node(&pdev->dev);
if (!plat)
return -ENOMEM;
+ plat->mdio_node = of_get_child_by_name(np, "mdio");
if (plat->mdio_node) {
- dev_err(&pdev->dev, "Found MDIO subnode\n");
- mdio = true;
- }
+ dev_info(&pdev->dev, "Found MDIO subnode\n");
- if (mdio) {
plat->mdio_bus_data = devm_kzalloc(&pdev->dev,
sizeof(*plat->mdio_bus_data),
GFP_KERNEL);
.set_rgmii_speed = rk3588_set_gmac_speed,
.set_rmii_speed = rk3588_set_gmac_speed,
.set_clock_selection = rk3588_set_clock_selection,
+ .regs_valid = true,
+ .regs = {
+ 0xfe1b0000, /* gmac0 */
+ 0xfe1c0000, /* gmac1 */
+ 0x0, /* sentinel */
+ },
};
#define RV1108_GRF_GMAC_CON0 0X0900
if (priv->plat->tx_queues_to_use > 1)
priv->phylink_config.mac_capabilities &=
~(MAC_10HD | MAC_100HD | MAC_1000HD);
+ priv->phylink_config.mac_managed_pm = true;
phylink = phylink_create(&priv->phylink_config, fwnode,
mode, &stmmac_phylink_mac_ops);
void __iomem *erxregs = hp->erxregs;
void __iomem *bregs = hp->bigmacregs;
void __iomem *tregs = hp->tcvregs;
- const char *bursts;
+ const char *bursts = "64";
u32 regtmp, rxcfg;
/* If auto-negotiation timer is running, kill it. */
* @next_tx_buf_to_use: next Tx buffer to write to
* @next_rx_buf_to_use: next Rx buffer to read from
* @base_addr: base address of the Emaclite device
- * @reset_lock: lock used for synchronization
+ * @reset_lock: lock to serialize xmit and tx_timeout execution
* @deferred_skb: holds an skb (for transmission at a later time) when the
* Tx buffer is not free
* @phy_dev: pointer to the PHY device
#include <linux/vmalloc.h>
#include <linux/rtnetlink.h>
#include <linux/ucs2_string.h>
+#include <linux/string.h>
#include "hyperv_net.h"
#include "netvsc_trace.h"
if (resp->msg_len <=
sizeof(struct rndis_message) + RNDIS_EXT_LEN) {
memcpy(&request->response_msg, resp, RNDIS_HEADER_SIZE + sizeof(*req_id));
- memcpy((void *)&request->response_msg + RNDIS_HEADER_SIZE + sizeof(*req_id),
+ unsafe_memcpy((void *)&request->response_msg + RNDIS_HEADER_SIZE + sizeof(*req_id),
data + RNDIS_HEADER_SIZE + sizeof(*req_id),
- resp->msg_len - RNDIS_HEADER_SIZE - sizeof(*req_id));
+ resp->msg_len - RNDIS_HEADER_SIZE - sizeof(*req_id),
+ "request->response_msg is followed by a padding of RNDIS_EXT_LEN inside rndis_request");
if (request->request_msg.ndis_msg_type ==
RNDIS_MSG_QUERY && request->request_msg.msg.
query_req.oid == RNDIS_OID_GEN_MEDIA_CONNECT_STATUS)
static const struct ipa_resource ipa_resource_src[] = {
[IPA_RESOURCE_TYPE_SRC_PKT_CONTEXTS] = {
.limits[IPA_RSRC_GROUP_SRC_LWA_DL] = {
- .min = 1, .max = 255,
+ .min = 1, .max = 63,
},
.limits[IPA_RSRC_GROUP_SRC_UL_DL] = {
- .min = 1, .max = 255,
+ .min = 1, .max = 63,
},
.limits[IPA_RSRC_GROUP_SRC_UC_RX_Q] = {
.min = 1, .max = 63,
const struct ipa_reg *reg;
u32 val;
+ if (ipa->version < IPA_VERSION_3_5_1)
+ return;
+
reg = ipa_reg(ipa, IDLE_INDICATION_CFG);
val = ipa_reg_encode(reg, ENTER_IDLE_DEBOUNCE_THRESH,
enter_idle_debounce_thresh);
IPA_REG_FIELDS(COUNTER_CFG, counter_cfg, 0x000001f0);
static const u32 ipa_reg_src_rsrc_grp_01_rsrc_type_fmask[] = {
- [X_MIN_LIM] = GENMASK(5, 0),
- /* Bits 6-7 reserved */
- [X_MAX_LIM] = GENMASK(13, 8),
- /* Bits 14-15 reserved */
- [Y_MIN_LIM] = GENMASK(21, 16),
- /* Bits 22-23 reserved */
- [Y_MAX_LIM] = GENMASK(29, 24),
- /* Bits 30-31 reserved */
+ [X_MIN_LIM] = GENMASK(7, 0),
+ [X_MAX_LIM] = GENMASK(15, 8),
+ [Y_MIN_LIM] = GENMASK(23, 16),
+ [Y_MAX_LIM] = GENMASK(31, 24),
};
IPA_REG_STRIDE_FIELDS(SRC_RSRC_GRP_01_RSRC_TYPE, src_rsrc_grp_01_rsrc_type,
0x00000400, 0x0020);
static const u32 ipa_reg_src_rsrc_grp_23_rsrc_type_fmask[] = {
- [X_MIN_LIM] = GENMASK(5, 0),
- /* Bits 6-7 reserved */
- [X_MAX_LIM] = GENMASK(13, 8),
- /* Bits 14-15 reserved */
- [Y_MIN_LIM] = GENMASK(21, 16),
- /* Bits 22-23 reserved */
- [Y_MAX_LIM] = GENMASK(29, 24),
- /* Bits 30-31 reserved */
+ [X_MIN_LIM] = GENMASK(7, 0),
+ [X_MAX_LIM] = GENMASK(15, 8),
+ [Y_MIN_LIM] = GENMASK(23, 16),
+ [Y_MAX_LIM] = GENMASK(31, 24),
};
IPA_REG_STRIDE_FIELDS(SRC_RSRC_GRP_23_RSRC_TYPE, src_rsrc_grp_23_rsrc_type,
0x00000404, 0x0020);
static const u32 ipa_reg_src_rsrc_grp_45_rsrc_type_fmask[] = {
- [X_MIN_LIM] = GENMASK(5, 0),
- /* Bits 6-7 reserved */
- [X_MAX_LIM] = GENMASK(13, 8),
- /* Bits 14-15 reserved */
- [Y_MIN_LIM] = GENMASK(21, 16),
- /* Bits 22-23 reserved */
- [Y_MAX_LIM] = GENMASK(29, 24),
- /* Bits 30-31 reserved */
+ [X_MIN_LIM] = GENMASK(7, 0),
+ [X_MAX_LIM] = GENMASK(15, 8),
+ [Y_MIN_LIM] = GENMASK(23, 16),
+ [Y_MAX_LIM] = GENMASK(31, 24),
};
IPA_REG_STRIDE_FIELDS(SRC_RSRC_GRP_45_RSRC_TYPE, src_rsrc_grp_45_rsrc_type,
0x00000408, 0x0020);
static const u32 ipa_reg_src_rsrc_grp_67_rsrc_type_fmask[] = {
- [X_MIN_LIM] = GENMASK(5, 0),
- /* Bits 6-7 reserved */
- [X_MAX_LIM] = GENMASK(13, 8),
- /* Bits 14-15 reserved */
- [Y_MIN_LIM] = GENMASK(21, 16),
- /* Bits 22-23 reserved */
- [Y_MAX_LIM] = GENMASK(29, 24),
- /* Bits 30-31 reserved */
+ [X_MIN_LIM] = GENMASK(7, 0),
+ [X_MAX_LIM] = GENMASK(15, 8),
+ [Y_MIN_LIM] = GENMASK(23, 16),
+ [Y_MAX_LIM] = GENMASK(31, 24),
};
IPA_REG_STRIDE_FIELDS(SRC_RSRC_GRP_67_RSRC_TYPE, src_rsrc_grp_67_rsrc_type,
0x0000040c, 0x0020);
static const u32 ipa_reg_dst_rsrc_grp_01_rsrc_type_fmask[] = {
- [X_MIN_LIM] = GENMASK(5, 0),
- /* Bits 6-7 reserved */
- [X_MAX_LIM] = GENMASK(13, 8),
- /* Bits 14-15 reserved */
- [Y_MIN_LIM] = GENMASK(21, 16),
- /* Bits 22-23 reserved */
- [Y_MAX_LIM] = GENMASK(29, 24),
- /* Bits 30-31 reserved */
+ [X_MIN_LIM] = GENMASK(7, 0),
+ [X_MAX_LIM] = GENMASK(15, 8),
+ [Y_MIN_LIM] = GENMASK(23, 16),
+ [Y_MAX_LIM] = GENMASK(31, 24),
};
IPA_REG_STRIDE_FIELDS(DST_RSRC_GRP_01_RSRC_TYPE, dst_rsrc_grp_01_rsrc_type,
0x00000500, 0x0020);
static const u32 ipa_reg_dst_rsrc_grp_23_rsrc_type_fmask[] = {
- [X_MIN_LIM] = GENMASK(5, 0),
- /* Bits 6-7 reserved */
- [X_MAX_LIM] = GENMASK(13, 8),
- /* Bits 14-15 reserved */
- [Y_MIN_LIM] = GENMASK(21, 16),
- /* Bits 22-23 reserved */
- [Y_MAX_LIM] = GENMASK(29, 24),
- /* Bits 30-31 reserved */
+ [X_MIN_LIM] = GENMASK(7, 0),
+ [X_MAX_LIM] = GENMASK(15, 8),
+ [Y_MIN_LIM] = GENMASK(23, 16),
+ [Y_MAX_LIM] = GENMASK(31, 24),
};
IPA_REG_STRIDE_FIELDS(DST_RSRC_GRP_23_RSRC_TYPE, dst_rsrc_grp_23_rsrc_type,
0x00000504, 0x0020);
static const u32 ipa_reg_dst_rsrc_grp_45_rsrc_type_fmask[] = {
- [X_MIN_LIM] = GENMASK(5, 0),
- /* Bits 6-7 reserved */
- [X_MAX_LIM] = GENMASK(13, 8),
- /* Bits 14-15 reserved */
- [Y_MIN_LIM] = GENMASK(21, 16),
- /* Bits 22-23 reserved */
- [Y_MAX_LIM] = GENMASK(29, 24),
- /* Bits 30-31 reserved */
+ [X_MIN_LIM] = GENMASK(7, 0),
+ [X_MAX_LIM] = GENMASK(15, 8),
+ [Y_MIN_LIM] = GENMASK(23, 16),
+ [Y_MAX_LIM] = GENMASK(31, 24),
};
IPA_REG_STRIDE_FIELDS(DST_RSRC_GRP_45_RSRC_TYPE, dst_rsrc_grp_45_rsrc_type,
0x00000508, 0x0020);
static const u32 ipa_reg_dst_rsrc_grp_67_rsrc_type_fmask[] = {
- [X_MIN_LIM] = GENMASK(5, 0),
- /* Bits 6-7 reserved */
- [X_MAX_LIM] = GENMASK(13, 8),
- /* Bits 14-15 reserved */
- [Y_MIN_LIM] = GENMASK(21, 16),
- /* Bits 22-23 reserved */
- [Y_MAX_LIM] = GENMASK(29, 24),
- /* Bits 30-31 reserved */
+ [X_MIN_LIM] = GENMASK(7, 0),
+ [X_MAX_LIM] = GENMASK(15, 8),
+ [Y_MIN_LIM] = GENMASK(23, 16),
+ [Y_MAX_LIM] = GENMASK(31, 24),
};
IPA_REG_STRIDE_FIELDS(DST_RSRC_GRP_67_RSRC_TYPE, dst_rsrc_grp_67_rsrc_type,
}
spin_unlock(&port->bc_queue.lock);
- schedule_work(&port->bc_work);
+ queue_work(system_unbound_wq, &port->bc_work);
if (err)
goto free_nskb;
static void nsim_bus_dev_release(struct device *dev)
{
+ struct nsim_bus_dev *nsim_bus_dev;
+
+ nsim_bus_dev = container_of(dev, struct nsim_bus_dev, dev);
+ kfree(nsim_bus_dev);
}
static struct device_type nsim_bus_dev_type = {
err_nsim_bus_dev_id_free:
ida_free(&nsim_bus_dev_ids, nsim_bus_dev->dev.id);
+ put_device(&nsim_bus_dev->dev);
+ nsim_bus_dev = NULL;
err_nsim_bus_dev_free:
kfree(nsim_bus_dev);
return ERR_PTR(err);
{
/* Disallow using nsim_bus_dev */
smp_store_release(&nsim_bus_dev->init, false);
- device_unregister(&nsim_bus_dev->dev);
ida_free(&nsim_bus_dev_ids, nsim_bus_dev->dev.id);
- kfree(nsim_bus_dev);
+ device_unregister(&nsim_bus_dev->dev);
}
static struct device_driver nsim_driver = {
if (IS_ERR(nsim_dev->ddir))
return PTR_ERR(nsim_dev->ddir);
nsim_dev->ports_ddir = debugfs_create_dir("ports", nsim_dev->ddir);
- if (IS_ERR(nsim_dev->ports_ddir))
- return PTR_ERR(nsim_dev->ports_ddir);
+ if (IS_ERR(nsim_dev->ports_ddir)) {
+ err = PTR_ERR(nsim_dev->ports_ddir);
+ goto err_ddir;
+ }
debugfs_create_bool("fw_update_status", 0600, nsim_dev->ddir,
&nsim_dev->fw_update_status);
debugfs_create_u32("fw_update_overwrite_mask", 0600, nsim_dev->ddir,
nsim_dev->nodes_ddir = debugfs_create_dir("rate_nodes", nsim_dev->ddir);
if (IS_ERR(nsim_dev->nodes_ddir)) {
err = PTR_ERR(nsim_dev->nodes_ddir);
- goto err_out;
+ goto err_ports_ddir;
}
debugfs_create_bool("fail_trap_drop_counter_get", 0600,
nsim_dev->ddir,
nsim_udp_tunnels_debugfs_create(nsim_dev);
return 0;
-err_out:
+err_ports_ddir:
debugfs_remove_recursive(nsim_dev->ports_ddir);
+err_ddir:
debugfs_remove_recursive(nsim_dev->ddir);
return err;
}
&params);
if (err) {
pr_err("Failed to register IPv4 top resource\n");
- goto out;
+ goto err_out;
}
err = devl_resource_register(devlink, "fib", (u64)-1,
NSIM_RESOURCE_IPV4, &params);
if (err) {
pr_err("Failed to register IPv4 FIB resource\n");
- return err;
+ goto err_out;
}
err = devl_resource_register(devlink, "fib-rules", (u64)-1,
NSIM_RESOURCE_IPV4, &params);
if (err) {
pr_err("Failed to register IPv4 FIB rules resource\n");
- return err;
+ goto err_out;
}
/* Resources for IPv6 */
&params);
if (err) {
pr_err("Failed to register IPv6 top resource\n");
- goto out;
+ goto err_out;
}
err = devl_resource_register(devlink, "fib", (u64)-1,
NSIM_RESOURCE_IPV6, &params);
if (err) {
pr_err("Failed to register IPv6 FIB resource\n");
- return err;
+ goto err_out;
}
err = devl_resource_register(devlink, "fib-rules", (u64)-1,
NSIM_RESOURCE_IPV6, &params);
if (err) {
pr_err("Failed to register IPv6 FIB rules resource\n");
- return err;
+ goto err_out;
}
/* Resources for nexthops */
NSIM_RESOURCE_NEXTHOPS,
DEVLINK_RESOURCE_ID_PARENT_TOP,
&params);
+ if (err) {
+ pr_err("Failed to register NEXTHOPS resource\n");
+ goto err_out;
+ }
+ return 0;
-out:
+err_out:
+ devl_resources_unregister(devlink);
return err;
}
DP83822_EEE_ERROR_CHANGE_INT_EN);
if (!dp83822->fx_enabled)
- misr_status |= DP83822_MDI_XOVER_INT_EN |
- DP83822_ANEG_ERR_INT_EN |
+ misr_status |= DP83822_ANEG_ERR_INT_EN |
DP83822_WOL_PKT_INT_EN;
err = phy_write(phydev, MII_DP83822_MISR2, misr_status);
else
val &= ~DP83867_SGMII_TYPE;
phy_write_mmd(phydev, DP83867_DEVADDR, DP83867_SGMIICTL, val);
+
+ /* This is a SW workaround for link instability if RX_CTRL is
+ * not strapped to mode 3 or 4 in HW. This is required for SGMII
+ * in addition to clearing bit 7, handled above.
+ */
+ if (dp83867->rxctrl_strap_quirk)
+ phy_set_bits_mmd(phydev, DP83867_DEVADDR, DP83867_CFG4,
+ BIT(8));
}
val = phy_read(phydev, DP83867_CFG3);
}
for (i = 0; i < PHY_MAX_ADDR; i++) {
- if ((bus->phy_mask & (1 << i)) == 0) {
+ if ((bus->phy_mask & BIT(i)) == 0) {
struct phy_device *phydev;
phydev = mdiobus_scan(bus, i);
if (phy_interrupt_is_valid(phy))
phy_request_interrupt(phy);
+ if (pl->config->mac_managed_pm)
+ phy->mac_managed_pm = true;
+
return 0;
}
int err;
int i;
- if (it->nr_segs > MAX_SKB_FRAGS + 1)
+ if (it->nr_segs > MAX_SKB_FRAGS + 1 ||
+ len > (ETH_MAX_MTU - NET_SKB_PAD - NET_IP_ALIGN))
return ERR_PTR(-EMSGSIZE);
local_bh_disable();
return ERR_PTR(err);
err_free_dev:
- kfree(dev);
+ put_device(&dev->dev);
return ERR_PTR(err);
}
static int fdp_nci_send(struct nci_dev *ndev, struct sk_buff *skb)
{
struct fdp_nci_info *info = nci_get_drvdata(ndev);
+ int ret;
if (atomic_dec_and_test(&info->data_pkt_counter))
info->data_pkt_counter_cb(ndev);
- return info->phy_ops->write(info->phy, skb);
+ ret = info->phy_ops->write(info->phy, skb);
+ if (ret < 0) {
+ kfree_skb(skb);
+ return ret;
+ }
+
+ consume_skb(skb);
+ return 0;
}
static int fdp_nci_request_firmware(struct nci_dev *ndev)
ret = -EREMOTEIO;
} else
ret = 0;
+ }
+
+ if (ret) {
kfree_skb(skb);
+ return ret;
}
- return ret;
+ consume_skb(skb);
+ return 0;
}
static void nfcmrvl_i2c_nci_update_config(struct nfcmrvl_private *priv,
return -EINVAL;
r = info->phy_ops->write(info->phy_id, skb);
- if (r < 0)
+ if (r < 0) {
kfree_skb(skb);
+ return r;
+ }
- return r;
+ consume_skb(skb);
+ return 0;
}
static int nxp_nci_rf_pll_unlocked_ntf(struct nci_dev *ndev,
}
ret = s3fwrn5_write(info, skb);
- if (ret < 0)
+ if (ret < 0) {
kfree_skb(skb);
+ mutex_unlock(&info->mutex);
+ return ret;
+ }
+ consume_skb(skb);
mutex_unlock(&info->mutex);
- return ret;
+ return 0;
}
static int s3fwrn5_nci_post_setup(struct nci_dev *ndev)
mutex_lock(&nci_mutex);
if (state != virtual_ncidev_enabled) {
mutex_unlock(&nci_mutex);
+ kfree_skb(skb);
return 0;
}
if (send_buff) {
mutex_unlock(&nci_mutex);
+ kfree_skb(skb);
return -1;
}
send_buff = skb_copy(skb, GFP_KERNEL);
mutex_unlock(&nci_mutex);
wake_up_interruptible(&wq);
+ consume_skb(skb);
return 0;
}
dma_max_mapping_size(anv->dev) >> 9);
anv->ctrl.max_segments = NVME_MAX_SEGS;
+ dma_set_max_seg_size(anv->dev, 0xffffffff);
+
/*
* Enable NVMMU and linear submission queues.
* While we could keep those disabled and pretend this is slightly
return ret;
if (!ctrl->identified && !nvme_discovery_ctrl(ctrl)) {
+ /*
+	 * Do not return errors unless we are in a controller reset;
+	 * the controller works perfectly fine without hwmon.
+ */
ret = nvme_hwmon_init(ctrl);
- if (ret < 0)
+ if (ret == -EINTR)
return ret;
}
return 0;
out_cleanup_admin_q:
- blk_mq_destroy_queue(ctrl->fabrics_q);
+ blk_mq_destroy_queue(ctrl->admin_q);
out_free_tagset:
blk_mq_free_tag_set(ctrl->admin_tagset);
return ret;
struct nvme_hwmon_data {
struct nvme_ctrl *ctrl;
- struct nvme_smart_log log;
+ struct nvme_smart_log *log;
struct mutex read_lock;
};
static int nvme_hwmon_get_smart_log(struct nvme_hwmon_data *data)
{
return nvme_get_log(data->ctrl, NVME_NSID_ALL, NVME_LOG_SMART, 0,
- NVME_CSI_NVM, &data->log, sizeof(data->log), 0);
+ NVME_CSI_NVM, data->log, sizeof(*data->log), 0);
}
static int nvme_hwmon_read(struct device *dev, enum hwmon_sensor_types type,
u32 attr, int channel, long *val)
{
struct nvme_hwmon_data *data = dev_get_drvdata(dev);
- struct nvme_smart_log *log = &data->log;
+ struct nvme_smart_log *log = data->log;
int temp;
int err;
case hwmon_temp_max:
case hwmon_temp_min:
if ((!channel && data->ctrl->wctemp) ||
- (channel && data->log.temp_sensor[channel - 1])) {
+ (channel && data->log->temp_sensor[channel - 1])) {
if (data->ctrl->quirks &
NVME_QUIRK_NO_TEMP_THRESH_CHANGE)
return 0444;
break;
case hwmon_temp_input:
case hwmon_temp_label:
- if (!channel || data->log.temp_sensor[channel - 1])
+ if (!channel || data->log->temp_sensor[channel - 1])
return 0444;
break;
default:
data = kzalloc(sizeof(*data), GFP_KERNEL);
if (!data)
- return 0;
+ return -ENOMEM;
+
+ data->log = kzalloc(sizeof(*data->log), GFP_KERNEL);
+ if (!data->log) {
+ err = -ENOMEM;
+ goto err_free_data;
+ }
data->ctrl = ctrl;
mutex_init(&data->read_lock);
err = nvme_hwmon_get_smart_log(data);
if (err) {
dev_warn(dev, "Failed to read smart log (error %d)\n", err);
- kfree(data);
- return err;
+ goto err_free_log;
}
hwmon = hwmon_device_register_with_info(dev, "nvme",
NULL);
if (IS_ERR(hwmon)) {
dev_warn(dev, "Failed to instantiate hwmon device\n");
- kfree(data);
- return PTR_ERR(hwmon);
+ err = PTR_ERR(hwmon);
+ goto err_free_log;
}
ctrl->hwmon_device = hwmon;
return 0;
+
+err_free_log:
+ kfree(data->log);
+err_free_data:
+ kfree(data);
+ return err;
}
void nvme_hwmon_exit(struct nvme_ctrl *ctrl)
hwmon_device_unregister(ctrl->hwmon_device);
ctrl->hwmon_device = NULL;
+ kfree(data->log);
kfree(data);
}
}
/* set to a default value of 512 until the disk is validated */
blk_queue_logical_block_size(head->disk->queue, 512);
blk_set_stacking_limits(&head->disk->queue->limits);
+ blk_queue_dma_alignment(head->disk->queue, 3);
/* we need to propagate up the VMC settings */
if (ctrl->vwc & NVME_CTRL_VWC_PRESENT)
.driver_data = NVME_QUIRK_NO_DEEPEST_PS, },
{ PCI_DEVICE(0x2646, 0x2263), /* KINGSTON A2000 NVMe SSD */
.driver_data = NVME_QUIRK_NO_DEEPEST_PS, },
+ { PCI_DEVICE(0x2646, 0x5018), /* KINGSTON OM8SFP4xxxxP OS21012 NVMe SSD */
+ .driver_data = NVME_QUIRK_DISABLE_WRITE_ZEROES, },
+ { PCI_DEVICE(0x2646, 0x5016), /* KINGSTON OM3PGP4xxxxP OS21011 NVMe SSD */
+ .driver_data = NVME_QUIRK_DISABLE_WRITE_ZEROES, },
+ { PCI_DEVICE(0x2646, 0x501A), /* KINGSTON OM8PGP4xxxxP OS21005 NVMe SSD */
+ .driver_data = NVME_QUIRK_DISABLE_WRITE_ZEROES, },
+ { PCI_DEVICE(0x2646, 0x501B), /* KINGSTON OM8PGP4xxxxQ OS21005 NVMe SSD */
+ .driver_data = NVME_QUIRK_DISABLE_WRITE_ZEROES, },
+ { PCI_DEVICE(0x2646, 0x501E), /* KINGSTON OM3PGP4xxxxQ OS21011 NVMe SSD */
+ .driver_data = NVME_QUIRK_DISABLE_WRITE_ZEROES, },
{ PCI_DEVICE(0x1e4B, 0x1001), /* MAXIO MAP1001 */
.driver_data = NVME_QUIRK_BOGUS_NID, },
{ PCI_DEVICE(0x1e4B, 0x1002), /* MAXIO MAP1002 */
{
struct scatterlist sg;
- sg_init_marker(&sg, 1);
+ sg_init_table(&sg, 1);
sg_set_page(&sg, page, len, off);
ahash_request_set_crypt(hash, &sg, NULL, len);
crypto_ahash_update(hash);
static int nvme_tcp_try_send(struct nvme_tcp_queue *queue)
{
struct nvme_tcp_request *req;
+ unsigned int noreclaim_flag;
int ret = 1;
if (!queue->request) {
}
req = queue->request;
+ noreclaim_flag = memalloc_noreclaim_save();
if (req->state == NVME_TCP_SEND_CMD_PDU) {
ret = nvme_tcp_try_send_cmd_pdu(req);
if (ret <= 0)
goto done;
if (!nvme_tcp_has_inline_data(req))
- return ret;
+ goto out;
}
if (req->state == NVME_TCP_SEND_H2C_PDU) {
nvme_tcp_fail_request(queue->request);
nvme_tcp_done_send_req(queue);
}
+out:
+ memalloc_noreclaim_restore(noreclaim_flag);
return ret;
}
struct page *page;
struct nvme_tcp_ctrl *ctrl = to_tcp_ctrl(nctrl);
struct nvme_tcp_queue *queue = &ctrl->queues[qid];
+ unsigned int noreclaim_flag;
if (!test_and_clear_bit(NVME_TCP_Q_ALLOCATED, &queue->flags))
return;
__page_frag_cache_drain(page, queue->pf_cache.pagecnt_bias);
queue->pf_cache.va = NULL;
}
+
+ noreclaim_flag = memalloc_noreclaim_save();
sock_release(queue->sock);
+ memalloc_noreclaim_restore(noreclaim_flag);
+
kfree(queue->pdu);
mutex_destroy(&queue->send_mutex);
mutex_destroy(&queue->queue_lock);
static ssize_t nvmet_subsys_attr_qid_max_store(struct config_item *item,
const char *page, size_t cnt)
{
- struct nvmet_port *port = to_nvmet_port(item);
u16 qid_max;
- if (nvmet_is_port_enabled(port, __func__))
- return -EACCES;
-
if (sscanf(page, "%hu\n", &qid_max) != 1)
return -EINVAL;
* reset the keep alive timer when the controller is enabled.
*/
if (ctrl->kato)
- mod_delayed_work(system_wq, &ctrl->ka_work, ctrl->kato * HZ);
+ mod_delayed_work(nvmet_wq, &ctrl->ka_work, ctrl->kato * HZ);
}
static void nvmet_clear_ctrl(struct nvmet_ctrl *ctrl)
return vi->txn_irq;
}
+EXPORT_SYMBOL(iosapic_serial_irq);
#endif
* all) PA-RISC machines should have them. Anyway, for safety reasons, the
* following code can deal with just 96 bytes of Stable Storage, and all
* sizes between 96 and 192 bytes (provided they are multiple of struct
- * device_path size, eg: 128, 160 and 192) to provide full information.
+ * pdc_module_path size, eg: 128, 160 and 192) to provide full information.
* One last word: there's one path we can always count on: the primary path.
* Anything above 224 bytes is used for 'osdep2' OS-dependent storage area.
*
short ready; /* entry record is valid if != 0 */
unsigned long addr; /* entry address in stable storage */
char *name; /* entry name */
- struct device_path devpath; /* device path in parisc representation */
+ struct pdc_module_path devpath; /* device path in parisc representation */
struct device *dev; /* corresponding device */
struct kobject kobj;
};
static int
pdcspath_fetch(struct pdcspath_entry *entry)
{
- struct device_path *devpath;
+ struct pdc_module_path *devpath;
if (!entry)
return -EINVAL;
return -EIO;
/* Find the matching device.
- NOTE: hardware_path overlays with device_path, so the nice cast can
+ NOTE: hardware_path overlays with pdc_module_path, so the nice cast can
be used */
entry->dev = hwpath_to_device((struct hardware_path *)devpath);
static void
pdcspath_store(struct pdcspath_entry *entry)
{
- struct device_path *devpath;
+ struct pdc_module_path *devpath;
BUG_ON(!entry);
pdcspath_hwpath_read(struct pdcspath_entry *entry, char *buf)
{
char *out = buf;
- struct device_path *devpath;
+ struct pdc_module_path *devpath;
short i;
if (!entry || !buf)
return -ENODATA;
for (i = 0; i < 6; i++) {
- if (devpath->bc[i] >= 128)
+ if (devpath->path.bc[i] < 0)
continue;
- out += sprintf(out, "%u/", (unsigned char)devpath->bc[i]);
+ out += sprintf(out, "%d/", devpath->path.bc[i]);
}
- out += sprintf(out, "%u\n", (unsigned char)devpath->mod);
+ out += sprintf(out, "%u\n", (unsigned char)devpath->path.mod);
return out - buf;
}
for (i=5; ((temp = strrchr(in, '/'))) && (temp-in > 0) && (likely(i)); i--) {
hwpath.bc[i] = simple_strtoul(temp+1, NULL, 10);
in[temp-in] = '\0';
- DPRINTK("%s: bc[%d]: %d\n", __func__, i, hwpath.bc[i]);
+ DPRINTK("%s: bc[%d]: %d\n", __func__, i, hwpath.path.bc[i]);
}
/* Store the final field */
hwpath.bc[i] = simple_strtoul(in, NULL, 10);
- DPRINTK("%s: bc[%d]: %d\n", __func__, i, hwpath.bc[i]);
+ DPRINTK("%s: bc[%d]: %d\n", __func__, i, hwpath.path.bc[i]);
/* Now we check that the user isn't trying to lure us */
if (!(dev = hwpath_to_device((struct hardware_path *)&hwpath))) {
pdcspath_layer_read(struct pdcspath_entry *entry, char *buf)
{
char *out = buf;
- struct device_path *devpath;
+ struct pdc_module_path *devpath;
short i;
if (!entry || !buf)
pathentry = &pdcspath_entry_primary;
read_lock(&pathentry->rw_lock);
- out += sprintf(out, "%s\n", (pathentry->devpath.flags & knob) ?
+ out += sprintf(out, "%s\n", (pathentry->devpath.path.flags & knob) ?
"On" : "Off");
read_unlock(&pathentry->rw_lock);
/* print the timer value in seconds */
read_lock(&pathentry->rw_lock);
- out += sprintf(out, "%u\n", (pathentry->devpath.flags & PF_TIMER) ?
- (1 << (pathentry->devpath.flags & PF_TIMER)) : 0);
+ out += sprintf(out, "%u\n", (pathentry->devpath.path.flags & PF_TIMER) ?
+ (1 << (pathentry->devpath.path.flags & PF_TIMER)) : 0);
read_unlock(&pathentry->rw_lock);
return out - buf;
/* Be nice to the existing flag record */
read_lock(&pathentry->rw_lock);
- flags = pathentry->devpath.flags;
+ flags = pathentry->devpath.path.flags;
read_unlock(&pathentry->rw_lock);
DPRINTK("%s: flags before: 0x%X\n", __func__, flags);
write_lock(&pathentry->rw_lock);
/* Change the path entry flags first */
- pathentry->devpath.flags = flags;
+ pathentry->devpath.path.flags = flags;
/* Now, dive in. Write back to the hardware */
pdcspath_store(pathentry);
* address (access to which generates correct config transaction) falls in
* this 4 KiB region.
*/
+static unsigned int tegra_pcie_conf_offset(u8 bus, unsigned int devfn,
+ unsigned int where)
+{
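+	/*
+	 * Bits [27:24] carry the extended register number (where[11:8]),
+	 * bits [23:16] the bus, [15:11] the device, [10:8] the function
+	 * and [7:0] the low byte of the register offset.
+	 */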
+ return ((where & 0xf00) << 16) | (bus << 16) | (PCI_SLOT(devfn) << 11) |
+ (PCI_FUNC(devfn) << 8) | (where & 0xff);
+}
+
static void __iomem *tegra_pcie_map_bus(struct pci_bus *bus,
unsigned int devfn,
int where)
unsigned int offset;
u32 base;
- offset = PCI_CONF1_EXT_ADDRESS(bus->number, PCI_SLOT(devfn),
- PCI_FUNC(devfn), where) &
- ~PCI_CONF1_ENABLE;
+ offset = tegra_pcie_conf_offset(bus->number, devfn, where);
/* move 4 KiB window to offset within the FPCI region */
base = 0xfe100000 + ((offset & ~(SZ_4K - 1)) >> 8);
static const struct group_desc jz4755_groups[] = {
INGENIC_PIN_GROUP("uart0-data", jz4755_uart0_data, 0),
INGENIC_PIN_GROUP("uart0-hwflow", jz4755_uart0_hwflow, 0),
- INGENIC_PIN_GROUP("uart1-data", jz4755_uart1_data, 0),
+ INGENIC_PIN_GROUP("uart1-data", jz4755_uart1_data, 1),
INGENIC_PIN_GROUP("uart2-data", jz4755_uart2_data, 1),
INGENIC_PIN_GROUP("ssi-dt-b", jz4755_ssi_dt_b, 0),
INGENIC_PIN_GROUP("ssi-dt-f", jz4755_ssi_dt_f, 0),
"ssi-ce1-b", "ssi-ce1-f",
};
static const char *jz4755_mmc0_groups[] = { "mmc0-1bit", "mmc0-4bit", };
-static const char *jz4755_mmc1_groups[] = { "mmc0-1bit", "mmc0-4bit", };
+static const char *jz4755_mmc1_groups[] = { "mmc1-1bit", "mmc1-4bit", };
static const char *jz4755_i2c_groups[] = { "i2c-data", };
static const char *jz4755_cim_groups[] = { "cim-data", };
static const char *jz4755_lcd_groups[] = {
if (val & bit)
ack = true;
+ /* Try to clear any rising edges */
+ if (!active && ack)
+ regmap_write_bits(info->map, REG(OCELOT_GPIO_INTR, info, gpio),
+ bit, bit);
+
/* Enable the interrupt now */
gpiochip_enable_irq(chip, gpio);
regmap_update_bits(info->map, REG(OCELOT_GPIO_INTR_ENA, info, gpio),
bit, bit);
/*
- * In case the interrupt line is still active and the interrupt
- * controller has not seen any changes in the interrupt line, then it
- * means that there happen another interrupt while the line was active.
+ * In case the interrupt line is still active, it means that another
+ * interrupt happened while the line was active.
 * So we missed that one, and we need to kick the interrupt handler
 * again.
*/
- if (active && !ack) {
+ regmap_read(info->map, REG(OCELOT_GPIO_IN, info, gpio), &val);
+ if ((!(val & bit) && trigger_level == IRQ_TYPE_LEVEL_LOW) ||
+ (val & bit && trigger_level == IRQ_TYPE_LEVEL_HIGH))
+ active = true;
+
+ if (active) {
struct ocelot_irq_work *work;
work = kmalloc(sizeof(*work), GFP_ATOMIC);
break;
case PIN_CONFIG_BIAS_HIGH_IMPEDANCE:
- param = PM_PINCTRL_CONFIG_TRI_STATE;
- arg = PM_PINCTRL_TRI_STATE_ENABLE;
- ret = zynqmp_pm_pinctrl_set_config(pin, param, arg);
- break;
case PIN_CONFIG_MODE_LOW_POWER:
/*
* These cases are mentioned in dts but configurable
*/
ret = 0;
break;
- case PIN_CONFIG_OUTPUT_ENABLE:
- param = PM_PINCTRL_CONFIG_TRI_STATE;
- arg = PM_PINCTRL_TRI_STATE_DISABLE;
- ret = zynqmp_pm_pinctrl_set_config(pin, param, arg);
- break;
default:
dev_warn(pctldev->dev,
"unsupported configuration parameter '%u'\n",
* detection.
* @skip_wake_irqs: Skip IRQs that are handled by wakeup interrupt controller
* @disabled_for_mux: These IRQs were disabled because we muxed away.
+ * @ever_gpio: This bit is set the first time we mux a pin to gpio_func.
* @soc: Reference to soc_data of platform specific data.
* @regs: Base addresses for the TLMM tiles.
* @phys_base: Physical base address
DECLARE_BITMAP(enabled_irqs, MAX_NR_GPIO);
DECLARE_BITMAP(skip_wake_irqs, MAX_NR_GPIO);
DECLARE_BITMAP(disabled_for_mux, MAX_NR_GPIO);
+ DECLARE_BITMAP(ever_gpio, MAX_NR_GPIO);
const struct msm_pinctrl_soc_data *soc;
void __iomem *regs[MAX_NR_TILES];
val = msm_readl_ctl(pctrl, g);
+ /*
+ * If this is the first time muxing to GPIO and the direction is
+ * output, make sure that we're not going to be glitching the pin
+ * by reading the current state of the pin and setting it as the
+ * output.
+ */
+ if (i == gpio_func && (val & BIT(g->oe_bit)) &&
+ !test_and_set_bit(group, pctrl->ever_gpio)) {
+ u32 io_val = msm_readl_io(pctrl, g);
+
+ if (io_val & BIT(g->in_bit)) {
+ if (!(io_val & BIT(g->out_bit)))
+ msm_writel_io(io_val | BIT(g->out_bit), pctrl, g);
+ } else {
+ if (io_val & BIT(g->out_bit))
+ msm_writel_io(io_val & ~BIT(g->out_bit), pctrl, g);
+ }
+ }
+
if (egpio_func && i == egpio_func) {
if (val & BIT(g->egpio_present))
val &= ~BIT(g->egpio_enable);
struct key_entry ke;
struct backlight_device *bd;
+ bd = backlight_device_get_by_type(BACKLIGHT_PLATFORM);
+ if (bd) {
+ loongson_laptop_backlight_update(bd) ?
+ pr_warn("Loongson_backlight: resume brightness failed") :
+ pr_info("Loongson_backlight: resume brightness %d\n", bd->props.brightness);
+ }
+
/*
* Only if the firmware supports SW_LID event model, we can handle the
* event. This is for the consideration of development board without EC.
}
}
- bd = backlight_device_get_by_type(BACKLIGHT_PLATFORM);
- if (bd) {
- loongson_laptop_backlight_update(bd) ?
- pr_warn("Loongson_backlight: resume brightness failed") :
- pr_info("Loongson_backlight: resume brightness %d\n", bd->props.brightness);
- }
-
return 0;
}
if (ret < 0) {
pr_err("Failed to setup input device keymap\n");
input_free_device(generic_inputdev);
+ generic_inputdev = NULL;
return ret;
}
if (ret)
return -EINVAL;
- if (sub_driver->init)
- sub_driver->init(sub_driver);
+ if (sub_driver->init) {
+ ret = sub_driver->init(sub_driver);
+ if (ret)
+ goto err_out;
+ }
if (sub_driver->notify) {
ret = setup_acpi_notify(sub_driver);
err_out:
generic_subdriver_exit(sub_driver);
- return (ret < 0) ? ret : 0;
+ return ret;
}
static void generic_subdriver_exit(struct generic_sub_driver *sub_driver)
struct rtc_time tm;
int rc;
+ /* we haven't yet read SMU version */
+ if (!pdev->major) {
+ rc = amd_pmc_get_smu_version(pdev);
+ if (rc)
+ return rc;
+ }
+
if (pdev->major < 64 || (pdev->major == 64 && pdev->minor < 53))
return 0;
},
.driver_data = &quirk_asus_tablet_mode,
},
+ {
+ .callback = dmi_matched,
+ .ident = "ASUS ROG FLOW X16",
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
+ DMI_MATCH(DMI_PRODUCT_NAME, "GV601R"),
+ },
+ .driver_data = &quirk_asus_tablet_mode,
+ },
{},
};
X86_MATCH_INTEL_FAM6_MODEL(ALDERLAKE_N, &tgl_reg_map),
X86_MATCH_INTEL_FAM6_MODEL(ALDERLAKE, &adl_reg_map),
X86_MATCH_INTEL_FAM6_MODEL(RAPTORLAKE_P, &tgl_reg_map),
+ X86_MATCH_INTEL_FAM6_MODEL(RAPTORLAKE, &adl_reg_map),
+ X86_MATCH_INTEL_FAM6_MODEL(RAPTORLAKE_S, &adl_reg_map),
{}
};
#define TPACPI_DBG_BRGHT 0x0020
#define TPACPI_DBG_MIXER 0x0040
+#define FAN_NOT_PRESENT 65535
+
#define strlencmp(a, b) (strncmp((a), (b), strlen(b)))
/* Try and probe the 2nd fan */
tp_features.second_fan = 1; /* needed for get_speed to work */
res = fan2_get_speed(&speed);
- if (res >= 0) {
+ if (res >= 0 && speed != FAN_NOT_PRESENT) {
/* It responded - so let's assume it's there */
tp_features.second_fan = 1;
tp_features.second_fan_ctl = 1;
static inline void rtc_wake_setup(struct device *dev)
{
+ if (acpi_disabled)
+ return;
+
acpi_install_fixed_event_handler(ACPI_EVENT_RTC, rtc_handler, dev);
/*
* After the RTC handler is installed, the Fixed_RTC event should
use_acpi_alarm_quirks();
- rtc_wake_setup(dev);
acpi_rtc_info.wake_on = rtc_wake_on;
acpi_rtc_info.wake_off = rtc_wake_off;
{
}
+static void rtc_wake_setup(struct device *dev)
+{
+}
#endif
#ifdef CONFIG_PNP
{
int irq, ret;
+ cmos_wake_setup(&pnp->dev);
+
if (pnp_port_start(pnp, 0) == 0x70 && !pnp_irq_valid(pnp, 0)) {
irq = 0;
#ifdef CONFIG_X86
if (ret)
return ret;
- cmos_wake_setup(&pnp->dev);
+ rtc_wake_setup(&pnp->dev);
return 0;
}
int irq, ret;
cmos_of_init(pdev);
+ cmos_wake_setup(&pdev->dev);
if (RTC_IOMAPPED)
resource = platform_get_resource(pdev, IORESOURCE_IO, 0);
if (ret)
return ret;
- cmos_wake_setup(&pdev->dev);
+ rtc_wake_setup(&pdev->dev);
return 0;
}
{
struct idset *set = data;
struct subchannel *sch = to_subchannel(dev);
- struct ccw_device *cdev;
- if (sch->st == SUBCHANNEL_TYPE_IO) {
- cdev = sch_get_cdev(sch);
- if (cdev && cdev->online)
- idset_sch_del(set, sch->schid);
- }
+ if (sch->st == SUBCHANNEL_TYPE_IO && sch->config.ena)
+ idset_sch_del(set, sch->schid);
return 0;
}
struct mutex guests_lock; /* serializes access to each KVM guest */
struct mdev_parent parent;
struct mdev_type mdev_type;
- struct mdev_type *mdev_types[];
+ struct mdev_type *mdev_types[1];
};
extern struct ap_matrix_dev *matrix_dev;
*
* This function obtains the transmit and receive ids required to send
* an unsolicited ct command with a payload. A special lpfc FsType and CmdRsp
- * flags are used to the unsolicted response handler is able to process
+ * flags are used so the unsolicited response handler is able to process
* the ct command sent on the same port.
**/
static int lpfcdiag_loop_get_xri(struct lpfc_hba *phba, uint16_t rpi,
* @len: Number of data bytes
*
* This function allocates and posts a data buffer of sufficient size to receive
- * an unsolicted CT command.
+ * an unsolicited CT command.
**/
static int lpfcdiag_sli3_loop_post_rxbufs(struct lpfc_hba *phba, uint16_t rxxri,
size_t len)
get_job_ulpstatus(phba, piocbq));
}
lpfc_printf_log(phba, KERN_INFO, LOG_ELS,
- "0145 Ignoring unsolicted CT HBQ Size:%d "
+ "0145 Ignoring unsolicited CT HBQ Size:%d "
"status = x%x\n",
size, get_job_ulpstatus(phba, piocbq));
}
rc = lpfc_vmid_res_alloc(phba, vport);
if (rc)
- goto out;
+ goto out_put_shost;
/* Initialize all internally managed lists. */
INIT_LIST_HEAD(&vport->fc_nodes);
error = scsi_add_host_with_dma(shost, dev, &phba->pcidev->dev);
if (error)
- goto out_put_shost;
+ goto out_free_vmid;
spin_lock_irq(&phba->port_list_lock);
list_add_tail(&vport->listentry, &phba->port_list);
spin_unlock_irq(&phba->port_list_lock);
return vport;
-out_put_shost:
+out_free_vmid:
kfree(vport->vmid);
bitmap_free(vport->vmid_priority_range);
+out_put_shost:
scsi_host_put(shost);
out:
return NULL;
static
int megasas_get_device_list(struct megasas_instance *instance)
{
- memset(instance->pd_list, 0,
- (MEGASAS_MAX_PD * sizeof(struct megasas_pd_list)));
- memset(instance->ld_ids, 0xff, MEGASAS_MAX_LD_IDS);
-
if (instance->enable_fw_dev_list) {
if (megasas_host_device_list_query(instance, true))
return FAILED;
if (!fusion->ioc_init_request) {
dev_err(&pdev->dev,
- "Failed to allocate PD list buffer\n");
+ "Failed to allocate ioc init request\n");
return -ENOMEM;
}
(instance->pdev->device == PCI_DEVICE_ID_LSI_SAS0071SKINNY))
instance->flag_ieee = 1;
- megasas_dbg_lvl = 0;
instance->flag = 0;
instance->unload = 1;
instance->last_time = 0;
int megasas_update_device_list(struct megasas_instance *instance,
int event_type)
{
- int dcmd_ret = DCMD_SUCCESS;
+ int dcmd_ret;
if (instance->enable_fw_dev_list) {
- dcmd_ret = megasas_host_device_list_query(instance, false);
- if (dcmd_ret != DCMD_SUCCESS)
- goto out;
+ return megasas_host_device_list_query(instance, false);
} else {
if (event_type & SCAN_PD_CHANNEL) {
dcmd_ret = megasas_get_pd_list(instance);
-
if (dcmd_ret != DCMD_SUCCESS)
- goto out;
+ return dcmd_ret;
}
if (event_type & SCAN_VD_CHANNEL) {
if (!instance->requestorId ||
megasas_get_ld_vf_affiliation(instance, 0)) {
- dcmd_ret = megasas_ld_list_query(instance,
+ return megasas_ld_list_query(instance,
MR_LD_QUERY_TYPE_EXPOSED_TO_HOST);
- if (dcmd_ret != DCMD_SUCCESS)
- goto out;
}
}
}
-
-out:
- return dcmd_ret;
+ return DCMD_SUCCESS;
}
/**
sdev1 = scsi_device_lookup(instance->host,
MEGASAS_MAX_PD_CHANNELS +
(ld_target_id / MEGASAS_MAX_DEV_PER_CHANNEL),
- (ld_target_id - MEGASAS_MAX_DEV_PER_CHANNEL),
+ (ld_target_id % MEGASAS_MAX_DEV_PER_CHANNEL),
0);
if (sdev1)
megasas_remove_scsi_device(sdev1);
*/
pr_info("megasas: %s\n", MEGASAS_VERSION);
+ megasas_dbg_lvl = 0;
support_poll_for_event = 2;
support_device_change = 1;
support_nvme_encapsulation = true;
tristate "Broadcom MPI3 Storage Controller Device Driver"
depends on PCI && SCSI
select BLK_DEV_BSGLIB
+ select SCSI_SAS_ATTRS
help
MPI3 based Storage & RAID Controllers Driver.
u64 coherent_dma_mask, dma_mask;
if (ioc->is_mcpu_endpoint || sizeof(dma_addr_t) == 4 ||
- dma_get_required_mask(&pdev->dev) <= 32) {
+ dma_get_required_mask(&pdev->dev) <= DMA_BIT_MASK(32)) {
ioc->dma_mask = 32;
coherent_dma_mask = dma_mask = DMA_BIT_MASK(32);
/* Set 63 bit DMA mask for all SAS3 and SAS35 controllers */
static struct scsi_host_template pm8001_sht = {
.module = THIS_MODULE,
.name = DRV_NAME,
+ .proc_name = DRV_NAME,
.queuecommand = sas_queuecommand,
.dma_need_drain = ata_scsi_dma_need_drain,
.target_alloc = sas_target_alloc,
if (!capable(CAP_SYS_ADMIN) || off != 0 || count > DCBX_TLV_DATA_SIZE)
return 0;
+ mutex_lock(&vha->hw->optrom_mutex);
if (ha->dcbx_tlv)
goto do_read;
- mutex_lock(&vha->hw->optrom_mutex);
if (qla2x00_chip_is_down(vha)) {
mutex_unlock(&vha->hw->optrom_mutex);
return 0;
.bsg_timeout = qla24xx_bsg_timeout,
};
+static uint
+qla2x00_get_host_supported_speeds(scsi_qla_host_t *vha, uint speeds)
+{
+ uint supported_speeds = FC_PORTSPEED_UNKNOWN;
+
+ if (speeds & FDMI_PORT_SPEED_64GB)
+ supported_speeds |= FC_PORTSPEED_64GBIT;
+ if (speeds & FDMI_PORT_SPEED_32GB)
+ supported_speeds |= FC_PORTSPEED_32GBIT;
+ if (speeds & FDMI_PORT_SPEED_16GB)
+ supported_speeds |= FC_PORTSPEED_16GBIT;
+ if (speeds & FDMI_PORT_SPEED_8GB)
+ supported_speeds |= FC_PORTSPEED_8GBIT;
+ if (speeds & FDMI_PORT_SPEED_4GB)
+ supported_speeds |= FC_PORTSPEED_4GBIT;
+ if (speeds & FDMI_PORT_SPEED_2GB)
+ supported_speeds |= FC_PORTSPEED_2GBIT;
+ if (speeds & FDMI_PORT_SPEED_1GB)
+ supported_speeds |= FC_PORTSPEED_1GBIT;
+
+ return supported_speeds;
+}
+
void
qla2x00_init_host_attr(scsi_qla_host_t *vha)
{
struct qla_hw_data *ha = vha->hw;
- u32 speeds = FC_PORTSPEED_UNKNOWN;
+ u32 speeds = 0, fdmi_speed = 0;
fc_host_dev_loss_tmo(vha->host) = ha->port_down_retry_count;
fc_host_node_name(vha->host) = wwn_to_u64(vha->node_name);
fc_host_max_npiv_vports(vha->host) = ha->max_npiv_vports;
fc_host_npiv_vports_inuse(vha->host) = ha->cur_vport_count;
- speeds = qla25xx_fdmi_port_speed_capability(ha);
+ fdmi_speed = qla25xx_fdmi_port_speed_capability(ha);
+ speeds = qla2x00_get_host_supported_speeds(vha, fdmi_speed);
fc_host_supported_speeds(vha->host) = speeds;
}
}
mutex_lock(&sdev->state_mutex);
+ switch (sdev->sdev_state) {
+ case SDEV_RUNNING:
+ case SDEV_OFFLINE:
+ break;
+ default:
+ mutex_unlock(&sdev->state_mutex);
+ return -EINVAL;
+ }
if (sdev->sdev_state == SDEV_RUNNING && state == SDEV_RUNNING) {
ret = 0;
} else {
ret = pm_genpd_init(&domain->genpd, NULL, domain->init_off);
if (ret)
- return ret;
+ goto err_clk_unprepare;
platform_set_drvdata(pdev, domain);
- return of_genpd_add_provider_simple(np, &domain->genpd);
+ ret = of_genpd_add_provider_simple(np, &domain->genpd);
+ if (ret)
+ goto err_genpd_remove;
+
+ return 0;
+
+err_genpd_remove:
+ pm_genpd_remove(&domain->genpd);
+
+err_clk_unprepare:
+ if (!domain->init_off)
+ clk_bulk_disable_unprepare(domain->num_clks, domain->clks);
+
+ return ret;
}
static const struct of_device_id imx93_pd_ids[] = {
windows[cs].cs = cs;
windows[cs].size = data->segment_end(aspi, reg_val) -
data->segment_start(aspi, reg_val);
- windows[cs].offset = cs ? windows[cs - 1].offset + windows[cs - 1].size : 0;
+ windows[cs].offset = data->segment_start(aspi, reg_val) - aspi->ahb_base_phy;
dev_vdbg(aspi->dev, "CE%d offset=0x%.8x size=0x%x\n", cs,
windows[cs].offset, windows[cs].size);
}
else
master->num_chipselect = num_cs;
+ master->use_gpio_descriptors = true;
+ master->max_native_cs = SPI_NUM_CHIPSELECTS;
master->bus_num = pdev->id;
master->mode_bits = SPI_CPOL | SPI_CPHA | SPI_CS_HIGH | SPI_LOOP;
master->bits_per_word_mask = SPI_BPW_RANGE_MASK(4, 32);
pci/atomisp_compat_css20.o \
pci/atomisp_csi2.o \
pci/atomisp_drvfs.o \
- pci/atomisp_file.o \
pci/atomisp_fops.o \
pci/atomisp_ioctl.o \
pci/atomisp_subdev.o \
if (!ov2680_info)
return -EINVAL;
- mutex_lock(&dev->input_lock);
-
res = v4l2_find_nearest_size(ov2680_res_preview,
ARRAY_SIZE(ov2680_res_preview), width,
height, fmt->width, fmt->height);
fmt->code = MEDIA_BUS_FMT_SBGGR10_1X10;
if (format->which == V4L2_SUBDEV_FORMAT_TRY) {
sd_state->pads->try_fmt = *fmt;
- mutex_unlock(&dev->input_lock);
return 0;
}
dev_dbg(&client->dev, "%s: %dx%d\n",
__func__, fmt->width, fmt->height);
+ mutex_lock(&dev->input_lock);
+
/* s_power has not been called yet for std v4l2 clients (camorama) */
power_up(sd);
ret = ov2680_write_reg_array(client, dev->res->regs);
- if (ret)
+ if (ret) {
dev_err(&client->dev,
"ov2680 write resolution register err: %d\n", ret);
+ goto err;
+ }
vts = dev->res->lines_per_frame;
vts = dev->exposure + OV2680_INTEGRATION_TIME_MARGIN;
ret = ov2680_write_reg(client, 2, OV2680_TIMING_VTS_H, vts);
- if (ret)
+ if (ret) {
dev_err(&client->dev, "ov2680 write vts err: %d\n", ret);
+ goto err;
+ }
ret = ov2680_get_intg_factor(client, ov2680_info, res);
if (ret) {
if (v_flag)
ov2680_v_flip(sd, v_flag);
- /*
- * ret = startup(sd);
- * if (ret)
- * dev_err(&client->dev, "ov2680 startup err\n");
- */
+ dev->res = res;
err:
mutex_unlock(&dev->input_lock);
return ret;
#define check_bo_null_return_void(bo) \
check_null_return_void(bo, "NULL hmm buffer object.\n")
-#define HMM_MAX_ORDER 3
-#define HMM_MIN_ORDER 0
-
#define ISP_VM_START 0x0
#define ISP_VM_SIZE (0x7FFFFFFF) /* 2G address space */
#define ISP_PTR_NULL NULL
#define HMM_BO_VMAPED 0x10
#define HMM_BO_VMAPED_CACHED 0x20
#define HMM_BO_ACTIVE 0x1000
-#define HMM_BO_MEM_TYPE_USER 0x1
-#define HMM_BO_MEM_TYPE_PFN 0x2
struct hmm_bo_device {
struct isp_mmu mmu;
enum hmm_bo_type type;
int mmap_count;
int status;
- int mem_type;
void *vmap_addr; /* kernel virtual address by vmap */
struct rb_node node;
ATOMISP_FRAME_STATUS_FLASH_FAILED,
};
-/* ISP memories, isp2400 */
-enum atomisp_acc_memory {
- ATOMISP_ACC_MEMORY_PMEM0 = 0,
- ATOMISP_ACC_MEMORY_DMEM0,
- /* for backward compatibility */
- ATOMISP_ACC_MEMORY_DMEM = ATOMISP_ACC_MEMORY_DMEM0,
- ATOMISP_ACC_MEMORY_VMEM0,
- ATOMISP_ACC_MEMORY_VAMEM0,
- ATOMISP_ACC_MEMORY_VAMEM1,
- ATOMISP_ACC_MEMORY_VAMEM2,
- ATOMISP_ACC_MEMORY_HMEM0,
- ATOMISP_ACC_NR_MEMORY
-};
-
enum atomisp_ext_isp_id {
EXT_ISP_CID_ISO = 0,
EXT_ISP_CID_CAPTURE_HDR,
int atomisp_gmin_remove_subdev(struct v4l2_subdev *sd);
int gmin_get_var_int(struct device *dev, bool is_gmin,
const char *var, int def);
-int camera_sensor_csi(struct v4l2_subdev *sd, u32 port,
- u32 lanes, u32 format, u32 bayer_order, int flag);
struct camera_sensor_platform_data *
gmin_camera_platform_data(
struct v4l2_subdev *subdev,
struct intel_v4l2_subdev_table *subdevs;
};
-/* Describe the capacities of one single sensor. */
-struct atomisp_sensor_caps {
- /* The number of streams this sensor can output. */
- int stream_num;
- bool is_slave;
-};
-
-/* Describe the capacities of sensors connected to one camera port. */
-struct atomisp_camera_caps {
- /* The number of sensors connected to this camera port. */
- int sensor_num;
- /* The capacities of each sensor. */
- struct atomisp_sensor_caps sensor[MAX_SENSORS_PER_PORT];
- /* Define whether stream control is required for multiple streams. */
- bool multi_stream_ctrl;
-};
-
/*
 * Sensor of external ISP can send multiple streams with different mipi data
* type in the same virtual channel. This information needs to come from the
};
const struct atomisp_platform_data *atomisp_get_platform_data(void);
-const struct atomisp_camera_caps *atomisp_get_default_camera_caps(void);
/* API from old platform_camera.h, new CPUID implementation */
#define __IS_SOC(x) (boot_cpu_data.x86_vendor == X86_VENDOR_INTEL && \
this means that unlike in fixed pipelines the soft pipelines
on the ISP can do multiple processing steps in a single pipeline
element (in a single binary).
+
+###
+
+The sensor drivers make use of v4l2_get_subdev_hostdata(), which returns
+a camera_mipi_info struct. This struct is allocated/managed by
+the core atomisp code. The most important parts of the struct
+are filled by the atomisp core itself, e.g. the port number.
+
+On a set_fmt call the sensor drivers fill in camera_mipi_info.data,
+which is an atomisp_sensor_mode_data struct. This gets filled by
+a function called <sensor_name>_get_intg_factor(). This struct is not
+used by the atomisp code at all. It is returned to userspace by
+the ATOMISP_IOC_G_SENSOR_MODE_DATA ioctl and the Android userspace uses it.
+
+Other members of camera_mipi_info which are set by some drivers are:
+-metadata_width, metadata_height, metadata_effective_width, set by
+ the ov5693 driver (and used by the atomisp core)
+-raw_bayer_order, adjusted by the ov2680 driver when flipping since
+ flipping can change the bayer order
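+
+As a minimal, hypothetical sketch (my_sensor_set_fmt() and
+my_sensor_get_intg_factor() are placeholder names, not code from this
+tree), a sensor driver's set_fmt path roughly looks like:
+
+	static int my_sensor_set_fmt(struct v4l2_subdev *sd,
+				     struct v4l2_subdev_state *sd_state,
+				     struct v4l2_subdev_format *format)
+	{
+		/* Allocated and partly filled by the atomisp core */
+		struct camera_mipi_info *info = v4l2_get_subdev_hostdata(sd);
+
+		if (!info)
+			return -EINVAL;
+
+		/* ... pick a resolution, write the sensor registers ... */
+
+		/* Fill info->data for ATOMISP_IOC_G_SENSOR_MODE_DATA */
+		return my_sensor_get_intg_factor(sd, info);
+	}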
} ptr;
};
+static int atomisp_set_raw_buffer_bitmap(struct atomisp_sub_device *asd, int exp_id);
+
/*
* get sensor:dis71430/ov2720 related info from v4l2_subdev->priv data field.
* subdev->priv is set in mrst.c
container_of(dev, struct atomisp_video_pipe, vdev);
}
-/*
- * get struct atomisp_acc_pipe from v4l2 video_device
- */
-struct atomisp_acc_pipe *atomisp_to_acc_pipe(struct video_device *dev)
-{
- return (struct atomisp_acc_pipe *)
- container_of(dev, struct atomisp_acc_pipe, vdev);
-}
-
static unsigned short atomisp_get_sensor_fps(struct atomisp_sub_device *asd)
{
struct v4l2_subdev_frame_interval fi = { 0 };
enum ia_css_pipe_id css_pipe_id,
enum ia_css_buffer_type buf_type)
{
- struct atomisp_device *isp = asd->isp;
-
- if (css_pipe_id == IA_CSS_PIPE_ID_COPY &&
- isp->inputs[asd->input_curr].camera_caps->
- sensor[asd->sensor_curr].stream_num > 1) {
- switch (stream_id) {
- case ATOMISP_INPUT_STREAM_PREVIEW:
- return &asd->video_out_preview;
- case ATOMISP_INPUT_STREAM_POSTVIEW:
- return &asd->video_out_vf;
- case ATOMISP_INPUT_STREAM_VIDEO:
- return &asd->video_out_video_capture;
- case ATOMISP_INPUT_STREAM_CAPTURE:
- default:
- return &asd->video_out_capture;
- }
- }
-
/* video is same in online as in continuouscapture mode */
if (asd->vfpp->val == ATOMISP_VFPP_DISABLE_LOWLAT) {
/*
enum atomisp_metadata_type md_type;
struct atomisp_device *isp = asd->isp;
struct v4l2_control ctrl;
- bool reset_wdt_timer = false;
+
+ lockdep_assert_held(&isp->mutex);
if (
buf_type != IA_CSS_BUFFER_TYPE_METADATA &&
break;
case IA_CSS_BUFFER_TYPE_VF_OUTPUT_FRAME:
case IA_CSS_BUFFER_TYPE_SEC_VF_OUTPUT_FRAME:
- if (IS_ISP2401)
- reset_wdt_timer = true;
-
pipe->buffers_in_css--;
frame = buffer.css_buffer.data.frame;
if (!frame) {
break;
case IA_CSS_BUFFER_TYPE_OUTPUT_FRAME:
case IA_CSS_BUFFER_TYPE_SEC_OUTPUT_FRAME:
- if (IS_ISP2401)
- reset_wdt_timer = true;
-
pipe->buffers_in_css--;
frame = buffer.css_buffer.data.frame;
if (!frame) {
*/
wake_up(&vb->done);
}
- if (IS_ISP2401)
- atomic_set(&pipe->wdt_count, 0);
/*
* Requeue should only be done for 3a and dis buffers.
}
if (!error && q_buffers)
atomisp_qbuffers_to_css(asd);
-
- if (IS_ISP2401) {
- /* If there are no buffers queued then
- * delete wdt timer. */
- if (asd->streaming != ATOMISP_DEVICE_STREAMING_ENABLED)
- return;
- if (!atomisp_buffers_queued_pipe(pipe))
- atomisp_wdt_stop_pipe(pipe, false);
- else if (reset_wdt_timer)
- /* SOF irq should not reset wdt timer. */
- atomisp_wdt_refresh_pipe(pipe,
- ATOMISP_WDT_KEEP_CURRENT_DELAY);
- }
}
void atomisp_delayed_init_work(struct work_struct *work)
bool stream_restart[MAX_STREAM_NUM] = {0};
bool depth_mode = false;
int i, ret, depth_cnt = 0;
+ unsigned long flags;
- if (!isp->sw_contex.file_input)
- atomisp_css_irq_enable(isp,
- IA_CSS_IRQ_INFO_CSS_RECEIVER_SOF, false);
+ lockdep_assert_held(&isp->mutex);
+
+ if (!atomisp_streaming_count(isp))
+ return;
+
+ atomisp_css_irq_enable(isp, IA_CSS_IRQ_INFO_CSS_RECEIVER_SOF, false);
BUG_ON(isp->num_of_streams > MAX_STREAM_NUM);
stream_restart[asd->index] = true;
+ spin_lock_irqsave(&isp->lock, flags);
asd->streaming = ATOMISP_DEVICE_STREAMING_STOPPING;
+ spin_unlock_irqrestore(&isp->lock, flags);
/* stream off sensor */
ret = v4l2_subdev_call(
css_pipe_id = atomisp_get_css_pipe_id(asd);
atomisp_css_stop(asd, css_pipe_id, true);
+ spin_lock_irqsave(&isp->lock, flags);
asd->streaming = ATOMISP_DEVICE_STREAMING_DISABLED;
+ spin_unlock_irqrestore(&isp->lock, flags);
asd->preview_exp_id = 1;
asd->postview_exp_id = 1;
IA_CSS_INPUT_MODE_BUFFERED_SENSOR);
css_pipe_id = atomisp_get_css_pipe_id(asd);
- if (atomisp_css_start(asd, css_pipe_id, true))
+ if (atomisp_css_start(asd, css_pipe_id, true)) {
dev_warn(isp->dev,
"start SP failed, so do not set streaming to be enable!\n");
- else
+ } else {
+ spin_lock_irqsave(&isp->lock, flags);
asd->streaming = ATOMISP_DEVICE_STREAMING_ENABLED;
+ spin_unlock_irqrestore(&isp->lock, flags);
+ }
atomisp_csi2_configure(asd);
}
- if (!isp->sw_contex.file_input) {
- atomisp_css_irq_enable(isp, IA_CSS_IRQ_INFO_CSS_RECEIVER_SOF,
- atomisp_css_valid_sof(isp));
+ atomisp_css_irq_enable(isp, IA_CSS_IRQ_INFO_CSS_RECEIVER_SOF,
+ atomisp_css_valid_sof(isp));
- if (atomisp_freq_scaling(isp, ATOMISP_DFS_MODE_AUTO, true) < 0)
- dev_dbg(isp->dev, "DFS auto failed while recovering!\n");
- } else {
- if (atomisp_freq_scaling(isp, ATOMISP_DFS_MODE_MAX, true) < 0)
- dev_dbg(isp->dev, "DFS max failed while recovering!\n");
- }
+ if (atomisp_freq_scaling(isp, ATOMISP_DFS_MODE_AUTO, true) < 0)
+ dev_dbg(isp->dev, "DFS auto failed while recovering!\n");
for (i = 0; i < isp->num_of_streams; i++) {
struct atomisp_sub_device *asd;
}
}
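The recovery path above drops the rt_mutex in favour of a plain mutex for CSS work and moves every asd->streaming update under the isp->lock spinlock, so the state can also be read safely from interrupt context. A minimal sketch of the resulting rule, using a hypothetical helper name that is not part of this diff:

	/* Sketch only: isp->mutex serializes CSS API calls, while the
	 * isp->lock spinlock guards asd->streaming for IRQ-side readers.
	 * The helper name below is made up for illustration. */
	static void atomisp_set_streaming_state(struct atomisp_sub_device *asd,
						unsigned int state)
	{
		unsigned long flags;

		spin_lock_irqsave(&asd->isp->lock, flags);
		asd->streaming = state;
		spin_unlock_irqrestore(&asd->isp->lock, flags);
	}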
-void atomisp_wdt_work(struct work_struct *work)
+void atomisp_assert_recovery_work(struct work_struct *work)
{
struct atomisp_device *isp = container_of(work, struct atomisp_device,
- wdt_work);
- int i;
- unsigned int pipe_wdt_cnt[MAX_STREAM_NUM][4] = { {0} };
- bool css_recover = true;
-
- rt_mutex_lock(&isp->mutex);
- if (!atomisp_streaming_count(isp)) {
- atomic_set(&isp->wdt_work_queued, 0);
- rt_mutex_unlock(&isp->mutex);
- return;
- }
-
- if (!IS_ISP2401) {
- dev_err(isp->dev, "timeout %d of %d\n",
- atomic_read(&isp->wdt_count) + 1,
- ATOMISP_ISP_MAX_TIMEOUT_COUNT);
- } else {
- for (i = 0; i < isp->num_of_streams; i++) {
- struct atomisp_sub_device *asd = &isp->asd[i];
-
- pipe_wdt_cnt[i][0] +=
- atomic_read(&asd->video_out_capture.wdt_count);
- pipe_wdt_cnt[i][1] +=
- atomic_read(&asd->video_out_vf.wdt_count);
- pipe_wdt_cnt[i][2] +=
- atomic_read(&asd->video_out_preview.wdt_count);
- pipe_wdt_cnt[i][3] +=
- atomic_read(&asd->video_out_video_capture.wdt_count);
- css_recover =
- (pipe_wdt_cnt[i][0] <= ATOMISP_ISP_MAX_TIMEOUT_COUNT &&
- pipe_wdt_cnt[i][1] <= ATOMISP_ISP_MAX_TIMEOUT_COUNT &&
- pipe_wdt_cnt[i][2] <= ATOMISP_ISP_MAX_TIMEOUT_COUNT &&
- pipe_wdt_cnt[i][3] <= ATOMISP_ISP_MAX_TIMEOUT_COUNT)
- ? true : false;
- dev_err(isp->dev,
- "pipe on asd%d timeout cnt: (%d, %d, %d, %d) of %d, recover = %d\n",
- asd->index, pipe_wdt_cnt[i][0], pipe_wdt_cnt[i][1],
- pipe_wdt_cnt[i][2], pipe_wdt_cnt[i][3],
- ATOMISP_ISP_MAX_TIMEOUT_COUNT, css_recover);
- }
- }
-
- if (css_recover) {
- ia_css_debug_dump_sp_sw_debug_info();
- ia_css_debug_dump_debug_info(__func__);
- for (i = 0; i < isp->num_of_streams; i++) {
- struct atomisp_sub_device *asd = &isp->asd[i];
-
- if (asd->streaming != ATOMISP_DEVICE_STREAMING_ENABLED)
- continue;
- dev_err(isp->dev, "%s, vdev %s buffers in css: %d\n",
- __func__,
- asd->video_out_capture.vdev.name,
- asd->video_out_capture.
- buffers_in_css);
- dev_err(isp->dev,
- "%s, vdev %s buffers in css: %d\n",
- __func__,
- asd->video_out_vf.vdev.name,
- asd->video_out_vf.
- buffers_in_css);
- dev_err(isp->dev,
- "%s, vdev %s buffers in css: %d\n",
- __func__,
- asd->video_out_preview.vdev.name,
- asd->video_out_preview.
- buffers_in_css);
- dev_err(isp->dev,
- "%s, vdev %s buffers in css: %d\n",
- __func__,
- asd->video_out_video_capture.vdev.name,
- asd->video_out_video_capture.
- buffers_in_css);
- dev_err(isp->dev,
- "%s, s3a buffers in css preview pipe:%d\n",
- __func__,
- asd->s3a_bufs_in_css[IA_CSS_PIPE_ID_PREVIEW]);
- dev_err(isp->dev,
- "%s, s3a buffers in css capture pipe:%d\n",
- __func__,
- asd->s3a_bufs_in_css[IA_CSS_PIPE_ID_CAPTURE]);
- dev_err(isp->dev,
- "%s, s3a buffers in css video pipe:%d\n",
- __func__,
- asd->s3a_bufs_in_css[IA_CSS_PIPE_ID_VIDEO]);
- dev_err(isp->dev,
- "%s, dis buffers in css: %d\n",
- __func__, asd->dis_bufs_in_css);
- dev_err(isp->dev,
- "%s, metadata buffers in css preview pipe:%d\n",
- __func__,
- asd->metadata_bufs_in_css
- [ATOMISP_INPUT_STREAM_GENERAL]
- [IA_CSS_PIPE_ID_PREVIEW]);
- dev_err(isp->dev,
- "%s, metadata buffers in css capture pipe:%d\n",
- __func__,
- asd->metadata_bufs_in_css
- [ATOMISP_INPUT_STREAM_GENERAL]
- [IA_CSS_PIPE_ID_CAPTURE]);
- dev_err(isp->dev,
- "%s, metadata buffers in css video pipe:%d\n",
- __func__,
- asd->metadata_bufs_in_css
- [ATOMISP_INPUT_STREAM_GENERAL]
- [IA_CSS_PIPE_ID_VIDEO]);
- if (asd->enable_raw_buffer_lock->val) {
- unsigned int j;
-
- dev_err(isp->dev, "%s, raw_buffer_locked_count %d\n",
- __func__, asd->raw_buffer_locked_count);
- for (j = 0; j <= ATOMISP_MAX_EXP_ID / 32; j++)
- dev_err(isp->dev, "%s, raw_buffer_bitmap[%d]: 0x%x\n",
- __func__, j,
- asd->raw_buffer_bitmap[j]);
- }
- }
-
- /*sh_css_dump_sp_state();*/
- /*sh_css_dump_isp_state();*/
- } else {
- for (i = 0; i < isp->num_of_streams; i++) {
- struct atomisp_sub_device *asd = &isp->asd[i];
-
- if (asd->streaming ==
- ATOMISP_DEVICE_STREAMING_ENABLED) {
- atomisp_clear_css_buffer_counters(asd);
- atomisp_flush_bufs_and_wakeup(asd);
- complete(&asd->init_done);
- }
- if (IS_ISP2401)
- atomisp_wdt_stop(asd, false);
- }
-
- if (!IS_ISP2401) {
- atomic_set(&isp->wdt_count, 0);
- } else {
- isp->isp_fatal_error = true;
- atomic_set(&isp->wdt_work_queued, 0);
-
- rt_mutex_unlock(&isp->mutex);
- return;
- }
- }
+ assert_recovery_work);
+ mutex_lock(&isp->mutex);
__atomisp_css_recover(isp, true);
- if (IS_ISP2401) {
- for (i = 0; i < isp->num_of_streams; i++) {
- struct atomisp_sub_device *asd = &isp->asd[i];
-
- if (asd->streaming != ATOMISP_DEVICE_STREAMING_ENABLED)
- continue;
-
- atomisp_wdt_refresh(asd,
- isp->sw_contex.file_input ?
- ATOMISP_ISP_FILE_TIMEOUT_DURATION :
- ATOMISP_ISP_TIMEOUT_DURATION);
- }
- }
-
- dev_err(isp->dev, "timeout recovery handling done\n");
- atomic_set(&isp->wdt_work_queued, 0);
-
- rt_mutex_unlock(&isp->mutex);
+ mutex_unlock(&isp->mutex);
}
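With the per-pipe watchdog timers gone, firmware-assert recovery funnels through this single worker, queued from the CSS event handler later in this diff via queue_work(system_long_wq, &isp->assert_recovery_work). That pairing implies an initialization along these lines at probe time (assumed; the probe code is outside this excerpt):

	/* Assumed setup matching the worker above; exact location unknown. */
	INIT_WORK(&isp->assert_recovery_work, atomisp_assert_recovery_work);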
void atomisp_css_flush(struct atomisp_device *isp)
{
- int i;
-
- if (!atomisp_streaming_count(isp))
- return;
-
- /* Disable wdt */
- for (i = 0; i < isp->num_of_streams; i++) {
- struct atomisp_sub_device *asd = &isp->asd[i];
-
- atomisp_wdt_stop(asd, true);
- }
-
/* Start recover */
__atomisp_css_recover(isp, false);
- /* Restore wdt */
- for (i = 0; i < isp->num_of_streams; i++) {
- struct atomisp_sub_device *asd = &isp->asd[i];
-
- if (asd->streaming !=
- ATOMISP_DEVICE_STREAMING_ENABLED)
- continue;
- atomisp_wdt_refresh(asd,
- isp->sw_contex.file_input ?
- ATOMISP_ISP_FILE_TIMEOUT_DURATION :
- ATOMISP_ISP_TIMEOUT_DURATION);
- }
dev_dbg(isp->dev, "atomisp css flush done\n");
}
-void atomisp_wdt(struct timer_list *t)
-{
- struct atomisp_sub_device *asd;
- struct atomisp_device *isp;
-
- if (!IS_ISP2401) {
- asd = from_timer(asd, t, wdt);
- isp = asd->isp;
- } else {
- struct atomisp_video_pipe *pipe = from_timer(pipe, t, wdt);
-
- asd = pipe->asd;
- isp = asd->isp;
-
- atomic_inc(&pipe->wdt_count);
- dev_warn(isp->dev,
- "[WARNING]asd %d pipe %s ISP timeout %d!\n",
- asd->index, pipe->vdev.name,
- atomic_read(&pipe->wdt_count));
- }
-
- if (atomic_read(&isp->wdt_work_queued)) {
- dev_dbg(isp->dev, "ISP watchdog was put into workqueue\n");
- return;
- }
- atomic_set(&isp->wdt_work_queued, 1);
- queue_work(isp->wdt_work_queue, &isp->wdt_work);
-}
-
-/* ISP2400 */
-void atomisp_wdt_start(struct atomisp_sub_device *asd)
-{
- atomisp_wdt_refresh(asd, ATOMISP_ISP_TIMEOUT_DURATION);
-}
-
-/* ISP2401 */
-void atomisp_wdt_refresh_pipe(struct atomisp_video_pipe *pipe,
- unsigned int delay)
-{
- unsigned long next;
-
- if (!pipe->asd) {
- dev_err(pipe->isp->dev, "%s(): asd is NULL, device is %s\n",
- __func__, pipe->vdev.name);
- return;
- }
-
- if (delay != ATOMISP_WDT_KEEP_CURRENT_DELAY)
- pipe->wdt_duration = delay;
-
- next = jiffies + pipe->wdt_duration;
-
- /* Override next if it has been pushed beyon the "next" time */
- if (atomisp_is_wdt_running(pipe) && time_after(pipe->wdt_expires, next))
- next = pipe->wdt_expires;
-
- pipe->wdt_expires = next;
-
- if (atomisp_is_wdt_running(pipe))
- dev_dbg(pipe->asd->isp->dev, "WDT will hit after %d ms (%s)\n",
- ((int)(next - jiffies) * 1000 / HZ), pipe->vdev.name);
- else
- dev_dbg(pipe->asd->isp->dev, "WDT starts with %d ms period (%s)\n",
- ((int)(next - jiffies) * 1000 / HZ), pipe->vdev.name);
-
- mod_timer(&pipe->wdt, next);
-}
-
-void atomisp_wdt_refresh(struct atomisp_sub_device *asd, unsigned int delay)
-{
- if (!IS_ISP2401) {
- unsigned long next;
-
- if (delay != ATOMISP_WDT_KEEP_CURRENT_DELAY)
- asd->wdt_duration = delay;
-
- next = jiffies + asd->wdt_duration;
-
- /* Override next if it has been pushed beyon the "next" time */
- if (atomisp_is_wdt_running(asd) && time_after(asd->wdt_expires, next))
- next = asd->wdt_expires;
-
- asd->wdt_expires = next;
-
- if (atomisp_is_wdt_running(asd))
- dev_dbg(asd->isp->dev, "WDT will hit after %d ms\n",
- ((int)(next - jiffies) * 1000 / HZ));
- else
- dev_dbg(asd->isp->dev, "WDT starts with %d ms period\n",
- ((int)(next - jiffies) * 1000 / HZ));
-
- mod_timer(&asd->wdt, next);
- atomic_set(&asd->isp->wdt_count, 0);
- } else {
- dev_dbg(asd->isp->dev, "WDT refresh all:\n");
- if (atomisp_is_wdt_running(&asd->video_out_capture))
- atomisp_wdt_refresh_pipe(&asd->video_out_capture, delay);
- if (atomisp_is_wdt_running(&asd->video_out_preview))
- atomisp_wdt_refresh_pipe(&asd->video_out_preview, delay);
- if (atomisp_is_wdt_running(&asd->video_out_vf))
- atomisp_wdt_refresh_pipe(&asd->video_out_vf, delay);
- if (atomisp_is_wdt_running(&asd->video_out_video_capture))
- atomisp_wdt_refresh_pipe(&asd->video_out_video_capture, delay);
- }
-}
-
-/* ISP2401 */
-void atomisp_wdt_stop_pipe(struct atomisp_video_pipe *pipe, bool sync)
-{
- if (!pipe->asd) {
- dev_err(pipe->isp->dev, "%s(): asd is NULL, device is %s\n",
- __func__, pipe->vdev.name);
- return;
- }
-
- if (!atomisp_is_wdt_running(pipe))
- return;
-
- dev_dbg(pipe->asd->isp->dev,
- "WDT stop asd %d (%s)\n", pipe->asd->index, pipe->vdev.name);
-
- if (sync) {
- del_timer_sync(&pipe->wdt);
- cancel_work_sync(&pipe->asd->isp->wdt_work);
- } else {
- del_timer(&pipe->wdt);
- }
-}
-
-/* ISP 2401 */
-void atomisp_wdt_start_pipe(struct atomisp_video_pipe *pipe)
-{
- atomisp_wdt_refresh_pipe(pipe, ATOMISP_ISP_TIMEOUT_DURATION);
-}
-
-void atomisp_wdt_stop(struct atomisp_sub_device *asd, bool sync)
-{
- dev_dbg(asd->isp->dev, "WDT stop:\n");
-
- if (!IS_ISP2401) {
- if (sync) {
- del_timer_sync(&asd->wdt);
- cancel_work_sync(&asd->isp->wdt_work);
- } else {
- del_timer(&asd->wdt);
- }
- } else {
- atomisp_wdt_stop_pipe(&asd->video_out_capture, sync);
- atomisp_wdt_stop_pipe(&asd->video_out_preview, sync);
- atomisp_wdt_stop_pipe(&asd->video_out_vf, sync);
- atomisp_wdt_stop_pipe(&asd->video_out_video_capture, sync);
- }
-}
-
void atomisp_setup_flash(struct atomisp_sub_device *asd)
{
struct atomisp_device *isp = asd->isp;
 * For CSS2.0: instead of dequeuing all events at once, we dequeue
 * one event and process it before dequeuing the next.
*/
- rt_mutex_lock(&isp->mutex);
+ mutex_lock(&isp->mutex);
if (atomisp_css_isr_thread(isp, frame_done_found, css_pipe_done))
goto out;
atomisp_setup_flash(asd);
}
out:
- rt_mutex_unlock(&isp->mutex);
- for (i = 0; i < isp->num_of_streams; i++) {
- asd = &isp->asd[i];
- if (asd->streaming == ATOMISP_DEVICE_STREAMING_ENABLED
- && css_pipe_done[asd->index]
- && isp->sw_contex.file_input)
- v4l2_subdev_call(isp->inputs[asd->input_curr].camera,
- video, s_stream, 1);
- }
+ mutex_unlock(&isp->mutex);
dev_dbg(isp->dev, "<%s\n", __func__);
return IRQ_HANDLED;
{
struct atomisp_device *isp = asd->isp;
int err;
- u16 stream_id = atomisp_source_pad_to_stream_id(asd, source_pad);
if (atomisp_css_get_grid_info(asd, pipe_id, source_pad))
return;
the grid size. */
atomisp_css_free_stat_buffers(asd);
- err = atomisp_alloc_css_stat_bufs(asd, stream_id);
+ err = atomisp_alloc_css_stat_bufs(asd, ATOMISP_INPUT_STREAM_GENERAL);
if (err) {
dev_err(isp->dev, "stat_buf allocate error\n");
goto err;
unsigned long irqflags;
bool need_to_enqueue_buffer = false;
+	lockdep_assert_held(&pipe->isp->mutex);
+
if (!asd) {
dev_err(pipe->isp->dev, "%s(): asd is NULL, device is %s\n",
__func__, pipe->vdev.name);
return;
atomisp_qbuffers_to_css(asd);
-
- if (!IS_ISP2401) {
- if (!atomisp_is_wdt_running(asd) && atomisp_buffers_queued(asd))
- atomisp_wdt_start(asd);
- } else {
- if (atomisp_buffers_queued_pipe(pipe)) {
- if (!atomisp_is_wdt_running(pipe))
- atomisp_wdt_start_pipe(pipe);
- else
- atomisp_wdt_refresh_pipe(pipe,
- ATOMISP_WDT_KEEP_CURRENT_DELAY);
- }
- }
}
/*
struct atomisp_css_params *css_param = &asd->params.css_param;
int ret;
+	lockdep_assert_held(&pipe->isp->mutex);
+
if (!asd) {
dev_err(pipe->isp->dev, "%s(): asd is NULL, device is %s\n",
__func__, vdev->name);
const struct atomisp_format_bridge *fmt;
struct atomisp_input_stream_info *stream_info =
(struct atomisp_input_stream_info *)snr_mbus_fmt->reserved;
- u16 stream_index;
- int source_pad = atomisp_subdev_source_pad(vdev);
int ret;
if (!asd) {
if (!isp->inputs[asd->input_curr].camera)
return -EINVAL;
- stream_index = atomisp_source_pad_to_stream_id(asd, source_pad);
fmt = atomisp_get_format_bridge(f->pixelformat);
if (!fmt) {
dev_err(isp->dev, "unsupported pixelformat!\n");
snr_mbus_fmt->width = f->width;
snr_mbus_fmt->height = f->height;
- __atomisp_init_stream_info(stream_index, stream_info);
+ __atomisp_init_stream_info(ATOMISP_INPUT_STREAM_GENERAL, stream_info);
dev_dbg(isp->dev, "try_mbus_fmt: asking for %ux%u\n",
snr_mbus_fmt->width, snr_mbus_fmt->height);
return 0;
}
- if (snr_mbus_fmt->width < f->width
- && snr_mbus_fmt->height < f->height) {
+ if (!res_overflow || (snr_mbus_fmt->width < f->width &&
+ snr_mbus_fmt->height < f->height)) {
f->width = snr_mbus_fmt->width;
f->height = snr_mbus_fmt->height;
/* Set the flag when resolution requested is
return 0;
}
-static int
-atomisp_try_fmt_file(struct atomisp_device *isp, struct v4l2_format *f)
-{
- u32 width = f->fmt.pix.width;
- u32 height = f->fmt.pix.height;
- u32 pixelformat = f->fmt.pix.pixelformat;
- enum v4l2_field field = f->fmt.pix.field;
- u32 depth;
-
- if (!atomisp_get_format_bridge(pixelformat)) {
- dev_err(isp->dev, "Wrong output pixelformat\n");
- return -EINVAL;
- }
-
- depth = atomisp_get_pixel_depth(pixelformat);
-
- if (field == V4L2_FIELD_ANY) {
- field = V4L2_FIELD_NONE;
- } else if (field != V4L2_FIELD_NONE) {
- dev_err(isp->dev, "Wrong output field\n");
- return -EINVAL;
- }
-
- f->fmt.pix.field = field;
- f->fmt.pix.width = clamp_t(u32,
- rounddown(width, (u32)ATOM_ISP_STEP_WIDTH),
- ATOM_ISP_MIN_WIDTH, ATOM_ISP_MAX_WIDTH);
- f->fmt.pix.height = clamp_t(u32, rounddown(height,
- (u32)ATOM_ISP_STEP_HEIGHT),
- ATOM_ISP_MIN_HEIGHT, ATOM_ISP_MAX_HEIGHT);
- f->fmt.pix.bytesperline = (width * depth) >> 3;
-
- return 0;
-}
-
enum mipi_port_id __get_mipi_port(struct atomisp_device *isp,
enum atomisp_camera_port port)
{
int (*configure_pp_input)(struct atomisp_sub_device *asd,
unsigned int width, unsigned int height) =
configure_pp_input_nop;
- u16 stream_index;
const struct atomisp_in_fmt_conv *fc;
int ret, i;
__func__, vdev->name);
return -EINVAL;
}
- stream_index = atomisp_source_pad_to_stream_id(asd, source_pad);
v4l2_fh_init(&fh.vfh, vdev);
dev_err(isp->dev, "mipi_info is NULL\n");
return -EINVAL;
}
- if (atomisp_set_sensor_mipi_to_isp(asd, stream_index,
+ if (atomisp_set_sensor_mipi_to_isp(asd, ATOMISP_INPUT_STREAM_GENERAL,
mipi_info))
return -EINVAL;
fc = atomisp_find_in_fmt_conv_by_atomisp_in_fmt(
/* ISP2401 new input system need to use copy pipe */
if (asd->copy_mode) {
pipe_id = IA_CSS_PIPE_ID_COPY;
- atomisp_css_capture_enable_online(asd, stream_index, false);
+ atomisp_css_capture_enable_online(asd, ATOMISP_INPUT_STREAM_GENERAL, false);
} else if (asd->vfpp->val == ATOMISP_VFPP_DISABLE_SCALER) {
		/* video is the same in continuous-capture and online modes */
configure_output = atomisp_css_video_configure_output;
pipe_id = IA_CSS_PIPE_ID_CAPTURE;
atomisp_update_capture_mode(asd);
- atomisp_css_capture_enable_online(asd, stream_index, false);
+ atomisp_css_capture_enable_online(asd,
+ ATOMISP_INPUT_STREAM_GENERAL,
+ false);
}
}
} else if (source_pad == ATOMISP_SUBDEV_PAD_SOURCE_PREVIEW) {
if (!asd->continuous_mode->val)
/* in case of ANR, force capture pipe to offline mode */
- atomisp_css_capture_enable_online(asd, stream_index,
+ atomisp_css_capture_enable_online(asd, ATOMISP_INPUT_STREAM_GENERAL,
asd->params.low_light ?
false : asd->params.online_process);
pipe_id = IA_CSS_PIPE_ID_YUVPP;
if (asd->copy_mode)
- ret = atomisp_css_copy_configure_output(asd, stream_index,
+ ret = atomisp_css_copy_configure_output(asd, ATOMISP_INPUT_STREAM_GENERAL,
pix->width, pix->height,
format->planar ? pix->bytesperline :
pix->bytesperline * 8 / format->depth,
return -EINVAL;
}
if (asd->copy_mode)
- ret = atomisp_css_copy_get_output_frame_info(asd, stream_index,
- output_info);
+ ret = atomisp_css_copy_get_output_frame_info(asd,
+ ATOMISP_INPUT_STREAM_GENERAL,
+ output_info);
else
ret = get_frame_info(asd, output_info);
if (ret) {
ia_css_frame_free(asd->raw_output_frame);
asd->raw_output_frame = NULL;
- if (!asd->continuous_mode->val &&
- !asd->params.online_process && !isp->sw_contex.file_input &&
+ if (!asd->continuous_mode->val && !asd->params.online_process &&
ia_css_frame_allocate_from_info(&asd->raw_output_frame,
raw_output_info))
return -ENOMEM;
src = atomisp_subdev_get_ffmt(&asd->subdev, NULL,
V4L2_SUBDEV_FORMAT_ACTIVE, source_pad);
- if ((sink->code == src->code &&
- sink->width == f->width &&
- sink->height == f->height) ||
- ((asd->isp->inputs[asd->input_curr].type == SOC_CAMERA) &&
- (asd->isp->inputs[asd->input_curr].camera_caps->
- sensor[asd->sensor_curr].stream_num > 1)))
+ if (sink->code == src->code && sink->width == f->width && sink->height == f->height)
asd->copy_mode = true;
else
asd->copy_mode = false;
struct atomisp_device *isp;
struct atomisp_input_stream_info *stream_info =
(struct atomisp_input_stream_info *)ffmt->reserved;
- u16 stream_index = ATOMISP_INPUT_STREAM_GENERAL;
int source_pad = atomisp_subdev_source_pad(vdev);
struct v4l2_subdev_fh fh;
int ret;
v4l2_fh_init(&fh.vfh, vdev);
- stream_index = atomisp_source_pad_to_stream_id(asd, source_pad);
-
format = atomisp_get_format_bridge(pixelformat);
if (!format)
return -EINVAL;
ffmt->width, ffmt->height, padding_w, padding_h,
dvs_env_w, dvs_env_h);
- __atomisp_init_stream_info(stream_index, stream_info);
+ __atomisp_init_stream_info(ATOMISP_INPUT_STREAM_GENERAL, stream_info);
req_ffmt = ffmt;
if (ret)
return ret;
- __atomisp_update_stream_env(asd, stream_index, stream_info);
+ __atomisp_update_stream_env(asd, ATOMISP_INPUT_STREAM_GENERAL, stream_info);
dev_dbg(isp->dev, "sensor width: %d, height: %d\n",
ffmt->width, ffmt->height);
return css_input_resolution_changed(asd, ffmt);
}
-int atomisp_set_fmt(struct video_device *vdev, struct v4l2_format *f)
+int atomisp_set_fmt(struct file *file, void *unused, struct v4l2_format *f)
{
+ struct video_device *vdev = video_devdata(file);
struct atomisp_device *isp = video_get_drvdata(vdev);
struct atomisp_video_pipe *pipe = atomisp_to_video_pipe(vdev);
struct atomisp_sub_device *asd = pipe->asd;
struct v4l2_subdev_fh fh;
int ret;
- if (!asd) {
- dev_err(isp->dev, "%s(): asd is NULL, device is %s\n",
- __func__, vdev->name);
- return -EINVAL;
- }
+ ret = atomisp_pipe_check(pipe, true);
+ if (ret)
+ return ret;
if (source_pad >= ATOMISP_SUBDEV_PADS_NUM)
return -EINVAL;
- if (asd->streaming == ATOMISP_DEVICE_STREAMING_ENABLED) {
- dev_warn(isp->dev, "ISP does not support set format while at streaming!\n");
- return -EBUSY;
- }
-
dev_dbg(isp->dev,
"setting resolution %ux%u on pad %u for asd%d, bytesperline %u\n",
f->fmt.pix.width, f->fmt.pix.height, source_pad,
f->fmt.pix.height = r.height;
}
- if (source_pad == ATOMISP_SUBDEV_PAD_SOURCE_PREVIEW &&
- (asd->isp->inputs[asd->input_curr].type == SOC_CAMERA) &&
- (asd->isp->inputs[asd->input_curr].camera_caps->
- sensor[asd->sensor_curr].stream_num > 1)) {
- /* For M10MO outputing YUV preview images. */
- u16 video_index =
- atomisp_source_pad_to_stream_id(asd,
- ATOMISP_SUBDEV_PAD_SOURCE_VIDEO);
-
- ret = atomisp_css_copy_get_output_frame_info(asd,
- video_index, &output_info);
- if (ret) {
- dev_err(isp->dev,
- "copy_get_output_frame_info ret %i", ret);
- return -EINVAL;
- }
- if (!asd->yuvpp_mode) {
- /*
- * If viewfinder was configured into copy_mode,
- * we switch to using yuvpp pipe instead.
- */
- asd->yuvpp_mode = true;
- ret = atomisp_css_copy_configure_output(
- asd, video_index, 0, 0, 0, 0);
- if (ret) {
- dev_err(isp->dev,
- "failed to disable copy pipe");
- return -EINVAL;
- }
- ret = atomisp_css_yuvpp_configure_output(
- asd, video_index,
- output_info.res.width,
- output_info.res.height,
- output_info.padded_width,
- output_info.format);
- if (ret) {
- dev_err(isp->dev,
- "failed to set up yuvpp pipe\n");
- return -EINVAL;
- }
- atomisp_css_video_enable_online(asd, false);
- atomisp_css_preview_enable_online(asd,
- ATOMISP_INPUT_STREAM_GENERAL, false);
- }
- atomisp_css_yuvpp_configure_viewfinder(asd, video_index,
- f->fmt.pix.width, f->fmt.pix.height,
- format_bridge->planar ? f->fmt.pix.bytesperline
- : f->fmt.pix.bytesperline * 8
- / format_bridge->depth, format_bridge->sh_fmt);
- atomisp_css_yuvpp_get_viewfinder_frame_info(
- asd, video_index, &output_info);
- } else if (source_pad == ATOMISP_SUBDEV_PAD_SOURCE_PREVIEW) {
+ if (source_pad == ATOMISP_SUBDEV_PAD_SOURCE_PREVIEW) {
atomisp_css_video_configure_viewfinder(asd,
f->fmt.pix.width, f->fmt.pix.height,
format_bridge->planar ? f->fmt.pix.bytesperline
return 0;
}
-int atomisp_set_fmt_file(struct video_device *vdev, struct v4l2_format *f)
-{
- struct atomisp_device *isp = video_get_drvdata(vdev);
- struct atomisp_video_pipe *pipe = atomisp_to_video_pipe(vdev);
- struct atomisp_sub_device *asd = pipe->asd;
- struct v4l2_mbus_framefmt ffmt = {0};
- const struct atomisp_format_bridge *format_bridge;
- struct v4l2_subdev_fh fh;
- int ret;
-
- if (!asd) {
- dev_err(isp->dev, "%s(): asd is NULL, device is %s\n",
- __func__, vdev->name);
- return -EINVAL;
- }
-
- v4l2_fh_init(&fh.vfh, vdev);
-
- dev_dbg(isp->dev, "setting fmt %ux%u 0x%x for file inject\n",
- f->fmt.pix.width, f->fmt.pix.height, f->fmt.pix.pixelformat);
- ret = atomisp_try_fmt_file(isp, f);
- if (ret) {
- dev_err(isp->dev, "atomisp_try_fmt_file err: %d\n", ret);
- return ret;
- }
-
- format_bridge = atomisp_get_format_bridge(f->fmt.pix.pixelformat);
- if (!format_bridge) {
- dev_dbg(isp->dev, "atomisp_get_format_bridge err! fmt:0x%x\n",
- f->fmt.pix.pixelformat);
- return -EINVAL;
- }
-
- pipe->pix = f->fmt.pix;
- atomisp_css_input_set_mode(asd, IA_CSS_INPUT_MODE_FIFO);
- atomisp_css_input_configure_port(asd,
- __get_mipi_port(isp, ATOMISP_CAMERA_PORT_PRIMARY), 2, 0xffff4,
- 0, 0, 0, 0);
- ffmt.width = f->fmt.pix.width;
- ffmt.height = f->fmt.pix.height;
- ffmt.code = format_bridge->mbus_code;
-
- atomisp_subdev_set_ffmt(&asd->subdev, fh.state,
- V4L2_SUBDEV_FORMAT_ACTIVE,
- ATOMISP_SUBDEV_PAD_SINK, &ffmt);
-
- return 0;
-}
-
int atomisp_set_shading_table(struct atomisp_sub_device *asd,
struct atomisp_shading_table *user_shading_table)
{
{
struct v4l2_ctrl *c;
+ lockdep_assert_held(&asd->isp->mutex);
+
/*
* In case of M10MO ZSL capture case, we need to issue a separate
* capture request to M10MO which will output captured jpeg image
return 0;
}
-int atomisp_source_pad_to_stream_id(struct atomisp_sub_device *asd,
- uint16_t source_pad)
-{
- int stream_id;
- struct atomisp_device *isp = asd->isp;
-
- if (isp->inputs[asd->input_curr].camera_caps->
- sensor[asd->sensor_curr].stream_num == 1)
- return ATOMISP_INPUT_STREAM_GENERAL;
-
- switch (source_pad) {
- case ATOMISP_SUBDEV_PAD_SOURCE_CAPTURE:
- stream_id = ATOMISP_INPUT_STREAM_CAPTURE;
- break;
- case ATOMISP_SUBDEV_PAD_SOURCE_VF:
- stream_id = ATOMISP_INPUT_STREAM_POSTVIEW;
- break;
- case ATOMISP_SUBDEV_PAD_SOURCE_PREVIEW:
- stream_id = ATOMISP_INPUT_STREAM_PREVIEW;
- break;
- case ATOMISP_SUBDEV_PAD_SOURCE_VIDEO:
- stream_id = ATOMISP_INPUT_STREAM_VIDEO;
- break;
- default:
- stream_id = ATOMISP_INPUT_STREAM_GENERAL;
- }
-
- return stream_id;
-}
-
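With multi-stream sensor support removed, the deleted mapping above could only ever return ATOMISP_INPUT_STREAM_GENERAL, which is why every former call site in this diff collapses to the constant. Illustrative before/after, mirroring the hunks above:

	/* Before: the stream id depended on the source pad and sensor caps. */
	stream_id = atomisp_source_pad_to_stream_id(asd, source_pad);

	/* After: only single-stream sensors remain, so the id is constant. */
	stream_id = ATOMISP_INPUT_STREAM_GENERAL;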
bool atomisp_is_vf_pipe(struct atomisp_video_pipe *pipe)
{
struct atomisp_sub_device *asd = pipe->asd;
spin_unlock_irqrestore(&asd->raw_buffer_bitmap_lock, flags);
}
-int atomisp_set_raw_buffer_bitmap(struct atomisp_sub_device *asd, int exp_id)
+static int atomisp_set_raw_buffer_bitmap(struct atomisp_sub_device *asd, int exp_id)
{
int *bitmap, bit;
unsigned long flags;
int value = *exp_id;
int ret;
+ lockdep_assert_held(&isp->mutex);
+
ret = __is_raw_buffer_locked(asd, value);
if (ret) {
dev_err(isp->dev, "%s exp_id %d invalid %d.\n", __func__, value, ret);
int value = *exp_id;
int ret;
+ lockdep_assert_held(&isp->mutex);
+
ret = __clear_raw_buffer_bitmap(asd, value);
if (ret) {
dev_err(isp->dev, "%s exp_id %d invalid %d.\n", __func__, value, ret);
if (!event || asd->streaming != ATOMISP_DEVICE_STREAMING_ENABLED)
return -EINVAL;
+ lockdep_assert_held(&asd->isp->mutex);
+
dev_dbg(asd->isp->dev, "%s: trying to inject a fake event 0x%x\n",
__func__, *event);
struct ia_css_pipe_info p_info;
int ret;
- if (!asd) {
- dev_err(pipe->isp->dev, "%s(): asd is NULL, device is %s\n",
- __func__, vdev->name);
- return -EINVAL;
- }
-
- if (asd->isp->inputs[asd->input_curr].camera_caps->
- sensor[asd->sensor_curr].stream_num > 1) {
- /* External ISP */
- *invalid_frame_num = 0;
- return 0;
- }
-
pipe_id = atomisp_get_pipe_id(pipe);
if (!asd->stream_env[ATOMISP_INPUT_STREAM_GENERAL].pipes[pipe_id]) {
dev_warn(asd->isp->dev,
unsigned int size);
struct camera_mipi_info *atomisp_to_sensor_mipi_info(struct v4l2_subdev *sd);
struct atomisp_video_pipe *atomisp_to_video_pipe(struct video_device *dev);
-struct atomisp_acc_pipe *atomisp_to_acc_pipe(struct video_device *dev);
int atomisp_reset(struct atomisp_device *isp);
void atomisp_flush_bufs_and_wakeup(struct atomisp_sub_device *asd);
void atomisp_clear_css_buffer_counters(struct atomisp_sub_device *asd);
/* Interrupt functions */
void atomisp_msi_irq_init(struct atomisp_device *isp);
void atomisp_msi_irq_uninit(struct atomisp_device *isp);
-void atomisp_wdt_work(struct work_struct *work);
-void atomisp_wdt(struct timer_list *t);
+void atomisp_assert_recovery_work(struct work_struct *work);
void atomisp_setup_flash(struct atomisp_sub_device *asd);
irqreturn_t atomisp_isr(int irq, void *dev);
irqreturn_t atomisp_isr_thread(int irq, void *isp_ptr);
int atomisp_try_fmt(struct video_device *vdev, struct v4l2_pix_format *f,
bool *res_overflow);
-int atomisp_set_fmt(struct video_device *vdev, struct v4l2_format *f);
-int atomisp_set_fmt_file(struct video_device *vdev, struct v4l2_format *f);
+int atomisp_set_fmt(struct file *file, void *fh, struct v4l2_format *f);
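atomisp_set_fmt() now has the standard vidioc handler prototype, so it can be wired directly into the driver's v4l2_ioctl_ops rather than going through a wrapper. A sketch of the assumed hookup (the ops table itself is outside this excerpt):

	/* Assumed wiring; the field name follows the standard v4l2_ioctl_ops. */
	static const struct v4l2_ioctl_ops atomisp_ioctl_ops = {
		.vidioc_s_fmt_vid_cap = atomisp_set_fmt,
		/* ... */
	};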
int atomisp_set_shading_table(struct atomisp_sub_device *asd,
struct atomisp_shading_table *shading_table);
bool q_buffers, enum atomisp_input_stream_id stream_id);
void atomisp_css_flush(struct atomisp_device *isp);
-int atomisp_source_pad_to_stream_id(struct atomisp_sub_device *asd,
- uint16_t source_pad);
/* Events. Only one event has to be exported for now. */
void atomisp_eof_event(struct atomisp_sub_device *asd, uint8_t exp_id);
int atomisp_exp_id_unlock(struct atomisp_sub_device *asd, int *exp_id);
int atomisp_exp_id_capture(struct atomisp_sub_device *asd, int *exp_id);
-/* Function to update Raw Buffer bitmap */
-int atomisp_set_raw_buffer_bitmap(struct atomisp_sub_device *asd, int exp_id);
void atomisp_init_raw_buffer_bitmap(struct atomisp_sub_device *asd);
/* Function to enable/disable zoom for capture pipe */
void atomisp_free_metadata_output_buf(struct atomisp_sub_device *asd);
-void atomisp_css_get_dis_statistics(struct atomisp_sub_device *asd,
- struct atomisp_css_buffer *isp_css_buffer,
- struct ia_css_isp_dvs_statistics_map *dvs_map);
-
void atomisp_css_temp_pipe_to_pipe_id(struct atomisp_sub_device *asd,
struct atomisp_css_event *current_event);
void atomisp_css_morph_table_free(struct ia_css_morph_table *table);
-void atomisp_css_set_cont_prev_start_time(struct atomisp_device *isp,
- unsigned int overlap);
-
int atomisp_css_get_dis_stat(struct atomisp_sub_device *asd,
struct atomisp_dis_statistics *stats);
int atomisp_css_update_stream(struct atomisp_sub_device *asd);
-struct atomisp_acc_fw;
-int atomisp_css_set_acc_parameters(struct atomisp_acc_fw *acc_fw);
-
int atomisp_css_isr_thread(struct atomisp_device *isp,
bool *frame_done_found,
bool *css_pipe_done);
struct ia_css_pipe_info p_info;
struct ia_css_grid_info old_info;
struct atomisp_device *isp = asd->isp;
- int stream_index = atomisp_source_pad_to_stream_id(asd, source_pad);
int md_width = asd->stream_env[ATOMISP_INPUT_STREAM_GENERAL].
stream_config.metadata_config.resolution.width;
memset(&old_info, 0, sizeof(struct ia_css_grid_info));
if (ia_css_pipe_get_info(
- asd->stream_env[stream_index].pipes[pipe_id],
+ asd->stream_env[ATOMISP_INPUT_STREAM_GENERAL].pipes[pipe_id],
&p_info) != 0) {
dev_err(isp->dev, "ia_css_pipe_get_info failed\n");
return -EINVAL;
}
}
-void atomisp_css_get_dis_statistics(struct atomisp_sub_device *asd,
- struct atomisp_css_buffer *isp_css_buffer,
- struct ia_css_isp_dvs_statistics_map *dvs_map)
-{
- if (asd->params.dvs_stat) {
- if (dvs_map)
- ia_css_translate_dvs2_statistics(
- asd->params.dvs_stat, dvs_map);
- else
- ia_css_get_dvs2_statistics(asd->params.dvs_stat,
- isp_css_buffer->css_buffer.data.stats_dvs);
- }
-}
-
void atomisp_css_temp_pipe_to_pipe_id(struct atomisp_sub_device *asd,
struct atomisp_css_event *current_event)
{
struct atomisp_device *isp = asd->isp;
if (ATOMISP_SOC_CAMERA(asd)) {
- stream_index = atomisp_source_pad_to_stream_id(asd, source_pad);
+ stream_index = ATOMISP_INPUT_STREAM_GENERAL;
} else {
stream_index = (pipe_index == IA_CSS_PIPE_ID_YUVPP) ?
ATOMISP_INPUT_STREAM_VIDEO :
- atomisp_source_pad_to_stream_id(asd, source_pad);
+ ATOMISP_INPUT_STREAM_GENERAL;
}
if (0 != ia_css_pipe_get_info(asd->stream_env[stream_index]
struct atomisp_dis_buf *dis_buf;
unsigned long flags;
+ lockdep_assert_held(&isp->mutex);
+
if (!asd->params.dvs_stat->hor_prod.odd_real ||
!asd->params.dvs_stat->hor_prod.odd_imag ||
!asd->params.dvs_stat->hor_prod.even_real ||
return -EINVAL;
/* isp needs to be streaming to get DIS statistics */
- spin_lock_irqsave(&isp->lock, flags);
- if (asd->streaming != ATOMISP_DEVICE_STREAMING_ENABLED) {
- spin_unlock_irqrestore(&isp->lock, flags);
+ if (asd->streaming != ATOMISP_DEVICE_STREAMING_ENABLED)
return -EINVAL;
- }
- spin_unlock_irqrestore(&isp->lock, flags);
if (atomisp_compare_dvs_grid(asd, &stats->dvs2_stat.grid_info) != 0)
/* If the grid info in the argument differs from the current
ia_css_morph_table_free(table);
}
-void atomisp_css_set_cont_prev_start_time(struct atomisp_device *isp,
- unsigned int overlap)
-{
- /* CSS 2.0 doesn't support this API. */
- dev_dbg(isp->dev, "set cont prev start time is not supported.\n");
- return;
-}
-
-/* Set the ACC binary arguments */
-int atomisp_css_set_acc_parameters(struct atomisp_acc_fw *acc_fw)
-{
- unsigned int mem;
-
- for (mem = 0; mem < ATOMISP_ACC_NR_MEMORY; mem++) {
- if (acc_fw->args[mem].length == 0)
- continue;
-
- ia_css_isp_param_set_css_mem_init(&acc_fw->fw->mem_initializers,
- IA_CSS_PARAM_CLASS_PARAM, mem,
- acc_fw->args[mem].css_ptr,
- acc_fw->args[mem].length);
- }
-
- return 0;
-}
-
static struct atomisp_sub_device *__get_atomisp_subdev(
struct ia_css_pipe *css_pipe,
struct atomisp_device *isp,
enum atomisp_input_stream_id stream_id = 0;
struct atomisp_css_event current_event;
struct atomisp_sub_device *asd;
- bool reset_wdt_timer[MAX_STREAM_NUM] = {false};
- int i;
+
+ lockdep_assert_held(&isp->mutex);
	while (!ia_css_dequeue_psys_event(&current_event.event)) {
if (current_event.event.type ==
__func__,
current_event.event.fw_assert_module_id,
current_event.event.fw_assert_line_no);
- for (i = 0; i < isp->num_of_streams; i++)
- atomisp_wdt_stop(&isp->asd[i], 0);
-
- if (!IS_ISP2401)
- atomisp_wdt(&isp->asd[0].wdt);
- else
- queue_work(isp->wdt_work_queue, &isp->wdt_work);
+ queue_work(system_long_wq, &isp->assert_recovery_work);
return -EINVAL;
} else if (current_event.event.type == IA_CSS_EVENT_TYPE_FW_WARNING) {
dev_warn(isp->dev, "%s: ISP reports warning, code is %d, exp_id %d\n",
frame_done_found[asd->index] = true;
atomisp_buf_done(asd, 0, IA_CSS_BUFFER_TYPE_OUTPUT_FRAME,
current_event.pipe, true, stream_id);
-
- if (!IS_ISP2401)
- reset_wdt_timer[asd->index] = true; /* ISP running */
-
break;
case IA_CSS_EVENT_TYPE_SECOND_OUTPUT_FRAME_DONE:
dev_dbg(isp->dev, "event: Second output frame done");
frame_done_found[asd->index] = true;
atomisp_buf_done(asd, 0, IA_CSS_BUFFER_TYPE_SEC_OUTPUT_FRAME,
current_event.pipe, true, stream_id);
-
- if (!IS_ISP2401)
- reset_wdt_timer[asd->index] = true; /* ISP running */
-
break;
case IA_CSS_EVENT_TYPE_3A_STATISTICS_DONE:
dev_dbg(isp->dev, "event: 3A stats frame done");
atomisp_buf_done(asd, 0,
IA_CSS_BUFFER_TYPE_VF_OUTPUT_FRAME,
current_event.pipe, true, stream_id);
-
- if (!IS_ISP2401)
- reset_wdt_timer[asd->index] = true; /* ISP running */
-
break;
case IA_CSS_EVENT_TYPE_SECOND_VF_OUTPUT_FRAME_DONE:
dev_dbg(isp->dev, "event: second VF output frame done");
atomisp_buf_done(asd, 0,
IA_CSS_BUFFER_TYPE_SEC_VF_OUTPUT_FRAME,
current_event.pipe, true, stream_id);
- if (!IS_ISP2401)
- reset_wdt_timer[asd->index] = true; /* ISP running */
-
break;
case IA_CSS_EVENT_TYPE_DIS_STATISTICS_DONE:
dev_dbg(isp->dev, "event: dis stats frame done");
}
}
- if (IS_ISP2401)
- return 0;
-
- /* ISP2400: If there are no buffers queued then delete wdt timer. */
- for (i = 0; i < isp->num_of_streams; i++) {
- asd = &isp->asd[i];
- if (!asd)
- continue;
- if (asd->streaming != ATOMISP_DEVICE_STREAMING_ENABLED)
- continue;
- if (!atomisp_buffers_queued(asd))
- atomisp_wdt_stop(asd, false);
- else if (reset_wdt_timer[i])
- /* SOF irq should not reset wdt timer. */
- atomisp_wdt_refresh(asd,
- ATOMISP_WDT_KEEP_CURRENT_DELAY);
- }
-
return 0;
}
+++ /dev/null
-// SPDX-License-Identifier: GPL-2.0
-/*
- * Support for Medifield PNW Camera Imaging ISP subsystem.
- *
- * Copyright (c) 2010 Intel Corporation. All Rights Reserved.
- *
- * Copyright (c) 2010 Silicon Hive www.siliconhive.com.
- *
- * This program is free software; you can redistribute it and/or
- * modify it under the terms of the GNU General Public License version
- * 2 as published by the Free Software Foundation.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- *
- *
- */
-
-#include <media/v4l2-event.h>
-#include <media/v4l2-mediabus.h>
-
-#include <media/videobuf-vmalloc.h>
-#include <linux/delay.h>
-
-#include "ia_css.h"
-
-#include "atomisp_cmd.h"
-#include "atomisp_common.h"
-#include "atomisp_file.h"
-#include "atomisp_internal.h"
-#include "atomisp_ioctl.h"
-
-static void file_work(struct work_struct *work)
-{
- struct atomisp_file_device *file_dev =
- container_of(work, struct atomisp_file_device, work);
- struct atomisp_device *isp = file_dev->isp;
- /* only support file injection on subdev0 */
- struct atomisp_sub_device *asd = &isp->asd[0];
- struct atomisp_video_pipe *out_pipe = &asd->video_in;
- unsigned short *buf = videobuf_to_vmalloc(out_pipe->outq.bufs[0]);
- struct v4l2_mbus_framefmt isp_sink_fmt;
-
- if (asd->streaming != ATOMISP_DEVICE_STREAMING_ENABLED)
- return;
-
- dev_dbg(isp->dev, ">%s: ready to start streaming\n", __func__);
- isp_sink_fmt = *atomisp_subdev_get_ffmt(&asd->subdev, NULL,
- V4L2_SUBDEV_FORMAT_ACTIVE,
- ATOMISP_SUBDEV_PAD_SINK);
-
- while (!ia_css_isp_has_started())
- usleep_range(1000, 1500);
-
- ia_css_stream_send_input_frame(asd->stream_env[ATOMISP_INPUT_STREAM_GENERAL].stream,
- buf, isp_sink_fmt.width,
- isp_sink_fmt.height);
- dev_dbg(isp->dev, "<%s: streaming done\n", __func__);
-}
-
-static int file_input_s_stream(struct v4l2_subdev *sd, int enable)
-{
- struct atomisp_file_device *file_dev = v4l2_get_subdevdata(sd);
- struct atomisp_device *isp = file_dev->isp;
- /* only support file injection on subdev0 */
- struct atomisp_sub_device *asd = &isp->asd[0];
-
- dev_dbg(isp->dev, "%s: enable %d\n", __func__, enable);
- if (enable) {
- if (asd->streaming != ATOMISP_DEVICE_STREAMING_ENABLED)
- return 0;
-
- queue_work(file_dev->work_queue, &file_dev->work);
- return 0;
- }
- cancel_work_sync(&file_dev->work);
- return 0;
-}
-
-static int file_input_get_fmt(struct v4l2_subdev *sd,
- struct v4l2_subdev_state *sd_state,
- struct v4l2_subdev_format *format)
-{
- struct v4l2_mbus_framefmt *fmt = &format->format;
- struct atomisp_file_device *file_dev = v4l2_get_subdevdata(sd);
- struct atomisp_device *isp = file_dev->isp;
- /* only support file injection on subdev0 */
- struct atomisp_sub_device *asd = &isp->asd[0];
- struct v4l2_mbus_framefmt *isp_sink_fmt;
-
- if (format->pad)
- return -EINVAL;
- isp_sink_fmt = atomisp_subdev_get_ffmt(&asd->subdev, NULL,
- V4L2_SUBDEV_FORMAT_ACTIVE,
- ATOMISP_SUBDEV_PAD_SINK);
-
- fmt->width = isp_sink_fmt->width;
- fmt->height = isp_sink_fmt->height;
- fmt->code = isp_sink_fmt->code;
-
- return 0;
-}
-
-static int file_input_set_fmt(struct v4l2_subdev *sd,
- struct v4l2_subdev_state *sd_state,
- struct v4l2_subdev_format *format)
-{
- struct v4l2_mbus_framefmt *fmt = &format->format;
-
- if (format->pad)
- return -EINVAL;
- file_input_get_fmt(sd, sd_state, format);
- if (format->which == V4L2_SUBDEV_FORMAT_TRY)
- sd_state->pads->try_fmt = *fmt;
- return 0;
-}
-
-static int file_input_log_status(struct v4l2_subdev *sd)
-{
- /*to fake*/
- return 0;
-}
-
-static int file_input_s_power(struct v4l2_subdev *sd, int on)
-{
- /* to fake */
- return 0;
-}
-
-static int file_input_enum_mbus_code(struct v4l2_subdev *sd,
- struct v4l2_subdev_state *sd_state,
- struct v4l2_subdev_mbus_code_enum *code)
-{
- /*to fake*/
- return 0;
-}
-
-static int file_input_enum_frame_size(struct v4l2_subdev *sd,
- struct v4l2_subdev_state *sd_state,
- struct v4l2_subdev_frame_size_enum *fse)
-{
- /*to fake*/
- return 0;
-}
-
-static int file_input_enum_frame_ival(struct v4l2_subdev *sd,
- struct v4l2_subdev_state *sd_state,
- struct v4l2_subdev_frame_interval_enum
- *fie)
-{
- /*to fake*/
- return 0;
-}
-
-static const struct v4l2_subdev_video_ops file_input_video_ops = {
- .s_stream = file_input_s_stream,
-};
-
-static const struct v4l2_subdev_core_ops file_input_core_ops = {
- .log_status = file_input_log_status,
- .s_power = file_input_s_power,
-};
-
-static const struct v4l2_subdev_pad_ops file_input_pad_ops = {
- .enum_mbus_code = file_input_enum_mbus_code,
- .enum_frame_size = file_input_enum_frame_size,
- .enum_frame_interval = file_input_enum_frame_ival,
- .get_fmt = file_input_get_fmt,
- .set_fmt = file_input_set_fmt,
-};
-
-static const struct v4l2_subdev_ops file_input_ops = {
- .core = &file_input_core_ops,
- .video = &file_input_video_ops,
- .pad = &file_input_pad_ops,
-};
-
-void
-atomisp_file_input_unregister_entities(struct atomisp_file_device *file_dev)
-{
- media_entity_cleanup(&file_dev->sd.entity);
- v4l2_device_unregister_subdev(&file_dev->sd);
-}
-
-int atomisp_file_input_register_entities(struct atomisp_file_device *file_dev,
- struct v4l2_device *vdev)
-{
- /* Register the subdev and video nodes. */
- return v4l2_device_register_subdev(vdev, &file_dev->sd);
-}
-
-void atomisp_file_input_cleanup(struct atomisp_device *isp)
-{
- struct atomisp_file_device *file_dev = &isp->file_dev;
-
- if (file_dev->work_queue) {
- destroy_workqueue(file_dev->work_queue);
- file_dev->work_queue = NULL;
- }
-}
-
-int atomisp_file_input_init(struct atomisp_device *isp)
-{
- struct atomisp_file_device *file_dev = &isp->file_dev;
- struct v4l2_subdev *sd = &file_dev->sd;
- struct media_pad *pads = file_dev->pads;
- struct media_entity *me = &sd->entity;
-
- file_dev->isp = isp;
- file_dev->work_queue = alloc_workqueue(isp->v4l2_dev.name, 0, 1);
- if (!file_dev->work_queue) {
- dev_err(isp->dev, "Failed to initialize file inject workq\n");
- return -ENOMEM;
- }
-
- INIT_WORK(&file_dev->work, file_work);
-
- v4l2_subdev_init(sd, &file_input_ops);
- sd->flags |= V4L2_SUBDEV_FL_HAS_DEVNODE;
- strscpy(sd->name, "file_input_subdev", sizeof(sd->name));
- v4l2_set_subdevdata(sd, file_dev);
-
- pads[0].flags = MEDIA_PAD_FL_SINK;
- me->function = MEDIA_ENT_F_V4L2_SUBDEV_UNKNOWN;
-
- return media_entity_pads_init(me, 1, pads);
-}
+++ /dev/null
-/* SPDX-License-Identifier: GPL-2.0 */
-/*
- * Support for Medifield PNW Camera Imaging ISP subsystem.
- *
- * Copyright (c) 2010 Intel Corporation. All Rights Reserved.
- *
- * Copyright (c) 2010 Silicon Hive www.siliconhive.com.
- *
- * This program is free software; you can redistribute it and/or
- * modify it under the terms of the GNU General Public License version
- * 2 as published by the Free Software Foundation.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- *
- *
- */
-
-#ifndef __ATOMISP_FILE_H__
-#define __ATOMISP_FILE_H__
-
-#include <media/media-entity.h>
-#include <media/v4l2-subdev.h>
-
-struct atomisp_device;
-
-struct atomisp_file_device {
- struct v4l2_subdev sd;
- struct atomisp_device *isp;
- struct media_pad pads[1];
-
- struct workqueue_struct *work_queue;
- struct work_struct work;
-};
-
-void atomisp_file_input_cleanup(struct atomisp_device *isp);
-int atomisp_file_input_init(struct atomisp_device *isp);
-void atomisp_file_input_unregister_entities(
- struct atomisp_file_device *file_dev);
-int atomisp_file_input_register_entities(struct atomisp_file_device *file_dev,
- struct v4l2_device *vdev);
-#endif /* __ATOMISP_FILE_H__ */
return IA_CSS_BUFFER_TYPE_VF_OUTPUT_FRAME;
}
-static int atomisp_qbuffers_to_css_for_all_pipes(struct atomisp_sub_device *asd)
-{
- enum ia_css_buffer_type buf_type;
- enum ia_css_pipe_id css_capture_pipe_id = IA_CSS_PIPE_ID_COPY;
- enum ia_css_pipe_id css_preview_pipe_id = IA_CSS_PIPE_ID_COPY;
- enum ia_css_pipe_id css_video_pipe_id = IA_CSS_PIPE_ID_COPY;
- enum atomisp_input_stream_id input_stream_id;
- struct atomisp_video_pipe *capture_pipe;
- struct atomisp_video_pipe *preview_pipe;
- struct atomisp_video_pipe *video_pipe;
-
- capture_pipe = &asd->video_out_capture;
- preview_pipe = &asd->video_out_preview;
- video_pipe = &asd->video_out_video_capture;
-
- buf_type = atomisp_get_css_buf_type(
- asd, css_preview_pipe_id,
- atomisp_subdev_source_pad(&preview_pipe->vdev));
- input_stream_id = ATOMISP_INPUT_STREAM_PREVIEW;
- atomisp_q_video_buffers_to_css(asd, preview_pipe,
- input_stream_id,
- buf_type, css_preview_pipe_id);
-
- buf_type = atomisp_get_css_buf_type(asd, css_capture_pipe_id,
- atomisp_subdev_source_pad(&capture_pipe->vdev));
- input_stream_id = ATOMISP_INPUT_STREAM_GENERAL;
- atomisp_q_video_buffers_to_css(asd, capture_pipe,
- input_stream_id,
- buf_type, css_capture_pipe_id);
-
- buf_type = atomisp_get_css_buf_type(asd, css_video_pipe_id,
- atomisp_subdev_source_pad(&video_pipe->vdev));
- input_stream_id = ATOMISP_INPUT_STREAM_VIDEO;
- atomisp_q_video_buffers_to_css(asd, video_pipe,
- input_stream_id,
- buf_type, css_video_pipe_id);
- return 0;
-}
-
/* queue all available buffers to css */
int atomisp_qbuffers_to_css(struct atomisp_sub_device *asd)
{
bool raw_mode = atomisp_is_mbuscode_raw(
asd->fmt[asd->capture_pad].fmt.code);
- if (asd->isp->inputs[asd->input_curr].camera_caps->
- sensor[asd->sensor_curr].stream_num == 2 &&
- !asd->yuvpp_mode)
- return atomisp_qbuffers_to_css_for_all_pipes(asd);
-
if (asd->vfpp->val == ATOMISP_VFPP_DISABLE_SCALER) {
video_pipe = &asd->video_out_video_capture;
css_video_pipe_id = IA_CSS_PIPE_ID_VIDEO;
atomisp_videobuf_free_buf(vb);
}
-static int atomisp_buf_setup_output(struct videobuf_queue *vq,
- unsigned int *count, unsigned int *size)
-{
- struct atomisp_video_pipe *pipe = vq->priv_data;
-
- *size = pipe->pix.sizeimage;
-
- return 0;
-}
-
-static int atomisp_buf_prepare_output(struct videobuf_queue *vq,
- struct videobuf_buffer *vb,
- enum v4l2_field field)
-{
- struct atomisp_video_pipe *pipe = vq->priv_data;
-
- vb->size = pipe->pix.sizeimage;
- vb->width = pipe->pix.width;
- vb->height = pipe->pix.height;
- vb->field = field;
- vb->state = VIDEOBUF_PREPARED;
-
- return 0;
-}
-
-static void atomisp_buf_queue_output(struct videobuf_queue *vq,
- struct videobuf_buffer *vb)
-{
- struct atomisp_video_pipe *pipe = vq->priv_data;
-
- list_add_tail(&vb->queue, &pipe->activeq_out);
- vb->state = VIDEOBUF_QUEUED;
-}
-
-static void atomisp_buf_release_output(struct videobuf_queue *vq,
- struct videobuf_buffer *vb)
-{
- videobuf_vmalloc_free(vb);
- vb->state = VIDEOBUF_NEEDS_INIT;
-}
-
static const struct videobuf_queue_ops videobuf_qops = {
.buf_setup = atomisp_buf_setup,
.buf_prepare = atomisp_buf_prepare,
.buf_release = atomisp_buf_release,
};
-static const struct videobuf_queue_ops videobuf_qops_output = {
- .buf_setup = atomisp_buf_setup_output,
- .buf_prepare = atomisp_buf_prepare_output,
- .buf_queue = atomisp_buf_queue_output,
- .buf_release = atomisp_buf_release_output,
-};
-
static int atomisp_init_pipe(struct atomisp_video_pipe *pipe)
{
/* init locks */
sizeof(struct atomisp_buffer), pipe,
NULL); /* ext_lock: NULL */
- videobuf_queue_vmalloc_init(&pipe->outq, &videobuf_qops_output, NULL,
- &pipe->irq_lock,
- V4L2_BUF_TYPE_VIDEO_OUTPUT,
- V4L2_FIELD_NONE,
- sizeof(struct atomisp_buffer), pipe,
- NULL); /* ext_lock: NULL */
-
INIT_LIST_HEAD(&pipe->activeq);
- INIT_LIST_HEAD(&pipe->activeq_out);
INIT_LIST_HEAD(&pipe->buffers_waiting_for_param);
INIT_LIST_HEAD(&pipe->per_frame_params);
memset(pipe->frame_request_config_id, 0,
{
unsigned int i;
- isp->sw_contex.file_input = false;
isp->need_gfx_throttle = true;
isp->isp_fatal_error = false;
isp->mipi_frame_size = 0;
return asd->video_out_preview.users +
asd->video_out_vf.users +
asd->video_out_capture.users +
- asd->video_out_video_capture.users +
- asd->video_acc.users +
- asd->video_in.users;
+ asd->video_out_video_capture.users;
}
unsigned int atomisp_dev_users(struct atomisp_device *isp)
{
struct video_device *vdev = video_devdata(file);
struct atomisp_device *isp = video_get_drvdata(vdev);
- struct atomisp_video_pipe *pipe = NULL;
- struct atomisp_acc_pipe *acc_pipe = NULL;
- struct atomisp_sub_device *asd;
- bool acc_node = false;
+ struct atomisp_video_pipe *pipe = atomisp_to_video_pipe(vdev);
+ struct atomisp_sub_device *asd = pipe->asd;
int ret;
dev_dbg(isp->dev, "open device %s\n", vdev->name);
- /*
- * Ensure that if we are still loading we block. Once the loading
- * is over we can proceed. We can't blindly hold the lock until
- * that occurs as if the load fails we'll deadlock the unload
- */
- rt_mutex_lock(&isp->loading);
- /*
- * FIXME: revisit this with a better check once the code structure
- * is cleaned up a bit more
- */
ret = v4l2_fh_open(file);
- if (ret) {
- dev_err(isp->dev,
- "%s: v4l2_fh_open() returned error %d\n",
- __func__, ret);
- rt_mutex_unlock(&isp->loading);
+ if (ret)
return ret;
- }
- if (!isp->ready) {
- rt_mutex_unlock(&isp->loading);
- return -ENXIO;
- }
- rt_mutex_unlock(&isp->loading);
- rt_mutex_lock(&isp->mutex);
+ mutex_lock(&isp->mutex);
- acc_node = !strcmp(vdev->name, "ATOMISP ISP ACC");
- if (acc_node) {
- acc_pipe = atomisp_to_acc_pipe(vdev);
- asd = acc_pipe->asd;
- } else {
- pipe = atomisp_to_video_pipe(vdev);
- asd = pipe->asd;
- }
asd->subdev.devnode = vdev;
/* Deferred firmware loading case. */
if (isp->css_env.isp_css_fw.bytes == 0) {
isp->css_env.isp_css_fw.data = NULL;
}
- if (acc_node && acc_pipe->users) {
- dev_dbg(isp->dev, "acc node already opened\n");
- rt_mutex_unlock(&isp->mutex);
- return -EBUSY;
- } else if (acc_node) {
- goto dev_init;
- }
-
if (!isp->input_cnt) {
dev_err(isp->dev, "no camera attached\n");
ret = -EINVAL;
*/
if (pipe->users) {
dev_dbg(isp->dev, "video node already opened\n");
- rt_mutex_unlock(&isp->mutex);
+ mutex_unlock(&isp->mutex);
return -EBUSY;
}
if (ret)
goto error;
-dev_init:
if (atomisp_dev_users(isp)) {
dev_dbg(isp->dev, "skip init isp in open\n");
goto init_subdev;
atomisp_subdev_init_struct(asd);
done:
-
- if (acc_node)
- acc_pipe->users++;
- else
- pipe->users++;
- rt_mutex_unlock(&isp->mutex);
+ pipe->users++;
+ mutex_unlock(&isp->mutex);
/* Ensure that a mode is set */
- if (!acc_node)
- v4l2_ctrl_s_ctrl(asd->run_mode, pipe->default_run_mode);
+ v4l2_ctrl_s_ctrl(asd->run_mode, pipe->default_run_mode);
return 0;
atomisp_css_uninit(isp);
pm_runtime_put(vdev->v4l2_dev->dev);
error:
- rt_mutex_unlock(&isp->mutex);
+ mutex_unlock(&isp->mutex);
+ v4l2_fh_release(file);
return ret;
}
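Note the v4l2_fh_release(file) added to the error path: atomisp_open() creates a file handle with v4l2_fh_open(), so every failure after that point must drop it again or the handle leaks. The shape of the pattern as applied above:

	ret = v4l2_fh_open(file);
	if (ret)
		return ret;
	/* ... setup that may fail jumps to error: ... */
error:
	mutex_unlock(&isp->mutex);
	v4l2_fh_release(file);	/* undo v4l2_fh_open() on failure */
	return ret;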
{
struct video_device *vdev = video_devdata(file);
struct atomisp_device *isp = video_get_drvdata(vdev);
- struct atomisp_video_pipe *pipe;
- struct atomisp_acc_pipe *acc_pipe;
- struct atomisp_sub_device *asd;
- bool acc_node;
+ struct atomisp_video_pipe *pipe = atomisp_to_video_pipe(vdev);
+ struct atomisp_sub_device *asd = pipe->asd;
struct v4l2_requestbuffers req;
struct v4l2_subdev_fh fh;
struct v4l2_rect clear_compose = {0};
+ unsigned long flags;
int ret = 0;
v4l2_fh_init(&fh.vfh, vdev);
if (!isp)
return -EBADF;
- mutex_lock(&isp->streamoff_mutex);
- rt_mutex_lock(&isp->mutex);
+ mutex_lock(&isp->mutex);
dev_dbg(isp->dev, "release device %s\n", vdev->name);
- acc_node = !strcmp(vdev->name, "ATOMISP ISP ACC");
- if (acc_node) {
- acc_pipe = atomisp_to_acc_pipe(vdev);
- asd = acc_pipe->asd;
- } else {
- pipe = atomisp_to_video_pipe(vdev);
- asd = pipe->asd;
- }
+
asd->subdev.devnode = vdev;
- if (acc_node) {
- acc_pipe->users--;
- goto subdev_uninit;
- }
+
pipe->users--;
if (pipe->capq.streaming)
__func__);
if (pipe->capq.streaming &&
- __atomisp_streamoff(file, NULL, V4L2_BUF_TYPE_VIDEO_CAPTURE)) {
- dev_err(isp->dev,
- "atomisp_streamoff failed on release, driver bug");
+ atomisp_streamoff(file, NULL, V4L2_BUF_TYPE_VIDEO_CAPTURE)) {
+ dev_err(isp->dev, "atomisp_streamoff failed on release, driver bug");
goto done;
}
if (pipe->users)
goto done;
- if (__atomisp_reqbufs(file, NULL, &req)) {
- dev_err(isp->dev,
- "atomisp_reqbufs failed on release, driver bug");
+ if (atomisp_reqbufs(file, NULL, &req)) {
+ dev_err(isp->dev, "atomisp_reqbufs failed on release, driver bug");
goto done;
}
- if (pipe->outq.bufs[0]) {
- mutex_lock(&pipe->outq.vb_lock);
- videobuf_queue_cancel(&pipe->outq);
- mutex_unlock(&pipe->outq.vb_lock);
- }
-
/*
* A little trick here:
	 * file injection input resolution is recorded in the sink pad.
* The sink pad setting can only be cleared when all device nodes
* get released.
*/
- if (!isp->sw_contex.file_input && asd->fmt_auto->val) {
+ if (asd->fmt_auto->val) {
struct v4l2_mbus_framefmt isp_sink_fmt = { 0 };
atomisp_subdev_set_ffmt(&asd->subdev, fh.state,
V4L2_SUBDEV_FORMAT_ACTIVE,
ATOMISP_SUBDEV_PAD_SINK, &isp_sink_fmt);
}
-subdev_uninit:
+
if (atomisp_subdev_users(asd))
goto done;
- /* clear the sink pad for file input */
- if (isp->sw_contex.file_input && asd->fmt_auto->val) {
- struct v4l2_mbus_framefmt isp_sink_fmt = { 0 };
-
- atomisp_subdev_set_ffmt(&asd->subdev, fh.state,
- V4L2_SUBDEV_FORMAT_ACTIVE,
- ATOMISP_SUBDEV_PAD_SINK, &isp_sink_fmt);
- }
-
atomisp_css_free_stat_buffers(asd);
atomisp_free_internal_buffers(asd);
ret = v4l2_subdev_call(isp->inputs[asd->input_curr].camera,
/* clear the asd field to show this camera is not used */
isp->inputs[asd->input_curr].asd = NULL;
+ spin_lock_irqsave(&isp->lock, flags);
asd->streaming = ATOMISP_DEVICE_STREAMING_DISABLED;
+ spin_unlock_irqrestore(&isp->lock, flags);
if (atomisp_dev_users(isp))
goto done;
dev_err(isp->dev, "Failed to power off device\n");
done:
- if (!acc_node) {
- atomisp_subdev_set_selection(&asd->subdev, fh.state,
- V4L2_SUBDEV_FORMAT_ACTIVE,
- atomisp_subdev_source_pad(vdev),
- V4L2_SEL_TGT_COMPOSE, 0,
- &clear_compose);
- }
- rt_mutex_unlock(&isp->mutex);
- mutex_unlock(&isp->streamoff_mutex);
+ atomisp_subdev_set_selection(&asd->subdev, fh.state,
+ V4L2_SUBDEV_FORMAT_ACTIVE,
+ atomisp_subdev_source_pad(vdev),
+ V4L2_SEL_TGT_COMPOSE, 0,
+ &clear_compose);
+ mutex_unlock(&isp->mutex);
return v4l2_fh_release(file);
}
if (!(vma->vm_flags & (VM_WRITE | VM_READ)))
return -EACCES;
- rt_mutex_lock(&isp->mutex);
+ mutex_lock(&isp->mutex);
if (!(vma->vm_flags & VM_SHARED)) {
/* Map private buffer.
*/
vma->vm_flags |= VM_SHARED;
ret = hmm_mmap(vma, vma->vm_pgoff << PAGE_SHIFT);
- rt_mutex_unlock(&isp->mutex);
+ mutex_unlock(&isp->mutex);
return ret;
}
}
raw_virt_addr->data_bytes = origin_size;
vma->vm_flags |= VM_IO | VM_DONTEXPAND | VM_DONTDUMP;
- rt_mutex_unlock(&isp->mutex);
+ mutex_unlock(&isp->mutex);
return 0;
}
ret = -EINVAL;
goto error;
}
- rt_mutex_unlock(&isp->mutex);
+ mutex_unlock(&isp->mutex);
return atomisp_videobuf_mmap_mapper(&pipe->capq, vma);
error:
- rt_mutex_unlock(&isp->mutex);
+ mutex_unlock(&isp->mutex);
return ret;
}
-static int atomisp_file_mmap(struct file *file, struct vm_area_struct *vma)
-{
- struct video_device *vdev = video_devdata(file);
- struct atomisp_video_pipe *pipe = atomisp_to_video_pipe(vdev);
-
- return videobuf_mmap_mapper(&pipe->outq, vma);
-}
-
static __poll_t atomisp_poll(struct file *file,
struct poll_table_struct *pt)
{
struct atomisp_device *isp = video_get_drvdata(vdev);
struct atomisp_video_pipe *pipe = atomisp_to_video_pipe(vdev);
- rt_mutex_lock(&isp->mutex);
+ mutex_lock(&isp->mutex);
if (pipe->capq.streaming != 1) {
- rt_mutex_unlock(&isp->mutex);
+ mutex_unlock(&isp->mutex);
return EPOLLERR;
}
- rt_mutex_unlock(&isp->mutex);
+ mutex_unlock(&isp->mutex);
return videobuf_poll_stream(file, &pipe->capq, pt);
}
#endif
.poll = atomisp_poll,
};
-
-const struct v4l2_file_operations atomisp_file_fops = {
- .owner = THIS_MODULE,
- .open = atomisp_open,
- .release = atomisp_release,
- .mmap = atomisp_file_mmap,
- .unlocked_ioctl = video_ioctl2,
-#ifdef CONFIG_COMPAT
- /* .compat_ioctl32 = atomisp_compat_ioctl32, */
-#endif
- .poll = atomisp_poll,
-};
static struct gmin_subdev *find_gmin_subdev(struct v4l2_subdev *subdev);
-/*
- * Legacy/stub behavior copied from upstream platform_camera.c. The
- * atomisp driver relies on these values being non-NULL in a few
- * places, even though they are hard-coded in all current
- * implementations.
- */
-const struct atomisp_camera_caps *atomisp_get_default_camera_caps(void)
-{
- static const struct atomisp_camera_caps caps = {
- .sensor_num = 1,
- .sensor = {
- { .stream_num = 1, },
- },
- };
-	return &caps;
-}
-EXPORT_SYMBOL_GPL(atomisp_get_default_camera_caps);
-
const struct atomisp_platform_data *atomisp_get_platform_data(void)
{
return &pdata;
return ret;
}
+static int camera_sensor_csi_alloc(struct v4l2_subdev *sd, u32 port, u32 lanes,
+ u32 format, u32 bayer_order)
+{
+ struct i2c_client *client = v4l2_get_subdevdata(sd);
+ struct camera_mipi_info *csi;
+
+ csi = kzalloc(sizeof(*csi), GFP_KERNEL);
+ if (!csi)
+ return -ENOMEM;
+
+ csi->port = port;
+ csi->num_lanes = lanes;
+ csi->input_format = format;
+ csi->raw_bayer_order = bayer_order;
+ v4l2_set_subdev_hostdata(sd, csi);
+ csi->metadata_format = ATOMISP_INPUT_FORMAT_EMBEDDED;
+ csi->metadata_effective_width = NULL;
+ dev_info(&client->dev,
+ "camera pdata: port: %d lanes: %d order: %8.8x\n",
+ port, lanes, bayer_order);
+
+ return 0;
+}
+
+static void camera_sensor_csi_free(struct v4l2_subdev *sd)
+{
+ struct camera_mipi_info *csi;
+
+ csi = v4l2_get_subdev_hostdata(sd);
+ kfree(csi);
+}
+
static int gmin_csi_cfg(struct v4l2_subdev *sd, int flag)
{
struct i2c_client *client = v4l2_get_subdevdata(sd);
if (!client || !gs)
return -ENODEV;
- return camera_sensor_csi(sd, gs->csi_port, gs->csi_lanes,
- gs->csi_fmt, gs->csi_bayer, flag);
+ if (flag)
+ return camera_sensor_csi_alloc(sd, gs->csi_port, gs->csi_lanes,
+ gs->csi_fmt, gs->csi_bayer);
+ camera_sensor_csi_free(sd);
+ return 0;
}
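Splitting the old flag-based camera_sensor_csi() into camera_sensor_csi_alloc() and camera_sensor_csi_free() makes ownership of the camera_mipi_info hostdata explicit. The expected call pattern through the csi_cfg hook, assuming the usual power-up/power-down sequencing by the sensor side:

	/* flag != 0 on power-up, flag == 0 on power-down, keeping the
	 * kzalloc() in _alloc paired with the kfree() in _free. */
	ret = gmin_csi_cfg(sd, 1);	/* allocate and set subdev hostdata */
	if (ret)
		return ret;
	/* ... streaming ... */
	gmin_csi_cfg(sd, 0);		/* free the hostdata again */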
static struct camera_vcm_control *gmin_get_vcm_ctrl(struct v4l2_subdev *subdev,
if (!strcmp(var, "CamClk"))
return -EINVAL;
- obj = acpi_evaluate_dsm(handle, &atomisp_dsm_guid, 0, 0, NULL);
+ /* Return on unexpected object type */
+ obj = acpi_evaluate_dsm_typed(handle, &atomisp_dsm_guid, 0, 0, NULL,
+ ACPI_TYPE_PACKAGE);
if (!obj) {
dev_info_once(dev, "Didn't find ACPI _DSM table.\n");
return -EINVAL;
}
- /* Return on unexpected object type */
- if (obj->type != ACPI_TYPE_PACKAGE)
- return -EINVAL;
-
#if 0 /* Just for debugging purposes */
for (i = 0; i < obj->package.count; i++) {
union acpi_object *cur = &obj->package.elements[i];
}
EXPORT_SYMBOL_GPL(gmin_get_var_int);
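The switch to acpi_evaluate_dsm_typed() folds the object-type check into the evaluation itself: on a type mismatch the helper frees the object and returns NULL. This also fixes a leak in the removed open-coded check, which returned -EINVAL without freeing obj. For comparison, the open-coded equivalent of what the typed helper does:

	obj = acpi_evaluate_dsm(handle, &atomisp_dsm_guid, 0, 0, NULL);
	if (obj && obj->type != ACPI_TYPE_PACKAGE) {
		ACPI_FREE(obj);
		obj = NULL;	/* caller then reports "no ACPI _DSM table" */
	}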
-int camera_sensor_csi(struct v4l2_subdev *sd, u32 port,
- u32 lanes, u32 format, u32 bayer_order, int flag)
-{
- struct i2c_client *client = v4l2_get_subdevdata(sd);
- struct camera_mipi_info *csi = NULL;
-
- if (flag) {
- csi = kzalloc(sizeof(*csi), GFP_KERNEL);
- if (!csi)
- return -ENOMEM;
- csi->port = port;
- csi->num_lanes = lanes;
- csi->input_format = format;
- csi->raw_bayer_order = bayer_order;
- v4l2_set_subdev_hostdata(sd, (void *)csi);
- csi->metadata_format = ATOMISP_INPUT_FORMAT_EMBEDDED;
- csi->metadata_effective_width = NULL;
- dev_info(&client->dev,
- "camera pdata: port: %d lanes: %d order: %8.8x\n",
- port, lanes, bayer_order);
- } else {
- csi = v4l2_get_subdev_hostdata(sd);
- kfree(csi);
- }
-
- return 0;
-}
-EXPORT_SYMBOL_GPL(camera_sensor_csi);
-
/* PCI quirk: The BYT ISP advertises PCI runtime PM but it doesn't
* work. Disable so the kernel framework doesn't hang the device
* trying. The driver itself does direct calls to the PUNIT to manage
#include "sh_css_legacy.h"
#include "atomisp_csi2.h"
-#include "atomisp_file.h"
#include "atomisp_subdev.h"
#include "atomisp_tpg.h"
#include "atomisp_compat.h"
#define ATOM_ISP_POWER_DOWN 0
#define ATOM_ISP_POWER_UP 1
-#define ATOM_ISP_MAX_INPUTS 4
+#define ATOM_ISP_MAX_INPUTS 3
#define ATOMISP_SC_TYPE_SIZE 2
#define ATOMISP_ISP_TIMEOUT_DURATION (2 * HZ)
#define ATOMISP_EXT_ISP_TIMEOUT_DURATION (6 * HZ)
-#define ATOMISP_ISP_FILE_TIMEOUT_DURATION (60 * HZ)
#define ATOMISP_WDT_KEEP_CURRENT_DELAY 0
#define ATOMISP_ISP_MAX_TIMEOUT_COUNT 2
#define ATOMISP_CSS_STOP_TIMEOUT_US 200000
#define ATOMISP_DELAYED_INIT_QUEUED 1
#define ATOMISP_DELAYED_INIT_DONE 2
-#define ATOMISP_CALC_CSS_PREV_OVERLAP(lines) \
- ((lines) * 38 / 100 & 0xfffffe)
-
/*
* Define how fast CPU should be able to serve ISP interrupts.
* The bigger the value, the higher risk that the ISP is not
* Moorefield/Baytrail platform.
*/
#define ATOMISP_SOC_CAMERA(asd) \
- (asd->isp->inputs[asd->input_curr].type == SOC_CAMERA \
- && asd->isp->inputs[asd->input_curr].camera_caps-> \
- sensor[asd->sensor_curr].stream_num == 1)
+ (asd->isp->inputs[asd->input_curr].type == SOC_CAMERA)
#define ATOMISP_USE_YUVPP(asd) \
(ATOMISP_SOC_CAMERA(asd) && ATOMISP_CSS_SUPPORT_YUVPP && \
*/
struct atomisp_sub_device *asd;
- const struct atomisp_camera_caps *camera_caps;
int sensor_index;
};
};
struct atomisp_sw_contex {
- bool file_input;
int power_state;
int running_freq;
};
struct atomisp_mipi_csi2_device csi2_port[ATOMISP_CAMERA_NR_PORTS];
struct atomisp_tpg_device tpg;
- struct atomisp_file_device file_dev;
/* Purpose of mutex is to protect and serialize use of isp data
* structures and css API calls. */
- struct rt_mutex mutex;
- /*
- * This mutex ensures that we don't allow an open to succeed while
- * the initialization process is incomplete
- */
- struct rt_mutex loading;
- /* Set once the ISP is ready to allow opens */
- bool ready;
- /*
- * Serialise streamoff: mutex is dropped during streamoff to
- * cancel the watchdog queue. MUST be acquired BEFORE
- * "mutex".
- */
- struct mutex streamoff_mutex;
+ struct mutex mutex;
unsigned int input_cnt;
struct atomisp_input_subdev inputs[ATOM_ISP_MAX_INPUTS];
/* isp timeout status flag */
bool isp_timeout;
bool isp_fatal_error;
- struct workqueue_struct *wdt_work_queue;
- struct work_struct wdt_work;
-
- /* ISP2400 */
- atomic_t wdt_count;
-
- atomic_t wdt_work_queued;
+ struct work_struct assert_recovery_work;
- spinlock_t lock; /* Just for streaming below */
+ spinlock_t lock; /* Protects asd[i].streaming */
bool need_gfx_throttle;
extern struct device *atomisp_dev;
-#define atomisp_is_wdt_running(a) timer_pending(&(a)->wdt)
-
-/* ISP2401 */
-void atomisp_wdt_refresh_pipe(struct atomisp_video_pipe *pipe,
- unsigned int delay);
-void atomisp_wdt_refresh(struct atomisp_sub_device *asd, unsigned int delay);
-
-/* ISP2400 */
-void atomisp_wdt_start(struct atomisp_sub_device *asd);
-
-/* ISP2401 */
-void atomisp_wdt_start_pipe(struct atomisp_video_pipe *pipe);
-void atomisp_wdt_stop_pipe(struct atomisp_video_pipe *pipe, bool sync);
-
-void atomisp_wdt_stop(struct atomisp_sub_device *asd, bool sync);
-
#endif /* __ATOMISP_INTERNAL_H__ */
return NULL;
}
+int atomisp_pipe_check(struct atomisp_video_pipe *pipe, bool settings_change)
+{
+ lockdep_assert_held(&pipe->isp->mutex);
+
+ if (pipe->isp->isp_fatal_error)
+ return -EIO;
+
+ switch (pipe->asd->streaming) {
+ case ATOMISP_DEVICE_STREAMING_DISABLED:
+ break;
+ case ATOMISP_DEVICE_STREAMING_ENABLED:
+ if (settings_change) {
+ dev_err(pipe->isp->dev, "Set fmt/input IOCTL while streaming\n");
+ return -EBUSY;
+ }
+ break;
+ case ATOMISP_DEVICE_STREAMING_STOPPING:
+ dev_err(pipe->isp->dev, "IOCTL issued while stopping\n");
+ return -EBUSY;
+ default:
+ return -EINVAL;
+ }
+
+ return 0;
+}
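atomisp_pipe_check() centralises the state checks that the ioctl handlers
used to open-code: settings-changing ioctls (s_input, s_fmt) pass true so
they are rejected while streaming, while buffer ioctls (qbuf, dqbuf,
streamon) pass false and are only rejected on a fatal error or while
stopping. A sketch of a hypothetical settings handler using it::

    static int example_s_something(struct file *file, void *fh, int val)
    {
            struct video_device *vdev = video_devdata(file);
            struct atomisp_video_pipe *pipe = atomisp_to_video_pipe(vdev);
            int ret;

            /* isp->mutex is already held by the v4l2 core (vdev->lock) */
            ret = atomisp_pipe_check(pipe, true);
            if (ret)
                    return ret;

            /* ... apply the new setting ... */
            return 0;
    }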
+
/*
* v4l2 ioctls
* return ISP capabilities
return asd->video_out_preview.capq.streaming
+ asd->video_out_capture.capq.streaming
+ asd->video_out_video_capture.capq.streaming
- + asd->video_out_vf.capq.streaming
- + asd->video_in.capq.streaming;
+ + asd->video_out_vf.capq.streaming;
}
unsigned int atomisp_streaming_count(struct atomisp_device *isp)
static int atomisp_g_input(struct file *file, void *fh, unsigned int *input)
{
struct video_device *vdev = video_devdata(file);
- struct atomisp_device *isp = video_get_drvdata(vdev);
struct atomisp_sub_device *asd = atomisp_to_video_pipe(vdev)->asd;
- if (!asd) {
- dev_err(isp->dev, "%s(): asd is NULL, device is %s\n",
- __func__, vdev->name);
- return -EINVAL;
- }
-
- rt_mutex_lock(&isp->mutex);
*input = asd->input_curr;
- rt_mutex_unlock(&isp->mutex);
-
return 0;
}
{
struct video_device *vdev = video_devdata(file);
struct atomisp_device *isp = video_get_drvdata(vdev);
- struct atomisp_sub_device *asd = atomisp_to_video_pipe(vdev)->asd;
+ struct atomisp_video_pipe *pipe = atomisp_to_video_pipe(vdev);
+ struct atomisp_sub_device *asd = pipe->asd;
struct v4l2_subdev *camera = NULL;
struct v4l2_subdev *motor;
int ret;
- if (!asd) {
- dev_err(isp->dev, "%s(): asd is NULL, device is %s\n",
- __func__, vdev->name);
- return -EINVAL;
- }
+ ret = atomisp_pipe_check(pipe, true);
+ if (ret)
+ return ret;
- rt_mutex_lock(&isp->mutex);
if (input >= ATOM_ISP_MAX_INPUTS || input >= isp->input_cnt) {
dev_dbg(isp->dev, "input_cnt: %d\n", isp->input_cnt);
- ret = -EINVAL;
- goto error;
+ return -EINVAL;
}
/*
dev_err(isp->dev,
"%s, camera is already used by stream: %d\n", __func__,
isp->inputs[input].asd->index);
- ret = -EBUSY;
- goto error;
+ return -EBUSY;
}
camera = isp->inputs[input].camera;
if (!camera) {
dev_err(isp->dev, "%s, no camera\n", __func__);
- ret = -EINVAL;
- goto error;
- }
-
- if (atomisp_subdev_streaming_count(asd)) {
- dev_err(isp->dev,
- "ISP is still streaming, stop first\n");
- ret = -EINVAL;
- goto error;
+ return -EINVAL;
}
/* power off the current owned sensor, as it is not used this time */
ret = v4l2_subdev_call(isp->inputs[input].camera, core, s_power, 1);
if (ret) {
dev_err(isp->dev, "Failed to power-on sensor\n");
- goto error;
+ return ret;
}
/*
* Some sensor driver resets the run mode during power-on, thus force
0, isp->inputs[input].sensor_index, 0);
if (ret && (ret != -ENOIOCTLCMD)) {
dev_err(isp->dev, "Failed to select sensor\n");
- goto error;
+ return ret;
}
if (!IS_ISP2401) {
ret = v4l2_subdev_call(motor, core, s_power, 1);
}
- if (!isp->sw_contex.file_input && motor)
+ if (motor)
ret = v4l2_subdev_call(motor, core, init, 1);
asd->input_curr = input;
/* mark this camera is used by the current stream */
isp->inputs[input].asd = asd;
- rt_mutex_unlock(&isp->mutex);
return 0;
-
-error:
- rt_mutex_unlock(&isp->mutex);
-
- return ret;
}
static int atomisp_enum_framesizes(struct file *file, void *priv,
unsigned int i, fi = 0;
int rval;
- if (!asd) {
- dev_err(isp->dev, "%s(): asd is NULL, device is %s\n",
- __func__, vdev->name);
- return -EINVAL;
- }
-
camera = isp->inputs[asd->input_curr].camera;
if (!camera) {
dev_err(isp->dev, "%s(): camera is NULL, device is %s\n",
return -EINVAL;
}
- rt_mutex_lock(&isp->mutex);
-
rval = v4l2_subdev_call(camera, pad, enum_mbus_code, NULL, &code);
if (rval == -ENOIOCTLCMD) {
dev_warn(isp->dev,
"enum_mbus_code pad op not supported by %s. Please fix your sensor driver!\n",
camera->name);
}
- rt_mutex_unlock(&isp->mutex);
if (rval)
return rval;
return -EINVAL;
}
-static int atomisp_g_fmt_file(struct file *file, void *fh,
- struct v4l2_format *f)
-{
- struct video_device *vdev = video_devdata(file);
- struct atomisp_device *isp = video_get_drvdata(vdev);
- struct atomisp_video_pipe *pipe = atomisp_to_video_pipe(vdev);
-
- rt_mutex_lock(&isp->mutex);
- f->fmt.pix = pipe->pix;
- rt_mutex_unlock(&isp->mutex);
-
- return 0;
-}
-
static int atomisp_adjust_fmt(struct v4l2_format *f)
{
const struct atomisp_format_bridge *format_bridge;
struct v4l2_format *f)
{
struct video_device *vdev = video_devdata(file);
- struct atomisp_device *isp = video_get_drvdata(vdev);
int ret;
- rt_mutex_lock(&isp->mutex);
- ret = atomisp_try_fmt(vdev, &f->fmt.pix, NULL);
- rt_mutex_unlock(&isp->mutex);
+ /*
+ * atomisp_try_fmt() gives results with padding included; note that
+ * this gets removed again by the atomisp_adjust_fmt() call below.
+ */
+ f->fmt.pix.width += pad_w;
+ f->fmt.pix.height += pad_h;
+ ret = atomisp_try_fmt(vdev, &f->fmt.pix, NULL);
if (ret)
return ret;
struct v4l2_format *f)
{
struct video_device *vdev = video_devdata(file);
- struct atomisp_device *isp = video_get_drvdata(vdev);
struct atomisp_video_pipe *pipe;
- rt_mutex_lock(&isp->mutex);
pipe = atomisp_to_video_pipe(vdev);
- rt_mutex_unlock(&isp->mutex);
f->fmt.pix = pipe->pix;
return atomisp_try_fmt_cap(file, fh, f);
}
-static int atomisp_s_fmt_cap(struct file *file, void *fh,
- struct v4l2_format *f)
-{
- struct video_device *vdev = video_devdata(file);
- struct atomisp_device *isp = video_get_drvdata(vdev);
- int ret;
-
- rt_mutex_lock(&isp->mutex);
- if (isp->isp_fatal_error) {
- ret = -EIO;
- rt_mutex_unlock(&isp->mutex);
- return ret;
- }
- ret = atomisp_set_fmt(vdev, f);
- rt_mutex_unlock(&isp->mutex);
- return ret;
-}
-
-static int atomisp_s_fmt_file(struct file *file, void *fh,
- struct v4l2_format *f)
-{
- struct video_device *vdev = video_devdata(file);
- struct atomisp_device *isp = video_get_drvdata(vdev);
- int ret;
-
- rt_mutex_lock(&isp->mutex);
- ret = atomisp_set_fmt_file(vdev, f);
- rt_mutex_unlock(&isp->mutex);
- return ret;
-}
-
/*
* Free videobuffer buffer priv data
*/
/*
* Initiate Memory Mapping or User Pointer I/O
*/
-int __atomisp_reqbufs(struct file *file, void *fh,
- struct v4l2_requestbuffers *req)
+int atomisp_reqbufs(struct file *file, void *fh, struct v4l2_requestbuffers *req)
{
struct video_device *vdev = video_devdata(file);
struct atomisp_video_pipe *pipe = atomisp_to_video_pipe(vdev);
struct ia_css_frame *frame;
struct videobuf_vmalloc_memory *vm_mem;
u16 source_pad = atomisp_subdev_source_pad(vdev);
- u16 stream_id;
int ret = 0, i = 0;
- if (!asd) {
- dev_err(pipe->isp->dev, "%s(): asd is NULL, device is %s\n",
- __func__, vdev->name);
- return -EINVAL;
- }
- stream_id = atomisp_source_pad_to_stream_id(asd, source_pad);
-
if (req->count == 0) {
mutex_lock(&pipe->capq.vb_lock);
if (!list_empty(&pipe->capq.stream))
if (ret)
return ret;
- atomisp_alloc_css_stat_bufs(asd, stream_id);
+ atomisp_alloc_css_stat_bufs(asd, ATOMISP_INPUT_STREAM_GENERAL);
/*
* for user pointer type, buffers are not really allocated here,
return -ENOMEM;
}
-int atomisp_reqbufs(struct file *file, void *fh,
- struct v4l2_requestbuffers *req)
-{
- struct video_device *vdev = video_devdata(file);
- struct atomisp_device *isp = video_get_drvdata(vdev);
- int ret;
-
- rt_mutex_lock(&isp->mutex);
- ret = __atomisp_reqbufs(file, fh, req);
- rt_mutex_unlock(&isp->mutex);
-
- return ret;
-}
-
-static int atomisp_reqbufs_file(struct file *file, void *fh,
- struct v4l2_requestbuffers *req)
-{
- struct video_device *vdev = video_devdata(file);
- struct atomisp_video_pipe *pipe = atomisp_to_video_pipe(vdev);
-
- if (req->count == 0) {
- mutex_lock(&pipe->outq.vb_lock);
- atomisp_videobuf_free_queue(&pipe->outq);
- mutex_unlock(&pipe->outq.vb_lock);
- return 0;
- }
-
- return videobuf_reqbufs(&pipe->outq, req);
-}
-
/* application query the status of a buffer */
static int atomisp_querybuf(struct file *file, void *fh,
struct v4l2_buffer *buf)
return videobuf_querybuf(&pipe->capq, buf);
}
-static int atomisp_querybuf_file(struct file *file, void *fh,
- struct v4l2_buffer *buf)
-{
- struct video_device *vdev = video_devdata(file);
- struct atomisp_video_pipe *pipe = atomisp_to_video_pipe(vdev);
-
- return videobuf_querybuf(&pipe->outq, buf);
-}
-
/*
* Applications call the VIDIOC_QBUF ioctl to enqueue an empty (capturing) or
* filled (output) buffer in the drivers incoming queue.
struct ia_css_frame *handle = NULL;
u32 length;
u32 pgnr;
- int ret = 0;
-
- if (!asd) {
- dev_err(isp->dev, "%s(): asd is NULL, device is %s\n",
- __func__, vdev->name);
- return -EINVAL;
- }
-
- rt_mutex_lock(&isp->mutex);
- if (isp->isp_fatal_error) {
- ret = -EIO;
- goto error;
- }
+ int ret;
- if (asd->streaming == ATOMISP_DEVICE_STREAMING_STOPPING) {
- dev_err(isp->dev, "%s: reject, as ISP at stopping.\n",
- __func__);
- ret = -EIO;
- goto error;
- }
+ ret = atomisp_pipe_check(pipe, false);
+ if (ret)
+ return ret;
if (!buf || buf->index >= VIDEO_MAX_FRAME ||
!pipe->capq.bufs[buf->index]) {
dev_err(isp->dev, "Invalid index for qbuf.\n");
- ret = -EINVAL;
- goto error;
+ return -EINVAL;
}
/*
* address and reprogram our page table properly
*/
if (buf->memory == V4L2_MEMORY_USERPTR) {
+ if (offset_in_page(buf->m.userptr)) {
+ dev_err(isp->dev, "Error userptr is not page aligned.\n");
+ return -EINVAL;
+ }
+
vb = pipe->capq.bufs[buf->index];
vm_mem = vb->priv;
- if (!vm_mem) {
- ret = -EINVAL;
- goto error;
- }
+ if (!vm_mem)
+ return -EINVAL;
length = vb->bsize;
pgnr = (length + (PAGE_SIZE - 1)) >> PAGE_SHIFT;
goto done;
if (atomisp_get_css_frame_info(asd,
- atomisp_subdev_source_pad(vdev), &frame_info)) {
- ret = -EIO;
- goto error;
- }
+ atomisp_subdev_source_pad(vdev), &frame_info))
+ return -EIO;
ret = ia_css_frame_map(&handle, &frame_info,
(void __user *)buf->m.userptr,
pgnr);
if (ret) {
dev_err(isp->dev, "Failed to map user buffer\n");
- goto error;
+ return ret;
}
if (vm_mem->vaddr) {
pipe->frame_params[buf->index] = NULL;
- rt_mutex_unlock(&isp->mutex);
-
+ mutex_unlock(&isp->mutex);
ret = videobuf_qbuf(&pipe->capq, buf);
- rt_mutex_lock(&isp->mutex);
+ mutex_lock(&isp->mutex);
if (ret)
- goto error;
+ return ret;
/* TODO: do this better, not best way to queue to css */
if (asd->streaming == ATOMISP_DEVICE_STREAMING_ENABLED) {
atomisp_handle_parameter_and_buffer(pipe);
} else {
atomisp_qbuffers_to_css(asd);
-
- if (!IS_ISP2401) {
- if (!atomisp_is_wdt_running(asd) && atomisp_buffers_queued(asd))
- atomisp_wdt_start(asd);
- } else {
- if (!atomisp_is_wdt_running(pipe) &&
- atomisp_buffers_queued_pipe(pipe))
- atomisp_wdt_start_pipe(pipe);
- }
}
}
asd->pending_capture_request++;
dev_dbg(isp->dev, "Add one pending capture request.\n");
}
- rt_mutex_unlock(&isp->mutex);
dev_dbg(isp->dev, "qbuf buffer %d (%s) for asd%d\n", buf->index,
vdev->name, asd->index);
- return ret;
-
-error:
- rt_mutex_unlock(&isp->mutex);
- return ret;
-}
-
-static int atomisp_qbuf_file(struct file *file, void *fh,
- struct v4l2_buffer *buf)
-{
- struct video_device *vdev = video_devdata(file);
- struct atomisp_device *isp = video_get_drvdata(vdev);
- struct atomisp_video_pipe *pipe = atomisp_to_video_pipe(vdev);
- int ret;
-
- rt_mutex_lock(&isp->mutex);
- if (isp->isp_fatal_error) {
- ret = -EIO;
- goto error;
- }
-
- if (!buf || buf->index >= VIDEO_MAX_FRAME ||
- !pipe->outq.bufs[buf->index]) {
- dev_err(isp->dev, "Invalid index for qbuf.\n");
- ret = -EINVAL;
- goto error;
- }
-
- if (buf->memory != V4L2_MEMORY_MMAP) {
- dev_err(isp->dev, "Unsupported memory method\n");
- ret = -EINVAL;
- goto error;
- }
-
- if (buf->type != V4L2_BUF_TYPE_VIDEO_OUTPUT) {
- dev_err(isp->dev, "Unsupported buffer type\n");
- ret = -EINVAL;
- goto error;
- }
- rt_mutex_unlock(&isp->mutex);
-
- return videobuf_qbuf(&pipe->outq, buf);
-
-error:
- rt_mutex_unlock(&isp->mutex);
-
- return ret;
+ return 0;
}
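Two details of the reworked qbuf path are worth calling out. The handler
drops isp->mutex around videobuf_qbuf(), since videobuf takes its own
vb_lock and may sleep. And the new offset_in_page() check means USERPTR
buffers must start on a page boundary before they can be mapped into the
ISP's page table. A minimal user-space sketch, assuming an already-open
video fd and a known image_size::

    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/videodev2.h>

    struct v4l2_buffer buf = {
            .type   = V4L2_BUF_TYPE_VIDEO_CAPTURE,
            .memory = V4L2_MEMORY_USERPTR,
            .index  = 0,
    };
    void *mem;

    /* page-aligned allocation; unaligned userptr now fails with -EINVAL */
    if (posix_memalign(&mem, sysconf(_SC_PAGESIZE), image_size))
            return -1;
    buf.m.userptr = (unsigned long)mem;
    buf.length    = image_size;
    if (ioctl(fd, VIDIOC_QBUF, &buf))
            return -1;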
static int __get_frame_exp_id(struct atomisp_video_pipe *pipe,
struct atomisp_video_pipe *pipe = atomisp_to_video_pipe(vdev);
struct atomisp_sub_device *asd = pipe->asd;
struct atomisp_device *isp = video_get_drvdata(vdev);
- int ret = 0;
-
- if (!asd) {
- dev_err(isp->dev, "%s(): asd is NULL, device is %s\n",
- __func__, vdev->name);
- return -EINVAL;
- }
-
- rt_mutex_lock(&isp->mutex);
-
- if (isp->isp_fatal_error) {
- rt_mutex_unlock(&isp->mutex);
- return -EIO;
- }
-
- if (asd->streaming == ATOMISP_DEVICE_STREAMING_STOPPING) {
- rt_mutex_unlock(&isp->mutex);
- dev_err(isp->dev, "%s: reject, as ISP at stopping.\n",
- __func__);
- return -EIO;
- }
+ int ret;
- rt_mutex_unlock(&isp->mutex);
+ ret = atomisp_pipe_check(pipe, false);
+ if (ret)
+ return ret;
+ mutex_unlock(&isp->mutex);
ret = videobuf_dqbuf(&pipe->capq, buf, file->f_flags & O_NONBLOCK);
+ mutex_lock(&isp->mutex);
if (ret) {
if (ret != -EAGAIN)
dev_dbg(isp->dev, "<%s: %d\n", __func__, ret);
return ret;
}
- rt_mutex_lock(&isp->mutex);
+
buf->bytesused = pipe->pix.sizeimage;
buf->reserved = asd->frame_status[buf->index];
if (!(buf->flags & V4L2_BUF_FLAG_ERROR))
buf->reserved |= __get_frame_exp_id(pipe, buf) << 16;
buf->reserved2 = pipe->frame_config_id[buf->index];
- rt_mutex_unlock(&isp->mutex);
dev_dbg(isp->dev,
"dqbuf buffer %d (%s) for asd%d with exp_id %d, isp_config_id %d\n",
static unsigned int atomisp_sensor_start_stream(struct atomisp_sub_device *asd)
{
- struct atomisp_device *isp = asd->isp;
-
- if (isp->inputs[asd->input_curr].camera_caps->
- sensor[asd->sensor_curr].stream_num > 1) {
- if (asd->high_speed_mode)
- return 1;
- else
- return 2;
- }
-
if (asd->vfpp->val != ATOMISP_VFPP_ENABLE ||
asd->copy_mode)
return 1;
int atomisp_stream_on_master_slave_sensor(struct atomisp_device *isp,
bool isp_timeout)
{
- unsigned int master = -1, slave = -1, delay_slave = 0;
- int i, ret;
-
- /*
- * ISP only support 2 streams now so ignore multiple master/slave
- * case to reduce the delay between 2 stream_on calls.
- */
- for (i = 0; i < isp->num_of_streams; i++) {
- int sensor_index = isp->asd[i].input_curr;
-
- if (isp->inputs[sensor_index].camera_caps->
- sensor[isp->asd[i].sensor_curr].is_slave)
- slave = sensor_index;
- else
- master = sensor_index;
- }
+ unsigned int master, slave, delay_slave = 0;
+ int ret;
- if (master == -1 || slave == -1) {
- master = ATOMISP_DEPTH_DEFAULT_MASTER_SENSOR;
- slave = ATOMISP_DEPTH_DEFAULT_SLAVE_SENSOR;
- dev_warn(isp->dev,
- "depth mode use default master=%s.slave=%s.\n",
- isp->inputs[master].camera->name,
- isp->inputs[slave].camera->name);
- }
+ master = ATOMISP_DEPTH_DEFAULT_MASTER_SENSOR;
+ slave = ATOMISP_DEPTH_DEFAULT_SLAVE_SENSOR;
+ dev_warn(isp->dev,
+ "depth mode use default master=%s.slave=%s.\n",
+ isp->inputs[master].camera->name,
+ isp->inputs[slave].camera->name);
ret = v4l2_subdev_call(isp->inputs[master].camera, core,
ioctl, ATOMISP_IOC_G_DEPTH_SYNC_COMP,
return 0;
}
-/* FIXME! ISP2400 */
-static void __wdt_on_master_slave_sensor(struct atomisp_device *isp,
- unsigned int wdt_duration)
-{
- if (atomisp_buffers_queued(&isp->asd[0]))
- atomisp_wdt_refresh(&isp->asd[0], wdt_duration);
- if (atomisp_buffers_queued(&isp->asd[1]))
- atomisp_wdt_refresh(&isp->asd[1], wdt_duration);
-}
-
-/* FIXME! ISP2401 */
-static void __wdt_on_master_slave_sensor_pipe(struct atomisp_video_pipe *pipe,
- unsigned int wdt_duration,
- bool enable)
-{
- static struct atomisp_video_pipe *pipe0;
-
- if (enable) {
- if (atomisp_buffers_queued_pipe(pipe0))
- atomisp_wdt_refresh_pipe(pipe0, wdt_duration);
- if (atomisp_buffers_queued_pipe(pipe))
- atomisp_wdt_refresh_pipe(pipe, wdt_duration);
- } else {
- pipe0 = pipe;
- }
-}
-
-static void atomisp_pause_buffer_event(struct atomisp_device *isp)
-{
- struct v4l2_event event = {0};
- int i;
-
- event.type = V4L2_EVENT_ATOMISP_PAUSE_BUFFER;
-
- for (i = 0; i < isp->num_of_streams; i++) {
- int sensor_index = isp->asd[i].input_curr;
-
- if (isp->inputs[sensor_index].camera_caps->
- sensor[isp->asd[i].sensor_curr].is_slave) {
- v4l2_event_queue(isp->asd[i].subdev.devnode, &event);
- break;
- }
- }
-}
-
/* Input system HW workaround */
/* Input system address translation corrupts burst during */
/* invalidate. SW workaround for this is to set burst length */
struct pci_dev *pdev = to_pci_dev(isp->dev);
enum ia_css_pipe_id css_pipe_id;
unsigned int sensor_start_stream;
- unsigned int wdt_duration = ATOMISP_ISP_TIMEOUT_DURATION;
- int ret = 0;
unsigned long irqflags;
-
- if (!asd) {
- dev_err(isp->dev, "%s(): asd is NULL, device is %s\n",
- __func__, vdev->name);
- return -EINVAL;
- }
+ int ret;
dev_dbg(isp->dev, "Start stream on pad %d for asd%d\n",
atomisp_subdev_source_pad(vdev), asd->index);
return -EINVAL;
}
- rt_mutex_lock(&isp->mutex);
- if (isp->isp_fatal_error) {
- ret = -EIO;
- goto out;
- }
-
- if (asd->streaming == ATOMISP_DEVICE_STREAMING_STOPPING) {
- ret = -EBUSY;
- goto out;
- }
+ ret = atomisp_pipe_check(pipe, false);
+ if (ret)
+ return ret;
if (pipe->capq.streaming)
- goto out;
+ return 0;
/* Input system HW workaround */
atomisp_dma_burst_len_cfg(asd);
if (list_empty(&pipe->capq.stream)) {
spin_unlock_irqrestore(&pipe->irq_lock, irqflags);
dev_dbg(isp->dev, "no buffer in the queue\n");
- ret = -EINVAL;
- goto out;
+ return -EINVAL;
}
spin_unlock_irqrestore(&pipe->irq_lock, irqflags);
ret = videobuf_streamon(&pipe->capq);
if (ret)
- goto out;
+ return ret;
/* Reset pending capture request count. */
asd->pending_capture_request = 0;
- if ((atomisp_subdev_streaming_count(asd) > sensor_start_stream) &&
- (!isp->inputs[asd->input_curr].camera_caps->multi_stream_ctrl)) {
+ if (atomisp_subdev_streaming_count(asd) > sensor_start_stream) {
/* trigger still capture */
if (asd->continuous_mode->val &&
atomisp_subdev_source_pad(vdev)
if (asd->delayed_init == ATOMISP_DELAYED_INIT_QUEUED) {
flush_work(&asd->delayed_init_work);
- rt_mutex_unlock(&isp->mutex);
- if (wait_for_completion_interruptible(
- &asd->init_done) != 0)
+ mutex_unlock(&isp->mutex);
+ ret = wait_for_completion_interruptible(&asd->init_done);
+ mutex_lock(&isp->mutex);
+ if (ret != 0)
return -ERESTARTSYS;
- rt_mutex_lock(&isp->mutex);
}
/* handle per_frame_setting parameter and buffers */
asd->params.offline_parm.num_captures,
asd->params.offline_parm.skip_frames,
asd->params.offline_parm.offset);
- if (ret) {
- ret = -EINVAL;
- goto out;
- }
- if (asd->depth_mode->val)
- atomisp_pause_buffer_event(isp);
+ if (ret)
+ return -EINVAL;
}
}
atomisp_qbuffers_to_css(asd);
- goto out;
+ return 0;
}
if (asd->streaming == ATOMISP_DEVICE_STREAMING_ENABLED) {
ret = atomisp_css_start(asd, css_pipe_id, false);
if (ret)
- goto out;
+ return ret;
+ spin_lock_irqsave(&isp->lock, irqflags);
asd->streaming = ATOMISP_DEVICE_STREAMING_ENABLED;
+ spin_unlock_irqrestore(&isp->lock, irqflags);
atomic_set(&asd->sof_count, -1);
atomic_set(&asd->sequence, -1);
atomic_set(&asd->sequence_temp, -1);
- if (isp->sw_contex.file_input)
- wdt_duration = ATOMISP_ISP_FILE_TIMEOUT_DURATION;
asd->params.dis_proj_data_valid = false;
asd->latest_preview_exp_id = 0;
/* Only start sensor when the last streaming instance started */
if (atomisp_subdev_streaming_count(asd) < sensor_start_stream)
- goto out;
+ return 0;
start_sensor:
if (isp->flash) {
atomisp_setup_flash(asd);
}
- if (!isp->sw_contex.file_input) {
- atomisp_css_irq_enable(isp, IA_CSS_IRQ_INFO_CSS_RECEIVER_SOF,
- atomisp_css_valid_sof(isp));
- atomisp_csi2_configure(asd);
- /*
- * set freq to max when streaming count > 1 which indicate
- * dual camera would run
- */
- if (atomisp_streaming_count(isp) > 1) {
- if (atomisp_freq_scaling(isp,
- ATOMISP_DFS_MODE_MAX, false) < 0)
- dev_dbg(isp->dev, "DFS max mode failed!\n");
- } else {
- if (atomisp_freq_scaling(isp,
- ATOMISP_DFS_MODE_AUTO, false) < 0)
- dev_dbg(isp->dev, "DFS auto mode failed!\n");
- }
- } else {
- if (atomisp_freq_scaling(isp, ATOMISP_DFS_MODE_MAX, false) < 0)
+ atomisp_css_irq_enable(isp, IA_CSS_IRQ_INFO_CSS_RECEIVER_SOF,
+ atomisp_css_valid_sof(isp));
+ atomisp_csi2_configure(asd);
+ /*
+ * set freq to max when streaming count > 1, which indicates
+ * that dual cameras are running
+ */
+ if (atomisp_streaming_count(isp) > 1) {
+ if (atomisp_freq_scaling(isp,
+ ATOMISP_DFS_MODE_MAX, false) < 0)
dev_dbg(isp->dev, "DFS max mode failed!\n");
+ } else {
+ if (atomisp_freq_scaling(isp,
+ ATOMISP_DFS_MODE_AUTO, false) < 0)
+ dev_dbg(isp->dev, "DFS auto mode failed!\n");
}
if (asd->depth_mode->val && atomisp_streaming_count(isp) ==
ret = atomisp_stream_on_master_slave_sensor(isp, false);
if (ret) {
dev_err(isp->dev, "master slave sensor stream on failed!\n");
- goto out;
+ return ret;
}
- if (!IS_ISP2401)
- __wdt_on_master_slave_sensor(isp, wdt_duration);
- else
- __wdt_on_master_slave_sensor_pipe(pipe, wdt_duration, true);
goto start_delay_wq;
} else if (asd->depth_mode->val && (atomisp_streaming_count(isp) <
ATOMISP_DEPTH_SENSOR_STREAMON_COUNT)) {
- if (IS_ISP2401)
- __wdt_on_master_slave_sensor_pipe(pipe, wdt_duration, false);
goto start_delay_wq;
}
ret = v4l2_subdev_call(isp->inputs[asd->input_curr].camera,
video, s_stream, 1);
if (ret) {
+ spin_lock_irqsave(&isp->lock, irqflags);
asd->streaming = ATOMISP_DEVICE_STREAMING_DISABLED;
- ret = -EINVAL;
- goto out;
- }
-
- if (!IS_ISP2401) {
- if (atomisp_buffers_queued(asd))
- atomisp_wdt_refresh(asd, wdt_duration);
- } else {
- if (atomisp_buffers_queued_pipe(pipe))
- atomisp_wdt_refresh_pipe(pipe, wdt_duration);
+ spin_unlock_irqrestore(&isp->lock, irqflags);
+ return -EINVAL;
}
start_delay_wq:
if (asd->continuous_mode->val) {
- struct v4l2_mbus_framefmt *sink;
-
- sink = atomisp_subdev_get_ffmt(&asd->subdev, NULL,
- V4L2_SUBDEV_FORMAT_ACTIVE,
- ATOMISP_SUBDEV_PAD_SINK);
+ atomisp_subdev_get_ffmt(&asd->subdev, NULL,
+ V4L2_SUBDEV_FORMAT_ACTIVE,
+ ATOMISP_SUBDEV_PAD_SINK);
reinit_completion(&asd->init_done);
asd->delayed_init = ATOMISP_DELAYED_INIT_QUEUED;
queue_work(asd->delayed_init_workq, &asd->delayed_init_work);
- atomisp_css_set_cont_prev_start_time(isp,
- ATOMISP_CALC_CSS_PREV_OVERLAP(sink->height));
} else {
asd->delayed_init = ATOMISP_DELAYED_INIT_NOT_QUEUED;
}
-out:
- rt_mutex_unlock(&isp->mutex);
- return ret;
+
+ return 0;
}
-int __atomisp_streamoff(struct file *file, void *fh, enum v4l2_buf_type type)
+int atomisp_streamoff(struct file *file, void *fh, enum v4l2_buf_type type)
{
struct video_device *vdev = video_devdata(file);
struct atomisp_device *isp = video_get_drvdata(vdev);
unsigned long flags;
bool first_streamoff = false;
- if (!asd) {
- dev_err(isp->dev, "%s(): asd is NULL, device is %s\n",
- __func__, vdev->name);
- return -EINVAL;
- }
-
dev_dbg(isp->dev, "Stop stream on pad %d for asd%d\n",
atomisp_subdev_source_pad(vdev), asd->index);
lockdep_assert_held(&isp->mutex);
- lockdep_assert_held(&isp->streamoff_mutex);
if (type != V4L2_BUF_TYPE_VIDEO_CAPTURE) {
dev_dbg(isp->dev, "unsupported v4l2 buf type\n");
* do only videobuf_streamoff for capture & vf pipes in
* case of continuous capture
*/
- if ((asd->continuous_mode->val ||
- isp->inputs[asd->input_curr].camera_caps->multi_stream_ctrl) &&
- atomisp_subdev_source_pad(vdev) !=
- ATOMISP_SUBDEV_PAD_SOURCE_PREVIEW &&
- atomisp_subdev_source_pad(vdev) !=
- ATOMISP_SUBDEV_PAD_SOURCE_VIDEO) {
- if (isp->inputs[asd->input_curr].camera_caps->multi_stream_ctrl) {
- v4l2_subdev_call(isp->inputs[asd->input_curr].camera,
- video, s_stream, 0);
- } else if (atomisp_subdev_source_pad(vdev)
- == ATOMISP_SUBDEV_PAD_SOURCE_CAPTURE) {
+ if (asd->continuous_mode->val &&
+ atomisp_subdev_source_pad(vdev) != ATOMISP_SUBDEV_PAD_SOURCE_PREVIEW &&
+ atomisp_subdev_source_pad(vdev) != ATOMISP_SUBDEV_PAD_SOURCE_VIDEO) {
+ if (atomisp_subdev_source_pad(vdev) == ATOMISP_SUBDEV_PAD_SOURCE_CAPTURE) {
/* stop continuous still capture if needed */
if (asd->params.offline_parm.num_captures == -1)
atomisp_css_offline_capture_configure(asd,
if (!pipe->capq.streaming)
return 0;
- spin_lock_irqsave(&isp->lock, flags);
- if (asd->streaming == ATOMISP_DEVICE_STREAMING_ENABLED) {
- asd->streaming = ATOMISP_DEVICE_STREAMING_STOPPING;
+ if (asd->streaming == ATOMISP_DEVICE_STREAMING_ENABLED)
first_streamoff = true;
- }
- spin_unlock_irqrestore(&isp->lock, flags);
-
- if (first_streamoff) {
- /* if other streams are running, should not disable watch dog */
- rt_mutex_unlock(&isp->mutex);
- atomisp_wdt_stop(asd, true);
-
- /*
- * must stop sending pixels into GP_FIFO before stop
- * the pipeline.
- */
- if (isp->sw_contex.file_input)
- v4l2_subdev_call(isp->inputs[asd->input_curr].camera,
- video, s_stream, 0);
-
- rt_mutex_lock(&isp->mutex);
- }
spin_lock_irqsave(&isp->lock, flags);
if (atomisp_subdev_streaming_count(asd) == 1)
asd->streaming = ATOMISP_DEVICE_STREAMING_DISABLED;
+ else
+ asd->streaming = ATOMISP_DEVICE_STREAMING_STOPPING;
spin_unlock_irqrestore(&isp->lock, flags);
if (!first_streamoff) {
}
atomisp_clear_css_buffer_counters(asd);
-
- if (!isp->sw_contex.file_input)
- atomisp_css_irq_enable(isp, IA_CSS_IRQ_INFO_CSS_RECEIVER_SOF,
- false);
+ atomisp_css_irq_enable(isp, IA_CSS_IRQ_INFO_CSS_RECEIVER_SOF, false);
if (asd->delayed_init == ATOMISP_DELAYED_INIT_QUEUED) {
cancel_work_sync(&asd->delayed_init_work);
asd->delayed_init = ATOMISP_DELAYED_INIT_NOT_QUEUED;
}
- if (first_streamoff) {
- css_pipe_id = atomisp_get_css_pipe_id(asd);
- atomisp_css_stop(asd, css_pipe_id, false);
- }
+
+ css_pipe_id = atomisp_get_css_pipe_id(asd);
+ atomisp_css_stop(asd, css_pipe_id, false);
+
/* cancel work queue*/
if (asd->video_out_capture.users) {
capture_pipe = &asd->video_out_capture;
!= atomisp_sensor_start_stream(asd))
return 0;
- if (!isp->sw_contex.file_input)
- ret = v4l2_subdev_call(isp->inputs[asd->input_curr].camera,
- video, s_stream, 0);
+ ret = v4l2_subdev_call(isp->inputs[asd->input_curr].camera,
+ video, s_stream, 0);
if (isp->flash) {
asd->params.num_flash_frames = 0;
return ret;
}
-static int atomisp_streamoff(struct file *file, void *fh,
- enum v4l2_buf_type type)
-{
- struct video_device *vdev = video_devdata(file);
- struct atomisp_device *isp = video_get_drvdata(vdev);
- int rval;
-
- mutex_lock(&isp->streamoff_mutex);
- rt_mutex_lock(&isp->mutex);
- rval = __atomisp_streamoff(file, fh, type);
- rt_mutex_unlock(&isp->mutex);
- mutex_unlock(&isp->streamoff_mutex);
-
- return rval;
-}
-
/*
* To get the current value of a control.
* applications initialize the id field of a struct v4l2_control and
struct atomisp_device *isp = video_get_drvdata(vdev);
int i, ret = -EINVAL;
- if (!asd) {
- dev_err(isp->dev, "%s(): asd is NULL, device is %s\n",
- __func__, vdev->name);
- return -EINVAL;
- }
-
for (i = 0; i < ctrls_num; i++) {
if (ci_v4l2_controls[i].id == control->id) {
ret = 0;
if (ret)
return ret;
- rt_mutex_lock(&isp->mutex);
-
switch (control->id) {
case V4L2_CID_IRIS_ABSOLUTE:
case V4L2_CID_EXPOSURE_ABSOLUTE:
case V4L2_CID_TEST_PATTERN_COLOR_GR:
case V4L2_CID_TEST_PATTERN_COLOR_GB:
case V4L2_CID_TEST_PATTERN_COLOR_B:
- rt_mutex_unlock(&isp->mutex);
return v4l2_g_ctrl(isp->inputs[asd->input_curr].camera->
ctrl_handler, control);
case V4L2_CID_COLORFX:
break;
}
- rt_mutex_unlock(&isp->mutex);
return ret;
}
struct atomisp_device *isp = video_get_drvdata(vdev);
int i, ret = -EINVAL;
- if (!asd) {
- dev_err(isp->dev, "%s(): asd is NULL, device is %s\n",
- __func__, vdev->name);
- return -EINVAL;
- }
-
for (i = 0; i < ctrls_num; i++) {
if (ci_v4l2_controls[i].id == control->id) {
ret = 0;
if (ret)
return ret;
- rt_mutex_lock(&isp->mutex);
switch (control->id) {
case V4L2_CID_AUTO_N_PRESET_WHITE_BALANCE:
case V4L2_CID_EXPOSURE:
case V4L2_CID_TEST_PATTERN_COLOR_GR:
case V4L2_CID_TEST_PATTERN_COLOR_GB:
case V4L2_CID_TEST_PATTERN_COLOR_B:
- rt_mutex_unlock(&isp->mutex);
return v4l2_s_ctrl(NULL,
isp->inputs[asd->input_curr].camera->
ctrl_handler, control);
ret = -EINVAL;
break;
}
- rt_mutex_unlock(&isp->mutex);
return ret;
}
struct atomisp_sub_device *asd = atomisp_to_video_pipe(vdev)->asd;
struct atomisp_device *isp = video_get_drvdata(vdev);
- if (!asd) {
- dev_err(isp->dev, "%s(): asd is NULL, device is %s\n",
- __func__, vdev->name);
- return -EINVAL;
- }
-
switch (qc->id) {
case V4L2_CID_FOCUS_ABSOLUTE:
case V4L2_CID_FOCUS_RELATIVE:
int i;
int ret = 0;
- if (!asd) {
- dev_err(isp->dev, "%s(): asd is NULL, device is %s\n",
- __func__, vdev->name);
- return -EINVAL;
- }
-
if (!IS_ISP2401)
motor = isp->inputs[asd->input_curr].motor;
else
&ctrl);
break;
case V4L2_CID_ZOOM_ABSOLUTE:
- rt_mutex_lock(&isp->mutex);
ret = atomisp_digital_zoom(asd, 0, &ctrl.value);
- rt_mutex_unlock(&isp->mutex);
break;
case V4L2_CID_G_SKIP_FRAMES:
ret = v4l2_subdev_call(
int i;
int ret = 0;
- if (!asd) {
- dev_err(isp->dev, "%s(): asd is NULL, device is %s\n",
- __func__, vdev->name);
- return -EINVAL;
- }
-
if (!IS_ISP2401)
motor = isp->inputs[asd->input_curr].motor;
else
case V4L2_CID_FLASH_STROBE:
case V4L2_CID_FLASH_MODE:
case V4L2_CID_FLASH_STATUS_REGISTER:
- rt_mutex_lock(&isp->mutex);
if (isp->flash) {
ret =
v4l2_s_ctrl(NULL, isp->flash->ctrl_handler,
asd->params.num_flash_frames = 0;
}
}
- rt_mutex_unlock(&isp->mutex);
break;
case V4L2_CID_ZOOM_ABSOLUTE:
- rt_mutex_lock(&isp->mutex);
ret = atomisp_digital_zoom(asd, 1, &ctrl.value);
- rt_mutex_unlock(&isp->mutex);
break;
default:
ctr = v4l2_ctrl_find(&asd->ctrl_handler, ctrl.id);
struct atomisp_sub_device *asd = atomisp_to_video_pipe(vdev)->asd;
struct atomisp_device *isp = video_get_drvdata(vdev);
- if (!asd) {
- dev_err(isp->dev, "%s(): asd is NULL, device is %s\n",
- __func__, vdev->name);
- return -EINVAL;
- }
-
if (parm->type != V4L2_BUF_TYPE_VIDEO_CAPTURE) {
dev_err(isp->dev, "unsupported v4l2 buf type\n");
return -EINVAL;
}
- rt_mutex_lock(&isp->mutex);
parm->parm.capture.capturemode = asd->run_mode->val;
- rt_mutex_unlock(&isp->mutex);
return 0;
}
int rval;
int fps;
- if (!asd) {
- dev_err(isp->dev, "%s(): asd is NULL, device is %s\n",
- __func__, vdev->name);
- return -EINVAL;
- }
-
if (parm->type != V4L2_BUF_TYPE_VIDEO_CAPTURE) {
dev_err(isp->dev, "unsupported v4l2 buf type\n");
return -EINVAL;
}
- rt_mutex_lock(&isp->mutex);
-
asd->high_speed_mode = false;
switch (parm->parm.capture.capturemode) {
case CI_MODE_NONE: {
asd->high_speed_mode = true;
}
- goto out;
+ return rval == -ENOIOCTLCMD ? 0 : rval;
}
case CI_MODE_VIDEO:
mode = ATOMISP_RUN_MODE_VIDEO;
mode = ATOMISP_RUN_MODE_PREVIEW;
break;
default:
- rval = -EINVAL;
- goto out;
+ return -EINVAL;
}
rval = v4l2_ctrl_s_ctrl(asd->run_mode, mode);
-out:
- rt_mutex_unlock(&isp->mutex);
-
return rval == -ENOIOCTLCMD ? 0 : rval;
}
-static int atomisp_s_parm_file(struct file *file, void *fh,
- struct v4l2_streamparm *parm)
-{
- struct video_device *vdev = video_devdata(file);
- struct atomisp_device *isp = video_get_drvdata(vdev);
-
- if (parm->type != V4L2_BUF_TYPE_VIDEO_OUTPUT) {
- dev_err(isp->dev, "unsupported v4l2 buf type for output\n");
- return -EINVAL;
- }
-
- rt_mutex_lock(&isp->mutex);
- isp->sw_contex.file_input = true;
- rt_mutex_unlock(&isp->mutex);
-
- return 0;
-}
-
static long atomisp_vidioc_default(struct file *file, void *fh,
bool valid_prio, unsigned int cmd, void *arg)
{
struct video_device *vdev = video_devdata(file);
struct atomisp_device *isp = video_get_drvdata(vdev);
- struct atomisp_sub_device *asd;
+ struct atomisp_sub_device *asd = atomisp_to_video_pipe(vdev)->asd;
struct v4l2_subdev *motor;
- bool acc_node;
int err;
- acc_node = !strcmp(vdev->name, "ATOMISP ISP ACC");
- if (acc_node)
- asd = atomisp_to_acc_pipe(vdev)->asd;
- else
- asd = atomisp_to_video_pipe(vdev)->asd;
-
if (!IS_ISP2401)
motor = isp->inputs[asd->input_curr].motor;
else
motor = isp->motor;
- switch (cmd) {
- case ATOMISP_IOC_G_MOTOR_PRIV_INT_DATA:
- case ATOMISP_IOC_S_EXPOSURE:
- case ATOMISP_IOC_G_SENSOR_CALIBRATION_GROUP:
- case ATOMISP_IOC_G_SENSOR_PRIV_INT_DATA:
- case ATOMISP_IOC_EXT_ISP_CTRL:
- case ATOMISP_IOC_G_SENSOR_AE_BRACKETING_INFO:
- case ATOMISP_IOC_S_SENSOR_AE_BRACKETING_MODE:
- case ATOMISP_IOC_G_SENSOR_AE_BRACKETING_MODE:
- case ATOMISP_IOC_S_SENSOR_AE_BRACKETING_LUT:
- case ATOMISP_IOC_S_SENSOR_EE_CONFIG:
- case ATOMISP_IOC_G_UPDATE_EXPOSURE:
- /* we do not need take isp->mutex for these IOCTLs */
- break;
- default:
- rt_mutex_lock(&isp->mutex);
- break;
- }
switch (cmd) {
case ATOMISP_IOC_S_SENSOR_RUNMODE:
if (IS_ISP2401)
break;
}
- switch (cmd) {
- case ATOMISP_IOC_G_MOTOR_PRIV_INT_DATA:
- case ATOMISP_IOC_S_EXPOSURE:
- case ATOMISP_IOC_G_SENSOR_CALIBRATION_GROUP:
- case ATOMISP_IOC_G_SENSOR_PRIV_INT_DATA:
- case ATOMISP_IOC_EXT_ISP_CTRL:
- case ATOMISP_IOC_G_SENSOR_AE_BRACKETING_INFO:
- case ATOMISP_IOC_S_SENSOR_AE_BRACKETING_MODE:
- case ATOMISP_IOC_G_SENSOR_AE_BRACKETING_MODE:
- case ATOMISP_IOC_S_SENSOR_AE_BRACKETING_LUT:
- case ATOMISP_IOC_G_UPDATE_EXPOSURE:
- break;
- default:
- rt_mutex_unlock(&isp->mutex);
- break;
- }
return err;
}
.vidioc_enum_fmt_vid_cap = atomisp_enum_fmt_cap,
.vidioc_try_fmt_vid_cap = atomisp_try_fmt_cap,
.vidioc_g_fmt_vid_cap = atomisp_g_fmt_cap,
- .vidioc_s_fmt_vid_cap = atomisp_s_fmt_cap,
+ .vidioc_s_fmt_vid_cap = atomisp_set_fmt,
.vidioc_reqbufs = atomisp_reqbufs,
.vidioc_querybuf = atomisp_querybuf,
.vidioc_qbuf = atomisp_qbuf,
.vidioc_s_parm = atomisp_s_parm,
.vidioc_g_parm = atomisp_g_parm,
};
-
-const struct v4l2_ioctl_ops atomisp_file_ioctl_ops = {
- .vidioc_querycap = atomisp_querycap,
- .vidioc_g_fmt_vid_out = atomisp_g_fmt_file,
- .vidioc_s_fmt_vid_out = atomisp_s_fmt_file,
- .vidioc_s_parm = atomisp_s_parm_file,
- .vidioc_reqbufs = atomisp_reqbufs_file,
- .vidioc_querybuf = atomisp_querybuf_file,
- .vidioc_qbuf = atomisp_qbuf_file,
-};
const struct
atomisp_format_bridge *atomisp_get_format_bridge_from_mbus(u32 mbus_code);
+int atomisp_pipe_check(struct atomisp_video_pipe *pipe, bool streaming_ok);
+
int atomisp_alloc_css_stat_bufs(struct atomisp_sub_device *asd,
uint16_t stream_id);
-int __atomisp_streamoff(struct file *file, void *fh, enum v4l2_buf_type type);
-int __atomisp_reqbufs(struct file *file, void *fh,
- struct v4l2_requestbuffers *req);
-
-int atomisp_reqbufs(struct file *file, void *fh,
- struct v4l2_requestbuffers *req);
+int atomisp_streamoff(struct file *file, void *fh, enum v4l2_buf_type type);
+int atomisp_reqbufs(struct file *file, void *fh, struct v4l2_requestbuffers *req);
enum ia_css_pipe_id atomisp_get_css_pipe_id(struct atomisp_sub_device
*asd);
void atomisp_videobuf_free_buf(struct videobuf_buffer *vb);
-extern const struct v4l2_file_operations atomisp_file_fops;
-
extern const struct v4l2_ioctl_ops atomisp_ioctl_ops;
-extern const struct v4l2_ioctl_ops atomisp_file_ioctl_ops;
-
unsigned int atomisp_streaming_count(struct atomisp_device *isp);
/* compat_ioctl for 32bit userland app and 64bit kernel */
struct atomisp_sub_device *isp_sd = v4l2_get_subdevdata(sd);
struct atomisp_device *isp = isp_sd->isp;
struct v4l2_mbus_framefmt *ffmt[ATOMISP_SUBDEV_PADS_NUM];
- u16 vdev_pad = atomisp_subdev_source_pad(sd->devnode);
struct v4l2_rect *crop[ATOMISP_SUBDEV_PADS_NUM],
*comp[ATOMISP_SUBDEV_PADS_NUM];
- enum atomisp_input_stream_id stream_id;
unsigned int i;
unsigned int padding_w = pad_w;
unsigned int padding_h = pad_h;
- stream_id = atomisp_source_pad_to_stream_id(isp_sd, vdev_pad);
-
isp_get_fmt_rect(sd, sd_state, which, ffmt, crop, comp);
dev_dbg(isp->dev,
dvs_w = dvs_h = 0;
}
atomisp_css_video_set_dis_envelope(isp_sd, dvs_w, dvs_h);
- atomisp_css_input_set_effective_resolution(isp_sd, stream_id,
- crop[pad]->width, crop[pad]->height);
-
+ atomisp_css_input_set_effective_resolution(isp_sd,
+ ATOMISP_INPUT_STREAM_GENERAL,
+ crop[pad]->width,
+ crop[pad]->height);
break;
}
case ATOMISP_SUBDEV_PAD_SOURCE_CAPTURE:
if (r->width * crop[ATOMISP_SUBDEV_PAD_SINK]->height <
crop[ATOMISP_SUBDEV_PAD_SINK]->width * r->height)
atomisp_css_input_set_effective_resolution(isp_sd,
- stream_id,
+ ATOMISP_INPUT_STREAM_GENERAL,
rounddown(crop[ATOMISP_SUBDEV_PAD_SINK]->
height * r->width / r->height,
ATOM_ISP_STEP_WIDTH),
crop[ATOMISP_SUBDEV_PAD_SINK]->height);
else
atomisp_css_input_set_effective_resolution(isp_sd,
- stream_id,
+ ATOMISP_INPUT_STREAM_GENERAL,
crop[ATOMISP_SUBDEV_PAD_SINK]->width,
rounddown(crop[ATOMISP_SUBDEV_PAD_SINK]->
width * r->height / r->width,
struct atomisp_device *isp = isp_sd->isp;
struct v4l2_mbus_framefmt *__ffmt =
atomisp_subdev_get_ffmt(sd, sd_state, which, pad);
- u16 vdev_pad = atomisp_subdev_source_pad(sd->devnode);
- enum atomisp_input_stream_id stream_id;
dev_dbg(isp->dev, "ffmt: pad %s w %d h %d code 0x%8.8x which %s\n",
atomisp_pad_str(pad), ffmt->width, ffmt->height, ffmt->code,
which == V4L2_SUBDEV_FORMAT_TRY ? "V4L2_SUBDEV_FORMAT_TRY"
: "V4L2_SUBDEV_FORMAT_ACTIVE");
- stream_id = atomisp_source_pad_to_stream_id(isp_sd, vdev_pad);
-
switch (pad) {
case ATOMISP_SUBDEV_PAD_SINK: {
const struct atomisp_in_fmt_conv *fc =
if (which == V4L2_SUBDEV_FORMAT_ACTIVE) {
atomisp_css_input_set_resolution(isp_sd,
- stream_id, ffmt);
+ ATOMISP_INPUT_STREAM_GENERAL, ffmt);
atomisp_css_input_set_binning_factor(isp_sd,
- stream_id,
+ ATOMISP_INPUT_STREAM_GENERAL,
atomisp_get_sensor_bin_factor(isp_sd));
- atomisp_css_input_set_bayer_order(isp_sd, stream_id,
+ atomisp_css_input_set_bayer_order(isp_sd, ATOMISP_INPUT_STREAM_GENERAL,
fc->bayer_order);
- atomisp_css_input_set_format(isp_sd, stream_id,
+ atomisp_css_input_set_format(isp_sd, ATOMISP_INPUT_STREAM_GENERAL,
fc->atomisp_in_fmt);
- atomisp_css_set_default_isys_config(isp_sd, stream_id,
+ atomisp_css_set_default_isys_config(isp_sd, ATOMISP_INPUT_STREAM_GENERAL,
ffmt);
}
{
struct atomisp_sub_device *asd = container_of(
ctrl->handler, struct atomisp_sub_device, ctrl_handler);
+ unsigned int streaming;
+ unsigned long flags;
switch (ctrl->id) {
case V4L2_CID_RUN_MODE:
return __atomisp_update_run_mode(asd);
case V4L2_CID_DEPTH_MODE:
- if (asd->streaming != ATOMISP_DEVICE_STREAMING_DISABLED) {
+ /* Use spinlock instead of mutex to avoid possible locking issues */
+ spin_lock_irqsave(&asd->isp->lock, flags);
+ streaming = asd->streaming;
+ spin_unlock_irqrestore(&asd->isp->lock, flags);
+ if (streaming != ATOMISP_DEVICE_STREAMING_DISABLED) {
dev_err(asd->isp->dev,
"ISP is streaming, it is not supported to change the depth mode\n");
return -EINVAL;
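The rule being applied here (and documented on the streaming field further
down): writers of asd->streaming hold both isp->mutex and isp->lock, so a
reader may take either one. In a ctrl handler, where taking isp->mutex is
not an option, the spinlock alone suffices. Both sides, as a sketch::

    /* reader: either lock suffices; here the spinlock */
    spin_lock_irqsave(&isp->lock, flags);
    streaming = asd->streaming;
    spin_unlock_irqrestore(&isp->lock, flags);

    /* writer: isp->mutex must already be held, plus the spinlock */
    lockdep_assert_held(&isp->mutex);
    spin_lock_irqsave(&isp->lock, flags);
    asd->streaming = ATOMISP_DEVICE_STREAMING_ENABLED;
    spin_unlock_irqrestore(&isp->lock, flags);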
pipe->isp = asd->isp;
spin_lock_init(&pipe->irq_lock);
INIT_LIST_HEAD(&pipe->activeq);
- INIT_LIST_HEAD(&pipe->activeq_out);
INIT_LIST_HEAD(&pipe->buffers_waiting_for_param);
INIT_LIST_HEAD(&pipe->per_frame_params);
memset(pipe->frame_request_config_id,
sizeof(struct atomisp_css_params_with_list *));
}
-static void atomisp_init_acc_pipe(struct atomisp_sub_device *asd,
- struct atomisp_acc_pipe *pipe)
-{
- pipe->asd = asd;
- pipe->isp = asd->isp;
-}
-
/*
* isp_subdev_init_entities - Initialize V4L2 subdev and media entity
* @asd: ISP CCDC module
if (ret < 0)
return ret;
- atomisp_init_subdev_pipe(asd, &asd->video_in,
- V4L2_BUF_TYPE_VIDEO_OUTPUT);
-
atomisp_init_subdev_pipe(asd, &asd->video_out_preview,
V4L2_BUF_TYPE_VIDEO_CAPTURE);
atomisp_init_subdev_pipe(asd, &asd->video_out_video_capture,
V4L2_BUF_TYPE_VIDEO_CAPTURE);
- atomisp_init_acc_pipe(asd, &asd->video_acc);
-
- ret = atomisp_video_init(&asd->video_in, "MEMORY",
- ATOMISP_RUN_MODE_SDV);
- if (ret < 0)
- return ret;
-
ret = atomisp_video_init(&asd->video_out_capture, "CAPTURE",
ATOMISP_RUN_MODE_STILL_CAPTURE);
if (ret < 0)
if (ret < 0)
return ret;
- atomisp_acc_init(&asd->video_acc, "ACC");
-
ret = v4l2_ctrl_handler_init(&asd->ctrl_handler, 1);
if (ret)
return ret;
return ret;
}
}
- for (i = 0; i < isp->input_cnt - 2; i++) {
+ for (i = 0; i < isp->input_cnt; i++) {
+ /* Don't create links for the test-pattern-generator */
+ if (isp->inputs[i].type == TEST_PATTERN)
+ continue;
+
ret = media_create_pad_link(&isp->inputs[i].camera->entity, 0,
&isp->csi2_port[isp->inputs[i].
port].subdev.entity,
entity, 0, 0);
if (ret < 0)
return ret;
- /*
- * file input only supported on subdev0
- * so do not create pad link for subdevs other then subdev0
- */
- if (asd->index)
- return 0;
- ret = media_create_pad_link(&asd->video_in.vdev.entity,
- 0, &asd->subdev.entity,
- ATOMISP_SUBDEV_PAD_SINK, 0);
- if (ret < 0)
- return ret;
}
return 0;
}
{
atomisp_subdev_cleanup_entities(asd);
v4l2_device_unregister_subdev(&asd->subdev);
- atomisp_video_unregister(&asd->video_in);
atomisp_video_unregister(&asd->video_out_preview);
atomisp_video_unregister(&asd->video_out_vf);
atomisp_video_unregister(&asd->video_out_capture);
atomisp_video_unregister(&asd->video_out_video_capture);
- atomisp_acc_unregister(&asd->video_acc);
}
-int atomisp_subdev_register_entities(struct atomisp_sub_device *asd,
- struct v4l2_device *vdev)
+int atomisp_subdev_register_subdev(struct atomisp_sub_device *asd,
+ struct v4l2_device *vdev)
+{
+ return v4l2_device_register_subdev(vdev, &asd->subdev);
+}
+
+int atomisp_subdev_register_video_nodes(struct atomisp_sub_device *asd,
+ struct v4l2_device *vdev)
{
int ret;
- u32 device_caps;
/*
* FIXME: check if all device caps are properly initialized.
- * Should any of those use V4L2_CAP_META_OUTPUT? Probably yes.
+ * Should any of those use V4L2_CAP_META_CAPTURE? Probably yes.
*/
- device_caps = V4L2_CAP_VIDEO_CAPTURE |
- V4L2_CAP_STREAMING;
-
- /* Register the subdev and video node. */
-
- ret = v4l2_device_register_subdev(vdev, &asd->subdev);
- if (ret < 0)
- goto error;
-
asd->video_out_preview.vdev.v4l2_dev = vdev;
- asd->video_out_preview.vdev.device_caps = device_caps |
- V4L2_CAP_VIDEO_OUTPUT;
+ asd->video_out_preview.vdev.device_caps = V4L2_CAP_VIDEO_CAPTURE | V4L2_CAP_STREAMING;
ret = video_register_device(&asd->video_out_preview.vdev,
VFL_TYPE_VIDEO, -1);
if (ret < 0)
goto error;
asd->video_out_capture.vdev.v4l2_dev = vdev;
- asd->video_out_capture.vdev.device_caps = device_caps |
- V4L2_CAP_VIDEO_OUTPUT;
+ asd->video_out_capture.vdev.device_caps = V4L2_CAP_VIDEO_CAPTURE | V4L2_CAP_STREAMING;
ret = video_register_device(&asd->video_out_capture.vdev,
VFL_TYPE_VIDEO, -1);
if (ret < 0)
goto error;
asd->video_out_vf.vdev.v4l2_dev = vdev;
- asd->video_out_vf.vdev.device_caps = device_caps |
- V4L2_CAP_VIDEO_OUTPUT;
+ asd->video_out_vf.vdev.device_caps = V4L2_CAP_VIDEO_CAPTURE | V4L2_CAP_STREAMING;
ret = video_register_device(&asd->video_out_vf.vdev,
VFL_TYPE_VIDEO, -1);
if (ret < 0)
goto error;
asd->video_out_video_capture.vdev.v4l2_dev = vdev;
- asd->video_out_video_capture.vdev.device_caps = device_caps |
- V4L2_CAP_VIDEO_OUTPUT;
+ asd->video_out_video_capture.vdev.device_caps = V4L2_CAP_VIDEO_CAPTURE | V4L2_CAP_STREAMING;
ret = video_register_device(&asd->video_out_video_capture.vdev,
VFL_TYPE_VIDEO, -1);
if (ret < 0)
goto error;
- asd->video_acc.vdev.v4l2_dev = vdev;
- asd->video_acc.vdev.device_caps = device_caps |
- V4L2_CAP_VIDEO_OUTPUT;
- ret = video_register_device(&asd->video_acc.vdev,
- VFL_TYPE_VIDEO, -1);
- if (ret < 0)
- goto error;
-
- /*
- * file input only supported on subdev0
- * so do not create video node for subdevs other then subdev0
- */
- if (asd->index)
- return 0;
-
- asd->video_in.vdev.v4l2_dev = vdev;
- asd->video_in.vdev.device_caps = device_caps |
- V4L2_CAP_VIDEO_CAPTURE;
- ret = video_register_device(&asd->video_in.vdev,
- VFL_TYPE_VIDEO, -1);
- if (ret < 0)
- goto error;
return 0;
return -ENOMEM;
for (i = 0; i < isp->num_of_streams; i++) {
asd = &isp->asd[i];
- spin_lock_init(&asd->lock);
asd->isp = isp;
isp_subdev_init_params(asd);
asd->index = i;
enum v4l2_buf_type type;
struct media_pad pad;
struct videobuf_queue capq;
- struct videobuf_queue outq;
struct list_head activeq;
- struct list_head activeq_out;
/*
* the buffers waiting for per-frame parameters, this is only valid
* in per-frame setting mode.
unsigned int buffers_in_css;
- /* irq_lock is used to protect video buffer state change operations and
- * also to make activeq, activeq_out, capq and outq list
- * operations atomic. */
+ /*
+ * irq_lock is used to protect video buffer state change operations and
+ * also to make activeq and capq operations atomic.
+ */
spinlock_t irq_lock;
unsigned int users;
*/
unsigned int frame_request_config_id[VIDEO_MAX_FRAME];
struct atomisp_css_params_with_list *frame_params[VIDEO_MAX_FRAME];
-
- /*
- * move wdt from asd struct to create wdt for each pipe
- */
- /* ISP2401 */
- struct timer_list wdt;
- unsigned int wdt_duration; /* in jiffies */
- unsigned long wdt_expires;
- atomic_t wdt_count;
-};
-
-struct atomisp_acc_pipe {
- struct video_device vdev;
- unsigned int users;
- bool running;
- struct atomisp_sub_device *asd;
- struct atomisp_device *isp;
};
struct atomisp_pad_format {
struct list_head list;
};
-struct atomisp_acc_fw {
- struct ia_css_fw_info *fw;
- unsigned int handle;
- unsigned int flags;
- unsigned int type;
- struct {
- size_t length;
- unsigned long css_ptr;
- } args[ATOMISP_ACC_NR_MEMORY];
- struct list_head list;
-};
-
-struct atomisp_map {
- ia_css_ptr ptr;
- size_t length;
- struct list_head list;
- /* FIXME: should keep book which maps are currently used
- * by binaries and not allow releasing those
- * which are in use. Implement by reference counting.
- */
-};
-
struct atomisp_sub_device {
struct v4l2_subdev subdev;
struct media_pad pads[ATOMISP_SUBDEV_PADS_NUM];
enum atomisp_subdev_input_entity input;
unsigned int output;
- struct atomisp_video_pipe video_in;
struct atomisp_video_pipe video_out_capture; /* capture output */
struct atomisp_video_pipe video_out_vf; /* viewfinder output */
struct atomisp_video_pipe video_out_preview; /* preview output */
- struct atomisp_acc_pipe video_acc;
/* video pipe main output */
struct atomisp_video_pipe video_out_video_capture;
/* struct isp_subdev_params params; */
- spinlock_t lock;
struct atomisp_device *isp;
struct v4l2_ctrl_handler ctrl_handler;
struct v4l2_ctrl *fmt_auto;
/* This field specifies which camera (v4l2 input) is selected. */
int input_curr;
- /* This field specifies which sensor is being selected when there
- are multiple sensors connected to the same MIPI port. */
- int sensor_curr;
atomic_t sof_count;
atomic_t sequence; /* Sequence value that is assigned to buffer. */
atomic_t sequence_temp;
- unsigned int streaming; /* Hold both mutex and lock to change this */
+ /*
+ * Writers of streaming must hold both isp->mutex and isp->lock.
+ * Readers of streaming need to hold only one of the two locks.
+ */
+ unsigned int streaming;
bool stream_prepared; /* whether css stream is created */
/* subdev index: will be used to show which subdev is holding the
int raw_buffer_locked_count;
spinlock_t raw_buffer_bitmap_lock;
- /* ISP 2400 */
- struct timer_list wdt;
- unsigned int wdt_duration; /* in jiffies */
- unsigned long wdt_expires;
-
/* ISP2401 */
bool re_trigger_capture;
void atomisp_subdev_cleanup_pending_events(struct atomisp_sub_device *asd);
void atomisp_subdev_unregister_entities(struct atomisp_sub_device *asd);
-int atomisp_subdev_register_entities(struct atomisp_sub_device *asd,
- struct v4l2_device *vdev);
+int atomisp_subdev_register_subdev(struct atomisp_sub_device *asd,
+ struct v4l2_device *vdev);
+int atomisp_subdev_register_video_nodes(struct atomisp_sub_device *asd,
+ struct v4l2_device *vdev);
int atomisp_subdev_init(struct atomisp_device *isp);
void atomisp_subdev_cleanup(struct atomisp_device *isp);
int atomisp_create_pads_links(struct atomisp_device *isp);
#include "atomisp_cmd.h"
#include "atomisp_common.h"
#include "atomisp_fops.h"
-#include "atomisp_file.h"
#include "atomisp_ioctl.h"
#include "atomisp_internal.h"
#include "atomisp-regs.h"
video->pad.flags = MEDIA_PAD_FL_SINK;
video->vdev.fops = &atomisp_fops;
video->vdev.ioctl_ops = &atomisp_ioctl_ops;
- break;
- case V4L2_BUF_TYPE_VIDEO_OUTPUT:
- direction = "input";
- video->pad.flags = MEDIA_PAD_FL_SOURCE;
- video->vdev.fops = &atomisp_file_fops;
- video->vdev.ioctl_ops = &atomisp_file_ioctl_ops;
+ video->vdev.lock = &video->isp->mutex;
break;
default:
return -EINVAL;
return 0;
}
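Setting vdev.lock is what lets the explicit locking disappear from the
individual handlers: when video_device.lock is non-NULL the V4L2 core takes
that mutex around every file operation and ioctl before calling into the
driver. Conceptually (a sketch of the core's behaviour, not the actual
v4l2-dev.c code)::

    /* what the v4l2 core does around each ioctl when vdev->lock is set */
    if (vdev->lock && mutex_lock_interruptible(vdev->lock))
            return -ERESTARTSYS;
    ret = driver_ioctl_handler(file, cmd, arg);  /* runs with isp->mutex held */
    if (vdev->lock)
            mutex_unlock(vdev->lock);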
-void atomisp_acc_init(struct atomisp_acc_pipe *video, const char *name)
-{
- video->vdev.fops = &atomisp_fops;
- video->vdev.ioctl_ops = &atomisp_ioctl_ops;
-
- /* Initialize the video device. */
- snprintf(video->vdev.name, sizeof(video->vdev.name),
- "ATOMISP ISP %s", name);
- video->vdev.release = video_device_release_empty;
- video_set_drvdata(&video->vdev, video->isp);
-}
-
void atomisp_video_unregister(struct atomisp_video_pipe *video)
{
if (video_is_registered(&video->vdev)) {
}
}
-void atomisp_acc_unregister(struct atomisp_acc_pipe *video)
-{
- if (video_is_registered(&video->vdev))
- video_unregister_device(&video->vdev);
-}
-
static int atomisp_save_iunit_reg(struct atomisp_device *isp)
{
struct pci_dev *pdev = to_pci_dev(isp->dev);
&subdevs->v4l2_subdev.board_info;
struct i2c_adapter *adapter =
i2c_get_adapter(subdevs->v4l2_subdev.i2c_adapter_id);
- int sensor_num, i;
dev_info(isp->dev, "Probing Subdev %s\n", board_info->type);
* pixel_format.
*/
isp->inputs[isp->input_cnt].frame_size.pixel_format = 0;
- isp->inputs[isp->input_cnt].camera_caps =
- atomisp_get_default_camera_caps();
- sensor_num = isp->inputs[isp->input_cnt]
- .camera_caps->sensor_num;
isp->input_cnt++;
- for (i = 1; i < sensor_num; i++) {
- if (isp->input_cnt >= ATOM_ISP_MAX_INPUTS) {
- dev_warn(isp->dev,
- "atomisp inputs out of range\n");
- break;
- }
- isp->inputs[isp->input_cnt] =
- isp->inputs[isp->input_cnt - 1];
- isp->inputs[isp->input_cnt].sensor_index = i;
- isp->input_cnt++;
- }
break;
case CAMERA_MOTOR:
if (isp->motor) {
for (i = 0; i < isp->num_of_streams; i++)
atomisp_subdev_unregister_entities(&isp->asd[i]);
atomisp_tpg_unregister_entities(&isp->tpg);
- atomisp_file_input_unregister_entities(&isp->file_dev);
for (i = 0; i < ATOMISP_CAMERA_NR_PORTS; i++)
atomisp_mipi_csi2_unregister_entities(&isp->csi2_port[i]);
goto csi_and_subdev_probe_failed;
}
- ret =
- atomisp_file_input_register_entities(&isp->file_dev, &isp->v4l2_dev);
- if (ret < 0) {
- dev_err(isp->dev, "atomisp_file_input_register_entities\n");
- goto file_input_register_failed;
- }
-
ret = atomisp_tpg_register_entities(&isp->tpg, &isp->v4l2_dev);
if (ret < 0) {
dev_err(isp->dev, "atomisp_tpg_register_entities\n");
for (i = 0; i < isp->num_of_streams; i++) {
struct atomisp_sub_device *asd = &isp->asd[i];
- ret = atomisp_subdev_register_entities(asd, &isp->v4l2_dev);
+ ret = atomisp_subdev_register_subdev(asd, &isp->v4l2_dev);
if (ret < 0) {
- dev_err(isp->dev,
- "atomisp_subdev_register_entities fail\n");
+ dev_err(isp->dev, "atomisp_subdev_register_subdev fail\n");
for (; i > 0; i--)
atomisp_subdev_unregister_entities(
&isp->asd[i - 1]);
}
}
- dev_dbg(isp->dev,
- "FILE_INPUT enable, camera_cnt: %d\n", isp->input_cnt);
- isp->inputs[isp->input_cnt].type = FILE_INPUT;
- isp->inputs[isp->input_cnt].port = -1;
- isp->inputs[isp->input_cnt].camera_caps =
- atomisp_get_default_camera_caps();
- isp->inputs[isp->input_cnt++].camera = &isp->file_dev.sd;
-
if (isp->input_cnt < ATOM_ISP_MAX_INPUTS) {
dev_dbg(isp->dev,
"TPG detected, camera_cnt: %d\n", isp->input_cnt);
isp->inputs[isp->input_cnt].type = TEST_PATTERN;
isp->inputs[isp->input_cnt].port = -1;
- isp->inputs[isp->input_cnt].camera_caps =
- atomisp_get_default_camera_caps();
isp->inputs[isp->input_cnt++].camera = &isp->tpg.sd;
} else {
dev_warn(isp->dev, "too many atomisp inputs, TPG ignored.\n");
}
- ret = v4l2_device_register_subdev_nodes(&isp->v4l2_dev);
- if (ret < 0)
- goto link_failed;
-
- return media_device_register(&isp->media_dev);
+ return 0;
link_failed:
for (i = 0; i < isp->num_of_streams; i++)
subdev_register_failed:
atomisp_tpg_unregister_entities(&isp->tpg);
tpg_register_failed:
- atomisp_file_input_unregister_entities(&isp->file_dev);
-file_input_register_failed:
for (i = 0; i < ATOMISP_CAMERA_NR_PORTS; i++)
atomisp_mipi_csi2_unregister_entities(&isp->csi2_port[i]);
csi_and_subdev_probe_failed:
return ret;
}
+static int atomisp_register_device_nodes(struct atomisp_device *isp)
+{
+ int i, err;
+
+ for (i = 0; i < isp->num_of_streams; i++) {
+ err = atomisp_subdev_register_video_nodes(&isp->asd[i], &isp->v4l2_dev);
+ if (err)
+ return err;
+ }
+
+ err = atomisp_create_pads_links(isp);
+ if (err)
+ return err;
+
+ err = v4l2_device_register_subdev_nodes(&isp->v4l2_dev);
+ if (err)
+ return err;
+
+ return media_device_register(&isp->media_dev);
+}
+
static int atomisp_initialize_modules(struct atomisp_device *isp)
{
int ret;
goto error_mipi_csi2;
}
- ret = atomisp_file_input_init(isp);
- if (ret < 0) {
- dev_err(isp->dev,
- "file input device initialization failed\n");
- goto error_file_input;
- }
-
ret = atomisp_tpg_init(isp);
if (ret < 0) {
dev_err(isp->dev, "tpg initialization failed\n");
error_isp_subdev:
error_tpg:
atomisp_tpg_cleanup(isp);
-error_file_input:
- atomisp_file_input_cleanup(isp);
error_mipi_csi2:
atomisp_mipi_csi2_cleanup(isp);
return ret;
static void atomisp_uninitialize_modules(struct atomisp_device *isp)
{
atomisp_tpg_cleanup(isp);
- atomisp_file_input_cleanup(isp);
atomisp_mipi_csi2_cleanup(isp);
}
return true;
}
-static int init_atomisp_wdts(struct atomisp_device *isp)
-{
- int i, err;
-
- atomic_set(&isp->wdt_work_queued, 0);
- isp->wdt_work_queue = alloc_workqueue(isp->v4l2_dev.name, 0, 1);
- if (!isp->wdt_work_queue) {
- dev_err(isp->dev, "Failed to initialize wdt work queue\n");
- err = -ENOMEM;
- goto alloc_fail;
- }
- INIT_WORK(&isp->wdt_work, atomisp_wdt_work);
-
- for (i = 0; i < isp->num_of_streams; i++) {
- struct atomisp_sub_device *asd = &isp->asd[i];
-
- if (!IS_ISP2401) {
- timer_setup(&asd->wdt, atomisp_wdt, 0);
- } else {
- timer_setup(&asd->video_out_capture.wdt,
- atomisp_wdt, 0);
- timer_setup(&asd->video_out_preview.wdt,
- atomisp_wdt, 0);
- timer_setup(&asd->video_out_vf.wdt, atomisp_wdt, 0);
- timer_setup(&asd->video_out_video_capture.wdt,
- atomisp_wdt, 0);
- }
- }
- return 0;
-alloc_fail:
- return err;
-}
-
#define ATOM_ISP_PCI_BAR 0
static int atomisp_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
dev_dbg(&pdev->dev, "atomisp mmio base: %p\n", isp->base);
- rt_mutex_init(&isp->mutex);
- rt_mutex_init(&isp->loading);
- mutex_init(&isp->streamoff_mutex);
+ mutex_init(&isp->mutex);
spin_lock_init(&isp->lock);
/* This is not a true PCI device on SoC, so the delay is not needed. */
pci_write_config_dword(pdev, MRFLD_PCI_CSI_AFE_TRIM_CONTROL, csi_afe_trim);
}
- rt_mutex_lock(&isp->loading);
-
err = atomisp_initialize_modules(isp);
if (err < 0) {
dev_err(&pdev->dev, "atomisp_initialize_modules (%d)\n", err);
dev_err(&pdev->dev, "atomisp_register_entities failed (%d)\n", err);
goto register_entities_fail;
}
- err = atomisp_create_pads_links(isp);
- if (err < 0)
- goto register_entities_fail;
- /* init atomisp wdts */
- err = init_atomisp_wdts(isp);
- if (err != 0)
- goto wdt_work_queue_fail;
+
+ INIT_WORK(&isp->assert_recovery_work, atomisp_assert_recovery_work);
/* save the iunit context only once after all the values are init'ed. */
atomisp_save_iunit_reg(isp);
release_firmware(isp->firmware);
isp->firmware = NULL;
isp->css_env.isp_css_fw.data = NULL;
- isp->ready = true;
- rt_mutex_unlock(&isp->loading);
+
+ err = atomisp_register_device_nodes(isp);
+ if (err)
+ goto css_init_fail;
atomisp_drvfs_init(isp);
request_irq_fail:
hmm_cleanup();
pm_runtime_get_noresume(&pdev->dev);
- destroy_workqueue(isp->wdt_work_queue);
-wdt_work_queue_fail:
atomisp_unregister_entities(isp);
register_entities_fail:
atomisp_uninitialize_modules(isp);
initialize_modules_fail:
- rt_mutex_unlock(&isp->loading);
cpu_latency_qos_remove_request(&isp->pm_qos);
atomisp_msi_irq_uninit(isp);
pci_free_irq_vectors(pdev);
atomisp_msi_irq_uninit(isp);
atomisp_unregister_entities(isp);
- destroy_workqueue(isp->wdt_work_queue);
- atomisp_file_input_cleanup(isp);
-
release_firmware(isp->firmware);
}
#define __ATOMISP_V4L2_H__
struct atomisp_video_pipe;
-struct atomisp_acc_pipe;
struct v4l2_device;
struct atomisp_device;
struct firmware;
int atomisp_video_init(struct atomisp_video_pipe *video, const char *name,
unsigned int run_mode);
-void atomisp_acc_init(struct atomisp_acc_pipe *video, const char *name);
void atomisp_video_unregister(struct atomisp_video_pipe *video);
-void atomisp_acc_unregister(struct atomisp_acc_pipe *video);
const struct firmware *atomisp_load_firmware(struct atomisp_device *isp);
int atomisp_csi_lane_config(struct atomisp_device *isp);
#include "hmm/hmm_common.h"
#include "hmm/hmm_bo.h"
-static unsigned int order_to_nr(unsigned int order)
-{
- return 1U << order;
-}
-
-static unsigned int nr_to_order_bottom(unsigned int nr)
-{
- return fls(nr) - 1;
-}
-
static int __bo_init(struct hmm_bo_device *bdev, struct hmm_buffer_object *bo,
unsigned int pgnr)
{
return bo;
}
-static void free_private_bo_pages(struct hmm_buffer_object *bo,
- int free_pgnr)
+static void free_pages_bulk_array(unsigned long nr_pages, struct page **page_array)
{
- int i, ret;
+ unsigned long i;
- for (i = 0; i < free_pgnr; i++) {
- ret = set_pages_wb(bo->pages[i], 1);
- if (ret)
- dev_err(atomisp_dev,
- "set page to WB err ...ret = %d\n",
- ret);
- /*
- W/A: set_pages_wb seldom return value = -EFAULT
- indicate that address of page is not in valid
- range(0xffff880000000000~0xffffc7ffffffffff)
- then, _free_pages would panic; Do not know why page
- address be valid,it maybe memory corruption by lowmemory
- */
- if (!ret) {
- __free_pages(bo->pages[i], 0);
- }
- }
+ for (i = 0; i < nr_pages; i++)
+ __free_pages(page_array[i], 0);
+}
+
+static void free_private_bo_pages(struct hmm_buffer_object *bo)
+{
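+ /* pages were mapped uncached (UC); restore write-back before freeing */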
+ set_pages_array_wb(bo->pages, bo->pgnr);
+ free_pages_bulk_array(bo->pgnr, bo->pages);
}
/*Allocate pages which will be used only by ISP*/
static int alloc_private_pages(struct hmm_buffer_object *bo)
{
+ const gfp_t gfp = __GFP_NOWARN | __GFP_RECLAIM | __GFP_FS;
int ret;
- unsigned int pgnr, order, blk_pgnr, alloc_pgnr;
- struct page *pages;
- gfp_t gfp = GFP_NOWAIT | __GFP_NOWARN; /* REVISIT: need __GFP_FS too? */
- int i, j;
- int failure_number = 0;
- bool reduce_order = false;
- bool lack_mem = true;
-
- pgnr = bo->pgnr;
-
- i = 0;
- alloc_pgnr = 0;
-
- while (pgnr) {
- order = nr_to_order_bottom(pgnr);
- /*
- * if be short of memory, we will set order to 0
- * everytime.
- */
- if (lack_mem)
- order = HMM_MIN_ORDER;
- else if (order > HMM_MAX_ORDER)
- order = HMM_MAX_ORDER;
-retry:
- /*
- * When order > HMM_MIN_ORDER, for performance reasons we don't
- * want alloc_pages() to sleep. In case it fails and fallbacks
- * to HMM_MIN_ORDER or in case the requested order is originally
- * the minimum value, we can allow alloc_pages() to sleep for
- * robustness purpose.
- *
- * REVISIT: why __GFP_FS is necessary?
- */
- if (order == HMM_MIN_ORDER) {
- gfp &= ~GFP_NOWAIT;
- gfp |= __GFP_RECLAIM | __GFP_FS;
- }
-
- pages = alloc_pages(gfp, order);
- if (unlikely(!pages)) {
- /*
- * in low memory case, if allocation page fails,
- * we turn to try if order=0 allocation could
- * succeed. if order=0 fails too, that means there is
- * no memory left.
- */
- if (order == HMM_MIN_ORDER) {
- dev_err(atomisp_dev,
- "%s: cannot allocate pages\n",
- __func__);
- goto cleanup;
- }
- order = HMM_MIN_ORDER;
- failure_number++;
- reduce_order = true;
- /*
- * if fail two times continuously, we think be short
- * of memory now.
- */
- if (failure_number == 2) {
- lack_mem = true;
- failure_number = 0;
- }
- goto retry;
- } else {
- blk_pgnr = order_to_nr(order);
-
- /*
- * set memory to uncacheable -- UC_MINUS
- */
- ret = set_pages_uc(pages, blk_pgnr);
- if (ret) {
- dev_err(atomisp_dev,
- "set page uncacheablefailed.\n");
-
- __free_pages(pages, order);
- goto cleanup;
- }
-
- for (j = 0; j < blk_pgnr; j++, i++) {
- bo->pages[i] = pages + j;
- }
-
- pgnr -= blk_pgnr;
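+ /* alloc_pages_bulk_array() reports how many slots it filled; a short count means allocation failed part-way */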
+ ret = alloc_pages_bulk_array(gfp, bo->pgnr, bo->pages);
+ if (ret != bo->pgnr) {
+ free_pages_bulk_array(ret, bo->pages);
+ return -ENOMEM;
+ }
- /*
- * if order is not reduced this time, clear
- * failure_number.
- */
- if (reduce_order)
- reduce_order = false;
- else
- failure_number = 0;
- }
+ ret = set_pages_array_uc(bo->pages, bo->pgnr);
+ if (ret) {
+ dev_err(atomisp_dev, "set pages uncacheable failed.\n");
+ free_pages_bulk_array(bo->pgnr, bo->pages);
+ return ret;
}
return 0;
-cleanup:
- alloc_pgnr = i;
- free_private_bo_pages(bo, alloc_pgnr);
- return -ENOMEM;
}
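The rewrite above relies on the bulk allocator's contract: alloc_pages_bulk_array() returns how many array slots it filled and only fills NULL slots, which is also why a companion hunk below switches bo->pages from kmalloc_array() to kcalloc(). A minimal sketch of that error-handling pattern (hypothetical helper name, kernel context assumed):

	static int demo_bulk_alloc(unsigned int nr, struct page **pages)
	{
		/* pages[] must be zeroed; only NULL slots get filled */
		unsigned int got = alloc_pages_bulk_array(GFP_KERNEL, nr, pages);

		if (got != nr) {
			/* partial allocation: free only what we got */
			while (got--)
				__free_pages(pages[got], 0);
			return -ENOMEM;
		}
		return 0;
	}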
static void free_user_pages(struct hmm_buffer_object *bo,
{
int i;
- if (bo->mem_type == HMM_BO_MEM_TYPE_PFN) {
- unpin_user_pages(bo->pages, page_nr);
- } else {
- for (i = 0; i < page_nr; i++)
- put_page(bo->pages[i]);
- }
+ for (i = 0; i < page_nr; i++)
+ put_page(bo->pages[i]);
}
/*
const void __user *userptr)
{
int page_nr;
- struct vm_area_struct *vma;
-
- mutex_unlock(&bo->mutex);
- mmap_read_lock(current->mm);
- vma = find_vma(current->mm, (unsigned long)userptr);
- mmap_read_unlock(current->mm);
- if (!vma) {
- dev_err(atomisp_dev, "find_vma failed\n");
- mutex_lock(&bo->mutex);
- return -EFAULT;
- }
- mutex_lock(&bo->mutex);
- /*
- * Handle frame buffer allocated in other kerenl space driver
- * and map to user space
- */
userptr = untagged_addr(userptr);
- if (vma->vm_flags & (VM_IO | VM_PFNMAP)) {
- page_nr = pin_user_pages((unsigned long)userptr, bo->pgnr,
- FOLL_LONGTERM | FOLL_WRITE,
- bo->pages, NULL);
- bo->mem_type = HMM_BO_MEM_TYPE_PFN;
- } else {
- /*Handle frame buffer allocated in user space*/
- mutex_unlock(&bo->mutex);
- page_nr = get_user_pages_fast((unsigned long)userptr,
- (int)(bo->pgnr), 1, bo->pages);
- mutex_lock(&bo->mutex);
- bo->mem_type = HMM_BO_MEM_TYPE_USER;
- }
-
- dev_dbg(atomisp_dev, "%s: %d %s pages were allocated as 0x%08x\n",
- __func__,
- bo->pgnr,
- bo->mem_type == HMM_BO_MEM_TYPE_USER ? "user" : "pfn", page_nr);
+ /* Handle frame buffer allocated in user space */
+ mutex_unlock(&bo->mutex);
+ page_nr = get_user_pages_fast((unsigned long)userptr, bo->pgnr, 1, bo->pages);
+ mutex_lock(&bo->mutex);
/* can be written by caller, not forced */
if (page_nr != bo->pgnr) {
mutex_lock(&bo->mutex);
check_bo_status_no_goto(bo, HMM_BO_PAGE_ALLOCED, status_err);
- bo->pages = kmalloc_array(bo->pgnr, sizeof(struct page *), GFP_KERNEL);
+ bo->pages = kcalloc(bo->pgnr, sizeof(struct page *), GFP_KERNEL);
if (unlikely(!bo->pages)) {
ret = -ENOMEM;
goto alloc_err;
bo->status &= (~HMM_BO_PAGE_ALLOCED);
if (bo->type == HMM_BO_PRIVATE)
- free_private_bo_pages(bo, bo->pgnr);
+ free_private_bo_pages(bo);
else if (bo->type == HMM_BO_USER)
free_user_pages(bo, bo->pgnr);
else
params->fpn_config.data = NULL;
}
if (!params->fpn_config.data) {
- params->fpn_config.data = kvmalloc(height * width *
- sizeof(short), GFP_KERNEL);
+ params->fpn_config.data = kvmalloc(array3_size(height, width, sizeof(short)),
+ GFP_KERNEL);
if (!params->fpn_config.data) {
IA_CSS_ERROR("out of memory");
IA_CSS_LEAVE_ERR_PRIVATE(-ENOMEM);
mutex_lock(&imxmd->md.graph_mutex);
if (on) {
- ret = __media_pipeline_start(entity, &imxmd->pipe);
+ ret = __media_pipeline_start(entity->pads, &imxmd->pipe);
if (ret)
goto out;
ret = v4l2_subdev_call(sd, video, s_stream, 1);
if (ret)
- __media_pipeline_stop(entity);
+ __media_pipeline_stop(entity->pads);
} else {
v4l2_subdev_call(sd, video, s_stream, 0);
- if (entity->pipe)
- __media_pipeline_stop(entity);
+ if (media_pad_pipeline(entity->pads))
+ __media_pipeline_stop(entity->pads);
}
out:
mutex_lock(&csi->mdev.graph_mutex);
- ret = __media_pipeline_start(&csi->sd.entity, &csi->pipe);
+ ret = __video_device_pipeline_start(csi->vdev, &csi->pipe);
if (ret)
goto err_unlock;
return 0;
err_stop:
- __media_pipeline_stop(&csi->sd.entity);
+ __video_device_pipeline_stop(csi->vdev);
err_unlock:
mutex_unlock(&csi->mdev.graph_mutex);
dev_err(csi->dev, "pipeline start failed with %d\n", ret);
mutex_lock(&csi->mdev.graph_mutex);
v4l2_subdev_call(&csi->sd, video, s_stream, 0);
- __media_pipeline_stop(&csi->sd.entity);
+ __video_device_pipeline_stop(csi->vdev);
mutex_unlock(&csi->mdev.graph_mutex);
/* release all active buffers */
* @b: white balance gain for B channel.
* @gb: white balance gain for Gb channel.
*
- * Precision u3.13, range [0, 8). White balance correction is done by applying
- * a multiplicative gain to each color channels prior to BNR.
+ * For BNR parameters, the WB gain factor for the four channels [Ggr, Ggb,
+ * Gb, Gr]. Their precision is U3.13, the range is (0, 8), and the actual
+ * gain is Gx + 1; typically Gx = 1.
+ *
+ * Pout = {Pin * (1 + Gx)}.
*/
struct ipu3_uapi_bnr_static_config_wb_gains_config {
__u16 gr;
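As a worked example of the U3.13 format above (3 integer bits, 13 fractional bits, so one LSB is 1/8192), with a hypothetical encoding macro that is not part of the UAPI:

	#define U3_13(x)	((__u16)((x) * 8192))

	/* U3_13(1.0) == 0x2000 encodes Gx = 1.0, so the applied gain is
	 * 1 + Gx = 2.0 and Pout = 2 * Pin per the formula above.
	 */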
struct v4l2_subdev_state *sd_state,
struct v4l2_subdev_selection *sel)
{
- struct v4l2_rect *try_sel, *r;
- struct imgu_v4l2_subdev *imgu_sd = container_of(sd,
- struct imgu_v4l2_subdev,
- subdev);
+ struct imgu_v4l2_subdev *imgu_sd =
+ container_of(sd, struct imgu_v4l2_subdev, subdev);
if (sel->pad != IMGU_NODE_IN)
return -EINVAL;
switch (sel->target) {
case V4L2_SEL_TGT_CROP:
- try_sel = v4l2_subdev_get_try_crop(sd, sd_state, sel->pad);
- r = &imgu_sd->rect.eff;
- break;
+ if (sel->which == V4L2_SUBDEV_FORMAT_TRY)
+ sel->r = *v4l2_subdev_get_try_crop(sd, sd_state,
+ sel->pad);
+ else
+ sel->r = imgu_sd->rect.eff;
+ return 0;
case V4L2_SEL_TGT_COMPOSE:
- try_sel = v4l2_subdev_get_try_compose(sd, sd_state, sel->pad);
- r = &imgu_sd->rect.bds;
- break;
+ if (sel->which == V4L2_SUBDEV_FORMAT_TRY)
+ sel->r = *v4l2_subdev_get_try_compose(sd, sd_state,
+ sel->pad);
+ else
+ sel->r = imgu_sd->rect.bds;
+ return 0;
default:
return -EINVAL;
}
-
- if (sel->which == V4L2_SUBDEV_FORMAT_TRY)
- sel->r = *try_sel;
- else
- sel->r = *r;
-
- return 0;
}
static int imgu_subdev_set_selection(struct v4l2_subdev *sd,
pipe = node->pipe;
imgu_pipe = &imgu->imgu_pipe[pipe];
atomic_set(&node->sequence, 0);
- r = media_pipeline_start(&node->vdev.entity, &imgu_pipe->pipeline);
+ r = video_device_pipeline_start(&node->vdev, &imgu_pipe->pipeline);
if (r < 0)
goto fail_return_bufs;
return 0;
fail_stop_pipeline:
- media_pipeline_stop(&node->vdev.entity);
+ video_device_pipeline_stop(&node->vdev);
fail_return_bufs:
imgu_return_all_buffers(imgu, node, VB2_BUF_STATE_QUEUED);
imgu_return_all_buffers(imgu, node, VB2_BUF_STATE_ERROR);
mutex_unlock(&imgu->streaming_lock);
- media_pipeline_stop(&node->vdev.entity);
+ video_device_pipeline_stop(&node->vdev);
}
/******************** v4l2_ioctl_ops ********************/
err_vdev_release:
video_device_release(vdev);
+ v4l2_device_unregister(&core->v4l2_dev);
return ret;
}
struct amvdec_core *core = platform_get_drvdata(pdev);
video_unregister_device(core->vdev_dec);
+ v4l2_device_unregister(&core->v4l2_dev);
return 0;
}
struct iss_pipeline *pipe;
struct media_pad *pad;
- if (!me->pipe)
- return 0;
pipe = to_iss_pipeline(me);
- if (pipe->stream_state == ISS_PIPELINE_STREAM_STOPPED)
+ if (!pipe || pipe->stream_state == ISS_PIPELINE_STREAM_STOPPED)
return 0;
pad = media_pad_remote_pad_first(&pipe->output->pad);
return pad->entity == me;
* Start streaming on the pipeline. No link touching an entity in the
* pipeline can be activated or deactivated once streaming is started.
*/
- pipe = entity->pipe
- ? to_iss_pipeline(entity) : &video->pipe;
+ pipe = to_iss_pipeline(&video->video.entity) ? : &video->pipe;
pipe->external = NULL;
pipe->external_rate = 0;
pipe->external_bpp = 0;
if (video->iss->pdata->set_constraints)
video->iss->pdata->set_constraints(video->iss, true);
- ret = media_pipeline_start(entity, &pipe->pipe);
+ ret = video_device_pipeline_start(&video->video, &pipe->pipe);
if (ret < 0)
goto err_media_pipeline_start;
err_omap4iss_set_stream:
vb2_streamoff(&vfh->queue, type);
err_iss_video_check_format:
- media_pipeline_stop(&video->video.entity);
+ video_device_pipeline_stop(&video->video);
err_media_pipeline_start:
if (video->iss->pdata->set_constraints)
video->iss->pdata->set_constraints(video->iss, false);
if (video->iss->pdata->set_constraints)
video->iss->pdata->set_constraints(video->iss, false);
- media_pipeline_stop(&video->video.entity);
+ video_device_pipeline_stop(&video->video);
done:
mutex_unlock(&video->stream_lock);
int external_bpp;
};
-#define to_iss_pipeline(__e) \
- container_of((__e)->pipe, struct iss_pipeline, pipe)
+static inline struct iss_pipeline *to_iss_pipeline(struct media_entity *entity)
+{
+ struct media_pipeline *pipe = media_entity_pipeline(entity);
+
+ if (!pipe)
+ return NULL;
+
+ return container_of(pipe, struct iss_pipeline, pipe);
+}
static inline int iss_pipeline_ready(struct iss_pipeline *pipe)
{
config VIDEO_SUNXI_CEDRUS
tristate "Allwinner Cedrus VPU driver"
depends on VIDEO_DEV
+ depends on RESET_CONTROLLER
depends on HAS_DMA
depends on OF
select MEDIA_CONTROLLER
VI_INCR_SYNCPT_NO_STALL);
/* start the pipeline */
- ret = media_pipeline_start(&chan->video.entity, pipe);
+ ret = video_device_pipeline_start(&chan->video, pipe);
if (ret < 0)
goto error_pipeline_start;
error_kthread_start:
tegra_channel_set_stream(chan, false);
error_set_stream:
- media_pipeline_stop(&chan->video.entity);
+ video_device_pipeline_stop(&chan->video);
error_pipeline_start:
tegra_channel_release_buffers(chan, VB2_BUF_STATE_QUEUED);
return ret;
tegra_channel_release_buffers(chan, VB2_BUF_STATE_ERROR);
tegra_channel_set_stream(chan, false);
- media_pipeline_stop(&chan->video.entity);
+ video_device_pipeline_stop(&chan->video);
}
/*
complete(&deve->pr_comp);
}
+/*
+ * Establish UA condition on SCSI device - all LUNs
+ */
+void target_dev_ua_allocate(struct se_device *dev, u8 asc, u8 ascq)
+{
+ struct se_dev_entry *se_deve;
+ struct se_lun *lun;
+
+ spin_lock(&dev->se_port_lock);
+ list_for_each_entry(lun, &dev->dev_sep_list, lun_dev_link) {
+
+ spin_lock(&lun->lun_deve_lock);
+ list_for_each_entry(se_deve, &lun->lun_deve_list, lun_link)
+ core_scsi3_ua_allocate(se_deve, asc, ascq);
+ spin_unlock(&lun->lun_deve_lock);
+ }
+ spin_unlock(&dev->se_port_lock);
+}
+
static void
target_luns_data_has_changed(struct se_node_acl *nacl, struct se_dev_entry *new,
bool skip_new)
clear_bit(IBD_PLUGF_PLUGGED, &ib_dev_plug->flags);
}
-static unsigned long long iblock_emulate_read_cap_with_block_size(
- struct se_device *dev,
- struct block_device *bd,
- struct request_queue *q)
+static sector_t iblock_get_blocks(struct se_device *dev)
{
- u32 block_size = bdev_logical_block_size(bd);
+ struct iblock_dev *ib_dev = IBLOCK_DEV(dev);
+ u32 block_size = bdev_logical_block_size(ib_dev->ibd_bd);
unsigned long long blocks_long =
- div_u64(bdev_nr_bytes(bd), block_size) - 1;
+ div_u64(bdev_nr_bytes(ib_dev->ibd_bd), block_size) - 1;
if (block_size == dev->dev_attrib.block_size)
return blocks_long;
return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;
}
-static sector_t iblock_get_blocks(struct se_device *dev)
-{
- struct iblock_dev *ib_dev = IBLOCK_DEV(dev);
- struct block_device *bd = ib_dev->ibd_bd;
- struct request_queue *q = bdev_get_queue(bd);
-
- return iblock_emulate_read_cap_with_block_size(dev, bd, q);
-}
-
static sector_t iblock_get_alignment_offset_lbas(struct se_device *dev)
{
struct iblock_dev *ib_dev = IBLOCK_DEV(dev);
void target_free_device(struct se_device *);
int target_for_each_device(int (*fn)(struct se_device *dev, void *data),
void *data);
+void target_dev_ua_allocate(struct se_device *dev, u8 asc, u8 ascq);
/* target_core_configfs.c */
extern struct configfs_item_operations target_core_dev_item_ops;
__core_scsi3_complete_pro_preempt(dev, pr_reg_n,
(preempt_type == PREEMPT_AND_ABORT) ? &preempt_and_abort_list : NULL,
type, scope, preempt_type);
-
- if (preempt_type == PREEMPT_AND_ABORT)
- core_scsi3_release_preempt_and_abort(
- &preempt_and_abort_list, pr_reg_n);
}
+
spin_unlock(&dev->dev_reservation_lock);
+ /*
+ * SPC-4 5.12.11.2.6 Preempting and aborting
+ * The actions described in this subclause shall be performed
+ * for all I_T nexuses that are registered with the non-zero
+ * SERVICE ACTION RESERVATION KEY value, without regard for
+ * whether the preempted I_T nexuses hold the persistent
+ * reservation. If the SERVICE ACTION RESERVATION KEY field is
+ * set to zero and an all registrants persistent reservation is
+ * present, the device server shall abort all commands for all
+ * registered I_T nexuses.
+ */
+ if (preempt_type == PREEMPT_AND_ABORT) {
+ core_tmr_lun_reset(dev, NULL, &preempt_and_abort_list,
+ cmd);
+ core_scsi3_release_preempt_and_abort(
+ &preempt_and_abort_list, pr_reg_n);
+ }
+
if (pr_tmpl->pr_aptpl_active)
core_scsi3_update_and_write_aptpl(cmd->se_dev, true);
if (calling_it_nexus)
continue;
- if (pr_reg->pr_res_key != sa_res_key)
+ if (sa_res_key && pr_reg->pr_res_key != sa_res_key)
continue;
pr_reg_nacl = pr_reg->pr_reg_nacl;
* transport protocols where port names are not required;
* d) Register the reservation key specified in the SERVICE ACTION
* RESERVATION KEY field;
- * e) Retain the reservation key specified in the SERVICE ACTION
- * RESERVATION KEY field and associated information;
*
* Also, It is not an error for a REGISTER AND MOVE service action to
* register an I_T nexus that is already registered with the same
dest_pr_reg = __core_scsi3_locate_pr_reg(dev, dest_node_acl,
iport_ptr);
new_reg = 1;
+ } else {
+ /*
+ * e) Retain the reservation key specified in the SERVICE ACTION
+ * RESERVATION KEY field and associated information;
+ */
+ dest_pr_reg->pr_res_key = sa_res_key;
}
/*
* f) Release the persistent reservation for the persistent reservation
tmr->response = (!ret) ? TMR_FUNCTION_COMPLETE :
TMR_FUNCTION_REJECTED;
if (tmr->response == TMR_FUNCTION_COMPLETE) {
- target_ua_allocate_lun(cmd->se_sess->se_node_acl,
- cmd->orig_fe_lun, 0x29,
+ target_dev_ua_allocate(dev, 0x29,
ASCQ_29H_BUS_DEVICE_RESET_FUNCTION_OCCURRED);
}
break;
cpus_read_lock();
/* prefer BSP */
- control_cpu = 0;
- if (!cpu_online(control_cpu)) {
- control_cpu = get_cpu();
- put_cpu();
- }
+ control_cpu = cpumask_first(cpu_online_mask);
clamping = true;
schedule_delayed_work(&poll_pkg_cstate_work, 0);
If unsure, say N.
-config SERIAL_8250_GSC
+config SERIAL_8250_PARISC
tristate
- depends on SERIAL_8250 && GSC
+ depends on SERIAL_8250 && PARISC
default SERIAL_8250
config SERIAL_8250_DMA
8250_base-$(CONFIG_SERIAL_8250_DMA) += 8250_dma.o
8250_base-$(CONFIG_SERIAL_8250_DWLIB) += 8250_dwlib.o
8250_base-$(CONFIG_SERIAL_8250_FINTEK) += 8250_fintek.o
-obj-$(CONFIG_SERIAL_8250_GSC) += 8250_gsc.o
+obj-$(CONFIG_SERIAL_8250_PARISC) += 8250_parisc.o
obj-$(CONFIG_SERIAL_8250_PCI) += 8250_pci.o
obj-$(CONFIG_SERIAL_8250_EXAR) += 8250_exar.o
obj-$(CONFIG_SERIAL_8250_HP300) += 8250_hp300.o
}
/**
- * ufshcd_utmrl_clear - Clear a bit in UTRMLCLR register
+ * ufshcd_utmrl_clear - Clear a bit in UTMRLCLR register
* @hba: per adapter instance
* @pos: position of the bit to be cleared
*/
if (ret)
dev_err(hba->dev,
- "%s: query attribute, opcode %d, idn %d, failed with error %d after %d retries\n",
+ "%s: query flag, opcode %d, idn %d, failed with error %d after %d retries\n",
__func__, opcode, idn, ret, retries);
return ret;
}
rgn = hpb->rgn_tbl + rgn_idx;
srgn = rgn->srgn_tbl + srgn_idx;
- /* If command type is WRITE or DISCARD, set bitmap as drity */
+ /* If command type is WRITE or DISCARD, set bitmap as dirty */
if (ufshpb_is_write_or_discard(cmd)) {
ufshpb_iterate_rgn(hpb, rgn_idx, srgn_idx, srgn_offset,
transfer_len, true);
static enum rq_end_io_ret ufshpb_umap_req_compl_fn(struct request *req,
blk_status_t error)
{
- struct ufshpb_req *umap_req = (struct ufshpb_req *)req->end_io_data;
+ struct ufshpb_req *umap_req = req->end_io_data;
ufshpb_put_req(umap_req->hpb, umap_req);
return RQ_END_IO_NONE;
static enum rq_end_io_ret ufshpb_map_req_compl_fn(struct request *req,
blk_status_t error)
{
- struct ufshpb_req *map_req = (struct ufshpb_req *) req->end_io_data;
+ struct ufshpb_req *map_req = req->end_io_data;
struct ufshpb_lu *hpb = map_req->hpb;
struct ufshpb_subregion *srgn;
unsigned long flags;
host->ice_mmio = devm_ioremap_resource(dev, res);
if (IS_ERR(host->ice_mmio)) {
err = PTR_ERR(host->ice_mmio);
- dev_err(dev, "Failed to map ICE registers; err=%d\n", err);
return err;
}
#include <linux/delay.h>
#include <linux/dma-mapping.h>
#include <linux/of.h>
+#include <linux/of_graph.h>
#include <linux/acpi.h>
#include <linux/pinctrl/consumer.h>
#include <linux/reset.h>
* mode. If the controller supports DRD but the dr_mode is not
* specified or set to OTG, then set the mode to peripheral.
*/
- if (mode == USB_DR_MODE_OTG &&
+ if (mode == USB_DR_MODE_OTG && !dwc->edev &&
(!IS_ENABLED(CONFIG_USB_ROLE_SWITCH) ||
!device_property_read_bool(dwc->dev, "usb-role-switch")) &&
!DWC3_VER_IS_PRIOR(DWC3, 330A))
}
}
+static struct extcon_dev *dwc3_get_extcon(struct dwc3 *dwc)
+{
+ struct device *dev = dwc->dev;
+ struct device_node *np_phy;
+ struct extcon_dev *edev = NULL;
+ const char *name;
+
+ if (device_property_read_bool(dev, "extcon"))
+ return extcon_get_edev_by_phandle(dev, 0);
+
+ /*
+ * Device tree platforms should get extcon via phandle.
+ * On ACPI platforms, we get the name from a device property.
+ * This device property is for kernel internal use only and
+ * is expected to be set by the glue code.
+ */
+ if (device_property_read_string(dev, "linux,extcon-name", &name) == 0)
+ return extcon_get_extcon_dev(name);
+
+ /*
+ * Try to get an extcon device from the USB PHY controller's "port"
+ * node. Check if it has the "port" node first, to avoid printing the
+ * error message from underlying code, as it's a valid case: extcon
+ * device (and "port" node) may be missing in case of "usb-role-switch"
+ * or OTG mode.
+ */
+ np_phy = of_parse_phandle(dev->of_node, "phys", 0);
+ if (of_graph_is_present(np_phy)) {
+ struct device_node *np_conn;
+
+ np_conn = of_graph_get_remote_node(np_phy, -1, -1);
+ if (np_conn)
+ edev = extcon_find_edev_by_node(np_conn);
+ of_node_put(np_conn);
+ }
+ of_node_put(np_phy);
+
+ return edev;
+}
+
static int dwc3_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
goto err2;
}
+ dwc->edev = dwc3_get_extcon(dwc);
+ if (IS_ERR(dwc->edev)) {
+ ret = dev_err_probe(dwc->dev, PTR_ERR(dwc->edev), "failed to get extcon\n");
+ goto err3;
+ }
+
ret = dwc3_get_dr_mode(dwc);
if (ret)
goto err3;
*/
#include <linux/extcon.h>
-#include <linux/of_graph.h>
#include <linux/of_platform.h>
#include <linux/platform_device.h>
#include <linux/property.h>
return NOTIFY_DONE;
}
-static struct extcon_dev *dwc3_get_extcon(struct dwc3 *dwc)
-{
- struct device *dev = dwc->dev;
- struct device_node *np_phy;
- struct extcon_dev *edev = NULL;
- const char *name;
-
- if (device_property_read_bool(dev, "extcon"))
- return extcon_get_edev_by_phandle(dev, 0);
-
- /*
- * Device tree platforms should get extcon via phandle.
- * On ACPI platforms, we get the name from a device property.
- * This device property is for kernel internal use only and
- * is expected to be set by the glue code.
- */
- if (device_property_read_string(dev, "linux,extcon-name", &name) == 0) {
- edev = extcon_get_extcon_dev(name);
- if (!edev)
- return ERR_PTR(-EPROBE_DEFER);
-
- return edev;
- }
-
- /*
- * Try to get an extcon device from the USB PHY controller's "port"
- * node. Check if it has the "port" node first, to avoid printing the
- * error message from underlying code, as it's a valid case: extcon
- * device (and "port" node) may be missing in case of "usb-role-switch"
- * or OTG mode.
- */
- np_phy = of_parse_phandle(dev->of_node, "phys", 0);
- if (of_graph_is_present(np_phy)) {
- struct device_node *np_conn;
-
- np_conn = of_graph_get_remote_node(np_phy, -1, -1);
- if (np_conn)
- edev = extcon_find_edev_by_node(np_conn);
- of_node_put(np_conn);
- }
- of_node_put(np_phy);
-
- return edev;
-}
-
#if IS_ENABLED(CONFIG_USB_ROLE_SWITCH)
#define ROLE_SWITCH 1
static int dwc3_usb_role_switch_set(struct usb_role_switch *sw,
device_property_read_bool(dwc->dev, "usb-role-switch"))
return dwc3_setup_role_switch(dwc);
- dwc->edev = dwc3_get_extcon(dwc);
- if (IS_ERR(dwc->edev))
- return PTR_ERR(dwc->edev);
-
if (dwc->edev) {
dwc->edev_nb.notifier_call = dwc3_drd_notifier;
ret = extcon_register_notifier(dwc->edev, EXTCON_USB_HOST,
/* Manage SoftReset */
reset_control_deassert(dwc3_data->rstc_rst);
- child = of_get_child_by_name(node, "usb");
+ child = of_get_compatible_child(node, "snps,dwc3");
if (!child) {
dev_err(&pdev->dev, "failed to find dwc3 core node\n");
ret = -ENODEV;
trb->ctrl = DWC3_TRBCTL_ISOCHRONOUS;
}
- /* always enable Interrupt on Missed ISOC */
- trb->ctrl |= DWC3_TRB_CTRL_ISP_IMI;
+ if (!no_interrupt && !chain)
+ trb->ctrl |= DWC3_TRB_CTRL_ISP_IMI;
break;
case USB_ENDPOINT_XFER_BULK:
cmd |= DWC3_DEPCMD_PARAM(dep->resource_index);
memset(¶ms, 0, sizeof(params));
ret = dwc3_send_gadget_ep_cmd(dep, cmd, ¶ms);
+ /*
+ * If the End Transfer command was timed out while the device is
+ * not in SETUP phase, it's possible that an incoming Setup packet
+ * may prevent the command's completion. Let's retry when the
+ * ep0state returns to EP0_SETUP_PHASE.
+ */
+ if (ret == -ETIMEDOUT && dep->dwc->ep0state != EP0_SETUP_PHASE) {
+ dep->flags |= DWC3_EP_DELAY_STOP;
+ return 0;
+ }
WARN_ON_ONCE(ret);
dep->resource_index = 0;
if (event->status & DEPEVT_STATUS_SHORT && !chain)
return 1;
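+ /* a missed isoc service interval also completes this TRB */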
+ if ((trb->ctrl & DWC3_TRB_CTRL_ISP_IMI) &&
+ DWC3_TRB_SIZE_TRBSTS(trb->size) == DWC3_TRBSTS_MISSED_ISOC)
+ return 1;
+
if ((trb->ctrl & DWC3_TRB_CTRL_IOC) ||
(trb->ctrl & DWC3_TRB_CTRL_LST))
return 1;
* timeout. Delay issuing the End Transfer command until the Setup TRB is
* prepared.
*/
- if (dwc->ep0state != EP0_SETUP_PHASE) {
+ if (dwc->ep0state != EP0_SETUP_PHASE && !dwc->delayed_status) {
dep->flags |= DWC3_EP_DELAY_STOP;
return;
}
queue->sequence = 0;
queue->buf_used = 0;
+ queue->flags &= ~UVC_QUEUE_DROP_INCOMPLETE;
} else {
ret = vb2_streamoff(&queue->queue, queue->queue.type);
if (ret < 0)
void uvcg_complete_buffer(struct uvc_video_queue *queue,
struct uvc_buffer *buf)
{
- if ((queue->flags & UVC_QUEUE_DROP_INCOMPLETE) &&
- buf->length != buf->bytesused) {
- buf->state = UVC_BUF_STATE_QUEUED;
+ if (queue->flags & UVC_QUEUE_DROP_INCOMPLETE) {
+ queue->flags &= ~UVC_QUEUE_DROP_INCOMPLETE;
+ buf->state = UVC_BUF_STATE_ERROR;
vb2_set_plane_payload(&buf->buf.vb2_buf, 0, 0);
+ vb2_buffer_done(&buf->buf.vb2_buf, VB2_BUF_STATE_ERROR);
return;
}
struct uvc_buffer *buf)
{
void *mem = req->buf;
+ struct uvc_request *ureq = req->context;
int len = video->req_size;
int ret;
video->queue.buf_used = 0;
buf->state = UVC_BUF_STATE_DONE;
list_del(&buf->queue);
- uvcg_complete_buffer(&video->queue, buf);
video->fid ^= UVC_STREAM_FID;
+ ureq->last_buf = buf;
video->payload_size = 0;
}
if (video->payload_size == video->max_payload_size ||
+ video->queue.flags & UVC_QUEUE_DROP_INCOMPLETE ||
buf->bytesused == video->queue.buf_used)
video->payload_size = 0;
}
sg = sg_next(sg);
for_each_sg(sg, iter, ureq->sgt.nents - 1, i) {
- if (!len || !buf->sg || !sg_dma_len(buf->sg))
+ if (!len || !buf->sg || !buf->sg->length)
break;
- sg_left = sg_dma_len(buf->sg) - buf->offset;
+ sg_left = buf->sg->length - buf->offset;
part = min_t(unsigned int, len, sg_left);
sg_set_page(iter, sg_page(buf->sg), part, buf->offset);
req->length -= len;
video->queue.buf_used += req->length - header_len;
- if (buf->bytesused == video->queue.buf_used || !buf->sg) {
+ if (buf->bytesused == video->queue.buf_used || !buf->sg ||
+ video->queue.flags & UVC_QUEUE_DROP_INCOMPLETE) {
video->queue.buf_used = 0;
buf->state = UVC_BUF_STATE_DONE;
buf->offset = 0;
struct uvc_buffer *buf)
{
void *mem = req->buf;
+ struct uvc_request *ureq = req->context;
int len = video->req_size;
int ret;
req->length = video->req_size - len;
- if (buf->bytesused == video->queue.buf_used) {
+ if (buf->bytesused == video->queue.buf_used ||
+ video->queue.flags & UVC_QUEUE_DROP_INCOMPLETE) {
video->queue.buf_used = 0;
buf->state = UVC_BUF_STATE_DONE;
list_del(&buf->queue);
- uvcg_complete_buffer(&video->queue, buf);
video->fid ^= UVC_STREAM_FID;
+ ureq->last_buf = buf;
}
}
case 0:
break;
+ case -EXDEV:
+ uvcg_dbg(&video->uvc->func, "VS request missed xfer.\n");
+ queue->flags |= UVC_QUEUE_DROP_INCOMPLETE;
+ break;
+
case -ESHUTDOWN: /* disconnect from host. */
uvcg_dbg(&video->uvc->func, "VS request cancelled.\n");
uvcg_queue_cancel(queue, 1);
/* Endpoint now owns the request */
req = NULL;
- video->req_int_count++;
+ if (buf->state != UVC_BUF_STATE_DONE)
+ video->req_int_count++;
}
if (!req)
d->gadget.max_speed = USB_SPEED_HIGH;
d->gadget.speed = USB_SPEED_UNKNOWN;
d->gadget.dev.of_node = vhub->pdev->dev.of_node;
+ d->gadget.dev.of_node_reused = true;
rc = usb_add_gadget_udc(d->port_dev, &d->gadget);
if (rc != 0)
bdc->delayed_status = false;
bdc->reinit = reinit;
bdc->test_mode = false;
+ usb_gadget_set_state(&bdc->gadget, USB_STATE_NOTATTACHED);
}
/* TNotify wakeup timer */
if (dev->eps[i].stream_info)
xhci_free_stream_info(xhci,
dev->eps[i].stream_info);
- /* Endpoints on the TT/root port lists should have been removed
- * when usb_disable_device() was called for the device.
- * We can't drop them anyway, because the udev might have gone
- * away by this point, and we can't tell what speed it was.
+ /*
+ * Endpoints are normally deleted from the bandwidth list when
+ * endpoints are dropped, before the device is freed.
+ * If the host is dying or being removed then endpoints aren't
+ * dropped cleanly, so delete the endpoint from the list here.
+ * Only applicable for hosts with software bandwidth checking.
*/
- if (!list_empty(&dev->eps[i].bw_endpoint_list))
- xhci_warn(xhci, "Slot %u endpoint %u "
- "not removed from BW list!\n",
- slot_id, i);
+
+ if (!list_empty(&dev->eps[i].bw_endpoint_list)) {
+ list_del_init(&dev->eps[i].bw_endpoint_list);
+ xhci_dbg(xhci, "Slot %u endpoint %u not removed from BW list!\n",
+ slot_id, i);
+ }
}
/* If this is a hub, free the TT(s) from the TT list */
xhci_free_tt_info(xhci, dev, slot_id);
#define PCI_DEVICE_ID_INTEL_CML_XHCI 0xa3af
#define PCI_DEVICE_ID_INTEL_TIGER_LAKE_XHCI 0x9a13
#define PCI_DEVICE_ID_INTEL_MAPLE_RIDGE_XHCI 0x1138
-#define PCI_DEVICE_ID_INTEL_ALDER_LAKE_XHCI 0x461e
-#define PCI_DEVICE_ID_INTEL_ALDER_LAKE_N_XHCI 0x464e
-#define PCI_DEVICE_ID_INTEL_ALDER_LAKE_PCH_XHCI 0x51ed
-#define PCI_DEVICE_ID_INTEL_RAPTOR_LAKE_XHCI 0xa71e
-#define PCI_DEVICE_ID_INTEL_METEOR_LAKE_XHCI 0x7ec0
+#define PCI_DEVICE_ID_INTEL_ALDER_LAKE_PCH_XHCI 0x51ed
#define PCI_DEVICE_ID_AMD_RENOIR_XHCI 0x1639
#define PCI_DEVICE_ID_AMD_PROMONTORYA_4 0x43b9
#define PCI_DEVICE_ID_AMD_PROMONTORYA_3 0x43ba
#define PCI_DEVICE_ID_AMD_PROMONTORYA_2 0x43bb
#define PCI_DEVICE_ID_AMD_PROMONTORYA_1 0x43bc
-#define PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_1 0x161a
-#define PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_2 0x161b
-#define PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_3 0x161d
-#define PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_4 0x161e
-#define PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_5 0x15d6
-#define PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_6 0x15d7
-#define PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_7 0x161c
-#define PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_8 0x161f
#define PCI_DEVICE_ID_ASMEDIA_1042_XHCI 0x1042
#define PCI_DEVICE_ID_ASMEDIA_1042A_XHCI 0x1142
pdev->device == PCI_DEVICE_ID_INTEL_DNV_XHCI))
xhci->quirks |= XHCI_MISSING_CAS;
+ if (pdev->vendor == PCI_VENDOR_ID_INTEL &&
+ pdev->device == PCI_DEVICE_ID_INTEL_ALDER_LAKE_PCH_XHCI)
+ xhci->quirks |= XHCI_RESET_TO_DEFAULT;
+
if (pdev->vendor == PCI_VENDOR_ID_INTEL &&
(pdev->device == PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_2C_XHCI ||
pdev->device == PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_4C_XHCI ||
pdev->device == PCI_DEVICE_ID_INTEL_TITAN_RIDGE_DD_XHCI ||
pdev->device == PCI_DEVICE_ID_INTEL_ICE_LAKE_XHCI ||
pdev->device == PCI_DEVICE_ID_INTEL_TIGER_LAKE_XHCI ||
- pdev->device == PCI_DEVICE_ID_INTEL_MAPLE_RIDGE_XHCI ||
- pdev->device == PCI_DEVICE_ID_INTEL_ALDER_LAKE_XHCI ||
- pdev->device == PCI_DEVICE_ID_INTEL_ALDER_LAKE_N_XHCI ||
- pdev->device == PCI_DEVICE_ID_INTEL_ALDER_LAKE_PCH_XHCI ||
- pdev->device == PCI_DEVICE_ID_INTEL_RAPTOR_LAKE_XHCI ||
- pdev->device == PCI_DEVICE_ID_INTEL_METEOR_LAKE_XHCI))
+ pdev->device == PCI_DEVICE_ID_INTEL_MAPLE_RIDGE_XHCI))
xhci->quirks |= XHCI_DEFAULT_PM_RUNTIME_ALLOW;
if (pdev->vendor == PCI_VENDOR_ID_ETRON &&
}
if (pdev->vendor == PCI_VENDOR_ID_ASMEDIA &&
- pdev->device == PCI_DEVICE_ID_ASMEDIA_1042_XHCI)
+ pdev->device == PCI_DEVICE_ID_ASMEDIA_1042_XHCI) {
+ /*
+ * try to tame the ASMedia 1042 controller which reports 0.96
+ * but appears to behave more like 1.0
+ */
+ xhci->quirks |= XHCI_SPURIOUS_SUCCESS;
xhci->quirks |= XHCI_BROKEN_STREAMS;
+ }
if (pdev->vendor == PCI_VENDOR_ID_ASMEDIA &&
pdev->device == PCI_DEVICE_ID_ASMEDIA_1042A_XHCI) {
xhci->quirks |= XHCI_TRUST_TX_LENGTH;
pdev->device == PCI_DEVICE_ID_AMD_PROMONTORYA_4))
xhci->quirks |= XHCI_NO_SOFT_RETRY;
- if (pdev->vendor == PCI_VENDOR_ID_AMD &&
- (pdev->device == PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_1 ||
- pdev->device == PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_2 ||
- pdev->device == PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_3 ||
- pdev->device == PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_4 ||
- pdev->device == PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_5 ||
- pdev->device == PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_6 ||
- pdev->device == PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_7 ||
- pdev->device == PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_8))
+ /* xHC spec requires PCI devices to support D3hot and D3cold */
+ if (xhci->hci_version >= 0x120)
xhci->quirks |= XHCI_DEFAULT_PM_RUNTIME_ALLOW;
if (xhci->quirks & XHCI_RESET_ON_RESUME)
spin_lock_irq(&xhci->lock);
xhci_halt(xhci);
- /* Workaround for spurious wakeups at shutdown with HSW */
- if (xhci->quirks & XHCI_SPURIOUS_WAKEUP)
+
+ /*
+ * Workaround for spurious wakeups at shutdown with HSW, and for boot
+ * firmware delay in ADL-P PCH if ports are left in U3 at shutdown
+ */
+ if (xhci->quirks & XHCI_SPURIOUS_WAKEUP ||
+ xhci->quirks & XHCI_RESET_TO_DEFAULT)
xhci_reset(xhci, XHCI_RESET_SHORT_USEC);
+
spin_unlock_irq(&xhci->lock);
xhci_cleanup_msix(xhci);
#define XHCI_BROKEN_D3COLD BIT_ULL(41)
#define XHCI_EP_CTX_BROKEN_DCS BIT_ULL(42)
#define XHCI_SUSPEND_RESUME_CLKS BIT_ULL(43)
+#define XHCI_RESET_TO_DEFAULT BIT_ULL(44)
unsigned int num_active_eps;
unsigned int limit_active_eps;
unsigned char VB_ExtTVYFilterIndex;
unsigned char VB_ExtTVYFilterIndexROM661;
unsigned char REFindex;
- char ROMMODEIDX661;
+ signed char ROMMODEIDX661;
};
struct SiS_Ext2 {
}
EXPORT_SYMBOL_GPL(ucsi_send_command);
-int ucsi_resume(struct ucsi *ucsi)
-{
- u64 command;
-
- /* Restore UCSI notification enable mask after system resume */
- command = UCSI_SET_NOTIFICATION_ENABLE | ucsi->ntfy;
-
- return ucsi_send_command(ucsi, command, NULL, 0);
-}
-EXPORT_SYMBOL_GPL(ucsi_resume);
/* -------------------------------------------------------------------------- */
struct ucsi_work {
static int ucsi_check_connection(struct ucsi_connector *con)
{
+ u8 prev_flags = con->status.flags;
u64 command;
int ret;
return ret;
}
+ if (con->status.flags == prev_flags)
+ return 0;
+
if (con->status.flags & UCSI_CONSTAT_CONNECTED) {
- if (UCSI_CONSTAT_PWR_OPMODE(con->status.flags) ==
- UCSI_CONSTAT_PWR_OPMODE_PD)
- ucsi_partner_task(con, ucsi_check_altmodes, 30, 0);
+ ucsi_register_partner(con);
+ ucsi_pwr_opmode_change(con);
+ ucsi_partner_change(con);
} else {
ucsi_partner_change(con);
ucsi_port_psy_changed(con);
return ret;
}
+int ucsi_resume(struct ucsi *ucsi)
+{
+ struct ucsi_connector *con;
+ u64 command;
+ int ret;
+
+ /* Restore UCSI notification enable mask after system resume */
+ command = UCSI_SET_NOTIFICATION_ENABLE | ucsi->ntfy;
+ ret = ucsi_send_command(ucsi, command, NULL, 0);
+ if (ret < 0)
+ return ret;
+
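+ /* notifications were masked across suspend; re-check each connector for events we may have missed */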
+ for (con = ucsi->connector; con->port; con++) {
+ mutex_lock(&con->lock);
+ ucsi_check_connection(con);
+ mutex_unlock(&con->lock);
+ }
+
+ return 0;
+}
+EXPORT_SYMBOL_GPL(ucsi_resume);
+
static void ucsi_init_work(struct work_struct *work)
{
struct ucsi *ucsi = container_of(work, struct ucsi, work.work);
return 0;
}
+static int ucsi_acpi_resume(struct device *dev)
+{
+ struct ucsi_acpi *ua = dev_get_drvdata(dev);
+
+ return ucsi_resume(ua->ucsi);
+}
+
+static DEFINE_SIMPLE_DEV_PM_OPS(ucsi_acpi_pm_ops, NULL, ucsi_acpi_resume);
+
static const struct acpi_device_id ucsi_acpi_match[] = {
{ "PNP0CA0", 0 },
{ },
static struct platform_driver ucsi_acpi_platform_driver = {
.driver = {
.name = "ucsi_acpi",
+ .pm = pm_ptr(&ucsi_acpi_pm_ops),
.acpi_match_table = ACPI_PTR(ucsi_acpi_match),
},
.probe = ucsi_acpi_probe,
size = pci_resource_len(pdev, bar);
ret = aperture_remove_conflicting_devices(base, size, primary, name);
if (ret)
- break;
+ return ret;
}
- if (ret)
- return ret;
-
/*
* WARNING: Apparently we must kick fbdev drivers before vgacon,
* otherwise the vga fbdev driver falls over.
failed_regions:
cyberpro_free_fb_info(cfb);
failed_release:
+ pci_disable_device(dev);
return err;
}
int_cfb_info = NULL;
pci_release_regions(dev);
+ pci_disable_device(dev);
}
}
if (par->lcd_supply) {
ret = regulator_disable(par->lcd_supply);
if (ret)
- return ret;
+ dev_warn(&dev->dev, "Failed to disable regulator (%pe)\n",
+ ERR_PTR(ret));
}
lcd_disable_raster(DA8XX_FRAME_WAIT);
static ssize_t gbefb_show_memsize(struct device *dev, struct device_attribute *attr, char *buf)
{
- return snprintf(buf, PAGE_SIZE, "%u\n", gbe_mem_size);
+ return sysfs_emit(buf, "%u\n", gbe_mem_size);
}
static DEVICE_ATTR(size, S_IRUGO, gbefb_show_memsize, NULL);
static ssize_t gbefb_show_rev(struct device *device, struct device_attribute *attr, char *buf)
{
- return snprintf(buf, PAGE_SIZE, "%d\n", gbe_revision);
+ return sysfs_emit(buf, "%d\n", gbe_revision);
}
static DEVICE_ATTR(revision, S_IRUGO, gbefb_show_rev, NULL);
* and destination blitting areas overlap and
* adapt the bitmap addresses synchronously
* if the coordinates exceed the valid range.
- * The the areas do not overlap, we do our
+ * If the areas do not overlap, we do our
* normal check.
*/
if((mymax - mymin) < height) {
unsigned char VB_ExtTVYFilterIndex;
unsigned char VB_ExtTVYFilterIndexROM661;
unsigned char REFindex;
- char ROMMODEIDX661;
+ signed char ROMMODEIDX661;
};
struct SiS_Ext2 {
ctrl = smc501_readl(info->regs + SM501_DC_CRT_CONTROL);
ctrl &= SM501_DC_CRT_CONTROL_SEL;
- return snprintf(buf, PAGE_SIZE, "%s\n", ctrl ? "crt" : "panel");
+ return sysfs_emit(buf, "%s\n", ctrl ? "crt" : "panel");
}
/* sm501fb_crtsrc_show
struct kref kref;
int fb_count;
bool virtualized; /* true when physical usb device not present */
- struct delayed_work free_framebuffer_work;
atomic_t usb_active; /* 0 = update virtual buffer, but no usb traffic */
atomic_t lost_pixels; /* 1 = a render op failed. Need screen refresh */
u8 *edid; /* null until we read edid from hw or get from sysfs */
{
struct ufx_data *dev = container_of(kref, struct ufx_data, kref);
- /* this function will wait for all in-flight urbs to complete */
- if (dev->urbs.count > 0)
- ufx_free_urb_list(dev);
+ kfree(dev);
+}
- pr_debug("freeing ufx_data %p", dev);
+static void ufx_ops_destroy(struct fb_info *info)
+{
+ struct ufx_data *dev = info->par;
+ int node = info->node;
- kfree(dev);
+ /* Assume info structure is freed after this point */
+ framebuffer_release(info);
+
+ pr_debug("fb_info for /dev/fb%d has been freed", node);
+
+ /* release reference taken by kref_init in probe() */
+ kref_put(&dev->kref, ufx_free);
}
+
static void ufx_release_urb_work(struct work_struct *work)
{
struct urb_node *unode = container_of(work, struct urb_node,
up(&unode->dev->urbs.limit_sem);
}
-static void ufx_free_framebuffer_work(struct work_struct *work)
+static void ufx_free_framebuffer(struct ufx_data *dev)
{
- struct ufx_data *dev = container_of(work, struct ufx_data,
- free_framebuffer_work.work);
struct fb_info *info = dev->info;
- int node = info->node;
-
- unregister_framebuffer(info);
if (info->cmap.len != 0)
fb_dealloc_cmap(&info->cmap);
dev->info = NULL;
- /* Assume info structure is freed after this point */
- framebuffer_release(info);
-
- pr_debug("fb_info for /dev/fb%d has been freed", node);
-
/* ref taken in probe() as part of registering framebuffer */
kref_put(&dev->kref, ufx_free);
}
{
struct ufx_data *dev = info->par;
+ mutex_lock(&disconnect_mutex);
+
dev->fb_count--;
/* We can't free fb_info here - fbmem will touch it when we return */
if (dev->virtualized && (dev->fb_count == 0))
- schedule_delayed_work(&dev->free_framebuffer_work, HZ);
+ ufx_free_framebuffer(dev);
if ((dev->fb_count == 0) && (info->fbdefio)) {
fb_deferred_io_cleanup(info);
kref_put(&dev->kref, ufx_free);
+ mutex_unlock(&disconnect_mutex);
+
return 0;
}
.fb_blank = ufx_ops_blank,
.fb_check_var = ufx_ops_check_var,
.fb_set_par = ufx_ops_set_par,
+ .fb_destroy = ufx_ops_destroy,
};
/* Assumes &info->lock held by caller
goto destroy_modedb;
}
- INIT_DELAYED_WORK(&dev->free_framebuffer_work,
- ufx_free_framebuffer_work);
-
retval = ufx_reg_read(dev, 0x3000, &id_rev);
check_warn_goto_error(retval, "error %d reading 0x3000 register from device", retval);
dev_dbg(dev->gdev, "ID_REV register value 0x%08x", id_rev);
static void ufx_usb_disconnect(struct usb_interface *interface)
{
struct ufx_data *dev;
+ struct fb_info *info;
mutex_lock(&disconnect_mutex);
dev = usb_get_intfdata(interface);
+ info = dev->info;
pr_debug("USB disconnect starting\n");
/* if clients still have us open, will be freed on last close */
if (dev->fb_count == 0)
- schedule_delayed_work(&dev->free_framebuffer_work, 0);
+ ufx_free_framebuffer(dev);
- /* release reference taken by kref_init in probe() */
- kref_put(&dev->kref, ufx_free);
+ /* this function will wait for all in-flight urbs to complete */
+ if (dev->urbs.count > 0)
+ ufx_free_urb_list(dev);
- /* consider ufx_data freed */
+ pr_debug("freeing ufx_data %p", dev);
+
+ unregister_framebuffer(info);
mutex_unlock(&disconnect_mutex);
}
{
struct stifb_info *fb = container_of(info, struct stifb_info, info);
- if (rect->rop != ROP_COPY)
+ if (rect->rop != ROP_COPY ||
+ (fb->id == S9000_ID_HCRX && fb->info.var.bits_per_pixel == 32))
return cfb_fillrect(info, rect);
SETUP_HW(fb);
return rc;
}
-static int xilinxfb_release(struct device *dev)
+static void xilinxfb_release(struct device *dev)
{
struct xilinxfb_drvdata *drvdata = dev_get_drvdata(dev);
if (!(drvdata->flags & BUS_ACCESS_FLAG))
dcr_unmap(drvdata->dcr_host, drvdata->dcr_len);
#endif
-
- return 0;
}
/* ---------------------------------------------------------------------
static int xilinxfb_of_remove(struct platform_device *op)
{
- return xilinxfb_release(&op->dev);
+ xilinxfb_release(&op->dev);
+
+ return 0;
}
/* Match table for of_platform binding */
&priv->wdt_res, 1,
priv, sizeof(*priv));
if (IS_ERR(n->pdev)) {
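+ /* read the error out of n->pdev before kfree(n) frees the struct holding it */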
+ int err = PTR_ERR(n->pdev);
+
kfree(n);
- return PTR_ERR(n->pdev);
+ return err;
}
list_add_tail(&n->list, &pdev_list);
return (wdtcontrol & ENABLE_MASK) == ENABLE_MASK;
}
-/* This routine finds load value that will reset system in required timout */
+/* This routine finds load value that will reset system in required timeout */
static int wdt_setload(struct watchdog_device *wdd, unsigned int timeout)
{
struct sp805_wdt *wdt = watchdog_get_drvdata(wdd);
#include "watchdog_core.h" /* For watchdog_dev_register/... */
+#define CREATE_TRACE_POINTS
+#include <trace/events/watchdog.h>
+
static DEFINE_IDA(watchdog_ida);
static int stop_on_reboot = -1;
int ret;
ret = wdd->ops->stop(wdd);
+ trace_watchdog_stop(wdd, ret);
if (ret)
return NOTIFY_BAD;
}
#include "watchdog_core.h"
#include "watchdog_pretimeout.h"
+#include <trace/events/watchdog.h>
+
/* the dev_t structure to store the dynamically allocated watchdog devices */
static dev_t watchdog_devt;
/* Reference to watchdog device behind /dev/watchdog */
wd_data->last_hw_keepalive = now;
- if (wdd->ops->ping)
+ if (wdd->ops->ping) {
err = wdd->ops->ping(wdd); /* ping the watchdog */
- else
+ trace_watchdog_ping(wdd, err);
+ } else {
err = wdd->ops->start(wdd); /* restart watchdog */
+ trace_watchdog_start(wdd, err);
+ }
if (err == 0)
watchdog_hrtimer_pretimeout_start(wdd);
}
} else {
err = wdd->ops->start(wdd);
+ trace_watchdog_start(wdd, err);
if (err == 0) {
set_bit(WDOG_ACTIVE, &wdd->status);
wd_data->last_keepalive = started_at;
if (wdd->ops->stop) {
clear_bit(WDOG_HW_RUNNING, &wdd->status);
err = wdd->ops->stop(wdd);
+ trace_watchdog_stop(wdd, err);
} else {
set_bit(WDOG_HW_RUNNING, &wdd->status);
}
if (wdd->ops->set_timeout) {
err = wdd->ops->set_timeout(wdd, timeout);
+ trace_watchdog_set_timeout(wdd, timeout, err);
} else {
wdd->timeout = timeout;
/* Disable pretimeout if it doesn't fit the new timeout */
static inline dma_addr_t grant_to_dma(grant_ref_t grant)
{
- return XEN_GRANT_DMA_ADDR_OFF | ((dma_addr_t)grant << PAGE_SHIFT);
+ return XEN_GRANT_DMA_ADDR_OFF | ((dma_addr_t)grant << XEN_PAGE_SHIFT);
}
static inline grant_ref_t dma_to_grant(dma_addr_t dma)
{
- return (grant_ref_t)((dma & ~XEN_GRANT_DMA_ADDR_OFF) >> PAGE_SHIFT);
+ return (grant_ref_t)((dma & ~XEN_GRANT_DMA_ADDR_OFF) >> XEN_PAGE_SHIFT);
}
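A round-trip illustration of this encoding, assuming XEN_GRANT_DMA_ADDR_OFF is the top address bit (1ULL << 63) and XEN_PAGE_SHIFT is 12 (4 KiB Xen pages):

	dma_addr_t dma = grant_to_dma(0x42);	/* 0x8000000000042000 */
	grant_ref_t g = dma_to_grant(dma);	/* back to 0x42 */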
static struct xen_grant_dma_data *find_xen_grant_dma_data(struct device *dev)
unsigned long attrs)
{
struct xen_grant_dma_data *data;
- unsigned int i, n_pages = PFN_UP(size);
+ unsigned int i, n_pages = XEN_PFN_UP(size);
unsigned long pfn;
grant_ref_t grant;
void *ret;
if (unlikely(data->broken))
return NULL;
- ret = alloc_pages_exact(n_pages * PAGE_SIZE, gfp);
+ ret = alloc_pages_exact(n_pages * XEN_PAGE_SIZE, gfp);
if (!ret)
return NULL;
pfn = virt_to_pfn(ret);
if (gnttab_alloc_grant_reference_seq(n_pages, &grant)) {
- free_pages_exact(ret, n_pages * PAGE_SIZE);
+ free_pages_exact(ret, n_pages * XEN_PAGE_SIZE);
return NULL;
}
dma_addr_t dma_handle, unsigned long attrs)
{
struct xen_grant_dma_data *data;
- unsigned int i, n_pages = PFN_UP(size);
+ unsigned int i, n_pages = XEN_PFN_UP(size);
grant_ref_t grant;
data = find_xen_grant_dma_data(dev);
gnttab_free_grant_reference_seq(grant, n_pages);
- free_pages_exact(vaddr, n_pages * PAGE_SIZE);
+ free_pages_exact(vaddr, n_pages * XEN_PAGE_SIZE);
}
static struct page *xen_grant_dma_alloc_pages(struct device *dev, size_t size,
unsigned long attrs)
{
struct xen_grant_dma_data *data;
- unsigned int i, n_pages = PFN_UP(offset + size);
+ unsigned long dma_offset = xen_offset_in_page(offset),
+ pfn_offset = XEN_PFN_DOWN(offset);
+ unsigned int i, n_pages = XEN_PFN_UP(dma_offset + size);
grant_ref_t grant;
dma_addr_t dma_handle;
for (i = 0; i < n_pages; i++) {
gnttab_grant_foreign_access_ref(grant + i, data->backend_domid,
- xen_page_to_gfn(page) + i, dir == DMA_TO_DEVICE);
+ pfn_to_gfn(page_to_xen_pfn(page) + i + pfn_offset),
+ dir == DMA_TO_DEVICE);
}
- dma_handle = grant_to_dma(grant) + offset;
+ dma_handle = grant_to_dma(grant) + dma_offset;
return dma_handle;
}
unsigned long attrs)
{
struct xen_grant_dma_data *data;
- unsigned long offset = dma_handle & (PAGE_SIZE - 1);
- unsigned int i, n_pages = PFN_UP(offset + size);
+ unsigned long dma_offset = xen_offset_in_page(dma_handle);
+ unsigned int i, n_pages = XEN_PFN_UP(dma_offset + size);
grant_ref_t grant;
if (WARN_ON(dir == DMA_NONE))
interp_elf_ex = kmalloc(sizeof(*interp_elf_ex), GFP_KERNEL);
if (!interp_elf_ex) {
retval = -ENOMEM;
- goto out_free_ph;
+ goto out_free_file;
}
/* Get the exec headers */
out_free_dentry:
kfree(interp_elf_ex);
kfree(interp_elf_phdata);
+out_free_file:
allow_write_access(interpreter);
if (interpreter)
fput(interpreter);
u64 root_objectid;
u64 inum;
int share_count;
+ bool have_delayed_delete_refs;
};
static inline int extent_is_shared(struct share_check *sc)
struct prelim_ref *ref, *next_ref;
rbtree_postorder_for_each_entry_safe(ref, next_ref,
- &preftree->root.rb_root, rbnode)
+ &preftree->root.rb_root, rbnode) {
+ free_inode_elem_list(ref->inode_list);
free_pref(ref);
+ }
preftree->root = RB_ROOT_CACHED;
preftree->count = 0;
return (struct extent_inode_elem *)(uintptr_t)node->aux;
}
+static void free_leaf_list(struct ulist *ulist)
+{
+ struct ulist_node *node;
+ struct ulist_iterator uiter;
+
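+ /* node->aux carries an inode list (see unode_aux_to_inode_list()); free it along with the ulist */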
+ ULIST_ITER_INIT(&uiter);
+ while ((node = ulist_next(ulist, &uiter)))
+ free_inode_elem_list(unode_aux_to_inode_list(node));
+
+ ulist_free(ulist);
+}
+
/*
* We maintain three separate rbtrees: one for direct refs, one for
* indirect refs which have a key, and one for indirect refs which do not
cond_resched();
}
out:
- ulist_free(parents);
+ /*
+ * We may have inode lists attached to refs in the parents ulist, so we
+ * must free them before freeing the ulist and its refs.
+ */
+ free_leaf_list(parents);
return ret;
}
struct preftrees *preftrees, struct share_check *sc)
{
struct btrfs_delayed_ref_node *node;
- struct btrfs_delayed_extent_op *extent_op = head->extent_op;
struct btrfs_key key;
- struct btrfs_key tmp_op_key;
struct rb_node *n;
int count;
int ret = 0;
- if (extent_op && extent_op->update_key)
- btrfs_disk_key_to_cpu(&tmp_op_key, &extent_op->key);
-
spin_lock(&head->lock);
for (n = rb_first_cached(&head->ref_tree); n; n = rb_next(n)) {
node = rb_entry(n, struct btrfs_delayed_ref_node,
case BTRFS_TREE_BLOCK_REF_KEY: {
/* NORMAL INDIRECT METADATA backref */
struct btrfs_delayed_tree_ref *ref;
+ struct btrfs_key *key_ptr = NULL;
+
+ if (head->extent_op && head->extent_op->update_key) {
+ btrfs_disk_key_to_cpu(&key, &head->extent_op->key);
+ key_ptr = &key;
+ }
ref = btrfs_delayed_node_to_tree_ref(node);
ret = add_indirect_ref(fs_info, preftrees, ref->root,
- &tmp_op_key, ref->level + 1,
+ key_ptr, ref->level + 1,
node->bytenr, count, sc,
GFP_ATOMIC);
break;
key.offset = ref->offset;
/*
- * Found a inum that doesn't match our known inum, we
- * know it's shared.
+ * If we have a share check context and a reference for
+ * another inode, we can't exit immediately. This is
+ * because even if this is a BTRFS_ADD_DELAYED_REF
+ * reference we may find next a BTRFS_DROP_DELAYED_REF
+ * which cancels out this ADD reference.
+ *
+ * If this is a DROP reference and there was no previous
+ * ADD reference, then we need to signal that when we
+ * process references from the extent tree (through
+ * add_inline_refs() and add_keyed_refs()), we should
+ * not exit early if we find a reference for another
+ * inode, because one of the delayed DROP references
+ * may cancel that reference in the extent tree.
*/
- if (sc && sc->inum && ref->objectid != sc->inum) {
- ret = BACKREF_FOUND_SHARED;
- goto out;
- }
+ if (sc && count < 0)
+ sc->have_delayed_delete_refs = true;
ret = add_indirect_ref(fs_info, preftrees, ref->root,
&key, 0, node->bytenr, count, sc,
}
if (!ret)
ret = extent_is_shared(sc);
-out:
+
spin_unlock(&head->lock);
return ret;
}
key.type = BTRFS_EXTENT_DATA_KEY;
key.offset = btrfs_extent_data_ref_offset(leaf, dref);
- if (sc && sc->inum && key.objectid != sc->inum) {
+ if (sc && sc->inum && key.objectid != sc->inum &&
+ !sc->have_delayed_delete_refs) {
ret = BACKREF_FOUND_SHARED;
break;
}
ret = add_indirect_ref(fs_info, preftrees, root,
&key, 0, bytenr, count,
sc, GFP_NOFS);
+
break;
}
default:
key.type = BTRFS_EXTENT_DATA_KEY;
key.offset = btrfs_extent_data_ref_offset(leaf, dref);
- if (sc && sc->inum && key.objectid != sc->inum) {
+ if (sc && sc->inum && key.objectid != sc->inum &&
+ !sc->have_delayed_delete_refs) {
ret = BACKREF_FOUND_SHARED;
break;
}
if (ret < 0)
goto out;
ref->inode_list = eie;
+ /*
+ * We transferred the list ownership to the ref,
+ * so set to NULL to avoid a double free in case
+ * an error happens after this.
+ */
+ eie = NULL;
}
ret = ulist_add_merge_ptr(refs, ref->parent,
ref->inode_list,
eie->next = ref->inode_list;
}
eie = NULL;
+ /*
+ * We have transferred the inode list ownership from
+ * this ref to the ref we added to the 'refs' ulist.
+ * So set this ref's inode list to NULL to avoid
+ * use-after-free when our caller uses it or double
+ * frees in case an error happens before we return.
+ */
+ ref->inode_list = NULL;
}
cond_resched();
}
return ret;
}
-static void free_leaf_list(struct ulist *blocks)
-{
- struct ulist_node *node = NULL;
- struct extent_inode_elem *eie;
- struct ulist_iterator uiter;
-
- ULIST_ITER_INIT(&uiter);
- while ((node = ulist_next(blocks, &uiter))) {
- if (!node->aux)
- continue;
- eie = unode_aux_to_inode_list(node);
- free_inode_elem_list(eie);
- node->aux = 0;
- }
-
- ulist_free(blocks);
-}
-
/*
* Finds all leafs with a reference to the specified combination of bytenr and
* offset. key_list_head will point to a list of corresponding keys (caller must
{
struct btrfs_backref_shared_cache_entry *entry;
+ if (!cache->use_cache)
+ return false;
+
if (WARN_ON_ONCE(level >= BTRFS_MAX_LEVEL))
return false;
return false;
*is_shared = entry->is_shared;
+ /*
+ * If the node at this level is shared, then all nodes below are also
+ * shared. Currently some of the nodes below may be marked as not shared
+ * because we have just switched from one leaf to another, and also switched
+ * other nodes above the leaf and below the current level, so mark
+ * them as shared.
+ */
+ if (*is_shared) {
+ for (int i = 0; i < level; i++) {
+ cache->entries[i].is_shared = true;
+ cache->entries[i].gen = entry->gen;
+ }
+ }
return true;
}
struct btrfs_backref_shared_cache_entry *entry;
u64 gen;
+ if (!cache->use_cache)
+ return;
+
if (WARN_ON_ONCE(level >= BTRFS_MAX_LEVEL))
return;
.root_objectid = root->root_key.objectid,
.inum = inum,
.share_count = 0,
+ .have_delayed_delete_refs = false,
};
int level;
/* -1 means we are in the bytenr of the data extent. */
level = -1;
ULIST_ITER_INIT(&uiter);
+ cache->use_cache = true;
while (1) {
bool is_shared;
bool cached;
extent_gen > btrfs_root_last_snapshot(&root->root_item))
break;
+ /*
+ * If our data extent was not directly shared (without multiple
+ * reference items), then it might have a single reference item
+ * with a count > 1 for the same offset, which means there are 2
+ * (or more) file extent items that point to the data extent -
+ * this happens when a file extent item needs to be split and
+ * then one item gets moved to another leaf due to a b+tree leaf
+ * split when inserting some item. In this case the file extent
+ * items may be located in different leaves and therefore some
+ * of the leaves may be referenced through shared subtrees while
+ * others are not. Since our extent buffer cache only works for
+ * a single path (by far the most common case and simpler to
+ * deal with), we can not use it if we have multiple leaves
+ * (which implies multiple paths).
+ */
+ if (level == -1 && tmp->nnodes > 1)
+ cache->use_cache = false;
+
if (level >= 0)
store_backref_shared_cache(cache, root, bytenr,
level, false);
break;
}
shared.share_count = 0;
+ shared.have_delayed_delete_refs = false;
cond_resched();
}
* a given data extent should never exceed the maximum b+tree height.
*/
struct btrfs_backref_shared_cache_entry entries[BTRFS_MAX_LEVEL];
+ bool use_cache;
};
typedef int (iterate_extent_inodes_t)(u64 inum, u64 offset, u64 root,
btrfs_queue_work(fs_info->caching_workers, &caching_ctl->work);
out:
- /* REVIEW */
if (wait && caching_ctl)
ret = btrfs_caching_ctl_wait_done(cache, caching_ctl);
- /* wait_event(caching_ctl->wait, space_cache_v1_done(cache)); */
if (caching_ctl)
btrfs_put_caching_control(caching_ctl);
ssize_t btrfs_do_encoded_write(struct kiocb *iocb, struct iov_iter *from,
const struct btrfs_ioctl_encoded_io_args *encoded);
-ssize_t btrfs_dio_rw(struct kiocb *iocb, struct iov_iter *iter, size_t done_before);
+ssize_t btrfs_dio_read(struct kiocb *iocb, struct iov_iter *iter,
+ size_t done_before);
+struct iomap_dio *btrfs_dio_write(struct kiocb *iocb, struct iov_iter *iter,
+ size_t done_before);
extern const struct dentry_operations btrfs_dentry_operations;
* Return 0 if the superblock checksum type matches the checksum value of that
* algorithm. Pass the raw disk superblock data.
*/
-static int btrfs_check_super_csum(struct btrfs_fs_info *fs_info,
- char *raw_disk_sb)
+int btrfs_check_super_csum(struct btrfs_fs_info *fs_info,
+ const struct btrfs_super_block *disk_sb)
{
- struct btrfs_super_block *disk_sb =
- (struct btrfs_super_block *)raw_disk_sb;
char result[BTRFS_CSUM_SIZE];
SHASH_DESC_ON_STACK(shash, fs_info->csum_shash);
* BTRFS_SUPER_INFO_SIZE range, we expect that the unused space is
* filled with zeros and is included in the checksum.
*/
- crypto_shash_digest(shash, raw_disk_sb + BTRFS_CSUM_SIZE,
+ crypto_shash_digest(shash, (const u8 *)disk_sb + BTRFS_CSUM_SIZE,
BTRFS_SUPER_INFO_SIZE - BTRFS_CSUM_SIZE, result);
if (memcmp(disk_sb->csum, result, fs_info->csum_size))
* We want to check superblock checksum, the type is stored inside.
* Pass the whole disk block of size BTRFS_SUPER_INFO_SIZE (4k).
*/
- if (btrfs_check_super_csum(fs_info, (u8 *)disk_super)) {
+ if (btrfs_check_super_csum(fs_info, disk_super)) {
btrfs_err(fs_info, "superblock checksum mismatch");
err = -EINVAL;
btrfs_release_disk_super(disk_super);
void btrfs_clean_tree_block(struct extent_buffer *buf);
void btrfs_clear_oneshot_options(struct btrfs_fs_info *fs_info);
int btrfs_start_pre_rw_mount(struct btrfs_fs_info *fs_info);
+int btrfs_check_super_csum(struct btrfs_fs_info *fs_info,
+ const struct btrfs_super_block *disk_sb);
int __cold open_ctree(struct super_block *sb,
struct btrfs_fs_devices *fs_devices,
char *options);
}
struct dentry *btrfs_get_dentry(struct super_block *sb, u64 objectid,
- u64 root_objectid, u32 generation,
+ u64 root_objectid, u64 generation,
int check_generation)
{
struct btrfs_fs_info *fs_info = btrfs_sb(sb);
} __attribute__ ((packed));
struct dentry *btrfs_get_dentry(struct super_block *sb, u64 objectid,
- u64 root_objectid, u32 generation,
+ u64 root_objectid, u64 generation,
int check_generation);
struct dentry *btrfs_get_parent(struct dentry *child);
int err;
u64 failed_start;
- while (1) {
+ err = __set_extent_bit(tree, start, end, EXTENT_LOCKED, &failed_start,
+ cached_state, NULL, GFP_NOFS);
+ while (err == -EEXIST) {
+ if (failed_start != start)
+ clear_extent_bit(tree, start, failed_start - 1,
+ EXTENT_LOCKED, cached_state);
+
+ wait_extent_bit(tree, failed_start, end, EXTENT_LOCKED);
err = __set_extent_bit(tree, start, end, EXTENT_LOCKED,
&failed_start, cached_state, NULL,
GFP_NOFS);
- if (err == -EEXIST) {
- wait_extent_bit(tree, failed_start, end, EXTENT_LOCKED);
- start = failed_start;
- } else
- break;
- WARN_ON(start > end);
}
return err;
}
}
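
The rework above attempts to lock the whole range in one call and, on -EEXIST, releases only the prefix it managed to set before waiting on the conflicting sub-range and retrying from the original start. A self-contained userspace model of that pattern, with hypothetical names and a trivial stand-in for the waiting step:

    #include <stdbool.h>
    #include <stdio.h>

    #define RANGE 64
    static bool locked[RANGE];

    /* Returns 0 on success, or -1 with *failed set to the first busy slot.
     * Slots before *failed are left marked, mirroring __set_extent_bit(). */
    static int try_lock_range(int start, int end, int *failed)
    {
        for (int i = start; i <= end; i++) {
            if (locked[i]) {
                *failed = i;
                return -1;
            }
            locked[i] = true;
        }
        return 0;
    }

    static void unlock_range(int start, int end)
    {
        for (int i = start; i <= end; i++)
            locked[i] = false;
    }

    static void lock_range(int start, int end)
    {
        int failed;

        while (try_lock_range(start, end, &failed) != 0) {
            /* Drop the partially acquired prefix... */
            if (failed != start)
                unlock_range(start, failed - 1);
            /* ...wait for the holder, then retry the full range.  The
             * "wait" is simulated by the holder releasing its slot. */
            unlock_range(failed, failed);
        }
    }

    int main(void)
    {
        locked[10] = true;          /* a pre-existing holder */
        lock_range(5, 20);          /* succeeds after the "wait" */
        printf("locked 5..20: %d\n", locked[5] && locked[20]);
        return 0;
    }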
/*
- * If this is a leaf and there are tree mod log users, we may
- * have recorded mod log operations that point to this leaf.
- * So we must make sure no one reuses this leaf's extent before
- * mod log operations are applied to a node, otherwise after
- * rewinding a node using the mod log operations we get an
- * inconsistent btree, as the leaf's extent may now be used as
- * a node or leaf for another different btree.
+ * If there are tree mod log users we may have recorded mod log
+ * operations for this node. If we re-allocate this node we
+ * could replay operations on this node that happened when it
+ * existed in a completely different root. For example if it
+ * was part of root A, then was reallocated to root B, and we
+		 * are doing a btrfs_old_search_slot(root B), we could replay
+ * operations that happened when the block was part of root A,
+ * giving us an inconsistent view of the btree.
+ *
* We are safe from races here because at this point no other
* node or root points to this extent buffer, so if after this
- * check a new tree mod log user joins, it will not be able to
- * find a node pointing to this leaf and record operations that
- * point to this leaf.
+ * check a new tree mod log user joins we will not have an
+ * existing log of operations on this node that we have to
+ * contend with.
*/
- if (btrfs_header_level(buf) == 0 &&
- test_bit(BTRFS_FS_TREE_MOD_LOG_USERS, &fs_info->flags))
+ if (test_bit(BTRFS_FS_TREE_MOD_LOG_USERS, &fs_info->flags))
must_pin = true;
if (must_pin || btrfs_is_zoned(fs_info)) {
write_bytes);
else
btrfs_check_nocow_unlock(BTRFS_I(inode));
+
+ if (nowait && ret == -ENOSPC)
+ ret = -EAGAIN;
break;
}
release_bytes = reserve_bytes;
again:
ret = balance_dirty_pages_ratelimited_flags(inode->i_mapping, bdp_flags);
- if (ret)
+ if (ret) {
+ btrfs_delalloc_release_extents(BTRFS_I(inode), reserve_bytes);
break;
+ }
/*
* This is going to setup the pages array with the number of
loff_t endbyte;
ssize_t err;
unsigned int ilock_flags = 0;
+ struct iomap_dio *dio;
if (iocb->ki_flags & IOCB_NOWAIT)
ilock_flags |= BTRFS_ILOCK_TRY;
* So here we disable page faults in the iov_iter and then retry if we
* got -EFAULT, faulting in the pages before the retry.
*/
-again:
from->nofault = true;
- err = btrfs_dio_rw(iocb, from, written);
+ dio = btrfs_dio_write(iocb, from, written);
from->nofault = false;
+ /*
+ * iomap_dio_complete() will call btrfs_sync_file() if we have a dsync
+ * iocb, and that needs to lock the inode. So unlock it before calling
+ * iomap_dio_complete() to avoid a deadlock.
+ */
+ btrfs_inode_unlock(inode, ilock_flags);
+
+ if (IS_ERR_OR_NULL(dio))
+ err = PTR_ERR_OR_ZERO(dio);
+ else
+ err = iomap_dio_complete(dio);
+
/* No increment (+=) because iomap returns a cumulative value. */
if (err > 0)
written = err;
} else {
fault_in_iov_iter_readable(from, left);
prev_left = left;
- goto again;
+ goto relock;
}
}
- btrfs_inode_unlock(inode, ilock_flags);
-
/*
* If 'err' is -ENOTBLK or we have not written all data, then it means
	 * we must fall back to buffered IO.
*/
pagefault_disable();
to->nofault = true;
- ret = btrfs_dio_rw(iocb, to, read);
+ ret = btrfs_dio_read(iocb, to, read);
to->nofault = false;
pagefault_enable();
*/
status = BLK_STS_RESOURCE;
dip->csums = kcalloc(nr_sectors, fs_info->csum_size, GFP_NOFS);
- if (!dip)
+ if (!dip->csums)
goto out_err;
status = btrfs_lookup_bio_sums(inode, dio_bio, dip->csums);
.bio_set = &btrfs_dio_bioset,
};
-ssize_t btrfs_dio_rw(struct kiocb *iocb, struct iov_iter *iter, size_t done_before)
+ssize_t btrfs_dio_read(struct kiocb *iocb, struct iov_iter *iter, size_t done_before)
{
struct btrfs_dio_data data;
return iomap_dio_rw(iocb, iter, &btrfs_dio_iomap_ops, &btrfs_dio_ops,
- IOMAP_DIO_PARTIAL | IOMAP_DIO_NOSYNC,
- &data, done_before);
+ IOMAP_DIO_PARTIAL, &data, done_before);
+}
+
+struct iomap_dio *btrfs_dio_write(struct kiocb *iocb, struct iov_iter *iter,
+ size_t done_before)
+{
+ struct btrfs_dio_data data;
+
+ return __iomap_dio_rw(iocb, iter, &btrfs_dio_iomap_ops, &btrfs_dio_ops,
+ IOMAP_DIO_PARTIAL, &data, done_before);
}
static int btrfs_fiemap(struct inode *inode, struct fiemap_extent_info *fieinfo,
int ret;
ret = alloc_rbio_parity_pages(rbio);
- if (ret) {
- __free_raid_bio(rbio);
+ if (ret)
return ret;
- }
ret = lock_stripe_add(rbio);
if (ret == 0)
*/
if (rbio_is_full(rbio)) {
ret = full_stripe_write(rbio);
- if (ret)
+ if (ret) {
+ __free_raid_bio(rbio);
goto fail;
+ }
return;
}
list_add_tail(&rbio->plug_list, &plug->rbio_list);
} else {
ret = __raid56_parity_write(rbio);
- if (ret)
+ if (ret) {
+ __free_raid_bio(rbio);
goto fail;
+ }
}
return;
rbio->faila = find_logical_bio_stripe(rbio, bio);
if (rbio->faila == -1) {
- BUG();
- kfree(rbio);
+ btrfs_warn_rl(fs_info,
+ "can not determine the failed stripe number for full stripe %llu",
+ bioc->raid_map[0]);
+ __free_raid_bio(rbio);
return NULL;
}
switch (sctx->proto) {
case 1: return cmd <= BTRFS_SEND_C_MAX_V1;
case 2: return cmd <= BTRFS_SEND_C_MAX_V2;
+ case 3: return cmd <= BTRFS_SEND_C_MAX_V3;
default: return false;
}
}
if (ret < 0)
goto out;
}
- if (sctx->cur_inode_needs_verity) {
+
+ if (proto_cmd_ok(sctx, BTRFS_SEND_C_ENABLE_VERITY)
+ && sctx->cur_inode_needs_verity) {
ret = process_verity(sctx);
if (ret < 0)
goto out;
/*
* First, process the inode as if it was deleted.
*/
- sctx->cur_inode_gen = right_gen;
- sctx->cur_inode_new = false;
- sctx->cur_inode_deleted = true;
- sctx->cur_inode_size = btrfs_inode_size(
- sctx->right_path->nodes[0], right_ii);
- sctx->cur_inode_mode = btrfs_inode_mode(
- sctx->right_path->nodes[0], right_ii);
- ret = process_all_refs(sctx,
- BTRFS_COMPARE_TREE_DELETED);
- if (ret < 0)
- goto out;
+ if (old_nlinks > 0) {
+ sctx->cur_inode_gen = right_gen;
+ sctx->cur_inode_new = false;
+ sctx->cur_inode_deleted = true;
+ sctx->cur_inode_size = btrfs_inode_size(
+ sctx->right_path->nodes[0], right_ii);
+ sctx->cur_inode_mode = btrfs_inode_mode(
+ sctx->right_path->nodes[0], right_ii);
+ ret = process_all_refs(sctx,
+ BTRFS_COMPARE_TREE_DELETED);
+ if (ret < 0)
+ goto out;
+ }
/*
* Now process the inode as if it was new.
#include <linux/types.h>
#define BTRFS_SEND_STREAM_MAGIC "btrfs-stream"
+/* Conditional support for the upcoming protocol version. */
+#ifdef CONFIG_BTRFS_DEBUG
+#define BTRFS_SEND_STREAM_VERSION 3
+#else
#define BTRFS_SEND_STREAM_VERSION 2
+#endif
/*
* In send stream v1, no command is larger than 64K. In send stream v2, no limit
{
struct btrfs_fs_info *fs_info = dev->fs_info;
struct btrfs_super_block *sb;
+ u16 csum_type;
int ret = 0;
/* This should be called with fs still frozen. */
if (IS_ERR(sb))
return PTR_ERR(sb);
+ /* Verify the checksum. */
+ csum_type = btrfs_super_csum_type(sb);
+ if (csum_type != btrfs_super_csum_type(fs_info->super_copy)) {
+ btrfs_err(fs_info, "csum type changed, has %u expect %u",
+ csum_type, btrfs_super_csum_type(fs_info->super_copy));
+ ret = -EUCLEAN;
+ goto out;
+ }
+
+ if (btrfs_check_super_csum(fs_info, sb)) {
+ btrfs_err(fs_info, "csum for on-disk super block no longer matches");
+ ret = -EUCLEAN;
+ goto out;
+ }
+
/* Btrfs_validate_super() includes fsid check against super->fsid. */
ret = btrfs_validate_super(fs_info, sb, 0);
if (ret < 0)
*/
ret = btrfs_find_all_roots(&trans, fs_info, nodesize, 0, &old_roots, false);
if (ret) {
- ulist_free(old_roots);
test_err("couldn't find old roots: %d", ret);
return ret;
}
ret = insert_normal_tree_ref(root, nodesize, nodesize, 0,
BTRFS_FS_TREE_OBJECTID);
- if (ret)
+ if (ret) {
+ ulist_free(old_roots);
return ret;
+ }
ret = btrfs_find_all_roots(&trans, fs_info, nodesize, 0, &new_roots, false);
if (ret) {
ulist_free(old_roots);
- ulist_free(new_roots);
test_err("couldn't find old roots: %d", ret);
return ret;
}
return ret;
}
+ /* btrfs_qgroup_account_extent() always frees the ulists passed to it. */
+ old_roots = NULL;
+ new_roots = NULL;
+
if (btrfs_verify_qgroup_counts(fs_info, BTRFS_FS_TREE_OBJECTID,
nodesize, nodesize)) {
test_err("qgroup counts didn't match expected values");
return -EINVAL;
}
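
All of these test fixes follow from the ownership rule stated in the comment above: btrfs_qgroup_account_extent() always frees the ulists passed to it, so each error path must free only what it still owns. A minimal sketch of that consume-on-call convention, with invented names:

    #include <stdlib.h>

    struct ulist { int dummy; };

    /* The callee always consumes both lists, success or failure. */
    static void account_extent(struct ulist *old, struct ulist *new)
    {
        free(old);
        free(new);
    }

    static int run_one_check(void)
    {
        struct ulist *old_roots = calloc(1, sizeof(*old_roots));
        struct ulist *new_roots = calloc(1, sizeof(*new_roots));

        if (!old_roots || !new_roots) {
            free(old_roots);        /* free only what we still own */
            free(new_roots);
            return -1;
        }

        account_extent(old_roots, new_roots);
        /* Ownership transferred: drop our references immediately. */
        old_roots = NULL;
        new_roots = NULL;
        return 0;
    }

    int main(void)
    {
        return run_one_check() ? 1 : 0;
    }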
- old_roots = NULL;
- new_roots = NULL;
ret = btrfs_find_all_roots(&trans, fs_info, nodesize, 0, &old_roots, false);
if (ret) {
- ulist_free(old_roots);
test_err("couldn't find old roots: %d", ret);
return ret;
}
ret = remove_extent_item(root, nodesize, nodesize);
- if (ret)
+ if (ret) {
+ ulist_free(old_roots);
return -EINVAL;
+ }
ret = btrfs_find_all_roots(&trans, fs_info, nodesize, 0, &new_roots, false);
if (ret) {
ulist_free(old_roots);
- ulist_free(new_roots);
test_err("couldn't find old roots: %d", ret);
return ret;
}
ret = btrfs_find_all_roots(&trans, fs_info, nodesize, 0, &old_roots, false);
if (ret) {
- ulist_free(old_roots);
test_err("couldn't find old roots: %d", ret);
return ret;
}
ret = insert_normal_tree_ref(root, nodesize, nodesize, 0,
BTRFS_FS_TREE_OBJECTID);
- if (ret)
+ if (ret) {
+ ulist_free(old_roots);
return ret;
+ }
ret = btrfs_find_all_roots(&trans, fs_info, nodesize, 0, &new_roots, false);
if (ret) {
ulist_free(old_roots);
- ulist_free(new_roots);
test_err("couldn't find old roots: %d", ret);
return ret;
}
ret = btrfs_find_all_roots(&trans, fs_info, nodesize, 0, &old_roots, false);
if (ret) {
- ulist_free(old_roots);
test_err("couldn't find old roots: %d", ret);
return ret;
}
ret = add_tree_ref(root, nodesize, nodesize, 0,
BTRFS_FIRST_FREE_OBJECTID);
- if (ret)
+ if (ret) {
+ ulist_free(old_roots);
return ret;
+ }
ret = btrfs_find_all_roots(&trans, fs_info, nodesize, 0, &new_roots, false);
if (ret) {
ulist_free(old_roots);
- ulist_free(new_roots);
test_err("couldn't find old roots: %d", ret);
return ret;
}
ret = btrfs_find_all_roots(&trans, fs_info, nodesize, 0, &old_roots, false);
if (ret) {
- ulist_free(old_roots);
test_err("couldn't find old roots: %d", ret);
return ret;
}
ret = remove_extent_ref(root, nodesize, nodesize, 0,
BTRFS_FIRST_FREE_OBJECTID);
- if (ret)
+ if (ret) {
+ ulist_free(old_roots);
return ret;
+ }
ret = btrfs_find_all_roots(&trans, fs_info, nodesize, 0, &new_roots, false);
if (ret) {
ulist_free(old_roots);
- ulist_free(new_roots);
test_err("couldn't find old roots: %d", ret);
return ret;
}
u64 devid;
u64 type;
u8 uuid[BTRFS_UUID_SIZE];
+ int index;
int num_stripes;
int ret;
int i;
logical = key->offset;
length = btrfs_chunk_length(leaf, chunk);
type = btrfs_chunk_type(leaf, chunk);
+ index = btrfs_bg_flags_to_raid_index(type);
num_stripes = btrfs_chunk_num_stripes(leaf, chunk);
#if BITS_PER_LONG == 32
map->io_align = btrfs_chunk_io_align(leaf, chunk);
map->stripe_len = btrfs_chunk_stripe_len(leaf, chunk);
map->type = type;
- map->sub_stripes = btrfs_chunk_sub_stripes(leaf, chunk);
+	/*
+	 * We can't use the sub_stripes value, as profiles other than
+	 * RAID10 may have 0 as sub_stripes on filesystems created by
+	 * older mkfs (<v5.4), which can cause divide-by-zero errors
+	 * later. Since sub_stripes is currently fixed for each profile,
+	 * use the trusted value instead.
+	 */
+ map->sub_stripes = btrfs_raid_array[index].sub_stripes;
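
To see why the on-disk value cannot be trusted, consider a chunk whose sub_stripes was written as 0 by an old mkfs: any later division by it traps. The standalone sketch below models the fix with an illustrative per-profile table; the table mirrors the idea rather than the exact btrfs_raid_array contents.

    #include <stdio.h>

    enum raid_index { RAID0, RAID1, RAID10, NR_PROFILES };

    /* Per-profile constants replace whatever is on disk. */
    static const int trusted_sub_stripes[NR_PROFILES] = {
        [RAID0]  = 1,
        [RAID1]  = 1,
        [RAID10] = 2,
    };

    int main(void)
    {
        int on_disk_sub_stripes = 0;    /* as written by an old mkfs */
        int num_stripes = 4;

        /* num_stripes / on_disk_sub_stripes would divide by zero,
         * so take the profile-derived value instead. */
        (void)on_disk_sub_stripes;
        int sub_stripes = trusted_sub_stripes[RAID0];

        printf("stripe groups: %d\n", num_stripes / sub_stripes);
        return 0;
    }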
map->verified_stripes = 0;
em->orig_block_len = btrfs_calc_stripe_length(em);
for (i = 0; i < num_stripes; i++) {
*/
struct btrfs_bio {
unsigned int mirror_num;
+ struct bvec_iter iter;
/* for direct I/O */
u64 file_offset;
struct btrfs_device *device;
u8 *csum;
u8 csum_inline[BTRFS_BIO_INLINE_CSUM_SIZE];
- struct bvec_iter iter;
/* End I/O information supplied to btrfs_bio_alloc */
btrfs_bio_end_io_t end_io;
dentry = dget(cifs_sb->root);
else {
dentry = path_to_dentry(cifs_sb, path);
- if (IS_ERR(dentry))
+ if (IS_ERR(dentry)) {
+ rc = -ENOENT;
goto oshr_free;
+ }
}
cfid->dentry = dentry;
cfid->tcon = tcon;
free_cached_dir(cfid);
}
+void drop_cached_dir_by_name(const unsigned int xid, struct cifs_tcon *tcon,
+ const char *name, struct cifs_sb_info *cifs_sb)
+{
+ struct cached_fid *cfid = NULL;
+ int rc;
+
+ rc = open_cached_dir(xid, tcon, name, cifs_sb, true, &cfid);
+ if (rc) {
+ cifs_dbg(FYI, "no cached dir found for rmdir(%s)\n", name);
+ return;
+ }
+ spin_lock(&cfid->cfids->cfid_list_lock);
+ if (cfid->has_lease) {
+ cfid->has_lease = false;
+ kref_put(&cfid->refcount, smb2_close_cached_fid);
+ }
+ spin_unlock(&cfid->cfids->cfid_list_lock);
+ close_cached_dir(cfid);
+}
+
void close_cached_dir(struct cached_fid *cfid)
{
kref_put(&cfid->refcount, smb2_close_cached_fid);
{
struct cached_fids *cfids = tcon->cfids;
struct cached_fid *cfid, *q;
- struct list_head entry;
+ LIST_HEAD(entry);
- INIT_LIST_HEAD(&entry);
spin_lock(&cfids->cfid_list_lock);
list_for_each_entry_safe(cfid, q, &cfids->entries, entry) {
- list_del(&cfid->entry);
- list_add(&cfid->entry, &entry);
+ list_move(&cfid->entry, &entry);
cfids->num_entries--;
cfid->is_open = false;
+ cfid->on_list = false;
/* To prevent race with smb2_cached_lease_break() */
kref_get(&cfid->refcount);
}
spin_unlock(&cfids->cfid_list_lock);
list_for_each_entry_safe(cfid, q, &entry, entry) {
- cfid->on_list = false;
list_del(&cfid->entry);
cancel_work_sync(&cfid->lease_break);
if (cfid->has_lease) {
void free_cached_dirs(struct cached_fids *cfids)
{
struct cached_fid *cfid, *q;
- struct list_head entry;
+ LIST_HEAD(entry);
- INIT_LIST_HEAD(&entry);
spin_lock(&cfids->cfid_list_lock);
list_for_each_entry_safe(cfid, q, &cfids->entries, entry) {
cfid->on_list = false;
cfid->is_open = false;
- list_del(&cfid->entry);
- list_add(&cfid->entry, &entry);
+ list_move(&cfid->entry, &entry);
}
spin_unlock(&cfids->cfid_list_lock);
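
Both hunks above replace an open-coded initializer with LIST_HEAD() and a del+add pair with list_move(), while keeping the usual pattern of draining a shared list onto a private one under the lock and walking it afterwards. A compilable userspace sketch of that pattern, with minimal stand-ins for the kernel's list helpers:

    #include <stdio.h>

    struct list_head { struct list_head *next, *prev; };

    #define LIST_HEAD(name) struct list_head name = { &(name), &(name) }

    static void list_del(struct list_head *e)
    {
        e->prev->next = e->next;
        e->next->prev = e->prev;
    }

    static void list_add(struct list_head *e, struct list_head *h)
    {
        e->next = h->next;
        e->prev = h;
        h->next->prev = e;
        h->next = e;
    }

    /* list_move() is exactly the del+add pair it replaced in the diff. */
    static void list_move(struct list_head *e, struct list_head *h)
    {
        list_del(e);
        list_add(e, h);
    }

    int main(void)
    {
        LIST_HEAD(global);
        LIST_HEAD(local);
        struct list_head a, b;

        list_add(&a, &global);
        list_add(&b, &global);

        /* "Under the lock": drain the shared list onto a private one. */
        while (global.next != &global)
            list_move(global.next, &local);

        /* "After unlock": walk the private list at leisure. */
        printf("global empty: %d\n", global.next == &global);
        return 0;
    }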
struct dentry *dentry,
struct cached_fid **cfid);
extern void close_cached_dir(struct cached_fid *cfid);
+extern void drop_cached_dir_by_name(const unsigned int xid,
+ struct cifs_tcon *tcon,
+ const char *name,
+ struct cifs_sb_info *cifs_sb);
extern void close_all_cached_dirs(struct cifs_sb_info *cifs_sb);
extern void invalidate_all_cached_dirs(struct cifs_tcon *tcon);
extern int cached_dir_lease_break(struct cifs_tcon *tcon, __u8 lease_key[16]);
.fiemap = cifs_fiemap,
};
+const char *cifs_get_link(struct dentry *dentry, struct inode *inode,
+ struct delayed_call *done)
+{
+ char *target_path;
+
+ target_path = kmalloc(PATH_MAX, GFP_KERNEL);
+ if (!target_path)
+ return ERR_PTR(-ENOMEM);
+
+ spin_lock(&inode->i_lock);
+ if (likely(CIFS_I(inode)->symlink_target)) {
+ strscpy(target_path, CIFS_I(inode)->symlink_target, PATH_MAX);
+ } else {
+ kfree(target_path);
+ target_path = ERR_PTR(-EOPNOTSUPP);
+ }
+ spin_unlock(&inode->i_lock);
+
+ if (!IS_ERR(target_path))
+ set_delayed_call(done, kfree_link, target_path);
+
+ return target_path;
+}
+
const struct inode_operations cifs_symlink_inode_ops = {
- .get_link = simple_get_link,
+ .get_link = cifs_get_link,
.permission = cifs_permission,
.listxattr = cifs_listxattr,
};
ssize_t rc;
struct cifsFileInfo *cfile = dst_file->private_data;
- if (cfile->swapfile)
- return -EOPNOTSUPP;
+ if (cfile->swapfile) {
+ rc = -EOPNOTSUPP;
+ free_xid(xid);
+ return rc;
+ }
rc = cifs_file_copychunk_range(xid, src_file, off, dst_file, destoff,
len, flags);
#endif /* CONFIG_CIFS_NFSD_EXPORT */
/* when changing internal version - update following two lines at same time */
-#define SMB3_PRODUCT_BUILD 39
-#define CIFS_VERSION "2.39"
+#define SMB3_PRODUCT_BUILD 40
+#define CIFS_VERSION "2.40"
#endif /* _CIFSFS_H */
server->session_key.response = NULL;
server->session_key.len = 0;
kfree(server->hostname);
+ server->hostname = NULL;
task = xchg(&server->tsk, NULL);
if (task)
cifs_dbg(FYI, "cifs_create parent inode = 0x%p name is: %pd and dentry = 0x%p\n",
inode, direntry, direntry);
- if (unlikely(cifs_forced_shutdown(CIFS_SB(inode->i_sb))))
- return -EIO;
+ if (unlikely(cifs_forced_shutdown(CIFS_SB(inode->i_sb)))) {
+ rc = -EIO;
+ goto out_free_xid;
+ }
tlink = cifs_sb_tlink(CIFS_SB(inode->i_sb));
rc = PTR_ERR(tlink);
struct cifsFileInfo *cfile;
__u32 type;
- rc = -EACCES;
xid = get_xid();
- if (!(fl->fl_flags & FL_FLOCK))
- return -ENOLCK;
+ if (!(fl->fl_flags & FL_FLOCK)) {
+ rc = -ENOLCK;
+ free_xid(xid);
+ return rc;
+ }
cfile = (struct cifsFileInfo *)file->private_data;
tcon = tlink_tcon(cfile->tlink);
* if no lock or unlock then nothing to do since we do not
* know what it is
*/
+ rc = -EOPNOTSUPP;
free_xid(xid);
- return -EOPNOTSUPP;
+ return rc;
}
rc = cifs_setlk(file, fl, type, wait_flag, posix_lck, lock, unlock,
struct cifs_writedata *
cifs_writedata_alloc(unsigned int nr_pages, work_func_t complete)
{
+ struct cifs_writedata *writedata = NULL;
struct page **pages =
kcalloc(nr_pages, sizeof(struct page *), GFP_NOFS);
- if (pages)
- return cifs_writedata_direct_alloc(pages, complete);
+ if (pages) {
+ writedata = cifs_writedata_direct_alloc(pages, complete);
+ if (!writedata)
+ kvfree(pages);
+ }
- return NULL;
+ return writedata;
}
struct cifs_writedata *
cifs_uncached_writev_complete);
if (!wdata) {
rc = -ENOMEM;
+ for (i = 0; i < nr_pages; i++)
+ put_page(pagevec[i]);
+ kvfree(pagevec);
add_credits_and_wake_if(server, credits, 0);
break;
}
kfree(cifs_i->symlink_target);
cifs_i->symlink_target = fattr->cf_symlink_target;
fattr->cf_symlink_target = NULL;
-
- if (unlikely(!cifs_i->symlink_target))
- inode->i_link = ERR_PTR(-EOPNOTSUPP);
- else
- inode->i_link = cifs_i->symlink_target;
}
spin_unlock(&inode->i_lock);
if (cfile->symlink_target) {
fattr.cf_symlink_target = kstrdup(cfile->symlink_target, GFP_KERNEL);
- if (!fattr.cf_symlink_target)
- return -ENOMEM;
+ if (!fattr.cf_symlink_target) {
+ rc = -ENOMEM;
+ goto cifs_gfiunix_out;
+ }
}
rc = CIFSSMBUnixQFileInfo(xid, tcon, cfile->fid.netfid, &find_data);
{
struct smb_hdr *buf = (struct smb_hdr *)buffer;
struct smb_com_lock_req *pSMB = (struct smb_com_lock_req *)buf;
+ struct TCP_Server_Info *pserver;
struct cifs_ses *ses;
struct cifs_tcon *tcon;
struct cifsInodeInfo *pCifsInode;
if (!(pSMB->LockType & LOCKING_ANDX_OPLOCK_RELEASE))
return false;
+ /* If server is a channel, select the primary channel */
+ pserver = CIFS_SERVER_IS_CHAN(srv) ? srv->primary_server : srv;
+
/* look up tcon based on tid & uid */
spin_lock(&cifs_tcp_ses_lock);
- list_for_each_entry(ses, &srv->smb_ses_list, smb_ses_list) {
+ list_for_each_entry(ses, &pserver->smb_ses_list, smb_ses_list) {
list_for_each_entry(tcon, &ses->tcon_list, tcon_list) {
if (tcon->tid != buf->Tid)
continue;
cifs_put_tcp_session(chan->server, 0);
}
+ free_xid(xid);
return rc;
}
smb2_rmdir(const unsigned int xid, struct cifs_tcon *tcon, const char *name,
struct cifs_sb_info *cifs_sb)
{
+ drop_cached_dir_by_name(xid, tcon, name, cifs_sb);
return smb2_compound_op(xid, tcon, cifs_sb, name, DELETE, FILE_OPEN,
CREATE_NOT_FILE, ACL_NO_MODE,
NULL, SMB2_OP_RMDIR, NULL, NULL, NULL);
{
struct cifsFileInfo *cfile;
+ drop_cached_dir_by_name(xid, tcon, from_name, cifs_sb);
cifs_get_writable_path(tcon, from_name, FIND_WR_WITH_DELETE, &cfile);
return smb2_set_path_attr(xid, tcon, from_name, to_name,
int
smb2_check_message(char *buf, unsigned int len, struct TCP_Server_Info *server)
{
+ struct TCP_Server_Info *pserver;
struct smb2_hdr *shdr = (struct smb2_hdr *)buf;
struct smb2_pdu *pdu = (struct smb2_pdu *)shdr;
int hdr_size = sizeof(struct smb2_hdr);
__u32 calc_len; /* calculated length */
__u64 mid;
+ /* If server is a channel, select the primary channel */
+ pserver = CIFS_SERVER_IS_CHAN(server) ? server->primary_server : server;
+
/*
* Add function to do table lookup of StructureSize by command
* ie Validate the wct via smb2_struct_sizes table above
/* decrypt frame now that it is completely read in */
spin_lock(&cifs_tcp_ses_lock);
- list_for_each_entry(iter, &server->smb_ses_list, smb_ses_list) {
+ list_for_each_entry(iter, &pserver->smb_ses_list, smb_ses_list) {
if (iter->Suid == le64_to_cpu(thdr->SessionId)) {
ses = iter;
break;
}
static bool
-smb2_is_valid_lease_break(char *buffer)
+smb2_is_valid_lease_break(char *buffer, struct TCP_Server_Info *server)
{
struct smb2_lease_break *rsp = (struct smb2_lease_break *)buffer;
- struct TCP_Server_Info *server;
+ struct TCP_Server_Info *pserver;
struct cifs_ses *ses;
struct cifs_tcon *tcon;
struct cifs_pending_open *open;
cifs_dbg(FYI, "Checking for lease break\n");
+ /* If server is a channel, select the primary channel */
+ pserver = CIFS_SERVER_IS_CHAN(server) ? server->primary_server : server;
+
/* look up tcon based on tid & uid */
spin_lock(&cifs_tcp_ses_lock);
- list_for_each_entry(server, &cifs_tcp_ses_list, tcp_ses_list) {
- list_for_each_entry(ses, &server->smb_ses_list, smb_ses_list) {
- list_for_each_entry(tcon, &ses->tcon_list, tcon_list) {
- spin_lock(&tcon->open_file_lock);
- cifs_stats_inc(
- &tcon->stats.cifs_stats.num_oplock_brks);
- if (smb2_tcon_has_lease(tcon, rsp)) {
- spin_unlock(&tcon->open_file_lock);
- spin_unlock(&cifs_tcp_ses_lock);
- return true;
- }
- open = smb2_tcon_find_pending_open_lease(tcon,
- rsp);
- if (open) {
- __u8 lease_key[SMB2_LEASE_KEY_SIZE];
- struct tcon_link *tlink;
-
- tlink = cifs_get_tlink(open->tlink);
- memcpy(lease_key, open->lease_key,
- SMB2_LEASE_KEY_SIZE);
- spin_unlock(&tcon->open_file_lock);
- spin_unlock(&cifs_tcp_ses_lock);
- smb2_queue_pending_open_break(tlink,
- lease_key,
- rsp->NewLeaseState);
- return true;
- }
+ list_for_each_entry(ses, &pserver->smb_ses_list, smb_ses_list) {
+ list_for_each_entry(tcon, &ses->tcon_list, tcon_list) {
+ spin_lock(&tcon->open_file_lock);
+ cifs_stats_inc(
+ &tcon->stats.cifs_stats.num_oplock_brks);
+ if (smb2_tcon_has_lease(tcon, rsp)) {
spin_unlock(&tcon->open_file_lock);
+ spin_unlock(&cifs_tcp_ses_lock);
+ return true;
+ }
+ open = smb2_tcon_find_pending_open_lease(tcon,
+ rsp);
+ if (open) {
+ __u8 lease_key[SMB2_LEASE_KEY_SIZE];
+ struct tcon_link *tlink;
+
+ tlink = cifs_get_tlink(open->tlink);
+ memcpy(lease_key, open->lease_key,
+ SMB2_LEASE_KEY_SIZE);
+ spin_unlock(&tcon->open_file_lock);
+ spin_unlock(&cifs_tcp_ses_lock);
+ smb2_queue_pending_open_break(tlink,
+ lease_key,
+ rsp->NewLeaseState);
+ return true;
+ }
+ spin_unlock(&tcon->open_file_lock);
- if (cached_dir_lease_break(tcon, rsp->LeaseKey)) {
- spin_unlock(&cifs_tcp_ses_lock);
- return true;
- }
+ if (cached_dir_lease_break(tcon, rsp->LeaseKey)) {
+ spin_unlock(&cifs_tcp_ses_lock);
+ return true;
}
}
}
smb2_is_valid_oplock_break(char *buffer, struct TCP_Server_Info *server)
{
struct smb2_oplock_break *rsp = (struct smb2_oplock_break *)buffer;
+ struct TCP_Server_Info *pserver;
struct cifs_ses *ses;
struct cifs_tcon *tcon;
struct cifsInodeInfo *cinode;
if (rsp->StructureSize !=
smb2_rsp_struct_sizes[SMB2_OPLOCK_BREAK_HE]) {
if (le16_to_cpu(rsp->StructureSize) == 44)
- return smb2_is_valid_lease_break(buffer);
+ return smb2_is_valid_lease_break(buffer, server);
else
return false;
}
cifs_dbg(FYI, "oplock level 0x%x\n", rsp->OplockLevel);
+ /* If server is a channel, select the primary channel */
+ pserver = CIFS_SERVER_IS_CHAN(server) ? server->primary_server : server;
+
/* look up tcon based on tid & uid */
spin_lock(&cifs_tcp_ses_lock);
- list_for_each_entry(ses, &server->smb_ses_list, smb_ses_list) {
+ list_for_each_entry(ses, &pserver->smb_ses_list, smb_ses_list) {
list_for_each_entry(tcon, &ses->tcon_list, tcon_list) {
spin_lock(&tcon->open_file_lock);
p = buf;
spin_lock(&ses->iface_lock);
+ ses->iface_count = 0;
/*
* Go through iface_list and do kref_put to remove
* any unused ifaces. ifaces in use will be removed
kref_put(&iface->refcount, release_iface);
} else
list_add_tail(&info->iface_head, &ses->iface_list);
- spin_unlock(&ses->iface_lock);
ses->iface_count++;
+ spin_unlock(&ses->iface_lock);
ses->iface_last_update = jiffies;
next_iface:
nb_iface++;
smb2_is_network_name_deleted(char *buf, struct TCP_Server_Info *server)
{
struct smb2_hdr *shdr = (struct smb2_hdr *)buf;
+ struct TCP_Server_Info *pserver;
struct cifs_ses *ses;
struct cifs_tcon *tcon;
if (shdr->Status != STATUS_NETWORK_NAME_DELETED)
return;
+ /* If server is a channel, select the primary channel */
+ pserver = CIFS_SERVER_IS_CHAN(server) ? server->primary_server : server;
+
spin_lock(&cifs_tcp_ses_lock);
- list_for_each_entry(ses, &server->smb_ses_list, smb_ses_list) {
+ list_for_each_entry(ses, &pserver->smb_ses_list, smb_ses_list) {
list_for_each_entry(tcon, &ses->tcon_list, tcon_list) {
if (tcon->tid == le32_to_cpu(shdr->Id.SyncId.TreeId)) {
spin_lock(&tcon->tc_lock);
static int
smb2_get_enc_key(struct TCP_Server_Info *server, __u64 ses_id, int enc, u8 *key)
{
+ struct TCP_Server_Info *pserver;
struct cifs_ses *ses;
u8 *ses_enc_key;
+ /* If server is a channel, select the primary channel */
+ pserver = CIFS_SERVER_IS_CHAN(server) ? server->primary_server : server;
+
spin_lock(&cifs_tcp_ses_lock);
- list_for_each_entry(server, &cifs_tcp_ses_list, tcp_ses_list) {
- list_for_each_entry(ses, &server->smb_ses_list, smb_ses_list) {
- if (ses->Suid == ses_id) {
- spin_lock(&ses->ses_lock);
- ses_enc_key = enc ? ses->smb3encryptionkey :
- ses->smb3decryptionkey;
- memcpy(key, ses_enc_key, SMB3_ENC_DEC_KEY_SIZE);
- spin_unlock(&ses->ses_lock);
- spin_unlock(&cifs_tcp_ses_lock);
- return 0;
- }
+ list_for_each_entry(ses, &pserver->smb_ses_list, smb_ses_list) {
+ if (ses->Suid == ses_id) {
+ spin_lock(&ses->ses_lock);
+ ses_enc_key = enc ? ses->smb3encryptionkey :
+ ses->smb3decryptionkey;
+ memcpy(key, ses_enc_key, SMB3_ENC_DEC_KEY_SIZE);
+ spin_unlock(&ses->ses_lock);
+ spin_unlock(&cifs_tcp_ses_lock);
+ return 0;
}
}
spin_unlock(&cifs_tcp_ses_lock);
static void
SMB2_sess_free_buffer(struct SMB2_sess_data *sess_data)
{
- int i;
+ struct kvec *iov = sess_data->iov;
- /* zero the session data before freeing, as it might contain sensitive info (keys, etc) */
- for (i = 0; i < 2; i++)
- if (sess_data->iov[i].iov_base)
- memzero_explicit(sess_data->iov[i].iov_base, sess_data->iov[i].iov_len);
+ /* iov[1] is already freed by caller */
+ if (sess_data->buf0_type != CIFS_NO_BUFFER && iov[0].iov_base)
+ memzero_explicit(iov[0].iov_base, iov[0].iov_len);
- free_rsp_buf(sess_data->buf0_type, sess_data->iov[0].iov_base);
+ free_rsp_buf(sess_data->buf0_type, iov[0].iov_base);
sess_data->buf0_type = CIFS_NO_BUFFER;
}
&blob_length, ses, server,
sess_data->nls_cp);
if (rc)
- goto out_err;
+ goto out;
if (use_spnego) {
/* BB eventually need to add this */
}
out:
- memzero_explicit(ntlmssp_blob, blob_length);
+ kfree_sensitive(ntlmssp_blob);
SMB2_sess_free_buffer(sess_data);
if (!rc) {
sess_data->result = 0;
}
#endif
out:
- memzero_explicit(ntlmssp_blob, blob_length);
+ kfree_sensitive(ntlmssp_blob);
SMB2_sess_free_buffer(sess_data);
kfree_sensitive(ses->ntlmssp);
ses->ntlmssp = NULL;
int smb2_get_sign_key(__u64 ses_id, struct TCP_Server_Info *server, u8 *key)
{
struct cifs_chan *chan;
+ struct TCP_Server_Info *pserver;
struct cifs_ses *ses = NULL;
- struct TCP_Server_Info *it = NULL;
int i;
int rc = 0;
spin_lock(&cifs_tcp_ses_lock);
- list_for_each_entry(it, &cifs_tcp_ses_list, tcp_ses_list) {
- list_for_each_entry(ses, &it->smb_ses_list, smb_ses_list) {
- if (ses->Suid == ses_id)
- goto found;
- }
+ /* If server is a channel, select the primary channel */
+ pserver = CIFS_SERVER_IS_CHAN(server) ? server->primary_server : server;
+
+ list_for_each_entry(ses, &pserver->smb_ses_list, smb_ses_list) {
+ if (ses->Suid == ses_id)
+ goto found;
}
cifs_server_dbg(VFS, "%s: Could not find session 0x%llx\n",
__func__, ses_id);
static struct cifs_ses *
smb2_find_smb_ses_unlocked(struct TCP_Server_Info *server, __u64 ses_id)
{
+ struct TCP_Server_Info *pserver;
struct cifs_ses *ses;
- list_for_each_entry(ses, &server->smb_ses_list, smb_ses_list) {
+ /* If server is a channel, select the primary channel */
+ pserver = CIFS_SERVER_IS_CHAN(server) ? server->primary_server : server;
+
+ list_for_each_entry(ses, &pserver->smb_ses_list, smb_ses_list) {
if (ses->Suid != ses_id)
continue;
++ses->ses_count;
}
/*
- * This is called at unmount time to release all encryption keys that have been
- * added to the filesystem, along with the keyring that contains them.
+ * Release all encryption keys that have been added to the filesystem, along
+ * with the keyring that contains them.
*
- * Note that besides clearing and freeing memory, this might need to evict keys
- * from the keyslots of an inline crypto engine. Therefore, this must be called
- * while the filesystem's underlying block device(s) are still available.
+ * This is called at unmount time. The filesystem's underlying block device(s)
+ * are still available at this time; this is important because after user file
+ * accesses have been allowed, this function may need to evict keys from the
+ * keyslots of an inline crypto engine, which requires the block device(s).
+ *
+ * This is also called when the super_block is being freed. This is needed to
+ * avoid a memory leak if mounting fails after the "test_dummy_encryption"
+ * option was processed, as in that case the unmount-time call isn't made.
*/
-void fscrypt_sb_delete(struct super_block *sb)
+void fscrypt_destroy_keyring(struct super_block *sb)
{
struct fscrypt_keyring *keyring = sb->s_master_keys;
size_t i;
if (err)
return err;
- /*
- * Ensure that the available space hasn't shrunk below the safe level
- */
- status = check_var_size(attributes, *size + ucs2_strsize(name, 1024));
- if (status != EFI_SUCCESS) {
- if (status != EFI_UNSUPPORTED) {
- err = efi_status_to_err(status);
- goto out;
- }
-
- if (*size > 65536) {
- err = -ENOSPC;
- goto out;
- }
- }
-
status = efivar_set_variable_locked(name, vendor, attributes, *size,
data, false);
if (status != EFI_SUCCESS) {
struct super_block *psb = erofs_pseudo_mnt->mnt_sb;
mutex_lock(&erofs_domain_cookies_lock);
+ spin_lock(&psb->s_inode_list_lock);
list_for_each_entry(inode, &psb->s_inodes, i_sb_list) {
ctx = inode->i_private;
if (!ctx || ctx->domain != domain || strcmp(ctx->name, name))
continue;
igrab(inode);
+ spin_unlock(&psb->s_inode_list_lock);
mutex_unlock(&erofs_domain_cookies_lock);
return ctx;
}
+ spin_unlock(&psb->s_inode_list_lock);
ctx = erofs_fscache_domain_init_cookie(sb, name, need_inode);
mutex_unlock(&erofs_domain_cookies_lock);
return ctx;
++spiltted;
if (fe->pcl->pageofs_out != (map->m_la & ~PAGE_MASK))
fe->pcl->multibases = true;
-
- if ((map->m_flags & EROFS_MAP_FULL_MAPPED) &&
- !(map->m_flags & EROFS_MAP_PARTIAL_REF) &&
- fe->pcl->length == map->m_llen)
- fe->pcl->partial = false;
if (fe->pcl->length < offset + end - map->m_la) {
fe->pcl->length = offset + end - map->m_la;
fe->pcl->pageofs_out = map->m_la & ~PAGE_MASK;
}
+ if ((map->m_flags & EROFS_MAP_FULL_MAPPED) &&
+ !(map->m_flags & EROFS_MAP_PARTIAL_REF) &&
+ fe->pcl->length == map->m_llen)
+ fe->pcl->partial = false;
next_part:
/* shorten the remaining extent to update progress */
map->m_llen = offset + cur - map->m_la;
if (!((bvec->offset + be->pcl->pageofs_out) & ~PAGE_MASK)) {
unsigned int pgnr;
- struct page *oldpage;
pgnr = (bvec->offset + be->pcl->pageofs_out) >> PAGE_SHIFT;
DBG_BUGON(pgnr >= be->nr_pages);
- oldpage = be->decompressed_pages[pgnr];
- be->decompressed_pages[pgnr] = bvec->page;
-
- if (!oldpage)
+ if (!be->decompressed_pages[pgnr]) {
+ be->decompressed_pages[pgnr] = bvec->page;
return;
+ }
}
/* (cold path) one pcluster is requested multiple times */
}
/*
- * bit 31: I/O error occurred on this page
- * bit 0 - 30: remaining parts to complete this page
+ * bit 30: I/O error occurred on this page
+ * bit 0 - 29: remaining parts to complete this page
*/
-#define Z_EROFS_PAGE_EIO (1 << 31)
+#define Z_EROFS_PAGE_EIO (1 << 30)
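
The constant above packs a sticky I/O error flag together with a count of outstanding parts into a single word, with the flag now in bit 30 and the counter confined to bits 0-29. A small non-atomic userspace model of that packing (the kernel updates the word atomically; these function names are invented):

    #include <stdio.h>

    #define PAGE_EIO (1u << 30)

    static unsigned int page_state;

    static void page_init(unsigned int parts)
    {
        page_state = parts;         /* no error, 'parts' outstanding */
    }

    /* Called once per completed part; returns 1 when the page is done. */
    static int page_endio(int error)
    {
        if (error)
            page_state |= PAGE_EIO;
        page_state--;               /* counter lives below the flag */
        return (page_state & ~PAGE_EIO) == 0;
    }

    int main(void)
    {
        page_init(3);
        page_endio(0);
        page_endio(1);              /* one part failed */
        if (page_endio(0))
            printf("done, eio=%d\n", !!(page_state & PAGE_EIO));
        return 0;
    }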
static inline void z_erofs_onlinepage_init(struct page *page)
{
pos = ALIGN(iloc(EROFS_SB(sb), vi->nid) + vi->inode_isize +
vi->xattr_isize, 8);
- kaddr = erofs_read_metabuf(&buf, sb, erofs_blknr(pos),
- EROFS_KMAP_ATOMIC);
+ kaddr = erofs_read_metabuf(&buf, sb, erofs_blknr(pos), EROFS_KMAP);
if (IS_ERR(kaddr)) {
err = PTR_ERR(kaddr);
goto out_unlock;
vi->z_advise = Z_EROFS_ADVISE_FRAGMENT_PCLUSTER;
vi->z_fragmentoff = le64_to_cpu(*(__le64 *)h) ^ (1ULL << 63);
vi->z_tailextent_headlcn = 0;
- goto unmap_done;
+ goto done;
}
vi->z_advise = le16_to_cpu(h->h_advise);
vi->z_algorithmtype[0] = h->h_algorithmtype & 15;
erofs_err(sb, "unknown HEAD%u format %u for nid %llu, please upgrade kernel",
headnr + 1, vi->z_algorithmtype[headnr], vi->nid);
err = -EOPNOTSUPP;
- goto unmap_done;
+ goto out_put_metabuf;
}
vi->z_logical_clusterbits = LOG_BLOCK_SIZE + (h->h_clusterbits & 7);
erofs_err(sb, "per-inode big pcluster without sb feature for nid %llu",
vi->nid);
err = -EFSCORRUPTED;
- goto unmap_done;
+ goto out_put_metabuf;
}
if (vi->datalayout == EROFS_INODE_FLAT_COMPRESSION &&
!(vi->z_advise & Z_EROFS_ADVISE_BIG_PCLUSTER_1) ^
erofs_err(sb, "big pcluster head1/2 of compact indexes should be consistent for nid %llu",
vi->nid);
err = -EFSCORRUPTED;
- goto unmap_done;
+ goto out_put_metabuf;
}
-unmap_done:
- erofs_put_metabuf(&buf);
- if (err)
- goto out_unlock;
if (vi->z_advise & Z_EROFS_ADVISE_INLINE_PCLUSTER) {
struct erofs_map_blocks map = {
err = -EFSCORRUPTED;
}
if (err < 0)
- goto out_unlock;
+ goto out_put_metabuf;
}
if (vi->z_advise & Z_EROFS_ADVISE_FRAGMENT_PCLUSTER &&
EROFS_GET_BLOCKS_FINDTAIL);
erofs_put_metabuf(&map.buf);
if (err < 0)
- goto out_unlock;
+ goto out_put_metabuf;
}
+done:
/* paired with smp_mb() at the beginning of the function */
smp_mb();
set_bit(EROFS_I_Z_INITED_BIT, &vi->flags);
+out_put_metabuf:
+ erofs_put_metabuf(&buf);
out_unlock:
clear_and_wake_up_bit(EROFS_I_BL_Z_BIT, &vi->flags);
return err;
active_mm = tsk->active_mm;
tsk->active_mm = mm;
tsk->mm = mm;
- lru_gen_add_mm(mm);
/*
* This prevents preemption while active_mm is being loaded and
* it and mm are being updated, which could cause problems for
activate_mm(active_mm, mm);
if (IS_ENABLED(CONFIG_ARCH_WANT_IRQS_OFF_ACTIVATE_MM))
local_irq_enable();
+ lru_gen_add_mm(mm);
task_unlock(tsk);
lru_gen_use_mm(mm);
if (old_mm) {
return -ENOMEM;
refcount_set(&newsighand->count, 1);
- memcpy(newsighand->action, oldsighand->action,
- sizeof(newsighand->action));
write_lock_irq(&tasklist_lock);
spin_lock(&oldsighand->siglock);
+ memcpy(newsighand->action, oldsighand->action,
+ sizeof(newsighand->action));
rcu_assign_pointer(me->sighand, newsighand);
spin_unlock(&oldsighand->siglock);
write_unlock_irq(&tasklist_lock);
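
The hunk above moves the memcpy() of the handler table inside the siglock, so the snapshot can no longer race with a concurrent handler update. The same discipline in a plain pthreads sketch, with invented names:

    #include <pthread.h>
    #include <string.h>
    #include <stdio.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static int shared_actions[4];

    static void update_action(int idx, int val)
    {
        pthread_mutex_lock(&lock);
        shared_actions[idx] = val;
        pthread_mutex_unlock(&lock);
    }

    static void snapshot_actions(int out[4])
    {
        pthread_mutex_lock(&lock);  /* copy under the writers' lock */
        memcpy(out, shared_actions, sizeof(shared_actions));
        pthread_mutex_unlock(&lock);
    }

    int main(void)
    {
        int snap[4];

        update_action(1, 42);
        snapshot_actions(snap);
        printf("snap[1]=%d\n", snap[1]);
        return 0;
    }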
struct ext4_iloc iloc;
int inode_len, ino, ret, tag = tl->fc_tag;
struct ext4_extent_header *eh;
+ size_t off_gen = offsetof(struct ext4_inode, i_generation);
memcpy(&fc_inode, val, sizeof(fc_inode));
raw_inode = ext4_raw_inode(&iloc);
memcpy(raw_inode, raw_fc_inode, offsetof(struct ext4_inode, i_block));
- memcpy(&raw_inode->i_generation, &raw_fc_inode->i_generation,
- inode_len - offsetof(struct ext4_inode, i_generation));
+ memcpy((u8 *)raw_inode + off_gen, (u8 *)raw_fc_inode + off_gen,
+ inode_len - off_gen);
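
The fix derives both the destination offset and the length of the tail copy from the same offsetof() expression, so the two can no longer disagree. A toy, self-contained illustration of the two-span copy; the struct is invented and only mimics the mode/i_block/i_generation layout:

    #include <stddef.h>
    #include <string.h>
    #include <stdio.h>

    struct toy_inode {
        int mode;
        int block[4];           /* handled separately, like i_block */
        int generation;
        int extra;
    };

    int main(void)
    {
        struct toy_inode src = { 1, { 2, 3, 4, 5 }, 6, 7 }, dst = { 0 };
        size_t inode_len = sizeof(src);
        size_t off_gen = offsetof(struct toy_inode, generation);

        /* Prefix: everything before the specially-handled field. */
        memcpy(&dst, &src, offsetof(struct toy_inode, block));
        /* Tail: from 'generation' to the end of the inode. */
        memcpy((char *)&dst + off_gen, (char *)&src + off_gen,
               inode_len - off_gen);

        printf("mode=%d gen=%d extra=%d\n", dst.mode, dst.generation,
               dst.extra);
        return 0;
    }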
if (le32_to_cpu(raw_inode->i_flags) & EXT4_EXTENTS_FL) {
eh = (struct ext4_extent_header *)(&raw_inode->i_block[0]);
if (eh->eh_magic != EXT4_EXT_MAGIC) {
if (ext4_has_metadata_csum(sb) &&
es->s_checksum != ext4_superblock_csum(sb, es)) {
ext4_msg(sb, KERN_ERR, "Invalid checksum for backup "
- "superblock %llu\n", sb_block);
+ "superblock %llu", sb_block);
unlock_buffer(bh);
- err = -EFSBADCRC;
goto out_bh;
}
func(es, arg);
* already is extent-based, error out.
*/
if (!ext4_has_feature_extents(inode->i_sb) ||
- (ext4_test_inode_flag(inode, EXT4_INODE_EXTENTS)))
+ ext4_test_inode_flag(inode, EXT4_INODE_EXTENTS) ||
+ ext4_has_inline_data(inode))
return -EINVAL;
if (S_ISLNK(inode->i_mode) && inode->i_blocks == 0)
memset(de, 0, len); /* wipe old data */
de = (struct ext4_dir_entry_2 *) data2;
top = data2 + len;
- while ((char *)(de2 = ext4_next_entry(de, blocksize)) < top)
+ while ((char *)(de2 = ext4_next_entry(de, blocksize)) < top) {
+ if (ext4_check_dir_entry(dir, NULL, de, bh2, data2, len,
+ (data2 + (blocksize - csum_size) -
+ (char *) de))) {
+ brelse(bh2);
+ brelse(bh);
+ return -EFSCORRUPTED;
+ }
de = de2;
+ }
de->rec_len = ext4_rec_len_to_disk(data2 + (blocksize - csum_size) -
(char *) de, blocksize);
while (group < sbi->s_groups_count) {
struct buffer_head *bh;
ext4_fsblk_t backup_block;
+ struct ext4_super_block *es;
/* Out of journal space, and can't get more - abort - so sad */
err = ext4_resize_ensure_credits_batch(handle, 1);
memcpy(bh->b_data, data, size);
if (rest)
memset(bh->b_data + size, 0, rest);
+ es = (struct ext4_super_block *) bh->b_data;
+ es->s_block_group_nr = cpu_to_le16(group);
+ if (ext4_has_metadata_csum(sb))
+ es->s_checksum = ext4_superblock_csum(sb, es);
set_buffer_uptodate(bh);
unlock_buffer(bh);
err = ext4_handle_dirty_metadata(handle, NULL, bh);
#define DEFAULT_JOURNAL_IOPRIO (IOPRIO_PRIO_VALUE(IOPRIO_CLASS_BE, 3))
-static const char deprecated_msg[] =
- "Mount option \"%s\" will be removed by %s\n"
- "Contact linux-ext4@vger.kernel.org if you think we should keep it.\n";
-
#define MOPT_SET 0x0001
#define MOPT_CLEAR 0x0002
#define MOPT_NOSUPPORT 0x0004
flush_work(&sbi->s_error_work);
jbd2_journal_destroy(sbi->s_journal);
sbi->s_journal = NULL;
- return err;
+ return -EINVAL;
}
static int ext4_journal_data_mode_check(struct super_block *sb)
goto out;
}
+ err = file_modified(file);
+ if (err)
+ goto out;
+
if (!(mode & FALLOC_FL_KEEP_SIZE))
set_bit(FUSE_I_SIZE_UNSTABLE, &fi->state);
goto unlock;
addr = kmap_local_page(page);
- if (!offset)
+ if (!offset) {
clear_page(addr);
+ SetPageUptodate(page);
+ }
memcpy(addr + offset, dirent, reclen);
kunmap_local(addr);
fi->rdc.size = (index << PAGE_SHIFT) + offset + reclen;
page = find_get_page_flags(file->f_mapping, index,
FGP_ACCESSED | FGP_LOCK);
+ /* Page gone missing, then re-added to cache, but not initialized? */
+ if (page && !PageUptodate(page)) {
+ unlock_page(page);
+ put_page(page);
+ page = NULL;
+ }
spin_lock(&fi->rdc.lock);
if (!page) {
/*
static struct nfs_client *nfs_match_client(const struct nfs_client_initdata *data)
{
struct nfs_client *clp;
- const struct sockaddr *sap = data->addr;
+ const struct sockaddr *sap = (struct sockaddr *)data->addr;
struct nfs_net *nn = net_generic(data->net, nfs_net_id);
int error;
struct rpc_timeout timeparms;
struct nfs_client_initdata cl_init = {
.hostname = ctx->nfs_server.hostname,
- .addr = (const struct sockaddr *)&ctx->nfs_server.address,
+ .addr = &ctx->nfs_server._address,
.addrlen = ctx->nfs_server.addrlen,
.nfs_mod = ctx->nfs_mod,
.proto = ctx->nfs_server.protocol,
*
*/
void nfs_inode_reclaim_delegation(struct inode *inode, const struct cred *cred,
- fmode_t type,
- const nfs4_stateid *stateid,
+ fmode_t type, const nfs4_stateid *stateid,
unsigned long pagemod_limit)
{
struct nfs_delegation *delegation;
delegation = rcu_dereference(NFS_I(inode)->delegation);
if (delegation != NULL) {
spin_lock(&delegation->lock);
- if (nfs4_is_valid_delegation(delegation, 0)) {
- nfs4_stateid_copy(&delegation->stateid, stateid);
- delegation->type = type;
- delegation->pagemod_limit = pagemod_limit;
- oldcred = delegation->cred;
- delegation->cred = get_cred(cred);
- clear_bit(NFS_DELEGATION_NEED_RECLAIM,
- &delegation->flags);
- spin_unlock(&delegation->lock);
- rcu_read_unlock();
- put_cred(oldcred);
- trace_nfs4_reclaim_delegation(inode, type);
- return;
- }
- /* We appear to have raced with a delegation return. */
+ nfs4_stateid_copy(&delegation->stateid, stateid);
+ delegation->type = type;
+ delegation->pagemod_limit = pagemod_limit;
+ oldcred = delegation->cred;
+ delegation->cred = get_cred(cred);
+ clear_bit(NFS_DELEGATION_NEED_RECLAIM, &delegation->flags);
+ if (test_and_clear_bit(NFS_DELEGATION_REVOKED,
+ &delegation->flags))
+ atomic_long_inc(&nfs_active_delegations);
spin_unlock(&delegation->lock);
+ rcu_read_unlock();
+ put_cred(oldcred);
+ trace_nfs4_reclaim_delegation(inode, type);
+ } else {
+ rcu_read_unlock();
+ nfs_inode_set_delegation(inode, cred, type, stateid,
+ pagemod_limit);
}
- rcu_read_unlock();
- nfs_inode_set_delegation(inode, cred, type, stateid, pagemod_limit);
}
static int nfs_do_return_delegation(struct inode *inode, struct nfs_delegation *delegation, int issync)
spin_unlock(&dentry->d_lock);
goto out;
}
- if (dentry->d_fsdata)
- /* old devname */
- kfree(dentry->d_fsdata);
+ /* old devname */
+ kfree(dentry->d_fsdata);
dentry->d_fsdata = NFS_FSDATA_BLOCKED;
spin_unlock(&dentry->d_lock);
#include "dns_resolve.h"
ssize_t nfs_dns_resolve_name(struct net *net, char *name, size_t namelen,
- struct sockaddr *sa, size_t salen)
+ struct sockaddr_storage *ss, size_t salen)
{
+ struct sockaddr *sa = (struct sockaddr *)ss;
ssize_t ret;
char *ip_addr = NULL;
int ip_len;
}
ssize_t nfs_dns_resolve_name(struct net *net, char *name,
- size_t namelen, struct sockaddr *sa, size_t salen)
+ size_t namelen, struct sockaddr_storage *ss, size_t salen)
{
struct nfs_dns_ent key = {
.hostname = name,
ret = do_cache_lookup_wait(nn->nfs_dns_resolve, &key, &item);
if (ret == 0) {
if (salen >= item->addrlen) {
- memcpy(sa, &item->addr, item->addrlen);
+ memcpy(ss, &item->addr, item->addrlen);
ret = item->addrlen;
} else
ret = -EOVERFLOW;
#endif
extern ssize_t nfs_dns_resolve_name(struct net *net, char *name,
- size_t namelen, struct sockaddr *sa, size_t salen);
+ size_t namelen, struct sockaddr_storage *sa, size_t salen);
#endif
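
This series switches the NFS client interfaces from struct sockaddr * to struct sockaddr_storage *, which is guaranteed large and aligned enough for any address family, with casts back to struct sockaddr * confined to RPC call boundaries. A standalone illustration of why sockaddr_storage is the right buffer type:

    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <string.h>
    #include <stdio.h>

    /* One buffer type holds either an IPv4 or an IPv6 address. */
    static int fill_addr(struct sockaddr_storage *ss, const char *ip)
    {
        struct sockaddr_in *sin = (struct sockaddr_in *)ss;
        struct sockaddr_in6 *sin6 = (struct sockaddr_in6 *)ss;

        memset(ss, 0, sizeof(*ss));
        if (inet_pton(AF_INET, ip, &sin->sin_addr) == 1) {
            ss->ss_family = AF_INET;
            return 0;
        }
        if (inet_pton(AF_INET6, ip, &sin6->sin6_addr) == 1) {
            ss->ss_family = AF_INET6;
            return 0;
        }
        return -1;
    }

    int main(void)
    {
        struct sockaddr_storage ss;

        if (fill_addr(&ss, "::1") == 0)
            printf("family=%d (AF_INET6=%d)\n", ss.ss_family, AF_INET6);
        /* Cast down only at API boundaries that want struct sockaddr *:
         * connect(fd, (struct sockaddr *)&ss, sizeof(ss)); */
        return 0;
    }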
* Address family must be initialized, and address must not be
* the ANY address for that family.
*/
-static int nfs_verify_server_address(struct sockaddr *addr)
+static int nfs_verify_server_address(struct sockaddr_storage *addr)
{
- switch (addr->sa_family) {
+ switch (addr->ss_family) {
case AF_INET: {
struct sockaddr_in *sa = (struct sockaddr_in *)addr;
return sa->sin_addr.s_addr != htonl(INADDR_ANY);
{
struct nfs_fs_context *ctx = nfs_fc2context(fc);
struct nfs_fh *mntfh = ctx->mntfh;
- struct sockaddr *sap = (struct sockaddr *)&ctx->nfs_server.address;
+ struct sockaddr_storage *sap = &ctx->nfs_server._address;
int extra_flags = NFS_MOUNT_LEGACY_INTERFACE;
int ret;
memcpy(sap, &data->addr, sizeof(data->addr));
ctx->nfs_server.addrlen = sizeof(data->addr);
ctx->nfs_server.port = ntohs(data->addr.sin_port);
- if (sap->sa_family != AF_INET ||
+ if (sap->ss_family != AF_INET ||
!nfs_verify_server_address(sap))
goto out_no_address;
struct nfs4_mount_data *data)
{
struct nfs_fs_context *ctx = nfs_fc2context(fc);
- struct sockaddr *sap = (struct sockaddr *)&ctx->nfs_server.address;
+ struct sockaddr_storage *sap = &ctx->nfs_server._address;
int ret;
char *c;
{
struct nfs_fs_context *ctx = nfs_fc2context(fc);
struct nfs_subversion *nfs_mod;
- struct sockaddr *sap = (struct sockaddr *)&ctx->nfs_server.address;
+ struct sockaddr_storage *sap = &ctx->nfs_server._address;
int max_namelen = PAGE_SIZE;
int max_pathlen = NFS_MAXPATHLEN;
int port = 0;
ctx->version = nfss->nfs_client->rpc_ops->version;
ctx->minorversion = nfss->nfs_client->cl_minorversion;
- memcpy(&ctx->nfs_server.address, &nfss->nfs_client->cl_addr,
+ memcpy(&ctx->nfs_server._address, &nfss->nfs_client->cl_addr,
ctx->nfs_server.addrlen);
if (fc->net_ns != net) {
struct nfs_client_initdata {
unsigned long init_flags;
const char *hostname; /* Hostname of the server */
- const struct sockaddr *addr; /* Address of the server */
+ const struct sockaddr_storage *addr; /* Address of the server */
const char *nodename; /* Hostname of the client */
const char *ip_addr; /* IP address of the client */
size_t addrlen;
/* mount_clnt.c */
struct nfs_mount_request {
- struct sockaddr *sap;
+ struct sockaddr_storage *sap;
size_t salen;
char *hostname;
char *dirpath;
extern struct nfs_server *nfs4_create_server(struct fs_context *);
extern struct nfs_server *nfs4_create_referral_server(struct fs_context *);
extern int nfs4_update_server(struct nfs_server *server, const char *hostname,
- struct sockaddr *sap, size_t salen,
+ struct sockaddr_storage *sap, size_t salen,
struct net *net);
extern void nfs_free_server(struct nfs_server *server);
extern struct nfs_server *nfs_clone_server(struct nfs_server *,
extern int nfs_wait_client_init_complete(const struct nfs_client *clp);
extern void nfs_mark_client_ready(struct nfs_client *clp, int state);
extern struct nfs_client *nfs4_set_ds_client(struct nfs_server *mds_srv,
- const struct sockaddr *ds_addr,
+ const struct sockaddr_storage *ds_addr,
int ds_addrlen, int ds_proto,
unsigned int ds_timeo,
unsigned int ds_retrans,
extern struct rpc_clnt *nfs4_find_or_create_ds_client(struct nfs_client *,
struct inode *);
extern struct nfs_client *nfs3_set_ds_client(struct nfs_server *mds_srv,
- const struct sockaddr *ds_addr, int ds_addrlen,
+ const struct sockaddr_storage *ds_addr, int ds_addrlen,
int ds_proto, unsigned int ds_timeo,
unsigned int ds_retrans);
#ifdef CONFIG_PROC_FS
* Select between a default port value and a user-specified port value.
* If a zero value is set, then autobind will be used.
*/
-static inline void nfs_set_port(struct sockaddr *sap, int *port,
+static inline void nfs_set_port(struct sockaddr_storage *sap, int *port,
const unsigned short default_port)
{
if (*port == NFS_UNSPEC_PORT)
*port = default_port;
- rpc_set_port(sap, *port);
+ rpc_set_port((struct sockaddr *)sap, *port);
}
struct nfs_direct_req {
struct rpc_create_args args = {
.net = info->net,
.protocol = info->protocol,
- .address = info->sap,
+ .address = (struct sockaddr *)info->sap,
.addrsize = info->salen,
.timeout = &mnt_timeout,
.servername = info->hostname,
struct rpc_create_args args = {
.net = info->net,
.protocol = IPPROTO_UDP,
- .address = info->sap,
+ .address = (struct sockaddr *)info->sap,
.addrsize = info->salen,
.timeout = &nfs_umnt_timeout,
.servername = info->hostname,
}
/* for submounts we want the same server; referrals will reassign */
- memcpy(&ctx->nfs_server.address, &client->cl_addr, client->cl_addrlen);
+ memcpy(&ctx->nfs_server._address, &client->cl_addr, client->cl_addrlen);
ctx->nfs_server.addrlen = client->cl_addrlen;
ctx->nfs_server.port = server->port;
* the MDS.
*/
struct nfs_client *nfs3_set_ds_client(struct nfs_server *mds_srv,
- const struct sockaddr *ds_addr, int ds_addrlen,
+ const struct sockaddr_storage *ds_addr, int ds_addrlen,
int ds_proto, unsigned int ds_timeo, unsigned int ds_retrans)
{
struct rpc_timeout ds_timeout;
char buf[INET6_ADDRSTRLEN + 1];
/* fake a hostname because lockd wants it */
- if (rpc_ntop(ds_addr, buf, sizeof(buf)) <= 0)
+ if (rpc_ntop((struct sockaddr *)ds_addr, buf, sizeof(buf)) <= 0)
return ERR_PTR(-EINVAL);
cl_init.hostname = buf;
&args.seq_args, &res.seq_res, 0);
trace_nfs4_clone(src_inode, dst_inode, &args, status);
if (status == 0) {
+ /* a zero-length count means clone to EOF in src */
+ if (count == 0 && res.dst_fattr->valid & NFS_ATTR_FATTR_SIZE)
+ count = nfs_size_to_loff_t(res.dst_fattr->size) - dst_offset;
nfs42_copy_dest_done(dst_inode, dst_offset, count);
status = nfs_post_op_update_inode(dst_inode, res.dst_fattr);
}
int nfs4_submount(struct fs_context *, struct nfs_server *);
int nfs4_replace_transport(struct nfs_server *server,
const struct nfs4_fs_locations *locations);
-size_t nfs_parse_server_name(char *string, size_t len, struct sockaddr *sa,
+size_t nfs_parse_server_name(char *string, size_t len, struct sockaddr_storage *ss,
size_t salen, struct net *net, int port);
/* nfs4proc.c */
extern int nfs4_handle_exception(struct nfs_server *, int, struct nfs4_exception *);
ret = nfs4_setup_slot_table(tbl, NFS4_MAX_SLOT_TABLE,
"NFSv4.0 transport Slot table");
if (ret) {
+ nfs4_shutdown_slot_table(tbl);
kfree(tbl);
return ret;
}
*/
static int nfs4_set_client(struct nfs_server *server,
const char *hostname,
- const struct sockaddr *addr,
+ const struct sockaddr_storage *addr,
const size_t addrlen,
const char *ip_addr,
int proto, const struct rpc_timeout *timeparms,
__set_bit(NFS_CS_MIGRATION, &cl_init.init_flags);
if (test_bit(NFS_MIG_TSM_POSSIBLE, &server->mig_status))
__set_bit(NFS_CS_TSM_POSSIBLE, &cl_init.init_flags);
- server->port = rpc_get_port(addr);
+ server->port = rpc_get_port((struct sockaddr *)addr);
/* Allocate or find a client reference we can use */
clp = nfs_get_client(&cl_init);
* the MDS.
*/
struct nfs_client *nfs4_set_ds_client(struct nfs_server *mds_srv,
- const struct sockaddr *ds_addr, int ds_addrlen,
+ const struct sockaddr_storage *ds_addr, int ds_addrlen,
int ds_proto, unsigned int ds_timeo, unsigned int ds_retrans,
u32 minor_version)
{
};
char buf[INET6_ADDRSTRLEN + 1];
- if (rpc_ntop(ds_addr, buf, sizeof(buf)) <= 0)
+ if (rpc_ntop((struct sockaddr *)ds_addr, buf, sizeof(buf)) <= 0)
return ERR_PTR(-EINVAL);
cl_init.hostname = buf;
/* Get a client record */
error = nfs4_set_client(server,
ctx->nfs_server.hostname,
- &ctx->nfs_server.address,
+ &ctx->nfs_server._address,
ctx->nfs_server.addrlen,
ctx->client_address,
ctx->nfs_server.protocol,
rpc_set_port(&ctx->nfs_server.address, NFS_RDMA_PORT);
error = nfs4_set_client(server,
ctx->nfs_server.hostname,
- &ctx->nfs_server.address,
+ &ctx->nfs_server._address,
ctx->nfs_server.addrlen,
parent_client->cl_ipaddr,
XPRT_TRANSPORT_RDMA,
rpc_set_port(&ctx->nfs_server.address, NFS_PORT);
error = nfs4_set_client(server,
ctx->nfs_server.hostname,
- &ctx->nfs_server.address,
+ &ctx->nfs_server._address,
ctx->nfs_server.addrlen,
parent_client->cl_ipaddr,
XPRT_TRANSPORT_TCP,
* Returns zero on success, or a negative errno value.
*/
int nfs4_update_server(struct nfs_server *server, const char *hostname,
- struct sockaddr *sap, size_t salen, struct net *net)
+ struct sockaddr_storage *sap, size_t salen, struct net *net)
{
struct nfs_client *clp = server->nfs_client;
struct rpc_clnt *clnt = server->client;
struct xprt_create xargs = {
.ident = clp->cl_proto,
.net = net,
- .dstaddr = sap,
+ .dstaddr = (struct sockaddr *)sap,
.addrlen = salen,
.servername = hostname,
};
return 0;
}
-size_t nfs_parse_server_name(char *string, size_t len, struct sockaddr *sa,
+size_t nfs_parse_server_name(char *string, size_t len, struct sockaddr_storage *ss,
size_t salen, struct net *net, int port)
{
+ struct sockaddr *sa = (struct sockaddr *)ss;
ssize_t ret;
ret = rpc_pton(net, string, len, sa, salen);
if (ret == 0) {
ret = rpc_uaddr2sockaddr(net, string, len, sa, salen);
if (ret == 0) {
- ret = nfs_dns_resolve_name(net, string, len, sa, salen);
+ ret = nfs_dns_resolve_name(net, string, len, ss, salen);
if (ret < 0)
ret = 0;
}
ctx->nfs_server.addrlen =
nfs_parse_server_name(buf->data, buf->len,
- &ctx->nfs_server.address,
+ &ctx->nfs_server._address,
sizeof(ctx->nfs_server._address),
fc->net_ns, 0);
if (ctx->nfs_server.addrlen == 0)
char *page, char *page2,
const struct nfs4_fs_location *location)
{
- const size_t addr_bufsize = sizeof(struct sockaddr_storage);
struct net *net = rpc_net_ns(server->client);
- struct sockaddr *sap;
+ struct sockaddr_storage *sap;
unsigned int s;
size_t salen;
int error;
- sap = kmalloc(addr_bufsize, GFP_KERNEL);
+ sap = kmalloc(sizeof(*sap), GFP_KERNEL);
if (sap == NULL)
return -ENOMEM;
continue;
salen = nfs_parse_server_name(buf->data, buf->len,
- sap, addr_bufsize, net, 0);
+ sap, sizeof(*sap), net, 0);
if (salen == 0)
continue;
- rpc_set_port(sap, NFS_PORT);
+ rpc_set_port((struct sockaddr *)sap, NFS_PORT);
error = -ENOMEM;
hostname = kmemdup_nul(buf->data, buf->len, GFP_KERNEL);
for (i = 0; i < location->nservers; i++) {
struct nfs4_string *srv_loc = &location->servers[i];
- struct sockaddr addr;
+ struct sockaddr_storage addr;
size_t addrlen;
struct xprt_create xprt_args = {
.ident = 0,
clp->cl_net, server->port);
if (!addrlen)
return;
- xprt_args.dstaddr = &addr;
+ xprt_args.dstaddr = (struct sockaddr *)&addr;
xprt_args.addrlen = addrlen;
servername = kmalloc(srv_loc->len + 1, GFP_KERNEL);
if (!servername)
{
struct nfs4_lockdata *data = calldata;
struct nfs4_lock_state *lsp = data->lsp;
+ struct nfs_server *server = NFS_SERVER(d_inode(data->ctx->dentry));
if (!nfs4_sequence_done(task, &data->res.seq_res))
return;
data->rpc_status = task->tk_status;
switch (task->tk_status) {
case 0:
- renew_lease(NFS_SERVER(d_inode(data->ctx->dentry)),
- data->timestamp);
+ renew_lease(server, data->timestamp);
if (data->arg.new_lock && !data->cancelled) {
data->fl.fl_flags &= ~(FL_SLEEP | FL_ACCESS);
if (locks_lock_inode_wait(lsp->ls_state->inode, &data->fl) < 0)
if (!nfs4_stateid_match(&data->arg.open_stateid,
&lsp->ls_state->open_stateid))
goto out_restart;
+ else if (nfs4_async_handle_error(task, server, lsp->ls_state, NULL) == -EAGAIN)
+ goto out_restart;
} else if (!nfs4_stateid_match(&data->arg.lock_stateid,
&lsp->ls_stateid))
goto out_restart;
static void nfs4_state_start_reclaim_reboot(struct nfs_client *clp)
{
+ set_bit(NFS4CLNT_RECLAIM_REBOOT, &clp->cl_state);
/* Mark all delegations for reclaim */
nfs_delegation_mark_reclaim(clp);
nfs4_state_mark_reclaim_helper(clp, nfs4_state_mark_reclaim_reboot);
if (status < 0)
goto out_error;
nfs4_state_end_reclaim_reboot(clp);
+ continue;
}
/* Detect expired delegations... */
static struct nfs_client *(*get_v3_ds_connect)(
struct nfs_server *mds_srv,
- const struct sockaddr *ds_addr,
+ const struct sockaddr_storage *ds_addr,
int ds_addrlen,
int ds_proto,
unsigned int ds_timeo,
continue;
}
clp = get_v3_ds_connect(mds_srv,
- (struct sockaddr *)&da->da_addr,
+ &da->da_addr,
da->da_addrlen, da->da_transport,
timeo, retrans);
if (IS_ERR(clp))
put_cred(xprtdata.cred);
} else {
clp = nfs4_set_ds_client(mds_srv,
- (struct sockaddr *)&da->da_addr,
+ &da->da_addr,
da->da_addrlen,
da->da_transport, timeo,
retrans, minor_version);
{
struct nfs_fs_context *ctx = nfs_fc2context(fc);
struct nfs_mount_request request = {
- .sap = (struct sockaddr *)
- &ctx->mount_server.address,
+ .sap = &ctx->mount_server._address,
.dirpath = ctx->nfs_server.export_path,
.protocol = ctx->mount_server.protocol,
.fh = root_fh,
* Construct the mount server's address.
*/
if (ctx->mount_server.address.sa_family == AF_UNSPEC) {
- memcpy(request.sap, &ctx->nfs_server.address,
+ memcpy(request.sap, &ctx->nfs_server._address,
ctx->nfs_server.addrlen);
ctx->mount_server.addrlen = ctx->nfs_server.addrlen;
}
nf = rhashtable_walk_next(&iter);
while (!IS_ERR_OR_NULL(nf)) {
- if (net && nf->nf_net != net)
- continue;
- nfsd_file_unhash_and_dispose(nf, &dispose);
+ if (!net || nf->nf_net == net)
+ nfsd_file_unhash_and_dispose(nf, &dispose);
nf = rhashtable_walk_next(&iter);
}
goto out_drc_error;
retval = nfsd_reply_cache_init(nn);
if (retval)
- goto out_drc_error;
+ goto out_cache_error;
get_random_bytes(&nn->siphash_key, sizeof(nn->siphash_key));
seqlock_init(&nn->writeverf_lock);
return 0;
+out_cache_error:
+ nfsd4_leases_net_shutdown(nn);
out_drc_error:
nfsd_idmap_shutdown(net);
out_idmap_error:
skip_pseudoflavor_check:
/* Finally, check access permissions. */
error = nfsd_permission(rqstp, exp, dentry, access);
- trace_nfsd_fh_verify_err(rqstp, fhp, type, access, error);
out:
+ trace_nfsd_fh_verify_err(rqstp, fhp, type, access, error);
if (error == nfserr_stale)
nfsd_stats_fh_stale_inc(exp);
return error;
handle_t *handle = NULL;
struct ocfs2_super *osb;
struct ocfs2_dinode *dirfe;
+ struct ocfs2_dinode *fe = NULL;
struct buffer_head *new_fe_bh = NULL;
struct inode *inode = NULL;
struct ocfs2_alloc_context *inode_ac = NULL;
goto leave;
}
+ fe = (struct ocfs2_dinode *) new_fe_bh->b_data;
if (S_ISDIR(mode)) {
status = ocfs2_fill_new_dir(osb, handle, dir, inode,
new_fe_bh, data_ac, meta_ac);
leave:
if (status < 0 && did_quota_inode)
dquot_free_inode(inode);
- if (handle)
+ if (handle) {
+ if (status < 0 && fe)
+ ocfs2_set_links_count(fe, 0);
ocfs2_commit_trans(osb, handle);
+ }
ocfs2_inode_unlock(dir, 1);
if (did_block_signals)
return status;
}
- status = __ocfs2_mknod_locked(dir, inode, dev, new_fe_bh,
+ return __ocfs2_mknod_locked(dir, inode, dev, new_fe_bh,
parent_fe_bh, handle, inode_ac,
fe_blkno, suballoc_loc, suballoc_bit);
- if (status < 0) {
- u64 bg_blkno = ocfs2_which_suballoc_group(fe_blkno, suballoc_bit);
- int tmp = ocfs2_free_suballoc_bits(handle, inode_ac->ac_inode,
- inode_ac->ac_bh, suballoc_bit, bg_blkno, 1);
- if (tmp)
- mlog_errno(tmp);
- }
-
- return status;
}
static int ocfs2_mkdir(struct user_namespace *mnt_userns,
ocfs2_clusters_to_bytes(osb->sb, 1));
if (status < 0 && did_quota_inode)
dquot_free_inode(inode);
- if (handle)
+ if (handle) {
+ if (status < 0 && fe)
+ ocfs2_set_links_count(fe, 0);
ocfs2_commit_trans(osb, handle);
+ }
ocfs2_inode_unlock(dir, 1);
if (did_block_signals)
goto out_put_mm;
hold_task_mempolicy(priv);
- vma = mas_find(&mas, 0);
+ vma = mas_find(&mas, ULONG_MAX);
if (unlikely(!vma))
goto empty_set;
squashfs_i(inode)->fragment_size);
struct squashfs_sb_info *msblk = inode->i_sb->s_fs_info;
unsigned int n, mask = (1 << (msblk->block_log - PAGE_SHIFT)) - 1;
+ int error = buffer->error;
- if (buffer->error)
+ if (error)
goto out;
expected += squashfs_i(inode)->fragment_offset;
out:
squashfs_cache_put(buffer);
- return buffer->error;
+ return error;
}
static void squashfs_readahead(struct readahead_control *ractl)
int res, bsize;
u64 block = 0;
unsigned int expected;
+ struct page *last_page;
+
+ expected = start >> msblk->block_log == file_end ?
+ (i_size_read(inode) & (msblk->block_size - 1)) :
+ msblk->block_size;
+
+ max_pages = (expected + PAGE_SIZE - 1) >> PAGE_SHIFT;
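	/*
	 * Worked example (hypothetical geometry): with 128K blocks and a
	 * 130K file, a readahead landing in the final block computes
	 * expected = 130K & (128K - 1) = 2K, so max_pages = 1 and the
	 * batch below never claims pages that could not be filled.
	 */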
nr_pages = __readahead_batch(ractl, pages, max_pages);
if (!nr_pages)
goto skip_pages;
index = pages[0]->index >> shift;
+
if ((pages[nr_pages - 1]->index >> shift) != index)
goto skip_pages;
- expected = index == file_end ?
- (i_size_read(inode) & (msblk->block_size - 1)) :
- msblk->block_size;
-
if (index == file_end && squashfs_i(inode)->fragment_block !=
SQUASHFS_INVALID_BLK) {
res = squashfs_readahead_fragment(pages, nr_pages,
res = squashfs_read_data(inode->i_sb, block, bsize, NULL, actor);
- squashfs_page_actor_free(actor);
+ last_page = squashfs_page_actor_free(actor);
if (res == expected) {
int bytes;
/* Last page (if present) may have trailing bytes not filled */
bytes = res % PAGE_SIZE;
- if (pages[nr_pages - 1]->index == file_end && bytes)
- memzero_page(pages[nr_pages - 1], bytes,
+ if (index == file_end && bytes && last_page)
+ memzero_page(last_page, bytes,
PAGE_SIZE - bytes);
for (i = 0; i < nr_pages; i++) {
(actor->next_index != actor->page[actor->next_page]->index)) {
actor->next_index++;
actor->returned_pages++;
+ actor->last_page = NULL;
return actor->alloc_buffer ? actor->tmp_buffer : ERR_PTR(-ENOMEM);
}
actor->next_index++;
actor->returned_pages++;
+ actor->last_page = actor->page[actor->next_page];
return actor->pageaddr = kmap_local_page(actor->page[actor->next_page++]);
}
actor->returned_pages = 0;
actor->next_index = page[0]->index & ~((1 << (msblk->block_log - PAGE_SHIFT)) - 1);
actor->pageaddr = NULL;
+ actor->last_page = NULL;
actor->alloc_buffer = msblk->decompressor->alloc_buffer;
actor->squashfs_first_page = direct_first_page;
actor->squashfs_next_page = direct_next_page;
void *(*squashfs_first_page)(struct squashfs_page_actor *);
void *(*squashfs_next_page)(struct squashfs_page_actor *);
void (*squashfs_finish_page)(struct squashfs_page_actor *);
+ struct page *last_page;
int pages;
int length;
int next_page;
extern struct squashfs_page_actor *squashfs_page_actor_init_special(
struct squashfs_sb_info *msblk,
struct page **page, int pages, int length);
-static inline void squashfs_page_actor_free(struct squashfs_page_actor *actor)
+static inline struct page *squashfs_page_actor_free(struct squashfs_page_actor *actor)
{
+ struct page *last_page = actor->last_page;
+
kfree(actor->tmp_buffer);
kfree(actor);
+ return last_page;
}
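The returned page lets the readahead path zero the tail of whichever page
the decompressor actually filled last, rather than guessing from the page
array. Condensed from the readahead hunk above (sketch):

	last_page = squashfs_page_actor_free(actor);
	if (res == expected) {
		int bytes = res % PAGE_SIZE;

		if (index == file_end && bytes && last_page)
			memzero_page(last_page, bytes, PAGE_SIZE - bytes);
	}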
static inline void *squashfs_first_page(struct squashfs_page_actor *actor)
{
WARN_ON(s->s_inode_lru.node);
WARN_ON(!list_empty(&s->s_mounts));
security_sb_free(s);
+ fscrypt_destroy_keyring(s);
put_user_ns(s->s_user_ns);
kfree(s->s_subtype);
call_rcu(&s->rcu, destroy_super_rcu);
evict_inodes(sb);
/* only nonzero refcount inodes can have marks */
fsnotify_sb_delete(sb);
- fscrypt_sb_delete(sb);
+ fscrypt_destroy_keyring(sb);
security_sb_delete(sb);
if (sb->s_dio_done_wq) {
return true;
}
+static inline bool
+xfs_verify_agbext(
+ struct xfs_perag *pag,
+ xfs_agblock_t agbno,
+ xfs_agblock_t len)
+{
+ if (agbno + len <= agbno)
+ return false;
+
+ if (!xfs_verify_agbno(pag, agbno))
+ return false;
+
+ return xfs_verify_agbno(pag, agbno + len - 1);
+}
+
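A quick sketch of what the helper rejects, assuming a hypothetical
1000-block AG (illustration only):

	xfs_verify_agbext(pag, 10, 0);			/* false: empty extent */
	xfs_verify_agbext(pag, (xfs_agblock_t)-4, 8);	/* false: agbno + len wraps */
	xfs_verify_agbext(pag, 990, 20);		/* false: runs past the AG end */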
/*
* Verify that an AG inode number pointer neither points outside the AG
* nor points at static metadata.
goto out_bad_rec;
/* check for valid extent range, including overflow */
- if (!xfs_verify_agbno(pag, *bno))
- goto out_bad_rec;
- if (*bno > *bno + *len)
- goto out_bad_rec;
- if (!xfs_verify_agbno(pag, *bno + *len - 1))
+ if (!xfs_verify_agbext(pag, *bno, *len))
goto out_bad_rec;
return 0;
xfs_dir2_leaf_tail_t *ltp;
int stale;
int i;
+ bool isleaf1 = (hdr->magic == XFS_DIR2_LEAF1_MAGIC ||
+ hdr->magic == XFS_DIR3_LEAF1_MAGIC);
ltp = xfs_dir2_leaf_tail_p(geo, leaf);
return __this_address;
/* Leaves and bests don't overlap in leaf format. */
- if ((hdr->magic == XFS_DIR2_LEAF1_MAGIC ||
- hdr->magic == XFS_DIR3_LEAF1_MAGIC) &&
+ if (isleaf1 &&
(char *)&hdr->ents[hdr->count] > (char *)xfs_dir2_leaf_bests_p(ltp))
return __this_address;
}
if (hdr->ents[i].address == cpu_to_be32(XFS_DIR2_NULL_DATAPTR))
stale++;
+ if (isleaf1 && xfs_dir2_dataptr_to_db(geo,
+ be32_to_cpu(hdr->ents[i].address)) >=
+ be32_to_cpu(ltp->bestcount))
+ return __this_address;
}
if (hdr->stale != stale)
return __this_address;
#define RMAPBT_UNUSED_OFFSET_BITLEN 7
#define RMAPBT_OFFSET_BITLEN 54
-#define XFS_RMAP_ATTR_FORK (1 << 0)
-#define XFS_RMAP_BMBT_BLOCK (1 << 1)
-#define XFS_RMAP_UNWRITTEN (1 << 2)
-#define XFS_RMAP_KEY_FLAGS (XFS_RMAP_ATTR_FORK | \
- XFS_RMAP_BMBT_BLOCK)
-#define XFS_RMAP_REC_FLAGS (XFS_RMAP_UNWRITTEN)
-struct xfs_rmap_irec {
- xfs_agblock_t rm_startblock; /* extent start block */
- xfs_extlen_t rm_blockcount; /* extent length */
- uint64_t rm_owner; /* extent owner */
- uint64_t rm_offset; /* offset within the owner */
- unsigned int rm_flags; /* state flags */
-};
-
/*
* Key structure
*
* on the startblock. This speeds up mount time deletion of stale
* staging extents because they're all at the right side of the tree.
*/
-#define XFS_REFC_COW_START ((xfs_agblock_t)(1U << 31))
+#define XFS_REFC_COWFLAG (1U << 31)
#define REFCNTBT_COWFLAG_BITLEN 1
#define REFCNTBT_AGBLOCK_BITLEN 31
__be32 rc_startblock; /* starting block number */
};
-struct xfs_refcount_irec {
- xfs_agblock_t rc_startblock; /* starting block number */
- xfs_extlen_t rc_blockcount; /* count of free blocks */
- xfs_nlink_t rc_refcount; /* number of inodes linked here */
-};
-
#define MAXREFCOUNT ((xfs_nlink_t)~0U)
#define MAXREFCEXTLEN ((xfs_extlen_t)~0U)
uint16_t efi_size; /* size of this item */
uint32_t efi_nextents; /* # extents to free */
uint64_t efi_id; /* efi identifier */
- xfs_extent_t efi_extents[1]; /* array of extents to free */
+ xfs_extent_t efi_extents[]; /* array of extents to free */
} xfs_efi_log_format_t;
+static inline size_t
+xfs_efi_log_format_sizeof(
+ unsigned int nr)
+{
+ return sizeof(struct xfs_efi_log_format) +
+ nr * sizeof(struct xfs_extent);
+}
+
typedef struct xfs_efi_log_format_32 {
uint16_t efi_type; /* efi log item type */
uint16_t efi_size; /* size of this item */
uint32_t efi_nextents; /* # extents to free */
uint64_t efi_id; /* efi identifier */
- xfs_extent_32_t efi_extents[1]; /* array of extents to free */
+ xfs_extent_32_t efi_extents[]; /* array of extents to free */
} __attribute__((packed)) xfs_efi_log_format_32_t;
+static inline size_t
+xfs_efi_log_format32_sizeof(
+ unsigned int nr)
+{
+ return sizeof(struct xfs_efi_log_format_32) +
+ nr * sizeof(struct xfs_extent_32);
+}
+
typedef struct xfs_efi_log_format_64 {
uint16_t efi_type; /* efi log item type */
uint16_t efi_size; /* size of this item */
uint32_t efi_nextents; /* # extents to free */
uint64_t efi_id; /* efi identifier */
- xfs_extent_64_t efi_extents[1]; /* array of extents to free */
+ xfs_extent_64_t efi_extents[]; /* array of extents to free */
} xfs_efi_log_format_64_t;
+static inline size_t
+xfs_efi_log_format64_sizeof(
+ unsigned int nr)
+{
+ return sizeof(struct xfs_efi_log_format_64) +
+ nr * sizeof(struct xfs_extent_64);
+}
+
/*
* This is the structure used to lay out an efd log item in the
* log. The efd_extents array is a variable size array whose
uint16_t efd_size; /* size of this item */
uint32_t efd_nextents; /* # of extents freed */
uint64_t efd_efi_id; /* id of corresponding efi */
- xfs_extent_t efd_extents[1]; /* array of extents freed */
+ xfs_extent_t efd_extents[]; /* array of extents freed */
} xfs_efd_log_format_t;
+static inline size_t
+xfs_efd_log_format_sizeof(
+ unsigned int nr)
+{
+ return sizeof(struct xfs_efd_log_format) +
+ nr * sizeof(struct xfs_extent);
+}
+
typedef struct xfs_efd_log_format_32 {
uint16_t efd_type; /* efd log item type */
uint16_t efd_size; /* size of this item */
uint32_t efd_nextents; /* # of extents freed */
uint64_t efd_efi_id; /* id of corresponding efi */
- xfs_extent_32_t efd_extents[1]; /* array of extents freed */
+ xfs_extent_32_t efd_extents[]; /* array of extents freed */
} __attribute__((packed)) xfs_efd_log_format_32_t;
+static inline size_t
+xfs_efd_log_format32_sizeof(
+ unsigned int nr)
+{
+ return sizeof(struct xfs_efd_log_format_32) +
+ nr * sizeof(struct xfs_extent_32);
+}
+
typedef struct xfs_efd_log_format_64 {
uint16_t efd_type; /* efd log item type */
uint16_t efd_size; /* size of this item */
uint32_t efd_nextents; /* # of extents freed */
uint64_t efd_efi_id; /* id of corresponding efi */
- xfs_extent_64_t efd_extents[1]; /* array of extents freed */
+ xfs_extent_64_t efd_extents[]; /* array of extents freed */
} xfs_efd_log_format_64_t;
+static inline size_t
+xfs_efd_log_format64_sizeof(
+ unsigned int nr)
+{
+ return sizeof(struct xfs_efd_log_format_64) +
+ nr * sizeof(struct xfs_extent_64);
+}
+
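A short sketch of how these helpers are meant to be used (names taken from
the recovery hunks further down; illustration only):

	/* size the log iovec for an EFI carrying nextents extents */
	*nbytes += xfs_efi_log_format_sizeof(nextents);

	/* recovery-time length validation without "nextents - 1" math */
	if (item->ri_buf[0].i_len < xfs_efi_log_format_sizeof(0))
		return -EFSCORRUPTED;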
/*
* RUI/RUD (reverse mapping) log format definitions
*/
int
xfs_refcount_lookup_le(
struct xfs_btree_cur *cur,
+ enum xfs_refc_domain domain,
xfs_agblock_t bno,
int *stat)
{
- trace_xfs_refcount_lookup(cur->bc_mp, cur->bc_ag.pag->pag_agno, bno,
+ trace_xfs_refcount_lookup(cur->bc_mp, cur->bc_ag.pag->pag_agno,
+ xfs_refcount_encode_startblock(bno, domain),
XFS_LOOKUP_LE);
cur->bc_rec.rc.rc_startblock = bno;
cur->bc_rec.rc.rc_blockcount = 0;
+ cur->bc_rec.rc.rc_domain = domain;
return xfs_btree_lookup(cur, XFS_LOOKUP_LE, stat);
}
int
xfs_refcount_lookup_ge(
struct xfs_btree_cur *cur,
+ enum xfs_refc_domain domain,
xfs_agblock_t bno,
int *stat)
{
- trace_xfs_refcount_lookup(cur->bc_mp, cur->bc_ag.pag->pag_agno, bno,
+ trace_xfs_refcount_lookup(cur->bc_mp, cur->bc_ag.pag->pag_agno,
+ xfs_refcount_encode_startblock(bno, domain),
XFS_LOOKUP_GE);
cur->bc_rec.rc.rc_startblock = bno;
cur->bc_rec.rc.rc_blockcount = 0;
+ cur->bc_rec.rc.rc_domain = domain;
return xfs_btree_lookup(cur, XFS_LOOKUP_GE, stat);
}
int
xfs_refcount_lookup_eq(
struct xfs_btree_cur *cur,
+ enum xfs_refc_domain domain,
xfs_agblock_t bno,
int *stat)
{
- trace_xfs_refcount_lookup(cur->bc_mp, cur->bc_ag.pag->pag_agno, bno,
+ trace_xfs_refcount_lookup(cur->bc_mp, cur->bc_ag.pag->pag_agno,
+ xfs_refcount_encode_startblock(bno, domain),
XFS_LOOKUP_EQ);
cur->bc_rec.rc.rc_startblock = bno;
cur->bc_rec.rc.rc_blockcount = 0;
+ cur->bc_rec.rc.rc_domain = domain;
return xfs_btree_lookup(cur, XFS_LOOKUP_EQ, stat);
}
const union xfs_btree_rec *rec,
struct xfs_refcount_irec *irec)
{
- irec->rc_startblock = be32_to_cpu(rec->refc.rc_startblock);
+ uint32_t start;
+
+ start = be32_to_cpu(rec->refc.rc_startblock);
+ if (start & XFS_REFC_COWFLAG) {
+ start &= ~XFS_REFC_COWFLAG;
+ irec->rc_domain = XFS_REFC_DOMAIN_COW;
+ } else {
+ irec->rc_domain = XFS_REFC_DOMAIN_SHARED;
+ }
+
+ irec->rc_startblock = start;
irec->rc_blockcount = be32_to_cpu(rec->refc.rc_blockcount);
irec->rc_refcount = be32_to_cpu(rec->refc.rc_refcount);
}
struct xfs_perag *pag = cur->bc_ag.pag;
union xfs_btree_rec *rec;
int error;
- xfs_agblock_t realstart;
error = xfs_btree_get_rec(cur, &rec, stat);
if (error || !*stat)
if (irec->rc_blockcount == 0 || irec->rc_blockcount > MAXREFCEXTLEN)
goto out_bad_rec;
- /* handle special COW-staging state */
- realstart = irec->rc_startblock;
- if (realstart & XFS_REFC_COW_START) {
- if (irec->rc_refcount != 1)
- goto out_bad_rec;
- realstart &= ~XFS_REFC_COW_START;
- } else if (irec->rc_refcount < 2) {
+ if (!xfs_refcount_check_domain(irec))
goto out_bad_rec;
- }
/* check for valid extent range, including overflow */
- if (!xfs_verify_agbno(pag, realstart))
- goto out_bad_rec;
- if (realstart > realstart + irec->rc_blockcount)
- goto out_bad_rec;
- if (!xfs_verify_agbno(pag, realstart + irec->rc_blockcount - 1))
+ if (!xfs_verify_agbext(pag, irec->rc_startblock, irec->rc_blockcount))
goto out_bad_rec;
if (irec->rc_refcount == 0 || irec->rc_refcount > MAXREFCOUNT)
struct xfs_refcount_irec *irec)
{
union xfs_btree_rec rec;
+ uint32_t start;
int error;
trace_xfs_refcount_update(cur->bc_mp, cur->bc_ag.pag->pag_agno, irec);
- rec.refc.rc_startblock = cpu_to_be32(irec->rc_startblock);
+
+ start = xfs_refcount_encode_startblock(irec->rc_startblock,
+ irec->rc_domain);
+ rec.refc.rc_startblock = cpu_to_be32(start);
rec.refc.rc_blockcount = cpu_to_be32(irec->rc_blockcount);
rec.refc.rc_refcount = cpu_to_be32(irec->rc_refcount);
+
error = xfs_btree_update(cur, &rec);
if (error)
trace_xfs_refcount_update_error(cur->bc_mp,
int error;
trace_xfs_refcount_insert(cur->bc_mp, cur->bc_ag.pag->pag_agno, irec);
+
cur->bc_rec.rc.rc_startblock = irec->rc_startblock;
cur->bc_rec.rc.rc_blockcount = irec->rc_blockcount;
cur->bc_rec.rc.rc_refcount = irec->rc_refcount;
+ cur->bc_rec.rc.rc_domain = irec->rc_domain;
+
error = xfs_btree_insert(cur, i);
if (error)
goto out_error;
}
if (error)
goto out_error;
- error = xfs_refcount_lookup_ge(cur, irec.rc_startblock, &found_rec);
+ error = xfs_refcount_lookup_ge(cur, irec.rc_domain, irec.rc_startblock,
+ &found_rec);
out_error:
if (error)
trace_xfs_refcount_delete_error(cur->bc_mp,
STATIC int
xfs_refcount_split_extent(
struct xfs_btree_cur *cur,
+ enum xfs_refc_domain domain,
xfs_agblock_t agbno,
bool *shape_changed)
{
int error;
*shape_changed = false;
- error = xfs_refcount_lookup_le(cur, agbno, &found_rec);
+ error = xfs_refcount_lookup_le(cur, domain, agbno, &found_rec);
if (error)
goto out_error;
if (!found_rec)
error = -EFSCORRUPTED;
goto out_error;
}
+ if (rcext.rc_domain != domain)
+ return 0;
if (rcext.rc_startblock == agbno || xfs_refc_next(&rcext) <= agbno)
return 0;
trace_xfs_refcount_merge_center_extents(cur->bc_mp,
cur->bc_ag.pag->pag_agno, left, center, right);
+ ASSERT(left->rc_domain == center->rc_domain);
+ ASSERT(right->rc_domain == center->rc_domain);
+
/*
* Make sure the center and right extents are not in the btree.
* If the center extent was synthesized, the first delete call
* call removes the center and the second one removes the right
* extent.
*/
- error = xfs_refcount_lookup_ge(cur, center->rc_startblock,
- &found_rec);
+ error = xfs_refcount_lookup_ge(cur, center->rc_domain,
+ center->rc_startblock, &found_rec);
if (error)
goto out_error;
if (XFS_IS_CORRUPT(cur->bc_mp, found_rec != 1)) {
}
/* Enlarge the left extent. */
- error = xfs_refcount_lookup_le(cur, left->rc_startblock,
- &found_rec);
+ error = xfs_refcount_lookup_le(cur, left->rc_domain,
+ left->rc_startblock, &found_rec);
if (error)
goto out_error;
if (XFS_IS_CORRUPT(cur->bc_mp, found_rec != 1)) {
trace_xfs_refcount_merge_left_extent(cur->bc_mp,
cur->bc_ag.pag->pag_agno, left, cleft);
+ ASSERT(left->rc_domain == cleft->rc_domain);
+
/* If the extent at agbno (cleft) wasn't synthesized, remove it. */
if (cleft->rc_refcount > 1) {
- error = xfs_refcount_lookup_le(cur, cleft->rc_startblock,
- &found_rec);
+ error = xfs_refcount_lookup_le(cur, cleft->rc_domain,
+ cleft->rc_startblock, &found_rec);
if (error)
goto out_error;
if (XFS_IS_CORRUPT(cur->bc_mp, found_rec != 1)) {
}
/* Enlarge the left extent. */
- error = xfs_refcount_lookup_le(cur, left->rc_startblock,
- &found_rec);
+ error = xfs_refcount_lookup_le(cur, left->rc_domain,
+ left->rc_startblock, &found_rec);
if (error)
goto out_error;
if (XFS_IS_CORRUPT(cur->bc_mp, found_rec != 1)) {
trace_xfs_refcount_merge_right_extent(cur->bc_mp,
cur->bc_ag.pag->pag_agno, cright, right);
+ ASSERT(right->rc_domain == cright->rc_domain);
+
/*
* If the extent ending at agbno+aglen (cright) wasn't synthesized,
* remove it.
*/
if (cright->rc_refcount > 1) {
- error = xfs_refcount_lookup_le(cur, cright->rc_startblock,
- &found_rec);
+ error = xfs_refcount_lookup_le(cur, cright->rc_domain,
+ cright->rc_startblock, &found_rec);
if (error)
goto out_error;
if (XFS_IS_CORRUPT(cur->bc_mp, found_rec != 1)) {
}
/* Enlarge the right extent. */
- error = xfs_refcount_lookup_le(cur, right->rc_startblock,
- &found_rec);
+ error = xfs_refcount_lookup_le(cur, right->rc_domain,
+ right->rc_startblock, &found_rec);
if (error)
goto out_error;
if (XFS_IS_CORRUPT(cur->bc_mp, found_rec != 1)) {
return error;
}
-#define XFS_FIND_RCEXT_SHARED 1
-#define XFS_FIND_RCEXT_COW 2
/*
* Find the left extent and the one after it (cleft). This function assumes
* that we've already split any extent crossing agbno.
struct xfs_btree_cur *cur,
struct xfs_refcount_irec *left,
struct xfs_refcount_irec *cleft,
+ enum xfs_refc_domain domain,
xfs_agblock_t agbno,
- xfs_extlen_t aglen,
- int flags)
+ xfs_extlen_t aglen)
{
struct xfs_refcount_irec tmp;
int error;
int found_rec;
left->rc_startblock = cleft->rc_startblock = NULLAGBLOCK;
- error = xfs_refcount_lookup_le(cur, agbno - 1, &found_rec);
+ error = xfs_refcount_lookup_le(cur, domain, agbno - 1, &found_rec);
if (error)
goto out_error;
if (!found_rec)
goto out_error;
}
- if (xfs_refc_next(&tmp) != agbno)
- return 0;
- if ((flags & XFS_FIND_RCEXT_SHARED) && tmp.rc_refcount < 2)
+ if (tmp.rc_domain != domain)
return 0;
- if ((flags & XFS_FIND_RCEXT_COW) && tmp.rc_refcount > 1)
+ if (xfs_refc_next(&tmp) != agbno)
return 0;
/* We have a left extent; retrieve (or invent) the next right one */
*left = tmp;
goto out_error;
}
+ if (tmp.rc_domain != domain)
+ goto not_found;
+
/* if tmp starts at the end of our range, just use that */
if (tmp.rc_startblock == agbno)
*cleft = tmp;
cleft->rc_blockcount = min(aglen,
tmp.rc_startblock - agbno);
cleft->rc_refcount = 1;
+ cleft->rc_domain = domain;
}
} else {
+not_found:
/*
* No extents, so pretend that there's one covering the whole
* range.
cleft->rc_startblock = agbno;
cleft->rc_blockcount = aglen;
cleft->rc_refcount = 1;
+ cleft->rc_domain = domain;
}
trace_xfs_refcount_find_left_extent(cur->bc_mp, cur->bc_ag.pag->pag_agno,
left, cleft, agbno);
struct xfs_btree_cur *cur,
struct xfs_refcount_irec *right,
struct xfs_refcount_irec *cright,
+ enum xfs_refc_domain domain,
xfs_agblock_t agbno,
- xfs_extlen_t aglen,
- int flags)
+ xfs_extlen_t aglen)
{
struct xfs_refcount_irec tmp;
int error;
int found_rec;
right->rc_startblock = cright->rc_startblock = NULLAGBLOCK;
- error = xfs_refcount_lookup_ge(cur, agbno + aglen, &found_rec);
+ error = xfs_refcount_lookup_ge(cur, domain, agbno + aglen, &found_rec);
if (error)
goto out_error;
if (!found_rec)
goto out_error;
}
- if (tmp.rc_startblock != agbno + aglen)
- return 0;
- if ((flags & XFS_FIND_RCEXT_SHARED) && tmp.rc_refcount < 2)
+ if (tmp.rc_domain != domain)
return 0;
- if ((flags & XFS_FIND_RCEXT_COW) && tmp.rc_refcount > 1)
+ if (tmp.rc_startblock != agbno + aglen)
return 0;
/* We have a right extent; retrieve (or invent) the next left one */
*right = tmp;
goto out_error;
}
+ if (tmp.rc_domain != domain)
+ goto not_found;
+
/* if tmp ends at the end of our range, just use that */
if (xfs_refc_next(&tmp) == agbno + aglen)
*cright = tmp;
cright->rc_blockcount = right->rc_startblock -
cright->rc_startblock;
cright->rc_refcount = 1;
+ cright->rc_domain = domain;
}
} else {
+not_found:
/*
* No extents, so pretend that there's one covering the whole
* range.
cright->rc_startblock = agbno;
cright->rc_blockcount = aglen;
cright->rc_refcount = 1;
+ cright->rc_domain = domain;
}
trace_xfs_refcount_find_right_extent(cur->bc_mp, cur->bc_ag.pag->pag_agno,
cright, right, agbno + aglen);
STATIC int
xfs_refcount_merge_extents(
struct xfs_btree_cur *cur,
+ enum xfs_refc_domain domain,
xfs_agblock_t *agbno,
xfs_extlen_t *aglen,
enum xfs_refc_adjust_op adjust,
- int flags,
bool *shape_changed)
{
struct xfs_refcount_irec left = {0}, cleft = {0};
* just below (agbno + aglen) [cright], and just above (agbno + aglen)
* [right].
*/
- error = xfs_refcount_find_left_extents(cur, &left, &cleft, *agbno,
- *aglen, flags);
+ error = xfs_refcount_find_left_extents(cur, &left, &cleft, domain,
+ *agbno, *aglen);
if (error)
return error;
- error = xfs_refcount_find_right_extents(cur, &right, &cright, *agbno,
- *aglen, flags);
+ error = xfs_refcount_find_right_extents(cur, &right, &cright, domain,
+ *agbno, *aglen);
if (error)
return error;
aglen);
}
- return error;
+ return 0;
}
/*
if (*aglen == 0)
return 0;
- error = xfs_refcount_lookup_ge(cur, *agbno, &found_rec);
+ error = xfs_refcount_lookup_ge(cur, XFS_REFC_DOMAIN_SHARED, *agbno,
+ &found_rec);
if (error)
goto out_error;
error = xfs_refcount_get_rec(cur, &ext, &found_rec);
if (error)
goto out_error;
- if (!found_rec) {
+ if (!found_rec || ext.rc_domain != XFS_REFC_DOMAIN_SHARED) {
ext.rc_startblock = cur->bc_mp->m_sb.sb_agblocks;
ext.rc_blockcount = 0;
ext.rc_refcount = 0;
+ ext.rc_domain = XFS_REFC_DOMAIN_SHARED;
}
/*
tmp.rc_blockcount = min(*aglen,
ext.rc_startblock - *agbno);
tmp.rc_refcount = 1 + adj;
+ tmp.rc_domain = XFS_REFC_DOMAIN_SHARED;
+
trace_xfs_refcount_modify_extent(cur->bc_mp,
cur->bc_ag.pag->pag_agno, &tmp);
(*agbno) += tmp.rc_blockcount;
(*aglen) -= tmp.rc_blockcount;
- error = xfs_refcount_lookup_ge(cur, *agbno,
+ /* Stop if there's nothing left to modify */
+ if (*aglen == 0 || !xfs_refcount_still_have_space(cur))
+ break;
+
+ /* Move the cursor to the start of ext. */
+ error = xfs_refcount_lookup_ge(cur,
+ XFS_REFC_DOMAIN_SHARED, *agbno,
&found_rec);
if (error)
goto out_error;
}
- /* Stop if there's nothing left to modify */
- if (*aglen == 0 || !xfs_refcount_still_have_space(cur))
- break;
+ /*
+ * A previous step trimmed agbno/aglen such that the end of the
+ * range would not be in the middle of the record. If this is
+ * no longer the case, something is seriously wrong with the
+ * btree. Make sure we never feed the synthesized record into
+ * the processing loop below.
+ */
+ if (XFS_IS_CORRUPT(cur->bc_mp, ext.rc_blockcount == 0) ||
+ XFS_IS_CORRUPT(cur->bc_mp, ext.rc_blockcount > *aglen)) {
+ error = -EFSCORRUPTED;
+ goto out_error;
+ }
/*
* Adjust the reference count and either update the tree
/*
* Ensure that no rcextents cross the boundary of the adjustment range.
*/
- error = xfs_refcount_split_extent(cur, agbno, &shape_changed);
+ error = xfs_refcount_split_extent(cur, XFS_REFC_DOMAIN_SHARED,
+ agbno, &shape_changed);
if (error)
goto out_error;
if (shape_changed)
shape_changes++;
- error = xfs_refcount_split_extent(cur, agbno + aglen, &shape_changed);
+ error = xfs_refcount_split_extent(cur, XFS_REFC_DOMAIN_SHARED,
+ agbno + aglen, &shape_changed);
if (error)
goto out_error;
if (shape_changed)
/*
* Try to merge with the left or right extents of the range.
*/
- error = xfs_refcount_merge_extents(cur, new_agbno, new_aglen, adj,
- XFS_FIND_RCEXT_SHARED, &shape_changed);
+ error = xfs_refcount_merge_extents(cur, XFS_REFC_DOMAIN_SHARED,
+ new_agbno, new_aglen, adj, &shape_changed);
if (error)
goto out_error;
if (shape_changed)
xfs_trans_brelse(tp, agbp);
}
+/*
+ * Set up a continuation of a deferred refcount operation by updating the intent.
+ * Checks to make sure we're not going to run off the end of the AG.
+ */
+static inline int
+xfs_refcount_continue_op(
+ struct xfs_btree_cur *cur,
+ xfs_fsblock_t startblock,
+ xfs_agblock_t new_agbno,
+ xfs_extlen_t new_len,
+ xfs_fsblock_t *new_fsbno)
+{
+ struct xfs_mount *mp = cur->bc_mp;
+ struct xfs_perag *pag = cur->bc_ag.pag;
+
+ if (XFS_IS_CORRUPT(mp, !xfs_verify_agbext(pag, new_agbno, new_len)))
+ return -EFSCORRUPTED;
+
+ *new_fsbno = XFS_AGB_TO_FSB(mp, pag->pag_agno, new_agbno);
+
+ ASSERT(xfs_verify_fsbext(mp, *new_fsbno, new_len));
+ ASSERT(pag->pag_agno == XFS_FSB_TO_AGNO(mp, *new_fsbno));
+
+ return 0;
+}
+
/*
* Process one of the deferred refcount operations. We pass back the
* btree cursor to maintain our lock on the btree between calls.
case XFS_REFCOUNT_INCREASE:
error = xfs_refcount_adjust(rcur, bno, blockcount, &new_agbno,
new_len, XFS_REFCOUNT_ADJUST_INCREASE);
- *new_fsb = XFS_AGB_TO_FSB(mp, pag->pag_agno, new_agbno);
+ if (error)
+ goto out_drop;
+ if (*new_len > 0)
+ error = xfs_refcount_continue_op(rcur, startblock,
+ new_agbno, *new_len, new_fsb);
break;
case XFS_REFCOUNT_DECREASE:
error = xfs_refcount_adjust(rcur, bno, blockcount, &new_agbno,
new_len, XFS_REFCOUNT_ADJUST_DECREASE);
- *new_fsb = XFS_AGB_TO_FSB(mp, pag->pag_agno, new_agbno);
+ if (error)
+ goto out_drop;
+ if (*new_len > 0)
+ error = xfs_refcount_continue_op(rcur, startblock,
+ new_agbno, *new_len, new_fsb);
break;
case XFS_REFCOUNT_ALLOC_COW:
*new_fsb = startblock + blockcount;
*flen = 0;
/* Try to find a refcount extent that crosses the start */
- error = xfs_refcount_lookup_le(cur, agbno, &have);
+ error = xfs_refcount_lookup_le(cur, XFS_REFC_DOMAIN_SHARED, agbno,
+ &have);
if (error)
goto out_error;
if (!have) {
error = -EFSCORRUPTED;
goto out_error;
}
+ if (tmp.rc_domain != XFS_REFC_DOMAIN_SHARED)
+ goto done;
/* If the extent ends before the start, look at the next one */
if (tmp.rc_startblock + tmp.rc_blockcount <= agbno) {
error = -EFSCORRUPTED;
goto out_error;
}
+ if (tmp.rc_domain != XFS_REFC_DOMAIN_SHARED)
+ goto done;
}
/* If the extent starts after the range we want, bail out */
error = -EFSCORRUPTED;
goto out_error;
}
- if (tmp.rc_startblock >= agbno + aglen ||
+ if (tmp.rc_domain != XFS_REFC_DOMAIN_SHARED ||
+ tmp.rc_startblock >= agbno + aglen ||
tmp.rc_startblock != *fbno + *flen)
break;
*flen = min(*flen + tmp.rc_blockcount, agbno + aglen - *fbno);
return 0;
/* Find any overlapping refcount records */
- error = xfs_refcount_lookup_ge(cur, agbno, &found_rec);
+ error = xfs_refcount_lookup_ge(cur, XFS_REFC_DOMAIN_COW, agbno,
+ &found_rec);
if (error)
goto out_error;
error = xfs_refcount_get_rec(cur, &ext, &found_rec);
if (error)
goto out_error;
+ if (XFS_IS_CORRUPT(cur->bc_mp, found_rec &&
+ ext.rc_domain != XFS_REFC_DOMAIN_COW)) {
+ error = -EFSCORRUPTED;
+ goto out_error;
+ }
if (!found_rec) {
- ext.rc_startblock = cur->bc_mp->m_sb.sb_agblocks +
- XFS_REFC_COW_START;
+ ext.rc_startblock = cur->bc_mp->m_sb.sb_agblocks;
ext.rc_blockcount = 0;
ext.rc_refcount = 0;
+ ext.rc_domain = XFS_REFC_DOMAIN_COW;
}
switch (adj) {
tmp.rc_startblock = agbno;
tmp.rc_blockcount = aglen;
tmp.rc_refcount = 1;
+ tmp.rc_domain = XFS_REFC_DOMAIN_COW;
+
trace_xfs_refcount_modify_extent(cur->bc_mp,
cur->bc_ag.pag->pag_agno, &tmp);
bool shape_changed;
int error;
- agbno += XFS_REFC_COW_START;
-
/*
* Ensure that no rcextents cross the boundary of the adjustment range.
*/
- error = xfs_refcount_split_extent(cur, agbno, &shape_changed);
+ error = xfs_refcount_split_extent(cur, XFS_REFC_DOMAIN_COW,
+ agbno, &shape_changed);
if (error)
goto out_error;
- error = xfs_refcount_split_extent(cur, agbno + aglen, &shape_changed);
+ error = xfs_refcount_split_extent(cur, XFS_REFC_DOMAIN_COW,
+ agbno + aglen, &shape_changed);
if (error)
goto out_error;
/*
* Try to merge with the left or right extents of the range.
*/
- error = xfs_refcount_merge_extents(cur, &agbno, &aglen, adj,
- XFS_FIND_RCEXT_COW, &shape_changed);
+ error = xfs_refcount_merge_extents(cur, XFS_REFC_DOMAIN_COW, &agbno,
+ &aglen, adj, &shape_changed);
if (error)
goto out_error;
be32_to_cpu(rec->refc.rc_refcount) != 1))
return -EFSCORRUPTED;
- rr = kmem_alloc(sizeof(struct xfs_refcount_recovery), 0);
+ rr = kmalloc(sizeof(struct xfs_refcount_recovery),
+ GFP_KERNEL | __GFP_NOFAIL);
+ INIT_LIST_HEAD(&rr->rr_list);
xfs_refcount_btrec_to_irec(rec, &rr->rr_rrec);
- list_add_tail(&rr->rr_list, debris);
+ if (XFS_IS_CORRUPT(cur->bc_mp,
+ rr->rr_rrec.rc_domain != XFS_REFC_DOMAIN_COW)) {
+ kfree(rr);
+ return -EFSCORRUPTED;
+ }
+
+ list_add_tail(&rr->rr_list, debris);
return 0;
}
union xfs_btree_irec low;
union xfs_btree_irec high;
xfs_fsblock_t fsb;
- xfs_agblock_t agbno;
int error;
- if (mp->m_sb.sb_agblocks >= XFS_REFC_COW_START)
+ /* reflink filesystems mustn't have AGs larger than 2^31-1 blocks */
+ BUILD_BUG_ON(XFS_MAX_CRC_AG_BLOCKS >= XFS_REFC_COWFLAG);
+ if (mp->m_sb.sb_agblocks > XFS_MAX_CRC_AG_BLOCKS)
return -EOPNOTSUPP;
INIT_LIST_HEAD(&debris);
/* Find all the leftover CoW staging extents. */
memset(&low, 0, sizeof(low));
memset(&high, 0, sizeof(high));
- low.rc.rc_startblock = XFS_REFC_COW_START;
+ low.rc.rc_domain = high.rc.rc_domain = XFS_REFC_DOMAIN_COW;
high.rc.rc_startblock = -1U;
error = xfs_btree_query_range(cur, &low, &high,
xfs_refcount_recover_extent, &debris);
&rr->rr_rrec);
/* Free the orphan record */
- agbno = rr->rr_rrec.rc_startblock - XFS_REFC_COW_START;
- fsb = XFS_AGB_TO_FSB(mp, pag->pag_agno, agbno);
+ fsb = XFS_AGB_TO_FSB(mp, pag->pag_agno,
+ rr->rr_rrec.rc_startblock);
xfs_refcount_free_cow_extent(tp, fsb,
rr->rr_rrec.rc_blockcount);
goto out_free;
list_del(&rr->rr_list);
- kmem_free(rr);
+ kfree(rr);
}
return error;
/* Free the leftover list */
list_for_each_entry_safe(rr, n, &debris, rr_list) {
list_del(&rr->rr_list);
- kmem_free(rr);
+ kfree(rr);
}
return error;
}
int
xfs_refcount_has_record(
struct xfs_btree_cur *cur,
+ enum xfs_refc_domain domain,
xfs_agblock_t bno,
xfs_extlen_t len,
bool *exists)
low.rc.rc_startblock = bno;
memset(&high, 0xFF, sizeof(high));
high.rc.rc_startblock = bno + len - 1;
+ low.rc.rc_domain = high.rc.rc_domain = domain;
return xfs_btree_has_record(cur, &low, &high, exists);
}
struct xfs_refcount_irec;
extern int xfs_refcount_lookup_le(struct xfs_btree_cur *cur,
- xfs_agblock_t bno, int *stat);
+ enum xfs_refc_domain domain, xfs_agblock_t bno, int *stat);
extern int xfs_refcount_lookup_ge(struct xfs_btree_cur *cur,
- xfs_agblock_t bno, int *stat);
+ enum xfs_refc_domain domain, xfs_agblock_t bno, int *stat);
extern int xfs_refcount_lookup_eq(struct xfs_btree_cur *cur,
- xfs_agblock_t bno, int *stat);
+ enum xfs_refc_domain domain, xfs_agblock_t bno, int *stat);
extern int xfs_refcount_get_rec(struct xfs_btree_cur *cur,
struct xfs_refcount_irec *irec, int *stat);
+static inline uint32_t
+xfs_refcount_encode_startblock(
+ xfs_agblock_t startblock,
+ enum xfs_refc_domain domain)
+{
+ uint32_t start;
+
+ /*
+ * low level btree operations need to handle the generic btree range
+ * query functions (which set rc_domain == -1U), so we check that the
+ * domain is /not/ shared.
+ */
+ start = startblock & ~XFS_REFC_COWFLAG;
+ if (domain != XFS_REFC_DOMAIN_SHARED)
+ start |= XFS_REFC_COWFLAG;
+
+ return start;
+}
+
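A minimal sketch of the decode direction, for symmetry (this mirrors what
xfs_refcount_btrec_to_irec() does above; the helper name is hypothetical):

static inline enum xfs_refc_domain
xfs_refcount_decode_domain(uint32_t start)
{
	/* bit 31 (XFS_REFC_COWFLAG) marks a CoW staging extent */
	if (start & XFS_REFC_COWFLAG)
		return XFS_REFC_DOMAIN_COW;
	return XFS_REFC_DOMAIN_SHARED;
}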
enum xfs_refcount_intent_type {
XFS_REFCOUNT_INCREASE = 1,
XFS_REFCOUNT_DECREASE,
xfs_fsblock_t ri_startblock;
};
+/* Check that the refcount is appropriate for the record domain. */
+static inline bool
+xfs_refcount_check_domain(
+ const struct xfs_refcount_irec *irec)
+{
+ if (irec->rc_domain == XFS_REFC_DOMAIN_COW && irec->rc_refcount != 1)
+ return false;
+ if (irec->rc_domain == XFS_REFC_DOMAIN_SHARED && irec->rc_refcount < 2)
+ return false;
+ return true;
+}
+
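Two hypothetical records illustrate the invariant being checked (a sketch,
not part of the patch):

	/* passes: CoW staging extents always carry a refcount of 1 */
	{ .rc_refcount = 1, .rc_domain = XFS_REFC_DOMAIN_COW }

	/* fails: a shared extent must be referenced at least twice */
	{ .rc_refcount = 1, .rc_domain = XFS_REFC_DOMAIN_SHARED }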
void xfs_refcount_increase_extent(struct xfs_trans *tp,
struct xfs_bmbt_irec *irec);
void xfs_refcount_decrease_extent(struct xfs_trans *tp,
#define XFS_REFCOUNT_ITEM_OVERHEAD 32
extern int xfs_refcount_has_record(struct xfs_btree_cur *cur,
- xfs_agblock_t bno, xfs_extlen_t len, bool *exists);
+ enum xfs_refc_domain domain, xfs_agblock_t bno,
+ xfs_extlen_t len, bool *exists);
union xfs_btree_rec;
extern void xfs_refcount_btrec_to_irec(const union xfs_btree_rec *rec,
struct xfs_refcount_irec *irec);
#include "xfs_btree.h"
#include "xfs_btree_staging.h"
#include "xfs_refcount_btree.h"
+#include "xfs_refcount.h"
#include "xfs_alloc.h"
#include "xfs_error.h"
#include "xfs_trace.h"
struct xfs_btree_cur *cur,
union xfs_btree_rec *rec)
{
- rec->refc.rc_startblock = cpu_to_be32(cur->bc_rec.rc.rc_startblock);
+ const struct xfs_refcount_irec *irec = &cur->bc_rec.rc;
+ uint32_t start;
+
+ start = xfs_refcount_encode_startblock(irec->rc_startblock,
+ irec->rc_domain);
+ rec->refc.rc_startblock = cpu_to_be32(start);
rec->refc.rc_blockcount = cpu_to_be32(cur->bc_rec.rc.rc_blockcount);
rec->refc.rc_refcount = cpu_to_be32(cur->bc_rec.rc.rc_refcount);
}
struct xfs_btree_cur *cur,
const union xfs_btree_key *key)
{
- struct xfs_refcount_irec *rec = &cur->bc_rec.rc;
const struct xfs_refcount_key *kp = &key->refc;
+ const struct xfs_refcount_irec *irec = &cur->bc_rec.rc;
+ uint32_t start;
- return (int64_t)be32_to_cpu(kp->rc_startblock) - rec->rc_startblock;
+ start = xfs_refcount_encode_startblock(irec->rc_startblock,
+ irec->rc_domain);
+ return (int64_t)be32_to_cpu(kp->rc_startblock) - start;
}
STATIC int64_t
goto out_bad_rec;
} else {
/* check for valid extent range, including overflow */
- if (!xfs_verify_agbno(pag, irec->rm_startblock))
- goto out_bad_rec;
- if (irec->rm_startblock >
- irec->rm_startblock + irec->rm_blockcount)
- goto out_bad_rec;
- if (!xfs_verify_agbno(pag,
- irec->rm_startblock + irec->rm_blockcount - 1))
+ if (!xfs_verify_agbext(pag, irec->rm_startblock,
+ irec->rm_blockcount))
goto out_bad_rec;
}
/*
* In renaming files we can modify:
- * the four inodes involved: 4 * inode size
+ * the five inodes involved: 5 * inode size
* the two directory btrees: 2 * (max depth + v2) * dir block size
* the two directory bmap btrees: 2 * max depth * block size
* And the bmap_finish transaction can free dir and bmap blocks (two sets
struct xfs_mount *mp)
{
return XFS_DQUOT_LOGRES(mp) +
- max((xfs_calc_inode_res(mp, 4) +
+ max((xfs_calc_inode_res(mp, 5) +
xfs_calc_buf_res(2 * XFS_DIROP_LOG_COUNT(mp),
XFS_FSB_TO_B(mp, 1))),
(xfs_calc_buf_res(7, mp->m_sb.sb_sectsize) +
xfs_exntst_t br_state; /* extent state */
} xfs_bmbt_irec_t;
+enum xfs_refc_domain {
+ XFS_REFC_DOMAIN_SHARED = 0,
+ XFS_REFC_DOMAIN_COW,
+};
+
+#define XFS_REFC_DOMAIN_STRINGS \
+ { XFS_REFC_DOMAIN_SHARED, "shared" }, \
+ { XFS_REFC_DOMAIN_COW, "cow" }
+
+struct xfs_refcount_irec {
+ xfs_agblock_t rc_startblock; /* starting block number */
+ xfs_extlen_t rc_blockcount; /* count of free blocks */
+ xfs_nlink_t rc_refcount; /* number of inodes linked here */
+ enum xfs_refc_domain rc_domain; /* shared or cow staging extent? */
+};
+
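As an illustration of the new domain field (hypothetical values), a leftover
CoW staging extent of 8 blocks at AG block 100 is now represented as:

	struct xfs_refcount_irec rec = {
		.rc_startblock	= 100,
		.rc_blockcount	= 8,
		.rc_refcount	= 1,	/* always 1 for CoW staging */
		.rc_domain	= XFS_REFC_DOMAIN_COW,
	};

and only the on-disk encoding (xfs_refcount_encode_startblock) folds the
domain back into bit 31 of the startblock.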
+#define XFS_RMAP_ATTR_FORK (1 << 0)
+#define XFS_RMAP_BMBT_BLOCK (1 << 1)
+#define XFS_RMAP_UNWRITTEN (1 << 2)
+#define XFS_RMAP_KEY_FLAGS (XFS_RMAP_ATTR_FORK | \
+ XFS_RMAP_BMBT_BLOCK)
+#define XFS_RMAP_REC_FLAGS (XFS_RMAP_UNWRITTEN)
+struct xfs_rmap_irec {
+ xfs_agblock_t rm_startblock; /* extent start block */
+ xfs_extlen_t rm_blockcount; /* extent length */
+ uint64_t rm_owner; /* extent owner */
+ uint64_t rm_offset; /* offset within the owner */
+ unsigned int rm_flags; /* state flags */
+};
+
/* per-AG block reservation types */
enum xfs_ag_resv_type {
XFS_AG_RESV_NONE = 0,
bno = be32_to_cpu(rec->alloc.ar_startblock);
len = be32_to_cpu(rec->alloc.ar_blockcount);
- if (bno + len <= bno ||
- !xfs_verify_agbno(pag, bno) ||
- !xfs_verify_agbno(pag, bno + len - 1))
+ if (!xfs_verify_agbext(pag, bno, len))
xchk_btree_set_corrupt(bs->sc, bs->cur, 0);
xchk_allocbt_xref(bs->sc, bno, len);
xfs_agblock_t bno;
bno = XFS_AGINO_TO_AGBNO(mp, agino);
- if (bno + len <= bno ||
- !xfs_verify_agbno(pag, bno) ||
- !xfs_verify_agbno(pag, bno + len - 1))
+
+ if (!xfs_verify_agbext(pag, bno, len))
xchk_btree_set_corrupt(bs->sc, bs->cur, 0);
xchk_iallocbt_chunk_xref(bs->sc, irec, agino, bno, len);
STATIC void
xchk_refcountbt_xref_rmap(
struct xfs_scrub *sc,
- xfs_agblock_t bno,
- xfs_extlen_t len,
- xfs_nlink_t refcount)
+ const struct xfs_refcount_irec *irec)
{
struct xchk_refcnt_check refchk = {
- .sc = sc,
- .bno = bno,
- .len = len,
- .refcount = refcount,
+ .sc = sc,
+ .bno = irec->rc_startblock,
+ .len = irec->rc_blockcount,
+ .refcount = irec->rc_refcount,
.seen = 0,
};
struct xfs_rmap_irec low;
/* Cross-reference with the rmapbt to confirm the refcount. */
memset(&low, 0, sizeof(low));
- low.rm_startblock = bno;
+ low.rm_startblock = irec->rc_startblock;
memset(&high, 0xFF, sizeof(high));
- high.rm_startblock = bno + len - 1;
+ high.rm_startblock = irec->rc_startblock + irec->rc_blockcount - 1;
INIT_LIST_HEAD(&refchk.fragments);
error = xfs_rmap_query_range(sc->sa.rmap_cur, &low, &high,
goto out_free;
xchk_refcountbt_process_rmap_fragments(&refchk);
- if (refcount != refchk.seen)
+ if (irec->rc_refcount != refchk.seen)
xchk_btree_xref_set_corrupt(sc, sc->sa.rmap_cur, 0);
out_free:
/* Cross-reference with the other btrees. */
STATIC void
xchk_refcountbt_xref(
- struct xfs_scrub *sc,
- xfs_agblock_t agbno,
- xfs_extlen_t len,
- xfs_nlink_t refcount)
+ struct xfs_scrub *sc,
+ const struct xfs_refcount_irec *irec)
{
if (sc->sm->sm_flags & XFS_SCRUB_OFLAG_CORRUPT)
return;
- xchk_xref_is_used_space(sc, agbno, len);
- xchk_xref_is_not_inode_chunk(sc, agbno, len);
- xchk_refcountbt_xref_rmap(sc, agbno, len, refcount);
+ xchk_xref_is_used_space(sc, irec->rc_startblock, irec->rc_blockcount);
+ xchk_xref_is_not_inode_chunk(sc, irec->rc_startblock,
+ irec->rc_blockcount);
+ xchk_refcountbt_xref_rmap(sc, irec);
}
/* Scrub a refcountbt record. */
struct xchk_btree *bs,
const union xfs_btree_rec *rec)
{
+ struct xfs_refcount_irec irec;
xfs_agblock_t *cow_blocks = bs->private;
struct xfs_perag *pag = bs->cur->bc_ag.pag;
- xfs_agblock_t bno;
- xfs_extlen_t len;
- xfs_nlink_t refcount;
- bool has_cowflag;
- bno = be32_to_cpu(rec->refc.rc_startblock);
- len = be32_to_cpu(rec->refc.rc_blockcount);
- refcount = be32_to_cpu(rec->refc.rc_refcount);
+ xfs_refcount_btrec_to_irec(rec, &irec);
- /* Only CoW records can have refcount == 1. */
- has_cowflag = (bno & XFS_REFC_COW_START);
- if ((refcount == 1 && !has_cowflag) || (refcount != 1 && has_cowflag))
+ /* Check the domain and refcount are not incompatible. */
+ if (!xfs_refcount_check_domain(&irec))
xchk_btree_set_corrupt(bs->sc, bs->cur, 0);
- if (has_cowflag)
- (*cow_blocks) += len;
+
+ if (irec.rc_domain == XFS_REFC_DOMAIN_COW)
+ (*cow_blocks) += irec.rc_blockcount;
/* Check the extent. */
- bno &= ~XFS_REFC_COW_START;
- if (bno + len <= bno ||
- !xfs_verify_agbno(pag, bno) ||
- !xfs_verify_agbno(pag, bno + len - 1))
+ if (!xfs_verify_agbext(pag, irec.rc_startblock, irec.rc_blockcount))
xchk_btree_set_corrupt(bs->sc, bs->cur, 0);
- if (refcount == 0)
+ if (irec.rc_refcount == 0)
xchk_btree_set_corrupt(bs->sc, bs->cur, 0);
- xchk_refcountbt_xref(bs->sc, bno, len, refcount);
+ xchk_refcountbt_xref(bs->sc, &irec);
return 0;
}
xfs_extlen_t len)
{
struct xfs_refcount_irec rc;
- bool has_cowflag;
int has_refcount;
int error;
return;
/* Find the CoW staging extent. */
- error = xfs_refcount_lookup_le(sc->sa.refc_cur,
- agbno + XFS_REFC_COW_START, &has_refcount);
+ error = xfs_refcount_lookup_le(sc->sa.refc_cur, XFS_REFC_DOMAIN_COW,
+ agbno, &has_refcount);
if (!xchk_should_check_xref(sc, &error, &sc->sa.refc_cur))
return;
if (!has_refcount) {
return;
}
- /* CoW flag must be set, refcount must be 1. */
- has_cowflag = (rc.rc_startblock & XFS_REFC_COW_START);
- if (!has_cowflag || rc.rc_refcount != 1)
+ /* CoW lookup returned a shared extent record? */
+ if (rc.rc_domain != XFS_REFC_DOMAIN_COW)
xchk_btree_xref_set_corrupt(sc, sc->sa.refc_cur, 0);
/* Must be at least as long as what was passed in */
if (!sc->sa.refc_cur || xchk_skip_xref(sc->sm))
return;
- error = xfs_refcount_has_record(sc->sa.refc_cur, agbno, len, &shared);
+ error = xfs_refcount_has_record(sc->sa.refc_cur, XFS_REFC_DOMAIN_SHARED,
+ agbno, len, &shared);
if (!xchk_should_check_xref(sc, &error, &sc->sa.refc_cur))
return;
if (shared)
return attrip;
}
-/*
- * Copy an attr format buffer from the given buf, and into the destination attr
- * format structure.
- */
-STATIC int
-xfs_attri_copy_format(
- struct xfs_log_iovec *buf,
- struct xfs_attri_log_format *dst_attr_fmt)
-{
- struct xfs_attri_log_format *src_attr_fmt = buf->i_addr;
- size_t len;
-
- len = sizeof(struct xfs_attri_log_format);
- if (buf->i_len != len) {
- XFS_ERROR_REPORT(__func__, XFS_ERRLEVEL_LOW, NULL);
- return -EFSCORRUPTED;
- }
-
- memcpy((char *)dst_attr_fmt, (char *)src_attr_fmt, len);
- return 0;
-}
-
static inline struct xfs_attrd_log_item *ATTRD_ITEM(struct xfs_log_item *lip)
{
return container_of(lip, struct xfs_attrd_log_item, attrd_item);
struct xfs_attri_log_nameval *nv;
const void *attr_value = NULL;
const void *attr_name;
- int error;
+ size_t len;
attri_formatp = item->ri_buf[0].i_addr;
attr_name = item->ri_buf[1].i_addr;
/* Validate xfs_attri_log_format before the large memory allocation */
+ len = sizeof(struct xfs_attri_log_format);
+ if (item->ri_buf[0].i_len != len) {
+ XFS_CORRUPTION_ERROR(__func__, XFS_ERRLEVEL_LOW, mp,
+ item->ri_buf[0].i_addr, item->ri_buf[0].i_len);
+ return -EFSCORRUPTED;
+ }
+
if (!xfs_attri_validate(mp, attri_formatp)) {
- XFS_ERROR_REPORT(__func__, XFS_ERRLEVEL_LOW, mp);
+ XFS_CORRUPTION_ERROR(__func__, XFS_ERRLEVEL_LOW, mp,
+ item->ri_buf[0].i_addr, item->ri_buf[0].i_len);
+ return -EFSCORRUPTED;
+ }
+
+ /* Validate the attr name */
+ if (item->ri_buf[1].i_len !=
+ xlog_calc_iovec_len(attri_formatp->alfi_name_len)) {
+ XFS_CORRUPTION_ERROR(__func__, XFS_ERRLEVEL_LOW, mp,
+ item->ri_buf[0].i_addr, item->ri_buf[0].i_len);
return -EFSCORRUPTED;
}
if (!xfs_attr_namecheck(attr_name, attri_formatp->alfi_name_len)) {
- XFS_ERROR_REPORT(__func__, XFS_ERRLEVEL_LOW, mp);
+ XFS_CORRUPTION_ERROR(__func__, XFS_ERRLEVEL_LOW, mp,
+ item->ri_buf[1].i_addr, item->ri_buf[1].i_len);
return -EFSCORRUPTED;
}
- if (attri_formatp->alfi_value_len)
+ /* Validate the attr value, if present */
+ if (attri_formatp->alfi_value_len != 0) {
+ if (item->ri_buf[2].i_len != xlog_calc_iovec_len(attri_formatp->alfi_value_len)) {
+ XFS_CORRUPTION_ERROR(__func__, XFS_ERRLEVEL_LOW, mp,
+ item->ri_buf[0].i_addr,
+ item->ri_buf[0].i_len);
+ return -EFSCORRUPTED;
+ }
+
attr_value = item->ri_buf[2].i_addr;
+ }
/*
* Memory alloc failure will cause replay to abort. We attach the
attri_formatp->alfi_value_len);
attrip = xfs_attri_init(mp, nv);
- error = xfs_attri_copy_format(&item->ri_buf[0], &attrip->attri_format);
- if (error)
- goto out;
+ memcpy(&attrip->attri_format, attri_formatp, len);
/*
* The ATTRI has two references. One for the ATTRD and one for ATTRI to
xfs_attri_release(attrip);
xfs_attri_log_nameval_put(nv);
return 0;
-out:
- xfs_attri_item_free(attrip);
- xfs_attri_log_nameval_put(nv);
- return error;
}
/*
attrd_formatp = item->ri_buf[0].i_addr;
if (item->ri_buf[0].i_len != sizeof(struct xfs_attrd_log_format)) {
- XFS_ERROR_REPORT(__func__, XFS_ERRLEVEL_LOW, NULL);
+ XFS_CORRUPTION_ERROR(__func__, XFS_ERRLEVEL_LOW, log->l_mp,
+ item->ri_buf[0].i_addr, item->ri_buf[0].i_len);
return -EFSCORRUPTED;
}
.iop_relog = xfs_bui_item_relog,
};
-/*
- * Copy an BUI format buffer from the given buf, and into the destination
- * BUI format structure. The BUI/BUD items were designed not to need any
- * special alignment handling.
- */
-static int
+static inline void
xfs_bui_copy_format(
- struct xfs_log_iovec *buf,
- struct xfs_bui_log_format *dst_bui_fmt)
+ struct xfs_bui_log_format *dst,
+ const struct xfs_bui_log_format *src)
{
- struct xfs_bui_log_format *src_bui_fmt;
- uint len;
+ unsigned int i;
- src_bui_fmt = buf->i_addr;
- len = xfs_bui_log_format_sizeof(src_bui_fmt->bui_nextents);
+ memcpy(dst, src, offsetof(struct xfs_bui_log_format, bui_extents));
- if (buf->i_len == len) {
- memcpy(dst_bui_fmt, src_bui_fmt, len);
- return 0;
- }
- XFS_ERROR_REPORT(__func__, XFS_ERRLEVEL_LOW, NULL);
- return -EFSCORRUPTED;
+ for (i = 0; i < src->bui_nextents; i++)
+ memcpy(&dst->bui_extents[i], &src->bui_extents[i],
+ sizeof(struct xfs_map_extent));
}
/*
struct xlog_recover_item *item,
xfs_lsn_t lsn)
{
- int error;
struct xfs_mount *mp = log->l_mp;
struct xfs_bui_log_item *buip;
struct xfs_bui_log_format *bui_formatp;
+ size_t len;
bui_formatp = item->ri_buf[0].i_addr;
+ if (item->ri_buf[0].i_len < xfs_bui_log_format_sizeof(0)) {
+ XFS_CORRUPTION_ERROR(__func__, XFS_ERRLEVEL_LOW, mp,
+ item->ri_buf[0].i_addr, item->ri_buf[0].i_len);
+ return -EFSCORRUPTED;
+ }
+
if (bui_formatp->bui_nextents != XFS_BUI_MAX_FAST_EXTENTS) {
- XFS_ERROR_REPORT(__func__, XFS_ERRLEVEL_LOW, log->l_mp);
+ XFS_CORRUPTION_ERROR(__func__, XFS_ERRLEVEL_LOW, mp,
+ item->ri_buf[0].i_addr, item->ri_buf[0].i_len);
return -EFSCORRUPTED;
}
- buip = xfs_bui_init(mp);
- error = xfs_bui_copy_format(&item->ri_buf[0], &buip->bui_format);
- if (error) {
- xfs_bui_item_free(buip);
- return error;
+
+ len = xfs_bui_log_format_sizeof(bui_formatp->bui_nextents);
+ if (item->ri_buf[0].i_len != len) {
+ XFS_CORRUPTION_ERROR(__func__, XFS_ERRLEVEL_LOW, mp,
+ item->ri_buf[0].i_addr, item->ri_buf[0].i_len);
+ return -EFSCORRUPTED;
}
+
+ buip = xfs_bui_init(mp);
+ xfs_bui_copy_format(&buip->bui_format, bui_formatp);
atomic_set(&buip->bui_next_extent, bui_formatp->bui_nextents);
/*
* Insert the intent into the AIL directly and drop one reference so
bud_formatp = item->ri_buf[0].i_addr;
if (item->ri_buf[0].i_len != sizeof(struct xfs_bud_log_format)) {
- XFS_ERROR_REPORT(__func__, XFS_ERRLEVEL_LOW, log->l_mp);
+ XFS_CORRUPTION_ERROR(__func__, XFS_ERRLEVEL_LOW, log->l_mp,
+ item->ri_buf[0].i_addr, item->ri_buf[0].i_len);
return -EFSCORRUPTED;
}
xfs_errortag_init(
struct xfs_mount *mp)
{
+ int ret;
+
mp->m_errortag = kmem_zalloc(sizeof(unsigned int) * XFS_ERRTAG_MAX,
KM_MAYFAIL);
if (!mp->m_errortag)
return -ENOMEM;
- return xfs_sysfs_init(&mp->m_errortag_kobj, &xfs_errortag_ktype,
- &mp->m_kobj, "errortag");
+ ret = xfs_sysfs_init(&mp->m_errortag_kobj, &xfs_errortag_ktype,
+ &mp->m_kobj, "errortag");
+ if (ret)
+ kmem_free(mp->m_errortag);
+ return ret;
}
void
xfs_efi_item_free(efip);
}
-/*
- * This returns the number of iovecs needed to log the given efi item.
- * We only need 1 iovec for an efi item. It just logs the efi_log_format
- * structure.
- */
-static inline int
-xfs_efi_item_sizeof(
- struct xfs_efi_log_item *efip)
-{
- return sizeof(struct xfs_efi_log_format) +
- (efip->efi_format.efi_nextents - 1) * sizeof(xfs_extent_t);
-}
-
STATIC void
xfs_efi_item_size(
struct xfs_log_item *lip,
int *nvecs,
int *nbytes)
{
+ struct xfs_efi_log_item *efip = EFI_ITEM(lip);
+
*nvecs += 1;
- *nbytes += xfs_efi_item_sizeof(EFI_ITEM(lip));
+ *nbytes += xfs_efi_log_format_sizeof(efip->efi_format.efi_nextents);
}
/*
xlog_copy_iovec(lv, &vecp, XLOG_REG_TYPE_EFI_FORMAT,
&efip->efi_format,
- xfs_efi_item_sizeof(efip));
+ xfs_efi_log_format_sizeof(efip->efi_format.efi_nextents));
}
{
struct xfs_efi_log_item *efip;
- uint size;
ASSERT(nextents > 0);
if (nextents > XFS_EFI_MAX_FAST_EXTENTS) {
- size = (uint)(sizeof(struct xfs_efi_log_item) +
- ((nextents - 1) * sizeof(xfs_extent_t)));
- efip = kmem_zalloc(size, 0);
+ efip = kzalloc(xfs_efi_log_item_sizeof(nextents),
+ GFP_KERNEL | __GFP_NOFAIL);
} else {
efip = kmem_cache_zalloc(xfs_efi_cache,
GFP_KERNEL | __GFP_NOFAIL);
{
xfs_efi_log_format_t *src_efi_fmt = buf->i_addr;
uint i;
- uint len = sizeof(xfs_efi_log_format_t) +
- (src_efi_fmt->efi_nextents - 1) * sizeof(xfs_extent_t);
- uint len32 = sizeof(xfs_efi_log_format_32_t) +
- (src_efi_fmt->efi_nextents - 1) * sizeof(xfs_extent_32_t);
- uint len64 = sizeof(xfs_efi_log_format_64_t) +
- (src_efi_fmt->efi_nextents - 1) * sizeof(xfs_extent_64_t);
+ uint len = xfs_efi_log_format_sizeof(src_efi_fmt->efi_nextents);
+ uint len32 = xfs_efi_log_format32_sizeof(src_efi_fmt->efi_nextents);
+ uint len64 = xfs_efi_log_format64_sizeof(src_efi_fmt->efi_nextents);
if (buf->i_len == len) {
- memcpy((char *)dst_efi_fmt, (char*)src_efi_fmt, len);
+ memcpy(dst_efi_fmt, src_efi_fmt,
+ offsetof(struct xfs_efi_log_format, efi_extents));
+ for (i = 0; i < src_efi_fmt->efi_nextents; i++)
+ memcpy(&dst_efi_fmt->efi_extents[i],
+ &src_efi_fmt->efi_extents[i],
+ sizeof(struct xfs_extent));
return 0;
} else if (buf->i_len == len32) {
xfs_efi_log_format_32_t *src_efi_fmt_32 = buf->i_addr;
}
return 0;
}
- XFS_ERROR_REPORT(__func__, XFS_ERRLEVEL_LOW, NULL);
+ XFS_CORRUPTION_ERROR(__func__, XFS_ERRLEVEL_LOW, NULL, buf->i_addr,
+ buf->i_len);
return -EFSCORRUPTED;
}
kmem_cache_free(xfs_efd_cache, efdp);
}
-/*
- * This returns the number of iovecs needed to log the given efd item.
- * We only need 1 iovec for an efd item. It just logs the efd_log_format
- * structure.
- */
-static inline int
-xfs_efd_item_sizeof(
- struct xfs_efd_log_item *efdp)
-{
- return sizeof(xfs_efd_log_format_t) +
- (efdp->efd_format.efd_nextents - 1) * sizeof(xfs_extent_t);
-}
-
STATIC void
xfs_efd_item_size(
struct xfs_log_item *lip,
int *nvecs,
int *nbytes)
{
+ struct xfs_efd_log_item *efdp = EFD_ITEM(lip);
+
*nvecs += 1;
- *nbytes += xfs_efd_item_sizeof(EFD_ITEM(lip));
+ *nbytes += xfs_efd_log_format_sizeof(efdp->efd_format.efd_nextents);
}
/*
xlog_copy_iovec(lv, &vecp, XLOG_REG_TYPE_EFD_FORMAT,
&efdp->efd_format,
- xfs_efd_item_sizeof(efdp));
+ xfs_efd_log_format_sizeof(efdp->efd_format.efd_nextents));
}
/*
ASSERT(nextents > 0);
if (nextents > XFS_EFD_MAX_FAST_EXTENTS) {
- efdp = kmem_zalloc(sizeof(struct xfs_efd_log_item) +
- (nextents - 1) * sizeof(struct xfs_extent),
- 0);
+ efdp = kzalloc(xfs_efd_log_item_sizeof(nextents),
+ GFP_KERNEL | __GFP_NOFAIL);
} else {
efdp = kmem_cache_zalloc(xfs_efd_cache,
GFP_KERNEL | __GFP_NOFAIL);
efi_formatp = item->ri_buf[0].i_addr;
+ if (item->ri_buf[0].i_len < xfs_efi_log_format_sizeof(0)) {
+ XFS_CORRUPTION_ERROR(__func__, XFS_ERRLEVEL_LOW, mp,
+ item->ri_buf[0].i_addr, item->ri_buf[0].i_len);
+ return -EFSCORRUPTED;
+ }
+
efip = xfs_efi_init(mp, efi_formatp->efi_nextents);
error = xfs_efi_copy_format(&item->ri_buf[0], &efip->efi_format);
if (error) {
xfs_lsn_t lsn)
{
struct xfs_efd_log_format *efd_formatp;
+ int buflen = item->ri_buf[0].i_len;
efd_formatp = item->ri_buf[0].i_addr;
- ASSERT((item->ri_buf[0].i_len == (sizeof(xfs_efd_log_format_32_t) +
- ((efd_formatp->efd_nextents - 1) * sizeof(xfs_extent_32_t)))) ||
- (item->ri_buf[0].i_len == (sizeof(xfs_efd_log_format_64_t) +
- ((efd_formatp->efd_nextents - 1) * sizeof(xfs_extent_64_t)))));
+
+ if (buflen < sizeof(struct xfs_efd_log_format)) {
+ XFS_CORRUPTION_ERROR(__func__, XFS_ERRLEVEL_LOW, log->l_mp,
+ efd_formatp, buflen);
+ return -EFSCORRUPTED;
+ }
+
+ if (item->ri_buf[0].i_len != xfs_efd_log_format32_sizeof(
+ efd_formatp->efd_nextents) &&
+ item->ri_buf[0].i_len != xfs_efd_log_format64_sizeof(
+ efd_formatp->efd_nextents)) {
+ XFS_CORRUPTION_ERROR(__func__, XFS_ERRLEVEL_LOW, log->l_mp,
+ efd_formatp, buflen);
+ return -EFSCORRUPTED;
+ }
xlog_recover_release_intent(log, XFS_LI_EFI, efd_formatp->efd_efi_id);
return 0;
xfs_efi_log_format_t efi_format;
};
+static inline size_t
+xfs_efi_log_item_sizeof(
+ unsigned int nr)
+{
+ return offsetof(struct xfs_efi_log_item, efi_format) +
+ xfs_efi_log_format_sizeof(nr);
+}
+
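Concretely, on a 64-bit build an in-memory EFI item tracking three extents
occupies offsetof(struct xfs_efi_log_item, efi_format) +
xfs_efi_log_format_sizeof(3), i.e. the item header plus 16 + 3 * 16 bytes
of format; this is what xfs_efi_init() now passes to kzalloc() when
nextents exceeds XFS_EFI_MAX_FAST_EXTENTS.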
/*
* This is the "extent free done" log item. It is used to log
* the fact that some extents earlier mentioned in an efi item
xfs_efd_log_format_t efd_format;
};
+static inline size_t
+xfs_efd_log_item_sizeof(
+ unsigned int nr)
+{
+ return offsetof(struct xfs_efd_log_item, efd_format) +
+ xfs_efd_log_format_sizeof(nr);
+}
+
/*
* Max number of extents in fast allocation path.
*/
}
#ifdef CONFIG_FS_DAX
-static int
+static inline vm_fault_t
xfs_dax_fault(
struct vm_fault *vmf,
enum page_entry_size pe_size,
&xfs_read_iomap_ops);
}
#else
-static int
+static inline vm_fault_t
xfs_dax_fault(
struct vm_fault *vmf,
enum page_entry_size pe_size,
bool write_fault,
pfn_t *pfn)
{
- return 0;
+ ASSERT(0);
+ return VM_FAULT_SIGBUS;
}
#endif
* Lock all the participating inodes. Depending upon whether
* the target_name exists in the target directory, and
* whether the target directory is the same as the source
- * directory, we can lock from 2 to 4 inodes.
+ * directory, we can lock from 2 to 5 inodes.
*/
xfs_lock_inodes(inodes, num_inodes, XFS_ILOCK_EXCL);
for (lip = xfs_trans_ail_cursor_first(ailp, &cur, 0);
lip != NULL;
lip = xfs_trans_ail_cursor_next(ailp, &cur)) {
+ const struct xfs_item_ops *ops;
+
if (!xlog_item_is_intent(lip))
break;
* deferred ops, you /must/ attach them to the capture list in
* the recover routine or else those subsequent intents will be
* replayed in the wrong order!
+ *
+ * The recovery function can free the log item, so we must not
+ * access lip after it returns.
*/
spin_unlock(&ailp->ail_lock);
- error = lip->li_ops->iop_recover(lip, &capture_list);
+ ops = lip->li_ops;
+ error = ops->iop_recover(lip, &capture_list);
spin_lock(&ailp->ail_lock);
if (error) {
trace_xlog_intent_recovery_failed(log->l_mp, error,
- lip->li_ops->iop_recover);
+ ops->iop_recover);
break;
}
}
/* log structures */
XFS_CHECK_STRUCT_SIZE(struct xfs_buf_log_format, 88);
XFS_CHECK_STRUCT_SIZE(struct xfs_dq_logformat, 24);
- XFS_CHECK_STRUCT_SIZE(struct xfs_efd_log_format_32, 28);
- XFS_CHECK_STRUCT_SIZE(struct xfs_efd_log_format_64, 32);
- XFS_CHECK_STRUCT_SIZE(struct xfs_efi_log_format_32, 28);
- XFS_CHECK_STRUCT_SIZE(struct xfs_efi_log_format_64, 32);
+ XFS_CHECK_STRUCT_SIZE(struct xfs_efd_log_format_32, 16);
+ XFS_CHECK_STRUCT_SIZE(struct xfs_efd_log_format_64, 16);
+ XFS_CHECK_STRUCT_SIZE(struct xfs_efi_log_format_32, 16);
+ XFS_CHECK_STRUCT_SIZE(struct xfs_efi_log_format_64, 16);
XFS_CHECK_STRUCT_SIZE(struct xfs_extent_32, 12);
XFS_CHECK_STRUCT_SIZE(struct xfs_extent_64, 16);
XFS_CHECK_STRUCT_SIZE(struct xfs_log_dinode, 176);
XFS_CHECK_STRUCT_SIZE(struct xfs_trans_header, 16);
XFS_CHECK_STRUCT_SIZE(struct xfs_attri_log_format, 40);
XFS_CHECK_STRUCT_SIZE(struct xfs_attrd_log_format, 16);
+ XFS_CHECK_STRUCT_SIZE(struct xfs_bui_log_format, 16);
+ XFS_CHECK_STRUCT_SIZE(struct xfs_bud_log_format, 16);
+ XFS_CHECK_STRUCT_SIZE(struct xfs_cui_log_format, 16);
+ XFS_CHECK_STRUCT_SIZE(struct xfs_cud_log_format, 16);
+ XFS_CHECK_STRUCT_SIZE(struct xfs_rui_log_format, 16);
+ XFS_CHECK_STRUCT_SIZE(struct xfs_rud_log_format, 16);
+ XFS_CHECK_STRUCT_SIZE(struct xfs_map_extent, 32);
+ XFS_CHECK_STRUCT_SIZE(struct xfs_phys_extent, 16);
+
+ XFS_CHECK_OFFSET(struct xfs_bui_log_format, bui_extents, 16);
+ XFS_CHECK_OFFSET(struct xfs_cui_log_format, cui_extents, 16);
+ XFS_CHECK_OFFSET(struct xfs_rui_log_format, rui_extents, 16);
+ XFS_CHECK_OFFSET(struct xfs_efi_log_format, efi_extents, 16);
+ XFS_CHECK_OFFSET(struct xfs_efi_log_format_32, efi_extents, 16);
+ XFS_CHECK_OFFSET(struct xfs_efi_log_format_64, efi_extents, 16);
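	/*
	 * With the flex-array conversion, sizeof() no longer counts one
	 * trailing extent, so the EFI/EFD format headers shrink from
	 * 28/32 bytes to 16 while the extents arrays still begin at
	 * offset 16: the layout of a populated record in the log is
	 * unchanged, which is exactly what these checks pin down.
	 */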
/*
* The v5 superblock format extended several v4 header structures with
type = refc_type;
break;
default:
- XFS_ERROR_REPORT(__func__, XFS_ERRLEVEL_LOW, mp);
+ XFS_CORRUPTION_ERROR(__func__, XFS_ERRLEVEL_LOW, mp,
+ &cuip->cui_format,
+ sizeof(cuip->cui_format));
error = -EFSCORRUPTED;
goto abort_error;
}
&new_fsb, &new_len, &rcur);
if (error == -EFSCORRUPTED)
XFS_CORRUPTION_ERROR(__func__, XFS_ERRLEVEL_LOW, mp,
- refc, sizeof(*refc));
+ &cuip->cui_format,
+ sizeof(cuip->cui_format));
if (error)
goto abort_error;
.iop_relog = xfs_cui_item_relog,
};
-/*
- * Copy an CUI format buffer from the given buf, and into the destination
- * CUI format structure. The CUI/CUD items were designed not to need any
- * special alignment handling.
- */
-static int
+static inline void
xfs_cui_copy_format(
- struct xfs_log_iovec *buf,
- struct xfs_cui_log_format *dst_cui_fmt)
+ struct xfs_cui_log_format *dst,
+ const struct xfs_cui_log_format *src)
{
- struct xfs_cui_log_format *src_cui_fmt;
- uint len;
+ unsigned int i;
- src_cui_fmt = buf->i_addr;
- len = xfs_cui_log_format_sizeof(src_cui_fmt->cui_nextents);
+ memcpy(dst, src, offsetof(struct xfs_cui_log_format, cui_extents));
- if (buf->i_len == len) {
- memcpy(dst_cui_fmt, src_cui_fmt, len);
- return 0;
- }
- XFS_ERROR_REPORT(__func__, XFS_ERRLEVEL_LOW, NULL);
- return -EFSCORRUPTED;
+ for (i = 0; i < src->cui_nextents; i++)
+ memcpy(&dst->cui_extents[i], &src->cui_extents[i],
+ sizeof(struct xfs_phys_extent));
}
/*
struct xlog_recover_item *item,
xfs_lsn_t lsn)
{
- int error;
struct xfs_mount *mp = log->l_mp;
struct xfs_cui_log_item *cuip;
struct xfs_cui_log_format *cui_formatp;
+ size_t len;
cui_formatp = item->ri_buf[0].i_addr;
- cuip = xfs_cui_init(mp, cui_formatp->cui_nextents);
- error = xfs_cui_copy_format(&item->ri_buf[0], &cuip->cui_format);
- if (error) {
- xfs_cui_item_free(cuip);
- return error;
+ if (item->ri_buf[0].i_len < xfs_cui_log_format_sizeof(0)) {
+ XFS_CORRUPTION_ERROR(__func__, XFS_ERRLEVEL_LOW, mp,
+ item->ri_buf[0].i_addr, item->ri_buf[0].i_len);
+ return -EFSCORRUPTED;
}
+
+ len = xfs_cui_log_format_sizeof(cui_formatp->cui_nextents);
+ if (item->ri_buf[0].i_len != len) {
+ XFS_CORRUPTION_ERROR(__func__, XFS_ERRLEVEL_LOW, mp,
+ item->ri_buf[0].i_addr, item->ri_buf[0].i_len);
+ return -EFSCORRUPTED;
+ }
+
+ cuip = xfs_cui_init(mp, cui_formatp->cui_nextents);
+ xfs_cui_copy_format(&cuip->cui_format, cui_formatp);
atomic_set(&cuip->cui_next_extent, cui_formatp->cui_nextents);
/*
* Insert the intent into the AIL directly and drop one reference so
cud_formatp = item->ri_buf[0].i_addr;
if (item->ri_buf[0].i_len != sizeof(struct xfs_cud_log_format)) {
- XFS_ERROR_REPORT(__func__, XFS_ERRLEVEL_LOW, log->l_mp);
+ XFS_CORRUPTION_ERROR(__func__, XFS_ERRLEVEL_LOW, log->l_mp,
+ item->ri_buf[0].i_addr, item->ri_buf[0].i_len);
return -EFSCORRUPTED;
}
return ruip;
}
-/*
- * Copy an RUI format buffer from the given buf, and into the destination
- * RUI format structure. The RUI/RUD items were designed not to need any
- * special alignment handling.
- */
-STATIC int
-xfs_rui_copy_format(
- struct xfs_log_iovec *buf,
- struct xfs_rui_log_format *dst_rui_fmt)
-{
- struct xfs_rui_log_format *src_rui_fmt;
- uint len;
-
- src_rui_fmt = buf->i_addr;
- len = xfs_rui_log_format_sizeof(src_rui_fmt->rui_nextents);
-
- if (buf->i_len != len) {
- XFS_ERROR_REPORT(__func__, XFS_ERRLEVEL_LOW, NULL);
- return -EFSCORRUPTED;
- }
-
- memcpy(dst_rui_fmt, src_rui_fmt, len);
- return 0;
-}
-
static inline struct xfs_rud_log_item *RUD_ITEM(struct xfs_log_item *lip)
{
return container_of(lip, struct xfs_rud_log_item, rud_item);
type = XFS_RMAP_FREE;
break;
default:
- XFS_ERROR_REPORT(__func__, XFS_ERRLEVEL_LOW, NULL);
+ XFS_CORRUPTION_ERROR(__func__, XFS_ERRLEVEL_LOW, mp,
+ &ruip->rui_format,
+ sizeof(ruip->rui_format));
error = -EFSCORRUPTED;
goto abort_error;
}
.iop_relog = xfs_rui_item_relog,
};
+static inline void
+xfs_rui_copy_format(
+ struct xfs_rui_log_format *dst,
+ const struct xfs_rui_log_format *src)
+{
+ unsigned int i;
+
+ memcpy(dst, src, offsetof(struct xfs_rui_log_format, rui_extents));
+
+ for (i = 0; i < src->rui_nextents; i++)
+ memcpy(&dst->rui_extents[i], &src->rui_extents[i],
+ sizeof(struct xfs_map_extent));
+}
+
/*
* This routine is called to create an in-core extent rmap update
* item from the rui format structure which was logged on disk.
struct xlog_recover_item *item,
xfs_lsn_t lsn)
{
- int error;
struct xfs_mount *mp = log->l_mp;
struct xfs_rui_log_item *ruip;
struct xfs_rui_log_format *rui_formatp;
+ size_t len;
rui_formatp = item->ri_buf[0].i_addr;
- ruip = xfs_rui_init(mp, rui_formatp->rui_nextents);
- error = xfs_rui_copy_format(&item->ri_buf[0], &ruip->rui_format);
- if (error) {
- xfs_rui_item_free(ruip);
- return error;
+ if (item->ri_buf[0].i_len < xfs_rui_log_format_sizeof(0)) {
+ XFS_CORRUPTION_ERROR(__func__, XFS_ERRLEVEL_LOW, mp,
+ item->ri_buf[0].i_addr, item->ri_buf[0].i_len);
+ return -EFSCORRUPTED;
+ }
+
+ len = xfs_rui_log_format_sizeof(rui_formatp->rui_nextents);
+ if (item->ri_buf[0].i_len != len) {
+ XFS_CORRUPTION_ERROR(__func__, XFS_ERRLEVEL_LOW, mp,
+ item->ri_buf[0].i_addr, item->ri_buf[0].i_len);
+ return -EFSCORRUPTED;
}
+
+ ruip = xfs_rui_init(mp, rui_formatp->rui_nextents);
+ xfs_rui_copy_format(&ruip->rui_format, rui_formatp);
atomic_set(&ruip->rui_next_extent, rui_formatp->rui_nextents);
/*
* Insert the intent into the AIL directly and drop one reference so
struct xfs_rud_log_format *rud_formatp;
rud_formatp = item->ri_buf[0].i_addr;
- ASSERT(item->ri_buf[0].i_len == sizeof(struct xfs_rud_log_format));
+ if (item->ri_buf[0].i_len != sizeof(struct xfs_rud_log_format)) {
+ XFS_CORRUPTION_ERROR(__func__, XFS_ERRLEVEL_LOW, log->l_mp,
+ rud_formatp, item->ri_buf[0].i_len);
+ return -EFSCORRUPTED;
+ }
xlog_recover_release_intent(log, XFS_LI_RUI, rud_formatp->rud_rui_id);
return 0;
goto out_destroy_trans_cache;
xfs_efd_cache = kmem_cache_create("xfs_efd_item",
- (sizeof(struct xfs_efd_log_item) +
- (XFS_EFD_MAX_FAST_EXTENTS - 1) *
- sizeof(struct xfs_extent)),
- 0, 0, NULL);
+ xfs_efd_log_item_sizeof(XFS_EFD_MAX_FAST_EXTENTS),
+ 0, 0, NULL);
if (!xfs_efd_cache)
goto out_destroy_buf_item_cache;
xfs_efi_cache = kmem_cache_create("xfs_efi_item",
- (sizeof(struct xfs_efi_log_item) +
- (XFS_EFI_MAX_FAST_EXTENTS - 1) *
- sizeof(struct xfs_extent)),
- 0, 0, NULL);
+ xfs_efi_log_item_sizeof(XFS_EFI_MAX_FAST_EXTENTS),
+ 0, 0, NULL);
if (!xfs_efi_cache)
goto out_destroy_efd_cache;
const char *name)
{
struct kobject *parent;
+ int err;
parent = parent_kobj ? &parent_kobj->kobject : NULL;
init_completion(&kobj->complete);
- return kobject_init_and_add(&kobj->kobject, ktype, parent, "%s", name);
+ err = kobject_init_and_add(&kobj->kobject, ktype, parent, "%s", name);
+ if (err)
+ kobject_put(&kobj->kobject);
+
+ return err;
}
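Worth remembering about this helper: kobject_init_and_add() initializes the kobject even when it fails, so the only safe error path is kobject_put(), which drops the reference and runs the ktype's release callback; freeing the object directly would skip that hook. A minimal sketch (my_obj and my_ktype are illustrative):

    err = kobject_init_and_add(&my_obj->kobj, &my_ktype, parent, "%s", name);
    if (err) {
            kobject_put(&my_obj->kobj);     /* triggers my_ktype's release */
            return err;
    }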
static inline void
TRACE_DEFINE_ENUM(PE_SIZE_PMD);
TRACE_DEFINE_ENUM(PE_SIZE_PUD);
+TRACE_DEFINE_ENUM(XFS_REFC_DOMAIN_SHARED);
+TRACE_DEFINE_ENUM(XFS_REFC_DOMAIN_COW);
+
TRACE_EVENT(xfs_filemap_fault,
TP_PROTO(struct xfs_inode *ip, enum page_entry_size pe_size,
bool write_fault),
TP_STRUCT__entry(
__field(dev_t, dev)
__field(xfs_agnumber_t, agno)
+ __field(enum xfs_refc_domain, domain)
__field(xfs_agblock_t, startblock)
__field(xfs_extlen_t, blockcount)
__field(xfs_nlink_t, refcount)
TP_fast_assign(
__entry->dev = mp->m_super->s_dev;
__entry->agno = agno;
+ __entry->domain = irec->rc_domain;
__entry->startblock = irec->rc_startblock;
__entry->blockcount = irec->rc_blockcount;
__entry->refcount = irec->rc_refcount;
),
- TP_printk("dev %d:%d agno 0x%x agbno 0x%x fsbcount 0x%x refcount %u",
+ TP_printk("dev %d:%d agno 0x%x dom %s agbno 0x%x fsbcount 0x%x refcount %u",
MAJOR(__entry->dev), MINOR(__entry->dev),
__entry->agno,
+ __print_symbolic(__entry->domain, XFS_REFC_DOMAIN_STRINGS),
__entry->startblock,
__entry->blockcount,
__entry->refcount)
TP_STRUCT__entry(
__field(dev_t, dev)
__field(xfs_agnumber_t, agno)
+ __field(enum xfs_refc_domain, domain)
__field(xfs_agblock_t, startblock)
__field(xfs_extlen_t, blockcount)
__field(xfs_nlink_t, refcount)
TP_fast_assign(
__entry->dev = mp->m_super->s_dev;
__entry->agno = agno;
+ __entry->domain = irec->rc_domain;
__entry->startblock = irec->rc_startblock;
__entry->blockcount = irec->rc_blockcount;
__entry->refcount = irec->rc_refcount;
__entry->agbno = agbno;
),
- TP_printk("dev %d:%d agno 0x%x agbno 0x%x fsbcount 0x%x refcount %u @ agbno 0x%x",
+ TP_printk("dev %d:%d agno 0x%x dom %s agbno 0x%x fsbcount 0x%x refcount %u @ agbno 0x%x",
MAJOR(__entry->dev), MINOR(__entry->dev),
__entry->agno,
+ __print_symbolic(__entry->domain, XFS_REFC_DOMAIN_STRINGS),
__entry->startblock,
__entry->blockcount,
__entry->refcount,
TP_STRUCT__entry(
__field(dev_t, dev)
__field(xfs_agnumber_t, agno)
+ __field(enum xfs_refc_domain, i1_domain)
__field(xfs_agblock_t, i1_startblock)
__field(xfs_extlen_t, i1_blockcount)
__field(xfs_nlink_t, i1_refcount)
+ __field(enum xfs_refc_domain, i2_domain)
__field(xfs_agblock_t, i2_startblock)
__field(xfs_extlen_t, i2_blockcount)
__field(xfs_nlink_t, i2_refcount)
TP_fast_assign(
__entry->dev = mp->m_super->s_dev;
__entry->agno = agno;
+ __entry->i1_domain = i1->rc_domain;
__entry->i1_startblock = i1->rc_startblock;
__entry->i1_blockcount = i1->rc_blockcount;
__entry->i1_refcount = i1->rc_refcount;
+ __entry->i2_domain = i2->rc_domain;
__entry->i2_startblock = i2->rc_startblock;
__entry->i2_blockcount = i2->rc_blockcount;
__entry->i2_refcount = i2->rc_refcount;
),
- TP_printk("dev %d:%d agno 0x%x agbno 0x%x fsbcount 0x%x refcount %u -- "
- "agbno 0x%x fsbcount 0x%x refcount %u",
+ TP_printk("dev %d:%d agno 0x%x dom %s agbno 0x%x fsbcount 0x%x refcount %u -- "
+ "dom %s agbno 0x%x fsbcount 0x%x refcount %u",
MAJOR(__entry->dev), MINOR(__entry->dev),
__entry->agno,
+ __print_symbolic(__entry->i1_domain, XFS_REFC_DOMAIN_STRINGS),
__entry->i1_startblock,
__entry->i1_blockcount,
__entry->i1_refcount,
+ __print_symbolic(__entry->i2_domain, XFS_REFC_DOMAIN_STRINGS),
__entry->i2_startblock,
__entry->i2_blockcount,
__entry->i2_refcount)
TP_STRUCT__entry(
__field(dev_t, dev)
__field(xfs_agnumber_t, agno)
+ __field(enum xfs_refc_domain, i1_domain)
__field(xfs_agblock_t, i1_startblock)
__field(xfs_extlen_t, i1_blockcount)
__field(xfs_nlink_t, i1_refcount)
+ __field(enum xfs_refc_domain, i2_domain)
__field(xfs_agblock_t, i2_startblock)
__field(xfs_extlen_t, i2_blockcount)
__field(xfs_nlink_t, i2_refcount)
TP_fast_assign(
__entry->dev = mp->m_super->s_dev;
__entry->agno = agno;
+ __entry->i1_domain = i1->rc_domain;
__entry->i1_startblock = i1->rc_startblock;
__entry->i1_blockcount = i1->rc_blockcount;
__entry->i1_refcount = i1->rc_refcount;
+ __entry->i2_domain = i2->rc_domain;
__entry->i2_startblock = i2->rc_startblock;
__entry->i2_blockcount = i2->rc_blockcount;
__entry->i2_refcount = i2->rc_refcount;
__entry->agbno = agbno;
),
- TP_printk("dev %d:%d agno 0x%x agbno 0x%x fsbcount 0x%x refcount %u -- "
- "agbno 0x%x fsbcount 0x%x refcount %u @ agbno 0x%x",
+ TP_printk("dev %d:%d agno 0x%x dom %s agbno 0x%x fsbcount 0x%x refcount %u -- "
+ "dom %s agbno 0x%x fsbcount 0x%x refcount %u @ agbno 0x%x",
MAJOR(__entry->dev), MINOR(__entry->dev),
__entry->agno,
+ __print_symbolic(__entry->i1_domain, XFS_REFC_DOMAIN_STRINGS),
__entry->i1_startblock,
__entry->i1_blockcount,
__entry->i1_refcount,
+ __print_symbolic(__entry->i2_domain, XFS_REFC_DOMAIN_STRINGS),
__entry->i2_startblock,
__entry->i2_blockcount,
__entry->i2_refcount,
TP_STRUCT__entry(
__field(dev_t, dev)
__field(xfs_agnumber_t, agno)
+ __field(enum xfs_refc_domain, i1_domain)
__field(xfs_agblock_t, i1_startblock)
__field(xfs_extlen_t, i1_blockcount)
__field(xfs_nlink_t, i1_refcount)
+ __field(enum xfs_refc_domain, i2_domain)
__field(xfs_agblock_t, i2_startblock)
__field(xfs_extlen_t, i2_blockcount)
__field(xfs_nlink_t, i2_refcount)
+ __field(enum xfs_refc_domain, i3_domain)
__field(xfs_agblock_t, i3_startblock)
__field(xfs_extlen_t, i3_blockcount)
__field(xfs_nlink_t, i3_refcount)
TP_fast_assign(
__entry->dev = mp->m_super->s_dev;
__entry->agno = agno;
+ __entry->i1_domain = i1->rc_domain;
__entry->i1_startblock = i1->rc_startblock;
__entry->i1_blockcount = i1->rc_blockcount;
__entry->i1_refcount = i1->rc_refcount;
+ __entry->i2_domain = i2->rc_domain;
__entry->i2_startblock = i2->rc_startblock;
__entry->i2_blockcount = i2->rc_blockcount;
__entry->i2_refcount = i2->rc_refcount;
+ __entry->i3_domain = i3->rc_domain;
__entry->i3_startblock = i3->rc_startblock;
__entry->i3_blockcount = i3->rc_blockcount;
__entry->i3_refcount = i3->rc_refcount;
),
- TP_printk("dev %d:%d agno 0x%x agbno 0x%x fsbcount 0x%x refcount %u -- "
- "agbno 0x%x fsbcount 0x%x refcount %u -- "
- "agbno 0x%x fsbcount 0x%x refcount %u",
+ TP_printk("dev %d:%d agno 0x%x dom %s agbno 0x%x fsbcount 0x%x refcount %u -- "
+ "dom %s agbno 0x%x fsbcount 0x%x refcount %u -- "
+ "dom %s agbno 0x%x fsbcount 0x%x refcount %u",
MAJOR(__entry->dev), MINOR(__entry->dev),
__entry->agno,
+ __print_symbolic(__entry->i1_domain, XFS_REFC_DOMAIN_STRINGS),
__entry->i1_startblock,
__entry->i1_blockcount,
__entry->i1_refcount,
+ __print_symbolic(__entry->i2_domain, XFS_REFC_DOMAIN_STRINGS),
__entry->i2_startblock,
__entry->i2_blockcount,
__entry->i2_refcount,
+ __print_symbolic(__entry->i3_domain, XFS_REFC_DOMAIN_STRINGS),
__entry->i3_startblock,
__entry->i3_blockcount,
__entry->i3_refcount)
xfs_ail_push_all_sync(
struct xfs_ail *ailp)
{
- struct xfs_log_item *lip;
DEFINE_WAIT(wait);
spin_lock(&ailp->ail_lock);
- while ((lip = xfs_ail_max(ailp)) != NULL) {
+ while (xfs_ail_max(ailp) != NULL) {
prepare_to_wait(&ailp->ail_empty, &wait, TASK_UNINTERRUPTIBLE);
wake_up_process(ailp->ail_task);
spin_unlock(&ailp->ail_lock);
void ghes_unregister_vendor_record_notifier(struct notifier_block *nb);
#endif
-int ghes_estatus_pool_init(int num_ghes);
+int ghes_estatus_pool_init(unsigned int num_ghes);
/* From drivers/edac/ghes_edac.c */
#endif
#ifndef compat_arg_u64
-#ifdef CONFIG_CPU_BIG_ENDIAN
+#ifndef CONFIG_CPU_BIG_ENDIAN
#define compat_arg_u64(name) u32 name##_lo, u32 name##_hi
#define compat_arg_u64_dual(name) u32, name##_lo, u32, name##_hi
#else
#define PATCHABLE_DISCARDS *(__patchable_function_entries)
#endif
+#ifndef CONFIG_ARCH_SUPPORTS_CFI_CLANG
+/*
+ * Simply points to ftrace_stub, but with the proper protocol.
+ * Defined by the linker script in linux/vmlinux.lds.h
+ */
+#define FTRACE_STUB_HACK ftrace_stub_graph = ftrace_stub;
+#else
+#define FTRACE_STUB_HACK
+#endif
+
#ifdef CONFIG_FTRACE_MCOUNT_RECORD
/*
* The ftrace call sites are logged to a section whose name depends on the
* FTRACE_CALLSITE_SECTION. We capture all of them here to avoid header
* dependencies for FTRACE_CALLSITE_SECTION's definition.
*
- * Need to also make ftrace_stub_graph point to ftrace_stub
- * so that the same stub location may have different protocols
- * and not mess up with C verifiers.
- *
* ftrace_ops_list_func will be defined as arch_ftrace_ops_list_func
* as some archs will have a different prototype for that function
* but ftrace_ops_list_func() will have a single prototype.
KEEP(*(__mcount_loc)) \
KEEP_PATCHABLE \
__stop_mcount_loc = .; \
- ftrace_stub_graph = ftrace_stub; \
+ FTRACE_STUB_HACK \
ftrace_ops_list_func = arch_ftrace_ops_list_func;
#else
# ifdef CONFIG_FUNCTION_TRACER
-# define MCOUNT_REC() ftrace_stub_graph = ftrace_stub; \
+# define MCOUNT_REC() FTRACE_STUB_HACK \
ftrace_ops_list_func = arch_ftrace_ops_list_func;
# else
# define MCOUNT_REC()
#define MAX_WAIT_SCHED_ENTITY_Q_EMPTY msecs_to_jiffies(1000)
+/**
+ * DRM_SCHED_FENCE_DONT_PIPELINE - Prevent dependency pipelining
+ *
+ * Setting this flag on a scheduler fence prevents pipelining of jobs depending
+ * on this fence. In other words, we always insert a full CPU round trip before
+ * dependent jobs are pushed to the hw queue.
+ */
+#define DRM_SCHED_FENCE_DONT_PIPELINE DMA_FENCE_FLAG_USER_BITS
+
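A driver opts a fence out of pipelining by setting this flag in the fence's flag word before dependent jobs are queued; a hedged sketch (the fence pointer is illustrative):

    /* force a full CPU round trip before jobs depending on this fence run */
    set_bit(DRM_SCHED_FENCE_DONT_PIPELINE, &fence->flags);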
struct drm_gem_object;
struct drm_gpu_scheduler;
struct io_comp_batch *iob, int ioerror,
void (*complete)(struct io_comp_batch *))
{
- if (!iob || (req->rq_flags & RQF_ELV) || ioerror)
+ if (!iob || (req->rq_flags & RQF_ELV) || ioerror ||
+ (req->end_io && !blk_rq_is_passthrough(req)))
return false;
if (!iob->complete)
#include <linux/bpfptr.h>
#include <linux/btf.h>
#include <linux/rcupdate_trace.h>
+#include <linux/init.h>
struct bpf_verifier_env;
struct bpf_verifier_log;
struct bpf_attach_target_info *tgt_info);
void bpf_trampoline_put(struct bpf_trampoline *tr);
int arch_prepare_bpf_dispatcher(void *image, void *buf, s64 *funcs, int num_funcs);
+int __init bpf_arch_init_dispatcher_early(void *ip);
+
#define BPF_DISPATCHER_INIT(_name) { \
.mutex = __MUTEX_INITIALIZER(_name.mutex), \
.func = &_name##_func, \
}, \
}
+#define BPF_DISPATCHER_INIT_CALL(_name) \
+ static int __init _name##_init(void) \
+ { \
+ return bpf_arch_init_dispatcher_early(_name##_func); \
+ } \
+ early_initcall(_name##_init)
+
#ifdef CONFIG_X86_64
#define BPF_DISPATCHER_ATTRIBUTES __attribute__((patchable_function_entry(5)))
#else
} \
EXPORT_SYMBOL(bpf_dispatcher_##name##_func); \
struct bpf_dispatcher bpf_dispatcher_##name = \
- BPF_DISPATCHER_INIT(bpf_dispatcher_##name);
+ BPF_DISPATCHER_INIT(bpf_dispatcher_##name); \
+ BPF_DISPATCHER_INIT_CALL(bpf_dispatcher_##name);
+
#define DECLARE_BPF_DISPATCHER(name) \
unsigned int bpf_dispatcher_##name##_func( \
const void *ctx, \
struct cgroup *cgroup_get_from_path(const char *path);
struct cgroup *cgroup_get_from_fd(int fd);
+struct cgroup *cgroup_v1v2_get_from_fd(int fd);
int cgroup_attach_task_all(struct task_struct *from, struct task_struct *);
int cgroup_transfer_tasks(struct cgroup *to, struct cgroup *from);
#define DEFINE_COUNTER_ARRAY_CAPTURE(_name, _length) \
DEFINE_COUNTER_ARRAY_U64(_name, _length)
-#define DEFINE_COUNTER_ARRAY_POLARITY(_name, _enums, _length) \
- DEFINE_COUNTER_AVAILABLE(_name##_available, _enums); \
+#define DEFINE_COUNTER_ARRAY_POLARITY(_name, _available, _length) \
struct counter_array _name = { \
.type = COUNTER_COMP_SIGNAL_POLARITY, \
- .avail = &(_name##_available), \
+ .avail = &(_available), \
.length = (_length), \
}
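With the changed signature, the available-values object is declared separately and passed in by name, so one set of polarity values can back several arrays. A sketch under that assumption (identifiers are illustrative):

    static const enum counter_signal_polarity polarity_states[] = {
            COUNTER_SIGNAL_POLARITY_POSITIVE,
            COUNTER_SIGNAL_POLARITY_NEGATIVE,
    };

    static DEFINE_COUNTER_AVAILABLE(polarity_available, polarity_states);
    static DEFINE_COUNTER_ARRAY_POLARITY(polarity_array, polarity_available, 4);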
/* Special struct emulating a Ethernet header */
struct qca_mgmt_ethhdr {
- u32 command; /* command bit 31:0 */
- u32 seq; /* seq 63:32 */
- u32 mdio_data; /* first 4byte mdio */
+ __le32 command; /* command bit 31:0 */
+ __le32 seq; /* seq 63:32 */
+ __le32 mdio_data; /* first 4byte mdio */
__be16 hdr; /* qca hdr */
} __packed;
};
struct mib_ethhdr {
- u32 data[3]; /* first 3 mib counter */
+ __le32 data[3]; /* first 3 mib counter */
__be16 hdr; /* qca hdr */
} __packed;
efi_status_t efivar_set_variable(efi_char16_t *name, efi_guid_t *vendor,
u32 attr, unsigned long data_size, void *data);
-efi_status_t check_var_size(u32 attributes, unsigned long size);
-efi_status_t check_var_size_nonblocking(u32 attributes, unsigned long size);
-
#if IS_ENABLED(CONFIG_EFI_CAPSULE_LOADER)
extern bool efi_capsule_pending(int *reset_type);
arch_efi_call_virt_teardown(); \
})
-#define EFI_RANDOM_SEED_SIZE 64U
+#define EFI_RANDOM_SEED_SIZE 32U // BLAKE2S_HASH_SIZE
struct linux_efi_random_seed {
u32 size;
#elif defined(__i386__) || defined(__alpha__) || defined(__x86_64__) || \
defined(__hppa__) || defined(__sh__) || defined(__powerpc__) || \
- defined(__arm__) || defined(__aarch64__)
+ defined(__arm__) || defined(__aarch64__) || defined(__mips__)
#define fb_readb __raw_readb
#define fb_readw __raw_readw
extern char *__underlying_strncat(char *p, const char *q, __kernel_size_t count) __RENAME(strncat);
extern char *__underlying_strncpy(char *p, const char *q, __kernel_size_t size) __RENAME(strncpy);
#else
-#define __underlying_memchr __builtin_memchr
-#define __underlying_memcmp __builtin_memcmp
+
+#if defined(__SANITIZE_MEMORY__)
+/*
+ * For KMSAN builds all memcpy/memset/memmove calls should be replaced by the
+ * corresponding __msan_XXX functions.
+ */
+#include <linux/kmsan_string.h>
+#define __underlying_memcpy __msan_memcpy
+#define __underlying_memmove __msan_memmove
+#define __underlying_memset __msan_memset
+#else
#define __underlying_memcpy __builtin_memcpy
#define __underlying_memmove __builtin_memmove
#define __underlying_memset __builtin_memset
+#endif
+
+#define __underlying_memchr __builtin_memchr
+#define __underlying_memcmp __builtin_memcmp
#define __underlying_strcat __builtin_strcat
#define __underlying_strcpy __builtin_strcpy
#define __underlying_strlen __builtin_strlen
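The net effect: in a KMSAN build (__SANITIZE_MEMORY__) the fortified wrappers bottom out in the instrumented __msan_* routines instead of the compiler builtins, so shadow state stays accurate. A rough illustration, not a literal expansion:

    static void fill(void *dst, const void *src, size_t n)
    {
            /* __msan_memcpy() under KMSAN, __builtin_memcpy() otherwise */
            __underlying_memcpy(dst, src, n);
    }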
#define __fortify_memcpy_chk(p, q, size, p_size, q_size, \
p_size_field, q_size_field, op) ({ \
- size_t __fortify_size = (size_t)(size); \
- WARN_ONCE(fortify_memcpy_chk(__fortify_size, p_size, q_size, \
- p_size_field, q_size_field, #op), \
+ const size_t __fortify_size = (size_t)(size); \
+ const size_t __p_size = (p_size); \
+ const size_t __q_size = (q_size); \
+ const size_t __p_size_field = (p_size_field); \
+ const size_t __q_size_field = (q_size_field); \
+ WARN_ONCE(fortify_memcpy_chk(__fortify_size, __p_size, \
+ __q_size, __p_size_field, \
+ __q_size_field, #op), \
#op ": detected field-spanning write (size %zu) of single %s (size %zu)\n", \
__fortify_size, \
"field \"" #p "\" at " __FILE__ ":" __stringify(__LINE__), \
- p_size_field); \
+ __p_size_field); \
__underlying_##op(p, q, __fortify_size); \
})
}
/* keyring.c */
-void fscrypt_sb_delete(struct super_block *sb);
+void fscrypt_destroy_keyring(struct super_block *sb);
int fscrypt_ioctl_add_key(struct file *filp, void __user *arg);
int fscrypt_add_test_dummy_key(struct super_block *sb,
const struct fscrypt_dummy_policy *dummy_policy);
}
/* keyring.c */
-static inline void fscrypt_sb_delete(struct super_block *sb)
+static inline void fscrypt_destroy_keyring(struct super_block *sb)
{
}
extern bool iommu_default_passthrough(void);
extern struct iommu_resv_region *
iommu_alloc_resv_region(phys_addr_t start, size_t length, int prot,
- enum iommu_resv_type type);
+ enum iommu_resv_type type, gfp_t gfp);
extern int iommu_get_group_resv_regions(struct iommu_group *group,
struct list_head *head);
--- /dev/null
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * KMSAN string functions API used in other headers.
+ *
+ * Copyright (C) 2022 Google LLC
+ * Author: Alexander Potapenko <glider@google.com>
+ *
+ */
+#ifndef _LINUX_KMSAN_STRING_H
+#define _LINUX_KMSAN_STRING_H
+
+/*
+ * KMSAN overrides the default memcpy/memset/memmove implementations in the
+ * kernel, which requires having __msan_XXX function prototypes in several other
+ * headers. Keep them in one place instead of open-coding.
+ */
+void *__msan_memcpy(void *dst, const void *src, size_t size);
+void *__msan_memset(void *s, int c, size_t n);
+void *__msan_memmove(void *dest, const void *src, size_t len);
+
+#endif /* _LINUX_KMSAN_STRING_H */
void kvm_vcpu_mark_page_dirty(struct kvm_vcpu *vcpu, gfn_t gfn);
/**
- * kvm_gfn_to_pfn_cache_init - prepare a cached kernel mapping and HPA for a
- * given guest physical address.
+ * kvm_gpc_init - initialize gfn_to_pfn_cache.
+ *
+ * @gpc: struct gfn_to_pfn_cache object.
+ *
+ * This sets up a gfn_to_pfn_cache by initializing locks. Note, the cache must
+ * be zero-allocated (or zeroed by the caller before init).
+ */
+void kvm_gpc_init(struct gfn_to_pfn_cache *gpc);
+
+/**
+ * kvm_gpc_activate - prepare a cached kernel mapping and HPA for a given guest
+ * physical address.
*
* @kvm: pointer to kvm instance.
* @gpc: struct gfn_to_pfn_cache object.
* kvm_gfn_to_pfn_cache_check() to ensure that the cache is valid before
* accessing the target page.
*/
-int kvm_gfn_to_pfn_cache_init(struct kvm *kvm, struct gfn_to_pfn_cache *gpc,
- struct kvm_vcpu *vcpu, enum pfn_cache_usage usage,
- gpa_t gpa, unsigned long len);
+int kvm_gpc_activate(struct kvm *kvm, struct gfn_to_pfn_cache *gpc,
+ struct kvm_vcpu *vcpu, enum pfn_cache_usage usage,
+ gpa_t gpa, unsigned long len);
/**
* kvm_gfn_to_pfn_cache_check - check validity of a gfn_to_pfn_cache.
void kvm_gfn_to_pfn_cache_unmap(struct kvm *kvm, struct gfn_to_pfn_cache *gpc);
/**
- * kvm_gfn_to_pfn_cache_destroy - destroy and unlink a gfn_to_pfn_cache.
+ * kvm_gpc_deactivate - deactivate and unlink a gfn_to_pfn_cache.
*
* @kvm: pointer to kvm instance.
* @gpc: struct gfn_to_pfn_cache object.
* This removes a cache from the @kvm's list to be processed on MMU notifier
* invocation.
*/
-void kvm_gfn_to_pfn_cache_destroy(struct kvm *kvm, struct gfn_to_pfn_cache *gpc);
+void kvm_gpc_deactivate(struct kvm *kvm, struct gfn_to_pfn_cache *gpc);
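Taken together, the renames give the cache an explicit lifecycle: initialize once, activate against a guest physical address, deactivate when done. A hedged sketch based only on the prototypes above (the usage flag shown is one of the pfn_cache_usage values):

    struct gfn_to_pfn_cache gpc = {};       /* must start zeroed */
    int ret;

    kvm_gpc_init(&gpc);
    ret = kvm_gpc_activate(kvm, &gpc, vcpu, KVM_GUEST_USES_PFN, gpa, len);
    if (ret)
            return ret;
    /* ... use the mapping, revalidating via kvm_gfn_to_pfn_cache_check() ... */
    kvm_gpc_deactivate(kvm, &gpc);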
void kvm_sigset_activate(struct kvm_vcpu *vcpu);
void kvm_sigset_deactivate(struct kvm_vcpu *vcpu);
struct kvm_enable_cap *cap);
long kvm_arch_vm_ioctl(struct file *filp,
unsigned int ioctl, unsigned long arg);
+long kvm_arch_vm_compat_ioctl(struct file *filp, unsigned int ioctl,
+ unsigned long arg);
int kvm_arch_vcpu_ioctl_get_fpu(struct kvm_vcpu *vcpu, struct kvm_fpu *fpu);
int kvm_arch_vcpu_ioctl_set_fpu(struct kvm_vcpu *vcpu, struct kvm_fpu *fpu);
struct mlx5_async_ctx {
struct mlx5_core_dev *dev;
atomic_t num_inflight;
- struct wait_queue_head wait;
+ struct completion inflight_done;
};
struct mlx5_async_work;
#define SOCK_NOSPACE 2
#define SOCK_PASSCRED 3
#define SOCK_PASSSEC 4
+#define SOCK_SUPPORT_ZC 5
#ifndef ARCH_HAS_SOCKET_TYPES
/**
static inline unsigned int netif_attrmask_next(int n, const unsigned long *srcp,
unsigned int nr_bits)
{
- /* n is a prior cpu */
- cpu_max_bits_warn(n + 1, nr_bits);
+ /* -1 is a legal arg here. */
+ if (n != -1)
+ cpu_max_bits_warn(n, nr_bits);
if (srcp)
return find_next_bit(srcp, nr_bits, n + 1);
const unsigned long *src2p,
unsigned int nr_bits)
{
- /* n is a prior cpu */
- cpu_max_bits_warn(n + 1, nr_bits);
+ /* -1 is a legal arg here. */
+ if (n != -1)
+ cpu_max_bits_warn(n, nr_bits);
if (src1p && src2p)
return find_next_and_bit(src1p, src2p, nr_bits, n + 1);
return unlikely(overflow);
}
-/** check_add_overflow() - Calculate addition with overflow checking
- *
+/**
+ * check_add_overflow() - Calculate addition with overflow checking
* @a: first addend
* @b: second addend
* @d: pointer to store sum
#define check_add_overflow(a, b, d) \
__must_check_overflow(__builtin_add_overflow(a, b, d))
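As a usage reminder: the helper returns true on overflow, and the destination holds the wrapped result either way:

    u32 a = 0xffffffffU, b = 2, sum;

    if (check_add_overflow(a, b, &sum))
            return -EOVERFLOW;      /* sum wrapped; do not use it */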
-/** check_sub_overflow() - Calculate subtraction with overflow checking
- *
+/**
+ * check_sub_overflow() - Calculate subtraction with overflow checking
* @a: minuend; value to subtract from
* @b: subtrahend; value to subtract from @a
* @d: pointer to store difference
#define check_sub_overflow(a, b, d) \
__must_check_overflow(__builtin_sub_overflow(a, b, d))
-/** check_mul_overflow() - Calculate multiplication with overflow checking
- *
+/**
+ * check_mul_overflow() - Calculate multiplication with overflow checking
* @a: first factor
* @b: second factor
* @d: pointer to store product
#define check_mul_overflow(a, b, d) \
__must_check_overflow(__builtin_mul_overflow(a, b, d))
-/** check_shl_overflow() - Calculate a left-shifted value and check overflow
- *
+/**
+ * check_shl_overflow() - Calculate a left-shifted value and check overflow
* @a: Value to be shifted
* @s: How many bits left to shift
* @d: Pointer to where to store the result
*
* Computes *@d = (@a << @s)
*
- * Returns true if '*d' cannot hold the result or when 'a << s' doesn't
+ * Returns true if '*@d' cannot hold the result or when '@a << @s' doesn't
* make sense. Example conditions:
- * - 'a << s' causes bits to be lost when stored in *d.
- * - 's' is garbage (e.g. negative) or so large that the result of
- * 'a << s' is guaranteed to be 0.
- * - 'a' is negative.
- * - 'a << s' sets the sign bit, if any, in '*d'.
*
- * '*d' will hold the results of the attempted shift, but is not
+ * - '@a << @s' causes bits to be lost when stored in *@d.
+ * - '@s' is garbage (e.g. negative) or so large that the result of
+ * '@a << @s' is guaranteed to be 0.
+ * - '@a' is negative.
+ * - '@a << @s' sets the sign bit, if any, in '*@d'.
+ *
+ * '*@d' will hold the results of the attempted shift, but is not
* considered "safe for use" if true is returned.
*/
#define check_shl_overflow(a, s, d) __must_check_overflow(({ \
/**
* size_mul() - Calculate size_t multiplication with saturation at SIZE_MAX
- *
* @factor1: first factor
* @factor2: second factor
*
/**
* size_add() - Calculate size_t addition with saturation at SIZE_MAX
- *
* @addend1: first addend
* @addend2: second addend
*
/**
* size_sub() - Calculate size_t subtraction with saturation at SIZE_MAX
- *
* @minuend: value to subtract from
* @subtrahend: value to subtract from @minuend
*
/**
* array_size() - Calculate size of 2-dimensional array.
- *
* @a: dimension one
* @b: dimension two
*
/**
* array3_size() - Calculate size of 3-dimensional array.
- *
* @a: dimension one
* @b: dimension two
* @c: dimension three
/**
* flex_array_size() - Calculate size of a flexible array member
* within an enclosing structure.
- *
* @p: Pointer to the structure.
* @member: Name of the flexible array member.
* @count: Number of elements in the array.
/**
* struct_size() - Calculate size of structure with trailing flexible array.
- *
* @p: Pointer to the structure.
* @member: Name of the array member.
* @count: Number of elements in the array.
struct fasync_struct *fasync;
/* delayed work for NMIs and such */
- int pending_wakeup;
- int pending_kill;
- int pending_disable;
+ unsigned int pending_wakeup;
+ unsigned int pending_kill;
+ unsigned int pending_disable;
+ unsigned int pending_sigtrap;
unsigned long pending_addr; /* SIGTRAP */
- struct irq_work pending;
+ struct irq_work pending_irq;
+ struct callback_head pending_task;
+ unsigned int pending_work;
atomic_t event_limit;
#endif
void *task_ctx_data; /* pmu specific data */
struct rcu_head rcu_head;
+
+ /*
+ * Sum (event->pending_sigtrap + event->pending_work)
+ *
+ * The SIGTRAP is targeted at ctx->task; as such, that task must not
+ * change until the signal has been delivered.
+ */
+ local_t nr_pending;
};
/*
* (See commit 7cceb599d15d ("net: phylink: avoid mac_config calls")
* @poll_fixed_state: if true, starts link_poll,
* if MAC link is at %MLO_AN_FIXED mode.
+ * @mac_managed_pm: if true, indicates the MAC driver is responsible for PHY PM.
* @ovr_an_inband: if true, override PCS to MLO_AN_INBAND
* @get_fixed_state: callback to execute to determine the fixed link state,
* if MAC link is at %MLO_AN_FIXED mode.
enum phylink_op_type type;
bool legacy_pre_march2020;
bool poll_fixed_state;
+ bool mac_managed_pm;
bool ovr_an_inband;
void (*get_fixed_state)(struct phylink_config *config,
struct phylink_link_state *state);
static inline bool vma_can_userfault(struct vm_area_struct *vma,
unsigned long vm_flags)
{
- if (vm_flags & VM_UFFD_MINOR)
- return is_vm_hugetlb_page(vma) || vma_is_shmem(vma);
-
+ if ((vm_flags & VM_UFFD_MINOR) &&
+ (!is_vm_hugetlb_page(vma) && !vma_is_shmem(vma)))
+ return false;
#ifndef CONFIG_PTE_MARKER_UFFD_WP
/*
* If user requested uffd-wp but not enabled pte markers for
#include <uapi/linux/utsname.h>
enum uts_proc {
+ UTS_PROC_ARCH,
UTS_PROC_OSTYPE,
UTS_PROC_OSRELEASE,
UTS_PROC_VERSION,
IR_KBD_GET_KEY_PIXELVIEW,
IR_KBD_GET_KEY_HAUP,
IR_KBD_GET_KEY_KNC1,
+ IR_KBD_GET_KEY_GENIATECH,
IR_KBD_GET_KEY_FUSIONHDTV,
IR_KBD_GET_KEY_HAUP_XVR,
IR_KBD_GET_KEY_AVERMEDIA_CARDBUS,
#define MEDIA_DEV_NOTIFY_PRE_LINK_CH 0
#define MEDIA_DEV_NOTIFY_POST_LINK_CH 1
-/**
- * media_entity_enum_init - Initialise an entity enumeration
- *
- * @ent_enum: Entity enumeration to be initialised
- * @mdev: The related media device
- *
- * Return: zero on success or a negative error code.
- */
-static inline __must_check int media_entity_enum_init(
- struct media_entity_enum *ent_enum, struct media_device *mdev)
-{
- return __media_entity_enum_init(ent_enum,
- mdev->entity_internal_idx_max + 1);
-}
-
/**
* media_device_init() - Initializes a media device element
*
#include <linux/fwnode.h>
#include <linux/list.h>
#include <linux/media.h>
+#include <linux/minmax.h>
#include <linux/types.h>
/* Enums used internally at the media controller to represent graphs */
/**
* struct media_pipeline - Media pipeline related information
*
- * @streaming_count: Streaming start count - streaming stop count
- * @graph: Media graph walk during pipeline start / stop
+ * @allocated: Media pipeline allocated and freed by the framework
+ * @mdev: The media device the pipeline is part of
+ * @pads: List of media_pipeline_pad
+ * @start_count: Media pipeline start - stop count
*/
struct media_pipeline {
- int streaming_count;
- struct media_graph graph;
+ bool allocated;
+ struct media_device *mdev;
+ struct list_head pads;
+ int start_count;
+};
+
+/**
+ * struct media_pipeline_pad - A pad part of a media pipeline
+ *
+ * @list: Entry in the media_pipeline pads list
+ * @pipe: The media_pipeline that the pad is part of
+ * @pad: The media pad
+ *
+ * This structure associates a pad with a media pipeline. Instances of
+ * media_pipeline_pad are created by media_pipeline_start() when it builds the
+ * pipeline, and stored in the &media_pipeline.pads list. media_pipeline_stop()
+ * removes the entries from the list and deletes them.
+ */
+struct media_pipeline_pad {
+ struct list_head list;
+ struct media_pipeline *pipe;
+ struct media_pad *pad;
};
/**
* @flags: Pad flags, as defined in
* :ref:`include/uapi/linux/media.h <media_header>`
* (seek for ``MEDIA_PAD_FL_*``)
+ * @pipe: Pipeline this pad belongs to. Use media_entity_pipeline() to
+ * access this field.
*/
struct media_pad {
struct media_gobj graph_obj; /* must be first field in struct */
u16 index;
enum media_pad_signal_type sig_type;
unsigned long flags;
+
+ /*
+ * The fields below are private, and should only be accessed via
+ * appropriate functions.
+ */
+ struct media_pipeline *pipe;
};
/**
* @link_validate: Return whether a link is valid from the entity point of
* view. The media_pipeline_start() function
* validates all links by calling this operation. Optional.
+ * @has_pad_interdep: Return whether two pads inside the entity are
+ * interdependent. If two pads are interdependent they are
+ * part of the same pipeline, and enabling one of the pads
+ * means that the other pad will become "locked" and
+ * doesn't allow configuration changes. pad0 and pad1 are
+ * guaranteed not to both be sinks or both be sources.
+ * Optional: if the operation isn't implemented, all pads
+ * will be considered interdependent.
*
* .. note::
*
const struct media_pad *local,
const struct media_pad *remote, u32 flags);
int (*link_validate)(struct media_link *link);
+ bool (*has_pad_interdep)(struct media_entity *entity, unsigned int pad0,
+ unsigned int pad1);
};
/**
* @links: List of data links.
* @ops: Entity operations.
* @use_count: Use count for the entity.
- * @pipe: Pipeline this entity belongs to.
* @info: Union with devnode information. Kept just for backward
* compatibility.
* @info.dev: Contains device major and minor info.
int use_count;
- struct media_pipeline *pipe;
-
union {
struct {
u32 major;
} info;
};
+/**
+ * media_entity_for_each_pad - Iterate on all pads in an entity
+ * @entity: The entity the pads belong to
+ * @iter: The iterator pad
+ *
+ * Iterate on all pads in a media entity.
+ */
+#define media_entity_for_each_pad(entity, iter) \
+ for (iter = (entity)->pads; \
+ iter < &(entity)->pads[(entity)->num_pads]; \
+ ++iter)
+
/**
* struct media_interface - A media interface graph object.
*
}
/**
- * __media_entity_enum_init - Initialise an entity enumeration
+ * media_entity_enum_init - Initialise an entity enumeration
*
* @ent_enum: Entity enumeration to be initialised
- * @idx_max: Maximum number of entities in the enumeration
+ * @mdev: The related media device
*
- * Return: Returns zero on success or a negative error code.
+ * Return: zero on success or a negative error code.
*/
-__must_check int __media_entity_enum_init(struct media_entity_enum *ent_enum,
- int idx_max);
+__must_check int media_entity_enum_init(struct media_entity_enum *ent_enum,
+ struct media_device *mdev);
/**
* media_entity_enum_cleanup - Release resources of an entity enumeration
return media_entity_remote_pad_unique(entity, MEDIA_PAD_FL_SOURCE);
}
+/**
+ * media_pad_is_streaming - Test if a pad is part of a streaming pipeline
+ * @pad: The pad
+ *
+ * Return: True if the pad is part of a pipeline started with the
+ * media_pipeline_start() function, false otherwise.
+ */
+static inline bool media_pad_is_streaming(const struct media_pad *pad)
+{
+ return pad->pipe;
+}
+
/**
* media_entity_is_streaming - Test if an entity is part of a streaming pipeline
* @entity: The entity
*/
static inline bool media_entity_is_streaming(const struct media_entity *entity)
{
- return entity->pipe;
+ struct media_pad *pad;
+
+ media_entity_for_each_pad(entity, pad) {
+ if (media_pad_is_streaming(pad))
+ return true;
+ }
+
+ return false;
}
+/**
+ * media_entity_pipeline - Get the media pipeline an entity is part of
+ * @entity: The entity
+ *
+ * DEPRECATED: use media_pad_pipeline() instead.
+ *
+ * This function returns the media pipeline that an entity has been associated
+ * with when constructing the pipeline with media_pipeline_start(). The pointer
+ * remains valid until media_pipeline_stop() is called.
+ *
+ * In general, entities can be part of multiple pipelines, when carrying
+ * multiple streams (either on different pads, or on the same pad using
+ * multiplexed streams). This function is to be used only for entities that
+ * do not support multiple pipelines.
+ *
+ * Return: The media_pipeline the entity is part of, or NULL if the entity is
+ * not part of any pipeline.
+ */
+struct media_pipeline *media_entity_pipeline(struct media_entity *entity);
+
+/**
+ * media_pad_pipeline - Get the media pipeline a pad is part of
+ * @pad: The pad
+ *
+ * This function returns the media pipeline that a pad has been associated
+ * with when constructing the pipeline with media_pipeline_start(). The pointer
+ * remains valid until media_pipeline_stop() is called.
+ *
+ * Return: The media_pipeline the pad is part of, or NULL if the pad is
+ * not part of any pipeline.
+ */
+struct media_pipeline *media_pad_pipeline(struct media_pad *pad);
+
/**
* media_entity_get_fwnode_pad - Get pad number from fwnode
*
/**
* media_pipeline_start - Mark a pipeline as streaming
- * @entity: Starting entity
- * @pipe: Media pipeline to be assigned to all entities in the pipeline.
+ * @pad: Starting pad
+ * @pipe: Media pipeline to be assigned to all pads in the pipeline.
*
- * Mark all entities connected to a given entity through enabled links, either
+ * Mark all pads connected to a given pad through enabled links, either
* directly or indirectly, as streaming. The given pipeline object is assigned
- * to every entity in the pipeline and stored in the media_entity pipe field.
+ * to every pad in the pipeline and stored in the media_pad pipe field.
*
* Calls to this function can be nested, in which case the same number of
* media_pipeline_stop() calls will be required to stop streaming. The
* pipeline pointer must be identical for all nested calls to
* media_pipeline_start().
*/
-__must_check int media_pipeline_start(struct media_entity *entity,
+__must_check int media_pipeline_start(struct media_pad *pad,
struct media_pipeline *pipe);
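With the switch from entities to pads, a stream-on path now passes the starting pad; a hedged sketch (the subdev and the static pipeline are illustrative):

    static struct media_pipeline pipe;

    ret = media_pipeline_start(&sd->entity.pads[0], &pipe);
    if (ret)
            return ret;
    /* ... streaming ... */
    media_pipeline_stop(&sd->entity.pads[0]);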
/**
* __media_pipeline_start - Mark a pipeline as streaming
*
- * @entity: Starting entity
- * @pipe: Media pipeline to be assigned to all entities in the pipeline.
+ * @pad: Starting pad
+ * @pipe: Media pipeline to be assigned to all pads in the pipeline.
*
 * .. note:: This is the non-locking version of media_pipeline_start()
*/
-__must_check int __media_pipeline_start(struct media_entity *entity,
+__must_check int __media_pipeline_start(struct media_pad *pad,
struct media_pipeline *pipe);
/**
* media_pipeline_stop - Mark a pipeline as not streaming
- * @entity: Starting entity
+ * @pad: Starting pad
*
- * Mark all entities connected to a given entity through enabled links, either
- * directly or indirectly, as not streaming. The media_entity pipe field is
+ * Mark all pads connected to a given pad through enabled links, either
+ * directly or indirectly, as not streaming. The media_pad pipe field is
* reset to %NULL.
*
* If multiple calls to media_pipeline_start() have been made, the same
* number of calls to this function are required to mark the pipeline as not
* streaming.
*/
-void media_pipeline_stop(struct media_entity *entity);
+void media_pipeline_stop(struct media_pad *pad);
/**
* __media_pipeline_stop - Mark a pipeline as not streaming
*
- * @entity: Starting entity
+ * @pad: Starting pad
*
* .. note:: This is the non-locking version of media_pipeline_stop()
*/
-void __media_pipeline_stop(struct media_entity *entity);
+void __media_pipeline_stop(struct media_pad *pad);
+
+/**
+ * media_pipeline_alloc_start - Mark a pipeline as streaming
+ * @pad: Starting pad
+ *
+ * media_pipeline_alloc_start() is similar to media_pipeline_start(), but
+ * instead of working on a given pipeline the function will use the existing
+ * pipeline if the pad is already part of one, or allocate a new pipeline.
+ *
+ * Calls to media_pipeline_alloc_start() must be matched with
+ * media_pipeline_stop().
+ */
+__must_check int media_pipeline_alloc_start(struct media_pad *pad);
/**
* media_devnode_create() - creates and initializes a device node interface
*
* @sd: pointer to &struct v4l2_subdev
* @client: pointer to struct i2c_client
- * @devname: the name of the device; if NULL, the I²C device's name will be used
+ * @devname: the name of the device; if NULL, the I²C device driver's name
+ * will be used
* @postfix: sub-device specific string to put right after the I²C device name;
* may be NULL
*/
* struct v4l2_ctrl_type_ops - The control type operations that the driver
* has to provide.
*
- * @equal: return true if both values are equal.
- * @init: initialize the value.
+ * @equal: return true if all ctrl->elems array elements are equal.
+ * @init: initialize the value for array elements from @from_idx to ctrl->elems.
* @log: log the value.
- * @validate: validate the value. Return 0 on success and a negative value
- * otherwise.
+ * @validate: validate the value for ctrl->new_elems array elements.
+ * Return 0 on success and a negative value otherwise.
*/
struct v4l2_ctrl_type_ops {
- bool (*equal)(const struct v4l2_ctrl *ctrl, u32 elems,
- union v4l2_ctrl_ptr ptr1,
- union v4l2_ctrl_ptr ptr2);
- void (*init)(const struct v4l2_ctrl *ctrl, u32 from_idx, u32 tot_elems,
+ bool (*equal)(const struct v4l2_ctrl *ctrl,
+ union v4l2_ctrl_ptr ptr1, union v4l2_ctrl_ptr ptr2);
+ void (*init)(const struct v4l2_ctrl *ctrl, u32 from_idx,
union v4l2_ctrl_ptr ptr);
void (*log)(const struct v4l2_ctrl *ctrl);
- int (*validate)(const struct v4l2_ctrl *ctrl, u32 elems,
- union v4l2_ctrl_ptr ptr);
+ int (*validate)(const struct v4l2_ctrl *ctrl, union v4l2_ctrl_ptr ptr);
};
/**
* v4l2_ctrl_type_op_equal - Default v4l2_ctrl_type_ops equal callback.
*
* @ctrl: The v4l2_ctrl pointer.
- * @elems: The number of elements to compare.
* @ptr1: A v4l2 control value.
* @ptr2: A v4l2 control value.
*
* Return: true if values are equal, otherwise false.
*/
-bool v4l2_ctrl_type_op_equal(const struct v4l2_ctrl *ctrl, u32 elems,
+bool v4l2_ctrl_type_op_equal(const struct v4l2_ctrl *ctrl,
union v4l2_ctrl_ptr ptr1, union v4l2_ctrl_ptr ptr2);
/**
*
* @ctrl: The v4l2_ctrl pointer.
* @from_idx: Starting element index.
- * @elems: The number of elements to initialize.
* @ptr: The v4l2 control value.
*
* Return: void
*/
void v4l2_ctrl_type_op_init(const struct v4l2_ctrl *ctrl, u32 from_idx,
- u32 elems, union v4l2_ctrl_ptr ptr);
+ union v4l2_ctrl_ptr ptr);
/**
* v4l2_ctrl_type_op_log - Default v4l2_ctrl_type_ops log callback.
* v4l2_ctrl_type_op_validate - Default v4l2_ctrl_type_ops validate callback.
*
* @ctrl: The v4l2_ctrl pointer.
- * @elems: The number of elements in the control.
* @ptr: The v4l2 control value.
*
* Return: 0 on success, a negative error code on failure.
*/
-int v4l2_ctrl_type_op_validate(const struct v4l2_ctrl *ctrl, u32 elems,
- union v4l2_ctrl_ptr ptr);
+int v4l2_ctrl_type_op_validate(const struct v4l2_ctrl *ctrl, union v4l2_ctrl_ptr ptr);
#endif
return test_bit(V4L2_FL_REGISTERED, &vdev->flags);
}
+#if defined(CONFIG_MEDIA_CONTROLLER)
+
+/**
+ * video_device_pipeline_start - Mark a pipeline as streaming
+ * @vdev: Starting video device
+ * @pipe: Media pipeline to be assigned to all entities in the pipeline.
+ *
+ * Mark all entities connected to a given video device through enabled links,
+ * either directly or indirectly, as streaming. The given pipeline object is
+ * assigned to every pad in the pipeline and stored in the media_pad pipe
+ * field.
+ *
+ * Calls to this function can be nested, in which case the same number of
+ * video_device_pipeline_stop() calls will be required to stop streaming. The
+ * pipeline pointer must be identical for all nested calls to
+ * video_device_pipeline_start().
+ *
+ * The video device must contain a single pad.
+ *
+ * This is a convenience wrapper around media_pipeline_start().
+ */
+__must_check int video_device_pipeline_start(struct video_device *vdev,
+ struct media_pipeline *pipe);
+
+/**
+ * __video_device_pipeline_start - Mark a pipeline as streaming
+ * @vdev: Starting video device
+ * @pipe: Media pipeline to be assigned to all entities in the pipeline.
+ *
+ * .. note:: This is the non-locking version of video_device_pipeline_start()
+ *
+ * The video device must contain a single pad.
+ *
+ * This is a convenience wrapper around __media_pipeline_start().
+ */
+__must_check int __video_device_pipeline_start(struct video_device *vdev,
+ struct media_pipeline *pipe);
+
+/**
+ * video_device_pipeline_stop - Mark a pipeline as not streaming
+ * @vdev: Starting video device
+ *
+ * Mark all entities connected to a given video device through enabled links,
+ * either directly or indirectly, as not streaming. The media_pad pipe field
+ * is reset to %NULL.
+ *
+ * If multiple calls to media_pipeline_start() have been made, the same
+ * number of calls to this function are required to mark the pipeline as not
+ * streaming.
+ *
+ * The video device must contain a single pad.
+ *
+ * This is a convenience wrapper around media_pipeline_stop().
+ */
+void video_device_pipeline_stop(struct video_device *vdev);
+
+/**
+ * __video_device_pipeline_stop - Mark a pipeline as not streaming
+ * @vdev: Starting video device
+ *
+ * .. note:: This is the non-locking version of media_pipeline_stop()
+ *
+ * The video device must contain a single pad.
+ *
+ * This is a convenience wrapper around __media_pipeline_stop().
+ */
+void __video_device_pipeline_stop(struct video_device *vdev);
+
+/**
+ * video_device_pipeline_alloc_start - Mark a pipeline as streaming
+ * @vdev: Starting video device
+ *
+ * video_device_pipeline_alloc_start() is similar to
+ * video_device_pipeline_start(), but instead of working on a given pipeline
+ * the function will use the existing pipeline if the video device is already
+ * part of one, or allocate a new pipeline.
+ *
+ * Calls to video_device_pipeline_alloc_start() must be matched with
+ * video_device_pipeline_stop().
+ */
+__must_check int video_device_pipeline_alloc_start(struct video_device *vdev);
+
+/**
+ * video_device_pipeline - Get the media pipeline a video device is part of
+ * @vdev: The video device
+ *
+ * This function returns the media pipeline that a video device has been
+ * associated with when constructing the pipeline with
+ * video_device_pipeline_start(). The pointer remains valid until
+ * video_device_pipeline_stop() is called.
+ *
+ * Return: The media_pipeline the video device is part of, or NULL if the video
+ * device is not part of any pipeline.
+ *
+ * The video device must contain a single pad.
+ *
+ * This is a convenience wrapper around media_entity_pipeline().
+ */
+struct media_pipeline *video_device_pipeline(struct video_device *vdev);
+
+#endif /* CONFIG_MEDIA_CONTROLLER */
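For single-pad video devices the wrappers hide the pad entirely; a hedged sketch of a typical vb2 start/stop pair (my_dev and its fields are illustrative):

    static int my_start_streaming(struct vb2_queue *q, unsigned int count)
    {
            struct my_dev *dev = vb2_get_drv_priv(q);

            /* assumes dev->vdev has a single pad */
            return video_device_pipeline_start(&dev->vdev, &dev->pipe);
    }

    static void my_stop_streaming(struct vb2_queue *q)
    {
            struct my_dev *dev = vb2_get_drv_priv(q);

            video_device_pipeline_stop(&dev->vdev);
    }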
+
#endif /* _V4L2_DEV_H */
*/
struct v4l2_fwnode_endpoint {
struct fwnode_endpoint base;
- /*
- * Fields below this line will be zeroed by
- * v4l2_fwnode_endpoint_parse()
- */
enum v4l2_mbus_type bus_type;
struct {
struct v4l2_mbus_config_parallel parallel;
} bus;
};
-#define V4L2_FRAME_DESC_ENTRY_MAX 4
+ /*
+ * If this number is too small, it should be dropped altogether and the
+ * API switched to a dynamic number of frame descriptor entries.
+ */
+#define V4L2_FRAME_DESC_ENTRY_MAX 8
/**
* enum v4l2_mbus_frame_desc_type - media bus frame description type
struct v4l2_subdev_state *state,
unsigned int pad)
{
+ if (WARN_ON(!state))
+ return NULL;
if (WARN_ON(pad >= sd->entity.num_pads))
pad = 0;
return &state->pads[pad].try_fmt;
struct v4l2_subdev_state *state,
unsigned int pad)
{
+ if (WARN_ON(!state))
+ return NULL;
if (WARN_ON(pad >= sd->entity.num_pads))
pad = 0;
return &state->pads[pad].try_crop;
struct v4l2_subdev_state *state,
unsigned int pad)
{
+ if (WARN_ON(!state))
+ return NULL;
if (WARN_ON(pad >= sd->entity.num_pads))
pad = 0;
return &state->pads[pad].try_compose;
* do additional, common, filtering and return an error
* @post_doit: called after an operation's doit callback, it may
* undo operations done by pre_doit, for example release locks
+ * @module: pointer to the owning module (set to THIS_MODULE)
* @mcgrps: multicast groups used by this family
* @n_mcgrps: number of multicast groups
* @resv_start_op: first operation for which reserved fields of the header
- * can be validated, new families should leave this field at zero
+ * can be validated and policies are required (see below);
+ * new families should leave this field at zero
* @mcgrp_offset: starting number of multicast group IDs in this family
* (private)
* @ops: the operations supported by this family
* @n_ops: number of operations supported by this family
* @small_ops: the small-struct operations supported by this family
* @n_small_ops: number of small-struct operations supported by this family
+ *
+ * Attribute policies (the combination of @policy and @maxattr fields)
+ * can be attached at the family level or at the operation level.
+ * If both are present, the per-operation policy takes precedence.
+ * For operations before @resv_start_op, lack of a policy means that the core
+ * will perform no attribute parsing or validation. For newer operations,
+ * if a policy is not provided the core will reject all TLV attributes.
*/
struct genl_family {
int id; /* private */
};
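To make the precedence rule concrete, a hypothetical family could attach a per-op policy that overrides the family-wide one (all MY_* names are illustrative):

    static const struct nla_policy my_op_policy[MY_ATTR_MAX + 1] = {
            [MY_ATTR_VAL] = NLA_POLICY_MAX(NLA_U32, 255),
    };

    static const struct genl_ops my_ops[] = {
            {
                    .cmd     = MY_CMD_SET,
                    .doit    = my_set_doit,
                    .policy  = my_op_policy,        /* wins over family policy */
                    .maxattr = MY_ATTR_MAX,
            },
    };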
/**
- * struct genl_info - info that is available during dumpit op call
+ * struct genl_dumpit_info - info that is available during dumpit op call
* @family: generic netlink family - for internal genl code usage
- * @ops: generic netlink ops - for internal genl code usage
+ * @op: generic netlink ops - for internal genl code usage
* @attrs: netlink attributes
*/
struct genl_dumpit_info {
/**
* genlmsg_unicast - unicast a netlink message
+ * @net: network namespace to look up @portid in
* @skb: netlink message as socket buffer
* @portid: netlink portid of the destination socket
*/
}
/**
- * gennlmsg_data - head of message payload
+ * genlmsg_data - head of message payload
* @gnlh: genetlink message header
*/
static inline void *genlmsg_data(const struct genlmsghdr *gnlh)
NLA_S64,
NLA_BITFIELD32,
NLA_REJECT,
+ NLA_BE16,
+ NLA_BE32,
__NLA_TYPE_MAX,
};
* NLA_U32, NLA_U64,
* NLA_S8, NLA_S16,
* NLA_S32, NLA_S64,
+ * NLA_BE16, NLA_BE32,
* NLA_MSECS Leaving the length field zero will verify the
* given type fits, using it verifies minimum length
* just like "All other"
* NLA_U16,
* NLA_U32,
* NLA_U64,
+ * NLA_BE16,
+ * NLA_BE32,
* NLA_S8,
* NLA_S16,
* NLA_S32,
u8 validation_type;
u16 len;
union {
- const u32 bitfield32_valid;
- const u32 mask;
- const char *reject_message;
- const struct nla_policy *nested_policy;
- struct netlink_range_validation *range;
- struct netlink_range_validation_signed *range_signed;
- struct {
- s16 min, max;
- u8 network_byte_order:1;
- };
- int (*validate)(const struct nlattr *attr,
- struct netlink_ext_ack *extack);
- /* This entry is special, and used for the attribute at index 0
+ /**
+ * @strict_start_type: first attribute to validate strictly
+ *
+ * This entry is special, and used for the attribute at index 0
* only, and specifies special data about the policy, namely it
* specifies the "boundary type" where strict length validation
* starts for any attribute types >= this value, also, strict
* was added to enforce strict validation from thereon.
*/
u16 strict_start_type;
+
+ /* private: use NLA_POLICY_*() to set */
+ const u32 bitfield32_valid;
+ const u32 mask;
+ const char *reject_message;
+ const struct nla_policy *nested_policy;
+ struct netlink_range_validation *range;
+ struct netlink_range_validation_signed *range_signed;
+ struct {
+ s16 min, max;
+ };
+ int (*validate)(const struct nlattr *attr,
+ struct netlink_ext_ack *extack);
};
};
(tp == NLA_U8 || tp == NLA_U16 || tp == NLA_U32 || tp == NLA_U64)
#define __NLA_IS_SINT_TYPE(tp) \
(tp == NLA_S8 || tp == NLA_S16 || tp == NLA_S32 || tp == NLA_S64)
+#define __NLA_IS_BEINT_TYPE(tp) \
+ (tp == NLA_BE16 || tp == NLA_BE32)
#define __NLA_ENSURE(condition) BUILD_BUG_ON_ZERO(!(condition))
#define NLA_ENSURE_UINT_TYPE(tp) \
#define NLA_ENSURE_INT_OR_BINARY_TYPE(tp) \
(__NLA_ENSURE(__NLA_IS_UINT_TYPE(tp) || \
__NLA_IS_SINT_TYPE(tp) || \
+ __NLA_IS_BEINT_TYPE(tp) || \
tp == NLA_MSECS || \
tp == NLA_BINARY) + tp)
#define NLA_ENSURE_NO_VALIDATION_PTR(tp) \
tp != NLA_REJECT && \
tp != NLA_NESTED && \
tp != NLA_NESTED_ARRAY) + tp)
+#define NLA_ENSURE_BEINT_TYPE(tp) \
+ (__NLA_ENSURE(__NLA_IS_BEINT_TYPE(tp)) + tp)
#define NLA_POLICY_RANGE(tp, _min, _max) { \
.type = NLA_ENSURE_INT_OR_BINARY_TYPE(tp), \
.type = NLA_ENSURE_INT_OR_BINARY_TYPE(tp), \
.validation_type = NLA_VALIDATE_MAX, \
.max = _max, \
- .network_byte_order = 0, \
-}
-
-#define NLA_POLICY_MAX_BE(tp, _max) { \
- .type = NLA_ENSURE_UINT_TYPE(tp), \
- .validation_type = NLA_VALIDATE_MAX, \
- .max = _max, \
- .network_byte_order = 1, \
}
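With the dedicated NLA_POLICY_MAX_BE macro gone, big-endian attributes go through the ordinary range macros via the new NLA_BE types; for example (attribute name hypothetical):

    [MY_ATTR_PORT] = NLA_POLICY_MAX(NLA_BE16, 1023),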
#define NLA_POLICY_MASK(tp, _mask) { \
void sock_kzfree_s(struct sock *sk, void *mem, int size);
void sk_send_sigurg(struct sock *sk);
+static inline void sock_replace_proto(struct sock *sk, struct proto *proto)
+{
+ if (sk->sk_socket)
+ clear_bit(SOCK_SUPPORT_ZC, &sk->sk_socket->flags);
+ WRITE_ONCE(sk->sk_prot, proto);
+}
+
struct sockcm_cookie {
u64 transmit_time;
u32 mark;
static inline gfp_t gfp_memcg_charge(void)
{
- return in_softirq() ? GFP_NOWAIT : GFP_KERNEL;
+ return in_softirq() ? GFP_ATOMIC : GFP_KERNEL;
}
static inline long sock_rcvtimeo(const struct sock *sk, bool noblock)
extern int reuseport_attach_prog(struct sock *sk, struct bpf_prog *prog);
extern int reuseport_detach_prog(struct sock *sk);
-static inline bool reuseport_has_conns(struct sock *sk, bool set)
+static inline bool reuseport_has_conns(struct sock *sk)
{
struct sock_reuseport *reuse;
bool ret = false;
rcu_read_lock();
reuse = rcu_dereference(sk->sk_reuseport_cb);
- if (reuse) {
- if (set)
- reuse->has_conns = 1;
- ret = reuse->has_conns;
- }
+ if (reuse && reuse->has_conns)
+ ret = true;
rcu_read_unlock();
return ret;
}
+void reuseport_has_conns_set(struct sock *sk);
+
#endif /* _SOCK_REUSEPORT_H */
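The write side now lives out of line in the new helper, leaving the inline check purely read-only; roughly (the lookup helper named here is illustrative):

    /* connect() path: mark the group as having connected sockets */
    reuseport_has_conns_set(sk);

    /* receive fast path: a plain read, no side effects any more */
    if (reuseport_has_conns(sk))
            do_full_socket_lookup();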
int snd_ctl_replace(struct snd_card *card, struct snd_kcontrol *kcontrol, bool add_on_replace);
int snd_ctl_remove_id(struct snd_card * card, struct snd_ctl_elem_id *id);
int snd_ctl_rename_id(struct snd_card * card, struct snd_ctl_elem_id *src_id, struct snd_ctl_elem_id *dst_id);
+void snd_ctl_rename(struct snd_card *card, struct snd_kcontrol *kctl, const char *name);
int snd_ctl_activate_id(struct snd_card *card, struct snd_ctl_elem_id *id, int active);
struct snd_kcontrol *snd_ctl_find_numid(struct snd_card * card, unsigned int numid);
struct snd_kcontrol *snd_ctl_find_id(struct snd_card * card, struct snd_ctl_elem_id *id);
struct snd_pcm_hw_params *params);
void asoc_simple_parse_convert(struct device_node *np, char *prefix,
struct asoc_simple_data *data);
+bool asoc_simple_is_convert_required(const struct asoc_simple_data *data);
int asoc_simple_parse_routing(struct snd_soc_card *card,
char *prefix);
--- /dev/null
+/* SPDX-License-Identifier: GPL-2.0-only */
+#undef TRACE_SYSTEM
+#define TRACE_SYSTEM watchdog
+
+#if !defined(_TRACE_WATCHDOG_H) || defined(TRACE_HEADER_MULTI_READ)
+#define _TRACE_WATCHDOG_H
+
+#include <linux/watchdog.h>
+#include <linux/tracepoint.h>
+
+DECLARE_EVENT_CLASS(watchdog_template,
+
+ TP_PROTO(struct watchdog_device *wdd, int err),
+
+ TP_ARGS(wdd, err),
+
+ TP_STRUCT__entry(
+ __field(int, id)
+ __field(int, err)
+ ),
+
+ TP_fast_assign(
+ __entry->id = wdd->id;
+ __entry->err = err;
+ ),
+
+ TP_printk("watchdog%d err=%d", __entry->id, __entry->err)
+);
+
+DEFINE_EVENT(watchdog_template, watchdog_start,
+ TP_PROTO(struct watchdog_device *wdd, int err),
+ TP_ARGS(wdd, err));
+
+DEFINE_EVENT(watchdog_template, watchdog_ping,
+ TP_PROTO(struct watchdog_device *wdd, int err),
+ TP_ARGS(wdd, err));
+
+DEFINE_EVENT(watchdog_template, watchdog_stop,
+ TP_PROTO(struct watchdog_device *wdd, int err),
+ TP_ARGS(wdd, err));
+
+TRACE_EVENT(watchdog_set_timeout,
+
+ TP_PROTO(struct watchdog_device *wdd, unsigned int timeout, int err),
+
+ TP_ARGS(wdd, timeout, err),
+
+ TP_STRUCT__entry(
+ __field(int, id)
+ __field(unsigned int, timeout)
+ __field(int, err)
+ ),
+
+ TP_fast_assign(
+ __entry->id = wdd->id;
+ __entry->timeout = timeout;
+ __entry->err = err;
+ ),
+
+ TP_printk("watchdog%d timeout=%u err=%d", __entry->id, __entry->timeout, __entry->err)
+);
+
+#endif /* !defined(_TRACE_WATCHDOG_H) || defined(TRACE_HEADER_MULTI_READ) */
+
+/* This part must be outside protection */
+#include <trace/define_trace.h>
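These events are presumably emitted from the watchdog core around the corresponding driver callbacks; a hedged sketch of plausible call sites (not verbatim from the patch):

    err = wdd->ops->ping(wdd);
    trace_watchdog_ping(wdd, err);

    err = wdd->ops->set_timeout(wdd, timeout);
    trace_watchdog_set_timeout(wdd, timeout, err);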
#define AMDGPU_INFO_FW_MES_KIQ 0x19
/* Subquery id: Query MES firmware version */
#define AMDGPU_INFO_FW_MES 0x1a
+ /* Subquery id: Query IMU firmware version */
+ #define AMDGPU_INFO_FW_IMU 0x1b
/* number of bytes moved for TTM migration */
#define AMDGPU_INFO_NUM_BYTES_MOVED 0x0f
#define PANFROSTDUMP_BUF_BO (PANFROSTDUMP_BUF_BOMAP + 1)
#define PANFROSTDUMP_BUF_TRAILER (PANFROSTDUMP_BUF_BO + 1)
+/*
+ * This structure is in the native endianness of the dumping machine; tools
+ * can detect the endianness by looking at the value in 'magic'.
+ */
struct panfrost_dump_object_header {
- __le32 magic;
- __le32 type;
- __le32 file_size;
- __le32 file_offset;
+ __u32 magic;
+ __u32 type;
+ __u32 file_size;
+ __u32 file_offset;
union {
- struct pan_reg_hdr {
- __le64 jc;
- __le32 gpu_id;
- __le32 major;
- __le32 minor;
- __le64 nbos;
+ struct {
+ __u64 jc;
+ __u32 gpu_id;
+ __u32 major;
+ __u32 minor;
+ __u64 nbos;
} reghdr;
struct pan_bomap_hdr {
- __le32 valid;
- __le64 iova;
- __le32 data[2];
+ __u32 valid;
+ __u64 iova;
+ __u32 data[2];
} bomap;
/*
* with new fields and also keep it 512-byte aligned
*/
- __le32 sizer[496];
+ __u32 sizer[496];
};
};
/* Registers object, an array of these */
struct panfrost_dump_registers {
- __le32 reg;
- __le32 value;
+ __u32 reg;
+ __u32 value;
};
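A hedged userspace sketch of the endianness detection the comment above
describes; it assumes PANFROSTDUMP_MAGIC is the header magic defined earlier
in this file:

    static int panfrost_dump_foreign_endian(const struct panfrost_dump_object_header *hdr)
    {
            if (hdr->magic == PANFROSTDUMP_MAGIC)
                    return 0;       /* native endianness */
            if (hdr->magic == __builtin_bswap32(PANFROSTDUMP_MAGIC))
                    return 1;       /* tool must byte-swap all fields */
            return -1;              /* not a panfrost dump */
    }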
#if defined(__cplusplus)
}
}
+static inline void cec_msg_set_audio_volume_level(struct cec_msg *msg,
+ __u8 audio_volume_level)
+{
+ msg->len = 3;
+ msg->msg[1] = CEC_MSG_SET_AUDIO_VOLUME_LEVEL;
+ msg->msg[2] = audio_volume_level;
+}
+
+static inline void cec_ops_set_audio_volume_level(const struct cec_msg *msg,
+ __u8 *audio_volume_level)
+{
+ *audio_volume_level = msg->msg[2];
+}
+
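A hedged usage sketch for the new message helpers, from a userspace CEC
client; fd setup and error handling are elided, and the volume value is an
example:

    struct cec_msg msg;

    cec_msg_init(&msg, CEC_LOG_ADDR_PLAYBACK_1, CEC_LOG_ADDR_AUDIOSYSTEM);
    cec_msg_set_audio_volume_level(&msg, 50);   /* jump to level 50 */
    ioctl(fd, CEC_TRANSMIT, &msg);              /* fd: an open /dev/cecX */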
/* Audio Rate Control Feature */
static inline void cec_msg_set_audio_rate(struct cec_msg *msg,
#define CEC_OP_FEAT_DEV_HAS_SET_AUDIO_RATE 0x08
#define CEC_OP_FEAT_DEV_SINK_HAS_ARC_TX 0x04
#define CEC_OP_FEAT_DEV_SOURCE_HAS_ARC_RX 0x02
+#define CEC_OP_FEAT_DEV_HAS_SET_AUDIO_VOLUME_LEVEL 0x01
#define CEC_MSG_GIVE_FEATURES 0xa5 /* HDMI 2.0 */
#define CEC_OP_AUD_FMT_ID_CEA861 0
#define CEC_OP_AUD_FMT_ID_CEA861_CXT 1
+#define CEC_MSG_SET_AUDIO_VOLUME_LEVEL 0x73
/* Audio Rate Control Feature */
#define CEC_MSG_SET_AUDIO_RATE 0x9a
#define PERF_MEM_LVLNUM_L3 0x03 /* L3 */
#define PERF_MEM_LVLNUM_L4 0x04 /* L4 */
/* 5-0x8 available */
-#define PERF_MEM_LVLNUM_EXTN_MEM 0x09 /* Extension memory */
+#define PERF_MEM_LVLNUM_CXL 0x09 /* CXL */
#define PERF_MEM_LVLNUM_IO 0x0a /* I/O */
#define PERF_MEM_LVLNUM_ANY_CACHE 0x0b /* Any cache */
#define PERF_MEM_LVLNUM_LFB 0x0c /* LFB */
/*
* Defect Pixel Cluster Correction
*/
-#define RKISP1_CIF_ISP_DPCC_METHODS_MAX 3
+#define RKISP1_CIF_ISP_DPCC_METHODS_MAX 3
+
+#define RKISP1_CIF_ISP_DPCC_MODE_STAGE1_ENABLE (1U << 2)
+
+#define RKISP1_CIF_ISP_DPCC_OUTPUT_MODE_STAGE1_INCL_G_CENTER (1U << 0)
+#define RKISP1_CIF_ISP_DPCC_OUTPUT_MODE_STAGE1_INCL_RB_CENTER (1U << 1)
+#define RKISP1_CIF_ISP_DPCC_OUTPUT_MODE_STAGE1_G_3X3 (1U << 2)
+#define RKISP1_CIF_ISP_DPCC_OUTPUT_MODE_STAGE1_RB_3X3 (1U << 3)
+
+/* 0-2 for sets 1-3 */
+#define RKISP1_CIF_ISP_DPCC_SET_USE_STAGE1_USE_SET(n) ((n) << 0)
+#define RKISP1_CIF_ISP_DPCC_SET_USE_STAGE1_USE_FIX_SET (1U << 3)
+
+#define RKISP1_CIF_ISP_DPCC_METHODS_SET_PG_GREEN_ENABLE (1U << 0)
+#define RKISP1_CIF_ISP_DPCC_METHODS_SET_LC_GREEN_ENABLE (1U << 1)
+#define RKISP1_CIF_ISP_DPCC_METHODS_SET_RO_GREEN_ENABLE (1U << 2)
+#define RKISP1_CIF_ISP_DPCC_METHODS_SET_RND_GREEN_ENABLE (1U << 3)
+#define RKISP1_CIF_ISP_DPCC_METHODS_SET_RG_GREEN_ENABLE (1U << 4)
+#define RKISP1_CIF_ISP_DPCC_METHODS_SET_PG_RED_BLUE_ENABLE (1U << 8)
+#define RKISP1_CIF_ISP_DPCC_METHODS_SET_LC_RED_BLUE_ENABLE (1U << 9)
+#define RKISP1_CIF_ISP_DPCC_METHODS_SET_RO_RED_BLUE_ENABLE (1U << 10)
+#define RKISP1_CIF_ISP_DPCC_METHODS_SET_RND_RED_BLUE_ENABLE (1U << 11)
+#define RKISP1_CIF_ISP_DPCC_METHODS_SET_RG_RED_BLUE_ENABLE (1U << 12)
+
+#define RKISP1_CIF_ISP_DPCC_LINE_THRESH_G(v) ((v) << 0)
+#define RKISP1_CIF_ISP_DPCC_LINE_THRESH_RB(v) ((v) << 8)
+#define RKISP1_CIF_ISP_DPCC_LINE_MAD_FAC_G(v) ((v) << 0)
+#define RKISP1_CIF_ISP_DPCC_LINE_MAD_FAC_RB(v) ((v) << 8)
+#define RKISP1_CIF_ISP_DPCC_PG_FAC_G(v) ((v) << 0)
+#define RKISP1_CIF_ISP_DPCC_PG_FAC_RB(v) ((v) << 8)
+#define RKISP1_CIF_ISP_DPCC_RND_THRESH_G(v) ((v) << 0)
+#define RKISP1_CIF_ISP_DPCC_RND_THRESH_RB(v) ((v) << 8)
+#define RKISP1_CIF_ISP_DPCC_RG_FAC_G(v) ((v) << 0)
+#define RKISP1_CIF_ISP_DPCC_RG_FAC_RB(v) ((v) << 8)
+
+#define RKISP1_CIF_ISP_DPCC_RO_LIMITS_n_G(n, v) ((v) << ((n) * 4))
+#define RKISP1_CIF_ISP_DPCC_RO_LIMITS_n_RB(n, v) ((v) << ((n) * 4 + 2))
+
+#define RKISP1_CIF_ISP_DPCC_RND_OFFS_n_G(n, v) ((v) << ((n) * 4))
+#define RKISP1_CIF_ISP_DPCC_RND_OFFS_n_RB(n, v) ((v) << ((n) * 4 + 2))
/*
* Denoising pre filter
};
/**
- * struct rkisp1_cif_isp_dpcc_methods_config - Methods Configuration used by DPCC
+ * struct rkisp1_cif_isp_dpcc_methods_config - DPCC methods set configuration
*
- * Methods Configuration used by Defect Pixel Cluster Correction
+ * This structure stores the configuration of one set of methods for the DPCC
+ * algorithm. Multiple methods can be selected in each set (independently for
+ * the Green and Red/Blue components) through the @method field; the result is
+ * the logical AND of all enabled methods. The remaining fields set thresholds
+ * and factors for each method.
*
- * @method: Method enable bits
- * @line_thresh: Line threshold
- * @line_mad_fac: Line MAD factor
- * @pg_fac: Peak gradient factor
- * @rnd_thresh: Rank Neighbor Difference threshold
- * @rg_fac: Rank gradient factor
+ * @method: Method enable bits (RKISP1_CIF_ISP_DPCC_METHODS_SET_*)
+ * @line_thresh: Line threshold (RKISP1_CIF_ISP_DPCC_LINE_THRESH_*)
+ * @line_mad_fac: Line Mean Absolute Difference factor (RKISP1_CIF_ISP_DPCC_LINE_MAD_FAC_*)
+ * @pg_fac: Peak gradient factor (RKISP1_CIF_ISP_DPCC_PG_FAC_*)
+ * @rnd_thresh: Rank Neighbor Difference threshold (RKISP1_CIF_ISP_DPCC_RND_THRESH_*)
+ * @rg_fac: Rank gradient factor (RKISP1_CIF_ISP_DPCC_RG_FAC_*)
*/
struct rkisp1_cif_isp_dpcc_methods_config {
__u32 method;
/**
* struct rkisp1_cif_isp_dpcc_config - Configuration used by DPCC
*
- * Configuration used by Defect Pixel Cluster Correction
+ * Configuration used by Defect Pixel Cluster Correction. Three sets of methods
+ * can be configured and selected through the @set_use field. The result is the
+ * logical OR of all enabled sets.
*
- * @mode: dpcc output mode
- * @output_mode: whether use hard coded methods
- * @set_use: stage1 methods set
- * @methods: methods config
- * @ro_limits: rank order limits
- * @rnd_offs: differential rank offsets for rank neighbor difference
+ * @mode: DPCC mode (RKISP1_CIF_ISP_DPCC_MODE_*)
+ * @output_mode: Interpolation output mode (RKISP1_CIF_ISP_DPCC_OUTPUT_MODE_*)
+ * @set_use: Methods sets selection (RKISP1_CIF_ISP_DPCC_SET_USE_*)
+ * @methods: Methods sets configuration
+ * @ro_limits: Rank order limits (RKISP1_CIF_ISP_DPCC_RO_LIMITS_*)
+ * @rnd_offs: Differential rank offsets for rank neighbor difference (RKISP1_CIF_ISP_DPCC_RND_OFFS_*)
*/
struct rkisp1_cif_isp_dpcc_config {
__u32 mode;
((bt)->width + V4L2_DV_BT_BLANKING_WIDTH(bt))
#define V4L2_DV_BT_BLANKING_HEIGHT(bt) \
((bt)->vfrontporch + (bt)->vsync + (bt)->vbackporch + \
- (bt)->il_vfrontporch + (bt)->il_vsync + (bt)->il_vbackporch)
+ ((bt)->interlaced ? \
+ ((bt)->il_vfrontporch + (bt)->il_vsync + (bt)->il_vbackporch) : 0))
#define V4L2_DV_BT_FRAME_HEIGHT(bt) \
((bt)->height + V4L2_DV_BT_BLANKING_HEIGHT(bt))
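A worked example of the fix, using the standard 1080p vertical timing
(vfrontporch=4, vsync=5, vbackporch=36, interlaced=0):

    /* With the fix, stale il_* fields no longer inflate progressive modes:
     *   V4L2_DV_BT_BLANKING_HEIGHT(bt) = 4 + 5 + 36 = 45
     *   V4L2_DV_BT_FRAME_HEIGHT(bt)    = 1080 + 45  = 1125
     * Previously, leftover il_vfrontporch/il_vsync/il_vbackporch values
     * were always added on top. */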
This shows whether a suitable Rust toolchain is available (found).
Please see Documentation/rust/quick-start.rst for instructions on how
- to satify the build requirements of Rust support.
+ to satisfy the build requirements of Rust support.
In particular, the Makefile target 'rustavailable' is useful to check
why the Rust toolchain is not being detected.
#include <linux/file.h>
#include <linux/io_uring_types.h>
-/*
- * FFS_SCM is only available on 64-bit archs, for 32-bit we just define it as 0
- * and define IO_URING_SCM_ALL. For this case, we use SCM for all files as we
- * can't safely always dereference the file when the task has exited and ring
- * cleanup is done. If a file is tracked and part of SCM, then unix gc on
- * process exit may reap it before __io_sqe_files_unregister() is run.
- */
#define FFS_NOWAIT 0x1UL
#define FFS_ISREG 0x2UL
-#if defined(CONFIG_64BIT)
-#define FFS_SCM 0x4UL
-#else
-#define IO_URING_SCM_ALL
-#define FFS_SCM 0x0UL
-#endif
-#define FFS_MASK ~(FFS_NOWAIT|FFS_ISREG|FFS_SCM)
+#define FFS_MASK ~(FFS_NOWAIT|FFS_ISREG)
bool io_alloc_file_tables(struct io_file_table *table, unsigned nr_files);
void io_free_file_tables(struct io_file_table *table);
static inline void io_file_bitmap_clear(struct io_file_table *table, int bit)
{
+ WARN_ON_ONCE(!test_bit(bit, table->bitmap));
__clear_bit(bit, table->bitmap);
table->alloc_hint = bit;
}
wqe = kzalloc_node(sizeof(struct io_wqe), GFP_KERNEL, alloc_node);
if (!wqe)
goto err;
+ wq->wqes[node] = wqe;
if (!alloc_cpumask_var(&wqe->cpu_mask, GFP_KERNEL))
goto err;
cpumask_copy(wqe->cpu_mask, cpumask_of_node(node));
- wq->wqes[node] = wqe;
wqe->node = alloc_node;
wqe->acct[IO_WQ_ACCT_BOUND].max_workers = bounded;
wqe->acct[IO_WQ_ACCT_UNBOUND].max_workers =
}
}
-int __io_run_local_work(struct io_ring_ctx *ctx, bool locked)
+int __io_run_local_work(struct io_ring_ctx *ctx, bool *locked)
{
struct llist_node *node;
struct llist_node fake;
struct io_kiocb *req = container_of(node, struct io_kiocb,
io_task_work.node);
prefetch(container_of(next, struct io_kiocb, io_task_work.node));
- req->io_task_work.func(req, &locked);
+ req->io_task_work.func(req, locked);
ret++;
node = next;
}
goto again;
}
- if (locked)
+ if (*locked)
io_submit_flush_completions(ctx);
trace_io_uring_local_work_run(ctx, ret, loops);
return ret;
__set_current_state(TASK_RUNNING);
locked = mutex_trylock(&ctx->uring_lock);
- ret = __io_run_local_work(ctx, locked);
+ ret = __io_run_local_work(ctx, &locked);
if (locked)
mutex_unlock(&ctx->uring_lock);
io_task_work_pending(ctx)) {
u32 tail = ctx->cached_cq_tail;
- if (!llist_empty(&ctx->work_llist))
- __io_run_local_work(ctx, true);
+ (void) io_run_local_work_locked(ctx);
if (task_work_pending(current) ||
wq_list_empty(&ctx->iopoll_list)) {
res |= FFS_ISREG;
if (__io_file_supports_nowait(file, mode))
res |= FFS_NOWAIT;
- if (io_file_need_scm(file))
- res |= FFS_SCM;
return res;
}
/* mask in overlapping REQ_F and FFS bits */
req->flags |= (file_ptr << REQ_F_SUPPORT_NOWAIT_BIT);
io_req_set_rsrc_node(req, ctx, 0);
- WARN_ON_ONCE(file && !test_bit(fd, ctx->file_table.bitmap));
out:
io_ring_submit_unlock(ctx, issue_flags);
return file;
static void io_req_caches_free(struct io_ring_ctx *ctx)
{
- struct io_submit_state *state = &ctx->submit_state;
int nr = 0;
mutex_lock(&ctx->uring_lock);
- io_flush_cached_locked_reqs(ctx, state);
+ io_flush_cached_locked_reqs(ctx, &ctx->submit_state);
while (!io_req_cache_empty(ctx)) {
- struct io_wq_work_node *node;
- struct io_kiocb *req;
+ struct io_kiocb *req = io_alloc_req(ctx);
- node = wq_stack_extract(&state->free_list);
- req = container_of(node, struct io_kiocb, comp_list);
kmem_cache_free(req_cachep, req);
nr++;
}
io_poll_remove_all(ctx, NULL, true);
mutex_unlock(&ctx->uring_lock);
- /* failed during ring init, it couldn't have issued any requests */
- if (ctx->rings) {
+ /*
+ * If we failed setting up the ctx, we might not have any rings
+ * and therefore did not submit any requests
+ */
+ if (ctx->rings)
io_kill_timeouts(ctx, NULL, true);
- /* if we failed setting up the ctx, we might not have any rings */
- io_iopoll_try_reap_events(ctx);
- /* drop cached put refs after potentially doing completions */
- if (current->io_uring)
- io_uring_drop_tctx_refs(current);
- }
INIT_WORK(&ctx->exit_work, io_ring_exit_work);
/*
struct io_uring_cqe *__io_get_cqe(struct io_ring_ctx *ctx, bool overflow);
bool io_req_cqe_overflow(struct io_kiocb *req);
int io_run_task_work_sig(struct io_ring_ctx *ctx);
-int __io_run_local_work(struct io_ring_ctx *ctx, bool locked);
+int __io_run_local_work(struct io_ring_ctx *ctx, bool *locked);
int io_run_local_work(struct io_ring_ctx *ctx);
void io_req_complete_failed(struct io_kiocb *req, s32 res);
void __io_req_complete(struct io_kiocb *req, unsigned issue_flags);
static inline int io_run_local_work_locked(struct io_ring_ctx *ctx)
{
+ bool locked;
+ int ret;
+
if (llist_empty(&ctx->work_llist))
return 0;
- return __io_run_local_work(ctx, true);
+
+ locked = true;
+ ret = __io_run_local_work(ctx, &locked);
+ /* shouldn't happen! */
+ if (WARN_ON_ONCE(!locked))
+ mutex_lock(&ctx->uring_lock);
+ return ret;
}
static inline void io_tw_lock(struct io_ring_ctx *ctx, bool *locked)
msg->src_fd = array_index_nospec(msg->src_fd, ctx->nr_user_files);
file_ptr = io_fixed_file_slot(&ctx->file_table, msg->src_fd)->file_ptr;
+ if (!file_ptr)
+ goto out_unlock;
+
src_file = (struct file *) (file_ptr & FFS_MASK);
get_file(src_file);
sock = sock_from_file(req->file);
if (unlikely(!sock))
return -ENOTSOCK;
+ if (!test_bit(SOCK_SUPPORT_ZC, &sock->flags))
+ return -EOPNOTSUPP;
msg.msg_name = NULL;
msg.msg_control = NULL;
sock = sock_from_file(req->file);
if (unlikely(!sock))
return -ENOTSOCK;
+ if (!test_bit(SOCK_SUPPORT_ZC, &sock->flags))
+ return -EOPNOTSUPP;
if (req_has_async_data(req)) {
kmsg = req->async_data;
void __io_sqe_files_unregister(struct io_ring_ctx *ctx)
{
-#if !defined(IO_URING_SCM_ALL)
int i;
for (i = 0; i < ctx->nr_user_files; i++) {
struct file *file = io_file_from_index(&ctx->file_table, i);
- if (!file)
- continue;
- if (io_fixed_file_slot(&ctx->file_table, i)->file_ptr & FFS_SCM)
+ /* skip scm accounted files, they'll be freed by ->ring_sock */
+ if (!file || io_file_need_scm(file))
continue;
io_file_bitmap_clear(&ctx->file_table, i);
fput(file);
}
-#endif
#if defined(CONFIG_UNIX)
if (ctx->ring_sock) {
#if defined(CONFIG_UNIX)
static inline bool io_file_need_scm(struct file *filp)
{
-#if defined(IO_URING_SCM_ALL)
- return true;
-#else
return !!unix_get_socket(filp);
-#endif
}
#else
static inline bool io_file_need_scm(struct file *filp)
{
struct io_rw *rw = io_kiocb_to_cmd(req, struct io_rw);
- WARN_ON(!in_task());
-
if (rw->kiocb.ki_flags & IOCB_WRITE) {
kiocb_end_write(req);
fsnotify_modify(req->file);
#ifdef CONFIG_IPC_NS
void msg_exit_ns(struct ipc_namespace *ns)
{
- percpu_counter_destroy(&ns->percpu_msg_bytes);
- percpu_counter_destroy(&ns->percpu_msg_hdrs);
free_ipcs(ns, &msg_ids(ns), freeque);
idr_destroy(&ns->ids[IPC_MSG_IDS].ipcs_idr);
rhashtable_destroy(&ns->ids[IPC_MSG_IDS].key_ht);
+ percpu_counter_destroy(&ns->percpu_msg_bytes);
+ percpu_counter_destroy(&ns->percpu_msg_hdrs);
}
#endif
return -EINVAL;
}
+ if (btf_type_is_resolve_source_only(ret_type)) {
+ btf_verifier_log_type(env, t, "Invalid return type");
+ return -EINVAL;
+ }
+
if (btf_type_needs_resolve(ret_type) &&
!env_type_is_resolved(env, ret_type_id)) {
err = btf_resolve(env, ret_type, ret_type_id);
return -EINVAL;
if (fd)
- cgrp = cgroup_get_from_fd(fd);
+ cgrp = cgroup_v1v2_get_from_fd(fd);
else if (id)
cgrp = cgroup_get_from_id(id);
else /* walk the entire hierarchy by default. */
#include <linux/hash.h>
#include <linux/bpf.h>
#include <linux/filter.h>
+#include <linux/init.h>
/* The BPF dispatcher is a multiway branch code generator. The
* dispatcher is a mechanism to avoid the performance penalty of an
return -ENOTSUPP;
}
+int __weak __init bpf_arch_init_dispatcher_early(void *ip)
+{
+ return -ENOTSUPP;
+}
+
static int bpf_dispatcher_prepare(struct bpf_dispatcher *d, void *image, void *buf)
{
s64 ips[BPF_DISPATCHER_MAX] = {}, *ipsp = &ips[0];
/* No progs are using this bpf_mem_cache, but htab_map_free() called
* bpf_mem_cache_free() for all remaining elements and they can be in
* free_by_rcu or in waiting_for_gp lists, so drain those lists now.
+ *
+ * Except for waiting_for_gp list, there are no concurrent operations
+ * on these lists, so it is safe to use __llist_del_all().
*/
llist_for_each_safe(llnode, t, __llist_del_all(&c->free_by_rcu))
free_one(c, llnode);
llist_for_each_safe(llnode, t, llist_del_all(&c->waiting_for_gp))
free_one(c, llnode);
- llist_for_each_safe(llnode, t, llist_del_all(&c->free_llist))
+ llist_for_each_safe(llnode, t, __llist_del_all(&c->free_llist))
free_one(c, llnode);
- llist_for_each_safe(llnode, t, llist_del_all(&c->free_llist_extra))
+ llist_for_each_safe(llnode, t, __llist_del_all(&c->free_llist_extra))
free_one(c, llnode);
}
rcu_in_progress = 0;
for_each_possible_cpu(cpu) {
c = per_cpu_ptr(ma->cache, cpu);
+ /*
+ * refill_work may be unfinished for PREEMPT_RT kernel
+ * in which irq work is invoked in a per-CPU RT thread.
+ * It is also possible, on a kernel where
+ * arch_irq_work_has_interrupt() is false, for the irq
+ * work to be invoked from the timer interrupt. So wait
+ * for the completion of the irq work to ease the
+ * handling of concurrency.
+ */
+ irq_work_sync(&c->refill_work);
drain_mem_cache(c);
rcu_in_progress += atomic_read(&c->call_rcu_in_progress);
}
cc = per_cpu_ptr(ma->caches, cpu);
for (i = 0; i < NUM_CACHES; i++) {
c = &cc->cache[i];
+ irq_work_sync(&c->refill_work);
drain_mem_cache(c);
rcu_in_progress += atomic_read(&c->call_rcu_in_progress);
}
__mark_reg_not_init(env, &callee->regs[BPF_REG_5]);
callee->in_callback_fn = true;
+ callee->callback_ret_range = tnum_range(0, 1);
return 0;
}
cgroup_free_root(root);
}
+/*
+ * The returned cgroup has no refcount taken, but it is valid as long as the cset pins it.
+ */
static inline struct cgroup *__cset_cgroup_from_root(struct css_set *cset,
struct cgroup_root *root)
{
res_cgroup = cset->dfl_cgrp;
} else {
struct cgrp_cset_link *link;
+ lockdep_assert_held(&css_set_lock);
list_for_each_entry(link, &cset->cgrp_links, cgrp_link) {
struct cgroup *c = link->cgrp;
}
}
+ BUG_ON(!res_cgroup);
return res_cgroup;
}
rcu_read_unlock();
- BUG_ON(!res);
return res;
}
+/*
+ * Look up cgroup associated with current task's cgroup namespace on the default
+ * hierarchy.
+ *
+ * Unlike current_cgns_cgroup_from_root(), this doesn't need locks:
+ * - Internal rcu_read_lock is unnecessary because we don't dereference any rcu
+ * pointers.
+ * - css_set_lock is not needed because we just read cset->dfl_cgrp.
+ * - As a bonus, the returned cgrp is pinned by the current task, which cannot
+ *   switch cgroup_ns asynchronously.
+ */
+static struct cgroup *current_cgns_cgroup_dfl(void)
+{
+ struct css_set *cset;
+
+ cset = current->nsproxy->cgroup_ns->root_cset;
+ return __cset_cgroup_from_root(cset, &cgrp_dfl_root);
+}
+
/* look up cgroup associated with given css_set on the specified hierarchy */
static struct cgroup *cset_cgroup_from_root(struct css_set *cset,
struct cgroup_root *root)
{
- struct cgroup *res = NULL;
-
lockdep_assert_held(&cgroup_mutex);
lockdep_assert_held(&css_set_lock);
- res = __cset_cgroup_from_root(cset, root);
-
- BUG_ON(!res);
- return res;
+ return __cset_cgroup_from_root(cset, root);
}
/*
if (!cgrp)
return ERR_PTR(-ENOENT);
- spin_lock_irq(&css_set_lock);
- root_cgrp = current_cgns_cgroup_from_root(&cgrp_dfl_root);
- spin_unlock_irq(&css_set_lock);
+ root_cgrp = current_cgns_cgroup_dfl();
if (!cgroup_is_descendant(cgrp, root_cgrp)) {
cgroup_put(cgrp);
return ERR_PTR(-ENOENT);
INIT_LIST_HEAD(&child->cg_list);
}
-static struct cgroup *cgroup_get_from_file(struct file *f)
+/**
+ * cgroup_v1v2_get_from_file - get a cgroup pointer from a file pointer
+ * @f: file corresponding to cgroup_dir
+ *
+ * Find the cgroup from a file pointer associated with a cgroup directory.
+ * Returns a pointer to the cgroup on success. ERR_PTR is returned if the
+ * cgroup cannot be found.
+ */
+static struct cgroup *cgroup_v1v2_get_from_file(struct file *f)
{
struct cgroup_subsys_state *css;
- struct cgroup *cgrp;
css = css_tryget_online_from_dir(f->f_path.dentry, NULL);
if (IS_ERR(css))
return ERR_CAST(css);
- cgrp = css->cgroup;
+ return css->cgroup;
+}
+
+/**
+ * cgroup_get_from_file - same as cgroup_v1v2_get_from_file, but only supports
+ * cgroup2.
+ * @f: file corresponding to cgroup2_dir
+ */
+static struct cgroup *cgroup_get_from_file(struct file *f)
+{
+ struct cgroup *cgrp = cgroup_v1v2_get_from_file(f);
+
+ if (IS_ERR(cgrp))
+ return ERR_CAST(cgrp);
+
+ if (!cgroup_on_dfl(cgrp)) {
+ cgroup_put(cgrp);
+ return ERR_PTR(-EBADF);
+ }
+
return cgrp;
}
struct cgroup *cgrp = ERR_PTR(-ENOENT);
struct cgroup *root_cgrp;
- spin_lock_irq(&css_set_lock);
- root_cgrp = current_cgns_cgroup_from_root(&cgrp_dfl_root);
+ root_cgrp = current_cgns_cgroup_dfl();
kn = kernfs_walk_and_get(root_cgrp->kn, path);
- spin_unlock_irq(&css_set_lock);
if (!kn)
goto out;
EXPORT_SYMBOL_GPL(cgroup_get_from_path);
/**
- * cgroup_get_from_fd - get a cgroup pointer from a fd
- * @fd: fd obtained by open(cgroup2_dir)
+ * cgroup_v1v2_get_from_fd - get a cgroup pointer from a fd
+ * @fd: fd obtained by open(cgroup_dir)
*
* Find the cgroup from a fd which should be obtained
* by opening a cgroup directory. Returns a pointer to the
* cgroup on success. ERR_PTR is returned if the cgroup
* cannot be found.
*/
-struct cgroup *cgroup_get_from_fd(int fd)
+struct cgroup *cgroup_v1v2_get_from_fd(int fd)
{
struct cgroup *cgrp;
struct file *f;
if (!f)
return ERR_PTR(-EBADF);
- cgrp = cgroup_get_from_file(f);
+ cgrp = cgroup_v1v2_get_from_file(f);
fput(f);
return cgrp;
}
+
+/**
+ * cgroup_get_from_fd - same as cgroup_v1v2_get_from_fd, but only supports
+ * cgroup2.
+ * @fd: fd obtained by open(cgroup2_dir)
+ */
+struct cgroup *cgroup_get_from_fd(int fd)
+{
+ struct cgroup *cgrp = cgroup_v1v2_get_from_fd(fd);
+
+ if (IS_ERR(cgrp))
+ return ERR_CAST(cgrp);
+
+ if (!cgroup_on_dfl(cgrp)) {
+ cgroup_put(cgrp);
+ return ERR_PTR(-EBADF);
+ }
+ return cgrp;
+}
EXPORT_SYMBOL_GPL(cgroup_get_from_fd);
static u64 power_of_ten(int power)
#include <linux/highmem.h>
#include <linux/pgtable.h>
#include <linux/buildid.h>
+#include <linux/task_work.h>
#include "internal.h"
event->pmu->del(event, 0);
event->oncpu = -1;
- if (READ_ONCE(event->pending_disable) >= 0) {
- WRITE_ONCE(event->pending_disable, -1);
+ if (event->pending_disable) {
+ event->pending_disable = 0;
perf_cgroup_event_disable(event, ctx);
state = PERF_EVENT_STATE_OFF;
}
+
+ if (event->pending_sigtrap) {
+ bool dec = true;
+
+ event->pending_sigtrap = 0;
+ if (state != PERF_EVENT_STATE_OFF &&
+ !event->pending_work) {
+ event->pending_work = 1;
+ dec = false;
+ task_work_add(current, &event->pending_task, TWA_RESUME);
+ }
+ if (dec)
+ local_dec(&event->ctx->nr_pending);
+ }
+
perf_event_set_state(event, state);
if (!is_software_event(event))
* hold the top-level event's child_mutex, so any descendant that
* goes to exit will block in perf_event_exit_event().
*
- * When called from perf_pending_event it's OK because event->ctx
+ * When called from perf_pending_irq it's OK because event->ctx
* is the current context on this CPU and preemption is disabled,
* hence we can't get into perf_event_task_sched_out for this context.
*/
void perf_event_disable_inatomic(struct perf_event *event)
{
- WRITE_ONCE(event->pending_disable, smp_processor_id());
- /* can fail, see perf_pending_event_disable() */
- irq_work_queue(&event->pending);
+ event->pending_disable = 1;
+ irq_work_queue(&event->pending_irq);
}
#define MAX_INTERRUPTS (~0ULL)
raw_spin_lock_nested(&next_ctx->lock, SINGLE_DEPTH_NESTING);
if (context_equiv(ctx, next_ctx)) {
+ perf_pmu_disable(pmu);
+
+ /* PMIs are disabled; ctx->nr_pending is stable. */
+ if (local_read(&ctx->nr_pending) ||
+ local_read(&next_ctx->nr_pending)) {
+ /*
+ * Must not swap out ctx when there are pending
+ * events that rely on the ctx->task relation.
+ */
+ raw_spin_unlock(&next_ctx->lock);
+ rcu_read_unlock();
+ goto inside_switch;
+ }
+
WRITE_ONCE(ctx->task, next);
WRITE_ONCE(next_ctx->task, task);
- perf_pmu_disable(pmu);
-
if (cpuctx->sched_cb_usage && pmu->sched_task)
pmu->sched_task(ctx, false);
raw_spin_lock(&ctx->lock);
perf_pmu_disable(pmu);
+inside_switch:
if (cpuctx->sched_cb_usage && pmu->sched_task)
pmu->sched_task(ctx, false);
task_ctx_sched_out(cpuctx, ctx, EVENT_ALL);
static void _free_event(struct perf_event *event)
{
- irq_work_sync(&event->pending);
+ irq_work_sync(&event->pending_irq);
unaccount_event(event);
return;
/*
- * perf_pending_event() can race with the task exiting.
+ * Both perf_pending_task() and perf_pending_irq() can race with the
+ * task exiting.
*/
if (current->flags & PF_EXITING)
return;
event->attr.type, event->attr.sig_data);
}
-static void perf_pending_event_disable(struct perf_event *event)
+/*
+ * Deliver the pending work in-event-context or follow the context.
+ */
+static void __perf_pending_irq(struct perf_event *event)
{
- int cpu = READ_ONCE(event->pending_disable);
+ int cpu = READ_ONCE(event->oncpu);
+ /*
+ * If the event isn't running, we're done. event_sched_out() will have
+ * taken care of things.
+ */
if (cpu < 0)
return;
+ /*
+ * Yay, we hit home and are in the context of the event.
+ */
if (cpu == smp_processor_id()) {
- WRITE_ONCE(event->pending_disable, -1);
-
- if (event->attr.sigtrap) {
+ if (event->pending_sigtrap) {
+ event->pending_sigtrap = 0;
perf_sigtrap(event);
- atomic_set_release(&event->event_limit, 1); /* rearm event */
- return;
+ local_dec(&event->ctx->nr_pending);
+ }
+ if (event->pending_disable) {
+ event->pending_disable = 0;
+ perf_event_disable_local(event);
}
-
- perf_event_disable_local(event);
return;
}
* irq_work_queue(); // FAILS
*
* irq_work_run()
- * perf_pending_event()
+ * perf_pending_irq()
*
* But the event runs on CPU-B and wants disabling there.
*/
- irq_work_queue_on(&event->pending, cpu);
+ irq_work_queue_on(&event->pending_irq, cpu);
}
-static void perf_pending_event(struct irq_work *entry)
+static void perf_pending_irq(struct irq_work *entry)
{
- struct perf_event *event = container_of(entry, struct perf_event, pending);
+ struct perf_event *event = container_of(entry, struct perf_event, pending_irq);
int rctx;
- rctx = perf_swevent_get_recursion_context();
/*
* If we 'fail' here, that's OK, it means recursion is already disabled
* and we won't recurse 'further'.
*/
+ rctx = perf_swevent_get_recursion_context();
- perf_pending_event_disable(event);
-
+ /*
+ * The wakeup isn't bound to the context of the event -- it can happen
+ * irrespective of where the event is.
+ */
if (event->pending_wakeup) {
event->pending_wakeup = 0;
perf_event_wakeup(event);
}
+ __perf_pending_irq(event);
+
if (rctx >= 0)
perf_swevent_put_recursion_context(rctx);
}
+static void perf_pending_task(struct callback_head *head)
+{
+ struct perf_event *event = container_of(head, struct perf_event, pending_task);
+ int rctx;
+
+ /*
+ * If we 'fail' here, that's OK, it means recursion is already disabled
+ * and we won't recurse 'further'.
+ */
+ preempt_disable_notrace();
+ rctx = perf_swevent_get_recursion_context();
+
+ if (event->pending_work) {
+ event->pending_work = 0;
+ perf_sigtrap(event);
+ local_dec(&event->ctx->nr_pending);
+ }
+
+ if (rctx >= 0)
+ perf_swevent_put_recursion_context(rctx);
+ preempt_enable_notrace();
+}
+
#ifdef CONFIG_GUEST_PERF_EVENTS
struct perf_guest_info_callbacks __rcu *perf_guest_cbs;
*/
static int __perf_event_overflow(struct perf_event *event,
- int throttle, struct perf_sample_data *data,
- struct pt_regs *regs)
+ int throttle, struct perf_sample_data *data,
+ struct pt_regs *regs)
{
int events = atomic_read(&event->event_limit);
int ret = 0;
if (events && atomic_dec_and_test(&event->event_limit)) {
ret = 1;
event->pending_kill = POLL_HUP;
- event->pending_addr = data->addr;
-
perf_event_disable_inatomic(event);
}
+ if (event->attr.sigtrap) {
+ /*
+ * Should not be able to return to user space without processing
+ * pending_sigtrap (kernel events can overflow multiple times).
+ */
+ WARN_ON_ONCE(event->pending_sigtrap && event->attr.exclude_kernel);
+ if (!event->pending_sigtrap) {
+ event->pending_sigtrap = 1;
+ local_inc(&event->ctx->nr_pending);
+ }
+ event->pending_addr = data->addr;
+ irq_work_queue(&event->pending_irq);
+ }
+
READ_ONCE(event->overflow_handler)(event, data, regs);
if (*perf_event_fasync(event) && event->pending_kill) {
event->pending_wakeup = 1;
- irq_work_queue(&event->pending);
+ irq_work_queue(&event->pending_irq);
}
return ret;
}
int perf_event_overflow(struct perf_event *event,
- struct perf_sample_data *data,
- struct pt_regs *regs)
+ struct perf_sample_data *data,
+ struct pt_regs *regs)
{
return __perf_event_overflow(event, 1, data, regs);
}
perf_sample_data_init(&data, 0, 0);
data.raw = &raw;
+ data.sample_flags |= PERF_SAMPLE_RAW;
perf_trace_buf_update(record, event_type);
init_waitqueue_head(&event->waitq);
- event->pending_disable = -1;
- init_irq_work(&event->pending, perf_pending_event);
+ init_irq_work(&event->pending_irq, perf_pending_irq);
+ init_task_work(&event->pending_task, perf_pending_task);
mutex_init(&event->mmap_mutex);
raw_spin_lock_init(&event->addr_filters.lock);
if (parent_event)
event->event_caps = parent_event->event_caps;
- if (event->attr.sigtrap)
- atomic_set(&event->event_limit, 1);
-
if (task) {
event->attach_state = PERF_ATTACH_TASK;
/*
{
/* Most test cases want 2 distinct CPUs. */
if (num_online_cpus() < 2)
- return -EINVAL;
+ kunit_skip(test, "not enough cpus");
/* Want the system to not use breakpoints elsewhere. */
if (hw_breakpoint_is_used())
- return -EBUSY;
+ kunit_skip(test, "hw breakpoint already in use");
return 0;
}
atomic_set(&handle->rb->poll, EPOLLIN);
handle->event->pending_wakeup = 1;
- irq_work_queue(&handle->event->pending);
+ irq_work_queue(&handle->event->pending_irq);
}
/*
#define GCOV_TAG_FUNCTION_LENGTH 3
+/* Since GCC 12.1 sizes are in BYTES and not in WORDS (4B). */
+#if (__GNUC__ >= 12)
+#define GCOV_UNIT_SIZE 4
+#else
+#define GCOV_UNIT_SIZE 1
+#endif
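Worked example: with GCC >= 12 the function-record length field below is
stored as GCOV_TAG_FUNCTION_LENGTH * GCOV_UNIT_SIZE = 3 * 4 = 12 (bytes);
with older compilers it remains 3 (words).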
+
static struct gcov_info *gcov_info_head;
/**
pos += store_gcov_u32(buffer, pos, info->version);
pos += store_gcov_u32(buffer, pos, info->stamp);
+#if (__GNUC__ >= 12)
+ /* Use zero as checksum of the compilation unit. */
+ pos += store_gcov_u32(buffer, pos, 0);
+#endif
+
for (fi_idx = 0; fi_idx < info->n_functions; fi_idx++) {
fi_ptr = info->functions[fi_idx];
/* Function record. */
pos += store_gcov_u32(buffer, pos, GCOV_TAG_FUNCTION);
- pos += store_gcov_u32(buffer, pos, GCOV_TAG_FUNCTION_LENGTH);
+ pos += store_gcov_u32(buffer, pos,
+ GCOV_TAG_FUNCTION_LENGTH * GCOV_UNIT_SIZE);
pos += store_gcov_u32(buffer, pos, fi_ptr->ident);
pos += store_gcov_u32(buffer, pos, fi_ptr->lineno_checksum);
pos += store_gcov_u32(buffer, pos, fi_ptr->cfg_checksum);
/* Counter record. */
pos += store_gcov_u32(buffer, pos,
GCOV_TAG_FOR_COUNTER(ct_idx));
- pos += store_gcov_u32(buffer, pos, ci_ptr->num * 2);
+ pos += store_gcov_u32(buffer, pos,
+ ci_ptr->num * 2 * GCOV_UNIT_SIZE);
for (cv_idx = 0; cv_idx < ci_ptr->num; cv_idx++) {
pos += store_gcov_u64(buffer, pos,
if (!kprobes_all_disarmed && kprobe_disabled(p)) {
p->flags &= ~KPROBE_FLAG_DISABLED;
ret = arm_kprobe(p);
- if (ret)
+ if (ret) {
p->flags |= KPROBE_FLAG_DISABLED;
+ if (p != kp)
+ kp->flags |= KPROBE_FLAG_DISABLED;
+ }
}
out:
mutex_unlock(&kprobe_mutex);
int error;
if (hibernation_mode == HIBERNATION_SUSPEND) {
- error = suspend_devices_and_enter(PM_SUSPEND_MEM);
+ error = suspend_devices_and_enter(mem_sleep_current);
if (error) {
hibernation_mode = hibernation_ops ?
HIBERNATION_PLATFORM :
// where caller does not hold the root rcu_node structure's lock.
static void rcu_poll_gp_seq_start_unlocked(unsigned long *snap)
{
+ unsigned long flags;
struct rcu_node *rnp = rcu_get_root();
if (rcu_init_invoked()) {
lockdep_assert_irqs_enabled();
- raw_spin_lock_irq_rcu_node(rnp);
+ raw_spin_lock_irqsave_rcu_node(rnp, flags);
}
rcu_poll_gp_seq_start(snap);
if (rcu_init_invoked())
- raw_spin_unlock_irq_rcu_node(rnp);
+ raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
}
// Make the polled API aware of the end of a grace period, but where
// caller does not hold the root rcu_node structure's lock.
static void rcu_poll_gp_seq_end_unlocked(unsigned long *snap)
{
+ unsigned long flags;
struct rcu_node *rnp = rcu_get_root();
if (rcu_init_invoked()) {
lockdep_assert_irqs_enabled();
- raw_spin_lock_irq_rcu_node(rnp);
+ raw_spin_lock_irqsave_rcu_node(rnp, flags);
}
rcu_poll_gp_seq_end(snap);
if (rcu_init_invoked())
- raw_spin_unlock_irq_rcu_node(rnp);
+ raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
}
/*
#ifdef CONFIG_SMP
-static void do_balance_callbacks(struct rq *rq, struct callback_head *head)
+static void do_balance_callbacks(struct rq *rq, struct balance_callback *head)
{
void (*func)(struct rq *rq);
- struct callback_head *next;
+ struct balance_callback *next;
lockdep_assert_rq_held(rq);
* This abuse is tolerated because it places all the unlikely/odd cases behind
* a single test, namely: rq->balance_callback == NULL.
*/
-struct callback_head balance_push_callback = {
+struct balance_callback balance_push_callback = {
.next = NULL,
- .func = (void (*)(struct callback_head *))balance_push,
+ .func = balance_push,
};
-static inline struct callback_head *
+static inline struct balance_callback *
__splice_balance_callbacks(struct rq *rq, bool split)
{
- struct callback_head *head = rq->balance_callback;
+ struct balance_callback *head = rq->balance_callback;
if (likely(!head))
return NULL;
return head;
}
-static inline struct callback_head *splice_balance_callbacks(struct rq *rq)
+static inline struct balance_callback *splice_balance_callbacks(struct rq *rq)
{
return __splice_balance_callbacks(rq, true);
}
do_balance_callbacks(rq, __splice_balance_callbacks(rq, false));
}
-static inline void balance_callbacks(struct rq *rq, struct callback_head *head)
+static inline void balance_callbacks(struct rq *rq, struct balance_callback *head)
{
unsigned long flags;
{
}
-static inline struct callback_head *splice_balance_callbacks(struct rq *rq)
+static inline struct balance_callback *splice_balance_callbacks(struct rq *rq)
{
return NULL;
}
-static inline void balance_callbacks(struct rq *rq, struct callback_head *head)
+static inline void balance_callbacks(struct rq *rq, struct balance_callback *head)
{
}
preempt_enable();
}
-static DEFINE_PER_CPU(struct callback_head, core_balance_head);
+static DEFINE_PER_CPU(struct balance_callback, core_balance_head);
static void queue_core_balance(struct rq *rq)
{
int oldpolicy = -1, policy = attr->sched_policy;
int retval, oldprio, newprio, queued, running;
const struct sched_class *prev_class;
- struct callback_head *head;
+ struct balance_callback *head;
struct rq_flags rf;
int reset_on_fork;
int queue_flags = DEQUEUE_SAVE | DEQUEUE_MOVE | DEQUEUE_NOCLOCK;
return rq->online && dl_task(prev);
}
-static DEFINE_PER_CPU(struct callback_head, dl_push_head);
-static DEFINE_PER_CPU(struct callback_head, dl_pull_head);
+static DEFINE_PER_CPU(struct balance_callback, dl_push_head);
+static DEFINE_PER_CPU(struct balance_callback, dl_pull_head);
static void push_dl_tasks(struct rq *);
static void pull_dl_task(struct rq *);
return !plist_head_empty(&rq->rt.pushable_tasks);
}
-static DEFINE_PER_CPU(struct callback_head, rt_push_head);
-static DEFINE_PER_CPU(struct callback_head, rt_pull_head);
+static DEFINE_PER_CPU(struct balance_callback, rt_push_head);
+static DEFINE_PER_CPU(struct balance_callback, rt_pull_head);
static void push_rt_tasks(struct rq *);
static void pull_rt_task(struct rq *);
DECLARE_STATIC_KEY_FALSE(sched_uclamp_used);
#endif /* CONFIG_UCLAMP_TASK */
+struct rq;
+struct balance_callback {
+ struct balance_callback *next;
+ void (*func)(struct rq *rq);
+};
+
/*
* This is the main, per-CPU runqueue data structure.
*
unsigned long cpu_capacity;
unsigned long cpu_capacity_orig;
- struct callback_head *balance_callback;
+ struct balance_callback *balance_callback;
unsigned char nohz_idle_balance;
unsigned char idle_balance;
#endif
}
+DECLARE_PER_CPU_SHARED_ALIGNED(struct rq, runqueues);
+
+#define cpu_rq(cpu) (&per_cpu(runqueues, (cpu)))
+#define this_rq() this_cpu_ptr(&runqueues)
+#define task_rq(p) cpu_rq(task_cpu(p))
+#define cpu_curr(cpu) (cpu_rq(cpu)->curr)
+#define raw_rq() raw_cpu_ptr(&runqueues)
+
struct sched_group;
#ifdef CONFIG_SCHED_CORE
static inline struct cpumask *sched_group_span(struct sched_group *sg);
return true;
for_each_cpu_and(cpu, sched_group_span(group), p->cpus_ptr) {
- if (sched_core_cookie_match(rq, p))
+ if (sched_core_cookie_match(cpu_rq(cpu), p))
return true;
}
return false;
static inline void update_idle_core(struct rq *rq) { }
#endif
-DECLARE_PER_CPU_SHARED_ALIGNED(struct rq, runqueues);
-
-#define cpu_rq(cpu) (&per_cpu(runqueues, (cpu)))
-#define this_rq() this_cpu_ptr(&runqueues)
-#define task_rq(p) cpu_rq(task_cpu(p))
-#define cpu_curr(cpu) (cpu_rq(cpu)->curr)
-#define raw_rq() raw_cpu_ptr(&runqueues)
-
#ifdef CONFIG_FAIR_GROUP_SCHED
static inline struct task_struct *task_of(struct sched_entity *se)
{
#endif
};
-extern struct callback_head balance_push_callback;
+extern struct balance_callback balance_push_callback;
/*
* Lockdep annotation that avoids accidental unlocks; it's like a
static inline void
queue_balance_callback(struct rq *rq,
- struct callback_head *head,
+ struct balance_callback *head,
void (*func)(struct rq *rq))
{
lockdep_assert_rq_held(rq);
if (unlikely(head->next || rq->balance_callback == &balance_push_callback))
return;
- head->func = (void (*)(struct callback_head *))func;
+ head->func = func;
head->next = rq->balance_callback;
rq->balance_callback = head;
}
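A usage sketch mirroring the rt.c call site visible above: with the new type,
the function pointer is installed directly, without the old callback_head
cast:

    static inline void rt_queue_push_tasks(struct rq *rq)
    {
            if (!has_pushable_tasks(rq))
                    return;

            queue_balance_callback(rq, &per_cpu(rt_push_head, rq->cpu),
                                   push_rt_tasks);
    }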
mutex_unlock(&blk_probe_mutex);
}
+static int blk_trace_start(struct blk_trace *bt)
+{
+ if (bt->trace_state != Blktrace_setup &&
+ bt->trace_state != Blktrace_stopped)
+ return -EINVAL;
+
+ blktrace_seq++;
+ smp_mb();
+ bt->trace_state = Blktrace_running;
+ raw_spin_lock_irq(&running_trace_lock);
+ list_add(&bt->running_list, &running_trace_list);
+ raw_spin_unlock_irq(&running_trace_lock);
+ trace_note_time(bt);
+
+ return 0;
+}
+
+static int blk_trace_stop(struct blk_trace *bt)
+{
+ if (bt->trace_state != Blktrace_running)
+ return -EINVAL;
+
+ bt->trace_state = Blktrace_stopped;
+ raw_spin_lock_irq(&running_trace_lock);
+ list_del_init(&bt->running_list);
+ raw_spin_unlock_irq(&running_trace_lock);
+ relay_flush(bt->rchan);
+
+ return 0;
+}
+
static void blk_trace_cleanup(struct request_queue *q, struct blk_trace *bt)
{
+ blk_trace_stop(bt);
synchronize_rcu();
blk_trace_free(q, bt);
put_probe_ref();
if (!bt)
return -EINVAL;
- if (bt->trace_state != Blktrace_running)
- blk_trace_cleanup(q, bt);
+ blk_trace_cleanup(q, bt);
return 0;
}
static int __blk_trace_startstop(struct request_queue *q, int start)
{
- int ret;
struct blk_trace *bt;
bt = rcu_dereference_protected(q->blk_trace,
if (bt == NULL)
return -EINVAL;
- /*
- * For starting a trace, we can transition from a setup or stopped
- * trace. For stopping a trace, the state must be running
- */
- ret = -EINVAL;
- if (start) {
- if (bt->trace_state == Blktrace_setup ||
- bt->trace_state == Blktrace_stopped) {
- blktrace_seq++;
- smp_mb();
- bt->trace_state = Blktrace_running;
- raw_spin_lock_irq(&running_trace_lock);
- list_add(&bt->running_list, &running_trace_list);
- raw_spin_unlock_irq(&running_trace_lock);
-
- trace_note_time(bt);
- ret = 0;
- }
- } else {
- if (bt->trace_state == Blktrace_running) {
- bt->trace_state = Blktrace_stopped;
- raw_spin_lock_irq(&running_trace_lock);
- list_del_init(&bt->running_list);
- raw_spin_unlock_irq(&running_trace_lock);
- relay_flush(bt->rchan);
- ret = 0;
- }
- }
-
- return ret;
+ if (start)
+ return blk_trace_start(bt);
+ else
+ return blk_trace_stop(bt);
}
int blk_trace_startstop(struct request_queue *q, int start)
void blk_trace_shutdown(struct request_queue *q)
{
if (rcu_dereference_protected(q->blk_trace,
- lockdep_is_held(&q->debugfs_mutex))) {
- __blk_trace_startstop(q, 0);
+ lockdep_is_held(&q->debugfs_mutex)))
__blk_trace_remove(q);
- }
}
#ifdef CONFIG_BLK_CGROUP
if (bt == NULL)
return -EINVAL;
- if (bt->trace_state == Blktrace_running) {
- bt->trace_state = Blktrace_stopped;
- raw_spin_lock_irq(&running_trace_lock);
- list_del_init(&bt->running_list);
- raw_spin_unlock_irq(&running_trace_lock);
- relay_flush(bt->rchan);
- }
+ blk_trace_stop(bt);
put_probe_ref();
synchronize_rcu();
perf_sample_data_init(sd, 0, 0);
sd->raw = &raw;
+ sd->sample_flags |= PERF_SAMPLE_RAW;
err = __bpf_perf_event_output(regs, map, flags, sd);
perf_fetch_caller_regs(regs);
perf_sample_data_init(sd, 0, 0);
sd->raw = &raw;
+ sd->sample_flags |= PERF_SAMPLE_RAW;
ret = __bpf_perf_event_output(regs, map, flags, sd);
out:
return -E2BIG;
fp->rethook = rethook_alloc((void *)fp, fprobe_exit_handler);
+ if (!fp->rethook)
+ return -ENOMEM;
for (i = 0; i < size; i++) {
struct fprobe_rethook_node *node;
{
int ret;
- if (!fp || fp->ops.func != fprobe_handler)
+ if (!fp || (fp->ops.saved_func != fprobe_handler &&
+ fp->ops.saved_func != fprobe_kprobe_handler))
return -EINVAL;
/*
command |= FTRACE_UPDATE_TRACE_FUNC;
}
- if (!command || !ftrace_enabled) {
- /*
- * If these are dynamic or per_cpu ops, they still
- * need their data freed. Since, function tracing is
- * not currently active, we can just free them
- * without synchronizing all CPUs.
- */
- if (ops->flags & FTRACE_OPS_FL_DYNAMIC)
- goto free_ops;
-
- return 0;
- }
+ if (!command || !ftrace_enabled)
+ goto out;
/*
* If the ops uses a trampoline, then it needs to be
removed_ops = NULL;
ops->flags &= ~FTRACE_OPS_FL_REMOVING;
+out:
/*
* Dynamic ops may be freed, we must make sure that all
* callers are done before leaving this function.
if (IS_ENABLED(CONFIG_PREEMPTION))
synchronize_rcu_tasks();
- free_ops:
ftrace_trampoline_free(ops);
}
KPROBE_GEN_TEST_FUNC,
KPROBE_GEN_TEST_ARG0, KPROBE_GEN_TEST_ARG1);
if (ret)
- goto free;
+ goto out;
/* Use kprobe_event_add_fields to add the rest of the fields */
ret = kprobe_event_add_fields(&cmd, KPROBE_GEN_TEST_ARG2, KPROBE_GEN_TEST_ARG3);
if (ret)
- goto free;
+ goto out;
/*
* This actually creates the event.
*/
ret = kprobe_event_gen_cmd_end(&cmd);
if (ret)
- goto free;
+ goto out;
/*
* Now get the gen_kprobe_test event file. We need to prevent
goto delete;
}
out:
+ kfree(buf);
return ret;
delete:
/* We got an error after creating the event, delete it */
ret = kprobe_event_delete("gen_kprobe_test");
- free:
- kfree(buf);
-
goto out;
}
KPROBE_GEN_TEST_FUNC,
"$retval");
if (ret)
- goto free;
+ goto out;
/*
* This actually creates the event.
*/
ret = kretprobe_event_gen_cmd_end(&cmd);
if (ret)
- goto free;
+ goto out;
/*
* Now get the gen_kretprobe_test event file. We need to
goto delete;
}
out:
+ kfree(buf);
return ret;
delete:
/* We got an error after creating the event, delete it */
ret = kprobe_event_delete("gen_kretprobe_test");
- free:
- kfree(buf);
-
goto out;
}
struct ring_buffer_per_cpu *cpu_buffer;
struct rb_irq_work *rbwork;
+ if (!buffer)
+ return;
+
if (cpu == RING_BUFFER_ALL_CPUS) {
/* Wake up individual ones too. One level recursion */
rbwork = &buffer->irq_work;
} else {
+ if (WARN_ON_ONCE(!buffer->buffers))
+ return;
+ if (WARN_ON_ONCE(cpu >= nr_cpu_ids))
+ return;
+
cpu_buffer = buffer->buffers[cpu];
+ /* The CPU buffer may not have been initialized yet */
+ if (!cpu_buffer)
+ return;
rbwork = &cpu_buffer->irq_work;
}
static DEFINE_CTL_TABLE_POLL(hostname_poll);
static DEFINE_CTL_TABLE_POLL(domainname_poll);
+// Note: update 'enum uts_proc' to match any changes to this table
static struct ctl_table uts_kern_table[] = {
{
.procname = "arch",
default 1536 if (!64BIT && XTENSA)
default 1024 if !64BIT
default 2048 if 64BIT
+ default 0 if KMSAN
help
- Tell gcc to warn at build time for stack frames larger than this.
+ Tell the compiler to warn at build time for stack frames larger than this.
Setting this too low will cause a lot of warnings.
Setting it to 0 disables the warning.
frag_container = alloc_string_stream_fragment(stream->test,
len,
stream->gfp);
- if (!frag_container)
- return -ENOMEM;
+ if (IS_ERR(frag_container))
+ return PTR_ERR(frag_container);
len = vsnprintf(frag_container->fragment, len, fmt, args);
spin_lock(&stream->lock);
kunit_set_failure(test);
stream = alloc_string_stream(test, GFP_KERNEL);
- if (!stream) {
+ if (IS_ERR(stream)) {
WARN(true,
"Could not allocate stream to print failed assertion in %s:%d\n",
loc->file,
unsigned long max, min;
unsigned long prev_max, prev_min;
- last = next = mas->node;
- prev_min = min = mas->min;
+ next = mas->node;
+ min = mas->min;
max = mas->max;
do {
offset = 0;
range->max = U8_MAX;
break;
case NLA_U16:
+ case NLA_BE16:
case NLA_BINARY:
range->max = U16_MAX;
break;
case NLA_U32:
+ case NLA_BE32:
range->max = U32_MAX;
break;
case NLA_U64:
}
}
-static u64 nla_get_attr_bo(const struct nla_policy *pt,
- const struct nlattr *nla)
-{
- switch (pt->type) {
- case NLA_U16:
- if (pt->network_byte_order)
- return ntohs(nla_get_be16(nla));
-
- return nla_get_u16(nla);
- case NLA_U32:
- if (pt->network_byte_order)
- return ntohl(nla_get_be32(nla));
-
- return nla_get_u32(nla);
- case NLA_U64:
- if (pt->network_byte_order)
- return be64_to_cpu(nla_get_be64(nla));
-
- return nla_get_u64(nla);
- }
-
- WARN_ON_ONCE(1);
- return 0;
-}
-
static int nla_validate_range_unsigned(const struct nla_policy *pt,
const struct nlattr *nla,
struct netlink_ext_ack *extack,
value = nla_get_u8(nla);
break;
case NLA_U16:
+ value = nla_get_u16(nla);
+ break;
case NLA_U32:
+ value = nla_get_u32(nla);
+ break;
case NLA_U64:
- value = nla_get_attr_bo(pt, nla);
+ value = nla_get_u64(nla);
break;
case NLA_MSECS:
value = nla_get_u64(nla);
case NLA_BINARY:
value = nla_len(nla);
break;
+ case NLA_BE16:
+ value = ntohs(nla_get_be16(nla));
+ break;
+ case NLA_BE32:
+ value = ntohl(nla_get_be32(nla));
+ break;
default:
return -EINVAL;
}
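A hedged sketch of what the new types enable in a policy: range validation of
big-endian attributes without hand-rolled checks. The attribute names are
illustrative:

    static const struct nla_policy tunnel_policy[TUNNEL_ATTR_MAX + 1] = {
            [TUNNEL_ATTR_PORT]  = NLA_POLICY_MAX(NLA_BE16, 65535),
            [TUNNEL_ATTR_LABEL] = NLA_POLICY_MAX(NLA_BE32, 0xFFFFF),
    };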
case NLA_U64:
case NLA_MSECS:
case NLA_BINARY:
+ case NLA_BE16:
+ case NLA_BE32:
return nla_validate_range_unsigned(pt, nla, extack, validate);
case NLA_S8:
case NLA_S16:
#include <linux/types.h>
#include <linux/vmalloc.h>
+#define SKIP(cond, reason) do { \
+ if (cond) { \
+ kunit_skip(test, reason); \
+ return; \
+ } \
+} while (0)
+
+/*
+ * Clang 11 and earlier generate unwanted libcalls for signed output
+ * on unsigned input.
+ */
+#if defined(CONFIG_CC_IS_CLANG) && __clang_major__ <= 11
+# define SKIP_SIGN_MISMATCH(t) SKIP(t, "Clang 11 unwanted libcalls")
+#else
+# define SKIP_SIGN_MISMATCH(t) do { } while (0)
+#endif
+
+/*
+ * Clang 13 and earlier generate unwanted libcalls for 64-bit tests on
+ * 32-bit hosts.
+ */
+#if defined(CONFIG_CC_IS_CLANG) && __clang_major__ <= 13 && \
+ BITS_PER_LONG != 64
+# define SKIP_64_ON_32(t) SKIP(t, "Clang 13 unwanted libcalls")
+#else
+# define SKIP_64_ON_32(t) do { } while (0)
+#endif
+
#define DEFINE_TEST_ARRAY_TYPED(t1, t2, t) \
static const struct test_ ## t1 ## _ ## t2 ## __ ## t { \
t1 a; \
{-4U, 5U, 1U, -9U, -20U, true, false, true},
};
-#if BITS_PER_LONG == 64
DEFINE_TEST_ARRAY(u64) = {
{0, 0, 0, 0, 0, false, false, false},
{1, 1, 2, 0, 1, false, false, false},
false, true, false},
{-15ULL, 10ULL, -5ULL, -25ULL, -150ULL, false, false, true},
};
-#endif
DEFINE_TEST_ARRAY(s8) = {
{0, 0, 0, 0, 0, false, false, false},
{S32_MAX, S32_MAX, -2, 0, 1, true, false, true},
};
-#if BITS_PER_LONG == 64
DEFINE_TEST_ARRAY(s64) = {
{0, 0, 0, 0, 0, false, false, false},
{-128, -1, -129, -127, 128, false, false, false},
{0, -S64_MAX, -S64_MAX, S64_MAX, 0, false, false, false},
};
-#endif
#define check_one_op(t, fmt, op, sym, a, b, r, of) do { \
int _a_orig = a, _a_bump = a + 1; \
#define DEFINE_TEST_FUNC_TYPED(n, t, fmt) \
static void do_test_ ## n(struct kunit *test, const struct test_ ## n *p) \
-{ \
+{ \
check_one_op(t, fmt, add, "+", p->a, p->b, p->sum, p->s_of); \
check_one_op(t, fmt, add, "+", p->b, p->a, p->sum, p->s_of); \
check_one_op(t, fmt, sub, "-", p->a, p->b, p->diff, p->d_of); \
static void n ## _overflow_test(struct kunit *test) { \
unsigned i; \
\
+ SKIP_64_ON_32(__same_type(t, u64)); \
+ SKIP_64_ON_32(__same_type(t, s64)); \
+ SKIP_SIGN_MISMATCH(__same_type(n ## _tests[0].a, u32) && \
+ __same_type(n ## _tests[0].b, u32) && \
+ __same_type(n ## _tests[0].sum, int)); \
+ \
for (i = 0; i < ARRAY_SIZE(n ## _tests); ++i) \
do_test_ ## n(test, &n ## _tests[i]); \
kunit_info(test, "%zu %s arithmetic tests finished\n", \
DEFINE_TEST_FUNC(s16, "%d");
DEFINE_TEST_FUNC(u32, "%u");
DEFINE_TEST_FUNC(s32, "%d");
-#if BITS_PER_LONG == 64
DEFINE_TEST_FUNC(u64, "%llu");
DEFINE_TEST_FUNC(s64, "%lld");
-#endif
DEFINE_TEST_ARRAY_TYPED(u32, u32, u8) = {
{0, 0, 0, 0, 0, false, false, false},
KUNIT_CASE(s16_s16__s16_overflow_test),
KUNIT_CASE(u32_u32__u32_overflow_test),
KUNIT_CASE(s32_s32__s32_overflow_test),
-/* Clang 13 and earlier generate unwanted libcalls on 32-bit. */
-#if BITS_PER_LONG == 64
KUNIT_CASE(u64_u64__u64_overflow_test),
KUNIT_CASE(s64_s64__s64_overflow_test),
-#endif
- KUNIT_CASE(u32_u32__u8_overflow_test),
KUNIT_CASE(u32_u32__int_overflow_test),
+ KUNIT_CASE(u32_u32__u8_overflow_test),
KUNIT_CASE(u8_u8__int_overflow_test),
KUNIT_CASE(int_int__u8_overflow_test),
KUNIT_CASE(shift_sane_test),
pr_info("test %d random rhlist add/delete operations\n", entries);
for (j = 0; j < entries; j++) {
u32 i = prandom_u32_max(entries);
- u32 prand = get_random_u32();
+ u32 prand = prandom_u32_max(4);
cond_resched();
- if (prand == 0)
- prand = get_random_u32();
-
- if (prand & 1) {
- prand >>= 1;
- continue;
- }
-
err = rhltable_remove(&rhlt, &rhl_test_objects[i].list_node, test_rht_params);
if (test_bit(i, obj_in_table)) {
clear_bit(i, obj_in_table);
}
if (prand & 1) {
- prand >>= 1;
- continue;
- }
-
- err = rhltable_insert(&rhlt, &rhl_test_objects[i].list_node, test_rht_params);
- if (err == 0) {
- if (WARN(test_and_set_bit(i, obj_in_table), "succeeded to insert same object %d", i))
- continue;
- } else {
- if (WARN(!test_bit(i, obj_in_table), "failed to insert object %d", i))
- continue;
- }
-
- if (prand & 1) {
- prand >>= 1;
- continue;
+ err = rhltable_insert(&rhlt, &rhl_test_objects[i].list_node, test_rht_params);
+ if (err == 0) {
+ if (WARN(test_and_set_bit(i, obj_in_table), "succeeded to insert same object %d", i))
+ continue;
+ } else {
+ if (WARN(!test_bit(i, obj_in_table), "failed to insert object %d", i))
+ continue;
+ }
}
- i = prandom_u32_max(entries);
- if (test_bit(i, obj_in_table)) {
- err = rhltable_remove(&rhlt, &rhl_test_objects[i].list_node, test_rht_params);
- WARN(err, "cannot remove element at slot %d", i);
- if (err == 0)
- clear_bit(i, obj_in_table);
- } else {
- err = rhltable_insert(&rhlt, &rhl_test_objects[i].list_node, test_rht_params);
- WARN(err, "failed to insert object %d", i);
- if (err == 0)
- set_bit(i, obj_in_table);
+ if (prand & 2) {
+ i = prandom_u32_max(entries);
+ if (test_bit(i, obj_in_table)) {
+ err = rhltable_remove(&rhlt, &rhl_test_objects[i].list_node, test_rht_params);
+ WARN(err, "cannot remove element at slot %d", i);
+ if (err == 0)
+ clear_bit(i, obj_in_table);
+ } else {
+ err = rhltable_insert(&rhlt, &rhl_test_objects[i].list_node, test_rht_params);
+ WARN(err, "failed to insert object %d", i);
+ if (err == 0)
+ set_bit(i, obj_in_table);
+ }
}
}
page_tail);
page_tail->mapping = head->mapping;
page_tail->index = head->index + tail;
- page_tail->private = 0;
+
+ /*
+ * page->private should not be set in tail pages with the exception
+ * of swap cache pages that store the swp_entry_t in tail pages.
+ * Fix up and warn once if private is unexpectedly set.
+ */
+ if (!folio_test_swapcache(page_folio(head))) {
+ VM_WARN_ON_ONCE_PAGE(page_tail->private != 0, page_tail);
+ page_tail->private = 0;
+ }
/* Page flags must be visible before we make the page non-compound. */
smp_wmb();
VM_BUG_ON_VMA(!is_vm_hugetlb_page(vma), vma);
/*
* Clear vm_private_data
+ * - For shared mappings this is a per-vma semaphore that may be
+ * allocated in a subsequent call to hugetlb_vm_op_open.
+ * Before clearing, make sure the pointer is not associated with
+ * this vma, as clearing this vma's own lock would leak the
+ * structure. This is the case when called via
+ * clear_vma_resv_huge_pages() after hugetlb_vm_op_open has
+ * already allocated a new structure.
* - For MAP_PRIVATE mappings, this is the reserve map which does
* not apply to children. Faults generated by the children are
* not guaranteed to succeed, even if read-only.
- * - For shared mappings this is a per-vma semaphore that may be
- * allocated in a subsequent call to hugetlb_vm_op_open.
*/
- vma->vm_private_data = (void *)0;
- if (!(vma->vm_flags & VM_MAYSHARE))
- return;
+ if (vma->vm_flags & VM_MAYSHARE) {
+ struct hugetlb_vma_lock *vma_lock = vma->vm_private_data;
+
+ if (vma_lock && vma_lock->vma != vma)
+ vma->vm_private_data = NULL;
+ } else
+ vma->vm_private_data = NULL;
}
/*
page = alloc_buddy_huge_page_with_mpol(h, vma, addr);
if (!page)
goto out_uncharge_cgroup;
+ spin_lock_irq(&hugetlb_lock);
if (!avoid_reserve && vma_has_reserves(vma, gbl_chg)) {
SetHPageRestoreReserve(page);
h->resv_huge_pages--;
}
- spin_lock_irq(&hugetlb_lock);
list_add(&page->lru, &h->hugepage_activelist);
set_page_refcounted(page);
/* Fall through */
struct resv_map *resv = vma_resv_map(vma);
/*
+ * HPAGE_RESV_OWNER indicates a private mapping.
* This new VMA should share its siblings reservation map if present.
* The VMA will only ever have a valid reservation map pointer where
* it is being copied for another still existing VMA. As that VMA
/*
* vma_lock structure for sharable mappings is vma specific.
- * Clear old pointer (if copied via vm_area_dup) and create new.
+ * Clear old pointer (if copied via vm_area_dup) and allocate
+ * new structure. Before clearing, make sure the vma_lock does
+ * not belong to this vma.
*/
if (vma->vm_flags & VM_MAYSHARE) {
- vma->vm_private_data = NULL;
- hugetlb_vma_lock_alloc(vma);
+ struct hugetlb_vma_lock *vma_lock = vma->vm_private_data;
+
+ if (vma_lock) {
+ if (vma_lock->vma != vma) {
+ vma->vm_private_data = NULL;
+ hugetlb_vma_lock_alloc(vma);
+ } else
+ pr_warn("HugeTLB: vma_lock already exists in %s.\n", __func__);
+ } else
+ hugetlb_vma_lock_alloc(vma);
}
}
WARN_ON(!list_empty(&gray_list));
}
+/*
+ * Conditionally call cond_resched() in an object iteration loop while making sure
+ * that the given object won't go away without RCU read lock by performing a
+ * get_object() if !pinned.
+ *
+ * Return: false if it can't do a cond_resched() due to get_object() failure,
+ *         true otherwise
+ */
+static bool kmemleak_cond_resched(struct kmemleak_object *object, bool pinned)
+{
+ if (!pinned && !get_object(object))
+ return false;
+
+ rcu_read_unlock();
+ cond_resched();
+ rcu_read_lock();
+ if (!pinned)
+ put_object(object);
+ return true;
+}
+
/*
* Scan data sections and all the referenced memory blocks allocated via the
* kernel's standard allocators. This function must be called with the
struct zone *zone;
int __maybe_unused i;
int new_leaks = 0;
- int loop1_cnt = 0;
+ int loop_cnt = 0;
jiffies_last_scan = jiffies;
list_for_each_entry_rcu(object, &object_list, object_list) {
bool obj_pinned = false;
- loop1_cnt++;
raw_spin_lock_irq(&object->lock);
#ifdef DEBUG
/*
raw_spin_unlock_irq(&object->lock);
/*
- * Do a cond_resched() to avoid soft lockup every 64k objects.
- * Make sure a reference has been taken so that the object
- * won't go away without RCU read lock.
+ * Do a cond_resched() every 64k objects to avoid soft lockup.
*/
- if (!(loop1_cnt & 0xffff)) {
- if (!obj_pinned && !get_object(object)) {
- /* Try the next object instead */
- loop1_cnt--;
- continue;
- }
-
- rcu_read_unlock();
- cond_resched();
- rcu_read_lock();
-
- if (!obj_pinned)
- put_object(object);
- }
+ if (!(++loop_cnt & 0xffff) &&
+ !kmemleak_cond_resched(object, obj_pinned))
+ loop_cnt--; /* Try again on next object */
}
rcu_read_unlock();
* scan and color them gray until the next scan.
*/
rcu_read_lock();
+ loop_cnt = 0;
list_for_each_entry_rcu(object, &object_list, object_list) {
+ /*
+ * Do a cond_resched() every 64k objects to avoid soft lockup.
+ */
+ if (!(++loop_cnt & 0xffff) &&
+ !kmemleak_cond_resched(object, false))
+ loop_cnt--; /* Try again on next object */
+
/*
* This is racy but we can save the overhead of lock/unlock
* calls. The missed objects, if any, should be caught in
* Scanning result reporting.
*/
rcu_read_lock();
+ loop_cnt = 0;
list_for_each_entry_rcu(object, &object_list, object_list) {
+ /*
+ * Do a cond_resched() every 64k objects to avoid soft lockup.
+ */
+ if (!(++loop_cnt & 0xffff) &&
+ !kmemleak_cond_resched(object, false))
+ loop_cnt--; /* Try again on next object */
+
/*
* This is racy but we can save the overhead of lock/unlock
* calls. The missed objects, if any, should be caught in
#include "kmsan.h"
#include <linux/gfp.h>
+#include <linux/kmsan_string.h>
#include <linux/mm.h>
#include <linux/uaccess.h>
__memcpy(origin_ptr_for(dst), origin_ptr_for(src), PAGE_SIZE);
kmsan_leave_runtime();
}
+EXPORT_SYMBOL(kmsan_copy_page_meta);
void kmsan_alloc_page(struct page *page, unsigned int order, gfp_t flags)
{
if (start & ~huge_page_mask(hstate_vma(vma)))
return false;
- *end = ALIGN(*end, huge_page_size(hstate_vma(vma)));
+ /*
+ * Madvise callers expect the length to be rounded up to PAGE_SIZE
+ * boundaries, and may be unaware that this VMA uses huge pages.
+ * Avoid unexpected data loss by rounding down the number of
+ * huge pages freed.
+ */
+ *end = ALIGN_DOWN(*end, huge_page_size(hstate_vma(vma)));
+
return true;
}
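A worked example of the rounding change, as a self-contained sketch (2 MiB huge page size assumed; simplified power-of-two ALIGN_DOWN):

#include <stdio.h>

#define ALIGN_DOWN(x, a)	((x) & ~((unsigned long)(a) - 1))

int main(void)
{
	unsigned long huge = 2UL << 20;			/* 2 MiB huge page */
	unsigned long end = (5UL << 20) | 0x1000;	/* 5 MiB + 4 KiB */

	/* 0x501000 rounds down to 0x400000: the partially covered
	 * huge page at the tail is kept, never freed by surprise. */
	printf("%#lx -> %#lx\n", end, ALIGN_DOWN(end, huge));
	return 0;
}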
if (!madvise_dontneed_free_valid_vma(vma, start, &end, behavior))
return -EINVAL;
+ if (start == end)
+ return 0;
+
if (!userfaultfd_remove(vma, start, end)) {
*prev = NULL; /* mmap_lock has been dropped, prev is stale */
kfree(tier);
}
-static ssize_t nodes_show(struct device *dev,
- struct device_attribute *attr, char *buf)
+static ssize_t nodelist_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
{
int ret;
nodemask_t nmask;
mutex_unlock(&memory_tier_lock);
return ret;
}
-static DEVICE_ATTR_RO(nodes);
+static DEVICE_ATTR_RO(nodelist);
static struct attribute *memtier_dev_attrs[] = {
- &dev_attr_nodes.attr,
+ &dev_attr_nodelist.attr,
NULL
};
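A hypothetical userspace consumer of the renamed attribute (the path and tier number are illustrative, not guaranteed to exist on a given system):

#include <stdio.h>

int main(void)
{
	char buf[256];
	FILE *f = fopen("/sys/devices/virtual/memory_tiering/memory_tier4/nodelist", "r");

	if (f && fgets(buf, sizeof(buf), f))
		printf("tier nodes: %s", buf);	/* e.g. "0,2\n" */
	if (f)
		fclose(f);
	return 0;
}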
static int mbind_range(struct mm_struct *mm, unsigned long start,
unsigned long end, struct mempolicy *new_pol)
{
- MA_STATE(mas, &mm->mm_mt, start - 1, start - 1);
+ MA_STATE(mas, &mm->mm_mt, start, start);
struct vm_area_struct *prev;
struct vm_area_struct *vma;
int err = 0;
pgoff_t pgoff;
- prev = mas_find_rev(&mas, 0);
- if (prev && (start < prev->vm_end))
- vma = prev;
- else
- vma = mas_next(&mas, end - 1);
+ prev = mas_prev(&mas, 0);
+ if (unlikely(!prev))
+ mas_set(&mas, start);
+
+ vma = mas_find(&mas, end - 1);
+ if (WARN_ON(!vma))
+ return 0;
+
+ if (start > vma->vm_start)
+ prev = vma;
for (; vma; vma = mas_next(&mas, end - 1)) {
unsigned long vmstart = max(start, vma->vm_start);
*/
list_splice(&ret_pages, from);
+ /*
+ * Return 0 in case all subpages of fail-to-migrate THPs are
+ * migrated successfully.
+ */
+ if (list_empty(from))
+ rc = 0;
+
count_vm_events(PGMIGRATE_SUCCESS, nr_succeeded);
count_vm_events(PGMIGRATE_FAIL, nr_failed_pages);
count_vm_events(THP_MIGRATION_SUCCESS, nr_thp_succeeded);
struct vm_area_struct *expand)
{
struct mm_struct *mm = vma->vm_mm;
- struct vm_area_struct *next_next, *next = find_vma(mm, vma->vm_end);
+ struct vm_area_struct *next_next = NULL; /* silence uninit var warning */
+ struct vm_area_struct *next = find_vma(mm, vma->vm_end);
struct vm_area_struct *orig_vma = vma;
struct address_space *mapping = NULL;
struct rb_root_cached *root = NULL;
if (error)
goto unmap_and_free_vma;
- /* Can addr have changed??
- *
- * Answer: Yes, several device drivers can do it in their
- * f_op->mmap method. -DaveM
+ /*
+ * Expansion is handled above, merging is handled below.
+ * Drivers should not alter the address of the VMA.
*/
- WARN_ON_ONCE(addr != vma->vm_start);
-
- addr = vma->vm_start;
+ if (WARN_ON(addr != vma->vm_start)) {
+ error = -EINVAL;
+ goto close_and_free_vma;
+ }
mas_reset(&mas);
/*
vm_area_free(vma);
vma = merge;
/* Update vm_flags to pick up the change. */
- addr = vma->vm_start;
vm_flags = vma->vm_flags;
goto unmap_writable;
}
if (mas_preallocate(&mas, vma, GFP_KERNEL)) {
error = -ENOMEM;
if (file)
- goto unmap_and_free_vma;
+ goto close_and_free_vma;
else
goto free_vma;
}
if (next->vm_flags != vma->vm_flags)
goto out;
+ if (start + size <= next->vm_end)
+ break;
+
prev = next;
}
p->mapping = TAIL_MAPPING;
set_compound_head(p, head);
+ set_page_private(p, 0);
}
void prep_compound_page(struct page *page, unsigned int order)
size_t size)
{
if (addr) {
- unsigned long alloc_end = addr + (PAGE_SIZE << order);
- unsigned long used = addr + PAGE_ALIGN(size);
-
- split_page(virt_to_page((void *)addr), order);
- while (used < alloc_end) {
- free_page(used);
- used += PAGE_SIZE;
- }
+ unsigned long nr = DIV_ROUND_UP(size, PAGE_SIZE);
+ struct page *page = virt_to_page((void *)addr);
+ struct page *last = page + nr;
+
+ split_page_owner(page, 1 << order);
+ split_page_memcg(page, 1 << order);
+ while (page < --last)
+ set_page_refcounted(last);
+
+ last = page + (1UL << order);
+ for (page += nr; page < last; page++)
+ __free_pages_ok(page, 0, FPI_TO_TAIL);
}
return (void *)addr;
}
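The arithmetic behind the new tail-freeing loop, as a standalone sketch (4 KiB page size assumed, macro names mirrored locally):

#include <stdio.h>

#define PAGE_SIZE	4096UL
#define DIV_ROUND_UP(n, d)	(((n) + (d) - 1) / (d))

int main(void)
{
	unsigned long size = 20000;	/* bytes actually requested */
	unsigned int order = 3;		/* 8 pages were allocated */
	unsigned long nr = DIV_ROUND_UP(size, PAGE_SIZE);	/* 5 used */

	/* pages [nr, 1 << order) are returned to the allocator tail */
	printf("keep %lu, free %lu\n", nr, (1UL << order) - nr);
	return 0;
}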
zone->zone_start_pfn);
if (skip_isolation) {
- int mt = get_pageblock_migratetype(pfn_to_page(isolate_pageblock));
+ int mt __maybe_unused = get_pageblock_migratetype(pfn_to_page(isolate_pageblock));
VM_BUG_ON(!is_migrate_isolate(mt));
} else {
if (!zeropage) { /* COPY */
page_kaddr = kmap_local_folio(folio, 0);
+ /*
+ * The read mmap_lock is held here. Even though the
+ * mmap_lock can be taken for read recursively, a deadlock
+ * is still possible if a writer is queued. For example:
+ *
+ * process A thread 1 takes read lock on own mmap_lock
+ * process A thread 2 calls mmap, blocks taking write lock
+ * process B thread 1 takes page fault, read lock on own mmap lock
+ * process B thread 2 calls mmap, blocks taking write lock
+ * process A thread 1 blocks taking read lock on process B
+ * process B thread 1 blocks taking read lock on process A
+ *
+ * Disable page faults to prevent potential deadlock
+ * and retry the copy outside the mmap_lock.
+ */
+ pagefault_disable();
ret = copy_from_user(page_kaddr,
(const void __user *)src_addr,
PAGE_SIZE);
+ pagefault_enable();
kunmap_local(page_kaddr);
/* fallback to copy_from_user outside mmap_lock */
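A userspace analogy of the ABBA deadlock described in the comment, sketched with pthreads (writer-preferring rwlock behaviour assumed; a symmetric thread would take lock_b then lock_a):

#include <pthread.h>

static pthread_rwlock_t lock_a = PTHREAD_RWLOCK_INITIALIZER;
static pthread_rwlock_t lock_b = PTHREAD_RWLOCK_INITIALIZER;

/* With writers queued on both locks, this reader and its mirror
 * image (B then A) both block behind the queued writers and none
 * of the four threads can make progress: deadlock. */
static void *reader_ab(void *arg)
{
	pthread_rwlock_rdlock(&lock_a);
	pthread_rwlock_rdlock(&lock_b);	/* may block behind a writer */
	pthread_rwlock_unlock(&lock_b);
	pthread_rwlock_unlock(&lock_a);
	return arg;
}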
if (!page)
goto out;
- page_kaddr = kmap_atomic(page);
+ page_kaddr = kmap_local_page(page);
+ /*
+ * The read mmap_lock is held here. Even though the
+ * mmap_lock can be taken for read recursively, a deadlock
+ * is still possible if a writer is queued. For example:
+ *
+ * process A thread 1 takes read lock on own mmap_lock
+ * process A thread 2 calls mmap, blocks taking write lock
+ * process B thread 1 takes page fault, read lock on own mmap lock
+ * process B thread 2 calls mmap, blocks taking write lock
+ * process A thread 1 blocks taking read lock on process B
+ * process B thread 1 blocks taking read lock on process A
+ *
+ * Disable page faults to prevent potential deadlock
+ * and retry the copy outside the mmap_lock.
+ */
+ pagefault_disable();
ret = copy_from_user(page_kaddr,
(const void __user *) src_addr,
PAGE_SIZE);
- kunmap_atomic(page_kaddr);
+ pagefault_enable();
+ kunmap_local(page_kaddr);
/* fallback to copy_from_user outside mmap_lock */
if (unlikely(ret)) {
mmap_read_unlock(dst_mm);
BUG_ON(!page);
- page_kaddr = kmap(page);
+ page_kaddr = kmap_local_page(page);
err = copy_from_user(page_kaddr,
(const void __user *) src_addr,
PAGE_SIZE);
- kunmap(page);
+ kunmap_local(page_kaddr);
if (unlikely(err)) {
err = -EFAULT;
goto out;
int fg;
struct size_class *class = pool->size_class[i];
+ if (!class)
+ continue;
+
if (class->index != i)
continue;
if (!page)
return -ENOMEM;
- for (p = page, len = 0; len < nbytes; p++, len++) {
+ for (p = page, len = 0; len < nbytes; p++) {
if (get_user(*p, buff++)) {
free_page((unsigned long)page);
return -EFAULT;
}
+ len++;
if (*p == '\0' || *p == '\n')
break;
}
hdev->acl_cnt += conn->sent;
} else {
struct hci_conn *acl = conn->link;
+
if (acl) {
acl->link = NULL;
hci_conn_drop(acl);
}
+
+ /* Unacked ISO frames */
+ if (conn->type == ISO_LINK) {
+ if (hdev->iso_pkts)
+ hdev->iso_cnt += conn->sent;
+ else if (hdev->le_pkts)
+ hdev->le_cnt += conn->sent;
+ else
+ hdev->acl_cnt += conn->sent;
+ }
}
if (conn->amp_mgr)
if (!cis)
return ERR_PTR(-ENOMEM);
cis->cleanup = cis_cleanup;
+ cis->dst_type = dst_type;
}
if (cis->state == BT_CONNECTED)
struct hci_conn *le;
struct hci_conn *cis;
- /* Convert from ISO socket address type to HCI address type */
- if (dst_type == BDADDR_LE_PUBLIC)
- dst_type = ADDR_LE_DEV_PUBLIC;
- else
- dst_type = ADDR_LE_DEV_RANDOM;
-
if (hci_dev_test_flag(hdev, HCI_ADVERTISING))
le = hci_connect_le(hdev, dst, dst_type, false,
BT_SECURITY_LOW,
return err;
}
+static inline u8 le_addr_type(u8 bdaddr_type)
+{
+ if (bdaddr_type == BDADDR_LE_PUBLIC)
+ return ADDR_LE_DEV_PUBLIC;
+ else
+ return ADDR_LE_DEV_RANDOM;
+}
+
static int iso_connect_bis(struct sock *sk)
{
struct iso_conn *conn;
/* Just bind if DEFER_SETUP has been set */
if (test_bit(BT_SK_DEFER_SETUP, &bt_sk(sk)->flags)) {
hcon = hci_bind_cis(hdev, &iso_pi(sk)->dst,
- iso_pi(sk)->dst_type, &iso_pi(sk)->qos);
+ le_addr_type(iso_pi(sk)->dst_type),
+ &iso_pi(sk)->qos);
if (IS_ERR(hcon)) {
err = PTR_ERR(hcon);
goto done;
}
} else {
hcon = hci_connect_cis(hdev, &iso_pi(sk)->dst,
- iso_pi(sk)->dst_type, &iso_pi(sk)->qos);
+ le_addr_type(iso_pi(sk)->dst_type),
+ &iso_pi(sk)->qos);
if (IS_ERR(hcon)) {
err = PTR_ERR(hcon);
goto done;
if (link_type == LE_LINK && c->src_type == BDADDR_BREDR)
continue;
- if (c->psm == psm) {
+ if (c->chan_type != L2CAP_CHAN_FIXED && c->psm == psm) {
int src_match, dst_match;
int src_any, dst_any;
l2cap_add_conf_opt(&ptr, L2CAP_CONF_RFC,
sizeof(rfc), (unsigned long) &rfc, endptr - ptr);
- if (test_bit(FLAG_EFS_ENABLE, &chan->flags)) {
+ if (remote_efs &&
+ test_bit(FLAG_EFS_ENABLE, &chan->flags)) {
chan->remote_id = efs.id;
chan->remote_stype = efs.stype;
chan->remote_msdu = le16_to_cpu(efs.msdu);
BT_DBG("psm 0x%2.2x scid 0x%4.4x mtu %u mps %u", __le16_to_cpu(psm),
scid, mtu, mps);
+ /* BLUETOOTH CORE SPECIFICATION Version 5.3 | Vol 3, Part A
+ * page 1059:
+ *
+ * Valid range: 0x0001-0x00ff
+ *
+ * Table 4.15: L2CAP_LE_CREDIT_BASED_CONNECTION_REQ SPSM ranges
+ */
+ if (!psm || __le16_to_cpu(psm) > L2CAP_PSM_LE_DYN_END) {
+ result = L2CAP_CR_LE_BAD_PSM;
+ chan = NULL;
+ goto response;
+ }
+
/* Check if we have socket listening on psm */
pchan = l2cap_global_chan_by_psm(BT_LISTEN, psm, &conn->hcon->src,
&conn->hcon->dst, LE_LINK);
psm = req->psm;
+ /* BLUETOOTH CORE SPECIFICATION Version 5.3 | Vol 3, Part A
+ * page 1059:
+ *
+ * Valid range: 0x0001-0x00ff
+ *
+ * Table 4.15: L2CAP_LE_CREDIT_BASED_CONNECTION_REQ SPSM ranges
+ */
+ if (!psm || __le16_to_cpu(psm) > L2CAP_PSM_LE_DYN_END) {
+ result = L2CAP_CR_LE_BAD_PSM;
+ goto response;
+ }
+
BT_DBG("psm 0x%2.2x mtu %u mps %u", __le16_to_cpu(psm), mtu, mps);
memset(&pdu, 0, sizeof(pdu));
struct l2cap_ctrl *control,
struct sk_buff *skb, u8 event)
{
+ struct l2cap_ctrl local_control;
int err = 0;
bool skb_in_use = false;
chan->buffer_seq = chan->expected_tx_seq;
skb_in_use = true;
+ /* l2cap_reassemble_sdu() may free the skb and thereby
+ * invalidate control, so make a copy in advance and use
+ * the copy once l2cap_reassemble_sdu() returns, avoiding
+ * races such as the following:
+ *
+ * The current thread calls:
+ * l2cap_reassemble_sdu
+ * chan->ops->recv == l2cap_sock_recv_cb
+ * __sock_queue_rcv_skb
+ * Another thread calls:
+ * bt_sock_recvmsg
+ * skb_recv_datagram
+ * skb_free_datagram
+ * Then the current thread tries to access control, but
+ * it was freed by skb_free_datagram.
+ */
+ local_control = *control;
err = l2cap_reassemble_sdu(chan, skb, control);
if (err)
break;
- if (control->final) {
+ if (local_control.final) {
if (!test_and_clear_bit(CONN_REJ_ACT,
&chan->conn_state)) {
- control->final = 0;
- l2cap_retransmit_all(chan, control);
+ local_control.final = 0;
+ l2cap_retransmit_all(chan, &local_control);
l2cap_ertm_send(chan);
}
}
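The copy-before-callback pattern applied in both hunks, reduced to a hedged standalone sketch (all names hypothetical):

struct ctrl { int final; };

/* stands in for l2cap_reassemble_sdu(): may free *c's container */
static void consume(struct ctrl *c) { (void)c; }

static void rx(struct ctrl *control)
{
	struct ctrl local = *control;	/* snapshot before consume() */

	consume(control);		/* control may now be dangling */
	if (local.final)		/* touch only the snapshot */
		local.final = 0;	/* ... retransmit using &local ... */
}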
static int l2cap_stream_rx(struct l2cap_chan *chan, struct l2cap_ctrl *control,
struct sk_buff *skb)
{
+ /* l2cap_reassemble_sdu() may free the skb and thereby invalidate
+ * control, so store the txseq field in advance and use the stored
+ * value once l2cap_reassemble_sdu() returns, avoiding races such
+ * as the following:
+ *
+ * The current thread calls:
+ * l2cap_reassemble_sdu
+ * chan->ops->recv == l2cap_sock_recv_cb
+ * __sock_queue_rcv_skb
+ * Another thread calls:
+ * bt_sock_recvmsg
+ * skb_recv_datagram
+ * skb_free_datagram
+ * Then the current thread tries to access control, but it was freed by
+ * skb_free_datagram.
+ */
+ u16 txseq = control->txseq;
+
BT_DBG("chan %p, control %p, skb %p, state %d", chan, control, skb,
chan->rx_state);
- if (l2cap_classify_txseq(chan, control->txseq) ==
- L2CAP_TXSEQ_EXPECTED) {
+ if (l2cap_classify_txseq(chan, txseq) == L2CAP_TXSEQ_EXPECTED) {
l2cap_pass_to_tx(chan, control);
BT_DBG("buffer_seq %u->%u", chan->buffer_seq,
}
}
- chan->last_acked_seq = control->txseq;
- chan->expected_tx_seq = __next_seq(chan, control->txseq);
+ chan->last_acked_seq = txseq;
+ chan->expected_tx_seq = __next_seq(chan, txseq);
return 0;
}
return;
}
+ l2cap_chan_hold(chan);
l2cap_chan_lock(chan);
} else {
BT_DBG("unknown cid 0x%4.4x", cid);
* expected length.
*/
if (skb->len < L2CAP_LEN_SIZE) {
- if (l2cap_recv_frag(conn, skb, conn->mtu) < 0)
- goto drop;
- return;
+ l2cap_recv_frag(conn, skb, conn->mtu);
+ break;
}
len = get_unaligned_le16(skb->data) + L2CAP_HDR_SIZE;
/* Header still could not be read just continue */
if (conn->rx_skb->len < L2CAP_LEN_SIZE)
- return;
+ break;
}
if (skb->len > conn->rx_len) {
if (data[IFLA_BR_FDB_FLUSH]) {
struct net_bridge_fdb_flush_desc desc = {
- .flags_mask = BR_FDB_STATIC
+ .flags_mask = BIT(BR_FDB_STATIC)
};
br_fdb_flush(br, &desc);
struct netlink_ext_ack *extack)
{
struct net_bridge_fdb_flush_desc desc = {
- .flags_mask = BR_FDB_STATIC
+ .flags_mask = BIT(BR_FDB_STATIC)
};
br_fdb_flush(br, &desc);
__skb_unlink(do_skb, &session->skb_queue);
/* drop ref taken in j1939_session_skb_queue() */
skb_unref(do_skb);
+ spin_unlock_irqrestore(&session->skb_queue.lock, flags);
kfree_skb(do_skb);
+ } else {
+ spin_unlock_irqrestore(&session->skb_queue.lock, flags);
}
- spin_unlock_irqrestore(&session->skb_queue.lock, flags);
}
void j1939_session_skb_queue(struct j1939_session *session,
case TC_ACT_SHOT:
mini_qdisc_qstats_cpu_drop(miniq);
kfree_skb_reason(skb, SKB_DROP_REASON_TC_INGRESS);
+ *ret = NET_RX_DROP;
return NULL;
case TC_ACT_STOLEN:
case TC_ACT_QUEUED:
case TC_ACT_TRAP:
consume_skb(skb);
+ *ret = NET_RX_SUCCESS;
return NULL;
case TC_ACT_REDIRECT:
/* skb_mac_header check was done by cls/act_bpf, so
*another = true;
break;
}
+ *ret = NET_RX_SUCCESS;
return NULL;
case TC_ACT_CONSUMED:
+ *ret = NET_RX_SUCCESS;
return NULL;
default:
break;
write_lock_bh(&tbl->lock);
neigh_flush_dev(tbl, dev, skip_perm);
pneigh_ifdown_and_unlock(tbl, dev);
- pneigh_queue_purge(&tbl->proxy_queue, dev_net(dev));
+ pneigh_queue_purge(&tbl->proxy_queue, dev ? dev_net(dev) : NULL);
if (skb_queue_empty_lockless(&tbl->proxy_queue))
del_timer_sync(&tbl->proxy_timer);
return 0;
static int ops_init(const struct pernet_operations *ops, struct net *net)
{
+ struct net_generic *ng;
int err = -ENOMEM;
void *data = NULL;
if (!err)
return 0;
+ if (ops->id && ops->size) {
cleanup:
+ ng = rcu_dereference_protected(net->gen,
+ lockdep_is_held(&pernet_ops_rwsem));
+ ng->ptr[*ops->id] = NULL;
+ }
+
kfree(data);
out:
} else if (i < MAX_SKB_FRAGS) {
skb_zcopy_downgrade_managed(skb);
get_page(page);
- skb_fill_page_desc(skb, i, page, offset, size);
+ skb_fill_page_desc_noacc(skb, i, page, offset, size);
} else {
return -EMSGSIZE;
}
}
EXPORT_SYMBOL_GPL(sk_msg_is_readable);
-static struct sk_msg *alloc_sk_msg(void)
+static struct sk_msg *alloc_sk_msg(gfp_t gfp)
{
struct sk_msg *msg;
- msg = kzalloc(sizeof(*msg), __GFP_NOWARN | GFP_KERNEL);
+ msg = kzalloc(sizeof(*msg), gfp | __GFP_NOWARN);
if (unlikely(!msg))
return NULL;
sg_init_marker(msg->sg.data, NR_MSG_FRAG_IDS);
if (!sk_rmem_schedule(sk, skb, skb->truesize))
return NULL;
- return alloc_sk_msg();
+ return alloc_sk_msg(GFP_KERNEL);
}
static int sk_psock_skb_ingress_enqueue(struct sk_buff *skb,
static int sk_psock_skb_ingress_self(struct sk_psock *psock, struct sk_buff *skb,
u32 off, u32 len)
{
- struct sk_msg *msg = alloc_sk_msg();
+ struct sk_msg *msg = alloc_sk_msg(GFP_ATOMIC);
struct sock *sk = psock->sk;
int err;
static int reuseport_resurrect(struct sock *sk, struct sock_reuseport *old_reuse,
struct sock_reuseport *reuse, bool bind_inany);
+void reuseport_has_conns_set(struct sock *sk)
+{
+ struct sock_reuseport *reuse;
+
+ if (!rcu_access_pointer(sk->sk_reuseport_cb))
+ return;
+
+ spin_lock_bh(&reuseport_lock);
+ reuse = rcu_dereference_protected(sk->sk_reuseport_cb,
+ lockdep_is_held(&reuseport_lock));
+ if (likely(reuse))
+ reuse->has_conns = 1;
+ spin_unlock_bh(&reuseport_lock);
+}
+EXPORT_SYMBOL(reuseport_has_conns_set);
+
static int reuseport_sock_index(struct sock *sk,
const struct sock_reuseport *reuse,
bool closed)
static int dsa_port_parse_cpu(struct dsa_port *dp, struct net_device *master,
const char *user_protocol)
{
+ const struct dsa_device_ops *tag_ops = NULL;
struct dsa_switch *ds = dp->ds;
struct dsa_switch_tree *dst = ds->dst;
- const struct dsa_device_ops *tag_ops;
enum dsa_tag_protocol default_proto;
/* Find out which protocol the switch would prefer. */
}
tag_ops = dsa_find_tagger_by_name(user_protocol);
- } else {
- tag_ops = dsa_tag_driver_get(default_proto);
+ if (IS_ERR(tag_ops)) {
+ dev_warn(ds->dev,
+ "Failed to find a tagging driver for protocol %s, using default\n",
+ user_protocol);
+ tag_ops = NULL;
+ }
}
+ if (!tag_ops)
+ tag_ops = dsa_tag_driver_get(default_proto);
+
if (IS_ERR(tag_ops)) {
if (PTR_ERR(tag_ops) == -ENOPROTOOPT)
return -EPROBE_DEFER;
case NETDEV_CHANGELOWERSTATE: {
struct netdev_notifier_changelowerstate_info *info = ptr;
struct dsa_port *dp;
- int err;
+ int err = 0;
if (dsa_slave_dev_check(dev)) {
dp = dsa_slave_to_port(dev);
if (ret)
goto err_free;
- ret = get_module_eeprom_by_page(dev, &page_data, info->extack);
+ ret = get_module_eeprom_by_page(dev, &page_data, info ? info->extack : NULL);
if (ret < 0)
goto err_ops;
if (ret < 0)
return ret;
- ret = pse_get_pse_attributes(dev, info->extack, data);
+ ret = pse_get_pse_attributes(dev, info ? info->extack : NULL, data);
ethnl_ops_complete(dev);
struct hsr_port *port)
{
if (!frame->skb_std) {
- if (frame->skb_hsr) {
+ if (frame->skb_hsr)
frame->skb_std =
create_stripped_skb_hsr(frame->skb_hsr, frame);
- } else {
- /* Unexpected */
- WARN_ONCE(1, "%s:%d: Unexpected frame received (port_src %s)\n",
- __FILE__, __LINE__, port->dev->name);
+ else
+ netdev_warn_once(port->dev,
+ "Unexpected frame received in hsr_get_untagged_frame()\n");
+
+ if (!frame->skb_std)
return NULL;
- }
}
return skb_clone(frame->skb_std, GFP_ATOMIC);
if (err < 0)
goto out;
- if (addr->family != AF_IEEE802154)
+ if (addr->family != AF_IEEE802154) {
+ err = -EINVAL;
goto out;
+ }
ieee802154_addr_from_sa(&haddr, &addr->addr);
dev = ieee802154_get_dev(sock_net(sk), &haddr);
(TCPF_ESTABLISHED | TCPF_SYN_RECV |
TCPF_CLOSE_WAIT | TCPF_CLOSE)));
+ if (test_bit(SOCK_SUPPORT_ZC, &sock->flags))
+ set_bit(SOCK_SUPPORT_ZC, &newsock->flags);
sock_graft(sk2, newsock);
newsock->state = SS_CONNECTED;
}
inet->inet_daddr = fl4->daddr;
inet->inet_dport = usin->sin_port;
- reuseport_has_conns(sk, true);
+ reuseport_has_conns_set(sk);
sk->sk_state = TCP_ESTABLISHED;
sk_set_txhash(sk);
inet->inet_id = get_random_u16();
dev_match = dev_match || (res.type == RTN_LOCAL &&
dev == net->loopback_dev);
if (dev_match) {
- ret = FIB_RES_NHC(res)->nhc_scope >= RT_SCOPE_LINK;
+ ret = FIB_RES_NHC(res)->nhc_scope >= RT_SCOPE_HOST;
return ret;
}
if (no_addr)
ret = 0;
if (fib_lookup(net, &fl4, &res, FIB_LOOKUP_IGNORE_LINKSTATE) == 0) {
if (res.type == RTN_UNICAST)
- ret = FIB_RES_NHC(res)->nhc_scope >= RT_SCOPE_LINK;
+ ret = FIB_RES_NHC(res)->nhc_scope >= RT_SCOPE_HOST;
}
return ret;
nh->fib_nh_dev = in_dev->dev;
netdev_hold(nh->fib_nh_dev, &nh->fib_nh_dev_tracker, GFP_ATOMIC);
- nh->fib_nh_scope = RT_SCOPE_LINK;
+ nh->fib_nh_scope = RT_SCOPE_HOST;
if (!netif_carrier_ok(nh->fib_nh_dev))
nh->fib_nh_flags |= RTNH_F_LINKDOWN;
err = 0;
flow.flowi4_tos = iph->tos & IPTOS_RT_MASK;
flow.flowi4_scope = RT_SCOPE_UNIVERSE;
flow.flowi4_l3mdev = l3mdev_master_ifindex_rcu(xt_in(par));
+ flow.flowi4_uid = sock_net_uid(xt_net(par), NULL);
return rpfilter_lookup_reverse(xt_net(par), &flow, xt_in(par), info->flags) ^ invert;
}
struct flowi4 fl4 = {
.flowi4_scope = RT_SCOPE_UNIVERSE,
.flowi4_iif = LOOPBACK_IFINDEX,
+ .flowi4_uid = sock_net_uid(nft_net(pkt), NULL),
};
const struct net_device *oif;
const struct net_device *found;
if (!err) {
nh->nh_flags = fib_nh->fib_nh_flags;
fib_info_update_nhc_saddr(net, &fib_nh->nh_common,
- fib_nh->fib_nh_scope);
+ !fib_nh->fib_nh_scope ? 0 : fib_nh->fib_nh_scope - 1);
} else {
fib_nh_release(net, fib_nh);
}
WRITE_ONCE(sk->sk_sndbuf, READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_wmem[1]));
WRITE_ONCE(sk->sk_rcvbuf, READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_rmem[1]));
+ set_bit(SOCK_SUPPORT_ZC, &sk->sk_socket->flags);
sk_sockets_allocated_inc(sk);
}
EXPORT_SYMBOL(tcp_init_sock);
} else {
sk->sk_write_space = psock->saved_write_space;
/* Pairs with lockless read in sk_clone_lock() */
- WRITE_ONCE(sk->sk_prot, psock->sk_proto);
+ sock_replace_proto(sk, psock->sk_proto);
}
return 0;
}
}
/* Pairs with lockless read in sk_clone_lock() */
- WRITE_ONCE(sk->sk_prot, &tcp_bpf_prots[family][config]);
+ sock_replace_proto(sk, &tcp_bpf_prots[family][config]);
return 0;
}
EXPORT_SYMBOL_GPL(tcp_bpf_update_proto);
*/
static bool tcp_check_sack_reneging(struct sock *sk, int flag)
{
- if (flag & FLAG_SACK_RENEGING) {
+ if (flag & FLAG_SACK_RENEGING &&
+ flag & FLAG_SND_UNA_ADVANCED) {
struct tcp_sock *tp = tcp_sk(sk);
unsigned long delay = max(usecs_to_jiffies(tp->srtt_us >> 4),
msecs_to_jiffies(10));
__skb_push(skb, hdrlen);
no_coalesce:
+ limit = (u32)READ_ONCE(sk->sk_rcvbuf) + (u32)(READ_ONCE(sk->sk_sndbuf) >> 1);
+
/* Only socket owner can try to collapse/prune rx queues
* to reduce memory overhead, so add a little headroom here.
* Few sockets backlog are possibly concurrently non empty.
*/
- limit = READ_ONCE(sk->sk_rcvbuf) + READ_ONCE(sk->sk_sndbuf) + 64*1024;
+ limit += 64 * 1024;
if (unlikely(sk_add_backlog(sk, skb, limit))) {
bh_unlock_sock(sk);
if (icsk->icsk_ulp_ops)
goto out_err;
+ if (sk->sk_socket)
+ clear_bit(SOCK_SUPPORT_ZC, &sk->sk_socket->flags);
+
err = ulp_ops->init(sk);
if (err)
goto out_err;
result = lookup_reuseport(net, sk, skb,
saddr, sport, daddr, hnum);
/* Fall back to scoring if group has connections */
- if (result && !reuseport_has_conns(sk, false))
+ if (result && !reuseport_has_conns(sk))
return result;
result = result ? : sk;
{
skb_queue_head_init(&udp_sk(sk)->reader_queue);
sk->sk_destruct = udp_destruct_sock;
+ set_bit(SOCK_SUPPORT_ZC, &sk->sk_socket->flags);
return 0;
}
if (restore) {
sk->sk_write_space = psock->saved_write_space;
- WRITE_ONCE(sk->sk_prot, psock->sk_proto);
+ sock_replace_proto(sk, psock->sk_proto);
return 0;
}
if (sk->sk_family == AF_INET6)
udp_bpf_check_v6_needs_rebuild(psock->sk_proto);
- WRITE_ONCE(sk->sk_prot, &udp_bpf_prots[family]);
+ sock_replace_proto(sk, &udp_bpf_prots[family]);
return 0;
}
EXPORT_SYMBOL_GPL(udp_bpf_update_proto);
__addrconf_sysctl_unregister(net, all, NETCONFA_IFINDEX_ALL);
err_reg_all:
kfree(dflt);
+ net->ipv6.devconf_dflt = NULL;
#endif
err_alloc_dflt:
kfree(all);
+ net->ipv6.devconf_all = NULL;
err_alloc_all:
kfree(net->ipv6.inet6_addr_lst);
err_alloc_addr:
goto out;
}
- reuseport_has_conns(sk, true);
+ reuseport_has_conns_set(sk);
sk->sk_state = TCP_ESTABLISHED;
sk_set_txhash(sk);
out:
dev->needed_headroom = dst_len;
if (set_mtu) {
- dev->mtu = rt->dst.dev->mtu - t_hlen;
+ int mtu = rt->dst.dev->mtu - t_hlen;
+
if (!(t->parms.flags & IP6_TNL_F_IGN_ENCAP_LIMIT))
- dev->mtu -= 8;
+ mtu -= 8;
if (dev->type == ARPHRD_ETHER)
- dev->mtu -= ETH_HLEN;
+ mtu -= ETH_HLEN;
- if (dev->mtu < IPV6_MIN_MTU)
- dev->mtu = IPV6_MIN_MTU;
+ if (mtu < IPV6_MIN_MTU)
+ mtu = IPV6_MIN_MTU;
+ WRITE_ONCE(dev->mtu, mtu);
}
}
ip6_rt_put(rt);
struct net_device *tdev = NULL;
struct __ip6_tnl_parm *p = &t->parms;
struct flowi6 *fl6 = &t->fl.u.ip6;
- unsigned int mtu;
int t_hlen;
+ int mtu;
__dev_addr_set(dev, &p->laddr, sizeof(struct in6_addr));
memcpy(dev->broadcast, &p->raddr, sizeof(struct in6_addr));
dev->hard_header_len = tdev->hard_header_len + t_hlen;
mtu = min_t(unsigned int, tdev->mtu, IP6_MAX_MTU);
- dev->mtu = mtu - t_hlen;
+ mtu = mtu - t_hlen;
if (!(t->parms.flags & IP6_TNL_F_IGN_ENCAP_LIMIT))
- dev->mtu -= 8;
+ mtu -= 8;
- if (dev->mtu < IPV6_MIN_MTU)
- dev->mtu = IPV6_MIN_MTU;
+ if (mtu < IPV6_MIN_MTU)
+ mtu = IPV6_MIN_MTU;
+ WRITE_ONCE(dev->mtu, mtu);
}
}
}
.flowi6_l3mdev = l3mdev_master_ifindex_rcu(dev),
.flowlabel = (* (__be32 *) iph) & IPV6_FLOWINFO_MASK,
.flowi6_proto = iph->nexthdr,
+ .flowi6_uid = sock_net_uid(net, NULL),
.daddr = iph->saddr,
};
int lookup_flags;
struct flowi6 fl6 = {
.flowi6_iif = LOOPBACK_IFINDEX,
.flowi6_proto = pkt->tprot,
+ .flowi6_uid = sock_net_uid(nft_net(pkt), NULL),
};
u32 ret = 0;
struct flowi6 fl6 = {
.flowi6_iif = LOOPBACK_IFINDEX,
.flowi6_proto = pkt->tprot,
+ .flowi6_uid = sock_net_uid(nft_net(pkt), NULL),
};
struct rt6_info *rt;
int lookup_flags;
static int __net_init ip6_route_net_init_late(struct net *net)
{
#ifdef CONFIG_PROC_FS
- proc_create_net("ipv6_route", 0, net->proc_net, &ipv6_route_seq_ops,
- sizeof(struct ipv6_route_iter));
- proc_create_net_single("rt6_stats", 0444, net->proc_net,
- rt6_stats_seq_show, NULL);
+ if (!proc_create_net("ipv6_route", 0, net->proc_net,
+ &ipv6_route_seq_ops,
+ sizeof(struct ipv6_route_iter)))
+ return -ENOMEM;
+
+ if (!proc_create_net_single("rt6_stats", 0444, net->proc_net,
+ rt6_stats_seq_show, NULL)) {
+ remove_proc_entry("ipv6_route", net->proc_net);
+ return -ENOMEM;
+ }
#endif
return 0;
}
if (tdev && !netif_is_l3_master(tdev)) {
int t_hlen = tunnel->hlen + sizeof(struct iphdr);
+ int mtu;
- dev->mtu = tdev->mtu - t_hlen;
- if (dev->mtu < IPV6_MIN_MTU)
- dev->mtu = IPV6_MIN_MTU;
+ mtu = tdev->mtu - t_hlen;
+ if (mtu < IPV6_MIN_MTU)
+ mtu = IPV6_MIN_MTU;
+ WRITE_ONCE(dev->mtu, mtu);
}
}
{
skb_queue_head_init(&udp_sk(sk)->reader_queue);
sk->sk_destruct = udpv6_destruct_sock;
+ set_bit(SOCK_SUPPORT_ZC, &sk->sk_socket->flags);
return 0;
}
result = lookup_reuseport(net, sk, skb,
saddr, sport, daddr, hnum);
/* Fall back to scoring if group has connections */
- if (result && !reuseport_has_conns(sk, false))
+ if (result && !reuseport_has_conns(sk))
return result;
result = result ? : sk;
/* Buffer limit is okay now, add to ready list */
list_add_tail(&kcm->wait_rx_list,
&kcm->mux->kcm_rx_waiters);
- kcm->rx_wait = true;
+ /* paired with lockless reads in kcm_rfree() */
+ WRITE_ONCE(kcm->rx_wait, true);
}
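A sketch of the marked-access pairing, using C11 relaxed atomics as a userspace stand-in for WRITE_ONCE()/READ_ONCE() (the equivalence is assumed only for illustration; the kernel macros are compiler annotations, not atomics):

#include <stdatomic.h>
#include <stdbool.h>

static _Atomic bool rx_wait;

/* writer, under the lock in the kernel code: */
static void writer(void)
{
	atomic_store_explicit(&rx_wait, true, memory_order_relaxed);
}

/* lockless reader, as in kcm_rfree(): */
static bool reader(void)
{
	return atomic_load_explicit(&rx_wait, memory_order_relaxed);
}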
static void kcm_rfree(struct sk_buff *skb)
/* For reading rx_wait and rx_psock without holding lock */
smp_mb__after_atomic();
- if (!kcm->rx_wait && !kcm->rx_psock &&
+ if (!READ_ONCE(kcm->rx_wait) && !READ_ONCE(kcm->rx_psock) &&
sk_rmem_alloc_get(sk) < sk->sk_rcvlowat) {
spin_lock_bh(&mux->rx_lock);
kcm_rcv_ready(kcm);
if (kcm_queue_rcv_skb(&kcm->sk, skb)) {
/* Should mean socket buffer full */
list_del(&kcm->wait_rx_list);
- kcm->rx_wait = false;
+ /* paired with lockless reads in kcm_rfree() */
+ WRITE_ONCE(kcm->rx_wait, false);
/* Commit rx_wait to read in kcm_free */
smp_wmb();
kcm = list_first_entry(&mux->kcm_rx_waiters,
struct kcm_sock, wait_rx_list);
list_del(&kcm->wait_rx_list);
- kcm->rx_wait = false;
+ /* paired with lockless reads in kcm_rfree() */
+ WRITE_ONCE(kcm->rx_wait, false);
psock->rx_kcm = kcm;
- kcm->rx_psock = psock;
+ /* paired with lockless reads in kcm_rfree() */
+ WRITE_ONCE(kcm->rx_psock, psock);
spin_unlock_bh(&mux->rx_lock);
spin_lock_bh(&mux->rx_lock);
psock->rx_kcm = NULL;
- kcm->rx_psock = NULL;
+ /* paired with lockless reads in kcm_rfree() */
+ WRITE_ONCE(kcm->rx_psock, NULL);
/* Commit kcm->rx_psock before sk_rmem_alloc_get to sync with
* kcm_rfree
}
get_page(page);
- skb_fill_page_desc(skb, i, page, offset, size);
+ skb_fill_page_desc_noacc(skb, i, page, offset, size);
skb_shinfo(skb)->flags |= SKBFL_SHARED_FRAG;
coalesced:
if (!kcm->rx_psock) {
if (kcm->rx_wait) {
list_del(&kcm->wait_rx_list);
- kcm->rx_wait = false;
+ /* paired with lockless reads in kcm_rfree() */
+ WRITE_ONCE(kcm->rx_wait, false);
}
requeue_rx_msgs(mux, &kcm->sk.sk_receive_queue);
if (kcm->rx_wait) {
list_del(&kcm->wait_rx_list);
- kcm->rx_wait = false;
+ /* paired with lockless reads in kcm_rfree() */
+ WRITE_ONCE(kcm->rx_wait, false);
}
/* Move any pending receive messages to other kcm sockets */
requeue_rx_msgs(mux, &sk->sk_receive_queue);
ieee802154_parse_frame_start(struct sk_buff *skb, struct ieee802154_hdr *hdr)
{
int hlen;
- struct ieee802154_mac_cb *cb = mac_cb_init(skb);
+ struct ieee802154_mac_cb *cb = mac_cb(skb);
skb_reset_mac_header(skb);
ieee802154_rx_irqsafe(struct ieee802154_hw *hw, struct sk_buff *skb, u8 lqi)
{
struct ieee802154_local *local = hw_to_local(hw);
+ struct ieee802154_mac_cb *cb = mac_cb_init(skb);
- mac_cb(skb)->lqi = lqi;
+ cb->lqi = lqi;
skb->pkt_type = IEEE802154_RX_MSG;
skb_queue_tail(&local->skb_queue, skb);
tasklet_schedule(&local->tasklet);
set_bit(MPTCP_NOSPACE, &mptcp_sk(sk)->flags);
}
+static int mptcp_sendmsg_fastopen(struct sock *sk, struct sock *ssk, struct msghdr *msg,
+ size_t len, int *copied_syn)
+{
+ unsigned int saved_flags = msg->msg_flags;
+ struct mptcp_sock *msk = mptcp_sk(sk);
+ int ret;
+
+ lock_sock(ssk);
+ msg->msg_flags |= MSG_DONTWAIT;
+ msk->connect_flags = O_NONBLOCK;
+ msk->is_sendmsg = 1;
+ ret = tcp_sendmsg_fastopen(ssk, msg, copied_syn, len, NULL);
+ msk->is_sendmsg = 0;
+ msg->msg_flags = saved_flags;
+ release_sock(ssk);
+
+ /* do the blocking bits of inet_stream_connect outside the ssk socket lock */
+ if (ret == -EINPROGRESS && !(msg->msg_flags & MSG_DONTWAIT)) {
+ ret = __inet_stream_connect(sk->sk_socket, msg->msg_name,
+ msg->msg_namelen, msg->msg_flags, 1);
+
+ /* Keep the same behaviour of plain TCP: zero the copied bytes in
+ * case of any error, except timeout or signal
+ */
+ if (ret && ret != -EINPROGRESS && ret != -ERESTARTSYS && ret != -EINTR)
+ *copied_syn = 0;
+ }
+
+ return ret;
+}
+
static int mptcp_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
{
struct mptcp_sock *msk = mptcp_sk(sk);
ssock = __mptcp_nmpc_socket(msk);
if (unlikely(ssock && inet_sk(ssock->sk)->defer_connect)) {
- struct sock *ssk = ssock->sk;
int copied_syn = 0;
- lock_sock(ssk);
-
- ret = tcp_sendmsg_fastopen(ssk, msg, &copied_syn, len, NULL);
+ ret = mptcp_sendmsg_fastopen(sk, ssock->sk, msg, len, &copied_syn);
copied += copied_syn;
- if (ret == -EINPROGRESS && copied_syn > 0) {
- /* reflect the new state on the MPTCP socket */
- inet_sk_state_store(sk, inet_sk_state_load(ssk));
- release_sock(ssk);
+ if (ret == -EINPROGRESS && copied_syn > 0)
goto out;
- } else if (ret) {
- release_sock(ssk);
+ else if (ret)
goto do_error;
- }
- release_sock(ssk);
}
timeo = sock_sndtimeo(sk, msg->msg_flags & MSG_DONTWAIT);
sock_put(sk);
}
-static void mptcp_copy_inaddrs(struct sock *msk, const struct sock *ssk)
+void mptcp_copy_inaddrs(struct sock *msk, const struct sock *ssk)
{
#if IS_ENABLED(CONFIG_MPTCP_IPV6)
const struct ipv6_pinfo *ssk6 = inet6_sk(ssk);
return put_user(answ, (int __user *)arg);
}
+static void mptcp_subflow_early_fallback(struct mptcp_sock *msk,
+ struct mptcp_subflow_context *subflow)
+{
+ subflow->request_mptcp = 0;
+ __mptcp_do_fallback(msk);
+}
+
+static int mptcp_connect(struct sock *sk, struct sockaddr *uaddr, int addr_len)
+{
+ struct mptcp_subflow_context *subflow;
+ struct mptcp_sock *msk = mptcp_sk(sk);
+ struct socket *ssock;
+ int err = -EINVAL;
+
+ ssock = __mptcp_nmpc_socket(msk);
+ if (!ssock)
+ return -EINVAL;
+
+ mptcp_token_destroy(msk);
+ inet_sk_state_store(sk, TCP_SYN_SENT);
+ subflow = mptcp_subflow_ctx(ssock->sk);
+#ifdef CONFIG_TCP_MD5SIG
+ /* no MPTCP if MD5SIG is enabled on this socket or we may run out of
+ * TCP option space.
+ */
+ if (rcu_access_pointer(tcp_sk(ssock->sk)->md5sig_info))
+ mptcp_subflow_early_fallback(msk, subflow);
+#endif
+ if (subflow->request_mptcp && mptcp_token_new_connect(ssock->sk)) {
+ MPTCP_INC_STATS(sock_net(ssock->sk), MPTCP_MIB_TOKENFALLBACKINIT);
+ mptcp_subflow_early_fallback(msk, subflow);
+ }
+ if (likely(!__mptcp_check_fallback(msk)))
+ MPTCP_INC_STATS(sock_net(sk), MPTCP_MIB_MPCAPABLEACTIVE);
+
+ /* if reaching here via the fastopen/sendmsg path, the caller already
+ * acquired the subflow socket lock, too.
+ */
+ if (msk->is_sendmsg)
+ err = __inet_stream_connect(ssock, uaddr, addr_len, msk->connect_flags, 1);
+ else
+ err = inet_stream_connect(ssock, uaddr, addr_len, msk->connect_flags);
+ inet_sk(sk)->defer_connect = inet_sk(ssock->sk)->defer_connect;
+
+ /* on successful connect, the msk state will be moved to established by
+ * subflow_finish_connect()
+ */
+ if (unlikely(err && err != -EINPROGRESS)) {
+ inet_sk_state_store(sk, inet_sk_state_load(ssock->sk));
+ return err;
+ }
+
+ mptcp_copy_inaddrs(sk, ssock->sk);
+
+ /* non-blocking connect: the mptcp-level inet_stream_connect() will
+ * error out without changing the socket state, so update it here.
+ */
+ if (err == -EINPROGRESS)
+ sk->sk_socket->state = ssock->state;
+ return err;
+}
+
static struct proto mptcp_prot = {
.name = "MPTCP",
.owner = THIS_MODULE,
.init = mptcp_init_sock,
+ .connect = mptcp_connect,
.disconnect = mptcp_disconnect,
.close = mptcp_close,
.accept = mptcp_accept,
return err;
}
-static void mptcp_subflow_early_fallback(struct mptcp_sock *msk,
- struct mptcp_subflow_context *subflow)
-{
- subflow->request_mptcp = 0;
- __mptcp_do_fallback(msk);
-}
-
static int mptcp_stream_connect(struct socket *sock, struct sockaddr *uaddr,
int addr_len, int flags)
{
- struct mptcp_sock *msk = mptcp_sk(sock->sk);
- struct mptcp_subflow_context *subflow;
- struct socket *ssock;
- int err = -EINVAL;
+ int ret;
lock_sock(sock->sk);
- if (uaddr) {
- if (addr_len < sizeof(uaddr->sa_family))
- goto unlock;
-
- if (uaddr->sa_family == AF_UNSPEC) {
- err = mptcp_disconnect(sock->sk, flags);
- sock->state = err ? SS_DISCONNECTING : SS_UNCONNECTED;
- goto unlock;
- }
- }
-
- if (sock->state != SS_UNCONNECTED && msk->subflow) {
- /* pending connection or invalid state, let existing subflow
- * cope with that
- */
- ssock = msk->subflow;
- goto do_connect;
- }
-
- ssock = __mptcp_nmpc_socket(msk);
- if (!ssock)
- goto unlock;
-
- mptcp_token_destroy(msk);
- inet_sk_state_store(sock->sk, TCP_SYN_SENT);
- subflow = mptcp_subflow_ctx(ssock->sk);
-#ifdef CONFIG_TCP_MD5SIG
- /* no MPTCP if MD5SIG is enabled on this socket or we may run out of
- * TCP option space.
- */
- if (rcu_access_pointer(tcp_sk(ssock->sk)->md5sig_info))
- mptcp_subflow_early_fallback(msk, subflow);
-#endif
- if (subflow->request_mptcp && mptcp_token_new_connect(ssock->sk)) {
- MPTCP_INC_STATS(sock_net(ssock->sk), MPTCP_MIB_TOKENFALLBACKINIT);
- mptcp_subflow_early_fallback(msk, subflow);
- }
- if (likely(!__mptcp_check_fallback(msk)))
- MPTCP_INC_STATS(sock_net(sock->sk), MPTCP_MIB_MPCAPABLEACTIVE);
-
-do_connect:
- err = ssock->ops->connect(ssock, uaddr, addr_len, flags);
- inet_sk(sock->sk)->defer_connect = inet_sk(ssock->sk)->defer_connect;
- sock->state = ssock->state;
-
- /* on successful connect, the msk state will be moved to established by
- * subflow_finish_connect()
- */
- if (!err || err == -EINPROGRESS)
- mptcp_copy_inaddrs(sock->sk, ssock->sk);
- else
- inet_sk_state_store(sock->sk, inet_sk_state_load(ssock->sk));
-
-unlock:
+ mptcp_sk(sock->sk)->connect_flags = flags;
+ ret = __inet_stream_connect(sock, uaddr, addr_len, flags, 0);
release_sock(sock->sk);
- return err;
+ return ret;
}
static int mptcp_listen(struct socket *sock, int backlog)
if (mptcp_is_fully_established(newsk))
mptcp_pm_fully_established(msk, msk->first, GFP_KERNEL);
- mptcp_copy_inaddrs(newsk, msk->first);
mptcp_rcv_space_init(msk, msk->first);
mptcp_propagate_sndbuf(newsk, msk->first);
u8 mpc_endpoint_id;
u8 recvmsg_inq:1,
cork:1,
- nodelay:1;
+ nodelay:1,
+ is_sendmsg:1;
+ int connect_flags;
struct work_struct work;
struct sk_buff *ooo_last_skb;
struct rb_root out_of_order_queue;
int mptcp_allow_join_id0(const struct net *net);
unsigned int mptcp_stale_loss_cnt(const struct net *net);
int mptcp_get_pm_type(const struct net *net);
+void mptcp_copy_inaddrs(struct sock *msk, const struct sock *ssk);
void mptcp_subflow_fully_established(struct mptcp_subflow_context *subflow,
struct mptcp_options_received *mp_opt);
bool __mptcp_retransmit_pending_data(struct sock *sk);
goto dispose_child;
}
+ if (new_msk)
+ mptcp_copy_inaddrs(new_msk, child);
subflow_drop_ctx(child);
goto out;
}
ctx->conn = new_msk;
new_msk = NULL;
+ /* set msk addresses early to ensure mptcp_pm_get_local_id()
+ * uses the correct data
+ */
+ mptcp_copy_inaddrs(ctx->conn, child);
+
/* with OoO packets we can reach here without ingress
* mpc option
*/
#define AHASH_MAX_SIZE (6 * AHASH_INIT_SIZE)
/* Max number of elements in the array block when tuned */
#define AHASH_MAX_TUNED 64
-
#define AHASH_MAX(h) ((h)->bucketsize)
-/* Max number of elements can be tuned */
-#ifdef IP_SET_HASH_WITH_MULTI
-static u8
-tune_bucketsize(u8 curr, u32 multi)
-{
- u32 n;
-
- if (multi < curr)
- return curr;
-
- n = curr + AHASH_INIT_SIZE;
- /* Currently, at listing one hash bucket must fit into a message.
- * Therefore we have a hard limit here.
- */
- return n > curr && n <= AHASH_MAX_TUNED ? n : curr;
-}
-#define TUNE_BUCKETSIZE(h, multi) \
- ((h)->bucketsize = tune_bucketsize((h)->bucketsize, multi))
-#else
-#define TUNE_BUCKETSIZE(h, multi)
-#endif
-
/* A hash bucket */
struct hbucket {
struct rcu_head rcu; /* for call_rcu */
goto set_full;
/* Create a new slot */
if (n->pos >= n->size) {
- TUNE_BUCKETSIZE(h, multi);
+#ifdef IP_SET_HASH_WITH_MULTI
+ if (h->bucketsize >= AHASH_MAX_TUNED)
+ goto set_full;
+ else if (h->bucketsize < multi)
+ h->bucketsize += AHASH_INIT_SIZE;
+#endif
if (n->size >= AHASH_MAX(h)) {
/* Trigger rehashing */
mtype_data_next(&h->next, d);
int __net_init ip_vs_app_net_init(struct netns_ipvs *ipvs)
{
INIT_LIST_HEAD(&ipvs->app_list);
- proc_create_net("ip_vs_app", 0, ipvs->net->proc_net, &ip_vs_app_seq_ops,
- sizeof(struct seq_net_private));
+#ifdef CONFIG_PROC_FS
+ if (!proc_create_net("ip_vs_app", 0, ipvs->net->proc_net,
+ &ip_vs_app_seq_ops,
+ sizeof(struct seq_net_private)))
+ return -ENOMEM;
+#endif
return 0;
}
void __net_exit ip_vs_app_net_cleanup(struct netns_ipvs *ipvs)
{
unregister_ip_vs_app(ipvs, NULL /* all */);
+#ifdef CONFIG_PROC_FS
remove_proc_entry("ip_vs_app", ipvs->net->proc_net);
+#endif
}
* The drop rate array needs tuning for real environments.
* Called from timer bh only => no locking
*/
- static const char todrop_rate[9] = {0, 1, 2, 3, 4, 5, 6, 7, 8};
- static char todrop_counter[9] = {0};
+ static const signed char todrop_rate[9] = {0, 1, 2, 3, 4, 5, 6, 7, 8};
+ static signed char todrop_counter[9] = {0};
int i;
/* if the conn entry hasn't lasted for 60 seconds, don't drop it.
{
atomic_set(&ipvs->conn_count, 0);
- proc_create_net("ip_vs_conn", 0, ipvs->net->proc_net,
- &ip_vs_conn_seq_ops, sizeof(struct ip_vs_iter_state));
- proc_create_net("ip_vs_conn_sync", 0, ipvs->net->proc_net,
- &ip_vs_conn_sync_seq_ops,
- sizeof(struct ip_vs_iter_state));
+#ifdef CONFIG_PROC_FS
+ if (!proc_create_net("ip_vs_conn", 0, ipvs->net->proc_net,
+ &ip_vs_conn_seq_ops,
+ sizeof(struct ip_vs_iter_state)))
+ goto err_conn;
+
+ if (!proc_create_net("ip_vs_conn_sync", 0, ipvs->net->proc_net,
+ &ip_vs_conn_sync_seq_ops,
+ sizeof(struct ip_vs_iter_state)))
+ goto err_conn_sync;
+#endif
+
return 0;
+
+#ifdef CONFIG_PROC_FS
+err_conn_sync:
+ remove_proc_entry("ip_vs_conn", ipvs->net->proc_net);
+err_conn:
+ return -ENOMEM;
+#endif
}
void __net_exit ip_vs_conn_net_cleanup(struct netns_ipvs *ipvs)
{
/* flush all the connection entries first */
ip_vs_conn_flush(ipvs);
+#ifdef CONFIG_PROC_FS
remove_proc_entry("ip_vs_conn", ipvs->net->proc_net);
remove_proc_entry("ip_vs_conn_sync", ipvs->net->proc_net);
+#endif
}
int __init ip_vs_conn_init(void)
WARN_ON(nf_nat_hook != NULL);
RCU_INIT_POINTER(nf_nat_hook, &nat_hook);
- return register_nf_nat_bpf();
+ ret = register_nf_nat_bpf();
+ if (ret < 0) {
+ RCU_INIT_POINTER(nf_nat_hook, NULL);
+ nf_ct_helper_expectfn_unregister(&follow_master_nat);
+ synchronize_net();
+ unregister_pernet_subsys(&nat_net_ops);
+ kvfree(nf_nat_bysource);
+ }
+
+ return ret;
}
static void __exit nf_nat_cleanup(void)
(NFT_SET_CONCAT | NFT_SET_INTERVAL)) {
if (flags & NFT_SET_ELEM_INTERVAL_END)
return false;
- if (!nla[NFTA_SET_ELEM_KEY_END] &&
- !(flags & NFT_SET_ELEM_CATCHALL))
+
+ if (nla[NFTA_SET_ELEM_KEY_END] &&
+ flags & NFT_SET_ELEM_CATCHALL)
return false;
} else {
if (nla[NFTA_SET_ELEM_KEY_END])
nf_tables_chain_destroy(&trans->ctx);
break;
case NFT_MSG_DELRULE:
- if (trans->ctx.chain->flags & NFT_CHAIN_HW_OFFLOAD)
- nft_flow_rule_destroy(nft_trans_flow_rule(trans));
-
nf_tables_rule_destroy(&trans->ctx, nft_trans_rule(trans));
break;
case NFT_MSG_DELSET:
nft_rule_expr_deactivate(&trans->ctx,
nft_trans_rule(trans),
NFT_TRANS_COMMIT);
+
+ if (trans->ctx.chain->flags & NFT_CHAIN_HW_OFFLOAD)
+ nft_flow_rule_destroy(nft_trans_flow_rule(trans));
break;
case NFT_MSG_NEWSET:
nft_clear(net, nft_trans_set(trans));
nft_net = nft_pernet(net);
deleted = 0;
mutex_lock(&nft_net->commit_mutex);
+ if (!list_empty(&nf_tables_destroy_list))
+ rcu_barrier();
again:
list_for_each_entry(table, &nft_net->tables, list) {
if (nft_table_has_owner(table) &&
[NFTA_PAYLOAD_SREG] = { .type = NLA_U32 },
[NFTA_PAYLOAD_DREG] = { .type = NLA_U32 },
[NFTA_PAYLOAD_BASE] = { .type = NLA_U32 },
- [NFTA_PAYLOAD_OFFSET] = NLA_POLICY_MAX_BE(NLA_U32, 255),
- [NFTA_PAYLOAD_LEN] = NLA_POLICY_MAX_BE(NLA_U32, 255),
+ [NFTA_PAYLOAD_OFFSET] = NLA_POLICY_MAX(NLA_BE32, 255),
+ [NFTA_PAYLOAD_LEN] = NLA_POLICY_MAX(NLA_BE32, 255),
[NFTA_PAYLOAD_CSUM_TYPE] = { .type = NLA_U32 },
- [NFTA_PAYLOAD_CSUM_OFFSET] = NLA_POLICY_MAX_BE(NLA_U32, 255),
+ [NFTA_PAYLOAD_CSUM_OFFSET] = NLA_POLICY_MAX(NLA_BE32, 255),
[NFTA_PAYLOAD_CSUM_FLAGS] = { .type = NLA_U32 },
};
static unsigned long *mc_groups = &mc_group_start;
static unsigned long mc_groups_longs = 1;
+/* We need the last attribute to have a non-zero ID, therefore a 2-entry array */
+static struct nla_policy genl_policy_reject_all[] = {
+ { .type = NLA_REJECT },
+ { .type = NLA_REJECT },
+};
+
static int genl_ctrl_event(int event, const struct genl_family *family,
const struct genl_multicast_group *grp,
int grp_id);
+static void
+genl_op_fill_in_reject_policy(const struct genl_family *family,
+ struct genl_ops *op)
+{
+ BUILD_BUG_ON(ARRAY_SIZE(genl_policy_reject_all) - 1 != 1);
+
+ if (op->policy || op->cmd < family->resv_start_op)
+ return;
+
+ op->policy = genl_policy_reject_all;
+ op->maxattr = 1;
+}
+
static const struct genl_family *genl_family_find_byid(unsigned int id)
{
return idr_find(&genl_fam_idr, id);
op->maxattr = family->maxattr;
if (!op->policy)
op->policy = family->policy;
+
+ genl_op_fill_in_reject_policy(family, op);
}
static int genl_get_cmd_full(u32 cmd, const struct genl_family *family,
op->maxattr = family->maxattr;
op->policy = family->policy;
+
+ genl_op_fill_in_reject_policy(family, op);
}
static int genl_get_cmd_small(u32 cmd, const struct genl_family *family,
genl_get_cmd_by_index(i, family, &op);
if (op.dumpit == NULL && op.doit == NULL)
return -EINVAL;
+ if (WARN_ON(op.cmd >= family->resv_start_op && op.validate))
+ return -EINVAL;
for (j = i + 1; j < genl_get_cmd_cnt(family); j++) {
struct genl_ops op2;
if (IS_ERR(dp))
return;
- WARN(dp->user_features, "Dropping previously announced user features\n");
+ pr_warn("%s: Dropping previously announced user features\n",
+ ovs_dp_name(dp));
dp->user_features = 0;
}
.parallel_ops = true,
.small_ops = dp_vport_genl_ops,
.n_small_ops = ARRAY_SIZE(dp_vport_genl_ops),
+ .resv_start_op = OVS_VPORT_CMD_SET + 1,
.mcgrps = &ovs_dp_vport_multicast_group,
.n_mcgrps = 1,
.module = THIS_MODULE,
unsigned char *dptr;
int len;
+ if (!neigh->dev)
+ return;
+
len = AX25_BPQ_HEADER_LEN + AX25_MAX_HEADER_LEN + ROSE_MIN_LEN + 3;
if ((skb = alloc_skb(len, GFP_ATOMIC)) == NULL)
skip:
if (!ingress) {
- notify_and_destroy(net, skb, n, classid,
- rtnl_dereference(dev->qdisc), new);
+ old = rtnl_dereference(dev->qdisc);
if (new && !new->ops->attach)
qdisc_refcount_inc(new);
rcu_assign_pointer(dev->qdisc, new ? : &noop_qdisc);
+ notify_and_destroy(net, skb, n, classid, old, new);
+
if (new && new->ops->attach)
new->ops->attach(new);
} else {
static void cake_reset(struct Qdisc *sch)
{
+ struct cake_sched_data *q = qdisc_priv(sch);
u32 c;
+ if (!q->tins)
+ return;
+
for (c = 0; c < CAKE_MAX_TINS; c++)
cake_clear_tin(sch, c);
}
if (opt) {
err = fq_codel_change(sch, opt, extack);
if (err)
- return err;
+ goto init_failure;
}
err = tcf_block_get(&q->block, &q->filter_list, sch, extack);
if (err)
- return err;
+ goto init_failure;
if (!q->flows) {
q->flows = kvcalloc(q->flows_cnt,
sizeof(struct fq_codel_flow),
GFP_KERNEL);
- if (!q->flows)
- return -ENOMEM;
-
+ if (!q->flows) {
+ err = -ENOMEM;
+ goto init_failure;
+ }
q->backlogs = kvcalloc(q->flows_cnt, sizeof(u32), GFP_KERNEL);
- if (!q->backlogs)
- return -ENOMEM;
-
+ if (!q->backlogs) {
+ err = -ENOMEM;
+ goto alloc_failure;
+ }
for (i = 0; i < q->flows_cnt; i++) {
struct fq_codel_flow *flow = q->flows + i;
else
sch->flags &= ~TCQ_F_CAN_BYPASS;
return 0;
+
+alloc_failure:
+ kvfree(q->flows);
+ q->flows = NULL;
+init_failure:
+ q->flows_cnt = 0;
+ return err;
}
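The unwinding idiom the fq_codel fix adopts, as a minimal standalone sketch (all names hypothetical):

#include <stdlib.h>

struct q { int *flows; int *backlogs; int flows_cnt; };

static int q_init(struct q *q, int cnt)
{
	q->flows_cnt = cnt;
	q->flows = calloc(cnt, sizeof(*q->flows));
	if (!q->flows)
		goto init_failure;
	q->backlogs = calloc(cnt, sizeof(*q->backlogs));
	if (!q->backlogs)
		goto alloc_failure;
	return 0;

alloc_failure:
	free(q->flows);
	q->flows = NULL;
init_failure:
	q->flows_cnt = 0;	/* keep later paths off freed memory */
	return -1;
}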
static int fq_codel_dump(struct Qdisc *sch, struct sk_buff *skb)
{
struct red_sched_data *q = qdisc_priv(sch);
struct Qdisc *child = q->qdisc;
+ unsigned int len;
int ret;
q->vars.qavg = red_calc_qavg(&q->parms,
break;
}
+ len = qdisc_pkt_len(skb);
ret = qdisc_enqueue(skb, child, to_free);
if (likely(ret == NET_XMIT_SUCCESS)) {
- qdisc_qstats_backlog_inc(sch, skb);
+ sch->qstats.backlog += len;
sch->q.qlen++;
} else if (net_xmit_drop_count(ret)) {
q->stats.pdrop++;
{
struct sfb_sched_data *q = qdisc_priv(sch);
- qdisc_reset(q->qdisc);
+ if (likely(q->qdisc))
+ qdisc_reset(q->qdisc);
q->slot = 0;
q->double_buffering = false;
sfb_zero_all_buckets(q);
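The cake_reset() and sfb_reset() changes above apply the same guard; a hedged sketch of the pattern (my_sched_data and its child field are assumed names, the qdisc API calls are real):

/* ->reset() can be invoked before ->init() has succeeded, so any
 * state created in init must be checked before use. */
static void my_reset(struct Qdisc *sch)
{
	struct my_sched_data *q = qdisc_priv(sch);

	if (!q->child)		/* init failed or never ran */
		return;
	qdisc_reset(q->child);
}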
rc = register_pernet_subsys(&smc_net_stat_ops);
if (rc)
- return rc;
+ goto out_pernet_subsys;
smc_ism_init();
smc_clc_init();
rc = smc_nl_init();
if (rc)
- goto out_pernet_subsys;
+ goto out_pernet_subsys_stat;
rc = smc_pnet_init();
if (rc)
smc_pnet_exit();
out_nl:
smc_nl_exit();
+out_pernet_subsys_stat:
+ unregister_pernet_subsys(&smc_net_stat_ops);
out_pernet_subsys:
unregister_pernet_subsys(&smc_net_ops);
}
memcpy(lgr->pnet_id, ibdev->pnetid[ibport - 1],
SMC_MAX_PNETID_LEN);
- if (smc_wr_alloc_lgr_mem(lgr))
+ rc = smc_wr_alloc_lgr_mem(lgr);
+ if (rc)
goto free_wq;
smc_llc_lgr_init(lgr, smc);
goto unwrap_failed;
mic.len = len;
mic.data = kmalloc(len, GFP_KERNEL);
- if (!mic.data)
+ if (ZERO_OR_NULL_PTR(mic.data))
goto unwrap_failed;
if (read_bytes_from_xdr_buf(rcv_buf, offset, mic.data, mic.len))
goto unwrap_failed;
struct net *net)
{
struct rpc_sysfs_client *rpc_client;
+ struct rpc_sysfs_xprt_switch *xswitch =
+ (struct rpc_sysfs_xprt_switch *)xprt_switch->xps_sysfs;
+
+ if (!xswitch)
+ return;
rpc_client = rpc_sysfs_client_alloc(rpc_sunrpc_client_kobj,
net, clnt->cl_clid);
if (rpc_client) {
char name[] = "switch";
- struct rpc_sysfs_xprt_switch *xswitch =
- (struct rpc_sysfs_xprt_switch *)xprt_switch->xps_sysfs;
int ret;
clnt->cl_sysfs = rpc_client;
rpc_xprt_switch->xprt_switch = xprt_switch;
rpc_xprt_switch->xprt = xprt;
kobject_uevent(&rpc_xprt_switch->kobject, KOBJ_ADD);
+ } else {
+ xprt_switch->xps_sysfs = NULL;
}
}
struct rpc_sysfs_xprt_switch *switch_obj =
(struct rpc_sysfs_xprt_switch *)xprt_switch->xps_sysfs;
+ if (!switch_obj)
+ return;
+
rpc_xprt = rpc_sysfs_xprt_alloc(&switch_obj->kobject, xprt, gfp_flags);
if (rpc_xprt) {
xprt->xprt_sysfs = rpc_xprt;
{
struct net *net = d->net;
struct tipc_net *tn = tipc_net(net);
- bool trial = time_before(jiffies, tn->addr_trial_end);
u32 self = tipc_own_addr(net);
+ bool trial = time_before(jiffies, tn->addr_trial_end) && !self;
if (mtyp == DSC_TRIAL_FAIL_MSG) {
if (!trial)
static void tipc_topsrv_accept(struct work_struct *work)
{
struct tipc_topsrv *srv = container_of(work, struct tipc_topsrv, awork);
- struct socket *lsock = srv->listener;
- struct socket *newsock;
+ struct socket *newsock, *lsock;
struct tipc_conn *con;
struct sock *newsk;
int ret;
+ spin_lock_bh(&srv->idr_lock);
+ if (!srv->listener) {
+ spin_unlock_bh(&srv->idr_lock);
+ return;
+ }
+ lsock = srv->listener;
+ spin_unlock_bh(&srv->idr_lock);
+
while (1) {
ret = kernel_accept(lsock, &newsock, O_NONBLOCK);
if (ret < 0)
read_lock_bh(&sk->sk_callback_lock);
srv = sk->sk_user_data;
- if (srv->listener)
+ if (srv)
queue_work(srv->rcv_wq, &srv->awork);
read_unlock_bh(&sk->sk_callback_lock);
}
sub.seq.upper = upper;
sub.timeout = TIPC_WAIT_FOREVER;
sub.filter = filter;
- *(u32 *)&sub.usr_handle = port;
+ *(u64 *)&sub.usr_handle = (u64)port;
con = tipc_conn_alloc(tipc_topsrv(net));
if (IS_ERR(con))
__module_get(lsock->sk->sk_prot_creator->owner);
srv->listener = NULL;
spin_unlock_bh(&srv->idr_lock);
- sock_release(lsock);
+
tipc_topsrv_work_stop(srv);
+ sock_release(lsock);
idr_destroy(&srv->conn_idr);
kfree(srv);
}
return desc.error;
}
-static int tls_strp_read_short(struct tls_strparser *strp)
+static int tls_strp_read_copy(struct tls_strparser *strp, bool qshort)
{
struct skb_shared_info *shinfo;
struct page *page;
* to read the data out. Otherwise the connection will stall.
* Without pressure, a threshold of INT_MAX will never be ready.
*/
- if (likely(!tcp_epollin_ready(strp->sk, INT_MAX)))
+ if (likely(qshort && !tcp_epollin_ready(strp->sk, INT_MAX)))
return 0;
shinfo = skb_shinfo(strp->anchor);
return 0;
}
+static bool tls_strp_check_no_dup(struct tls_strparser *strp)
+{
+ unsigned int len = strp->stm.offset + strp->stm.full_len;
+ struct sk_buff *skb;
+ u32 seq;
+
+ skb = skb_shinfo(strp->anchor)->frag_list;
+ seq = TCP_SKB_CB(skb)->seq;
+
+ while (skb->len < len) {
+ seq += skb->len;
+ len -= skb->len;
+ skb = skb->next;
+
+ if (TCP_SKB_CB(skb)->seq != seq)
+ return false;
+ }
+
+ return true;
+}
+
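What the continuity check catches, as a standalone sketch over a hypothetical fragment queue (three fragments of 100 bytes each; the third claims a sequence number that skips ahead, signalling a retransmit or hole):

#include <stdbool.h>
#include <stdio.h>

struct frag { unsigned int seq, len; };

static bool contiguous(const struct frag *f, int n)
{
	int i;

	for (i = 1; i < n; i++)
		if (f[i - 1].seq + f[i - 1].len != f[i].seq)
			return false;
	return true;
}

int main(void)
{
	struct frag q[] = { {1000, 100}, {1100, 100}, {1300, 100} };

	printf("%d\n", contiguous(q, 3));	/* 0: fall back to copy */
	return 0;
}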
static void tls_strp_load_anchor_with_queue(struct tls_strparser *strp, int len)
{
struct tcp_sock *tp = tcp_sk(strp->sk);
return tls_strp_read_copyin(strp);
if (inq < strp->stm.full_len)
- return tls_strp_read_short(strp);
+ return tls_strp_read_copy(strp, true);
if (!strp->stm.full_len) {
tls_strp_load_anchor_with_queue(strp, inq);
strp->stm.full_len = sz;
if (!strp->stm.full_len || inq < strp->stm.full_len)
- return tls_strp_read_short(strp);
+ return tls_strp_read_copy(strp, true);
}
+ if (!tls_strp_check_no_dup(strp))
+ return tls_strp_read_copy(strp, false);
+
strp->msg_ready = 1;
tls_rx_msg_ready(strp);
if (restore) {
sk->sk_write_space = psock->saved_write_space;
- WRITE_ONCE(sk->sk_prot, psock->sk_proto);
+ sock_replace_proto(sk, psock->sk_proto);
return 0;
}
unix_dgram_bpf_check_needs_rebuild(psock->sk_proto);
- WRITE_ONCE(sk->sk_prot, &unix_dgram_bpf_prot);
+ sock_replace_proto(sk, &unix_dgram_bpf_prot);
return 0;
}
{
if (restore) {
sk->sk_write_space = psock->saved_write_space;
- WRITE_ONCE(sk->sk_prot, psock->sk_proto);
+ sock_replace_proto(sk, psock->sk_proto);
return 0;
}
unix_stream_bpf_check_needs_rebuild(psock->sk_proto);
- WRITE_ONCE(sk->sk_prot, &unix_stream_bpf_prot);
+ sock_replace_proto(sk, &unix_stream_bpf_prot);
return 0;
}
err = 0;
transport = vsk->transport;
- while ((data = vsock_connectible_has_data(vsk)) == 0) {
+ while (1) {
prepare_to_wait(sk_sleep(sk), wait, TASK_INTERRUPTIBLE);
+ data = vsock_connectible_has_data(vsk);
+ if (data != 0)
+ break;
if (sk->sk_err != 0 ||
(sk->sk_shutdown & RCV_SHUTDOWN) ||
const struct vsock_transport *transport;
int err;
- DEFINE_WAIT(wait);
-
sk = sock->sk;
vsk = vsock_sk(sk);
err = 0;
sed 's/ko$$/o/' $(or $(modorder-if-needed), /dev/null) | $(MODPOST) $(modpost-args) -T - $(vmlinux.o-if-present)
targets += $(output-symdump)
-$(output-symdump): $(modorder-if-needed) $(vmlinux.o-if-present) $(moudle.symvers-if-present) $(MODPOST) FORCE
+$(output-symdump): $(modorder-if-needed) $(vmlinux.o-if-present) $(module.symvers-if-present) $(MODPOST) FORCE
$(call if_changed,modpost)
__modpost: $(output-symdump)
if (!expr_eq(prop->menu->dep, prop->visible.expr))
get_dep_str(r, prop->visible.expr, " Visible if: ");
- menu = prop->menu->parent;
- for (i = 0; menu && i < 8; menu = menu->parent) {
+ menu = prop->menu;
+ for (i = 0; menu != &rootmenu && i < 8; menu = menu->parent) {
bool accessible = menu_is_visible(menu);
submenu[i++] = menu;
if (head && location) {
jump = xmalloc(sizeof(struct jump_key));
- if (menu_is_visible(prop->menu)) {
- /*
- * There is not enough room to put the hint at the
- * beginning of the "Prompt" line. Put the hint on the
- * last "Location" line even when it would belong on
- * the former.
- */
- jump->target = prop->menu;
- } else
- jump->target = location;
+ jump->target = location;
if (list_empty(head))
jump->index = 0;
menu = submenu[i];
if (jump && menu == location)
jump->offset = strlen(r->s);
-
- if (menu == &rootmenu)
- /* The real rootmenu prompt is ugly */
- str_printf(r, "%*cMain menu", j, ' ');
- else
- str_printf(r, "%*c-> %s", j, ' ', menu_get_prompt(menu));
-
+ str_printf(r, "%*c-> %s", j, ' ', menu_get_prompt(menu));
if (menu->sym) {
str_printf(r, " (%s [=%s])", menu->sym->name ?
menu->sym->name : "<choice>",
&tmpbuf, size, GFP_NOFS);
dput(dentry);
- if (ret < 0 || !tmpbuf)
- return ret;
+ if (ret < 0 || !tmpbuf) {
+ size = ret;
+ goto out_free;
+ }
fs_ns = inode->i_sb->s_user_ns;
cap = (struct vfs_cap_data *) tmpbuf;
* in `newc'. Verify that the context is valid
* under the new policy.
*/
-static int convert_context(struct context *oldc, struct context *newc, void *p)
+static int convert_context(struct context *oldc, struct context *newc, void *p,
+ gfp_t gfp_flags)
{
struct convert_context_args *args;
struct ocontext *oc;
args = p;
if (oldc->str) {
- s = kstrdup(oldc->str, GFP_KERNEL);
+ s = kstrdup(oldc->str, gfp_flags);
if (!s)
return -ENOMEM;
}
rc = convert->func(context, &dst_convert->context,
- convert->args);
+ convert->args, GFP_ATOMIC);
if (rc) {
context_destroy(&dst->context);
goto out_unlock;
while (i < SIDTAB_LEAF_ENTRIES && *pos < count) {
rc = convert->func(&esrc->ptr_leaf->entries[i].context,
&edst->ptr_leaf->entries[i].context,
- convert->args);
+ convert->args, GFP_KERNEL);
if (rc)
return rc;
(*pos)++;
};
struct sidtab_convert_params {
- int (*func)(struct context *oldc, struct context *newc, void *args);
+ int (*func)(struct context *oldc, struct context *newc, void *args, gfp_t gfp_flags);
void *args;
struct sidtab *target;
};
return rc;
}
+/* Returns 1 if added, 0 otherwise; don't return a negative value! */
/* FIXME: look at device node refcounting */
static int i2sbus_add_dev(struct macio_dev *macio,
struct i2sbus_control *control,
* either as the second one in that case is just a modem. */
if (!ok) {
kfree(dev);
- return -ENODEV;
+ return 0;
}
mutex_init(&dev->lock);
if (soundbus_add_one(&dev->sound)) {
printk(KERN_DEBUG "i2sbus: device registration error!\n");
+ if (dev->sound.ofdev.dev.kobj.state_initialized) {
+ soundbus_dev_put(&dev->sound);
+ return 0;
+ }
goto err;
}
}
EXPORT_SYMBOL(snd_ctl_rename_id);
+/**
+ * snd_ctl_rename - rename the control on the card
+ * @card: the card instance
+ * @kctl: the control to rename
+ * @name: the new name
+ *
+ * Renames the specified control on the card to the new name.
+ *
+ * Make sure to take the control write lock - down_write(&card->controls_rwsem).
+ */
+void snd_ctl_rename(struct snd_card *card, struct snd_kcontrol *kctl,
+ const char *name)
+{
+ remove_hash_entries(card, kctl);
+
+ if (strscpy(kctl->id.name, name, sizeof(kctl->id.name)) < 0)
+ pr_warn("ALSA: Renamed control new name '%s' truncated to '%s'\n",
+ name, kctl->id.name);
+
+ add_hash_entries(card, kctl);
+}
+EXPORT_SYMBOL(snd_ctl_rename);
+
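A hedged usage sketch for the new helper (the card, id and control name are assumed; the fragment presumes the surrounding ALSA driver context):

/* Hypothetical caller: the rename must run under the write lock,
 * as the kerneldoc above requires. */
down_write(&card->controls_rwsem);
kctl = snd_ctl_find_id(card, &id);
if (kctl)
	snd_ctl_rename(card, kctl, "Headphone Playback Switch");
up_write(&card->controls_rwsem);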
#ifndef CONFIG_SND_CTL_FAST_LOOKUP
static struct snd_kcontrol *
snd_ctl_find_numid_slow(struct snd_card *card, unsigned int numid)
err = device_register(&ac97->dev);
if (err < 0) {
ac97_err(ac97, "Can't register ac97 bus\n");
+ put_device(&ac97->dev);
ac97->dev.bus = NULL;
return err;
}
*/
static void set_ctl_name(char *dst, const char *src, const char *suffix)
{
- if (suffix)
- sprintf(dst, "%s %s", src, suffix);
- else
- strcpy(dst, src);
-}
+ const size_t msize = SNDRV_CTL_ELEM_ID_NAME_MAXLEN;
+
+ if (suffix) {
+ if (snprintf(dst, msize, "%s %s", src, suffix) >= msize)
+ pr_warn("ALSA: AC97 control name '%s %s' truncated to '%s'\n",
+ src, suffix, dst);
+ } else {
+ if (strscpy(dst, src, msize) < 0)
+ pr_warn("ALSA: AC97 control name '%s' truncated to '%s'\n",
+ src, dst);
+ }
+}
/* remove the control with the given name and optional suffix */
static int snd_ac97_remove_ctl(struct snd_ac97 *ac97, const char *name,
const char *dst, const char *suffix)
{
struct snd_kcontrol *kctl = ctl_find(ac97, src, suffix);
+ char name[SNDRV_CTL_ELEM_ID_NAME_MAXLEN];
+
if (kctl) {
- set_ctl_name(kctl->id.name, dst, suffix);
+ set_ctl_name(name, dst, suffix);
+ snd_ctl_rename(ac97->bus->card, kctl, name);
return 0;
}
return -ENOENT;
const char *s2, const char *suffix)
{
struct snd_kcontrol *kctl1, *kctl2;
+ char name[SNDRV_CTL_ELEM_ID_NAME_MAXLEN];
+
kctl1 = ctl_find(ac97, s1, suffix);
kctl2 = ctl_find(ac97, s2, suffix);
if (kctl1 && kctl2) {
- set_ctl_name(kctl1->id.name, s2, suffix);
- set_ctl_name(kctl2->id.name, s1, suffix);
+ set_ctl_name(name, s2, suffix);
+ snd_ctl_rename(ac97->bus->card, kctl1, name);
+
+ set_ctl_name(name, s1, suffix);
+ snd_ctl_rename(ac97->bus->card, kctl2, name);
+
return 0;
}
return -ENOENT;
#ifndef CHIP_AU8810
stream_t dma_wt[NR_WT];
wt_voice_t wt_voice[NR_WT]; /* WT register cache. */
- char mixwt[(NR_WT / NR_WTPB) * 6]; /* WT mixin objects */
+ s8 mixwt[(NR_WT / NR_WTPB) * 6]; /* WT mixin objects */
#endif
/* Global resources */
static void vortex_connect_default(vortex_t * vortex, int en);
static int vortex_adb_allocroute(vortex_t * vortex, int dma, int nr_ch,
int dir, int type, int subdev);
-static char vortex_adb_checkinout(vortex_t * vortex, int resmap[], int out,
- int restype);
+static int vortex_adb_checkinout(vortex_t * vortex, int resmap[], int out,
+ int restype);
#ifndef CHIP_AU8810
static int vortex_wt_allocroute(vortex_t * vortex, int dma, int nr_ch);
static void vortex_wt_connect(vortex_t * vortex, int en);
out: check the resource out if != 0; otherwise check it in.
restype: Indicates type of resource to be checked in or out.
*/
-static char
+static int
vortex_adb_checkinout(vortex_t * vortex, int resmap[], int out, int restype)
{
int i, qty = resnum[restype], resinuse = 0;
{
struct snd_kcontrol *kctl = ctl_find(card, src);
if (kctl) {
- strcpy(kctl->id.name, dst);
+ snd_ctl_rename(card, kctl, dst);
return 0;
}
return -ENOENT;
{
struct snd_kcontrol *kctl = ctl_find(card, src);
if (kctl) {
- strcpy(kctl->id.name, dst);
+ snd_ctl_rename(card, kctl, dst);
return 0;
}
return -ENOENT;
kctl = snd_hda_find_mixer_ctl(codec, oldname);
if (kctl)
- strcpy(kctl->id.name, newname);
+ snd_ctl_rename(codec->card, kctl, newname);
}
static void alc1220_fixup_gb_dual_codecs(struct hda_codec *codec,
{
struct hda_codec *cdc = dev_to_hda_codec(dev);
struct alc_spec *spec = cdc->spec;
- int ret;
- ret = component_bind_all(dev, spec->comps);
- if (ret)
- return ret;
-
- return 0;
+ return component_bind_all(dev, spec->comps);
}
static void comp_unbind(struct device *dev)
SND_PCI_QUIRK(0x103c, 0x8898, "HP EliteBook 845 G8 Notebook PC", ALC285_FIXUP_HP_LIMIT_INT_MIC_BOOST),
SND_PCI_QUIRK(0x103c, 0x88d0, "HP Pavilion 15-eh1xxx (mainboard 88D0)", ALC287_FIXUP_HP_GPIO_LED),
SND_PCI_QUIRK(0x103c, 0x8902, "HP OMEN 16", ALC285_FIXUP_HP_MUTE_LED),
+ SND_PCI_QUIRK(0x103c, 0x896d, "HP ZBook Firefly 16 G9", ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED),
SND_PCI_QUIRK(0x103c, 0x896e, "HP EliteBook x360 830 G9", ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED),
SND_PCI_QUIRK(0x103c, 0x8971, "HP EliteBook 830 G9", ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED),
SND_PCI_QUIRK(0x103c, 0x8972, "HP EliteBook 840 G9", ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED),
SND_PCI_QUIRK(0x103c, 0x89aa, "HP EliteBook 630 G9", ALC236_FIXUP_HP_GPIO_LED),
SND_PCI_QUIRK(0x103c, 0x89ac, "HP EliteBook 640 G9", ALC236_FIXUP_HP_GPIO_LED),
SND_PCI_QUIRK(0x103c, 0x89ae, "HP EliteBook 650 G9", ALC236_FIXUP_HP_GPIO_LED),
+ SND_PCI_QUIRK(0x103c, 0x89c0, "HP ZBook Power 15.6 G9", ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED),
SND_PCI_QUIRK(0x103c, 0x89c3, "Zbook Studio G9", ALC245_FIXUP_CS35L41_SPI_4_HP_GPIO_LED),
SND_PCI_QUIRK(0x103c, 0x89c6, "Zbook Fury 17 G9", ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED),
SND_PCI_QUIRK(0x103c, 0x89ca, "HP", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF),
SND_PCI_QUIRK(0x1043, 0x1ccd, "ASUS X555UB", ALC256_FIXUP_ASUS_MIC),
SND_PCI_QUIRK(0x1043, 0x1d42, "ASUS Zephyrus G14 2022", ALC289_FIXUP_ASUS_GA401),
SND_PCI_QUIRK(0x1043, 0x1d4e, "ASUS TM420", ALC256_FIXUP_ASUS_HPE),
+ SND_PCI_QUIRK(0x1043, 0x1e02, "ASUS UX3402", ALC245_FIXUP_CS35L41_SPI_2),
SND_PCI_QUIRK(0x1043, 0x1e11, "ASUS Zephyrus G15", ALC289_FIXUP_ASUS_GA502),
SND_PCI_QUIRK(0x1043, 0x1e51, "ASUS Zephyrus M15", ALC294_FIXUP_ASUS_GU502_PINS),
SND_PCI_QUIRK(0x1043, 0x1e5e, "ASUS ROG Strix G513", ALC294_FIXUP_ASUS_G513_PINS),
struct snd_rawmidi *rmidi;
struct snd_rawmidi_substream *input;
struct snd_rawmidi_substream *output;
- char istimer; /* timer in use */
+ signed char istimer; /* timer in use */
struct timer_list timer;
spinlock_t lock;
int pending;
pid_t playback_pid;
int running;
int system_sample_rate;
- const char *channel_map;
+ const signed char *channel_map;
int dev;
int irq;
unsigned long port;
where the data for that channel can be read/written from/to.
*/
-static const char channel_map_df_ss[HDSP_MAX_CHANNELS] = {
+static const signed char channel_map_df_ss[HDSP_MAX_CHANNELS] = {
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17,
18, 19, 20, 21, 22, 23, 24, 25
};
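Editor's note on the signed char conversions in this and the following hunks
(motivation assumed from context): these maps use -1 as an "unused channel"
sentinel, and plain char is unsigned on some architectures (e.g. arm64), so
a lookup such as

	if (hdsp->channel_map[channel] < 0)
		return NULL;

could never trigger there because -1 was stored as 255. Declaring the maps
and the pointers that reference them "signed char" makes the sentinel test
behave the same everywhere.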
-1, -1, -1, -1, -1, -1, -1, -1
};
-static const char channel_map_ds[HDSP_MAX_CHANNELS] = {
+static const signed char channel_map_ds[HDSP_MAX_CHANNELS] = {
/* ADAT channels are remapped */
1, 3, 5, 7, 9, 11, 13, 15, 17, 19, 21, 23,
/* channels 12 and 13 are S/PDIF */
-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1
};
-static const char channel_map_H9632_ss[HDSP_MAX_CHANNELS] = {
+static const signed char channel_map_H9632_ss[HDSP_MAX_CHANNELS] = {
/* ADAT channels */
0, 1, 2, 3, 4, 5, 6, 7,
/* SPDIF */
-1, -1
};
-static const char channel_map_H9632_ds[HDSP_MAX_CHANNELS] = {
+static const signed char channel_map_H9632_ds[HDSP_MAX_CHANNELS] = {
/* ADAT */
1, 3, 5, 7,
/* SPDIF */
-1, -1, -1, -1, -1, -1
};
-static const char channel_map_H9632_qs[HDSP_MAX_CHANNELS] = {
+static const signed char channel_map_H9632_qs[HDSP_MAX_CHANNELS] = {
/* ADAT is disabled in this mode */
/* SPDIF */
8, 9,
return hdsp_hw_pointer(hdsp);
}
-static char *hdsp_channel_buffer_location(struct hdsp *hdsp,
+static signed char *hdsp_channel_buffer_location(struct hdsp *hdsp,
int stream,
int channel)
void __user *src, unsigned long count)
{
struct hdsp *hdsp = snd_pcm_substream_chip(substream);
- char *channel_buf;
+ signed char *channel_buf;
if (snd_BUG_ON(pos + count > HDSP_CHANNEL_BUFFER_BYTES))
return -EINVAL;
void *src, unsigned long count)
{
struct hdsp *hdsp = snd_pcm_substream_chip(substream);
- char *channel_buf;
+ signed char *channel_buf;
channel_buf = hdsp_channel_buffer_location(hdsp, substream->pstr->stream, channel);
if (snd_BUG_ON(!channel_buf))
void __user *dst, unsigned long count)
{
struct hdsp *hdsp = snd_pcm_substream_chip(substream);
- char *channel_buf;
+ signed char *channel_buf;
if (snd_BUG_ON(pos + count > HDSP_CHANNEL_BUFFER_BYTES))
return -EINVAL;
void *dst, unsigned long count)
{
struct hdsp *hdsp = snd_pcm_substream_chip(substream);
- char *channel_buf;
+ signed char *channel_buf;
channel_buf = hdsp_channel_buffer_location(hdsp, substream->pstr->stream, channel);
if (snd_BUG_ON(!channel_buf))
unsigned long count)
{
struct hdsp *hdsp = snd_pcm_substream_chip(substream);
- char *channel_buf;
+ signed char *channel_buf;
channel_buf = hdsp_channel_buffer_location (hdsp, substream->pstr->stream, channel);
if (snd_BUG_ON(!channel_buf))
int last_spdif_sample_rate; /* so that we can catch externally ... */
int last_adat_sample_rate; /* ... induced rate changes */
- const char *channel_map;
+ const signed char *channel_map;
struct snd_card *card;
struct snd_pcm *pcm;
where the data for that channel can be read/written from/to.
*/
-static const char channel_map_9652_ss[26] = {
+static const signed char channel_map_9652_ss[26] = {
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17,
18, 19, 20, 21, 22, 23, 24, 25
};
-static const char channel_map_9636_ss[26] = {
+static const signed char channel_map_9636_ss[26] = {
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15,
/* channels 16 and 17 are S/PDIF */
24, 25,
-1, -1, -1, -1, -1, -1, -1, -1
};
-static const char channel_map_9652_ds[26] = {
+static const signed char channel_map_9652_ds[26] = {
/* ADAT channels are remapped */
1, 3, 5, 7, 9, 11, 13, 15, 17, 19, 21, 23,
/* channels 12 and 13 are S/PDIF */
-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1
};
-static const char channel_map_9636_ds[26] = {
+static const signed char channel_map_9636_ds[26] = {
/* ADAT channels are remapped */
1, 3, 5, 7, 9, 11, 13, 15,
/* channels 8 and 9 are S/PDIF */
return rme9652_hw_pointer(rme9652);
}
-static char *rme9652_channel_buffer_location(struct snd_rme9652 *rme9652,
+static signed char *rme9652_channel_buffer_location(struct snd_rme9652 *rme9652,
int stream,
int channel)
void __user *src, unsigned long count)
{
struct snd_rme9652 *rme9652 = snd_pcm_substream_chip(substream);
- char *channel_buf;
+ signed char *channel_buf;
if (snd_BUG_ON(pos + count > RME9652_CHANNEL_BUFFER_BYTES))
return -EINVAL;
void *src, unsigned long count)
{
struct snd_rme9652 *rme9652 = snd_pcm_substream_chip(substream);
- char *channel_buf;
+ signed char *channel_buf;
channel_buf = rme9652_channel_buffer_location(rme9652,
substream->pstr->stream,
void __user *dst, unsigned long count)
{
struct snd_rme9652 *rme9652 = snd_pcm_substream_chip(substream);
- char *channel_buf;
+ signed char *channel_buf;
if (snd_BUG_ON(pos + count > RME9652_CHANNEL_BUFFER_BYTES))
return -EINVAL;
void *dst, unsigned long count)
{
struct snd_rme9652 *rme9652 = snd_pcm_substream_chip(substream);
- char *channel_buf;
+ signed char *channel_buf;
channel_buf = rme9652_channel_buffer_location(rme9652,
substream->pstr->stream,
unsigned long count)
{
struct snd_rme9652 *rme9652 = snd_pcm_substream_chip(substream);
- char *channel_buf;
+ signed char *channel_buf;
channel_buf = rme9652_channel_buffer_location (rme9652,
substream->pstr->stream,
};
static const struct dmi_system_id yc_acp_quirk_table[] = {
+ {
+ .driver_data = &acp6x_card,
+ .matches = {
+ DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
+ DMI_MATCH(DMI_PRODUCT_NAME, "21D0"),
+ }
+ },
+ {
+ .driver_data = &acp6x_card,
+ .matches = {
+ DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
+ DMI_MATCH(DMI_PRODUCT_NAME, "21D1"),
+ }
+ },
{
.driver_data = &acp6x_card,
.matches = {
config SND_SOC_TLV320ADC3XXX
tristate "Texas Instruments TLV320ADC3001/3101 audio ADC"
depends on I2C
+ depends on GPIOLIB
help
Enable support for Texas Instruments TLV320ADC3001 and TLV320ADC3101
ADCs.
#define CX2072X_PLBK_DRC_PARM_LEN 9
#define CX2072X_CLASSD_AMP_LEN 6
-/* DAI interfae type */
+/* DAI interface type */
#define CX2072X_DAI_HIFI 1
#define CX2072X_DAI_DSP 2
#define CX2072X_DAI_DSP_PWM 3 /* 4 ch, including mic and AEC */
#define REG_CGR3_GO1L_OFFSET 0
#define REG_CGR3_GO1L_MASK (0x1f << REG_CGR3_GO1L_OFFSET)
+#define REG_CGR10_GIL_OFFSET 0
+#define REG_CGR10_GIR_OFFSET 4
+
struct jz_icdc {
struct regmap *regmap;
void __iomem *base;
struct clk *clk;
};
-static const SNDRV_CTL_TLVD_DECLARE_DB_LINEAR(jz4725b_dac_tlv, -2250, 0);
-static const SNDRV_CTL_TLVD_DECLARE_DB_LINEAR(jz4725b_line_tlv, -1500, 600);
+static const SNDRV_CTL_TLVD_DECLARE_DB_SCALE(jz4725b_adc_tlv, 0, 150, 0);
+static const SNDRV_CTL_TLVD_DECLARE_DB_SCALE(jz4725b_dac_tlv, -2250, 150, 0);
static const struct snd_kcontrol_new jz4725b_codec_controls[] = {
SOC_DOUBLE_TLV("Master Playback Volume",
REG_CGR1_GODL_OFFSET,
REG_CGR1_GODR_OFFSET,
0xf, 1, jz4725b_dac_tlv),
- SOC_DOUBLE_R_TLV("Master Capture Volume",
- JZ4725B_CODEC_REG_CGR3,
- JZ4725B_CODEC_REG_CGR2,
- REG_CGR2_GO1R_OFFSET,
- 0x1f, 1, jz4725b_line_tlv),
+ SOC_DOUBLE_TLV("Master Capture Volume",
+ JZ4725B_CODEC_REG_CGR10,
+ REG_CGR10_GIL_OFFSET,
+ REG_CGR10_GIR_OFFSET,
+ 0xf, 0, jz4725b_adc_tlv),
SOC_SINGLE("Master Playback Switch", JZ4725B_CODEC_REG_CR1,
REG_CR1_DAC_MUTE_OFFSET, 1, 1),
jz4725b_codec_adc_src_texts,
jz4725b_codec_adc_src_values);
static const struct snd_kcontrol_new jz4725b_codec_adc_src_ctrl =
- SOC_DAPM_ENUM("Route", jz4725b_codec_adc_src_enum);
+ SOC_DAPM_ENUM("ADC Source Capture Route", jz4725b_codec_adc_src_enum);
static const struct snd_kcontrol_new jz4725b_codec_mixer_controls[] = {
SOC_DAPM_SINGLE("Line In Bypass", JZ4725B_CODEC_REG_CR1,
SND_SOC_DAPM_ADC("ADC", "Capture",
JZ4725B_CODEC_REG_PMR1, REG_PMR1_SB_ADC_OFFSET, 1),
- SND_SOC_DAPM_MUX("ADC Source", SND_SOC_NOPM, 0, 0,
+ SND_SOC_DAPM_MUX("ADC Source Capture Route", SND_SOC_NOPM, 0, 0,
&jz4725b_codec_adc_src_ctrl),
/* Mixer */
SND_SOC_DAPM_MIXER("DAC to Mixer", JZ4725B_CODEC_REG_CR1,
REG_CR1_DACSEL_OFFSET, 0, NULL, 0),
- SND_SOC_DAPM_MIXER("Line In", SND_SOC_NOPM, 0, 0, NULL, 0),
+ SND_SOC_DAPM_MIXER("Line In", JZ4725B_CODEC_REG_PMR1,
+ REG_PMR1_SB_LIN_OFFSET, 1, NULL, 0),
SND_SOC_DAPM_MIXER("HP Out", JZ4725B_CODEC_REG_CR1,
REG_CR1_HP_DIS_OFFSET, 1, NULL, 0),
{"Mixer", NULL, "DAC to Mixer"},
{"Mixer to ADC", NULL, "Mixer"},
- {"ADC Source", "Mixer", "Mixer to ADC"},
- {"ADC Source", "Line In", "Line In"},
- {"ADC Source", "Mic 1", "Mic 1"},
- {"ADC Source", "Mic 2", "Mic 2"},
- {"ADC", NULL, "ADC Source"},
+ {"ADC Source Capture Route", "Mixer", "Mixer to ADC"},
+ {"ADC Source Capture Route", "Line In", "Line In"},
+ {"ADC Source Capture Route", "Mic 1", "Mic 1"},
+ {"ADC Source Capture Route", "Mic 2", "Mic 2"},
+ {"ADC", NULL, "ADC Source Capture Route"},
{"Out Stage", NULL, "Mixer"},
{"HP Out", NULL, "Out Stage"},
dev_err(chip->dev, "read chip revision fail\n");
goto probe_fail;
}
+ pm_runtime_set_active(chip->dev);
+ pm_runtime_enable(chip->dev);
ret = devm_snd_soc_register_component(chip->dev,
&mt6660_component_driver,
&mt6660_codec_dai, 1);
- if (!ret) {
- pm_runtime_set_active(chip->dev);
- pm_runtime_enable(chip->dev);
- }
+ if (ret)
+ pm_runtime_disable(chip->dev);
return ret;
unsigned int rx_mask, int slots, int slot_width)
{
struct snd_soc_component *component = dai->component;
- unsigned int val = 0, rx_slotnum;
+ unsigned int cn = 0, cl = 0, rx_slotnum;
int ret = 0, first_bit;
switch (slots) {
case 4:
- val |= RT1019_I2S_TX_4CH;
+ cn = RT1019_I2S_TX_4CH;
break;
case 6:
- val |= RT1019_I2S_TX_6CH;
+ cn = RT1019_I2S_TX_6CH;
break;
case 8:
- val |= RT1019_I2S_TX_8CH;
+ cn = RT1019_I2S_TX_8CH;
break;
case 2:
break;
switch (slot_width) {
case 20:
- val |= RT1019_I2S_DL_20;
+ cl = RT1019_TDM_CL_20;
break;
case 24:
- val |= RT1019_I2S_DL_24;
+ cl = RT1019_TDM_CL_24;
break;
case 32:
- val |= RT1019_I2S_DL_32;
+ cl = RT1019_TDM_CL_32;
break;
case 8:
- val |= RT1019_I2S_DL_8;
+ cl = RT1019_TDM_CL_8;
break;
case 16:
break;
goto _set_tdm_err_;
}
+ snd_soc_component_update_bits(component, RT1019_TDM_1,
+ RT1019_TDM_CL_MASK, cl);
snd_soc_component_update_bits(component, RT1019_TDM_2,
- RT1019_I2S_CH_TX_MASK | RT1019_I2S_DF_MASK, val);
+ RT1019_I2S_CH_TX_MASK, cn);
_set_tdm_err_:
return ret;
#define RT1019_TDM_BCLK_MASK (0x1 << 6)
#define RT1019_TDM_BCLK_NORM (0x0 << 6)
#define RT1019_TDM_BCLK_INV (0x1 << 6)
+#define RT1019_TDM_CL_MASK (0x7)
+#define RT1019_TDM_CL_8 (0x4)
+#define RT1019_TDM_CL_32 (0x3)
+#define RT1019_TDM_CL_24 (0x2)
+#define RT1019_TDM_CL_20 (0x1)
+#define RT1019_TDM_CL_16 (0x0)
/* 0x0401 TDM Control-2 */
#define RT1019_I2S_CH_TX_MASK (0x3 << 6)
case 0x3008:
case 0x300a:
case 0xc000:
+ case 0xc710:
case 0xc860 ... 0xc863:
case 0xc870 ... 0xc873:
return true;
{
struct rt1308_sdw_priv *rt1308 = dev_get_drvdata(dev);
int ret = 0;
+ unsigned int tmp;
if (rt1308->hw_init)
return 0;
/* sw reset */
regmap_write(rt1308->regmap, RT1308_SDW_RESET, 0);
+ regmap_read(rt1308->regmap, 0xc710, &tmp);
+ rt1308->hw_ver = tmp;
+ dev_dbg(dev, "%s, hw_ver=0x%x\n", __func__, rt1308->hw_ver);
+
/* initial settings */
regmap_write(rt1308->regmap, 0xc103, 0xc0);
regmap_write(rt1308->regmap, 0xc030, 0x17);
regmap_write(rt1308->regmap, 0xc062, 0x05);
regmap_write(rt1308->regmap, 0xc171, 0x07);
regmap_write(rt1308->regmap, 0xc173, 0x0d);
- regmap_write(rt1308->regmap, 0xc311, 0x7f);
- regmap_write(rt1308->regmap, 0xc900, 0x90);
+ if (rt1308->hw_ver == RT1308_VER_C) {
+ regmap_write(rt1308->regmap, 0xc311, 0x7f);
+ regmap_write(rt1308->regmap, 0xc300, 0x09);
+ } else {
+ regmap_write(rt1308->regmap, 0xc311, 0x4f);
+ regmap_write(rt1308->regmap, 0xc300, 0x0b);
+ }
+ regmap_write(rt1308->regmap, 0xc900, 0x5a);
regmap_write(rt1308->regmap, 0xc1a0, 0x84);
regmap_write(rt1308->regmap, 0xc1a1, 0x01);
regmap_write(rt1308->regmap, 0xc360, 0x78);
regmap_write(rt1308->regmap, 0xc070, 0x00);
regmap_write(rt1308->regmap, 0xc100, 0xd7);
regmap_write(rt1308->regmap, 0xc101, 0xd7);
- regmap_write(rt1308->regmap, 0xc300, 0x09);
if (rt1308->first_hw_init) {
regcache_cache_bypass(rt1308->regmap, false);
{ 0x3005, 0x23 },
{ 0x3008, 0x02 },
{ 0x300a, 0x00 },
+ { 0xc000 | (RT1308_DATA_PATH << 4), 0x00 },
{ 0xc003 | (RT1308_DAC_SET << 4), 0x00 },
{ 0xc000 | (RT1308_POWER << 4), 0x00 },
{ 0xc001 | (RT1308_POWER << 4), 0x00 },
{ 0xc002 | (RT1308_POWER << 4), 0x00 },
+ { 0xc000 | (RT1308_POWER_STATUS << 4), 0x00 },
};
#define RT1308_SDW_OFFSET 0xc000
bool first_hw_init;
int rx_mask;
int slots;
+ int hw_ver;
};
struct sdw_stream_data {
RT1308_AIFS
};
+enum rt1308_hw_ver {
+ RT1308_VER_C = 2,
+ RT1308_VER_D
+};
+
#endif /* end of _RT1308_H_ */
unsigned int rx_mask, int slots, int slot_width)
{
struct snd_soc_component *component = dai->component;
- unsigned int cl, val = 0;
+ unsigned int cl, val = 0, tx_slotnum;
if (tx_mask || rx_mask)
snd_soc_component_update_bits(component,
snd_soc_component_update_bits(component,
RT5682S_TDM_ADDA_CTRL_2, RT5682S_TDM_EN, 0);
+ /* Tx slot configuration */
+ tx_slotnum = hweight_long(tx_mask);
+ if (tx_slotnum) {
+ if (tx_slotnum > slots) {
+ dev_err(component->dev, "Invalid or oversized Tx slots.\n");
+ return -EINVAL;
+ }
+ val |= (tx_slotnum - 1) << RT5682S_TDM_ADC_DL_SFT;
+ }
+
switch (slots) {
case 4:
val |= RT5682S_TDM_TX_CH_4;
}
snd_soc_component_update_bits(component, RT5682S_TDM_CTRL,
- RT5682S_TDM_TX_CH_MASK | RT5682S_TDM_RX_CH_MASK, val);
+ RT5682S_TDM_TX_CH_MASK | RT5682S_TDM_RX_CH_MASK |
+ RT5682S_TDM_ADC_DL_MASK, val);
switch (slot_width) {
case 8:
#define RT5682S_TDM_RX_CH_8 (0x3 << 8)
#define RT5682S_TDM_ADC_LCA_MASK (0x7 << 4)
#define RT5682S_TDM_ADC_LCA_SFT 4
+#define RT5682S_TDM_ADC_DL_MASK (0x3 << 0)
#define RT5682S_TDM_ADC_DL_SFT 0
/* TDM control 2 (0x007a) */
.of_match_table = tlv320adc3xxx_of_match,
},
.probe_new = adc3xxx_i2c_probe,
- .remove = adc3xxx_i2c_remove,
+ .remove = __exit_p(adc3xxx_i2c_remove),
.id_table = adc3xxx_i2c_id,
};
regmap_update_bits(arizona->regmap, wm5102_digital_vu[i],
WM5102_DIG_VU, WM5102_DIG_VU);
+ pm_runtime_enable(&pdev->dev);
+ pm_runtime_idle(&pdev->dev);
+
ret = arizona_request_irq(arizona, ARIZONA_IRQ_DSP_IRQ1,
"ADSP2 Compressed IRQ", wm5102_adsp2_irq,
wm5102);
goto err_spk_irqs;
}
- pm_runtime_enable(&pdev->dev);
- pm_runtime_idle(&pdev->dev);
-
return ret;
err_spk_irqs:
arizona_set_irq_wake(arizona, ARIZONA_IRQ_DSP_IRQ1, 0);
arizona_free_irq(arizona, ARIZONA_IRQ_DSP_IRQ1, wm5102);
err_jack_codec_dev:
+ pm_runtime_disable(&pdev->dev);
arizona_jack_codec_dev_remove(&wm5102->core);
return ret;
regmap_update_bits(arizona->regmap, wm5110_digital_vu[i],
WM5110_DIG_VU, WM5110_DIG_VU);
+ pm_runtime_enable(&pdev->dev);
+ pm_runtime_idle(&pdev->dev);
+
ret = arizona_request_irq(arizona, ARIZONA_IRQ_DSP_IRQ1,
"ADSP2 Compressed IRQ", wm5110_adsp2_irq,
wm5110);
goto err_spk_irqs;
}
- pm_runtime_enable(&pdev->dev);
- pm_runtime_idle(&pdev->dev);
-
return ret;
err_spk_irqs:
arizona_set_irq_wake(arizona, ARIZONA_IRQ_DSP_IRQ1, 0);
arizona_free_irq(arizona, ARIZONA_IRQ_DSP_IRQ1, wm5110);
err_jack_codec_dev:
+ pm_runtime_disable(&pdev->dev);
arizona_jack_codec_dev_remove(&wm5110->core);
return ret;
4, 1, 0, inmix_tlv),
};
+static int tp_event(struct snd_soc_dapm_widget *w,
+ struct snd_kcontrol *kcontrol, int event)
+{
+ int ret, reg, val, mask;
+ struct snd_soc_component *component = snd_soc_dapm_to_component(w->dapm);
+
+ ret = pm_runtime_resume_and_get(component->dev);
+ if (ret < 0) {
+ dev_err(component->dev, "Failed to resume device: %d\n", ret);
+ return ret;
+ }
+
+ reg = WM8962_ADDITIONAL_CONTROL_4;
+
+ if (!strcmp(w->name, "TEMP_HP")) {
+ mask = WM8962_TEMP_ENA_HP_MASK;
+ val = WM8962_TEMP_ENA_HP;
+ } else if (!strcmp(w->name, "TEMP_SPK")) {
+ mask = WM8962_TEMP_ENA_SPK_MASK;
+ val = WM8962_TEMP_ENA_SPK;
+ } else {
+ pm_runtime_put(component->dev);
+ return -EINVAL;
+ }
+
+ switch (event) {
+ case SND_SOC_DAPM_POST_PMD:
+ val = 0;
+ fallthrough;
+ case SND_SOC_DAPM_POST_PMU:
+ ret = snd_soc_component_update_bits(component, reg, mask, val);
+ break;
+ default:
+ WARN(1, "Invalid event %d\n", event);
+ pm_runtime_put(component->dev);
+ return -EINVAL;
+ }
+
+ pm_runtime_put(component->dev);
+
+ return 0;
+}
+
static int cp_event(struct snd_soc_dapm_widget *w,
struct snd_kcontrol *kcontrol, int event)
{
SND_SOC_DAPM_SUPPLY_S("DSP2", 1, WM8962_DSP2_POWER_MANAGEMENT,
WM8962_DSP2_ENA_SHIFT, 0, dsp2_event,
SND_SOC_DAPM_POST_PMU | SND_SOC_DAPM_PRE_PMD),
-SND_SOC_DAPM_SUPPLY("TEMP_HP", WM8962_ADDITIONAL_CONTROL_4, 2, 0, NULL, 0),
-SND_SOC_DAPM_SUPPLY("TEMP_SPK", WM8962_ADDITIONAL_CONTROL_4, 1, 0, NULL, 0),
+SND_SOC_DAPM_SUPPLY("TEMP_HP", SND_SOC_NOPM, 0, 0, tp_event,
+ SND_SOC_DAPM_POST_PMU|SND_SOC_DAPM_POST_PMD),
+SND_SOC_DAPM_SUPPLY("TEMP_SPK", SND_SOC_NOPM, 0, 0, tp_event,
+ SND_SOC_DAPM_POST_PMU|SND_SOC_DAPM_POST_PMD),
SND_SOC_DAPM_MIXER("INPGAL", WM8962_LEFT_INPUT_PGA_CONTROL, 4, 0,
inpgal, ARRAY_SIZE(inpgal)),
if (ret < 0)
goto err_pm_runtime;
+ regmap_update_bits(wm8962->regmap, WM8962_ADDITIONAL_CONTROL_4,
+ WM8962_TEMP_ENA_HP_MASK, 0);
+ regmap_update_bits(wm8962->regmap, WM8962_ADDITIONAL_CONTROL_4,
+ WM8962_TEMP_ENA_SPK_MASK, 0);
+
regcache_cache_only(wm8962->regmap, true);
/* The drivers should power up as needed */
regmap_update_bits(arizona->regmap, wm8997_digital_vu[i],
WM8997_DIG_VU, WM8997_DIG_VU);
+ pm_runtime_enable(&pdev->dev);
+ pm_runtime_idle(&pdev->dev);
+
arizona_init_common(arizona);
ret = arizona_init_vol_limit(arizona);
goto err_spk_irqs;
}
- pm_runtime_enable(&pdev->dev);
- pm_runtime_idle(&pdev->dev);
-
return ret;
err_spk_irqs:
arizona_free_spk_irqs(arizona);
err_jack_codec_dev:
+ pm_runtime_disable(&pdev->dev);
arizona_jack_codec_dev_remove(&wm8997->core);
return ret;
* or has convert-xxx property
*/
if ((of_get_child_count(codec_port) > 1) ||
- (adata->convert_rate || adata->convert_channels))
+ asoc_simple_is_convert_required(adata))
return true;
return false;
}
EXPORT_SYMBOL_GPL(asoc_simple_parse_convert);
+/**
+ * asoc_simple_is_convert_required() - Query if HW param conversion was requested
+ * @data: Link data.
+ *
+ * Returns true if any HW param conversion was requested for this DAI link
+ * via any of the "convert-xxx" properties.
+ */
+bool asoc_simple_is_convert_required(const struct asoc_simple_data *data)
+{
+ return data->convert_rate ||
+ data->convert_channels ||
+ data->convert_sample_format;
+}
+EXPORT_SYMBOL_GPL(asoc_simple_is_convert_required);
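A hedged usage sketch (illustrative; np and dai_link are assumed to come
from the caller): the helper folds all "convert-xxx" checks into a single
predicate, e.g. when deciding whether a link needs DPCM handling.

	struct asoc_simple_data adata = { 0 };

	asoc_simple_parse_convert(np, NULL, &adata);
	if (asoc_simple_is_convert_required(&adata))
		dai_link->dynamic = 1;	/* illustrative consequence */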
+
int asoc_simple_parse_daifmt(struct device *dev,
struct device_node *node,
struct device_node *codec,
* or has convert-xxx property
*/
if (dpcm_selectable &&
- (num > 2 ||
- adata.convert_rate || adata.convert_channels)) {
+ (num > 2 || asoc_simple_is_convert_required(&adata))) {
/*
* np
* |1(CPU)|0(Codec) li->cpu
SOF_RT5682_SSP_AMP(2) |
SOF_RT5682_NUM_HDMIDEV(4)),
},
+ {
+ .callback = sof_rt5682_quirk_cb,
+ .matches = {
+ DMI_MATCH(DMI_PRODUCT_FAMILY, "Google_Rex"),
+ },
+ .driver_data = (void *)(SOF_RT5682_MCLK_EN |
+ SOF_RT5682_SSP_CODEC(2) |
+ SOF_SPEAKER_AMP_PRESENT |
+ SOF_RT5682_SSP_AMP(0) |
+ SOF_RT5682_NUM_HDMIDEV(4)
+ ),
+ },
{}
};
SOF_SDW_PCH_DMIC |
RT711_JD1),
},
+ {
+ /* NUC15 LAPBC710 SKUs */
+ .callback = sof_sdw_quirk_cb,
+ .matches = {
+ DMI_MATCH(DMI_BOARD_VENDOR, "Intel Corporation"),
+ DMI_MATCH(DMI_BOARD_NAME, "LAPBC710"),
+ },
+ .driver_data = (void *)(SOF_SDW_TGL_HDMI |
+ SOF_SDW_PCH_DMIC |
+ RT711_JD1),
+ },
/* TigerLake-SDCA devices */
{
.callback = sof_sdw_quirk_cb,
#endif /* CONFIG_SND_SOC_INTEL_SKYLAKE_HDAUDIO_CODEC */
-static void skl_codec_device_exit(struct device *dev)
-{
- snd_hdac_device_exit(dev_to_hdac_dev(dev));
-}
-
static struct hda_codec *skl_codec_device_init(struct hdac_bus *bus, int addr)
{
struct hda_codec *codec;
}
codec->core.type = HDA_DEV_ASOC;
- codec->core.dev.release = skl_codec_device_exit;
ret = snd_hdac_device_register(&codec->core);
if (ret) {
dev_err(bus->dev, "failed to register hdac device\n");
- snd_hdac_device_exit(&codec->core);
+ put_device(&codec->core.dev);
return ERR_PTR(ret);
}
config SND_SOC_SC7180
tristate "SoC Machine driver for SC7180 boards"
depends on I2C && GPIOLIB
+ depends on SOUNDWIRE || SOUNDWIRE=n
select SND_SOC_QCOM_COMMON
select SND_SOC_LPASS_SC7180
select SND_SOC_MAX98357A
return true;
if (reg == LPASS_HDMI_TX_LEGACY_ADDR(v))
return true;
+ if (reg == LPASS_HDMI_TX_VBIT_CTL_ADDR(v))
+ return true;
+ if (reg == LPASS_HDMI_TX_PARITY_ADDR(v))
+ return true;
for (i = 0; i < v->hdmi_rdma_channels; ++i) {
if (reg == LPAIF_HDMI_RDMACURR_REG(v, i))
return true;
+ if (reg == LPASS_HDMI_TX_DMA_ADDR(v, i))
+ return true;
+ if (reg == LPASS_HDMI_TX_CH_LSB_ADDR(v, i))
+ return true;
+ if (reg == LPASS_HDMI_TX_CH_MSB_ADDR(v, i))
+ return true;
}
return false;
}
int i;
for_each_rtd_components(rtd, i, component) {
- int ret = pm_runtime_resume_and_get(component->dev);
- if (ret < 0 && ret != -EACCES)
+ int ret = pm_runtime_get_sync(component->dev);
+ if (ret < 0 && ret != -EACCES) {
+ pm_runtime_put_noidle(component->dev);
return soc_component_ret(component, ret);
+ }
/* mark stream if succeeded */
soc_component_mark_push(component, stream, pm);
}
#define is_generic_config(x) 0
#endif
-static void hda_codec_device_exit(struct device *dev)
-{
- snd_hdac_device_exit(dev_to_hdac_dev(dev));
-}
-
static struct hda_codec *hda_codec_device_init(struct hdac_bus *bus, int addr, int type)
{
struct hda_codec *codec;
}
codec->core.type = type;
- codec->core.dev.release = hda_codec_device_exit;
ret = snd_hdac_device_register(&codec->core);
if (ret) {
dev_err(bus->dev, "failed to register hdac device\n");
- snd_hdac_device_exit(&codec->core);
+ put_device(&codec->core.dev);
return ERR_PTR(ret);
}
[SOF_INTEL_IPC4] = "intel/sof-ace-tplg",
},
.default_fw_filename = {
- [SOF_INTEL_IPC4] = "dsp_basefw.bin",
+ [SOF_INTEL_IPC4] = "sof-mtl.ri",
},
.nocodec_tplg_filename = "sof-mtl-nocodec.tplg",
.ops = &sof_mtl_ops,
.ops_init = sof_tgl_ops_init,
};
+static const struct sof_dev_desc adl_n_desc = {
+ .machines = snd_soc_acpi_intel_adl_machines,
+ .alt_machines = snd_soc_acpi_intel_adl_sdw_machines,
+ .use_acpi_target_states = true,
+ .resindex_lpe_base = 0,
+ .resindex_pcicfg_base = -1,
+ .resindex_imr_base = -1,
+ .irqindex_host_ipc = -1,
+ .chip_info = &tgl_chip_info,
+ .ipc_supported_mask = BIT(SOF_IPC) | BIT(SOF_INTEL_IPC4),
+ .ipc_default = SOF_IPC,
+ .default_fw_path = {
+ [SOF_IPC] = "intel/sof",
+ [SOF_INTEL_IPC4] = "intel/avs/adl-n",
+ },
+ .default_tplg_path = {
+ [SOF_IPC] = "intel/sof-tplg",
+ [SOF_INTEL_IPC4] = "intel/avs-tplg",
+ },
+ .default_fw_filename = {
+ [SOF_IPC] = "sof-adl-n.ri",
+ [SOF_INTEL_IPC4] = "dsp_basefw.bin",
+ },
+ .nocodec_tplg_filename = "sof-adl-nocodec.tplg",
+ .ops = &sof_tgl_ops,
+ .ops_init = sof_tgl_ops_init,
+};
+
static const struct sof_dev_desc rpls_desc = {
.machines = snd_soc_acpi_intel_rpl_machines,
.alt_machines = snd_soc_acpi_intel_rpl_sdw_machines,
{ PCI_DEVICE(0x8086, 0x51cf), /* RPL-PX */
.driver_data = (unsigned long)&rpl_desc},
{ PCI_DEVICE(0x8086, 0x54c8), /* ADL-N */
- .driver_data = (unsigned long)&adl_desc},
+ .driver_data = (unsigned long)&adl_n_desc},
{ 0, }
};
MODULE_DEVICE_TABLE(pci, sof_pci_ids);
int id;
u32 slot_offset;
void *log_buffer;
+ struct mutex buffer_lock; /* for log_buffer alloc/free */
u32 host_read_ptr;
u32 dsp_write_ptr;
/* pos update IPC arrived before the slot offset is known, queried */
struct sof_mtrace_core_data *core_data = inode->i_private;
int ret;
+ mutex_lock(&core_data->buffer_lock);
+
+ if (core_data->log_buffer) {
+ ret = -EBUSY;
+ goto out;
+ }
+
ret = debugfs_file_get(file->f_path.dentry);
if (unlikely(ret))
- return ret;
+ goto out;
core_data->log_buffer = kmalloc(SOF_MTRACE_SLOT_SIZE, GFP_KERNEL);
if (!core_data->log_buffer) {
debugfs_file_put(file->f_path.dentry);
- return -ENOMEM;
+ ret = -ENOMEM;
+ goto out;
}
ret = simple_open(inode, file);
debugfs_file_put(file->f_path.dentry);
}
+out:
+ mutex_unlock(&core_data->buffer_lock);
+
return ret;
}
debugfs_file_put(file->f_path.dentry);
+ mutex_lock(&core_data->buffer_lock);
kfree(core_data->log_buffer);
+ core_data->log_buffer = NULL;
+ mutex_unlock(&core_data->buffer_lock);
return 0;
}
struct sof_mtrace_core_data *core_data = &priv->cores[i];
init_waitqueue_head(&core_data->trace_sleep);
+ mutex_init(&core_data->buffer_lock);
core_data->sdev = sdev;
core_data->id = i;
}
*/
int snd_emux_free(struct snd_emux *emu)
{
- unsigned long flags;
-
if (! emu)
return -EINVAL;
- spin_lock_irqsave(&emu->voice_lock, flags);
- if (emu->timer_active)
- del_timer(&emu->tlist);
- spin_unlock_irqrestore(&emu->voice_lock, flags);
+ del_timer_sync(&emu->tlist);
snd_emux_proc_free(emu);
snd_emux_delete_virmidi(emu);
static const struct snd_usb_implicit_fb_match playback_implicit_fb_quirks[] = {
/* Fixed EP */
/* FIXME: check the availability of generic matching */
+ IMPLICIT_FB_FIXED_DEV(0x0763, 0x2030, 0x81, 3), /* M-Audio Fast Track C400 */
+ IMPLICIT_FB_FIXED_DEV(0x0763, 0x2031, 0x81, 3), /* M-Audio Fast Track C600 */
IMPLICIT_FB_FIXED_DEV(0x0763, 0x2080, 0x81, 2), /* M-Audio FastTrack Ultra */
IMPLICIT_FB_FIXED_DEV(0x0763, 0x2081, 0x81, 2), /* M-Audio FastTrack Ultra */
IMPLICIT_FB_FIXED_DEV(0x2466, 0x8010, 0x81, 2), /* Fractal Audio Axe-Fx III */
if (!found)
return;
- strscpy(kctl->id.name, "Headphone", sizeof(kctl->id.name));
+ snd_ctl_rename(card, kctl, "Headphone");
}
static const struct usb_feature_control_info *get_feature_control_info(int control)
#define ARM_CPU_IMP_FUJITSU 0x46
#define ARM_CPU_IMP_HISI 0x48
#define ARM_CPU_IMP_APPLE 0x61
+#define ARM_CPU_IMP_AMPERE 0xC0
#define ARM_CPU_PART_AEM_V8 0xD0F
#define ARM_CPU_PART_FOUNDATION 0xD00
#define APPLE_CPU_PART_M1_ICESTORM_MAX 0x028
#define APPLE_CPU_PART_M1_FIRESTORM_MAX 0x029
+#define AMPERE_CPU_PART_AMPERE1 0xAC3
+
#define MIDR_CORTEX_A53 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A53)
#define MIDR_CORTEX_A57 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A57)
#define MIDR_CORTEX_A72 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A72)
#define MIDR_APPLE_M1_FIRESTORM_PRO MIDR_CPU_MODEL(ARM_CPU_IMP_APPLE, APPLE_CPU_PART_M1_FIRESTORM_PRO)
#define MIDR_APPLE_M1_ICESTORM_MAX MIDR_CPU_MODEL(ARM_CPU_IMP_APPLE, APPLE_CPU_PART_M1_ICESTORM_MAX)
#define MIDR_APPLE_M1_FIRESTORM_MAX MIDR_CPU_MODEL(ARM_CPU_IMP_APPLE, APPLE_CPU_PART_M1_FIRESTORM_MAX)
+#define MIDR_AMPERE1 MIDR_CPU_MODEL(ARM_CPU_IMP_AMPERE, AMPERE_CPU_PART_AMPERE1)
/* Fujitsu Erratum 010001 affects A64FX 1.0 and 1.1, (v0r0 and v1r0) */
#define MIDR_FUJITSU_ERRATUM_010001 MIDR_FUJITSU_A64FX
#define X86_FEATURE_SYSCALL32 ( 3*32+14) /* "" syscall in IA32 userspace */
#define X86_FEATURE_SYSENTER32 ( 3*32+15) /* "" sysenter in IA32 userspace */
#define X86_FEATURE_REP_GOOD ( 3*32+16) /* REP microcode works well */
-/* FREE! ( 3*32+17) */
+#define X86_FEATURE_AMD_LBR_V2 ( 3*32+17) /* AMD Last Branch Record Extension Version 2 */
#define X86_FEATURE_LFENCE_RDTSC ( 3*32+18) /* "" LFENCE synchronizes RDTSC */
#define X86_FEATURE_ACC_POWER ( 3*32+19) /* AMD Accumulated Power Mechanism */
#define X86_FEATURE_NOPL ( 3*32+20) /* The NOPL (0F 1F) instructions */
* Output:
* rax original destination
*/
-SYM_FUNC_START(__memcpy)
+SYM_TYPED_FUNC_START(__memcpy)
ALTERNATIVE_2 "jmp memcpy_orig", "", X86_FEATURE_REP_GOOD, \
"jmp memcpy_erms", X86_FEATURE_ERMS
libbpf-bpf_prog_load \
libbpf-bpf_object__next_program \
libbpf-bpf_object__next_map \
+ libbpf-bpf_program__set_insns \
libbpf-bpf_create_map \
libpfm4 \
libdebuginfod \
test-libbpf-bpf_map_create.bin \
test-libbpf-bpf_object__next_program.bin \
test-libbpf-bpf_object__next_map.bin \
+ test-libbpf-bpf_program__set_insns.bin \
test-libbpf-btf__raw_data.bin \
test-get_cpuid.bin \
test-sdt.bin \
$(OUTPUT)test-libbpf-bpf_object__next_map.bin:
$(BUILD) -lbpf
+$(OUTPUT)test-libbpf-bpf_program__set_insns.bin:
+ $(BUILD) -lbpf
+
$(OUTPUT)test-libbpf-btf__raw_data.bin:
$(BUILD) -lbpf
--- /dev/null
+// SPDX-License-Identifier: GPL-2.0
+#include <bpf/libbpf.h>
+
+int main(void)
+{
+ bpf_program__set_insns(NULL /* prog */, NULL /* new_insns */, 0 /* new_insn_cnt */);
+ return 0;
+}
{
int count = 0;
+ /* It takes a digit to represent zero */
+ if (!num)
+ return 1;
+
while (num != 0) {
num /= 10;
count++;
int memcmp(const void *s1, const void *s2, size_t n)
{
size_t ofs = 0;
- char c1 = 0;
+ int c1 = 0;
- while (ofs < n && !(c1 = ((char *)s1)[ofs] - ((char *)s2)[ofs])) {
+ while (ofs < n && !(c1 = ((unsigned char *)s1)[ofs] - ((unsigned char *)s2)[ofs])) {
ofs++;
}
return c1;
}
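Why the unsigned cast matters (editor's illustration): C requires memcmp()
to compare bytes as unsigned char, and widening c1 to int keeps the
difference from being truncated back into a char. With the old signed
arithmetic, 0x80 - 0x01 produced a negative value, inverting the result:

	const char a[] = "\x80", b[] = "\x01";

	/* must be > 0, since 0x80 (128) > 0x01 (1) as unsigned bytes;
	 * the previous signed code returned a negative value here.
	 */
	int r = memcmp(a, b, 1);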
/* this function is only used with arguments that are not constants or when
- * it's not known because optimizations are disabled.
+ * it's not known because optimizations are disabled. Note that gcc 12
+ * recognizes the strlen() loop pattern and replaces it with a jump to
+ * strlen(), i.e. to this very function, so the asm() statement below is
+ * needed to defeat this self-recursive transformation.
*/
static __attribute__((unused))
-size_t nolibc_strlen(const char *str)
+size_t strlen(const char *str)
{
size_t len;
- for (len = 0; str[len]; len++);
+ for (len = 0; str[len]; len++)
+ asm("");
return len;
}
* the two branches, then will rely on an external definition of strlen().
*/
#if defined(__OPTIMIZE__)
+#define nolibc_strlen(x) strlen(x)
#define strlen(str) ({ \
__builtin_constant_p((str)) ? \
__builtin_strlen((str)) : \
nolibc_strlen((str)); \
})
-#else
-#define strlen(str) nolibc_strlen((str))
#endif
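Illustration of the dispatch (assumed behaviour under -O, fragment only):
constant strings still fold at compile time through __builtin_strlen(),
while runtime strings go through the local loop, which gcc 12 can no
longer turn into a recursive call.

	size_t a = strlen("abc");	/* folds to 3 at compile time */
	size_t b = strlen(argv[0]);	/* evaluated by the loop above */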
static __attribute__((unused))
#define IPPROTO_PIM IPPROTO_PIM
IPPROTO_COMP = 108, /* Compression Header Protocol */
#define IPPROTO_COMP IPPROTO_COMP
+ IPPROTO_L2TP = 115, /* Layer 2 Tunnelling Protocol */
+#define IPPROTO_L2TP IPPROTO_L2TP
IPPROTO_SCTP = 132, /* Stream Control Transport Protocol */
#define IPPROTO_SCTP IPPROTO_SCTP
IPPROTO_UDPLITE = 136, /* UDP-Lite (RFC 3828) */
};
struct ip_msfilter {
+ __be32 imsf_multiaddr;
+ __be32 imsf_interface;
+ __u32 imsf_fmode;
+ __u32 imsf_numsrc;
union {
- struct {
- __be32 imsf_multiaddr_aux;
- __be32 imsf_interface_aux;
- __u32 imsf_fmode_aux;
- __u32 imsf_numsrc_aux;
- __be32 imsf_slist[1];
- };
- struct {
- __be32 imsf_multiaddr;
- __be32 imsf_interface;
- __u32 imsf_fmode;
- __u32 imsf_numsrc;
- __be32 imsf_slist_flex[];
- };
+ __be32 imsf_slist[1];
+ __DECLARE_FLEX_ARRAY(__be32, imsf_slist_flex);
};
};
#define KVM_CAP_VM_DISABLE_NX_HUGE_PAGES 220
#define KVM_CAP_S390_ZPCI_OP 221
#define KVM_CAP_S390_CPU_TOPOLOGY 222
+#define KVM_CAP_DIRTY_LOG_RING_ACQ_REL 223
#ifdef KVM_CAP_IRQ_ROUTING
PERF_SAMPLE_WEIGHT_STRUCT = 1U << 24,
PERF_SAMPLE_MAX = 1U << 25, /* non-ABI */
-
- __PERF_SAMPLE_CALLCHAIN_EARLY = 1ULL << 63, /* non-ABI; internal use */
};
#define PERF_SAMPLE_WEIGHT_TYPE (PERF_SAMPLE_WEIGHT | PERF_SAMPLE_WEIGHT_STRUCT)
PERF_BR_MAX,
};
+/*
+ * Common branch speculation outcome classification
+ */
+enum {
+ PERF_BR_SPEC_NA = 0, /* Not available */
+ PERF_BR_SPEC_WRONG_PATH = 1, /* Speculative but on wrong path */
+ PERF_BR_NON_SPEC_CORRECT_PATH = 2, /* Non-speculative but on correct path */
+ PERF_BR_SPEC_CORRECT_PATH = 3, /* Speculative and on correct path */
+ PERF_BR_SPEC_MAX,
+};
+
enum {
PERF_BR_NEW_FAULT_ALGN = 0, /* Alignment fault */
PERF_BR_NEW_FAULT_DATA = 1, /* Data fault */
PERF_BR_PRIV_HV = 3,
};
-#define PERF_BR_ARM64_FIQ PERF_BR_NEW_ARCH_1
-#define PERF_BR_ARM64_DEBUG_HALT PERF_BR_NEW_ARCH_2
-#define PERF_BR_ARM64_DEBUG_EXIT PERF_BR_NEW_ARCH_3
-#define PERF_BR_ARM64_DEBUG_INST PERF_BR_NEW_ARCH_4
-#define PERF_BR_ARM64_DEBUG_DATA PERF_BR_NEW_ARCH_5
+#define PERF_BR_ARM64_FIQ PERF_BR_NEW_ARCH_1
+#define PERF_BR_ARM64_DEBUG_HALT PERF_BR_NEW_ARCH_2
+#define PERF_BR_ARM64_DEBUG_EXIT PERF_BR_NEW_ARCH_3
+#define PERF_BR_ARM64_DEBUG_INST PERF_BR_NEW_ARCH_4
+#define PERF_BR_ARM64_DEBUG_DATA PERF_BR_NEW_ARCH_5
#define PERF_SAMPLE_BRANCH_PLM_ALL \
(PERF_SAMPLE_BRANCH_USER|\
* abort: aborting a hardware transaction
* cycles: cycles from last branch (or 0 if not supported)
* type: branch type
+ * spec: branch speculation info (or 0 if not supported)
*/
struct perf_branch_entry {
__u64 from;
abort:1, /* transaction abort */
cycles:16, /* cycle count to last branch */
type:4, /* branch type */
+ spec:2, /* branch speculation info */
new_type:4, /* additional branch type */
priv:3, /* privilege level */
- reserved:33;
+ reserved:31;
};
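A hedged decode sketch (illustrative helper, not part of the ABI change):
tools consuming branch records can map the new 2-bit "spec" field onto the
enum added above.

	static const char *br_spec_name(const struct perf_branch_entry *e)
	{
		switch (e->spec) {
		case PERF_BR_SPEC_WRONG_PATH:		return "spec-wrong";
		case PERF_BR_NON_SPEC_CORRECT_PATH:	return "nonspec-correct";
		case PERF_BR_SPEC_CORRECT_PATH:		return "spec-correct";
		default:				return "not-available";
		}
	}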
union perf_sample_weight {
__u32 stx_dev_minor;
/* 0x90 */
__u64 stx_mnt_id;
- __u64 __spare2;
+ __u32 stx_dio_mem_align; /* Memory buffer alignment for direct I/O */
+ __u32 stx_dio_offset_align; /* File offset alignment for direct I/O */
/* 0xa0 */
__u64 __spare3[12]; /* Spare space for future expansion */
/* 0x100 */
#define STATX_BASIC_STATS 0x000007ffU /* The stuff in the normal stat struct */
#define STATX_BTIME 0x00000800U /* Want/got stx_btime */
#define STATX_MNT_ID 0x00001000U /* Got stx_mnt_id */
+#define STATX_DIOALIGN 0x00002000U /* Want/got direct I/O alignment info */
#define STATX__RESERVED 0x80000000U /* Reserved for future struct statx expansion */
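A hedged userspace sketch of the new fields (fd is assumed to be an open
file descriptor): request STATX_DIOALIGN and check the returned mask before
trusting the alignment values, since not all filesystems report them.

	struct statx stx;

	if (statx(fd, "", AT_EMPTY_PATH, STATX_DIOALIGN, &stx) == 0 &&
	    (stx.stx_mask & STATX_DIOALIGN))
		printf("DIO: buffer align %u, offset align %u\n",
		       stx.stx_dio_mem_align, stx.stx_dio_offset_align);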
* Advanced Linux Sound Architecture - ALSA - Driver
* Copyright (c) 1994-2003 by Jaroslav Kysela <perex@perex.cz>,
* Abramo Bagnara <abramo@alsa-project.org>
- *
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License as published by
- * the Free Software Foundation; either version 2 of the License, or
- * (at your option) any later version.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program; if not, write to the Free Software
- * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
- *
*/
#ifndef _UAPI__SOUND_ASOUND_H
ifeq ($(feature-libbpf-bpf_object__next_map), 1)
CFLAGS += -DHAVE_LIBBPF_BPF_OBJECT__NEXT_MAP
endif
+ $(call feature_check,libbpf-bpf_program__set_insns)
+ ifeq ($(feature-libbpf-bpf_program__set_insns), 1)
+ CFLAGS += -DHAVE_LIBBPF_BPF_PROGRAM__SET_INSNS
+ endif
$(call feature_check,libbpf-btf__raw_data)
ifeq ($(feature-libbpf-btf__raw_data), 1)
CFLAGS += -DHAVE_LIBBPF_BTF__RAW_DATA
CFLAGS += -DHAVE_LIBBPF_BPF_PROG_LOAD
CFLAGS += -DHAVE_LIBBPF_BPF_OBJECT__NEXT_PROGRAM
CFLAGS += -DHAVE_LIBBPF_BPF_OBJECT__NEXT_MAP
+ CFLAGS += -DHAVE_LIBBPF_BPF_PROGRAM__SET_INSNS
CFLAGS += -DHAVE_LIBBPF_BTF__RAW_DATA
CFLAGS += -DHAVE_LIBBPF_BPF_MAP_CREATE
endif
176 64 rt_sigtimedwait sys_rt_sigtimedwait
177 nospu rt_sigqueueinfo sys_rt_sigqueueinfo compat_sys_rt_sigqueueinfo
178 nospu rt_sigsuspend sys_rt_sigsuspend compat_sys_rt_sigsuspend
-179 common pread64 sys_pread64 compat_sys_ppc_pread64
-180 common pwrite64 sys_pwrite64 compat_sys_ppc_pwrite64
+179 32 pread64 sys_ppc_pread64 compat_sys_ppc_pread64
+179 64 pread64 sys_pread64
+180 32 pwrite64 sys_ppc_pwrite64 compat_sys_ppc_pwrite64
+180 64 pwrite64 sys_pwrite64
181 common chown sys_chown
182 common getcwd sys_getcwd
183 common capget sys_capget
188 common putpmsg sys_ni_syscall
189 nospu vfork sys_vfork
190 common ugetrlimit sys_getrlimit compat_sys_getrlimit
-191 common readahead sys_readahead compat_sys_ppc_readahead
+191 32 readahead sys_ppc_readahead compat_sys_ppc_readahead
+191 64 readahead sys_readahead
192 32 mmap2 sys_mmap2 compat_sys_mmap2
-193 32 truncate64 sys_truncate64 compat_sys_ppc_truncate64
-194 32 ftruncate64 sys_ftruncate64 compat_sys_ppc_ftruncate64
+193 32 truncate64 sys_ppc_truncate64 compat_sys_ppc_truncate64
+194 32 ftruncate64 sys_ppc_ftruncate64 compat_sys_ppc_ftruncate64
195 32 stat64 sys_stat64
196 32 lstat64 sys_lstat64
197 32 fstat64 sys_fstat64
230 common io_submit sys_io_submit compat_sys_io_submit
231 common io_cancel sys_io_cancel
232 nospu set_tid_address sys_set_tid_address
-233 common fadvise64 sys_fadvise64 compat_sys_ppc32_fadvise64
+233 32 fadvise64 sys_ppc32_fadvise64 compat_sys_ppc32_fadvise64
+233 64 fadvise64 sys_fadvise64
234 nospu exit_group sys_exit_group
235 nospu lookup_dcookie sys_lookup_dcookie compat_sys_lookup_dcookie
236 common epoll_create sys_epoll_create
static volatile int signr = -1;
static volatile int child_finished;
#ifdef HAVE_EVENTFD_SUPPORT
-static int done_fd = -1;
+static volatile int done_fd = -1;
#endif
static void sig_handler(int sig)
done = 1;
#ifdef HAVE_EVENTFD_SUPPORT
-{
- u64 tmp = 1;
- /*
- * It is possible for this signal handler to run after done is checked
- * in the main loop, but before the perf counter fds are polled. If this
- * happens, the poll() will continue to wait even though done is set,
- * and will only break out if either another signal is received, or the
- * counters are ready for read. To ensure the poll() doesn't sleep when
- * done is set, use an eventfd (done_fd) to wake up the poll().
- */
- if (write(done_fd, &tmp, sizeof(tmp)) < 0)
- pr_err("failed to signal wakeup fd, error: %m\n");
-}
+ if (done_fd >= 0) {
+ u64 tmp = 1;
+ int orig_errno = errno;
+
+ /*
+ * It is possible for this signal handler to run after done is
+ * checked in the main loop, but before the perf counter fds are
+ * polled. If this happens, the poll() will continue to wait
+ * even though done is set, and will only break out if either
+ * another signal is received, or the counters are ready for
+ * read. To ensure the poll() doesn't sleep when done is set,
+ * use an eventfd (done_fd) to wake up the poll().
+ */
+ if (write(done_fd, &tmp, sizeof(tmp)) < 0)
+ pr_err("failed to signal wakeup fd, error: %m\n");
+
+ errno = orig_errno;
+ }
#endif // HAVE_EVENTFD_SUPPORT
}
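A self-contained sketch of the wakeup technique (assumed names, not perf
code): an eventfd included in the poll set lets a signal handler force a
blocked poll() to return even when the race described in the comment above
occurs.

	#include <poll.h>
	#include <sys/eventfd.h>
	#include <stdint.h>
	#include <unistd.h>

	static int wake_fd;			/* wake_fd = eventfd(0, 0) at startup */

	static void on_signal(int sig)
	{
		uint64_t one = 1;

		(void)sig;
		(void)write(wake_fd, &one, sizeof(one));
	}

	/* main loop: poll the real fds plus wake_fd */
	struct pollfd pfds[] = {
		{ .fd = wake_fd, .events = POLLIN },
		/* ... counter fds would follow here ... */
	};
	poll(pfds, 1, -1);	/* returns as soon as on_signal() writes */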
out_delete_session:
#ifdef HAVE_EVENTFD_SUPPORT
- if (done_fd >= 0)
- close(done_fd);
+ if (done_fd >= 0) {
+ fd = done_fd;
+ done_fd = -1;
+
+ close(fd);
+ }
#endif
zstd_fini(&session->zstd_data);
perf_session__delete(session);
done
# diff with extra ignore lines
-check arch/x86/lib/memcpy_64.S '-I "^EXPORT_SYMBOL" -I "^#include <asm/export.h>" -I"^SYM_FUNC_START\(_LOCAL\)*(memcpy_\(erms\|orig\))"'
+check arch/x86/lib/memcpy_64.S '-I "^EXPORT_SYMBOL" -I "^#include <asm/export.h>" -I"^SYM_FUNC_START\(_LOCAL\)*(memcpy_\(erms\|orig\))" -I"^#include <linux/cfi_types.h>"'
check arch/x86/lib/memset_64.S '-I "^EXPORT_SYMBOL" -I "^#include <asm/export.h>" -I"^SYM_FUNC_START\(_LOCAL\)*(memset_\(erms\|orig\))"'
check arch/x86/include/asm/amd-ibs.h '-I "^#include [<\"]\(asm/\)*msr-index.h"'
check arch/arm64/include/asm/cputype.h '-I "^#include [<\"]\(asm/\)*sysreg.h"'
"MetricName": "indirect_branch"
},
{
- "MetricExpr": "(armv8_pmuv3_0@event\\=0x1014@ + armv8_pmuv3_0@event\\=0x1018@) / BR_MIS_PRED",
+ "MetricExpr": "(armv8_pmuv3_0@event\\=0x1013@ + armv8_pmuv3_0@event\\=0x1016@) / BR_MIS_PRED",
"PublicDescription": "Push branch L3 topdown metric",
"BriefDescription": "Push branch L3 topdown metric",
"MetricGroup": "TopDownL3",
"MetricName": "push_branch"
},
{
- "MetricExpr": "armv8_pmuv3_0@event\\=0x100c@ / BR_MIS_PRED",
+ "MetricExpr": "armv8_pmuv3_0@event\\=0x100d@ / BR_MIS_PRED",
"PublicDescription": "Pop branch L3 topdown metric",
"BriefDescription": "Pop branch L3 topdown metric",
"MetricGroup": "TopDownL3",
"MetricName": "pop_branch"
},
{
- "MetricExpr": "(BR_MIS_PRED - armv8_pmuv3_0@event\\=0x1010@ - armv8_pmuv3_0@event\\=0x1014@ - armv8_pmuv3_0@event\\=0x1018@ - armv8_pmuv3_0@event\\=0x100c@) / BR_MIS_PRED",
+ "MetricExpr": "(BR_MIS_PRED - armv8_pmuv3_0@event\\=0x1010@ - armv8_pmuv3_0@event\\=0x1013@ - armv8_pmuv3_0@event\\=0x1016@ - armv8_pmuv3_0@event\\=0x100d@) / BR_MIS_PRED",
"PublicDescription": "Other branch L3 topdown metric",
"BriefDescription": "Other branch L3 topdown metric",
"MetricGroup": "TopDownL3",
[
{
"MetricName": "VEC_GROUP_PUMP_RETRY_RATIO_P01",
- "MetricExpr": "(hv_24x7@PM_PB_RTY_VG_PUMP01\\,chip\\=?@ / hv_24x7@PM_PB_VG_PUMP01\\,chip\\=?@) * 100",
+ "MetricExpr": "(hv_24x7@PM_PB_RTY_VG_PUMP01\\,chip\\=?@ / (1 + hv_24x7@PM_PB_VG_PUMP01\\,chip\\=?@)) * 100",
"ScaleUnit": "1%",
"AggregationMode": "PerChip"
},
{
"MetricName": "VEC_GROUP_PUMP_RETRY_RATIO_P23",
- "MetricExpr": "(hv_24x7@PM_PB_RTY_VG_PUMP23\\,chip\\=?@ / hv_24x7@PM_PB_VG_PUMP23\\,chip\\=?@) * 100",
+ "MetricExpr": "(hv_24x7@PM_PB_RTY_VG_PUMP23\\,chip\\=?@ / (1 + hv_24x7@PM_PB_VG_PUMP23\\,chip\\=?@)) * 100",
"ScaleUnit": "1%",
"AggregationMode": "PerChip"
},
},
{
"MetricName": "REMOTE_NODE_PUMPS_RETRIES_RATIO_P01",
- "MetricExpr": "(hv_24x7@PM_PB_RTY_RNS_PUMP01\\,chip\\=?@ / hv_24x7@PM_PB_RNS_PUMP01\\,chip\\=?@) * 100",
+ "MetricExpr": "(hv_24x7@PM_PB_RTY_RNS_PUMP01\\,chip\\=?@ / (1 + hv_24x7@PM_PB_RNS_PUMP01\\,chip\\=?@)) * 100",
"ScaleUnit": "1%",
"AggregationMode": "PerChip"
},
{
"MetricName": "REMOTE_NODE_PUMPS_RETRIES_RATIO_P23",
- "MetricExpr": "(hv_24x7@PM_PB_RTY_RNS_PUMP23\\,chip\\=?@ / hv_24x7@PM_PB_RNS_PUMP23\\,chip\\=?@) * 100",
+ "MetricExpr": "(hv_24x7@PM_PB_RTY_RNS_PUMP23\\,chip\\=?@ / (1 + hv_24x7@PM_PB_RNS_PUMP23\\,chip\\=?@)) * 100",
"ScaleUnit": "1%",
"AggregationMode": "PerChip"
},
},
{
"MetricName": "XLINK0_OUT_TOTAL_UTILIZATION",
- "MetricExpr": "((hv_24x7@PM_XLINK0_OUT_ODD_TOTAL_UTIL\\,chip\\=?@ + hv_24x7@PM_XLINK0_OUT_EVEN_TOTAL_UTIL\\,chip\\=?@) / (hv_24x7@PM_XLINK0_OUT_ODD_AVLBL_CYCLES\\,chip\\=?@ + hv_24x7@PM_XLINK0_OUT_EVEN_AVLBL_CYCLES\\,chip\\=?@)) * 100",
+ "MetricExpr": "((hv_24x7@PM_XLINK0_OUT_ODD_TOTAL_UTIL\\,chip\\=?@ + hv_24x7@PM_XLINK0_OUT_EVEN_TOTAL_UTIL\\,chip\\=?@) / (1 + hv_24x7@PM_XLINK0_OUT_ODD_AVLBL_CYCLES\\,chip\\=?@ + hv_24x7@PM_XLINK0_OUT_EVEN_AVLBL_CYCLES\\,chip\\=?@)) * 100",
"ScaleUnit": "1%",
"AggregationMode": "PerChip"
},
{
"MetricName": "XLINK1_OUT_TOTAL_UTILIZATION",
- "MetricExpr": "((hv_24x7@PM_XLINK1_OUT_ODD_TOTAL_UTIL\\,chip\\=?@ + hv_24x7@PM_XLINK1_OUT_EVEN_TOTAL_UTIL\\,chip\\=?@) / (hv_24x7@PM_XLINK1_OUT_ODD_AVLBL_CYCLES\\,chip\\=?@ + hv_24x7@PM_XLINK1_OUT_EVEN_AVLBL_CYCLES\\,chip\\=?@)) * 100",
+ "MetricExpr": "((hv_24x7@PM_XLINK1_OUT_ODD_TOTAL_UTIL\\,chip\\=?@ + hv_24x7@PM_XLINK1_OUT_EVEN_TOTAL_UTIL\\,chip\\=?@) / (1 + hv_24x7@PM_XLINK1_OUT_ODD_AVLBL_CYCLES\\,chip\\=?@ + hv_24x7@PM_XLINK1_OUT_EVEN_AVLBL_CYCLES\\,chip\\=?@)) * 100",
"ScaleUnit": "1%",
"AggregationMode": "PerChip"
},
{
"MetricName": "XLINK2_OUT_TOTAL_UTILIZATION",
- "MetricExpr": "((hv_24x7@PM_XLINK2_OUT_ODD_TOTAL_UTIL\\,chip\\=?@ + hv_24x7@PM_XLINK2_OUT_EVEN_TOTAL_UTIL\\,chip\\=?@) / (hv_24x7@PM_XLINK2_OUT_ODD_AVLBL_CYCLES\\,chip\\=?@ + hv_24x7@PM_XLINK2_OUT_EVEN_AVLBL_CYCLES\\,chip\\=?@)) * 100",
+ "MetricExpr": "((hv_24x7@PM_XLINK2_OUT_ODD_TOTAL_UTIL\\,chip\\=?@ + hv_24x7@PM_XLINK2_OUT_EVEN_TOTAL_UTIL\\,chip\\=?@) / (1 + hv_24x7@PM_XLINK2_OUT_ODD_AVLBL_CYCLES\\,chip\\=?@ + hv_24x7@PM_XLINK2_OUT_EVEN_AVLBL_CYCLES\\,chip\\=?@)) * 100",
"ScaleUnit": "1%",
"AggregationMode": "PerChip"
},
{
"MetricName": "XLINK3_OUT_TOTAL_UTILIZATION",
- "MetricExpr": "((hv_24x7@PM_XLINK3_OUT_ODD_TOTAL_UTIL\\,chip\\=?@ + hv_24x7@PM_XLINK3_OUT_EVEN_TOTAL_UTIL\\,chip\\=?@) / (hv_24x7@PM_XLINK3_OUT_ODD_AVLBL_CYCLES\\,chip\\=?@ + hv_24x7@PM_XLINK3_OUT_EVEN_AVLBL_CYCLES\\,chip\\=?@)) * 100",
+ "MetricExpr": "((hv_24x7@PM_XLINK3_OUT_ODD_TOTAL_UTIL\\,chip\\=?@ + hv_24x7@PM_XLINK3_OUT_EVEN_TOTAL_UTIL\\,chip\\=?@) / (1 + hv_24x7@PM_XLINK3_OUT_ODD_AVLBL_CYCLES\\,chip\\=?@ + hv_24x7@PM_XLINK3_OUT_EVEN_AVLBL_CYCLES\\,chip\\=?@)) * 100",
"ScaleUnit": "1%",
"AggregationMode": "PerChip"
},
{
"MetricName": "XLINK4_OUT_TOTAL_UTILIZATION",
- "MetricExpr": "((hv_24x7@PM_XLINK4_OUT_ODD_TOTAL_UTIL\\,chip\\=?@ + hv_24x7@PM_XLINK4_OUT_EVEN_TOTAL_UTIL\\,chip\\=?@) / (hv_24x7@PM_XLINK4_OUT_ODD_AVLBL_CYCLES\\,chip\\=?@ + hv_24x7@PM_XLINK4_OUT_EVEN_AVLBL_CYCLES\\,chip\\=?@)) * 100",
+ "MetricExpr": "((hv_24x7@PM_XLINK4_OUT_ODD_TOTAL_UTIL\\,chip\\=?@ + hv_24x7@PM_XLINK4_OUT_EVEN_TOTAL_UTIL\\,chip\\=?@) / (1 + hv_24x7@PM_XLINK4_OUT_ODD_AVLBL_CYCLES\\,chip\\=?@ + hv_24x7@PM_XLINK4_OUT_EVEN_AVLBL_CYCLES\\,chip\\=?@)) * 100",
"ScaleUnit": "1%",
"AggregationMode": "PerChip"
},
{
"MetricName": "XLINK5_OUT_TOTAL_UTILIZATION",
- "MetricExpr": "((hv_24x7@PM_XLINK5_OUT_ODD_TOTAL_UTIL\\,chip\\=?@ + hv_24x7@PM_XLINK5_OUT_EVEN_TOTAL_UTIL\\,chip\\=?@) / (hv_24x7@PM_XLINK5_OUT_ODD_AVLBL_CYCLES\\,chip\\=?@ + hv_24x7@PM_XLINK5_OUT_EVEN_AVLBL_CYCLES\\,chip\\=?@)) * 100",
+ "MetricExpr": "((hv_24x7@PM_XLINK5_OUT_ODD_TOTAL_UTIL\\,chip\\=?@ + hv_24x7@PM_XLINK5_OUT_EVEN_TOTAL_UTIL\\,chip\\=?@) / (1 + hv_24x7@PM_XLINK5_OUT_ODD_AVLBL_CYCLES\\,chip\\=?@ + hv_24x7@PM_XLINK5_OUT_EVEN_AVLBL_CYCLES\\,chip\\=?@)) * 100",
"ScaleUnit": "1%",
"AggregationMode": "PerChip"
},
{
"MetricName": "XLINK6_OUT_TOTAL_UTILIZATION",
- "MetricExpr": "((hv_24x7@PM_XLINK6_OUT_ODD_TOTAL_UTIL\\,chip\\=?@ + hv_24x7@PM_XLINK6_OUT_EVEN_TOTAL_UTIL\\,chip\\=?@) / (hv_24x7@PM_XLINK6_OUT_ODD_AVLBL_CYCLES\\,chip\\=?@ + hv_24x7@PM_XLINK6_OUT_EVEN_AVLBL_CYCLES\\,chip\\=?@)) * 100",
+ "MetricExpr": "((hv_24x7@PM_XLINK6_OUT_ODD_TOTAL_UTIL\\,chip\\=?@ + hv_24x7@PM_XLINK6_OUT_EVEN_TOTAL_UTIL\\,chip\\=?@) / (1 + hv_24x7@PM_XLINK6_OUT_ODD_AVLBL_CYCLES\\,chip\\=?@ + hv_24x7@PM_XLINK6_OUT_EVEN_AVLBL_CYCLES\\,chip\\=?@)) * 100",
"ScaleUnit": "1%",
"AggregationMode": "PerChip"
},
{
"MetricName": "XLINK7_OUT_TOTAL_UTILIZATION",
- "MetricExpr": "((hv_24x7@PM_XLINK7_OUT_ODD_TOTAL_UTIL\\,chip\\=?@ + hv_24x7@PM_XLINK7_OUT_EVEN_TOTAL_UTIL\\,chip\\=?@) / (hv_24x7@PM_XLINK7_OUT_ODD_AVLBL_CYCLES\\,chip\\=?@ + hv_24x7@PM_XLINK7_OUT_EVEN_AVLBL_CYCLES\\,chip\\=?@)) * 100",
+ "MetricExpr": "((hv_24x7@PM_XLINK7_OUT_ODD_TOTAL_UTIL\\,chip\\=?@ + hv_24x7@PM_XLINK7_OUT_EVEN_TOTAL_UTIL\\,chip\\=?@) / (1 + hv_24x7@PM_XLINK7_OUT_ODD_AVLBL_CYCLES\\,chip\\=?@ + hv_24x7@PM_XLINK7_OUT_EVEN_AVLBL_CYCLES\\,chip\\=?@)) * 100",
"ScaleUnit": "1%",
"AggregationMode": "PerChip"
},
{
"MetricName": "XLINK0_OUT_DATA_UTILIZATION",
- "MetricExpr": "((hv_24x7@PM_XLINK0_OUT_ODD_DATA\\,chip\\=?@ + hv_24x7@PM_XLINK0_OUT_EVEN_DATA\\,chip\\=?@) / (hv_24x7@PM_XLINK0_OUT_ODD_AVLBL_CYCLES\\,chip\\=?@ + hv_24x7@PM_XLINK0_OUT_EVEN_AVLBL_CYCLES\\,chip\\=?@)) * 100",
+ "MetricExpr": "((hv_24x7@PM_XLINK0_OUT_ODD_DATA\\,chip\\=?@ + hv_24x7@PM_XLINK0_OUT_EVEN_DATA\\,chip\\=?@) / (1 + hv_24x7@PM_XLINK0_OUT_ODD_AVLBL_CYCLES\\,chip\\=?@ + hv_24x7@PM_XLINK0_OUT_EVEN_AVLBL_CYCLES\\,chip\\=?@)) * 100",
"ScaleUnit": "1.063%",
"AggregationMode": "PerChip"
},
{
"MetricName": "XLINK1_OUT_DATA_UTILIZATION",
- "MetricExpr": "((hv_24x7@PM_XLINK1_OUT_ODD_DATA\\,chip\\=?@ + hv_24x7@PM_XLINK1_OUT_EVEN_DATA\\,chip\\=?@) / (hv_24x7@PM_XLINK1_OUT_ODD_AVLBL_CYCLES\\,chip\\=?@ + hv_24x7@PM_XLINK1_OUT_EVEN_AVLBL_CYCLES\\,chip\\=?@)) * 100",
+ "MetricExpr": "((hv_24x7@PM_XLINK1_OUT_ODD_DATA\\,chip\\=?@ + hv_24x7@PM_XLINK1_OUT_EVEN_DATA\\,chip\\=?@) / (1 + hv_24x7@PM_XLINK1_OUT_ODD_AVLBL_CYCLES\\,chip\\=?@ + hv_24x7@PM_XLINK1_OUT_EVEN_AVLBL_CYCLES\\,chip\\=?@)) * 100",
"ScaleUnit": "1.063%",
"AggregationMode": "PerChip"
},
{
"MetricName": "XLINK2_OUT_DATA_UTILIZATION",
- "MetricExpr": "((hv_24x7@PM_XLINK2_OUT_ODD_DATA\\,chip\\=?@ + hv_24x7@PM_XLINK2_OUT_EVEN_DATA\\,chip\\=?@) / (hv_24x7@PM_XLINK2_OUT_ODD_AVLBL_CYCLES\\,chip\\=?@ + hv_24x7@PM_XLINK2_OUT_EVEN_AVLBL_CYCLES\\,chip\\=?@)) * 100",
+ "MetricExpr": "((hv_24x7@PM_XLINK2_OUT_ODD_DATA\\,chip\\=?@ + hv_24x7@PM_XLINK2_OUT_EVEN_DATA\\,chip\\=?@) / (1 + hv_24x7@PM_XLINK2_OUT_ODD_AVLBL_CYCLES\\,chip\\=?@ + hv_24x7@PM_XLINK2_OUT_EVEN_AVLBL_CYCLES\\,chip\\=?@)) * 100",
"ScaleUnit": "1.063%",
"AggregationMode": "PerChip"
},
{
"MetricName": "XLINK3_OUT_DATA_UTILIZATION",
- "MetricExpr": "((hv_24x7@PM_XLINK3_OUT_ODD_DATA\\,chip\\=?@ + hv_24x7@PM_XLINK3_OUT_EVEN_DATA\\,chip\\=?@) / (hv_24x7@PM_XLINK3_OUT_ODD_AVLBL_CYCLES\\,chip\\=?@ + hv_24x7@PM_XLINK3_OUT_EVEN_AVLBL_CYCLES\\,chip\\=?@)) * 100",
+ "MetricExpr": "((hv_24x7@PM_XLINK3_OUT_ODD_DATA\\,chip\\=?@ + hv_24x7@PM_XLINK3_OUT_EVEN_DATA\\,chip\\=?@) / (1 + hv_24x7@PM_XLINK3_OUT_ODD_AVLBL_CYCLES\\,chip\\=?@ + hv_24x7@PM_XLINK3_OUT_EVEN_AVLBL_CYCLES\\,chip\\=?@)) * 100",
"ScaleUnit": "1.063%",
"AggregationMode": "PerChip"
},
{
"MetricName": "XLINK4_OUT_DATA_UTILIZATION",
- "MetricExpr": "((hv_24x7@PM_XLINK4_OUT_ODD_DATA\\,chip\\=?@ + hv_24x7@PM_XLINK4_OUT_EVEN_DATA\\,chip\\=?@) / (hv_24x7@PM_XLINK4_OUT_ODD_AVLBL_CYCLES\\,chip\\=?@ + hv_24x7@PM_XLINK4_OUT_EVEN_AVLBL_CYCLES\\,chip\\=?@)) * 100",
+ "MetricExpr": "((hv_24x7@PM_XLINK4_OUT_ODD_DATA\\,chip\\=?@ + hv_24x7@PM_XLINK4_OUT_EVEN_DATA\\,chip\\=?@) / (1 + hv_24x7@PM_XLINK4_OUT_ODD_AVLBL_CYCLES\\,chip\\=?@ + hv_24x7@PM_XLINK4_OUT_EVEN_AVLBL_CYCLES\\,chip\\=?@)) * 100",
"ScaleUnit": "1.063%",
"AggregationMode": "PerChip"
},
{
"MetricName": "XLINK5_OUT_DATA_UTILIZATION",
- "MetricExpr": "((hv_24x7@PM_XLINK5_OUT_ODD_DATA\\,chip\\=?@ + hv_24x7@PM_XLINK5_OUT_EVEN_DATA\\,chip\\=?@) / (hv_24x7@PM_XLINK5_OUT_ODD_AVLBL_CYCLES\\,chip\\=?@ + hv_24x7@PM_XLINK5_OUT_EVEN_AVLBL_CYCLES\\,chip\\=?@)) * 100",
+ "MetricExpr": "((hv_24x7@PM_XLINK5_OUT_ODD_DATA\\,chip\\=?@ + hv_24x7@PM_XLINK5_OUT_EVEN_DATA\\,chip\\=?@) / (1 + hv_24x7@PM_XLINK5_OUT_ODD_AVLBL_CYCLES\\,chip\\=?@ + hv_24x7@PM_XLINK5_OUT_EVEN_AVLBL_CYCLES\\,chip\\=?@)) * 100",
"ScaleUnit": "1.063%",
"AggregationMode": "PerChip"
},
{
"MetricName": "XLINK6_OUT_DATA_UTILIZATION",
- "MetricExpr": "((hv_24x7@PM_XLINK6_OUT_ODD_DATA\\,chip\\=?@ + hv_24x7@PM_XLINK6_OUT_EVEN_DATA\\,chip\\=?@) / (hv_24x7@PM_XLINK6_OUT_ODD_AVLBL_CYCLES\\,chip\\=?@ + hv_24x7@PM_XLINK6_OUT_EVEN_AVLBL_CYCLES\\,chip\\=?@)) * 100",
+ "MetricExpr": "((hv_24x7@PM_XLINK6_OUT_ODD_DATA\\,chip\\=?@ + hv_24x7@PM_XLINK6_OUT_EVEN_DATA\\,chip\\=?@) / (1 + hv_24x7@PM_XLINK6_OUT_ODD_AVLBL_CYCLES\\,chip\\=?@ + hv_24x7@PM_XLINK6_OUT_EVEN_AVLBL_CYCLES\\,chip\\=?@)) * 100",
"ScaleUnit": "1.063%",
"AggregationMode": "PerChip"
},
{
"MetricName": "XLINK7_OUT_DATA_UTILIZATION",
- "MetricExpr": "((hv_24x7@PM_XLINK7_OUT_ODD_DATA\\,chip\\=?@ + hv_24x7@PM_XLINK7_OUT_EVEN_DATA\\,chip\\=?@) / (hv_24x7@PM_XLINK7_OUT_ODD_AVLBL_CYCLES\\,chip\\=?@ + hv_24x7@PM_XLINK7_OUT_EVEN_AVLBL_CYCLES\\,chip\\=?@)) * 100",
+ "MetricExpr": "((hv_24x7@PM_XLINK7_OUT_ODD_DATA\\,chip\\=?@ + hv_24x7@PM_XLINK7_OUT_EVEN_DATA\\,chip\\=?@) / (1 + hv_24x7@PM_XLINK7_OUT_ODD_AVLBL_CYCLES\\,chip\\=?@ + hv_24x7@PM_XLINK7_OUT_EVEN_AVLBL_CYCLES\\,chip\\=?@)) * 100",
"ScaleUnit": "1.063%",
"AggregationMode": "PerChip"
},
{
"MetricName": "ALINK0_OUT_TOTAL_UTILIZATION",
- "MetricExpr": "((hv_24x7@PM_ALINK0_OUT_ODD_TOTAL_UTIL\\,chip\\=?@ + hv_24x7@PM_ALINK0_OUT_EVEN_TOTAL_UTIL\\,chip\\=?@) / (hv_24x7@PM_ALINK0_OUT_ODD_AVLBL_CYCLES\\,chip\\=?@ + hv_24x7@PM_ALINK0_OUT_EVEN_AVLBL_CYCLES\\,chip\\=?@)) * 100",
+ "MetricExpr": "((hv_24x7@PM_ALINK0_OUT_ODD_TOTAL_UTIL\\,chip\\=?@ + hv_24x7@PM_ALINK0_OUT_EVEN_TOTAL_UTIL\\,chip\\=?@) / (1 + hv_24x7@PM_ALINK0_OUT_ODD_AVLBL_CYCLES\\,chip\\=?@ + hv_24x7@PM_ALINK0_OUT_EVEN_AVLBL_CYCLES\\,chip\\=?@)) * 100",
"ScaleUnit": "1%",
"AggregationMode": "PerChip"
},
{
"MetricName": "ALINK1_OUT_TOTAL_UTILIZATION",
- "MetricExpr": "((hv_24x7@PM_ALINK1_OUT_ODD_TOTAL_UTIL\\,chip\\=?@ + hv_24x7@PM_ALINK1_OUT_EVEN_TOTAL_UTIL\\,chip\\=?@) / (hv_24x7@PM_ALINK1_OUT_ODD_AVLBL_CYCLES\\,chip\\=?@ + hv_24x7@PM_ALINK1_OUT_EVEN_AVLBL_CYCLES\\,chip\\=?@)) * 100",
+ "MetricExpr": "((hv_24x7@PM_ALINK1_OUT_ODD_TOTAL_UTIL\\,chip\\=?@ + hv_24x7@PM_ALINK1_OUT_EVEN_TOTAL_UTIL\\,chip\\=?@) / (1 + hv_24x7@PM_ALINK1_OUT_ODD_AVLBL_CYCLES\\,chip\\=?@ + hv_24x7@PM_ALINK1_OUT_EVEN_AVLBL_CYCLES\\,chip\\=?@)) * 100",
"ScaleUnit": "1%",
"AggregationMode": "PerChip"
},
{
"MetricName": "ALINK2_OUT_TOTAL_UTILIZATION",
- "MetricExpr": "((hv_24x7@PM_ALINK2_OUT_ODD_TOTAL_UTIL\\,chip\\=?@ + hv_24x7@PM_ALINK2_OUT_EVEN_TOTAL_UTIL\\,chip\\=?@) / (hv_24x7@PM_ALINK2_OUT_ODD_AVLBL_CYCLES\\,chip\\=?@ + hv_24x7@PM_ALINK2_OUT_EVEN_AVLBL_CYCLES\\,chip\\=?@)) * 100",
+ "MetricExpr": "((hv_24x7@PM_ALINK2_OUT_ODD_TOTAL_UTIL\\,chip\\=?@ + hv_24x7@PM_ALINK2_OUT_EVEN_TOTAL_UTIL\\,chip\\=?@) / (1 + hv_24x7@PM_ALINK2_OUT_ODD_AVLBL_CYCLES\\,chip\\=?@ + hv_24x7@PM_ALINK2_OUT_EVEN_AVLBL_CYCLES\\,chip\\=?@)) * 100",
"ScaleUnit": "1%",
"AggregationMode": "PerChip"
},
{
"MetricName": "ALINK3_OUT_TOTAL_UTILIZATION",
- "MetricExpr": "((hv_24x7@PM_ALINK3_OUT_ODD_TOTAL_UTIL\\,chip\\=?@ + hv_24x7@PM_ALINK3_OUT_EVEN_TOTAL_UTIL\\,chip\\=?@) / (hv_24x7@PM_ALINK3_OUT_ODD_AVLBL_CYCLES\\,chip\\=?@ + hv_24x7@PM_ALINK3_OUT_EVEN_AVLBL_CYCLES\\,chip\\=?@)) * 100",
+ "MetricExpr": "((hv_24x7@PM_ALINK3_OUT_ODD_TOTAL_UTIL\\,chip\\=?@ + hv_24x7@PM_ALINK3_OUT_EVEN_TOTAL_UTIL\\,chip\\=?@) / (1 + hv_24x7@PM_ALINK3_OUT_ODD_AVLBL_CYCLES\\,chip\\=?@ + hv_24x7@PM_ALINK3_OUT_EVEN_AVLBL_CYCLES\\,chip\\=?@)) * 100",
"ScaleUnit": "1%",
"AggregationMode": "PerChip"
},
{
"MetricName": "ALINK4_OUT_TOTAL_UTILIZATION",
- "MetricExpr": "((hv_24x7@PM_ALINK4_OUT_ODD_TOTAL_UTIL\\,chip\\=?@ + hv_24x7@PM_ALINK4_OUT_EVEN_TOTAL_UTIL\\,chip\\=?@) / (hv_24x7@PM_ALINK4_OUT_ODD_AVLBL_CYCLES\\,chip\\=?@ + hv_24x7@PM_ALINK4_OUT_EVEN_AVLBL_CYCLES\\,chip\\=?@)) * 100",
+ "MetricExpr": "((hv_24x7@PM_ALINK4_OUT_ODD_TOTAL_UTIL\\,chip\\=?@ + hv_24x7@PM_ALINK4_OUT_EVEN_TOTAL_UTIL\\,chip\\=?@) / (1 + hv_24x7@PM_ALINK4_OUT_ODD_AVLBL_CYCLES\\,chip\\=?@ + hv_24x7@PM_ALINK4_OUT_EVEN_AVLBL_CYCLES\\,chip\\=?@)) * 100",
"ScaleUnit": "1%",
"AggregationMode": "PerChip"
},
{
"MetricName": "ALINK5_OUT_TOTAL_UTILIZATION",
- "MetricExpr": "((hv_24x7@PM_ALINK5_OUT_ODD_TOTAL_UTIL\\,chip\\=?@ + hv_24x7@PM_ALINK5_OUT_EVEN_TOTAL_UTIL\\,chip\\=?@) / (hv_24x7@PM_ALINK5_OUT_ODD_AVLBL_CYCLES\\,chip\\=?@ + hv_24x7@PM_ALINK5_OUT_EVEN_AVLBL_CYCLES\\,chip\\=?@)) * 100",
+ "MetricExpr": "((hv_24x7@PM_ALINK5_OUT_ODD_TOTAL_UTIL\\,chip\\=?@ + hv_24x7@PM_ALINK5_OUT_EVEN_TOTAL_UTIL\\,chip\\=?@) / (1 + hv_24x7@PM_ALINK5_OUT_ODD_AVLBL_CYCLES\\,chip\\=?@ + hv_24x7@PM_ALINK5_OUT_EVEN_AVLBL_CYCLES\\,chip\\=?@)) * 100",
"ScaleUnit": "1%",
"AggregationMode": "PerChip"
},
{
"MetricName": "ALINK6_OUT_TOTAL_UTILIZATION",
- "MetricExpr": "((hv_24x7@PM_ALINK6_OUT_ODD_TOTAL_UTIL\\,chip\\=?@ + hv_24x7@PM_ALINK6_OUT_EVEN_TOTAL_UTIL\\,chip\\=?@) / (hv_24x7@PM_ALINK6_OUT_ODD_AVLBL_CYCLES\\,chip\\=?@ + hv_24x7@PM_ALINK6_OUT_EVEN_AVLBL_CYCLES\\,chip\\=?@)) * 100",
+ "MetricExpr": "((hv_24x7@PM_ALINK6_OUT_ODD_TOTAL_UTIL\\,chip\\=?@ + hv_24x7@PM_ALINK6_OUT_EVEN_TOTAL_UTIL\\,chip\\=?@) / (1 + hv_24x7@PM_ALINK6_OUT_ODD_AVLBL_CYCLES\\,chip\\=?@ + hv_24x7@PM_ALINK6_OUT_EVEN_AVLBL_CYCLES\\,chip\\=?@)) * 100",
"ScaleUnit": "1%",
"AggregationMode": "PerChip"
},
{
"MetricName": "ALINK7_OUT_TOTAL_UTILIZATION",
- "MetricExpr": "((hv_24x7@PM_ALINK7_OUT_ODD_TOTAL_UTIL\\,chip\\=?@ + hv_24x7@PM_ALINK7_OUT_EVEN_TOTAL_UTIL\\,chip\\=?@) / (hv_24x7@PM_ALINK7_OUT_ODD_AVLBL_CYCLES\\,chip\\=?@ + hv_24x7@PM_ALINK7_OUT_EVEN_AVLBL_CYCLES\\,chip\\=?@)) * 100",
+ "MetricExpr": "((hv_24x7@PM_ALINK7_OUT_ODD_TOTAL_UTIL\\,chip\\=?@ + hv_24x7@PM_ALINK7_OUT_EVEN_TOTAL_UTIL\\,chip\\=?@) / (1 + hv_24x7@PM_ALINK7_OUT_ODD_AVLBL_CYCLES\\,chip\\=?@ + hv_24x7@PM_ALINK7_OUT_EVEN_AVLBL_CYCLES\\,chip\\=?@)) * 100",
"ScaleUnit": "1%",
"AggregationMode": "PerChip"
},
{
"MetricName": "ALINK0_OUT_DATA_UTILIZATION",
- "MetricExpr": "((hv_24x7@PM_ALINK0_OUT_ODD_DATA\\,chip\\=?@ + hv_24x7@PM_ALINK0_OUT_EVEN_DATA\\,chip\\=?@) / (hv_24x7@PM_ALINK0_OUT_ODD_AVLBL_CYCLES\\,chip\\=?@ + hv_24x7@PM_ALINK0_OUT_EVEN_AVLBL_CYCLES\\,chip\\=?@)) * 100",
+ "MetricExpr": "((hv_24x7@PM_ALINK0_OUT_ODD_DATA\\,chip\\=?@ + hv_24x7@PM_ALINK0_OUT_EVEN_DATA\\,chip\\=?@) / (1 + hv_24x7@PM_ALINK0_OUT_ODD_AVLBL_CYCLES\\,chip\\=?@ + hv_24x7@PM_ALINK0_OUT_EVEN_AVLBL_CYCLES\\,chip\\=?@)) * 100",
"ScaleUnit": "1.063%",
"AggregationMode": "PerChip"
},
{
"MetricName": "ALINK1_OUT_DATA_UTILIZATION",
- "MetricExpr": "((hv_24x7@PM_ALINK1_OUT_ODD_DATA\\,chip\\=?@ + hv_24x7@PM_ALINK1_OUT_EVEN_DATA\\,chip\\=?@) / (hv_24x7@PM_ALINK1_OUT_ODD_AVLBL_CYCLES\\,chip\\=?@ + hv_24x7@PM_ALINK1_OUT_EVEN_AVLBL_CYCLES\\,chip\\=?@)) * 100",
+ "MetricExpr": "((hv_24x7@PM_ALINK1_OUT_ODD_DATA\\,chip\\=?@ + hv_24x7@PM_ALINK1_OUT_EVEN_DATA\\,chip\\=?@) / (1 + hv_24x7@PM_ALINK1_OUT_ODD_AVLBL_CYCLES\\,chip\\=?@ + hv_24x7@PM_ALINK1_OUT_EVEN_AVLBL_CYCLES\\,chip\\=?@)) * 100",
"ScaleUnit": "1.063%",
"AggregationMode": "PerChip"
},
{
"MetricName": "ALINK2_OUT_DATA_UTILIZATION",
- "MetricExpr": "((hv_24x7@PM_ALINK2_OUT_ODD_DATA\\,chip\\=?@ + hv_24x7@PM_ALINK2_OUT_EVEN_DATA\\,chip\\=?@) / (hv_24x7@PM_ALINK2_OUT_ODD_AVLBL_CYCLES\\,chip\\=?@ + hv_24x7@PM_ALINK2_OUT_EVEN_AVLBL_CYCLES\\,chip\\=?@)) * 100",
+ "MetricExpr": "((hv_24x7@PM_ALINK2_OUT_ODD_DATA\\,chip\\=?@ + hv_24x7@PM_ALINK2_OUT_EVEN_DATA\\,chip\\=?@) / (1 + hv_24x7@PM_ALINK2_OUT_ODD_AVLBL_CYCLES\\,chip\\=?@ + hv_24x7@PM_ALINK2_OUT_EVEN_AVLBL_CYCLES\\,chip\\=?@)) * 100",
"ScaleUnit": "1.063%",
"AggregationMode": "PerChip"
},
{
"MetricName": "ALINK3_OUT_DATA_UTILIZATION",
- "MetricExpr": "((hv_24x7@PM_ALINK3_OUT_ODD_DATA\\,chip\\=?@ + hv_24x7@PM_ALINK3_OUT_EVEN_DATA\\,chip\\=?@) / (hv_24x7@PM_ALINK3_OUT_ODD_AVLBL_CYCLES\\,chip\\=?@ + hv_24x7@PM_ALINK3_OUT_EVEN_AVLBL_CYCLES\\,chip\\=?@)) * 100",
+ "MetricExpr": "((hv_24x7@PM_ALINK3_OUT_ODD_DATA\\,chip\\=?@ + hv_24x7@PM_ALINK3_OUT_EVEN_DATA\\,chip\\=?@) / (1 + hv_24x7@PM_ALINK3_OUT_ODD_AVLBL_CYCLES\\,chip\\=?@ + hv_24x7@PM_ALINK3_OUT_EVEN_AVLBL_CYCLES\\,chip\\=?@)) * 100",
"ScaleUnit": "1.063%",
"AggregationMode": "PerChip"
},
{
"MetricName": "ALINK4_OUT_DATA_UTILIZATION",
- "MetricExpr": "((hv_24x7@PM_ALINK4_OUT_ODD_DATA\\,chip\\=?@ + hv_24x7@PM_ALINK4_OUT_EVEN_DATA\\,chip\\=?@) / (hv_24x7@PM_ALINK4_OUT_ODD_AVLBL_CYCLES\\,chip\\=?@ + hv_24x7@PM_ALINK4_OUT_EVEN_AVLBL_CYCLES\\,chip\\=?@)) * 100",
+ "MetricExpr": "((hv_24x7@PM_ALINK4_OUT_ODD_DATA\\,chip\\=?@ + hv_24x7@PM_ALINK4_OUT_EVEN_DATA\\,chip\\=?@) / (1 + hv_24x7@PM_ALINK4_OUT_ODD_AVLBL_CYCLES\\,chip\\=?@ + hv_24x7@PM_ALINK4_OUT_EVEN_AVLBL_CYCLES\\,chip\\=?@)) * 100",
"ScaleUnit": "1.063%",
"AggregationMode": "PerChip"
},
{
"MetricName": "ALINK5_OUT_DATA_UTILIZATION",
- "MetricExpr": "((hv_24x7@PM_ALINK5_OUT_ODD_DATA\\,chip\\=?@ + hv_24x7@PM_ALINK5_OUT_EVEN_DATA\\,chip\\=?@) / (hv_24x7@PM_ALINK5_OUT_ODD_AVLBL_CYCLES\\,chip\\=?@ + hv_24x7@PM_ALINK5_OUT_EVEN_AVLBL_CYCLES\\,chip\\=?@)) * 100",
+ "MetricExpr": "((hv_24x7@PM_ALINK5_OUT_ODD_DATA\\,chip\\=?@ + hv_24x7@PM_ALINK5_OUT_EVEN_DATA\\,chip\\=?@) / (1 + hv_24x7@PM_ALINK5_OUT_ODD_AVLBL_CYCLES\\,chip\\=?@ + hv_24x7@PM_ALINK5_OUT_EVEN_AVLBL_CYCLES\\,chip\\=?@)) * 100",
"ScaleUnit": "1.063%",
"AggregationMode": "PerChip"
},
{
"MetricName": "ALINK6_OUT_DATA_UTILIZATION",
- "MetricExpr": "((hv_24x7@PM_ALINK6_OUT_ODD_DATA\\,chip\\=?@ + hv_24x7@PM_ALINK6_OUT_EVEN_DATA\\,chip\\=?@) / (hv_24x7@PM_ALINK6_OUT_ODD_AVLBL_CYCLES\\,chip\\=?@ + hv_24x7@PM_ALINK6_OUT_EVEN_AVLBL_CYCLES\\,chip\\=?@)) * 100",
+ "MetricExpr": "((hv_24x7@PM_ALINK6_OUT_ODD_DATA\\,chip\\=?@ + hv_24x7@PM_ALINK6_OUT_EVEN_DATA\\,chip\\=?@) / (1 + hv_24x7@PM_ALINK6_OUT_ODD_AVLBL_CYCLES\\,chip\\=?@ + hv_24x7@PM_ALINK6_OUT_EVEN_AVLBL_CYCLES\\,chip\\=?@)) * 100",
"ScaleUnit": "1.063%",
"AggregationMode": "PerChip"
},
{
"MetricName": "ALINK7_OUT_DATA_UTILIZATION",
- "MetricExpr": "((hv_24x7@PM_ALINK7_OUT_ODD_DATA\\,chip\\=?@ + hv_24x7@PM_ALINK7_OUT_EVEN_DATA\\,chip\\=?@) / (hv_24x7@PM_ALINK7_OUT_ODD_AVLBL_CYCLES\\,chip\\=?@ + hv_24x7@PM_ALINK7_OUT_EVEN_AVLBL_CYCLES\\,chip\\=?@)) * 100",
+ "MetricExpr": "((hv_24x7@PM_ALINK7_OUT_ODD_DATA\\,chip\\=?@ + hv_24x7@PM_ALINK7_OUT_EVEN_DATA\\,chip\\=?@) / (1 + hv_24x7@PM_ALINK7_OUT_ODD_AVLBL_CYCLES\\,chip\\=?@ + hv_24x7@PM_ALINK7_OUT_EVEN_AVLBL_CYCLES\\,chip\\=?@)) * 100",
"ScaleUnit": "1.063%",
"AggregationMode": "PerChip"
},
test_virtual_lbr()
{
echo "--- Test virtual LBR ---"
+ # Check if python scripting is supported
+ libpython=$(perf version --build-options | grep python | grep -cv OFF)
+ if [ "${libpython}" != "1" ] ; then
+ echo "SKIP: python scripting is not supported"
+ return 2
+ fi
# Python script to determine the maximum size of branch stacks
cat << "_end_of_file_" > "${maxbrstack}"
P_FLAG(BLOCKS);
P_FLAG(BTIME);
P_FLAG(MNT_ID);
+ P_FLAG(DIOALIGN);
#undef P_FLAG
bool near;
};
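+/*
+ * Match a symbol name exactly, or as a prefix followed by a tab, since
+ * kallsyms entries can carry a tab-separated suffix (e.g. "\t[module]").
+ */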
+static bool kern_sym_name_match(const char *kname, const char *name)
+{
+ size_t n = strlen(name);
+
+ return !strcmp(kname, name) ||
+ (!strncmp(kname, name, n) && kname[n] == '\t');
+}
+
static bool kern_sym_match(struct sym_args *args, const char *name, char type)
{
/* A function with the same name, and global or the n'th found or any */
return kallsyms__is_function(type) &&
- !strcmp(name, args->name) &&
+ kern_sym_name_match(name, args->name) &&
((args->global && isupper(type)) ||
(args->selected && ++(args->cnt) == args->idx) ||
(!args->global && !args->selected));
#endif
#ifndef HAVE_LIBBPF_BPF_PROG_LOAD
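+/*
+ * Fallback for older libbpf releases that lack bpf_prog_load(): declare the
+ * legacy bpf_load_program() so the compat wrapper defined here can use it.
+ */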
+LIBBPF_API int bpf_load_program(enum bpf_prog_type type,
+ const struct bpf_insn *insns, size_t insns_cnt,
+ const char *license, __u32 kern_version,
+ char *log_buf, size_t log_buf_sz);
+
int bpf_prog_load(enum bpf_prog_type prog_type,
const char *prog_name __maybe_unused,
const char *license,
#include <internal/xyarray.h>
+#ifndef HAVE_LIBBPF_BPF_PROGRAM__SET_INSNS
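+/*
+ * Stubs for APIs missing from older libbpf releases; they only report that
+ * a libbpf update is required.
+ */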
+int bpf_program__set_insns(struct bpf_program *prog __maybe_unused,
+ struct bpf_insn *new_insns __maybe_unused, size_t new_insn_cnt __maybe_unused)
+{
+ pr_err("%s: not support, update libbpf\n", __func__);
+ return -ENOTSUP;
+}
+
+int libbpf_register_prog_handler(const char *sec __maybe_unused,
+ enum bpf_prog_type prog_type __maybe_unused,
+ enum bpf_attach_type exp_attach_type __maybe_unused,
+ const struct libbpf_prog_handler_opts *opts __maybe_unused)
+{
+ pr_err("%s: not support, update libbpf\n", __func__);
+ return -ENOTSUP;
+}
+#endif
+
/* temporarily disable libbpf deprecation warnings */
#pragma GCC diagnostic ignored "-Wdeprecated-declarations"
SYM_ALIAS(alias, name, SYM_T_FUNC, SYM_L_WEAK)
#endif
+// In the kernel sources (include/linux/cfi_types.h), this has a different
+// definition when CONFIG_CFI_CLANG is enabled; for tools/ we just use the
+// !CONFIG_CFI_CLANG definition:
+#ifndef SYM_TYPED_START
+#define SYM_TYPED_START(name, linkage, align...) \
+ SYM_START(name, linkage, align)
+#endif
+
+#ifndef SYM_TYPED_FUNC_START
+#define SYM_TYPED_FUNC_START(name) \
+ SYM_TYPED_START(name, SYM_L_GLOBAL, SYM_A_ALIGN)
+#endif
+
#endif /* PERF_LINUX_LINKAGE_H_ */
|_| |___/ |_|
pm-graph: suspend/resume/boot timing analysis tools
- Version: 5.9
+ Version: 5.10
Author: Todd Brandt <todd.e.brandt@intel.com>
- Home Page: https://01.org/pm-graph
+ Home Page: https://www.intel.com/content/www/us/en/developer/topic-technology/open/pm-graph/overview.html
Report bugs/issues at bugzilla.kernel.org Tools/pm-graph
- https://bugzilla.kernel.org/buglist.cgi?component=pm-graph&product=Tools
Full documentation available online & in man pages
- Getting Started:
- https://01.org/pm-graph/documentation/getting-started
+ https://www.intel.com/content/www/us/en/developer/articles/technical/usage.html
- - Config File Format:
- https://01.org/pm-graph/documentation/3-config-file-format
+ - Feature Summary:
+ https://www.intel.com/content/www/us/en/developer/topic-technology/open/pm-graph/features.html
- upstream version in git:
- https://github.com/intel/pm-graph/
+ git clone https://github.com/intel/pm-graph/
Table of Contents
- Overview
If a wifi connection is available, check that it reconnects after resume. Include
the reconnect time in the total resume time calculation and treat wifi timeouts
as resume failures.
+.TP
+\fB-wifitrace\fR
+Trace through the wifi reconnect and include the reconnect time in the timeline.
.SS "advanced"
.TP
# store system values and test parameters
class SystemValues:
title = 'SleepGraph'
- version = '5.9'
+ version = '5.10'
ansi = False
rs = 0
display = ''
ftracelog = False
acpidebug = True
tstat = True
+ wifitrace = False
mindevlen = 0.0001
mincglen = 0.0
cgphase = ''
epath = '/sys/kernel/debug/tracing/events/power/'
pmdpath = '/sys/power/pm_debug_messages'
s0ixpath = '/sys/module/intel_pmc_core/parameters/warn_on_s0ix_failures'
+ s0ixres = '/sys/devices/system/cpu/cpuidle/low_power_idle_system_residency_us'
acpipath='/sys/module/acpi/parameters/debug_level'
traceevents = [
'suspend_resume',
tmstart = 'SUSPEND START %Y%m%d-%H:%M:%S.%f'
tmend = 'RESUME COMPLETE %Y%m%d-%H:%M:%S.%f'
tracefuncs = {
+ 'async_synchronize_full': {},
'sys_sync': {},
'ksys_sync': {},
'__pm_notifier_call_chain': {},
[2, 'suspendstats', 'sh', '-c', 'grep -v invalid /sys/power/suspend_stats/*'],
[2, 'cpuidle', 'sh', '-c', 'grep -v invalid /sys/devices/system/cpu/cpu*/cpuidle/state*/s2idle/*'],
[2, 'battery', 'sh', '-c', 'grep -v invalid /sys/class/power_supply/*/*'],
+ [2, 'thermal', 'sh', '-c', 'grep . /sys/class/thermal/thermal_zone*/temp'],
]
cgblacklist = []
kprobes = dict()
return
if not quiet:
sysvals.printSystemInfo(False)
- pprint('INITIALIZING FTRACE...')
+ pprint('INITIALIZING FTRACE')
# turn trace off
self.fsetVal('0', 'tracing_on')
self.cleanupFtrace()
for name in self.dev_tracefuncs:
self.defaultKprobe(name, self.dev_tracefuncs[name])
if not quiet:
- pprint('INITIALIZING KPROBES...')
+ pprint('INITIALIZING KPROBES')
self.addKprobes(self.verbose)
if(self.usetraceevents):
# turn trace events on
self.cfgdef[file] = fp.read().strip()
fp.write(value)
fp.close()
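+ # s0ix is supported when the low-power idle residency counter exists
+ # and mem_sleep is set to s2idle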
+ def s0ixSupport(self):
+ if not os.path.exists(self.s0ixres) or not os.path.exists(self.mempowerfile):
+ return False
+ fp = open(sysvals.mempowerfile, 'r')
+ data = fp.read().strip()
+ fp.close()
+ if '[s2idle]' in data:
+ return True
+ return False
def haveTurbostat(self):
if not self.tstat:
return False
self.vprint(out)
return True
return False
- def turbostat(self):
+ def turbostat(self, s0ixready):
cmd = self.getExec('turbostat')
rawout = keyline = valline = ''
fullcmd = '%s -q -S echo freeze > %s' % (cmd, self.powerfile)
for key in keyline:
idx = keyline.index(key)
val = valline[idx]
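+ # drop a zero SYS%LPI column when s0ix support was not detected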
+ if key == 'SYS%LPI' and not s0ixready and re.match('^[0\.]*$', val):
+ continue
out.append('%s=%s' % (key, val))
return '|'.join(out)
def netfixon(self, net='both'):
out = ascii(fp.read()).strip()
fp.close()
return out
- def wifiRepair(self):
- out = self.netfixon('wifi')
- if not out or 'error' in out.lower():
- return ''
- m = re.match('WIFI \S* ONLINE (?P<action>\S*)', out)
- if not m:
- return 'dead'
- return m.group('action')
def wifiDetails(self, dev):
try:
info = open('/sys/class/net/%s/device/uevent' % dev, 'r').read().strip()
return '%s reconnected %.2f' % \
(self.wifiDetails(dev), max(0, time.time() - start))
time.sleep(0.01)
- if self.netfix:
- res = self.wifiRepair()
- if res:
- timeout = max(0, time.time() - start)
- return '%s %s %d' % (self.wifiDetails(dev), res, timeout)
return '%s timeout %d' % (self.wifiDetails(dev), timeout)
def errorSummary(self, errinfo, msg):
found = False
for i in self.rslist:
self.setVal(self.rstgt, i)
pprint('runtime suspend settings restored on %d devices' % len(self.rslist))
+ def start(self, pm):
+ if self.useftrace:
+ self.dlog('start ftrace tracing')
+ self.fsetVal('1', 'tracing_on')
+ if self.useprocmon:
+ self.dlog('start the process monitor')
+ pm.start()
+ def stop(self, pm):
+ if self.useftrace:
+ if self.useprocmon:
+ self.dlog('stop the process monitor')
+ pm.stop()
+ self.dlog('stop ftrace tracing')
+ self.fsetVal('0', 'tracing_on')
sysvals = SystemValues()
switchvalues = ['enable', 'disable', 'on', 'off', 'true', 'false', '1', '0']
ubiquitous = False
if kprobename in dtf and 'ub' in dtf[kprobename]:
ubiquitous = True
- title = cdata+' '+rdata
- mstr = '\(.*\) *(?P<args>.*) *\((?P<caller>.*)\+.* arg1=(?P<ret>.*)'
- m = re.match(mstr, title)
- if m:
- c = m.group('caller')
- a = m.group('args').strip()
- r = m.group('ret')
+ mc = re.match('\(.*\) *(?P<args>.*)', cdata)
+ mr = re.match('\((?P<caller>\S*).* arg1=(?P<ret>.*)', rdata)
+ if mc and mr:
+ c = mr.group('caller').split('+')[0]
+ a = mc.group('args').strip()
+ r = mr.group('ret')
if len(r) > 6:
r = ''
else:
r = 'ret=%s ' % r
if ubiquitous and c in dtf and 'ub' in dtf[c]:
return False
+ else:
+ return False
color = sysvals.kprobeColor(kprobename)
e = DevFunction(displayname, a, c, r, start, end, ubiquitous, proc, pid, color)
tgtdev['src'].append(e)
e.time = self.trimTimeVal(e.time, t0, dT, left)
e.end = self.trimTimeVal(e.end, t0, dT, left)
e.length = e.end - e.time
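+ # rebase cpu exec intervals onto the trimmed timeline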
+ if('cpuexec' in d):
+ cpuexec = dict()
+ for e in d['cpuexec']:
+ c0, cN = e
+ c0 = self.trimTimeVal(c0, t0, dT, left)
+ cN = self.trimTimeVal(cN, t0, dT, left)
+ cpuexec[(c0, cN)] = d['cpuexec'][e]
+ d['cpuexec'] = cpuexec
for dir in ['suspend', 'resume']:
list = []
for e in self.errorinfo[dir]:
return d
def addProcessUsageEvent(self, name, times):
# get the start and end times for this process
- maxC = 0
- tlast = 0
- start = -1
- end = -1
+ cpuexec = dict()
+ tlast = start = end = -1
for t in sorted(times):
- if tlast == 0:
+ if tlast < 0:
tlast = t
continue
- if name in self.pstl[t]:
- if start == -1 or tlast < start:
+ if name in self.pstl[t] and self.pstl[t][name] > 0:
+ if start < 0:
start = tlast
- if end == -1 or t > end:
- end = t
+ end, key = t, (tlast, t)
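+ # cpuexec maps each sample interval to a 0..1 load fraction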
+ maxj = (t - tlast) * 1024.0
+ cpuexec[key] = min(1.0, float(self.pstl[t][name]) / maxj)
tlast = t
- if start == -1 or end == -1:
- return 0
+ if start < 0 or end < 0:
+ return
# add a new action for this process and get the object
out = self.newActionGlobal(name, start, end, -3)
- if not out:
- return 0
- phase, devname = out
- dev = self.dmesg[phase]['list'][devname]
- # get the cpu exec data
- tlast = 0
- clast = 0
- cpuexec = dict()
- for t in sorted(times):
- if tlast == 0 or t <= start or t > end:
- tlast = t
- continue
- list = self.pstl[t]
- c = 0
- if name in list:
- c = list[name]
- if c > maxC:
- maxC = c
- if c != clast:
- key = (tlast, t)
- cpuexec[key] = c
- tlast = t
- clast = c
- dev['cpuexec'] = cpuexec
- return maxC
+ if out:
+ phase, devname = out
+ dev = self.dmesg[phase]['list'][devname]
+ dev['cpuexec'] = cpuexec
def createProcessUsageEvents(self):
- # get an array of process names
- proclist = []
- for t in sorted(self.pstl):
- pslist = self.pstl[t]
- for ps in sorted(pslist):
- if ps not in proclist:
- proclist.append(ps)
- # get a list of data points for suspend and resume
- tsus = []
- tres = []
+ # get an array of process names and times
+ proclist = {'sus': dict(), 'res': dict()}
+ tdata = {'sus': [], 'res': []}
for t in sorted(self.pstl):
- if t < self.tSuspended:
- tsus.append(t)
- else:
- tres.append(t)
+ dir = 'sus' if t < self.tSuspended else 'res'
+ for ps in sorted(self.pstl[t]):
+ if ps not in proclist[dir]:
+ proclist[dir][ps] = 0
+ tdata[dir].append(t)
# process the events for suspend and resume
- if len(proclist) > 0:
+ if len(proclist['sus']) > 0 or len(proclist['res']) > 0:
sysvals.vprint('Process Execution:')
- for ps in proclist:
- c = self.addProcessUsageEvent(ps, tsus)
- if c > 0:
- sysvals.vprint('%25s (sus): %d' % (ps, c))
- c = self.addProcessUsageEvent(ps, tres)
- if c > 0:
- sysvals.vprint('%25s (res): %d' % (ps, c))
+ for dir in ['sus', 'res']:
+ for ps in sorted(proclist[dir]):
+ self.addProcessUsageEvent(ps, tdata[dir])
def handleEndMarker(self, time, msg=''):
dm = self.dmesg
self.setEnd(time, msg)
# markers, and/or kprobes required for primary parsing.
def doesTraceLogHaveTraceEvents():
kpcheck = ['_cal: (', '_ret: (']
- techeck = ['suspend_resume', 'device_pm_callback']
+ techeck = ['suspend_resume', 'device_pm_callback', 'tracing_mark_write']
tmcheck = ['SUSPEND START', 'RESUME COMPLETE']
sysvals.usekprobes = False
fp = sysvals.openlog(sysvals.ftracefile, 'r')
check.remove(i)
tmcheck = check
fp.close()
- sysvals.usetraceevents = True if len(techeck) < 2 else False
+ sysvals.usetraceevents = True if len(techeck) < 3 else False
sysvals.usetracemarkers = True if len(tmcheck) == 0 else False
# Function: appendIncompleteTraceLog
continue
# process cpu exec line
if t.type == 'tracing_mark_write':
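+ # the CMD COMPLETE marker records the end of kernel resume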
+ if t.name == 'CMD COMPLETE' and data.tKernRes == 0:
+ data.tKernRes = t.time
m = re.match(tp.procexecfmt, t.name)
if(m):
parts, msg = 1, m.group('ps')
e = next((x for x in reversed(tp.ktemp[key]) if x['end'] < 0), 0)
if not e:
continue
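+ # discard kprobe events shorter than the mindevlen threshold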
+ if (t.time - e['begin']) * 1000 < sysvals.mindevlen:
+ tp.ktemp[key].pop()
+ continue
e['end'] = t.time
e['rdata'] = kprobedata
# end of kernel resume
fmt = '<n>(%.3f ms @ '+sv.timeformat+')</n>'
flen = fmt % (line.length*1000, line.time)
if line.isLeaf():
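+ # skip callgraph leaves shorter than mincglen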
+ if line.length * 1000 < sv.mincglen:
+ continue
hf.write(html_func_leaf.format(line.name, flen))
elif line.freturn:
hf.write(html_func_end)
if('cpuexec' in dev):
for t in sorted(dev['cpuexec']):
start, end = t
- j = float(dev['cpuexec'][t]) / 5
- if j > 1.0:
- j = 1.0
height = '%.3f' % (rowheight/3)
top = '%.3f' % (rowtop + devtl.scaleH + 2*rowheight/3)
left = '%f' % (((start-m0)*100)/mTotal)
width = '%f' % ((end-start)*100/mTotal)
- color = 'rgba(255, 0, 0, %f)' % j
+ color = 'rgba(255, 0, 0, %f)' % dev['cpuexec'][t]
devtl.html += \
html_cpuexec.format(left, top, height, width, color)
if('src' not in dev):
call('sync', shell=True)
sv.dlog('read dmesg')
sv.initdmesg()
- # start ftrace
- if sv.useftrace:
- if not quiet:
- pprint('START TRACING')
- sv.dlog('start ftrace tracing')
- sv.fsetVal('1', 'tracing_on')
- if sv.useprocmon:
- sv.dlog('start the process monitor')
- pm.start()
- sv.dlog('run the cmdinfo list before')
+ sv.dlog('cmdinfo before')
sv.cmdinfo(True)
+ sv.start(pm)
# execute however many s/r runs requested
for count in range(1,sv.execcount+1):
# x2delay in between test runs
if res != 0:
tdata['error'] = 'cmd returned %d' % res
else:
+ s0ixready = sv.s0ixSupport()
mode = sv.suspendmode
if sv.memmode and os.path.exists(sv.mempowerfile):
mode = 'mem'
sv.testVal(sv.diskpowerfile, 'radio', sv.diskmode)
if sv.acpidebug:
sv.testVal(sv.acpipath, 'acpi', '0xe')
- if mode == 'freeze' and sv.haveTurbostat():
+ if ((mode == 'freeze') or (sv.memmode == 's2idle')) \
+ and sv.haveTurbostat():
# execution will pause here
- turbo = sv.turbostat()
+ turbo = sv.turbostat(s0ixready)
if turbo:
tdata['turbo'] = turbo
else:
pf.close()
except Exception as e:
tdata['error'] = str(e)
- sv.dlog('system returned from resume')
+ sv.fsetVal('CMD COMPLETE', 'trace_marker')
+ sv.dlog('system returned')
# reset everything
sv.testVal('restoreall')
if(sv.rtcwake):
sv.fsetVal('WAIT END', 'trace_marker')
# return from suspend
pprint('RESUME COMPLETE')
- sv.fsetVal(datetime.now().strftime(sv.tmend), 'trace_marker')
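+	# with -wifitrace set, defer the end marker (and tracing stop) on the
+	# final run until after the wifi reconnect check below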
+ if(count < sv.execcount):
+ sv.fsetVal(datetime.now().strftime(sv.tmend), 'trace_marker')
+ elif(not sv.wifitrace):
+ sv.fsetVal(datetime.now().strftime(sv.tmend), 'trace_marker')
+ sv.stop(pm)
if sv.wifi and wifi:
tdata['wifi'] = sv.pollWifi(wifi)
sv.dlog('wifi check, %s' % tdata['wifi'])
- if sv.netfix:
- netfixout = sv.netfixon('wired')
- elif sv.netfix:
- netfixout = sv.netfixon()
- if sv.netfix and netfixout:
- tdata['netfix'] = netfixout
+ if(count == sv.execcount and sv.wifitrace):
+ sv.fsetVal(datetime.now().strftime(sv.tmend), 'trace_marker')
+ sv.stop(pm)
+ if sv.netfix:
+ tdata['netfix'] = sv.netfixon()
sv.dlog('netfix, %s' % tdata['netfix'])
if(sv.suspendmode == 'mem' or sv.suspendmode == 'command'):
sv.dlog('read the ACPI FPDT')
tdata['fw'] = getFPDT(False)
testdata.append(tdata)
- sv.dlog('run the cmdinfo list after')
+ sv.dlog('cmdinfo after')
cmdafter = sv.cmdinfo(False)
- # stop ftrace
- if sv.useftrace:
- if sv.useprocmon:
- sv.dlog('stop the process monitor')
- pm.stop()
- sv.fsetVal('0', 'tracing_on')
# grab a copy of the dmesg output
if not quiet:
pprint('CAPTURING DMESG')
- sysvals.dlog('EXECUTION TRACE END')
sv.getdmesg(testdata)
# grab a copy of the ftrace output
if sv.useftrace:
if not m:
continue
name, time, phase = m.group('n'), m.group('t'), m.group('p')
+ if name == 'async_synchronize_full':
+ continue
if ' async' in name or ' sync' in name:
name = ' '.join(name.split(' ')[:-1])
if phase.startswith('suspend'):
' -skiphtml Run the test and capture the trace logs, but skip the timeline (default: disabled)\n'\
' -result fn Export a results table to a text file for parsing.\n'\
' -wifi If a wifi connection is available, check that it reconnects after resume.\n'\
+ ' -wifitrace Trace kernel execution through wifi reconnect.\n'\
' -netfix Use netfix to reset the network in the event it fails to resume.\n'\
' [testprep]\n'\
' -sync Sync the filesystems before starting the test\n'\
sysvals.sync = True
elif(arg == '-wifi'):
sysvals.wifi = True
+ elif(arg == '-wifitrace'):
+ sysvals.wifitrace = True
elif(arg == '-netfix'):
sysvals.netfix = True
elif(arg == '-gzip'):
#include "mock.h"
#define NR_CXL_HOST_BRIDGES 2
+#define NR_CXL_SINGLE_HOST 1
#define NR_CXL_ROOT_PORTS 2
#define NR_CXL_SWITCH_PORTS 2
#define NR_CXL_PORT_DECODERS 8
static struct platform_device *cxl_acpi;
static struct platform_device *cxl_host_bridge[NR_CXL_HOST_BRIDGES];
-static struct platform_device
- *cxl_root_port[NR_CXL_HOST_BRIDGES * NR_CXL_ROOT_PORTS];
-static struct platform_device
- *cxl_switch_uport[NR_CXL_HOST_BRIDGES * NR_CXL_ROOT_PORTS];
-static struct platform_device
- *cxl_switch_dport[NR_CXL_HOST_BRIDGES * NR_CXL_ROOT_PORTS *
- NR_CXL_SWITCH_PORTS];
-struct platform_device
- *cxl_mem[NR_CXL_HOST_BRIDGES * NR_CXL_ROOT_PORTS * NR_CXL_SWITCH_PORTS];
+#define NR_MULTI_ROOT (NR_CXL_HOST_BRIDGES * NR_CXL_ROOT_PORTS)
+static struct platform_device *cxl_root_port[NR_MULTI_ROOT];
+static struct platform_device *cxl_switch_uport[NR_MULTI_ROOT];
+#define NR_MEM_MULTI \
+ (NR_CXL_HOST_BRIDGES * NR_CXL_ROOT_PORTS * NR_CXL_SWITCH_PORTS)
+static struct platform_device *cxl_switch_dport[NR_MEM_MULTI];
+
+static struct platform_device *cxl_hb_single[NR_CXL_SINGLE_HOST];
+static struct platform_device *cxl_root_single[NR_CXL_SINGLE_HOST];
+static struct platform_device *cxl_swu_single[NR_CXL_SINGLE_HOST];
+#define NR_MEM_SINGLE (NR_CXL_SINGLE_HOST * NR_CXL_SWITCH_PORTS)
+static struct platform_device *cxl_swd_single[NR_MEM_SINGLE];
+
+struct platform_device *cxl_mem[NR_MEM_MULTI];
+struct platform_device *cxl_mem_single[NR_MEM_SINGLE];
+
+static inline bool is_multi_bridge(struct device *dev)
+{
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(cxl_host_bridge); i++)
+ if (&cxl_host_bridge[i]->dev == dev)
+ return true;
+ return false;
+}
+
+static inline bool is_single_bridge(struct device *dev)
+{
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(cxl_hb_single); i++)
+ if (&cxl_hb_single[i]->dev == dev)
+ return true;
+ return false;
+}
static struct acpi_device acpi0017_mock;
-static struct acpi_device host_bridge[NR_CXL_HOST_BRIDGES] = {
+static struct acpi_device host_bridge[NR_CXL_HOST_BRIDGES + NR_CXL_SINGLE_HOST] = {
[0] = {
.handle = &host_bridge[0],
},
[1] = {
.handle = &host_bridge[1],
},
+ [2] = {
+ .handle = &host_bridge[2],
+ },
};
static bool is_mock_dev(struct device *dev)
for (i = 0; i < ARRAY_SIZE(cxl_mem); i++)
if (dev == &cxl_mem[i]->dev)
return true;
+ for (i = 0; i < ARRAY_SIZE(cxl_mem_single); i++)
+ if (dev == &cxl_mem_single[i]->dev)
+ return true;
if (dev == &cxl_acpi->dev)
return true;
return false;
static struct {
struct acpi_table_cedt cedt;
- struct acpi_cedt_chbs chbs[NR_CXL_HOST_BRIDGES];
+ struct acpi_cedt_chbs chbs[NR_CXL_HOST_BRIDGES + NR_CXL_SINGLE_HOST];
struct {
struct acpi_cedt_cfmws cfmws;
u32 target[1];
struct acpi_cedt_cfmws cfmws;
u32 target[2];
} cfmws3;
+ struct {
+ struct acpi_cedt_cfmws cfmws;
+ u32 target[1];
+ } cfmws4;
} __packed mock_cedt = {
.cedt = {
.header = {
.uid = 1,
.cxl_version = ACPI_CEDT_CHBS_VERSION_CXL20,
},
+ .chbs[2] = {
+ .header = {
+ .type = ACPI_CEDT_TYPE_CHBS,
+ .length = sizeof(mock_cedt.chbs[0]),
+ },
+ .uid = 2,
+ .cxl_version = ACPI_CEDT_CHBS_VERSION_CXL20,
+ },
.cfmws0 = {
.cfmws = {
.header = {
},
.target = { 0, 1, },
},
+ .cfmws4 = {
+ .cfmws = {
+ .header = {
+ .type = ACPI_CEDT_TYPE_CFMWS,
+ .length = sizeof(mock_cedt.cfmws4),
+ },
+ .interleave_ways = 0,
+ .granularity = 4,
+ .restrictions = ACPI_CEDT_CFMWS_RESTRICT_TYPE3 |
+ ACPI_CEDT_CFMWS_RESTRICT_PMEM,
+ .qtg_id = 4,
+ .window_size = SZ_256M * 4UL,
+ },
+ .target = { 2 },
+ },
};
-struct acpi_cedt_cfmws *mock_cfmws[4] = {
+struct acpi_cedt_cfmws *mock_cfmws[] = {
[0] = &mock_cedt.cfmws0.cfmws,
[1] = &mock_cedt.cfmws1.cfmws,
[2] = &mock_cedt.cfmws2.cfmws,
[3] = &mock_cedt.cfmws3.cfmws,
+ [4] = &mock_cedt.cfmws4.cfmws,
};
struct cxl_mock_res {
for (i = 0; i < ARRAY_SIZE(cxl_host_bridge); i++)
if (dev == &cxl_host_bridge[i]->dev)
return true;
+ for (i = 0; i < ARRAY_SIZE(cxl_hb_single); i++)
+ if (dev == &cxl_hb_single[i]->dev)
+ return true;
return false;
}
if (dev == &cxl_switch_dport[i]->dev)
return true;
+ for (i = 0; i < ARRAY_SIZE(cxl_root_single); i++)
+ if (dev == &cxl_root_single[i]->dev)
+ return true;
+
+ for (i = 0; i < ARRAY_SIZE(cxl_swu_single); i++)
+ if (dev == &cxl_swu_single[i]->dev)
+ return true;
+
+ for (i = 0; i < ARRAY_SIZE(cxl_swd_single); i++)
+ if (dev == &cxl_swd_single[i]->dev)
+ return true;
+
if (is_cxl_memdev(dev))
return is_mock_dev(dev->parent);
int i, array_size;
if (port->depth == 1) {
- array_size = ARRAY_SIZE(cxl_root_port);
- array = cxl_root_port;
+ if (is_multi_bridge(port->uport)) {
+ array_size = ARRAY_SIZE(cxl_root_port);
+ array = cxl_root_port;
+ } else if (is_single_bridge(port->uport)) {
+ array_size = ARRAY_SIZE(cxl_root_single);
+ array = cxl_root_single;
+ } else {
+ dev_dbg(&port->dev, "%s: unknown bridge type\n",
+ dev_name(port->uport));
+ return -ENXIO;
+ }
} else if (port->depth == 2) {
- array_size = ARRAY_SIZE(cxl_switch_dport);
- array = cxl_switch_dport;
+ struct cxl_port *parent = to_cxl_port(port->dev.parent);
+
+ if (is_multi_bridge(parent->uport)) {
+ array_size = ARRAY_SIZE(cxl_switch_dport);
+ array = cxl_switch_dport;
+ } else if (is_single_bridge(parent->uport)) {
+ array_size = ARRAY_SIZE(cxl_swd_single);
+ array = cxl_swd_single;
+ } else {
+ dev_dbg(&port->dev, "%s: unknown bridge type\n",
+ dev_name(port->uport));
+ return -ENXIO;
+ }
} else {
dev_WARN_ONCE(&port->dev, 1, "unexpected depth %d\n",
port->depth);
struct platform_device *pdev = array[i];
struct cxl_dport *dport;
- if (pdev->dev.parent != port->uport)
+ if (pdev->dev.parent != port->uport) {
+ dev_dbg(&port->dev, "%s: mismatch parent %s\n",
+ dev_name(port->uport),
+ dev_name(pdev->dev.parent));
continue;
+ }
dport = devm_cxl_add_dport(port, &pdev->dev, pdev->id,
CXL_RESOURCE_NONE);
#define SZ_512G (SZ_64G * 8)
#endif
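+/*
+ * Register a second topology behind a single host bridge: one root port,
+ * one switch uport, and the cxl_swd_single dports backing cxl_mem_single[].
+ */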
+static __init int cxl_single_init(void)
+{
+ int i, rc;
+
+ for (i = 0; i < ARRAY_SIZE(cxl_hb_single); i++) {
+ struct acpi_device *adev =
+ &host_bridge[NR_CXL_HOST_BRIDGES + i];
+ struct platform_device *pdev;
+
+ pdev = platform_device_alloc("cxl_host_bridge",
+ NR_CXL_HOST_BRIDGES + i);
+ if (!pdev)
+ goto err_bridge;
+
+ mock_companion(adev, &pdev->dev);
+ rc = platform_device_add(pdev);
+ if (rc) {
+ platform_device_put(pdev);
+ goto err_bridge;
+ }
+
+ cxl_hb_single[i] = pdev;
+ rc = sysfs_create_link(&pdev->dev.kobj, &pdev->dev.kobj,
+ "physical_node");
+ if (rc)
+ goto err_bridge;
+ }
+
+ for (i = 0; i < ARRAY_SIZE(cxl_root_single); i++) {
+ struct platform_device *bridge =
+ cxl_hb_single[i % ARRAY_SIZE(cxl_hb_single)];
+ struct platform_device *pdev;
+
+ pdev = platform_device_alloc("cxl_root_port",
+ NR_MULTI_ROOT + i);
+ if (!pdev)
+ goto err_port;
+ pdev->dev.parent = &bridge->dev;
+
+ rc = platform_device_add(pdev);
+ if (rc) {
+ platform_device_put(pdev);
+ goto err_port;
+ }
+ cxl_root_single[i] = pdev;
+ }
+
+ for (i = 0; i < ARRAY_SIZE(cxl_swu_single); i++) {
+ struct platform_device *root_port = cxl_root_single[i];
+ struct platform_device *pdev;
+
+ pdev = platform_device_alloc("cxl_switch_uport",
+ NR_MULTI_ROOT + i);
+ if (!pdev)
+ goto err_uport;
+ pdev->dev.parent = &root_port->dev;
+
+ rc = platform_device_add(pdev);
+ if (rc) {
+ platform_device_put(pdev);
+ goto err_uport;
+ }
+ cxl_swu_single[i] = pdev;
+ }
+
+ for (i = 0; i < ARRAY_SIZE(cxl_swd_single); i++) {
+ struct platform_device *uport =
+ cxl_swu_single[i % ARRAY_SIZE(cxl_swu_single)];
+ struct platform_device *pdev;
+
+ pdev = platform_device_alloc("cxl_switch_dport",
+ i + NR_MEM_MULTI);
+ if (!pdev)
+ goto err_dport;
+ pdev->dev.parent = &uport->dev;
+
+ rc = platform_device_add(pdev);
+ if (rc) {
+ platform_device_put(pdev);
+ goto err_dport;
+ }
+ cxl_swd_single[i] = pdev;
+ }
+
+ for (i = 0; i < ARRAY_SIZE(cxl_mem_single); i++) {
+ struct platform_device *dport = cxl_swd_single[i];
+ struct platform_device *pdev;
+
+ pdev = platform_device_alloc("cxl_mem", NR_MEM_MULTI + i);
+ if (!pdev)
+ goto err_mem;
+ pdev->dev.parent = &dport->dev;
+ set_dev_node(&pdev->dev, i % 2);
+
+ rc = platform_device_add(pdev);
+ if (rc) {
+ platform_device_put(pdev);
+ goto err_mem;
+ }
+ cxl_mem_single[i] = pdev;
+ }
+
+ return 0;
+
+err_mem:
+ for (i = ARRAY_SIZE(cxl_mem_single) - 1; i >= 0; i--)
+ platform_device_unregister(cxl_mem_single[i]);
+err_dport:
+ for (i = ARRAY_SIZE(cxl_swd_single) - 1; i >= 0; i--)
+ platform_device_unregister(cxl_swd_single[i]);
+err_uport:
+ for (i = ARRAY_SIZE(cxl_swu_single) - 1; i >= 0; i--)
+ platform_device_unregister(cxl_swu_single[i]);
+err_port:
+ for (i = ARRAY_SIZE(cxl_root_single) - 1; i >= 0; i--)
+ platform_device_unregister(cxl_root_single[i]);
+err_bridge:
+ for (i = ARRAY_SIZE(cxl_hb_single) - 1; i >= 0; i--) {
+ struct platform_device *pdev = cxl_hb_single[i];
+
+ if (!pdev)
+ continue;
+ sysfs_remove_link(&pdev->dev.kobj, "physical_node");
+ platform_device_unregister(cxl_hb_single[i]);
+ }
+
+ return rc;
+}
+
+static void cxl_single_exit(void)
+{
+ int i;
+
+ for (i = ARRAY_SIZE(cxl_mem_single) - 1; i >= 0; i--)
+ platform_device_unregister(cxl_mem_single[i]);
+ for (i = ARRAY_SIZE(cxl_swd_single) - 1; i >= 0; i--)
+ platform_device_unregister(cxl_swd_single[i]);
+ for (i = ARRAY_SIZE(cxl_swu_single) - 1; i >= 0; i--)
+ platform_device_unregister(cxl_swu_single[i]);
+ for (i = ARRAY_SIZE(cxl_root_single) - 1; i >= 0; i--)
+ platform_device_unregister(cxl_root_single[i]);
+ for (i = ARRAY_SIZE(cxl_hb_single) - 1; i >= 0; i--) {
+ struct platform_device *pdev = cxl_hb_single[i];
+
+ if (!pdev)
+ continue;
+ sysfs_remove_link(&pdev->dev.kobj, "physical_node");
+ platform_device_unregister(cxl_hb_single[i]);
+ }
+}
+
static __init int cxl_test_init(void)
{
int rc, i;
pdev = platform_device_alloc("cxl_switch_uport", i);
if (!pdev)
- goto err_port;
+ goto err_uport;
pdev->dev.parent = &root_port->dev;
rc = platform_device_add(pdev);
pdev = platform_device_alloc("cxl_switch_dport", i);
if (!pdev)
- goto err_port;
+ goto err_dport;
pdev->dev.parent = &uport->dev;
rc = platform_device_add(pdev);
cxl_switch_dport[i] = pdev;
}
- BUILD_BUG_ON(ARRAY_SIZE(cxl_mem) != ARRAY_SIZE(cxl_switch_dport));
for (i = 0; i < ARRAY_SIZE(cxl_mem); i++) {
struct platform_device *dport = cxl_switch_dport[i];
struct platform_device *pdev;
cxl_mem[i] = pdev;
}
+ rc = cxl_single_init();
+ if (rc)
+ goto err_mem;
+
cxl_acpi = platform_device_alloc("cxl_acpi", 0);
if (!cxl_acpi)
- goto err_mem;
+ goto err_single;
mock_companion(&acpi0017_mock, &cxl_acpi->dev);
acpi0017_mock.dev.bus = &platform_bus_type;
err_add:
platform_device_put(cxl_acpi);
+err_single:
+ cxl_single_exit();
err_mem:
for (i = ARRAY_SIZE(cxl_mem) - 1; i >= 0; i--)
platform_device_unregister(cxl_mem[i]);
int i;
platform_device_unregister(cxl_acpi);
+ cxl_single_exit();
for (i = ARRAY_SIZE(cxl_mem) - 1; i >= 0; i--)
platform_device_unregister(cxl_mem[i]);
for (i = ARRAY_SIZE(cxl_switch_dport) - 1; i >= 0; i--)
TARGETS += net/af_unix
TARGETS += net/forwarding
TARGETS += net/mptcp
+TARGETS += net/openvswitch
TARGETS += netfilter
TARGETS += nsfs
TARGETS += pidfd
.btf_load_err = true,
.err_str = "Invalid type_id",
},
+{
+ .descr = "decl_tag test #16, func proto, return type",
+ .raw_types = {
+ BTF_TYPE_INT_ENC(0, BTF_INT_SIGNED, 0, 32, 4), /* [1] */
+ BTF_VAR_ENC(NAME_TBD, 1, 0), /* [2] */
+ BTF_TYPE_ENC(NAME_TBD, BTF_INFO_ENC(BTF_KIND_DECL_TAG, 0, 0), 2), (-1), /* [3] */
+ BTF_FUNC_PROTO_ENC(3, 0), /* [4] */
+ BTF_END_RAW,
+ },
+ BTF_STR_SEC("\0local\0tag1"),
+ .btf_load_err = true,
+ .err_str = "Invalid return type",
+},
{
.descr = "type_tag test #1",
.raw_types = {
if (status) {
bpf_printk("bpf_dynptr_read() failed: %d\n", status);
err = 1;
- return 0;
+ return 1;
}
} else {
sample = bpf_dynptr_data(dynptr, 0, sizeof(*sample));
if (!sample) {
bpf_printk("Unexpectedly failed to get sample\n");
err = 2;
- return 0;
+ return 1;
}
stack_sample = *sample;
}
bond-lladdr-target.sh \
dev_addr_lists.sh
-TEST_FILES := lag_lib.sh
+TEST_FILES := \
+ lag_lib.sh \
+ net_forwarding_lib.sh
include ../../../lib.mk
REQUIRE_MZ=no
NUM_NETIFS=0
lib_dir=$(dirname "$0")
-source "$lib_dir"/../../../net/forwarding/lib.sh
+source "$lib_dir"/net_forwarding_lib.sh
source "$lib_dir"/lag_lib.sh
--- /dev/null
+../../../net/forwarding/lib.sh
\ No newline at end of file
REQUIRE_JQ="no"
REQUIRE_MZ="no"
NETIF_CREATE="no"
-lib_dir=$(dirname $0)/../../../net/forwarding
-source $lib_dir/lib.sh
+lib_dir=$(dirname "$0")
+source "$lib_dir"/lib.sh
cleanup() {
echo "Cleaning up"
TEST_PROGS := dev_addr_lists.sh
+TEST_FILES := \
+ lag_lib.sh \
+ net_forwarding_lib.sh
+
include ../../../lib.mk
REQUIRE_MZ=no
NUM_NETIFS=0
lib_dir=$(dirname "$0")
-source "$lib_dir"/../../../net/forwarding/lib.sh
+source "$lib_dir"/net_forwarding_lib.sh
-source "$lib_dir"/../bonding/lag_lib.sh
+source "$lib_dir"/lag_lib.sh
destroy()
{
- local ifnames=(dummy0 dummy1 team0 mv0)
+ local ifnames=(dummy1 dummy2 team0 mv0)
local ifname
for ifname in "${ifnames[@]}"; do
--- /dev/null
+../bonding/lag_lib.sh
\ No newline at end of file
--- /dev/null
+../../../net/forwarding/lib.sh
\ No newline at end of file
#!/bin/sh
# SPDX-License-Identifier: GPL-2.0
# description: Generic dynamic event - check if duplicate events are caught
-# requires: dynamic_events "e[:[<group>/]<event>] <attached-group>.<attached-event> [<args>]":README
+# requires: dynamic_events "e[:[<group>/][<event>]] <attached-group>.<attached-event> [<args>]":README
echo 0 > events/enable
#!/bin/sh
# SPDX-License-Identifier: GPL-2.0
# description: event trigger - test inter-event histogram trigger eprobe on synthetic event
-# requires: dynamic_events synthetic_events events/syscalls/sys_enter_openat/hist "e[:[<group>/]<event>] <attached-group>.<attached-event> [<args>]":README
+# requires: dynamic_events synthetic_events events/syscalls/sys_enter_openat/hist "e[:[<group>/][<event>]] <attached-group>.<attached-event> [<args>]":README
echo 0 > events/enable
CFLAGS := $(CFLAGS) -g -O2 -Wall -D_GNU_SOURCE -pthread $(INCLUDES) $(KHDR_INCLUDES)
LDLIBS := -lpthread -lrt
-HEADERS := \
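+# lib.mk adds $(LOCAL_HDRS) as a dependency of every generated test binary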
+LOCAL_HDRS := \
../include/futextest.h \
../include/atomic.h \
../include/logging.h
-TEST_GEN_FILES := \
+TEST_GEN_PROGS := \
futex_wait_timeout \
futex_wait_wouldblock \
futex_requeue_pi \
top_srcdir = ../../../../..
DEFAULT_INSTALL_HDR_PATH := 1
include ../../lib.mk
-
-$(TEST_GEN_FILES): $(HEADERS)
CFLAGS := $(CFLAGS) -Wall -D_GNU_SOURCE
LDLIBS += -lm
-uname_M := $(shell uname -m 2>/dev/null || echo not)
-ARCH ?= $(shell echo $(uname_M) | sed -e s/i.86/x86/ -e s/x86_64/x86/)
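+# Let ARCH be overridden from the command line; do the i*86/x86_64 -> x86
+# mapping in a separate variable so the override is preserved.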
+ARCH ?= $(shell uname -m 2>/dev/null || echo not)
+ARCH_PROCESSED := $(shell echo $(ARCH) | sed -e s/i.86/x86/ -e s/x86_64/x86/)
-ifeq (x86,$(ARCH))
+ifeq (x86,$(ARCH_PROCESSED))
TEST_GEN_FILES := msr aperf
endif
# SPDX-License-Identifier: GPL-2.0-only
# Makefile for kexec tests
-uname_M := $(shell uname -m 2>/dev/null || echo not)
-ARCH ?= $(shell echo $(uname_M) | sed -e s/i.86/x86/ -e s/x86_64/x86/)
+ARCH ?= $(shell uname -m 2>/dev/null || echo not)
+ARCH_PROCESSED := $(shell echo $(ARCH) | sed -e s/i.86/x86/ -e s/x86_64/x86/)
-ifeq ($(ARCH),$(filter $(ARCH),x86 ppc64le))
+ifeq ($(ARCH_PROCESSED),$(filter $(ARCH_PROCESSED),x86 ppc64le))
TEST_PROGS := test_kexec_load.sh test_kexec_file_load.sh
TEST_FILES := kexec_common_lib.sh
: KVM_DEV_TYPE_ARM_VGIC_V2;
if (!__kvm_test_create_device(v.vm, other)) {
- ret = __kvm_test_create_device(v.vm, other);
- TEST_ASSERT(ret && (errno == EINVAL || errno == EEXIST),
+ ret = __kvm_create_device(v.vm, other);
+ TEST_ASSERT(ret < 0 && (errno == EINVAL || errno == EEXIST),
"create GIC device while other version exists");
}
static void add_remove_memslot(struct kvm_vm *vm, useconds_t delay,
uint64_t nr_modifications)
{
- const uint64_t pages = 1;
+ uint64_t pages = max_t(int, vm->page_size, getpagesize()) / vm->page_size;
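+	/* cover at least one host page even when guest pages are smaller */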
uint64_t gpa;
int i;
#include <time.h>
#include <sched.h>
#include <signal.h>
+#include <pthread.h>
#include <sys/eventfd.h>
+/* Defined in include/linux/kvm_types.h */
+#define GPA_INVALID (~(ulong)0)
+
#define SHINFO_REGION_GVA 0xc0000000ULL
#define SHINFO_REGION_GPA 0xc0000000ULL
#define SHINFO_REGION_SLOT 10
#define MIN_STEAL_TIME 50000
+#define SHINFO_RACE_TIMEOUT 2 /* seconds */
+
#define __HYPERVISOR_set_timer_op 15
#define __HYPERVISOR_sched_op 29
#define __HYPERVISOR_event_channel_op 32
struct kvm_irq_routing_entry entries[2];
} irq_routes;
-bool guest_saw_irq;
+static volatile bool guest_saw_irq;
static void evtchn_handler(struct ex_regs *regs)
{
static void guest_code(void)
{
struct vcpu_runstate_info *rs = (void *)RUNSTATE_VADDR;
+ int i;
__asm__ __volatile__(
"sti\n"
guest_wait_for_irq();
GUEST_SYNC(21);
+ /* Racing host ioctls */
+
+ guest_wait_for_irq();
+
+ GUEST_SYNC(22);
+ /* Racing vmcall against host ioctl */
+
+ ports[0] = 0;
+
+ p = (struct sched_poll) {
+ .ports = ports,
+ .nr_ports = 1,
+ .timeout = 0
+ };
+
+wait_for_timer:
+ /*
+ * Poll for a timer wake event while the worker thread is mucking with
+ * the shared info. KVM XEN drops timer IRQs if the shared info is
+ * invalid when the timer expires. Arbitrarily poll 100 times before
+ * giving up and asking the VMM to re-arm the timer. 100 polls should
+ * consume enough time to beat on KVM without taking too long if the
+ * timer IRQ is dropped due to an invalid event channel.
+ */
+ for (i = 0; i < 100 && !guest_saw_irq; i++)
+ asm volatile("vmcall"
+ : "=a" (rax)
+ : "a" (__HYPERVISOR_sched_op),
+ "D" (SCHEDOP_poll),
+ "S" (&p)
+ : "memory");
+
+ /*
+ * Re-send the timer IRQ if it was (likely) dropped due to the timer
+ * expiring while the event channel was invalid.
+ */
+ if (!guest_saw_irq) {
+ GUEST_SYNC(23);
+ goto wait_for_timer;
+ }
+ guest_saw_irq = false;
+
+ GUEST_SYNC(24);
}
static int cmp_timespec(struct timespec *a, struct timespec *b)
TEST_FAIL("IRQ delivery timed out");
}
+static void *juggle_shinfo_state(void *arg)
+{
+ struct kvm_vm *vm = (struct kvm_vm *)arg;
+
+ struct kvm_xen_hvm_attr cache_init = {
+ .type = KVM_XEN_ATTR_TYPE_SHARED_INFO,
+ .u.shared_info.gfn = SHINFO_REGION_GPA / PAGE_SIZE
+ };
+
+ struct kvm_xen_hvm_attr cache_destroy = {
+ .type = KVM_XEN_ATTR_TYPE_SHARED_INFO,
+ .u.shared_info.gfn = GPA_INVALID
+ };
+
+ for (;;) {
+ __vm_ioctl(vm, KVM_XEN_HVM_SET_ATTR, &cache_init);
+ __vm_ioctl(vm, KVM_XEN_HVM_SET_ATTR, &cache_destroy);
+ pthread_testcancel();
+ }
+
+ return NULL;
+}
+
int main(int argc, char *argv[])
{
struct timespec min_ts, max_ts, vm_ts;
struct kvm_vm *vm;
+ pthread_t thread;
bool verbose;
+ int ret;
verbose = argc > 1 && (!strncmp(argv[1], "-v", 3) ||
!strncmp(argv[1], "--verbose", 10));
case 21:
TEST_ASSERT(!evtchn_irq_expected,
"Expected event channel IRQ but it didn't happen");
+ alarm(0);
+
+ if (verbose)
+ printf("Testing shinfo lock corruption (KVM_XEN_HVM_EVTCHN_SEND)\n");
+
+ ret = pthread_create(&thread, NULL, &juggle_shinfo_state, (void *)vm);
+ TEST_ASSERT(ret == 0, "pthread_create() failed: %s", strerror(ret));
+
+ struct kvm_irq_routing_xen_evtchn uxe = {
+ .port = 1,
+ .vcpu = vcpu->id,
+ .priority = KVM_IRQ_ROUTING_XEN_EVTCHN_PRIO_2LEVEL
+ };
+
+ evtchn_irq_expected = true;
+ for (time_t t = time(NULL) + SHINFO_RACE_TIMEOUT; time(NULL) < t;)
+ __vm_ioctl(vm, KVM_XEN_HVM_EVTCHN_SEND, &uxe);
+ break;
+
+ case 22:
+ TEST_ASSERT(!evtchn_irq_expected,
+ "Expected event channel IRQ but it didn't happen");
+
+ if (verbose)
+ printf("Testing shinfo lock corruption (SCHEDOP_poll)\n");
+
+ shinfo->evtchn_pending[0] = 1;
+
+ evtchn_irq_expected = true;
+ tmr.u.timer.expires_ns = rs->state_entry_time +
+ SHINFO_RACE_TIMEOUT * 1000000000ULL;
+ vcpu_ioctl(vcpu, KVM_XEN_VCPU_SET_ATTR, &tmr);
+ break;
+
+ case 23:
+ /*
+ * Optional and possibly repeated sync point.
+ * Injecting the timer IRQ may fail if the
+ * shinfo is invalid when the timer expires.
+ * If the timer has expired but the IRQ hasn't
+ * been delivered, rearm the timer and retry.
+ */
+ vcpu_ioctl(vcpu, KVM_XEN_VCPU_GET_ATTR, &tmr);
+
+ /* Resume the guest if the timer is still pending. */
+ if (tmr.u.timer.expires_ns)
+ break;
+
+ /* All done if the IRQ was delivered. */
+ if (!evtchn_irq_expected)
+ break;
+
+ tmr.u.timer.expires_ns = rs->state_entry_time +
+ SHINFO_RACE_TIMEOUT * 1000000000ULL;
+ vcpu_ioctl(vcpu, KVM_XEN_VCPU_SET_ATTR, &tmr);
+ break;
+ case 24:
+ TEST_ASSERT(!evtchn_irq_expected,
+ "Expected event channel IRQ but it didn't happen");
+
+ ret = pthread_cancel(thread);
+ TEST_ASSERT(ret == 0, "pthread_cancel() failed: %s", strerror(ret));
+
+ ret = pthread_join(thread, 0);
+ TEST_ASSERT(ret == 0, "pthread_join() failed: %s", strerror(ret));
goto done;
case 0x20:
# First run: make -C ../../../.. headers_install
CFLAGS += -Wall -O2 $(KHDR_INCLUDES)
-LDLIBS += -lcap
LOCAL_HDRS += common.h
TEST_GEN_PROGS_EXTENDED := true
-# Static linking for short targets:
+# Short targets:
+$(TEST_GEN_PROGS): LDLIBS += -lcap
$(TEST_GEN_PROGS_EXTENDED): LDFLAGS += -static
include ../lib.mk
-# Static linking for targets with $(OUTPUT)/ prefix:
+# Targets with $(OUTPUT)/ prefix:
+$(TEST_GEN_PROGS): LDLIBS += -lcap
$(TEST_GEN_PROGS_EXTENDED): LDFLAGS += -static
run_tests: all
ifdef building_out_of_srctree
@if [ "X$(TEST_PROGS)$(TEST_PROGS_EXTENDED)$(TEST_FILES)" != "X" ]; then \
- rsync -aq $(TEST_PROGS) $(TEST_PROGS_EXTENDED) $(TEST_FILES) $(OUTPUT); \
+ rsync -aLq $(TEST_PROGS) $(TEST_PROGS_EXTENDED) $(TEST_FILES) $(OUTPUT); \
fi
@if [ "X$(TEST_PROGS)" != "X" ]; then \
$(call RUN_TESTS, $(TEST_GEN_PROGS) $(TEST_CUSTOM_PROGS) \
define INSTALL_SINGLE_RULE
$(if $(INSTALL_LIST),@mkdir -p $(INSTALL_PATH))
- $(if $(INSTALL_LIST),rsync -a $(INSTALL_LIST) $(INSTALL_PATH)/)
+ $(if $(INSTALL_LIST),rsync -aL $(INSTALL_LIST) $(INSTALL_PATH)/)
endef
define INSTALL_RULE
{
for memory in `hotpluggable_offline_memory`; do
if ! online_memory_expect_success $memory; then
- echo "$FUNCNAME $memory: unexpected fail" >&2
retval=1
fi
done
TEST_GEN_FILES += bind_bhash
TEST_GEN_PROGS += sk_bind_sendto_listen
TEST_GEN_PROGS += sk_connect_zero_addr
+TEST_PROGS += test_ingress_egress_chaining.sh
TEST_FILES := settings
--- /dev/null
+# SPDX-License-Identifier: GPL-2.0
+
+top_srcdir = ../../../../..
+
+CFLAGS = -Wall -Wl,--no-as-needed -O2 -g -I$(top_srcdir)/usr/include $(KHDR_INCLUDES)
+
+TEST_PROGS := openvswitch.sh
+
+TEST_FILES := ovs-dpctl.py
+
+EXTRA_CLEAN := test_netlink_checks
+
+include ../../lib.mk
--- /dev/null
+#!/bin/sh
+# SPDX-License-Identifier: GPL-2.0
+#
+# OVS kernel module self tests
+
+# Kselftest framework requirement - SKIP code is 4.
+ksft_skip=4
+
+PAUSE_ON_FAIL=no
+VERBOSE=0
+TRACING=0
+
+tests="
+ netlink_checks ovsnl: validate netlink attrs and settings"
+
+info() {
+ [ $VERBOSE = 0 ] || echo $*
+}
+
+ovs_base=`pwd`
+sbxs=
+sbx_add () {
+ info "adding sandbox '$1'"
+
+ sbxs="$sbxs $1"
+
+ NO_BIN=0
+
+ # Create sandbox.
+ local d="$ovs_base"/$1
+ if [ -e $d ]; then
+ info "removing $d"
+ rm -rf "$d"
+ fi
+ mkdir "$d" || return 1
+ ovs_setenv $1
+}
+
+ovs_exit_sig() {
+ [ -e ${ovs_dir}/cleanup ] && . "$ovs_dir/cleanup"
+}
+
+on_exit() {
+ echo "$1" > ${ovs_dir}/cleanup.tmp
+ cat ${ovs_dir}/cleanup >> ${ovs_dir}/cleanup.tmp
+ mv ${ovs_dir}/cleanup.tmp ${ovs_dir}/cleanup
+}
+
+ovs_setenv() {
+ sandbox=$1
+
+ ovs_dir=$ovs_base${1:+/$1}; export ovs_dir
+
+ test -e ${ovs_dir}/cleanup || : > ${ovs_dir}/cleanup
+}
+
+ovs_sbx() {
+ if test "X$2" != X; then
+ (ovs_setenv $1; shift; "$@" >> ${ovs_dir}/debug.log)
+ else
+ ovs_setenv $1
+ fi
+}
+
+ovs_add_dp () {
+ info "Adding DP/Bridge IF: sbx:$1 dp:$2 {$3, $4, $5}"
+ sbxname="$1"
+ shift
+ ovs_sbx "$sbxname" python3 $ovs_base/ovs-dpctl.py add-dp $*
+ on_exit "ovs_sbx $sbxname python3 $ovs_base/ovs-dpctl.py del-dp $1;"
+}
+
+usage() {
+ echo
+ echo "$0 [OPTIONS] [TEST]..."
+ echo "If no TEST argument is given, all tests will be run."
+ echo
+ echo "Options"
+ echo " -t: capture traffic via tcpdump"
+ echo " -v: verbose"
+ echo " -p: pause on failure"
+ echo
+ echo "Available tests${tests}"
+ exit 1
+}
+
+# netlink_validation
+# - Create a dp
+# - check no warning with "old version" simulation
+test_netlink_checks () {
+ sbx_add "test_netlink_checks" || return 1
+
+ info "setting up new DP"
+ ovs_add_dp "test_netlink_checks" nv0 || return 1
+ # now try again
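+	# a kernel warning in ovs_dp_cmd_new would add a new "RIP:" line to dmesg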
+ PRE_TEST=$(dmesg | grep -E "RIP: [0-9a-fA-Fx]+:ovs_dp_cmd_new\+")
+ ovs_add_dp "test_netlink_checks" nv0 -V 0 || return 1
+ POST_TEST=$(dmesg | grep -E "RIP: [0-9a-fA-Fx]+:ovs_dp_cmd_new\+")
+ if [ "$PRE_TEST" != "$POST_TEST" ]; then
+ info "failed - gen warning"
+ return 1
+ fi
+
+ return 0
+}
+
+run_test() {
+ (
+ tname="$1"
+ tdesc="$2"
+
+ if ! lsmod | grep openvswitch >/dev/null 2>&1; then
+ stdbuf -o0 printf "TEST: %-60s [NOMOD]\n" "${tdesc}"
+ return $ksft_skip
+ fi
+
+ if python3 ovs-dpctl.py -h 2>&1 | \
+ grep "Need to install the python" >/dev/null 2>&1; then
+ stdbuf -o0 printf "TEST: %-60s [PYLIB]\n" "${tdesc}"
+ return $ksft_skip
+ fi
+ printf "TEST: %-60s [START]\n" "${tname}"
+
+ unset IFS
+
+ eval test_${tname}
+ ret=$?
+
+ if [ $ret -eq 0 ]; then
+ printf "TEST: %-60s [ OK ]\n" "${tdesc}"
+ ovs_exit_sig
+ rm -rf "$ovs_dir"
+ elif [ $ret -eq 1 ]; then
+ printf "TEST: %-60s [FAIL]\n" "${tdesc}"
+ if [ "${PAUSE_ON_FAIL}" = "yes" ]; then
+ echo
+ echo "Pausing. Logs in $ovs_dir/. Hit enter to continue"
+ read a
+ fi
+ ovs_exit_sig
+ [ "${PAUSE_ON_FAIL}" = "yes" ] || rm -rf "$ovs_dir"
+ exit 1
+ elif [ $ret -eq $ksft_skip ]; then
+ printf "TEST: %-60s [SKIP]\n" "${tdesc}"
+ elif [ $ret -eq 2 ]; then
+ rm -rf test_${tname}
+ run_test "$1" "$2"
+ fi
+
+ return $ret
+ )
+ ret=$?
+ case $ret in
+ 0)
+ [ $all_skipped = true ] && [ $exitcode -eq $ksft_skip ] && exitcode=0
+ all_skipped=false
+ ;;
+ $ksft_skip)
+ [ $all_skipped = true ] && exitcode=$ksft_skip
+ ;;
+ *)
+ all_skipped=false
+ exitcode=1
+ ;;
+ esac
+
+ return $ret
+}
+
+
+exitcode=0
+desc=0
+all_skipped=true
+
+while getopts :pvt o
+do
+ case $o in
+ p) PAUSE_ON_FAIL=yes;;
+ v) VERBOSE=1;;
+ t) if which tcpdump > /dev/null 2>&1; then
+ TRACING=1
+ else
+ echo "=== tcpdump not available, tracing disabled"
+ fi
+ ;;
+ *) usage;;
+ esac
+done
+shift $(($OPTIND-1))
+
+IFS="
+"
+
+for arg do
+ # Check first that all requested tests are available before running any
+ command -v "test_${arg}" > /dev/null || { echo "=== Test ${arg} not found"; usage; }
+done
+
+name=""
+desc=""
+for t in ${tests}; do
+ [ "${name}" = "" ] && name="${t}" && continue
+ [ "${desc}" = "" ] && desc="${t}"
+
+ run_this=1
+ for arg do
+ [ "${arg}" != "${arg#--*}" ] && continue
+ [ "${arg}" = "${name}" ] && run_this=1 && break
+ run_this=0
+ done
+ if [ $run_this -eq 1 ]; then
+ run_test "${name}" "${desc}"
+ fi
+ name=""
+ desc=""
+done
+
+exit ${exitcode}
--- /dev/null
+#!/usr/bin/env python3
+# SPDX-License-Identifier: GPL-2.0
+
+# Controls the openvswitch module. Part of the kselftest suite, but
+# can be used for diagnostic purposes as well.
+
+import argparse
+import errno
+import sys
+
+try:
+ from pyroute2 import NDB
+
+ from pyroute2.netlink import NLM_F_ACK
+ from pyroute2.netlink import NLM_F_REQUEST
+ from pyroute2.netlink import genlmsg
+ from pyroute2.netlink import nla
+ from pyroute2.netlink.exceptions import NetlinkError
+ from pyroute2.netlink.generic import GenericNetlinkSocket
+except ModuleNotFoundError:
+ print("Need to install the python pyroute2 package.")
+ sys.exit(0)
+
+
+OVS_DATAPATH_FAMILY = "ovs_datapath"
+OVS_VPORT_FAMILY = "ovs_vport"
+OVS_FLOW_FAMILY = "ovs_flow"
+OVS_PACKET_FAMILY = "ovs_packet"
+OVS_METER_FAMILY = "ovs_meter"
+OVS_CT_LIMIT_FAMILY = "ovs_ct_limit"
+
+OVS_DATAPATH_VERSION = 2
+OVS_DP_CMD_NEW = 1
+OVS_DP_CMD_DEL = 2
+OVS_DP_CMD_GET = 3
+OVS_DP_CMD_SET = 4
+
+OVS_VPORT_CMD_NEW = 1
+OVS_VPORT_CMD_DEL = 2
+OVS_VPORT_CMD_GET = 3
+OVS_VPORT_CMD_SET = 4
+
+
+class ovs_dp_msg(genlmsg):
+ # include the OVS version
+ # We need a custom header rather than just relying on genlmsg, because
+ # the fields end up not being expressed correctly if we use the
+ # canonical example of setting fields = (('customfield',),)
+ fields = genlmsg.fields + (("dpifindex", "I"),)
+
+
+class OvsDatapath(GenericNetlinkSocket):
+
+ OVS_DP_F_VPORT_PIDS = 1 << 1
+ OVS_DP_F_DISPATCH_UPCALL_PER_CPU = 1 << 3
+
+ class dp_cmd_msg(ovs_dp_msg):
+ """
+ Message class that will be used to communicate with the kernel module.
+ """
+
+ nla_map = (
+ ("OVS_DP_ATTR_UNSPEC", "none"),
+ ("OVS_DP_ATTR_NAME", "asciiz"),
+ ("OVS_DP_ATTR_UPCALL_PID", "uint32"),
+ ("OVS_DP_ATTR_STATS", "dpstats"),
+ ("OVS_DP_ATTR_MEGAFLOW_STATS", "megaflowstats"),
+ ("OVS_DP_ATTR_USER_FEATURES", "uint32"),
+ ("OVS_DP_ATTR_PAD", "none"),
+ ("OVS_DP_ATTR_MASKS_CACHE_SIZE", "uint32"),
+ ("OVS_DP_ATTR_PER_CPU_PIDS", "array(uint32)"),
+ )
+
+ class dpstats(nla):
+ fields = (
+ ("hit", "=Q"),
+ ("missed", "=Q"),
+ ("lost", "=Q"),
+ ("flows", "=Q"),
+ )
+
+ class megaflowstats(nla):
+ fields = (
+ ("mask_hit", "=Q"),
+ ("masks", "=I"),
+ ("padding", "=I"),
+ ("cache_hits", "=Q"),
+ ("pad1", "=Q"),
+ )
+
+ def __init__(self):
+ GenericNetlinkSocket.__init__(self)
+ self.bind(OVS_DATAPATH_FAMILY, OvsDatapath.dp_cmd_msg)
+
+ def info(self, dpname, ifindex=0):
+ msg = OvsDatapath.dp_cmd_msg()
+ msg["cmd"] = OVS_DP_CMD_GET
+ msg["version"] = OVS_DATAPATH_VERSION
+ msg["reserved"] = 0
+ msg["dpifindex"] = ifindex
+ msg["attrs"].append(["OVS_DP_ATTR_NAME", dpname])
+
+ try:
+ reply = self.nlm_request(
+ msg, msg_type=self.prid, msg_flags=NLM_F_REQUEST
+ )
+ reply = reply[0]
+ except NetlinkError as ne:
+ if ne.code == errno.ENODEV:
+ reply = None
+ else:
+ raise ne
+
+ return reply
+
+ def create(self, dpname, shouldUpcall=False, versionStr=None):
+ msg = OvsDatapath.dp_cmd_msg()
+ msg["cmd"] = OVS_DP_CMD_NEW
+ if versionStr is None:
+ msg["version"] = OVS_DATAPATH_VERSION
+ else:
+ msg["version"] = int(versionStr.split(":")[0], 0)
+ msg["reserved"] = 0
+ msg["dpifindex"] = 0
+ msg["attrs"].append(["OVS_DP_ATTR_NAME", dpname])
+
+ dpfeatures = 0
+ if versionStr is not None and versionStr.find(":") != -1:
+ dpfeatures = int(versionStr.split(":")[1], 0)
+ else:
+ dpfeatures = OvsDatapath.OVS_DP_F_VPORT_PIDS
+
+ msg["attrs"].append(["OVS_DP_ATTR_USER_FEATURES", dpfeatures])
+ if not shouldUpcall:
+ msg["attrs"].append(["OVS_DP_ATTR_UPCALL_PID", 0])
+
+ try:
+ reply = self.nlm_request(
+ msg, msg_type=self.prid, msg_flags=NLM_F_REQUEST | NLM_F_ACK
+ )
+ reply = reply[0]
+ except NetlinkError as ne:
+ if ne.code == errno.EEXIST:
+ reply = None
+ else:
+ raise ne
+
+ return reply
+
+ def destroy(self, dpname):
+ msg = OvsDatapath.dp_cmd_msg()
+ msg["cmd"] = OVS_DP_CMD_DEL
+ msg["version"] = OVS_DATAPATH_VERSION
+ msg["reserved"] = 0
+ msg["dpifindex"] = 0
+ msg["attrs"].append(["OVS_DP_ATTR_NAME", dpname])
+
+ try:
+ reply = self.nlm_request(
+ msg, msg_type=self.prid, msg_flags=NLM_F_REQUEST | NLM_F_ACK
+ )
+ reply = reply[0]
+ except NetlinkError as ne:
+ if ne.code == errno.ENODEV:
+ reply = None
+ else:
+ raise ne
+
+ return reply
+
+
+class OvsVport(GenericNetlinkSocket):
+ class ovs_vport_msg(ovs_dp_msg):
+ nla_map = (
+ ("OVS_VPORT_ATTR_UNSPEC", "none"),
+ ("OVS_VPORT_ATTR_PORT_NO", "uint32"),
+ ("OVS_VPORT_ATTR_TYPE", "uint32"),
+ ("OVS_VPORT_ATTR_NAME", "asciiz"),
+ ("OVS_VPORT_ATTR_OPTIONS", "none"),
+ ("OVS_VPORT_ATTR_UPCALL_PID", "array(uint32)"),
+ ("OVS_VPORT_ATTR_STATS", "vportstats"),
+ ("OVS_VPORT_ATTR_PAD", "none"),
+ ("OVS_VPORT_ATTR_IFINDEX", "uint32"),
+ ("OVS_VPORT_ATTR_NETNSID", "uint32"),
+ )
+
+ class vportstats(nla):
+ fields = (
+ ("rx_packets", "=Q"),
+ ("tx_packets", "=Q"),
+ ("rx_bytes", "=Q"),
+ ("tx_bytes", "=Q"),
+ ("rx_errors", "=Q"),
+ ("tx_errors", "=Q"),
+ ("rx_dropped", "=Q"),
+ ("tx_dropped", "=Q"),
+ )
+
+ def type_to_str(vport_type):
+ if vport_type == 1:
+ return "netdev"
+ elif vport_type == 2:
+ return "internal"
+ elif vport_type == 3:
+ return "gre"
+ elif vport_type == 4:
+ return "vxlan"
+ elif vport_type == 5:
+ return "geneve"
+ return "unknown:%d" % vport_type
+
+ def __init__(self):
+ GenericNetlinkSocket.__init__(self)
+ self.bind(OVS_VPORT_FAMILY, OvsVport.ovs_vport_msg)
+
+ def info(self, vport_name, dpifindex=0, portno=None):
+ msg = OvsVport.ovs_vport_msg()
+
+ msg["cmd"] = OVS_VPORT_CMD_GET
+ msg["version"] = OVS_DATAPATH_VERSION
+ msg["reserved"] = 0
+ msg["dpifindex"] = dpifindex
+
+ if portno is None:
+ msg["attrs"].append(["OVS_VPORT_ATTR_NAME", vport_name])
+ else:
+ msg["attrs"].append(["OVS_VPORT_ATTR_PORT_NO", portno])
+
+ try:
+ reply = self.nlm_request(
+ msg, msg_type=self.prid, msg_flags=NLM_F_REQUEST
+ )
+ reply = reply[0]
+ except NetlinkError as ne:
+ if ne.code == errno.ENODEV:
+ reply = None
+ else:
+ raise ne
+ return reply
+
+
+def print_ovsdp_full(dp_lookup_rep, ifindex, ndb=NDB()):
+ dp_name = dp_lookup_rep.get_attr("OVS_DP_ATTR_NAME")
+ base_stats = dp_lookup_rep.get_attr("OVS_DP_ATTR_STATS")
+ megaflow_stats = dp_lookup_rep.get_attr("OVS_DP_ATTR_MEGAFLOW_STATS")
+ user_features = dp_lookup_rep.get_attr("OVS_DP_ATTR_USER_FEATURES")
+ masks_cache_size = dp_lookup_rep.get_attr("OVS_DP_ATTR_MASKS_CACHE_SIZE")
+
+ print("%s:" % dp_name)
+ print(
+ " lookups: hit:%d missed:%d lost:%d"
+ % (base_stats["hit"], base_stats["missed"], base_stats["lost"])
+ )
+ print(" flows:%d" % base_stats["flows"])
+ pkts = base_stats["hit"] + base_stats["missed"]
+ avg = (megaflow_stats["mask_hit"] / pkts) if pkts != 0 else 0.0
+ print(
+ " masks: hit:%d total:%d hit/pkt:%f"
+ % (megaflow_stats["mask_hit"], megaflow_stats["masks"], avg)
+ )
+ print(" caches:")
+ print(" masks-cache: size:%d" % masks_cache_size)
+
+ if user_features is not None:
+ print(" features: 0x%X" % user_features)
+
+ # port print out
+ vpl = OvsVport()
+ for iface in ndb.interfaces:
+ rep = vpl.info(iface.ifname, ifindex)
+ if rep is not None:
+ print(
+ " port %d: %s (%s)"
+ % (
+ rep.get_attr("OVS_VPORT_ATTR_PORT_NO"),
+ rep.get_attr("OVS_VPORT_ATTR_NAME"),
+ OvsVport.type_to_str(rep.get_attr("OVS_VPORT_ATTR_TYPE")),
+ )
+ )
+
+
+def main(argv):
+ parser = argparse.ArgumentParser()
+ parser.add_argument(
+ "-v",
+ "--verbose",
+ action="count",
+ help="Increment 'verbose' output counter.",
+ )
+ subparsers = parser.add_subparsers()
+
+ showdpcmd = subparsers.add_parser("show")
+ showdpcmd.add_argument(
+ "showdp", metavar="N", type=str, nargs="?", help="Datapath Name"
+ )
+
+ adddpcmd = subparsers.add_parser("add-dp")
+ adddpcmd.add_argument("adddp", help="Datapath Name")
+ adddpcmd.add_argument(
+ "-u",
+ "--upcall",
+ action="store_true",
+ help="Leave open a reader for upcalls",
+ )
+ adddpcmd.add_argument(
+ "-V",
+ "--versioning",
+ required=False,
+ help="Specify a custom version / feature string",
+ )
+
+ deldpcmd = subparsers.add_parser("del-dp")
+ deldpcmd.add_argument("deldp", help="Datapath Name")
+
+ args = parser.parse_args()
+
+ ovsdp = OvsDatapath()
+ ndb = NDB()
+
+ if hasattr(args, "showdp"):
+ found = False
+ for iface in ndb.interfaces:
+ rep = None
+ if args.showdp is None:
+ rep = ovsdp.info(iface.ifname, 0)
+ elif args.showdp == iface.ifname:
+ rep = ovsdp.info(iface.ifname, 0)
+
+ if rep is not None:
+ found = True
+ print_ovsdp_full(rep, iface.index, ndb)
+
+ if not found:
+ msg = "No DP found"
+ if args.showdp is not None:
+ msg += ":'%s'" % args.showdp
+ print(msg)
+ elif hasattr(args, "adddp"):
+ rep = ovsdp.create(args.adddp, args.upcall, args.versioning)
+ if rep is None:
+ print("DP '%s' already exists" % args.adddp)
+ else:
+ print("DP '%s' added" % args.adddp)
+ elif hasattr(args, "deldp"):
+ ovsdp.destroy(args.deldp)
+
+ return 0
+
+
+if __name__ == "__main__":
+ sys.exit(main(sys.argv))
+#!/bin/bash
+# SPDX-License-Identifier: GPL-2.0
+
+# This test runs a simple ingress tc setup between two veth pairs,
+# and chains a single egress rule to test ingress chaining to egress.
+#
+# Kselftest framework requirement - SKIP code is 4.
+ksft_skip=4
+
+if [ "$(id -u)" -ne 0 ];then
+ echo "SKIP: Need root privileges"
+ exit $ksft_skip
+fi
+
+needed_mods="act_mirred cls_flower sch_ingress"
+for mod in $needed_mods; do
+	modinfo $mod &>/dev/null || { echo "SKIP: Need $mod module"; exit $ksft_skip; }
+done
+
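+# Randomize names so stale interfaces/namespaces from earlier runs cannot collide.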
+ns="ns$((RANDOM%899+100))"
+veth1="veth1$((RANDOM%899+100))"
+veth2="veth2$((RANDOM%899+100))"
+peer1="peer1$((RANDOM%899+100))"
+peer2="peer2$((RANDOM%899+100))"
+ip_peer1=198.51.100.5
+ip_peer2=198.51.100.6
+
+function fail() {
+	echo "FAIL: $*" >&2
+ exit 1
+}
+
+function cleanup() {
+ killall -q -9 udpgso_bench_rx
+ ip link del $veth1 &> /dev/null
+ ip link del $veth2 &> /dev/null
+ ip netns del $ns &> /dev/null
+}
+trap cleanup EXIT
+
+function config() {
+	echo "Set up veth pair [$veth1, $peer1] and veth pair [$veth2, $peer2]"
+ ip link add $veth1 type veth peer name $peer1
+ ip link add $veth2 type veth peer name $peer2
+ ip addr add $ip_peer1/24 dev $peer1
+ ip link set $peer1 up
+ ip netns add $ns
+ ip link set dev $peer2 netns $ns
+ ip netns exec $ns ip addr add $ip_peer2/24 dev $peer2
+ ip netns exec $ns ip link set $peer2 up
+ ip link set $veth1 up
+ ip link set $veth2 up
+
+ echo "Add tc filter ingress->egress forwarding $veth1 <-> $veth2"
+ tc qdisc add dev $veth2 ingress
+ tc qdisc add dev $veth1 ingress
+ tc filter add dev $veth2 ingress prio 1 proto all flower \
+ action mirred egress redirect dev $veth1
+ tc filter add dev $veth1 ingress prio 1 proto all flower \
+ action mirred egress redirect dev $veth2
+
+ echo "Add tc filter egress->ingress forwarding $peer1 -> $veth1, bypassing the veth pipe"
+ tc qdisc add dev $peer1 clsact
+ tc filter add dev $peer1 egress prio 20 proto ip flower \
+ action mirred ingress redirect dev $veth1
+}
+
+function test_run() {
+ echo "Run tcp traffic"
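+	# '-t' selects TCP mode for udpgso_bench_rx/udpgso_bench_tx.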
+ ./udpgso_bench_rx -t &
+ sleep 1
+ ip netns exec $ns timeout -k 2 10 ./udpgso_bench_tx -t -l 2 -4 -D $ip_peer1 || fail "traffic failed"
+ echo "Test passed"
+}
+
+config
+test_run
+trap - EXIT
+cleanup
.remove_on_exec = 1, /* Required by sigtrap. */
.sigtrap = 1, /* Request synchronous SIGTRAP on event. */
.sig_data = TEST_SIG_DATA(addr, id),
+ .exclude_kernel = 1, /* To allow */
+ .exclude_hv = 1, /* running as !root */
};
return attr;
}
__atomic_fetch_add(&ctx.tids_want_signal, tid, __ATOMIC_RELAXED);
iter = ctx.iterate_on; /* read */
- for (i = 0; i < iter - 1; i++) {
- __atomic_fetch_add(&ctx.tids_want_signal, tid, __ATOMIC_RELAXED);
- ctx.iterate_on = iter; /* idempotent write */
+ if (iter >= 0) {
+ for (i = 0; i < iter - 1; i++) {
+ __atomic_fetch_add(&ctx.tids_want_signal, tid, __ATOMIC_RELAXED);
+ ctx.iterate_on = iter; /* idempotent write */
+ }
+ } else {
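+		/* A negative iterate_on means: spin until the main thread clears it. */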
+ while (ctx.iterate_on);
}
return NULL;
EXPECT_EQ(ctx.first_siginfo.si_perf_data, TEST_SIG_DATA(&ctx.iterate_on, 0));
}
+TEST_F(sigtrap_threads, signal_stress_with_disable)
+{
+ const int target_count = NUM_THREADS * 3000;
+ int i;
+
+ ctx.iterate_on = -1;
+
+ EXPECT_EQ(ioctl(self->fd, PERF_EVENT_IOC_ENABLE, 0), 0);
+ pthread_barrier_wait(&self->barrier);
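+	/* Toggle the event off/on while the threads spin, racing disable against pending SIGTRAP delivery. */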
+ while (__atomic_load_n(&ctx.signal_count, __ATOMIC_RELAXED) < target_count) {
+ EXPECT_EQ(ioctl(self->fd, PERF_EVENT_IOC_DISABLE, 0), 0);
+ EXPECT_EQ(ioctl(self->fd, PERF_EVENT_IOC_ENABLE, 0), 0);
+ }
+ ctx.iterate_on = 0;
+ for (i = 0; i < NUM_THREADS; i++)
+ ASSERT_EQ(pthread_join(self->threads[i], NULL), 0);
+ EXPECT_EQ(ioctl(self->fd, PERF_EVENT_IOC_DISABLE, 0), 0);
+
+ EXPECT_EQ(ctx.first_siginfo.si_addr, &ctx.iterate_on);
+ EXPECT_EQ(ctx.first_siginfo.si_perf_type, PERF_TYPE_BREAKPOINT);
+ EXPECT_EQ(ctx.first_siginfo.si_perf_data, TEST_SIG_DATA(&ctx.iterate_on, 0));
+}
+
TEST_HARNESS_MAIN
# SPDX-License-Identifier: GPL-2.0-only
-CFLAGS += -g -I../../../../usr/include/ -pthread
+CFLAGS += -g -I../../../../usr/include/ -pthread -Wall
TEST_GEN_PROGS := pidfd_test pidfd_fdinfo_test pidfd_open_test \
pidfd_poll_test pidfd_wait pidfd_getfd_test pidfd_setns_test
c = epoll_wait(epoll_fd, events, MAX_EVENTS, 5000);
if (c != 1 || !(events[0].events & EPOLLIN))
- ksft_exit_fail_msg("%s test: Unexpected epoll_wait result (c=%d, events=%x) ",
+ ksft_exit_fail_msg("%s test: Unexpected epoll_wait result (c=%d, events=%x) "
"(errno %d)\n",
test_name, c, events[0].events, errno);
*/
while (1)
sleep(1);
+
+ return 0;
}
static void test_pidfd_poll_exec(int use_waitpid)
.flags = CLONE_PIDFD | CLONE_PARENT_SETTID,
.exit_signal = SIGCHLD,
};
+ int pfd[2];
pid_t pid;
siginfo_t info = {
.si_signo = 0,
};
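+	/* The pipe holds the child back from its second SIGSTOP until the parent has observed CLD_CONTINUED. */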
+ ASSERT_EQ(pipe(pfd), 0);
pid = sys_clone3(&args);
ASSERT_GE(pid, 0);
if (pid == 0) {
+ char buf[2];
+
+ close(pfd[1]);
kill(getpid(), SIGSTOP);
+ ASSERT_EQ(read(pfd[0], buf, 1), 1);
+ close(pfd[0]);
kill(getpid(), SIGSTOP);
exit(EXIT_SUCCESS);
}
+ close(pfd[0]);
ASSERT_EQ(sys_waitid(P_PIDFD, pidfd, &info, WSTOPPED, NULL), 0);
ASSERT_EQ(info.si_signo, SIGCHLD);
ASSERT_EQ(info.si_code, CLD_STOPPED);
ASSERT_EQ(sys_pidfd_send_signal(pidfd, SIGCONT, NULL, 0), 0);
ASSERT_EQ(sys_waitid(P_PIDFD, pidfd, &info, WCONTINUED, NULL), 0);
+ ASSERT_EQ(write(pfd[1], "C", 1), 1);
+ close(pfd[1]);
ASSERT_EQ(info.si_signo, SIGCHLD);
ASSERT_EQ(info.si_code, CLD_CONTINUED);
ASSERT_EQ(info.si_pid, parent_tid);
TEST(wait_nonblock)
{
- int pidfd, status = 0;
+ int pidfd;
unsigned int flags = 0;
pid_t parent_tid = -1;
struct clone_args args = {
def format_aut_init_header(self):
buff = []
- buff.append("struct %s %s = {" % (self.struct_automaton_def, self.var_automaton_def))
+ buff.append("static struct %s %s = {" % (self.struct_automaton_def, self.var_automaton_def))
return buff
def __get_string_vector_per_line_content(self, buff):
}
case KVM_CAP_DIRTY_LOG_RING:
case KVM_CAP_DIRTY_LOG_RING_ACQ_REL:
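+		/* Refuse to enable the ring unless the architecture reports the capability as supported. */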
+ if (!kvm_vm_ioctl_check_extension_generic(kvm, cap->cap))
+ return -EINVAL;
+
return kvm_vm_ioctl_enable_dirty_log_ring(kvm, cap->args[0]);
default:
return kvm_vm_ioctl_enable_cap(kvm, cap);
};
};
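+/* Default arch hook for compat ioctls; -ENOTTY means "not handled here". */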
+long __weak kvm_arch_vm_compat_ioctl(struct file *filp, unsigned int ioctl,
+ unsigned long arg)
+{
+ return -ENOTTY;
+}
+
static long kvm_vm_compat_ioctl(struct file *filp,
unsigned int ioctl, unsigned long arg)
{
if (kvm->mm != current->mm || kvm->vm_dead)
return -EIO;
+
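+	/* Give the architecture first shot at the ioctl; fall through on -ENOTTY. */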
+ r = kvm_arch_vm_compat_ioctl(filp, ioctl, arg);
+ if (r != -ENOTTY)
+ return r;
+
switch (ioctl) {
#ifdef CONFIG_KVM_GENERIC_DIRTYLOG_READ_PROTECT
case KVM_CLEAR_DIRTY_LOG: {
int (*get)(void *, u64 *), int (*set)(void *, u64),
const char *fmt)
{
+ int ret;
struct kvm_stat_data *stat_data = (struct kvm_stat_data *)
inode->i_private;
if (!kvm_get_kvm_safe(stat_data->kvm))
return -ENOENT;
- if (simple_attr_open(inode, file, get,
- kvm_stats_debugfs_mode(stat_data->desc) & 0222
- ? set : NULL,
- fmt)) {
+ ret = simple_attr_open(inode, file, get,
+ kvm_stats_debugfs_mode(stat_data->desc) & 0222
+ ? set : NULL, fmt);
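+	/* On failure, drop the reference taken above and propagate the real error. */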
+ if (ret)
kvm_put_kvm(stat_data->kvm);
- return -ENOMEM;
- }
- return 0;
+ return ret;
}
static int kvm_debugfs_release(struct inode *inode, struct file *file)
{
struct kvm_memslots *slots = kvm_memslots(kvm);
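+	/* An inactive cache can never hold a valid mapping. */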
+ if (!gpc->active)
+ return false;
+
if ((gpa & ~PAGE_MASK) + len > PAGE_SIZE)
return false;
{
struct kvm_memslots *slots = kvm_memslots(kvm);
unsigned long page_offset = gpa & ~PAGE_MASK;
- kvm_pfn_t old_pfn, new_pfn;
+ bool unmap_old = false;
unsigned long old_uhva;
+ kvm_pfn_t old_pfn;
void *old_khva;
- int ret = 0;
+ int ret;
/*
	 * It must fit within a single page. The 'len' argument is
write_lock_irq(&gpc->lock);
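+	/* The cache may have been deactivated while the lock was contended. */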
+ if (!gpc->active) {
+ ret = -EINVAL;
+ goto out_unlock;
+ }
+
old_pfn = gpc->pfn;
old_khva = gpc->khva - offset_in_page(gpc->khva);
old_uhva = gpc->uhva;
/* If the HVA→PFN mapping was already valid, don't unmap it. */
old_pfn = KVM_PFN_ERR_FAULT;
old_khva = NULL;
+ ret = 0;
}
out:
gpc->khva = NULL;
}
- /* Snapshot the new pfn before dropping the lock! */
- new_pfn = gpc->pfn;
+ /* Detect a pfn change before dropping the lock! */
+ unmap_old = (old_pfn != gpc->pfn);
+out_unlock:
write_unlock_irq(&gpc->lock);
mutex_unlock(&gpc->refresh_lock);
- if (old_pfn != new_pfn)
+ if (unmap_old)
gpc_unmap_khva(kvm, old_pfn, old_khva);
return ret;
}
EXPORT_SYMBOL_GPL(kvm_gfn_to_pfn_cache_unmap);
+void kvm_gpc_init(struct gfn_to_pfn_cache *gpc)
+{
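+	/* One-time lock setup, split out so a cache can be activated and deactivated repeatedly. */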
+ rwlock_init(&gpc->lock);
+ mutex_init(&gpc->refresh_lock);
+}
+EXPORT_SYMBOL_GPL(kvm_gpc_init);
-int kvm_gfn_to_pfn_cache_init(struct kvm *kvm, struct gfn_to_pfn_cache *gpc,
- struct kvm_vcpu *vcpu, enum pfn_cache_usage usage,
- gpa_t gpa, unsigned long len)
+int kvm_gpc_activate(struct kvm *kvm, struct gfn_to_pfn_cache *gpc,
+ struct kvm_vcpu *vcpu, enum pfn_cache_usage usage,
+ gpa_t gpa, unsigned long len)
{
WARN_ON_ONCE(!usage || (usage & KVM_GUEST_AND_HOST_USE_PFN) != usage);
if (!gpc->active) {
- rwlock_init(&gpc->lock);
- mutex_init(&gpc->refresh_lock);
-
gpc->khva = NULL;
gpc->pfn = KVM_PFN_ERR_FAULT;
gpc->uhva = KVM_HVA_ERR_BAD;
gpc->vcpu = vcpu;
gpc->usage = usage;
gpc->valid = false;
- gpc->active = true;
spin_lock(&kvm->gpc_lock);
list_add(&gpc->list, &kvm->gpc_list);
spin_unlock(&kvm->gpc_lock);
+
+ /*
+	 * Activate the cache after adding it to the list; a concurrent
+	 * refresh must not establish a mapping until the cache is
+	 * reachable by mmu_notifier events.
+ */
+ write_lock_irq(&gpc->lock);
+ gpc->active = true;
+ write_unlock_irq(&gpc->lock);
}
return kvm_gfn_to_pfn_cache_refresh(kvm, gpc, gpa, len);
}
-EXPORT_SYMBOL_GPL(kvm_gfn_to_pfn_cache_init);
+EXPORT_SYMBOL_GPL(kvm_gpc_activate);
-void kvm_gfn_to_pfn_cache_destroy(struct kvm *kvm, struct gfn_to_pfn_cache *gpc)
+void kvm_gpc_deactivate(struct kvm *kvm, struct gfn_to_pfn_cache *gpc)
{
if (gpc->active) {
+ /*
+	 * Deactivate the cache before removing it from the list; KVM
+	 * must stall mmu_notifier events until all users go away, i.e.
+	 * until gpc->lock is dropped and refresh is guaranteed to fail.
+ */
+ write_lock_irq(&gpc->lock);
+ gpc->active = false;
+ write_unlock_irq(&gpc->lock);
+
spin_lock(&kvm->gpc_lock);
list_del(&gpc->list);
spin_unlock(&kvm->gpc_lock);
kvm_gfn_to_pfn_cache_unmap(kvm, gpc);
- gpc->active = false;
}
}
-EXPORT_SYMBOL_GPL(kvm_gfn_to_pfn_cache_destroy);
+EXPORT_SYMBOL_GPL(kvm_gpc_deactivate);