From: Linus Torvalds Date: Tue, 14 Aug 2018 04:56:50 +0000 (-0700) Subject: Merge tag 'locks-v4.19-1' of git://git.kernel.org/pub/scm/linux/kernel/git/jlayton... X-Git-Tag: 4.19-rc-smb3~134 X-Git-Url: http://git.samba.org/samba.git/?p=sfrench%2Fcifs-2.6.git;a=commitdiff_plain;h=575b94386bd539a7d803aee9fd4a8d275844c40f;hp=da33a871ba178dbe81da7d755818d3c2088cae32 Merge tag 'locks-v4.19-1' of git://git./linux/kernel/git/jlayton/linux Pull file locking updates from Jeff Layton: "Just a couple of patches from Konstantin to fix /proc/locks when the process that set the lock has exited, and a new tracepoint for the flock() codepath. Also threw in mailmap entries for my addresses and a comment cleanup" * tag 'locks-v4.19-1' of git://git.kernel.org/pub/scm/linux/kernel/git/jlayton/linux: locks: remove misleading obsolete comment mailmap: remap some of my email addresses to kernel.org address locks: add tracepoint in flock codepath fs/lock: show locks taken by processes from another pidns fs/lock: skip lock owner pid translation in case we are in init_pid_ns --- diff --git a/Documentation/ABI/obsolete/sysfs-gpio b/Documentation/ABI/obsolete/sysfs-gpio index 32513dc2eec9..40d41ea1a3f5 100644 --- a/Documentation/ABI/obsolete/sysfs-gpio +++ b/Documentation/ABI/obsolete/sysfs-gpio @@ -11,7 +11,7 @@ Description: Kernel code may export it for complete or partial access. GPIOs are identified as they are inside the kernel, using integers in - the range 0..INT_MAX. See Documentation/gpio/gpio.txt for more information. + the range 0..INT_MAX. See Documentation/gpio for more information. /sys/class/gpio /export ... asks the kernel to export a GPIO to userspace diff --git a/Documentation/ABI/testing/sysfs-devices-system-cpu b/Documentation/ABI/testing/sysfs-devices-system-cpu index bd4975e132d3..9c5e7732d249 100644 --- a/Documentation/ABI/testing/sysfs-devices-system-cpu +++ b/Documentation/ABI/testing/sysfs-devices-system-cpu @@ -238,9 +238,6 @@ Description: Discover and change clock speed of CPUs See files in Documentation/cpu-freq/ for more information. - In particular, read Documentation/cpu-freq/user-guide.txt - to learn how to control the knobs. - What: /sys/devices/system/cpu/cpu#/cpufreq/freqdomain_cpus Date: June 2013 diff --git a/Documentation/RCU/Design/Data-Structures/Data-Structures.html b/Documentation/RCU/Design/Data-Structures/Data-Structures.html index 6c06e10bd04b..f5120a00f511 100644 --- a/Documentation/RCU/Design/Data-Structures/Data-Structures.html +++ b/Documentation/RCU/Design/Data-Structures/Data-Structures.html @@ -380,31 +380,26 @@ and therefore need no protection. as follows:
-  1   unsigned long gpnum;
-  2   unsigned long completed;
+  1   unsigned long gp_seq;
 

RCU grace periods are numbered, and -the ->gpnum field contains the number of the grace -period that started most recently. -The ->completed field contains the number of the -grace period that completed most recently. -If the two fields are equal, the RCU grace period that most recently -started has already completed, and therefore the corresponding -flavor of RCU is idle. -If ->gpnum is one greater than ->completed, -then ->gpnum gives the number of the current RCU -grace period, which has not yet completed. -Any other combination of values indicates that something is broken. -These two fields are protected by the root rcu_node's +the ->gp_seq field contains the current grace-period +sequence number. +The bottom two bits are the state of the current grace period, +which can be zero for not yet started or one for in progress. +In other words, if the bottom two bits of ->gp_seq are +zero, the corresponding flavor of RCU is idle. +Any other value in the bottom two bits indicates that something is broken. +This field is protected by the root rcu_node structure's ->lock field. -
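For illustration, the encoding just described can be sketched in C as follows. This is a simplified example assuming two low-order state bits, written in the spirit of the kernel's rcu_seq_start() and rcu_seq_end() helpers rather than copying their exact code:

	#define GP_SEQ_CTR_SHIFT	2
	#define GP_SEQ_STATE_MASK	((1UL << GP_SEQ_CTR_SHIFT) - 1)

	/* 0 means idle; nonzero (here always 1) means grace period in progress. */
	static unsigned long gp_seq_state(unsigned long s)
	{
		return s & GP_SEQ_STATE_MASK;
	}

	/* Mark a grace period as started: ...xx00 becomes ...xx01. */
	static void gp_seq_start(unsigned long *sp)
	{
		*sp += 1;
	}

	/* Mark it as ended: round up to the next ...xx00 value. */
	static void gp_seq_end(unsigned long *sp)
	{
		*sp = (*sp | GP_SEQ_STATE_MASK) + 1;
	}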

There are ->gpnum and ->completed fields +

There are ->gp_seq fields in the rcu_node and rcu_data structures as well. The fields in the rcu_state structure represent the -most current values, and those of the other structures are compared -in order to detect the start of a new grace period in a distributed +most current value, and those of the other structures are compared +in order to detect the beginnings and ends of grace periods in a distributed fashion. The values flow from rcu_state to rcu_node (down the tree from the root to the leaves) to rcu_data. @@ -512,27 +507,47 @@ than to be heisenbugged out of existence. as follows:

-  1   unsigned long gpnum;
-  2   unsigned long completed;
+  1   unsigned long gp_seq;
+  2   unsigned long gp_seq_needed;
 
-

These fields are the counterparts of the fields of the same name in -the rcu_state structure. -They each may lag up to one behind their rcu_state -counterparts. -If a given rcu_node structure's ->gpnum and -->complete fields are equal, then this rcu_node +

The rcu_node structures' ->gp_seq fields are +the counterparts of the field of the same name in the rcu_state +structure. +They each may lag up to one step behind their rcu_state +counterpart. +If the bottom two bits of a given rcu_node structure's +->gp_seq field is zero, then this rcu_node structure believes that RCU is idle. -Otherwise, as with the rcu_state structure, -the ->gpnum field will be one greater than the -->complete fields, with ->gpnum -indicating which grace period this rcu_node believes -is still being waited for. +

The ->gp_seq field of each rcu_node
+structure is updated at the beginning and the end
+of each grace period.
+
+

The ->gp_seq_needed fields record the +furthest-in-the-future grace period request seen by the corresponding +rcu_node structure. The request is considered fulfilled when +the value of the ->gp_seq field equals or exceeds that of +the ->gp_seq_needed field. -

The ->gpnum field of each rcu_node
-structure is updated at the beginning
-of each grace period, and the ->completed fields are
-updated at the end of each grace period.
+
+
+
+
+
+
+
+
 
Quick Quiz:
+ Suppose that this rcu_node structure doesn't see + a request for a very long time. + Won't wrapping of the ->gp_seq field cause + problems? +
Answer:
+ No, because if the ->gp_seq_needed field lags behind the + ->gp_seq field, the ->gp_seq_needed field + will be updated at the end of the grace period. + Modulo-arithmetic comparisons therefore will always get the + correct answer, even with wrapping. +
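For illustration, such a wrap-tolerant test can be sketched as follows. The ULONG_CMP_GE() definition matches the kernel macro of that name; the seq_request_done() wrapper is invented for this example:

	#include <limits.h>

	/*
	 * True iff a >= b in modular arithmetic, assuming the two values
	 * are less than half the counter space apart (as the answer above
	 * argues they must be).
	 */
	#define ULONG_CMP_GE(a, b)	(ULONG_MAX / 2 >= (a) - (b))

	/*
	 * A recorded request is satisfied once ->gp_seq catches up with
	 * ->gp_seq_needed, even if the counter wrapped in the meantime.
	 */
	static int seq_request_done(unsigned long gp_seq, unsigned long gp_seq_needed)
	{
		return ULONG_CMP_GE(gp_seq, gp_seq_needed);
	}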
 

Quiescent-State Tracking
@@ -626,9 +641,8 @@ normal and expedited grace periods, respectively.

So the locking is absolutely required in - order to coordinate - clearing of the bits with the grace-period numbers in - ->gpnum and ->completed. + order to coordinate clearing of the bits with updating of the + grace-period sequence number in ->gp_seq.   @@ -1038,15 +1052,15 @@ out any rcu_data structure for which this flag is not set. as follows:

-  1   unsigned long completed;
-  2   unsigned long gpnum;
+  1   unsigned long gp_seq;
+  2   unsigned long gp_seq_needed;
   3   bool cpu_no_qs;
   4   bool core_needs_qs;
   5   bool gpwrap;
   6   unsigned long rcu_qs_ctr_snap;
 
-

The completed and gpnum +

The ->gp_seq and ->gp_seq_needed fields are the counterparts of the fields of the same name in the rcu_state and rcu_node structures. They may each lag up to one behind their rcu_node @@ -1054,15 +1068,9 @@ counterparts, but in CONFIG_NO_HZ_IDLE and CONFIG_NO_HZ_FULL kernels can lag arbitrarily far behind for CPUs in dyntick-idle mode (but these counters will catch up upon exit from dyntick-idle mode). -If a given rcu_data structure's ->gpnum and -->complete fields are equal, then this rcu_data +If the lower two bits of a given rcu_data structure's +->gp_seq are zero, then this rcu_data structure believes that RCU is idle. -Otherwise, as with the rcu_state and rcu_node -structure, -the ->gpnum field will be one greater than the -->complete fields, with ->gpnum -indicating which grace period this rcu_data believes -is still being waited for. @@ -1070,13 +1078,13 @@ is still being waited for. @@ -609,10 +609,8 @@ states outstanding from other CPUs.
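For illustration, a lagging counter of this kind is typically consumed by snapshotting the value to wait for and later testing whether it has been reached. The sketch below mirrors the kernel's rcu_seq_snap() and rcu_seq_done() helpers under the same two-bit encoding, but is a simplified example rather than the kernel's code:

	#include <limits.h>

	#define GP_SEQ_STATE_MASK	3UL	/* two low-order state bits */
	#define ULONG_CMP_GE(a, b)	(ULONG_MAX / 2 >= (a) - (b))

	/*
	 * Smallest ->gp_seq value implying a full grace period has elapsed
	 * since the snapshot: if idle, one more grace period must complete;
	 * if one is in progress, it must end and a further one complete.
	 */
	static unsigned long gp_seq_snap(unsigned long s)
	{
		return (s + 2 * GP_SEQ_STATE_MASK + 1) & ~GP_SEQ_STATE_MASK;
	}

	/* True once the (possibly lagging) counter has reached the snapshot. */
	static int gp_seq_done(unsigned long s, unsigned long snap)
	{
		return ULONG_CMP_GE(s, snap);
	}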

Grace-Period Cleanup

Grace-period cleanup first scans the rcu_node tree -breadth-first setting all the ->completed fields equal -to the number of the newly completed grace period, then it sets -the rcu_state structure's ->completed field, -again to the number of the newly completed grace period. +breadth-first advancing all the ->gp_seq fields, then it +advances the rcu_state structure's ->gp_seq field. The ordering effects are shown below:

TreeRCU-gp-cleanup.svg @@ -634,7 +632,7 @@ grace-period cleanup is complete, the next grace period can begin. CPU has reported its quiescent state, but it may be some milliseconds before RCU becomes aware of this. The latest reasonable candidate is once the rcu_state - structure's ->completed field has been updated, + structure's ->gp_seq field has been updated, but it is quite possible that some CPUs have already completed phase two of their updates by that time. In short, if you are going to work with RCU, you need to @@ -647,7 +645,7 @@ grace-period cleanup is complete, the next grace period can begin.

Callback Invocation

Once a given CPU's leaf rcu_node structure's -->completed field has been updated, that CPU can begin +->gp_seq field has been updated, that CPU can begin invoking its RCU callbacks that were waiting for this grace period to end. These callbacks are identified by rcu_advance_cbs(), diff --git a/Documentation/RCU/Design/Memory-Ordering/TreeRCU-gp-cleanup.svg b/Documentation/RCU/Design/Memory-Ordering/TreeRCU-gp-cleanup.svg index 754f426b297a..bf84fbab27ee 100644 --- a/Documentation/RCU/Design/Memory-Ordering/TreeRCU-gp-cleanup.svg +++ b/Documentation/RCU/Design/Memory-Ordering/TreeRCU-gp-cleanup.svg @@ -384,11 +384,11 @@ inkscape:window-height="1144" id="namedview208" showgrid="true" - inkscape:zoom="0.70710678" - inkscape:cx="617.89017" - inkscape:cy="542.52419" - inkscape:window-x="86" - inkscape:window-y="28" + inkscape:zoom="0.78716603" + inkscape:cx="513.06403" + inkscape:cy="623.1214" + inkscape:window-x="102" + inkscape:window-y="38" inkscape:window-maximized="0" inkscape:current-layer="g3188-3" fit-margin-top="5" @@ -417,13 +417,15 @@ id="g3188"> ->completed = ->gpnum + style="font-size:192px;font-style:normal;font-weight:bold;text-anchor:start;fill:#000000;font-family:Courier">rcu_seq_end(&rnp->gp_seq) @@ -502,13 +504,15 @@ ->completed = ->gpnum + id="text202-36-7" + style="font-size:192px;font-style:normal;font-weight:bold;text-anchor:start;fill:#000000;stroke-width:0.025in;font-family:Courier">rcu_seq_end(&rnp->gp_seq) Leaf - ->completed = ->gpnum - rsp->completed = @@ -607,15 +593,6 @@ sodipodi:linespacing="125%">Root - rnp->completed + id="flowPara3362" /> rcu_seq_end(&rsp->gp_seq) + + rcu_seq_end(&rnp->gp_seq) ->completed = ->gpnum + style="font-size:192px;font-style:normal;font-weight:bold;text-anchor:start;fill:#000000;stroke-width:0.025in;font-family:Courier">rcu_seq_end(&rnp->gp_seq) Leaf ->completed = ->gpnum + id="text202-36-2" + style="font-size:192px;font-style:normal;font-weight:bold;text-anchor:start;fill:#000000;stroke-width:0.025in;font-family:Courier">rcu_seq_end(&rnp->gp_seq) Leaf ->completed = ->gpnum + id="text202-36-3" + style="font-size:192px;font-style:normal;font-weight:bold;text-anchor:start;fill:#000000;stroke-width:0.025in;font-family:Courier">rcu_seq_end(&rnp->gp_seq) - ->completed = ->gpnum + rcu_seq_end(&rnp->gp_seq) diff --git a/Documentation/RCU/Design/Memory-Ordering/TreeRCU-gp-init-1.svg b/Documentation/RCU/Design/Memory-Ordering/TreeRCU-gp-init-1.svg index 0161262904ec..8c207550818f 100644 --- a/Documentation/RCU/Design/Memory-Ordering/TreeRCU-gp-init-1.svg +++ b/Documentation/RCU/Design/Memory-Ordering/TreeRCU-gp-init-1.svg @@ -272,13 +272,13 @@ inkscape:window-height="1144" id="namedview208" showgrid="true" - inkscape:zoom="0.70710678" - inkscape:cx="617.89019" - inkscape:cy="636.57143" - inkscape:window-x="697" + inkscape:zoom="2.6330492" + inkscape:cx="524.82797" + inkscape:cy="519.31194" + inkscape:window-x="79" inkscape:window-y="28" inkscape:window-maximized="0" - inkscape:current-layer="svg2" + inkscape:current-layer="g3188" fit-margin-top="5" fit-margin-right="5" fit-margin-left="5" @@ -305,13 +305,15 @@ id="g3188"> rsp->gpnum++ + style="font-size:192px;font-style:normal;font-weight:bold;text-anchor:start;fill:#000000;font-family:Courier">rcu_seq_start(rsp->gp_seq) diff --git a/Documentation/RCU/Design/Memory-Ordering/TreeRCU-gp-init-3.svg b/Documentation/RCU/Design/Memory-Ordering/TreeRCU-gp-init-3.svg index de6ecc51b00e..d24d7d555dbc 100644 --- a/Documentation/RCU/Design/Memory-Ordering/TreeRCU-gp-init-3.svg +++ 
b/Documentation/RCU/Design/Memory-Ordering/TreeRCU-gp-init-3.svg @@ -19,7 +19,7 @@ id="svg2" version="1.1" inkscape:version="0.48.4 r9939" - sodipodi:docname="TreeRCU-gp-init-2.svg"> + sodipodi:docname="TreeRCU-gp-init-3.svg"> @@ -257,18 +257,22 @@ inkscape:window-width="1087" inkscape:window-height="1144" id="namedview208" - showgrid="false" - inkscape:zoom="0.70710678" + showgrid="true" + inkscape:zoom="0.68224756" inkscape:cx="617.89019" inkscape:cy="625.84293" - inkscape:window-x="697" + inkscape:window-x="54" inkscape:window-y="28" inkscape:window-maximized="0" - inkscape:current-layer="svg2" + inkscape:current-layer="g3153" fit-margin-top="5" fit-margin-right="5" fit-margin-left="5" - fit-margin-bottom="5" /> + fit-margin-bottom="5"> + + ->gpnum = rsp->gpnum + style="font-size:192px;font-style:normal;font-weight:bold;text-anchor:start;fill:#000000;font-family:Courier">->gp_seq = rsp->gp_seq @@ -366,13 +370,13 @@ ->gpnum = rsp->gpnum + style="font-size:192px;font-style:normal;font-weight:bold;text-anchor:start;fill:#000000;stroke-width:0.025in;font-family:Courier">->gp_seq = rsp->gp_seq Leaf ->gpnum = rsp->gpnum + style="font-size:192px;font-style:normal;font-weight:bold;text-anchor:start;fill:#000000;stroke-width:0.025in;font-family:Courier">->gp_seq = rsp->gp_seq ->gpnum = rsp->gpnum + style="font-size:192px;font-style:normal;font-weight:bold;text-anchor:start;fill:#000000;stroke-width:0.025in;font-family:Courier">->gp_seq = rsp->gp_seq Leaf ->gpnum = rsp->gpnum + style="font-size:192px;font-style:normal;font-weight:bold;text-anchor:start;fill:#000000;stroke-width:0.025in;font-family:Courier">->gp_seq = rsp->gp_seq Leaf ->gpnum = rsp->gpnum + style="font-size:192px;font-style:normal;font-weight:bold;text-anchor:start;fill:#000000;stroke-width:0.025in;font-family:Courier">->gp_seq = rsp->gp_seq ->gpnum = rsp->gpnum + style="font-size:192px;font-style:normal;font-weight:bold;text-anchor:start;fill:#000000;stroke-width:0.025in;font-family:Courier">->gp_seq = rsp->gp_seq diff --git a/Documentation/RCU/Design/Memory-Ordering/TreeRCU-gp.svg b/Documentation/RCU/Design/Memory-Ordering/TreeRCU-gp.svg index b13b7b01bb3a..acd73c7ad0f4 100644 --- a/Documentation/RCU/Design/Memory-Ordering/TreeRCU-gp.svg +++ b/Documentation/RCU/Design/Memory-Ordering/TreeRCU-gp.svg @@ -1070,13 +1070,13 @@ inkscape:window-height="1144" id="namedview208" showgrid="true" - inkscape:zoom="0.6004608" - inkscape:cx="826.65969" - inkscape:cy="483.3047" - inkscape:window-x="66" - inkscape:window-y="28" + inkscape:zoom="0.81932583" + inkscape:cx="840.45848" + inkscape:cy="5052.4242" + inkscape:window-x="787" + inkscape:window-y="24" inkscape:window-maximized="0" - inkscape:current-layer="svg2" + inkscape:current-layer="g4" fit-margin-top="5" fit-margin-right="5" fit-margin-left="5" @@ -1543,15 +1543,6 @@ style="fill:none;stroke-width:0.025in" transform="translate(1749.0282,658.72243)" id="g3188"> - rsp->gpnum++ @@ -1584,6 +1575,17 @@ sodipodi:linespacing="125%">Root + rcu_seq_start(rsp->gp_seq) - ->gpnum = rsp->gpnum @@ -2359,6 +2352,15 @@ sodipodi:linespacing="125%">Root + ->gp_seq = rsp->gp_seq ->gpnum = rsp->gpnum + id="text202-92" + style="font-size:192px;font-style:normal;font-weight:bold;text-anchor:start;fill:#000000;stroke-width:0.025in;font-family:Courier">->gp_seq = rsp->gp_seq Leaf ->gpnum = rsp->gpnum + id="text202-759" + style="font-size:192px;font-style:normal;font-weight:bold;text-anchor:start;fill:#000000;stroke-width:0.025in;font-family:Courier">->gp_seq = rsp->gp_seq ->gpnum = rsp->gpnum + 
id="text202-87" + style="font-size:192px;font-style:normal;font-weight:bold;text-anchor:start;fill:#000000;stroke-width:0.025in;font-family:Courier">->gp_seq = rsp->gp_seq Leaf ->gpnum = rsp->gpnum + id="text202-2" + style="font-size:192px;font-style:normal;font-weight:bold;text-anchor:start;fill:#000000;stroke-width:0.025in;font-family:Courier">->gp_seq = rsp->gp_seq Leaf ->gpnum = rsp->gpnum + id="text202-0" + style="font-size:192px;font-style:normal;font-weight:bold;text-anchor:start;fill:#000000;stroke-width:0.025in;font-family:Courier">->gp_seq = rsp->gp_seq - ->gpnum = rsp->gpnum rdp->gpnum + style="font-size:192px;font-style:normal;font-weight:bold;text-anchor:start;fill:#000000;stroke-width:0.025in;font-family:Courier">rdp->gp_seq - ->completed = ->gpnum @@ -4325,6 +4309,17 @@ sodipodi:linespacing="125%">Root + rcu_seq_end(&rnp->gp_seq) ->completed = ->gpnum + id="text202-36-7" + style="font-size:192px;font-style:normal;font-weight:bold;text-anchor:start;fill:#000000;stroke-width:0.025in;font-family:Courier">rcu_seq_end(&rnp->gp_seq) Leaf - ->completed = ->gpnum - rsp->completed = Root - rnp->completed ->completed = ->gpnum + style="font-size:192px;font-style:normal;font-weight:bold;text-anchor:start;fill:#000000;stroke-width:0.025in;font-family:Courier">rcu_seq_end(&rnp->gp_seq) Leaf ->completed = ->gpnum + id="text202-36-9" + style="font-size:192px;font-style:normal;font-weight:bold;text-anchor:start;fill:#000000;stroke-width:0.025in;font-family:Courier">rcu_seq_end(&rnp->gp_seq) Leaf ->completed = ->gpnum + id="text202-36-35" + style="font-size:192px;font-style:normal;font-weight:bold;text-anchor:start;fill:#000000;stroke-width:0.025in;font-family:Courier">rcu_seq_end(&rnp->gp_seq) - ->completed = ->gpnum rcu_do_batch() + rcu_seq_end(&rnp->gp_seq) + rcu_seq_end(&rnp->gp_seq) + rcu_seq_end(&rsp->gp_seq) + ->gp_seq = rsp->gp_seq diff --git a/Documentation/RCU/Design/Memory-Ordering/TreeRCU-qs.svg b/Documentation/RCU/Design/Memory-Ordering/TreeRCU-qs.svg index de3992f4cbe1..149bec2a4493 100644 --- a/Documentation/RCU/Design/Memory-Ordering/TreeRCU-qs.svg +++ b/Documentation/RCU/Design/Memory-Ordering/TreeRCU-qs.svg @@ -300,13 +300,13 @@ inkscape:window-height="1144" id="namedview208" showgrid="true" - inkscape:zoom="0.70710678" - inkscape:cx="616.47598" - inkscape:cy="595.41964" - inkscape:window-x="813" + inkscape:zoom="0.96484375" + inkscape:cx="507.0191" + inkscape:cy="885.62207" + inkscape:window-x="47" inkscape:window-y="28" inkscape:window-maximized="0" - inkscape:current-layer="g4405" + inkscape:current-layer="g3115" fit-margin-top="5" fit-margin-right="5" fit-margin-left="5" @@ -710,7 +710,7 @@ font-weight="bold" font-size="192" id="text202-6-6" - style="font-size:192px;font-style:normal;font-weight:bold;text-anchor:start;fill:#000000;stroke-width:0.025in;font-family:Courier">rdp->gpnum + style="font-size:192px;font-style:normal;font-weight:bold;text-anchor:start;fill:#000000;stroke-width:0.025in;font-family:Courier">rdp->gp_seq state=0x1 + kthread starved for 23807 jiffies! g7075 f0x0 RCU_GP_WAIT_FQS(3) ->state=0x1 ->cpu=5 Starving the grace-period kthreads of CPU time can of course result in RCU CPU stall warnings even when all CPUs and tasks have passed -through the required quiescent states. 
The "g" and "c" numbers flag the -number of the last grace period started and completed, respectively, -the "f" precedes the ->gp_flags command to the grace-period kthread, -the "RCU_GP_WAIT_FQS" indicates that the kthread is waiting for a short -timeout, and the "state" precedes value of the task_struct ->state field. +through the required quiescent states. The "g" number shows the current +grace-period sequence number, the "f" precedes the ->gp_flags command +to the grace-period kthread, the "RCU_GP_WAIT_FQS" indicates that the +kthread is waiting for a short timeout, the "state" precedes value of the +task_struct ->state field, and the "cpu" indicates that the grace-period +kthread last ran on CPU 5. Multiple Warnings From One Stall diff --git a/Documentation/RCU/whatisRCU.txt b/Documentation/RCU/whatisRCU.txt index 65eb856526b7..c2a7facf7ff9 100644 --- a/Documentation/RCU/whatisRCU.txt +++ b/Documentation/RCU/whatisRCU.txt @@ -588,6 +588,7 @@ It is extremely simple: void synchronize_rcu(void) { write_lock(&rcu_gp_mutex); + smp_mb__after_spinlock(); write_unlock(&rcu_gp_mutex); } @@ -609,12 +610,15 @@ don't forget about them when submitting patches making use of RCU!] The rcu_read_lock() and rcu_read_unlock() primitive read-acquire and release a global reader-writer lock. The synchronize_rcu() -primitive write-acquires this same lock, then immediately releases -it. This means that once synchronize_rcu() exits, all RCU read-side -critical sections that were in progress before synchronize_rcu() was -called are guaranteed to have completed -- there is no way that -synchronize_rcu() would have been able to write-acquire the lock -otherwise. +primitive write-acquires this same lock, then releases it. This means +that once synchronize_rcu() exits, all RCU read-side critical sections +that were in progress before synchronize_rcu() was called are guaranteed +to have completed -- there is no way that synchronize_rcu() would have +been able to write-acquire the lock otherwise. The smp_mb__after_spinlock() +promotes synchronize_rcu() to a full memory barrier in compliance with +the "Memory-Barrier Guarantees" listed in: + + Documentation/RCU/Design/Requirements/Requirements.html. It is possible to nest rcu_read_lock(), since reader-writer locks may be recursively acquired. Note also that rcu_read_lock() is immune @@ -816,11 +820,13 @@ RCU list traversal: list_next_rcu list_for_each_entry_rcu list_for_each_entry_continue_rcu + list_for_each_entry_from_rcu hlist_first_rcu hlist_next_rcu hlist_pprev_rcu hlist_for_each_entry_rcu hlist_for_each_entry_rcu_bh + hlist_for_each_entry_from_rcu hlist_for_each_entry_continue_rcu hlist_for_each_entry_continue_rcu_bh hlist_nulls_first_rcu diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt index 638342d0a095..5cde1ff32ff3 100644 --- a/Documentation/admin-guide/kernel-parameters.txt +++ b/Documentation/admin-guide/kernel-parameters.txt @@ -256,7 +256,7 @@ (may crash computer or cause data corruption) ALSA [HW,ALSA] - See Documentation/sound/alsa/alsa-parameters.txt + See Documentation/sound/alsa-configuration.rst alignment= [KNL,ARM] Allow the default userspace alignment fault handler @@ -2835,8 +2835,6 @@ nosync [HW,M68K] Disables sync negotiation for all devices. - notsc [BUGS=X86-32] Disable Time Stamp Counter - nowatchdog [KNL] Disable both lockup detectors, i.e. soft-lockup and NMI watchdog (hard-lockup). @@ -2926,9 +2924,6 @@ This will also cause panics on machine check exceptions. 
Useful together with panic=30 to trigger a reboot. - OSS [HW,OSS] - See Documentation/sound/oss/oss-parameters.txt - page_owner= [KNL] Boot-time page_owner enabling option. Storage of the information about who allocated each page is disabled in default. With this switch, @@ -3635,8 +3630,8 @@ Set time (s) after boot for CPU-hotplug testing. rcutorture.onoff_interval= [KNL] - Set time (s) between CPU-hotplug operations, or - zero to disable CPU-hotplug testing. + Set time (jiffies) between CPU-hotplug operations, + or zero to disable CPU-hotplug testing. rcutorture.shuffle_interval= [KNL] Set task-shuffle interval (s). Shuffling tasks @@ -4335,7 +4330,7 @@ [FTRACE] Set and start specified trace events in order to facilitate early boot debugging. The event-list is a comma separated list of trace events to enable. See - also Documentation/trace/events.txt + also Documentation/trace/events.rst trace_options=[option-list] [FTRACE] Enable or disable tracer options at boot. @@ -4350,7 +4345,7 @@ trace_options=stacktrace - See also Documentation/trace/ftrace.txt "trace options" + See also Documentation/trace/ftrace.rst "trace options" section. tp_printk[FTRACE] @@ -4849,3 +4844,8 @@ xirc2ps_cs= [NET,PCMCIA] Format: ,,,,,[,[,[,]]] + + xhci-hcd.quirks [USB,KNL] + A hex value specifying bitmask with supplemental xhci + host controller quirks. Meaning of each bit can be + consulted in header drivers/usb/host/xhci.h. diff --git a/Documentation/admin-guide/pm/intel_pstate.rst b/Documentation/admin-guide/pm/intel_pstate.rst index ab2fe0eda1d7..8f1d3de449b5 100644 --- a/Documentation/admin-guide/pm/intel_pstate.rst +++ b/Documentation/admin-guide/pm/intel_pstate.rst @@ -324,8 +324,7 @@ Global Attributes ``intel_pstate`` exposes several global attributes (files) in ``sysfs`` to control its functionality at the system level. They are located in the -``/sys/devices/system/cpu/cpufreq/intel_pstate/`` directory and affect all -CPUs. +``/sys/devices/system/cpu/intel_pstate/`` directory and affect all CPUs. Some of them are not present if the ``intel_pstate=per_cpu_perf_limits`` argument is passed to the kernel in the command line. @@ -379,6 +378,17 @@ argument is passed to the kernel in the command line. but it affects the maximum possible value of per-policy P-state limits (see `Interpretation of Policy Attributes`_ below for details). +``hwp_dynamic_boost`` + This attribute is only present if ``intel_pstate`` works in the + `active mode with the HWP feature enabled `_ in + the processor. If set (equal to 1), it causes the minimum P-state limit + to be increased dynamically for a short time whenever a task previously + waiting on I/O is selected to run on a given logical CPU (the purpose + of this mechanism is to improve performance). + + This setting has no effect on logical CPUs whose minimum P-state limit + is directly set to the highest non-turbo P-state or above it. + .. _status_attr: ``status`` @@ -410,7 +420,7 @@ argument is passed to the kernel in the command line. That only is supported in some configurations, though (for example, if the `HWP feature is enabled in the processor `_, the operation mode of the driver cannot be changed), and if it is not - supported in the current configuration, writes to this attribute with + supported in the current configuration, writes to this attribute will fail with an appropriate error. 
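Stepping back to the whatisRCU.txt hunk earlier in this series: the toy rwlock-based RCU it modifies can be tried in user space. The sketch below is an analogue only, with pthread_rwlock operations standing in for the kernel's read_lock()/write_lock() and for the ordering that smp_mb__after_spinlock() supplies:

	#include <pthread.h>

	static pthread_rwlock_t rcu_gp_mutex = PTHREAD_RWLOCK_INITIALIZER;

	static void rcu_read_lock(void)
	{
		pthread_rwlock_rdlock(&rcu_gp_mutex);
	}

	static void rcu_read_unlock(void)
	{
		pthread_rwlock_unlock(&rcu_gp_mutex);
	}

	/*
	 * Write-acquiring the lock waits out every reader that started
	 * before this call, which is exactly the grace-period guarantee.
	 */
	static void synchronize_rcu(void)
	{
		pthread_rwlock_wrlock(&rcu_gp_mutex);
		pthread_rwlock_unlock(&rcu_gp_mutex);
	}

A writer that publishes a new pointer and then calls this synchronize_rcu() before freeing the old data gets the usual guarantee: no reader that could still see the old pointer remains.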
Interpretation of Policy Attributes diff --git a/Documentation/block/biodoc.txt b/Documentation/block/biodoc.txt index 86927029a52d..207eca58efaa 100644 --- a/Documentation/block/biodoc.txt +++ b/Documentation/block/biodoc.txt @@ -752,18 +752,6 @@ completion of the request to the block layer. This means ending tag operations before calling end_that_request_last()! For an example of a user of these helpers, see the IDE tagged command queueing support. -Certain hardware conditions may dictate a need to invalidate the block tag -queue. For instance, on IDE any tagged request error needs to clear both -the hardware and software block queue and enable the driver to sanely restart -all the outstanding requests. There's a third helper to do that: - - blk_queue_invalidate_tags(struct request_queue *q) - - Clear the internal block tag queue and re-add all the pending requests - to the request queue. The driver will receive them again on the - next request_fn run, just like it did the first time it encountered - them. - 3.2.5.2 Tag info Some block functions exist to query current tag status or to go from a @@ -805,8 +793,7 @@ Internally, block manages tags in the blk_queue_tag structure: Most of the above is simple and straight forward, however busy_list may need a bit of explaining. Normally we don't care too much about request ordering, but in the event of any barrier requests in the tag queue we need to ensure -that requests are restarted in the order they were queue. This may happen -if the driver needs to use blk_queue_invalidate_tags(). +that requests are restarted in the order they were queue. 3.3 I/O Submission diff --git a/Documentation/core-api/atomic_ops.rst b/Documentation/core-api/atomic_ops.rst index 2e7165f86f55..724583453e1f 100644 --- a/Documentation/core-api/atomic_ops.rst +++ b/Documentation/core-api/atomic_ops.rst @@ -29,7 +29,7 @@ updated by one CPU, local_t is probably more appropriate. Please see local_t. The first operations to implement for atomic_t's are the initializers and -plain reads. :: +plain writes. :: #define ATOMIC_INIT(i) { (i) } #define atomic_set(v, i) ((v)->counter = (i)) diff --git a/Documentation/core-api/kernel-api.rst b/Documentation/core-api/kernel-api.rst index 8e44aea366c2..76fe2d0f5e7d 100644 --- a/Documentation/core-api/kernel-api.rst +++ b/Documentation/core-api/kernel-api.rst @@ -284,7 +284,7 @@ Resources Management MTRR Handling ------------- -.. kernel-doc:: arch/x86/kernel/cpu/mtrr/main.c +.. kernel-doc:: arch/x86/kernel/cpu/mtrr/mtrr.c :export: Security Framework diff --git a/Documentation/crypto/crypto_engine.rst b/Documentation/crypto/crypto_engine.rst index 8272ac92a14f..1d56221dfe35 100644 --- a/Documentation/crypto/crypto_engine.rst +++ b/Documentation/crypto/crypto_engine.rst @@ -8,11 +8,13 @@ The crypto engine API (CE), is a crypto queue manager. Requirement ----------- -You have to put at start of your tfm_ctx the struct crypto_engine_ctx -struct your_tfm_ctx { +You have to put at start of your tfm_ctx the struct crypto_engine_ctx:: + + struct your_tfm_ctx { struct crypto_engine_ctx enginectx; ... -}; + }; + Why: Since CE manage only crypto_async_request, it cannot know the underlying request_type and so have access only on the TFM. So using container_of for accessing __ctx is impossible. 
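On the crypto-engine requirement above: because struct crypto_engine_ctx is the first member, a pointer to the driver's tfm context is also a pointer to the embedded engine context, and container_of() recovers the outer structure from the inner one. A sketch, with a stand-in for the real struct from include/crypto/engine.h:

	#include <stddef.h>

	/* Stand-in for the real struct in include/crypto/engine.h. */
	struct crypto_engine_ctx {
		int placeholder;
	};

	struct your_tfm_ctx {
		struct crypto_engine_ctx enginectx;	/* must come first */
		int driver_private;
	};

	/* User-space rendition of the kernel's container_of() macro. */
	#define container_of(ptr, type, member) \
		((type *)((char *)(ptr) - offsetof(type, member)))

	/* Given only the embedded engine context, recover the driver context. */
	static struct your_tfm_ctx *to_tfm_ctx(struct crypto_engine_ctx *ectx)
	{
		return container_of(ectx, struct your_tfm_ctx, enginectx);
	}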
diff --git a/Documentation/device-mapper/writecache.txt b/Documentation/device-mapper/writecache.txt index 4424fa2c67d7..01532b3008ae 100644 --- a/Documentation/device-mapper/writecache.txt +++ b/Documentation/device-mapper/writecache.txt @@ -15,6 +15,8 @@ Constructor parameters: size) 5. the number of optional parameters (the parameters with an argument count as two) + start_sector n (default: 0) + offset from the start of cache device in 512-byte sectors high_watermark n (default: 50) start writeback when the number of used blocks reach this watermark diff --git a/Documentation/devicetree/bindings/arm/samsung/samsung-boards.txt b/Documentation/devicetree/bindings/arm/samsung/samsung-boards.txt index bdadc3da9556..6970f30a3770 100644 --- a/Documentation/devicetree/bindings/arm/samsung/samsung-boards.txt +++ b/Documentation/devicetree/bindings/arm/samsung/samsung-boards.txt @@ -66,7 +66,7 @@ Required root node properties: - "insignal,arndale-octa" - for Exynos5420-based Insignal Arndale Octa board. - "insignal,origen" - for Exynos4210-based Insignal Origen board. - - "insignal,origen4412 - for Exynos4412-based Insignal Origen board. + - "insignal,origen4412" - for Exynos4412-based Insignal Origen board. Optional nodes: diff --git a/Documentation/devicetree/bindings/clock/st/st,clkgen.txt b/Documentation/devicetree/bindings/clock/st/st,clkgen.txt index 7364953d0d0b..45ac19bfa0a9 100644 --- a/Documentation/devicetree/bindings/clock/st/st,clkgen.txt +++ b/Documentation/devicetree/bindings/clock/st/st,clkgen.txt @@ -31,10 +31,10 @@ This binding uses the common clock binding[1]. Each subnode should use the binding described in [2]..[7] [1] Documentation/devicetree/bindings/clock/clock-bindings.txt -[3] Documentation/devicetree/bindings/clock/st,clkgen-mux.txt -[4] Documentation/devicetree/bindings/clock/st,clkgen-pll.txt -[7] Documentation/devicetree/bindings/clock/st,quadfs.txt -[8] Documentation/devicetree/bindings/clock/st,flexgen.txt +[3] Documentation/devicetree/bindings/clock/st/st,clkgen-mux.txt +[4] Documentation/devicetree/bindings/clock/st/st,clkgen-pll.txt +[7] Documentation/devicetree/bindings/clock/st/st,quadfs.txt +[8] Documentation/devicetree/bindings/clock/st/st,flexgen.txt Required properties: diff --git a/Documentation/devicetree/bindings/clock/ti/gate.txt b/Documentation/devicetree/bindings/clock/ti/gate.txt index 03f8fdee62a7..56d603c1f716 100644 --- a/Documentation/devicetree/bindings/clock/ti/gate.txt +++ b/Documentation/devicetree/bindings/clock/ti/gate.txt @@ -10,7 +10,7 @@ will be controlled instead and the corresponding hw-ops for that is used. [1] Documentation/devicetree/bindings/clock/clock-bindings.txt -[2] Documentation/devicetree/bindings/clock/gate-clock.txt +[2] Documentation/devicetree/bindings/clock/gpio-gate-clock.txt [3] Documentation/devicetree/bindings/clock/ti/clockdomain.txt Required properties: diff --git a/Documentation/devicetree/bindings/clock/ti/interface.txt b/Documentation/devicetree/bindings/clock/ti/interface.txt index 3111a409fea6..3f4704040140 100644 --- a/Documentation/devicetree/bindings/clock/ti/interface.txt +++ b/Documentation/devicetree/bindings/clock/ti/interface.txt @@ -9,7 +9,7 @@ companion clock finding (match corresponding functional gate clock) and hardware autoidle enable / disable. 
[1] Documentation/devicetree/bindings/clock/clock-bindings.txt -[2] Documentation/devicetree/bindings/clock/gate-clock.txt +[2] Documentation/devicetree/bindings/clock/gpio-gate-clock.txt Required properties: - compatible : shall be one of: diff --git a/Documentation/devicetree/bindings/cpufreq/cpufreq-mediatek.txt b/Documentation/devicetree/bindings/cpufreq/cpufreq-mediatek.txt index d36f07e0a2bb..0551c78619de 100644 --- a/Documentation/devicetree/bindings/cpufreq/cpufreq-mediatek.txt +++ b/Documentation/devicetree/bindings/cpufreq/cpufreq-mediatek.txt @@ -8,7 +8,7 @@ Required properties: "intermediate" - A parent of "cpu" clock which is used as "intermediate" clock source (usually MAINPLL) when the original CPU PLL is under transition and not stable yet. - Please refer to Documentation/devicetree/bindings/clk/clock-bindings.txt for + Please refer to Documentation/devicetree/bindings/clock/clock-bindings.txt for generic clock consumer properties. - operating-points-v2: Please refer to Documentation/devicetree/bindings/opp/opp.txt for detail. diff --git a/Documentation/devicetree/bindings/devfreq/rk3399_dmc.txt b/Documentation/devicetree/bindings/devfreq/rk3399_dmc.txt index d6d2833482c9..fc2bcbe26b1e 100644 --- a/Documentation/devicetree/bindings/devfreq/rk3399_dmc.txt +++ b/Documentation/devicetree/bindings/devfreq/rk3399_dmc.txt @@ -12,7 +12,7 @@ Required properties: - clocks: Phandles for clock specified in "clock-names" property - clock-names : The name of clock used by the DFI, must be "pclk_ddr_mon"; -- operating-points-v2: Refer to Documentation/devicetree/bindings/power/opp.txt +- operating-points-v2: Refer to Documentation/devicetree/bindings/opp/opp.txt for details. - center-supply: DMC supply node. - status: Marks the node enabled/disabled. diff --git a/Documentation/devicetree/bindings/display/bridge/tda998x.txt b/Documentation/devicetree/bindings/display/bridge/tda998x.txt index 1a4eaca40d94..f5a02f61dd36 100644 --- a/Documentation/devicetree/bindings/display/bridge/tda998x.txt +++ b/Documentation/devicetree/bindings/display/bridge/tda998x.txt @@ -30,7 +30,7 @@ Optional properties: - nxp,calib-gpios: calibration GPIO, which must correspond with the gpio used for the TDA998x interrupt pin. -[1] Documentation/sound/alsa/soc/DAI.txt +[1] Documentation/sound/soc/dai.rst [2] include/dt-bindings/display/tda998x.h Example: diff --git a/Documentation/devicetree/bindings/display/tilcdc/tilcdc.txt b/Documentation/devicetree/bindings/display/tilcdc/tilcdc.txt index 6fddb4f4f71a..3055d5c2c04e 100644 --- a/Documentation/devicetree/bindings/display/tilcdc/tilcdc.txt +++ b/Documentation/devicetree/bindings/display/tilcdc/tilcdc.txt @@ -36,7 +36,7 @@ Optional nodes: - port/ports: to describe a connection to an external encoder. The binding follows Documentation/devicetree/bindings/graph.txt and - suppors a single port with a single endpoint. + supports a single port with a single endpoint. 
- See also Documentation/devicetree/bindings/display/tilcdc/panel.txt and Documentation/devicetree/bindings/display/tilcdc/tfp410.txt for connecting diff --git a/Documentation/devicetree/bindings/gpio/nintendo,hollywood-gpio.txt b/Documentation/devicetree/bindings/gpio/nintendo,hollywood-gpio.txt index 20fc72d9e61e..45a61b462287 100644 --- a/Documentation/devicetree/bindings/gpio/nintendo,hollywood-gpio.txt +++ b/Documentation/devicetree/bindings/gpio/nintendo,hollywood-gpio.txt @@ -1,7 +1,7 @@ Nintendo Wii (Hollywood) GPIO controller Required properties: -- compatible: "nintendo,hollywood-gpio +- compatible: "nintendo,hollywood-gpio" - reg: Physical base address and length of the controller's registers. - gpio-controller: Marks the device node as a GPIO controller. - #gpio-cells: Should be <2>. The first cell is the pin number and the diff --git a/Documentation/devicetree/bindings/gpu/arm,mali-midgard.txt b/Documentation/devicetree/bindings/gpu/arm,mali-midgard.txt index 039219df05c5..18a2cde2e5f3 100644 --- a/Documentation/devicetree/bindings/gpu/arm,mali-midgard.txt +++ b/Documentation/devicetree/bindings/gpu/arm,mali-midgard.txt @@ -34,7 +34,7 @@ Optional properties: - mali-supply : Phandle to regulator for the Mali device. Refer to Documentation/devicetree/bindings/regulator/regulator.txt for details. -- operating-points-v2 : Refer to Documentation/devicetree/bindings/power/opp.txt +- operating-points-v2 : Refer to Documentation/devicetree/bindings/opp/opp.txt for details. diff --git a/Documentation/devicetree/bindings/gpu/arm,mali-utgard.txt b/Documentation/devicetree/bindings/gpu/arm,mali-utgard.txt index c1f65d1dac1d..63cd91176a68 100644 --- a/Documentation/devicetree/bindings/gpu/arm,mali-utgard.txt +++ b/Documentation/devicetree/bindings/gpu/arm,mali-utgard.txt @@ -44,7 +44,7 @@ Optional properties: - memory-region: Memory region to allocate from, as defined in - Documentation/devicetree/bindi/reserved-memory/reserved-memory.txt + Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt - mali-supply: Phandle to regulator for the Mali device, as defined in diff --git a/Documentation/devicetree/bindings/hwmon/npcm750-pwm-fan.txt b/Documentation/devicetree/bindings/hwmon/npcm750-pwm-fan.txt new file mode 100644 index 000000000000..28f43e929f6d --- /dev/null +++ b/Documentation/devicetree/bindings/hwmon/npcm750-pwm-fan.txt @@ -0,0 +1,84 @@ +Nuvoton NPCM7xx PWM and Fan Tacho controller device + +The Nuvoton BMC NPCM7XX supports 8 Pulse-width modulation (PWM) +controller outputs and 16 Fan tachometer controller inputs. + +Required properties for pwm-fan node +- #address-cells : should be 1. +- #size-cells : should be 0. +- compatible : "nuvoton,npcm750-pwm-fan" for Poleg NPCM7XX. +- reg : specifies physical base address and size of the registers. +- reg-names : must contain: + * "pwm" for the PWM registers. + * "fan" for the Fan registers. +- clocks : phandle of reference clocks. +- clock-names : must contain + * "pwm" for PWM controller operating clock. + * "fan" for Fan controller operating clock. +- interrupts : contain the Fan interrupts with flags for falling edge. +- pinctrl-names : a pinctrl state named "default" must be defined. +- pinctrl-0 : phandle referencing pin configuration of the PWM and Fan + controller ports. + +fan subnode format: +=================== +Under fan subnode can be upto 8 child nodes, each child node representing a fan. +Each fan subnode must have one PWM channel and atleast one Fan tach channel. 
+ +For PWM channel can be configured cooling-levels to create cooling device. +Cooling device could be bound to a thermal zone for the thermal control. + +Required properties for each child node: +- reg : specify the PWM output channel. + integer value in the range 0 through 7, that represent + the PWM channel number that used. + +- fan-tach-ch : specify the Fan tach input channel. + integer value in the range 0 through 15, that represent + the fan tach channel number that used. + + At least one Fan tach input channel is required + +Optional property for each child node: +- cooling-levels: PWM duty cycle values in a range from 0 to 255 + which correspond to thermal cooling states. + +Examples: + +pwm_fan:pwm-fan-controller@103000 { + #address-cells = <1>; + #size-cells = <0>; + compatible = "nuvoton,npcm750-pwm-fan"; + reg = <0x103000 0x2000>, + <0x180000 0x8000>; + reg-names = "pwm", "fan"; + clocks = <&clk NPCM7XX_CLK_APB3>, + <&clk NPCM7XX_CLK_APB4>; + clock-names = "pwm","fan"; + interrupts = , + , + , + , + , + , + , + ; + pinctrl-names = "default"; + pinctrl-0 = <&pwm0_pins &pwm1_pins &pwm2_pins + &fanin0_pins &fanin1_pins &fanin2_pins + &fanin3_pins &fanin4_pins>; + fan@0 { + reg = <0x00>; + fan-tach-ch = /bits/ 8 <0x00 0x01>; + cooling-levels = <127 255>; + }; + fan@1 { + reg = <0x01>; + fan-tach-ch = /bits/ 8 <0x02 0x03>; + }; + fan@2 { + reg = <0x02>; + fan-tach-ch = /bits/ 8 <0x04>; + }; + +}; diff --git a/Documentation/devicetree/bindings/input/rmi4/rmi_2d_sensor.txt b/Documentation/devicetree/bindings/input/rmi4/rmi_2d_sensor.txt index f2c30c8b725d..9afffbdf6e28 100644 --- a/Documentation/devicetree/bindings/input/rmi4/rmi_2d_sensor.txt +++ b/Documentation/devicetree/bindings/input/rmi4/rmi_2d_sensor.txt @@ -12,7 +12,7 @@ Additional documentation for F11 can be found at: http://www.synaptics.com/sites/default/files/511-000136-01-Rev-E-RMI4-Interfacing-Guide.pdf Optional Touch Properties: -Description in Documentation/devicetree/bindings/input/touch +Description in Documentation/devicetree/bindings/input/touchscreen - touchscreen-inverted-x - touchscreen-inverted-y - touchscreen-swapped-x-y diff --git a/Documentation/devicetree/bindings/input/rotary-encoder.txt b/Documentation/devicetree/bindings/input/rotary-encoder.txt index f99fe5cdeaec..a644408b33b8 100644 --- a/Documentation/devicetree/bindings/input/rotary-encoder.txt +++ b/Documentation/devicetree/bindings/input/rotary-encoder.txt @@ -28,7 +28,7 @@ Deprecated properties: This property is deprecated. Instead, a 'steps-per-period ' value should be used, such as "rotary-encoder,steps-per-period = <2>". -See Documentation/input/rotary-encoder.txt for more information. +See Documentation/input/devices/rotary-encoder.rst for more information. Example: diff --git a/Documentation/devicetree/bindings/input/sprd,sc27xx-vibra.txt b/Documentation/devicetree/bindings/input/sprd,sc27xx-vibra.txt new file mode 100644 index 000000000000..f2ec0d4f2dff --- /dev/null +++ b/Documentation/devicetree/bindings/input/sprd,sc27xx-vibra.txt @@ -0,0 +1,23 @@ +Spreadtrum SC27xx PMIC Vibrator + +Required properties: +- compatible: should be "sprd,sc2731-vibrator". +- reg: address of vibrator control register. 
+ +Example : + + sc2731_pmic: pmic@0 { + compatible = "sprd,sc2731"; + reg = <0>; + spi-max-frequency = <26000000>; + interrupts = ; + interrupt-controller; + #interrupt-cells = <2>; + #address-cells = <1>; + #size-cells = <0>; + + vibrator@eb4 { + compatible = "sprd,sc2731-vibrator"; + reg = <0xeb4>; + }; + }; diff --git a/Documentation/devicetree/bindings/input/touchscreen/hideep.txt b/Documentation/devicetree/bindings/input/touchscreen/hideep.txt index 121d9b7c79a2..1063c30d53f7 100644 --- a/Documentation/devicetree/bindings/input/touchscreen/hideep.txt +++ b/Documentation/devicetree/bindings/input/touchscreen/hideep.txt @@ -32,7 +32,7 @@ i2c@00000000 { reg = <0x6c>; interrupt-parent = <&gpx1>; interrupts = <2 IRQ_TYPE_LEVEL_LOW>; - vdd-supply = <&ldo15_reg>"; + vdd-supply = <&ldo15_reg>; vid-supply = <&ldo18_reg>; reset-gpios = <&gpx1 5 0>; touchscreen-size-x = <1080>; diff --git a/Documentation/devicetree/bindings/interrupt-controller/ingenic,intc.txt b/Documentation/devicetree/bindings/interrupt-controller/ingenic,intc.txt index 5f89fb635a1b..f97fd8ab5e45 100644 --- a/Documentation/devicetree/bindings/interrupt-controller/ingenic,intc.txt +++ b/Documentation/devicetree/bindings/interrupt-controller/ingenic,intc.txt @@ -4,6 +4,7 @@ Required properties: - compatible : should be "ingenic,-intc". Valid strings are: ingenic,jz4740-intc + ingenic,jz4725b-intc ingenic,jz4770-intc ingenic,jz4775-intc ingenic,jz4780-intc diff --git a/Documentation/devicetree/bindings/interrupt-controller/nvidia,tegra20-ictlr.txt b/Documentation/devicetree/bindings/interrupt-controller/nvidia,tegra20-ictlr.txt index 1099fe0788fa..f246ccbf8838 100644 --- a/Documentation/devicetree/bindings/interrupt-controller/nvidia,tegra20-ictlr.txt +++ b/Documentation/devicetree/bindings/interrupt-controller/nvidia,tegra20-ictlr.txt @@ -15,7 +15,7 @@ Required properties: include "nvidia,tegra30-ictlr". - reg : Specifies base physical address and size of the registers. Each controller must be described separately (Tegra20 has 4 of them, - whereas Tegra30 and later have 5" + whereas Tegra30 and later have 5). - interrupt-controller : Identifies the node as an interrupt controller. - #interrupt-cells : Specifies the number of cells needed to encode an interrupt source. The value must be 3. 
diff --git a/Documentation/devicetree/bindings/interrupt-controller/renesas,irqc.txt b/Documentation/devicetree/bindings/interrupt-controller/renesas,irqc.txt index 20f121daa910..697ca2f26d1b 100644 --- a/Documentation/devicetree/bindings/interrupt-controller/renesas,irqc.txt +++ b/Documentation/devicetree/bindings/interrupt-controller/renesas,irqc.txt @@ -7,6 +7,7 @@ Required properties: - "renesas,irqc-r8a73a4" (R-Mobile APE6) - "renesas,irqc-r8a7743" (RZ/G1M) - "renesas,irqc-r8a7745" (RZ/G1E) + - "renesas,irqc-r8a77470" (RZ/G1C) - "renesas,irqc-r8a7790" (R-Car H2) - "renesas,irqc-r8a7791" (R-Car M2-W) - "renesas,irqc-r8a7792" (R-Car V2H) @@ -16,6 +17,7 @@ Required properties: - "renesas,intc-ex-r8a7796" (R-Car M3-W) - "renesas,intc-ex-r8a77965" (R-Car M3-N) - "renesas,intc-ex-r8a77970" (R-Car V3M) + - "renesas,intc-ex-r8a77980" (R-Car V3H) - "renesas,intc-ex-r8a77995" (R-Car D3) - #interrupt-cells: has to be <2>: an interrupt index and flags, as defined in interrupts.txt in this directory diff --git a/Documentation/devicetree/bindings/interrupt-controller/st,stm32-exti.txt b/Documentation/devicetree/bindings/interrupt-controller/st,stm32-exti.txt index 136bd612bd83..6a36bf66d932 100644 --- a/Documentation/devicetree/bindings/interrupt-controller/st,stm32-exti.txt +++ b/Documentation/devicetree/bindings/interrupt-controller/st,stm32-exti.txt @@ -12,7 +12,7 @@ Required properties: specifier, shall be 2 - interrupts: interrupts references to primary interrupt controller (only needed for exti controller with multiple exti under - same parent interrupt: st,stm32-exti and st,stm32h7-exti") + same parent interrupt: st,stm32-exti and st,stm32h7-exti) Example: diff --git a/Documentation/devicetree/bindings/media/stih407-c8sectpfe.txt b/Documentation/devicetree/bindings/media/stih407-c8sectpfe.txt index c7888d6f6408..880d4d70c9fd 100644 --- a/Documentation/devicetree/bindings/media/stih407-c8sectpfe.txt +++ b/Documentation/devicetree/bindings/media/stih407-c8sectpfe.txt @@ -28,7 +28,7 @@ See: Documentation/devicetree/bindings/clock/clock-bindings.txt - pinctrl-names : a pinctrl state named tsin%d-serial or tsin%d-parallel (where %d is tsin-num) must be defined for each tsin child node. - pinctrl-0 : phandle referencing pin configuration for this tsin configuration -See: Documentation/devicetree/bindings/pinctrl/pinctrl-binding.txt +See: Documentation/devicetree/bindings/pinctrl/pinctrl-bindings.txt Required properties (tsin (child) node): diff --git a/Documentation/devicetree/bindings/mfd/as3722.txt b/Documentation/devicetree/bindings/mfd/as3722.txt index 0b2a6099aa20..5297b2210704 100644 --- a/Documentation/devicetree/bindings/mfd/as3722.txt +++ b/Documentation/devicetree/bindings/mfd/as3722.txt @@ -46,7 +46,7 @@ is required: Following properties are require if pin control setting is required at boot. - pinctrl-names: A pinctrl state named "default" be defined, using the - bindings in pinctrl/pinctrl-binding.txt. + bindings in pinctrl/pinctrl-bindings.txt. - pinctrl[0...n]: Properties to contain the phandle that refer to different nodes of pin control settings. These nodes represents the pin control setting of state 0 to state n. 
Each of these diff --git a/Documentation/devicetree/bindings/mfd/mt6397.txt b/Documentation/devicetree/bindings/mfd/mt6397.txt index d1df77f4d655..0ebd08af777d 100644 --- a/Documentation/devicetree/bindings/mfd/mt6397.txt +++ b/Documentation/devicetree/bindings/mfd/mt6397.txt @@ -12,7 +12,7 @@ MT6397/MT6323 is a multifunction device with the following sub modules: It is interfaced to host controller using SPI interface by a proprietary hardware called PMIC wrapper or pwrap. MT6397/MT6323 MFD is a child device of pwrap. See the following for pwarp node definitions: -Documentation/devicetree/bindings/soc/pwrap.txt +Documentation/devicetree/bindings/soc/mediatek/pwrap.txt This document describes the binding for MFD device and its sub module. diff --git a/Documentation/devicetree/bindings/mfd/sun6i-prcm.txt b/Documentation/devicetree/bindings/mfd/sun6i-prcm.txt index dd2c06540485..daa091c2e67b 100644 --- a/Documentation/devicetree/bindings/mfd/sun6i-prcm.txt +++ b/Documentation/devicetree/bindings/mfd/sun6i-prcm.txt @@ -8,8 +8,8 @@ Required properties: - reg: The PRCM registers range The prcm node may contain several subdevices definitions: - - see Documentation/devicetree/clk/sunxi.txt for clock devices - - see Documentation/devicetree/reset/allwinner,sunxi-clock-reset.txt for reset + - see Documentation/devicetree/bindings/clock/sunxi.txt for clock devices + - see Documentation/devicetree/bindings/reset/allwinner,sunxi-clock-reset.txt for reset controller devices diff --git a/Documentation/devicetree/bindings/mips/brcm/soc.txt b/Documentation/devicetree/bindings/mips/brcm/soc.txt index 356c29789cf5..3a66d3c483e1 100644 --- a/Documentation/devicetree/bindings/mips/brcm/soc.txt +++ b/Documentation/devicetree/bindings/mips/brcm/soc.txt @@ -152,7 +152,7 @@ Required properties: - compatible : should contain one of: "brcm,bcm7425-timers" "brcm,bcm7429-timers" - "brcm,bcm7435-timers and + "brcm,bcm7435-timers" and "brcm,brcmstb-timers" - reg : the timers register range - interrupts : the interrupt line for this timer block diff --git a/Documentation/devicetree/bindings/mmc/exynos-dw-mshc.txt b/Documentation/devicetree/bindings/mmc/exynos-dw-mshc.txt index a58c173b7ab9..0419a63f73a0 100644 --- a/Documentation/devicetree/bindings/mmc/exynos-dw-mshc.txt +++ b/Documentation/devicetree/bindings/mmc/exynos-dw-mshc.txt @@ -62,7 +62,7 @@ Required properties for a slot (Deprecated - Recommend to use one slot per host) rest of the gpios (depending on the bus-width property) are the data lines in no particular order. The format of the gpio specifier depends on the gpio controller. -(Deprecated - Refer to Documentation/devicetree/binding/pinctrl/samsung-pinctrl.txt) +(Deprecated - Refer to Documentation/devicetree/bindings/pinctrl/samsung-pinctrl.txt) Example: diff --git a/Documentation/devicetree/bindings/mmc/microchip,sdhci-pic32.txt b/Documentation/devicetree/bindings/mmc/microchip,sdhci-pic32.txt index 3149297b3933..f064528effed 100644 --- a/Documentation/devicetree/bindings/mmc/microchip,sdhci-pic32.txt +++ b/Documentation/devicetree/bindings/mmc/microchip,sdhci-pic32.txt @@ -12,7 +12,7 @@ Required properties: See: Documentation/devicetree/bindings/clock/clock-bindings.txt - pinctrl-names: A pinctrl state names "default" must be defined. - pinctrl-0: Phandle referencing pin configuration of the SDHCI controller. 
- See: Documentation/devicetree/bindings/pinctrl/pinctrl-binding.txt + See: Documentation/devicetree/bindings/pinctrl/pinctrl-bindings.txt Example: diff --git a/Documentation/devicetree/bindings/mmc/sdhci-st.txt b/Documentation/devicetree/bindings/mmc/sdhci-st.txt index 6b3d40ca395e..ccf82b4ee838 100644 --- a/Documentation/devicetree/bindings/mmc/sdhci-st.txt +++ b/Documentation/devicetree/bindings/mmc/sdhci-st.txt @@ -20,7 +20,7 @@ Required properties: - pinctrl-names: A pinctrl state names "default" must be defined. - pinctrl-0: Phandle referencing pin configuration of the sd/emmc controller. - See: Documentation/devicetree/bindings/pinctrl/pinctrl-binding.txt + See: Documentation/devicetree/bindings/pinctrl/pinctrl-bindings.txt - reg: This must provide the host controller base address and it can also contain the FlashSS Top register for TX/RX delay used by the driver diff --git a/Documentation/devicetree/bindings/net/dsa/ksz.txt b/Documentation/devicetree/bindings/net/dsa/ksz.txt index fd23904ac68e..a700943218ca 100644 --- a/Documentation/devicetree/bindings/net/dsa/ksz.txt +++ b/Documentation/devicetree/bindings/net/dsa/ksz.txt @@ -6,7 +6,7 @@ Required properties: - compatible: For external switch chips, compatible string must be exactly one of: "microchip,ksz9477" -See Documentation/devicetree/bindings/dsa/dsa.txt for a list of additional +See Documentation/devicetree/bindings/net/dsa/dsa.txt for a list of additional required and optional properties. Examples: diff --git a/Documentation/devicetree/bindings/net/dsa/mt7530.txt b/Documentation/devicetree/bindings/net/dsa/mt7530.txt index a9bc27b93ee3..aa3527f71fdc 100644 --- a/Documentation/devicetree/bindings/net/dsa/mt7530.txt +++ b/Documentation/devicetree/bindings/net/dsa/mt7530.txt @@ -31,7 +31,7 @@ Required properties for the child nodes within ports container: - phy-mode: String, must be either "trgmii" or "rgmii" for port labeled "cpu". -See Documentation/devicetree/bindings/dsa/dsa.txt for a list of additional +See Documentation/devicetree/bindings/net/dsa/dsa.txt for a list of additional required, optional properties and how the integrated switch subnodes must be specified. 
diff --git a/Documentation/devicetree/bindings/net/fsl-fman.txt b/Documentation/devicetree/bindings/net/fsl-fman.txt index df873d1f3b7c..f8c33890bc29 100644 --- a/Documentation/devicetree/bindings/net/fsl-fman.txt +++ b/Documentation/devicetree/bindings/net/fsl-fman.txt @@ -238,7 +238,7 @@ PROPERTIES Must include one of the following: - "fsl,fman-dtsec" for dTSEC MAC - "fsl,fman-xgec" for XGEC MAC - - "fsl,fman-memac for mEMAC MAC + - "fsl,fman-memac" for mEMAC MAC - cell-index Usage: required diff --git a/Documentation/devicetree/bindings/nvmem/zii,rave-sp-eeprom.txt b/Documentation/devicetree/bindings/nvmem/zii,rave-sp-eeprom.txt index d5e22fc67d66..0df79d9e07ec 100644 --- a/Documentation/devicetree/bindings/nvmem/zii,rave-sp-eeprom.txt +++ b/Documentation/devicetree/bindings/nvmem/zii,rave-sp-eeprom.txt @@ -18,7 +18,7 @@ Optional properties: Data cells: Data cells are child nodes of eerpom node, bindings for which are -documented in Documentation/bindings/nvmem/nvmem.txt +documented in Documentation/devicetree/bindings/nvmem/nvmem.txt Example: diff --git a/Documentation/devicetree/bindings/pci/hisilicon-pcie.txt b/Documentation/devicetree/bindings/pci/hisilicon-pcie.txt index 7bf9df047a1e..0dcb87d6554f 100644 --- a/Documentation/devicetree/bindings/pci/hisilicon-pcie.txt +++ b/Documentation/devicetree/bindings/pci/hisilicon-pcie.txt @@ -3,7 +3,7 @@ HiSilicon Hip05 and Hip06 PCIe host bridge DT description HiSilicon PCIe host controller is based on the Synopsys DesignWare PCI core. It shares common functions with the PCIe DesignWare core driver and inherits common properties defined in -Documentation/devicetree/bindings/pci/designware-pci.txt. +Documentation/devicetree/bindings/pci/designware-pcie.txt. Additional properties are described here: diff --git a/Documentation/devicetree/bindings/pci/kirin-pcie.txt b/Documentation/devicetree/bindings/pci/kirin-pcie.txt index 6e217c63123d..6bbe43818ad5 100644 --- a/Documentation/devicetree/bindings/pci/kirin-pcie.txt +++ b/Documentation/devicetree/bindings/pci/kirin-pcie.txt @@ -3,7 +3,7 @@ HiSilicon Kirin SoCs PCIe host DT description Kirin PCIe host controller is based on the Synopsys DesignWare PCI core. It shares common functions with the PCIe DesignWare core driver and inherits common properties defined in -Documentation/devicetree/bindings/pci/designware-pci.txt. +Documentation/devicetree/bindings/pci/designware-pcie.txt. Additional properties are described here: diff --git a/Documentation/devicetree/bindings/pci/pci-keystone.txt b/Documentation/devicetree/bindings/pci/pci-keystone.txt index 7e05487544ed..3d4a209b0fd0 100644 --- a/Documentation/devicetree/bindings/pci/pci-keystone.txt +++ b/Documentation/devicetree/bindings/pci/pci-keystone.txt @@ -3,9 +3,9 @@ TI Keystone PCIe interface Keystone PCI host Controller is based on the Synopsys DesignWare PCI hardware version 3.65. It shares common functions with the PCIe DesignWare core driver and inherits common properties defined in -Documentation/devicetree/bindings/pci/designware-pci.txt +Documentation/devicetree/bindings/pci/designware-pcie.txt -Please refer to Documentation/devicetree/bindings/pci/designware-pci.txt +Please refer to Documentation/devicetree/bindings/pci/designware-pcie.txt for the details of DesignWare DT bindings. Additional properties are described here as well as properties that are not applicable. 
diff --git a/Documentation/devicetree/bindings/phy/phy-ath79-usb.txt b/Documentation/devicetree/bindings/phy/phy-ath79-usb.txt index cafe2197dad9..c3a29c5feea3 100644 --- a/Documentation/devicetree/bindings/phy/phy-ath79-usb.txt +++ b/Documentation/devicetree/bindings/phy/phy-ath79-usb.txt @@ -3,7 +3,7 @@ Required properties: - compatible: "qca,ar7100-usb-phy" - #phys-cells: should be 0 -- reset-names: "usb-phy"[, "usb-suspend-override"] +- reset-names: "phy"[, "suspend-override"] - resets: references to the reset controllers Example: @@ -11,7 +11,7 @@ Example: usb-phy { compatible = "qca,ar7100-usb-phy"; - reset-names = "usb-phy", "usb-suspend-override"; + reset-names = "phy", "suspend-override"; resets = <&rst 4>, <&rst 3>; #phy-cells = <0>; diff --git a/Documentation/devicetree/bindings/pinctrl/pinctrl-max77620.txt b/Documentation/devicetree/bindings/pinctrl/pinctrl-max77620.txt index ad4fce3552bb..511fc234558b 100644 --- a/Documentation/devicetree/bindings/pinctrl/pinctrl-max77620.txt +++ b/Documentation/devicetree/bindings/pinctrl/pinctrl-max77620.txt @@ -11,9 +11,9 @@ Optional Pinmux properties: -------------------------- Following properties are required if default setting of pins are required at boot. -- pinctrl-names: A pinctrl state named per <pinctrl-binding.txt>. +- pinctrl-names: A pinctrl state named per <pinctrl-bindings.txt>. - pinctrl[0...n]: Properties to contain the phandle for pinctrl states per - <pinctrl-binding.txt>. + <pinctrl-bindings.txt>. The pin configurations are defined as child of the pinctrl states node. Each sub-node have following properties: diff --git a/Documentation/devicetree/bindings/pinctrl/pinctrl-mcp23s08.txt b/Documentation/devicetree/bindings/pinctrl/pinctrl-mcp23s08.txt index a677145ae6d1..625a22e2f211 100644 --- a/Documentation/devicetree/bindings/pinctrl/pinctrl-mcp23s08.txt +++ b/Documentation/devicetree/bindings/pinctrl/pinctrl-mcp23s08.txt @@ -101,9 +101,9 @@ Optional Pinmux properties: -------------------------- Following properties are required if default setting of pins are required at boot. -- pinctrl-names: A pinctrl state named per <pinctrl-binding.txt>. +- pinctrl-names: A pinctrl state named per <pinctrl-bindings.txt>. - pinctrl[0...n]: Properties to contain the phandle for pinctrl states per - <pinctrl-binding.txt>. + <pinctrl-bindings.txt>. The pin configurations are defined as child of the pinctrl states node. Each sub-node have following properties: diff --git a/Documentation/devicetree/bindings/pinctrl/pinctrl-rk805.txt b/Documentation/devicetree/bindings/pinctrl/pinctrl-rk805.txt index eee3dc260934..cbcbd31e3ce8 100644 --- a/Documentation/devicetree/bindings/pinctrl/pinctrl-rk805.txt +++ b/Documentation/devicetree/bindings/pinctrl/pinctrl-rk805.txt @@ -10,9 +10,9 @@ Optional Pinmux properties: -------------------------- Following properties are required if default setting of pins are required at boot. -- pinctrl-names: A pinctrl state named per <pinctrl-binding.txt>. +- pinctrl-names: A pinctrl state named per <pinctrl-bindings.txt>. - pinctrl[0...n]: Properties to contain the phandle for pinctrl states per - <pinctrl-binding.txt>. + <pinctrl-bindings.txt>. The pin configurations are defined as child of the pinctrl states node. Each sub-node have following properties: diff --git a/Documentation/devicetree/bindings/power/fsl,imx-gpc.txt b/Documentation/devicetree/bindings/power/fsl,imx-gpc.txt index b31d6bbeee16..726ec2875223 100644 --- a/Documentation/devicetree/bindings/power/fsl,imx-gpc.txt +++ b/Documentation/devicetree/bindings/power/fsl,imx-gpc.txt @@ -14,7 +14,7 @@ Required properties: datasheet - interrupts: Should contain one interrupt specifier for the GPC interrupt - clocks: Must contain an entry for each entry in clock-names. 
- See Documentation/devicetree/bindings/clocks/clock-bindings.txt for details. + See Documentation/devicetree/bindings/clock/clock-bindings.txt for details. - clock-names: Must include the following entries: - ipg diff --git a/Documentation/devicetree/bindings/power/power_domain.txt b/Documentation/devicetree/bindings/power/power_domain.txt index 9b387f861aed..7dec508987c7 100644 --- a/Documentation/devicetree/bindings/power/power_domain.txt +++ b/Documentation/devicetree/bindings/power/power_domain.txt @@ -133,7 +133,7 @@ located inside a PM domain with index 0 of a power controller represented by a node with the label "power". In the second example the consumer device are partitioned across two PM domains, the first with index 0 and the second with index 1, of a power controller that -is represented by a node with the label "power. +is represented by a node with the label "power". Optional properties: - required-opps: This contains phandle to an OPP node in another device's OPP diff --git a/Documentation/devicetree/bindings/power/supply/ab8500/btemp.txt b/Documentation/devicetree/bindings/power/supply/ab8500/btemp.txt index 0ba1bcc7f33a..f181e46d8e07 100644 --- a/Documentation/devicetree/bindings/power/supply/ab8500/btemp.txt +++ b/Documentation/devicetree/bindings/power/supply/ab8500/btemp.txt @@ -13,4 +13,4 @@ Required Properties: }; For information on battery specific node, Ref: -Documentation/devicetree/bindings/power_supply/ab8500/fg.txt +Documentation/devicetree/bindings/power/supply/ab8500/fg.txt diff --git a/Documentation/devicetree/bindings/power/supply/ab8500/chargalg.txt b/Documentation/devicetree/bindings/power/supply/ab8500/chargalg.txt index ef5328371122..56636f927203 100644 --- a/Documentation/devicetree/bindings/power/supply/ab8500/chargalg.txt +++ b/Documentation/devicetree/bindings/power/supply/ab8500/chargalg.txt @@ -13,4 +13,4 @@ ab8500_chargalg { }; For information on battery specific node, Ref: -Documentation/devicetree/bindings/power_supply/ab8500/fg.txt +Documentation/devicetree/bindings/power/supply/ab8500/fg.txt diff --git a/Documentation/devicetree/bindings/power/supply/ab8500/charger.txt b/Documentation/devicetree/bindings/power/supply/ab8500/charger.txt index 6bdbb08ea9e0..24ada03e07b4 100644 --- a/Documentation/devicetree/bindings/power/supply/ab8500/charger.txt +++ b/Documentation/devicetree/bindings/power/supply/ab8500/charger.txt @@ -22,4 +22,4 @@ Required Properties: }; For information on battery specific node, Ref: -Documentation/devicetree/bindings/power_supply/ab8500/fg.txt +Documentation/devicetree/bindings/power/supply/ab8500/fg.txt diff --git a/Documentation/devicetree/bindings/power/wakeup-source.txt b/Documentation/devicetree/bindings/power/wakeup-source.txt index 5d254ab13ebf..cfd74659fbed 100644 --- a/Documentation/devicetree/bindings/power/wakeup-source.txt +++ b/Documentation/devicetree/bindings/power/wakeup-source.txt @@ -22,7 +22,7 @@ List of legacy properties and respective binding document 3. "has-tpo" Documentation/devicetree/bindings/rtc/rtc-opal.txt 4. "linux,wakeup" Documentation/devicetree/bindings/input/gpio-matrix-keypad.txt Documentation/devicetree/bindings/mfd/tc3589x.txt - Documentation/devicetree/bindings/input/ads7846.txt + Documentation/devicetree/bindings/input/touchscreen/ads7846.txt 5. "linux,keypad-wakeup" Documentation/devicetree/bindings/input/qcom,pm8xxx-keypad.txt 6. "linux,input-wakeup" Documentation/devicetree/bindings/input/samsung-keypad.txt 7. 
"nvidia,wakeup-source" Documentation/devicetree/bindings/input/nvidia,tegra20-kbc.txt diff --git a/Documentation/devicetree/bindings/regulator/tps65090.txt b/Documentation/devicetree/bindings/regulator/tps65090.txt index ca69f5e3040c..ae326f263597 100644 --- a/Documentation/devicetree/bindings/regulator/tps65090.txt +++ b/Documentation/devicetree/bindings/regulator/tps65090.txt @@ -16,7 +16,7 @@ Required properties: Optional properties: - ti,enable-ext-control: This is applicable for DCDC1, DCDC2 and DCDC3. If DCDCs are externally controlled then this property should be there. -- "dcdc-ext-control-gpios: This is applicable for DCDC1, DCDC2 and DCDC3. +- dcdc-ext-control-gpios: This is applicable for DCDC1, DCDC2 and DCDC3. If DCDCs are externally controlled and if it is from GPIO then GPIO number should be provided. If it is externally controlled and no GPIO entry then driver will just configure this rails as external control diff --git a/Documentation/devicetree/bindings/reset/st,sti-softreset.txt b/Documentation/devicetree/bindings/reset/st,sti-softreset.txt index a21658f18fe6..3661e6153a92 100644 --- a/Documentation/devicetree/bindings/reset/st,sti-softreset.txt +++ b/Documentation/devicetree/bindings/reset/st,sti-softreset.txt @@ -15,7 +15,7 @@ Please refer to reset.txt in this directory for common reset controller binding usage. Required properties: -- compatible: Should be st,stih407-softreset"; +- compatible: Should be "st,stih407-softreset"; - #reset-cells: 1, see below example: diff --git a/Documentation/devicetree/bindings/serial/microchip,pic32-uart.txt b/Documentation/devicetree/bindings/serial/microchip,pic32-uart.txt index 7a34345d0ca3..c8dd440e9747 100644 --- a/Documentation/devicetree/bindings/serial/microchip,pic32-uart.txt +++ b/Documentation/devicetree/bindings/serial/microchip,pic32-uart.txt @@ -8,7 +8,7 @@ Required properties: See: Documentation/devicetree/bindings/clock/clock-bindings.txt - pinctrl-names: A pinctrl state names "default" must be defined. - pinctrl-0: Phandle referencing pin configuration of the UART peripheral. - See: Documentation/devicetree/bindings/pinctrl/pinctrl-binding.txt + See: Documentation/devicetree/bindings/pinctrl/pinctrl-bindings.txt Optional properties: - cts-gpios: CTS pin for UART diff --git a/Documentation/devicetree/bindings/soc/qcom/qcom,geni-se.txt b/Documentation/devicetree/bindings/soc/qcom/qcom,geni-se.txt index d330c73de9a2..68b7d6207e3d 100644 --- a/Documentation/devicetree/bindings/soc/qcom/qcom,geni-se.txt +++ b/Documentation/devicetree/bindings/soc/qcom/qcom,geni-se.txt @@ -39,7 +39,7 @@ Required properties: Optional property: - clock-frequency: Desired I2C bus clock frequency in Hz. - When missing default to 400000Hz. + When missing default to 100000Hz. Child nodes should conform to I2C bus binding as described in i2c.txt. 
diff --git a/Documentation/devicetree/bindings/sound/qcom,apq8016-sbc.txt b/Documentation/devicetree/bindings/sound/qcom,apq8016-sbc.txt index 6a4aadc4ce06..84b28dbe9f15 100644 --- a/Documentation/devicetree/bindings/sound/qcom,apq8016-sbc.txt +++ b/Documentation/devicetree/bindings/sound/qcom,apq8016-sbc.txt @@ -30,7 +30,7 @@ Required properties: Board connectors: * Headset Mic - * Secondary Mic", + * Secondary Mic * DMIC * Ext Spk diff --git a/Documentation/devicetree/bindings/sound/qcom,apq8096.txt b/Documentation/devicetree/bindings/sound/qcom,apq8096.txt index aa54e49fc8a2..c7600a93ab39 100644 --- a/Documentation/devicetree/bindings/sound/qcom,apq8096.txt +++ b/Documentation/devicetree/bindings/sound/qcom,apq8096.txt @@ -35,7 +35,7 @@ This binding describes the APQ8096 sound card, which uses qdsp for audio. "Digital Mic3" Audio pins and MicBias on WCD9335 Codec: - "MIC_BIAS1 + "MIC_BIAS1" "MIC_BIAS2" "MIC_BIAS3" "MIC_BIAS4" diff --git a/Documentation/devicetree/bindings/sound/st,stm32-i2s.txt b/Documentation/devicetree/bindings/sound/st,stm32-i2s.txt index 4bda52042402..58c341300552 100644 --- a/Documentation/devicetree/bindings/sound/st,stm32-i2s.txt +++ b/Documentation/devicetree/bindings/sound/st,stm32-i2s.txt @@ -18,7 +18,7 @@ Required properties: See Documentation/devicetree/bindings/dma/stm32-dma.txt. - dma-names: Identifier for each DMA request line. Must be "tx" and "rx". - pinctrl-names: should contain only value "default" - - pinctrl-0: see Documentation/devicetree/bindings/pinctrl/pinctrl-stm32.txt + - pinctrl-0: see Documentation/devicetree/bindings/pinctrl/st,stm32-pinctrl.txt Optional properties: - resets: Reference to a reset controller asserting the reset controller diff --git a/Documentation/devicetree/bindings/sound/st,stm32-sai.txt b/Documentation/devicetree/bindings/sound/st,stm32-sai.txt index f301cdf0b7e6..3a3fc506e43a 100644 --- a/Documentation/devicetree/bindings/sound/st,stm32-sai.txt +++ b/Documentation/devicetree/bindings/sound/st,stm32-sai.txt @@ -37,7 +37,7 @@ SAI subnodes required properties: "tx": if sai sub-block is configured as playback DAI "rx": if sai sub-block is configured as capture DAI - pinctrl-names: should contain only value "default" - - pinctrl-0: see Documentation/devicetree/bindings/pinctrl/pinctrl-stm32.txt + - pinctrl-0: see Documentation/devicetree/bindings/pinctrl/st,stm32-pinctrl.txt SAI subnodes Optional properties: - st,sync: specify synchronization mode. 
diff --git a/Documentation/devicetree/bindings/spi/spi-st-ssc.txt b/Documentation/devicetree/bindings/spi/spi-st-ssc.txt index fe54959ec957..1bdc4709e474 100644 --- a/Documentation/devicetree/bindings/spi/spi-st-ssc.txt +++ b/Documentation/devicetree/bindings/spi/spi-st-ssc.txt @@ -9,7 +9,7 @@ Required properties: - clocks : Must contain an entry for each name in clock-names See ../clk/* - pinctrl-names : Uses "default", can use "sleep" if provided - See ../pinctrl/pinctrl-binding.txt + See ../pinctrl/pinctrl-bindings.txt Optional properties: - cs-gpios : List of GPIO chip selects diff --git a/Documentation/devicetree/bindings/timer/mediatek,mtk-timer.txt b/Documentation/devicetree/bindings/timer/mediatek,mtk-timer.txt index b1fe7e9de1b4..18d4d0166c76 100644 --- a/Documentation/devicetree/bindings/timer/mediatek,mtk-timer.txt +++ b/Documentation/devicetree/bindings/timer/mediatek,mtk-timer.txt @@ -1,19 +1,25 @@ -Mediatek MT6577, MT6572 and MT6589 Timers ---------------------------------------- +Mediatek Timers +--------------- + +Mediatek SoCs have two different timers on different platforms, +- GPT (General Purpose Timer) +- SYST (System Timer) + +The proper timer will be selected automatically by the driver. Required properties: - compatible should contain: - * "mediatek,mt2701-timer" for MT2701 compatible timers - * "mediatek,mt6580-timer" for MT6580 compatible timers - * "mediatek,mt6589-timer" for MT6589 compatible timers - * "mediatek,mt7623-timer" for MT7623 compatible timers - * "mediatek,mt8127-timer" for MT8127 compatible timers - * "mediatek,mt8135-timer" for MT8135 compatible timers - * "mediatek,mt8173-timer" for MT8173 compatible timers - * "mediatek,mt6577-timer" for MT6577 and all above compatible timers -- reg: Should contain location and length for timers register. -- clocks: Clocks driving the timer hardware. This list should include two - clocks. The order is system clock and as second clock the RTC clock. + * "mediatek,mt2701-timer" for MT2701 compatible timers (GPT) + * "mediatek,mt6580-timer" for MT6580 compatible timers (GPT) + * "mediatek,mt6589-timer" for MT6589 compatible timers (GPT) + * "mediatek,mt7623-timer" for MT7623 compatible timers (GPT) + * "mediatek,mt8127-timer" for MT8127 compatible timers (GPT) + * "mediatek,mt8135-timer" for MT8135 compatible timers (GPT) + * "mediatek,mt8173-timer" for MT8173 compatible timers (GPT) + * "mediatek,mt6577-timer" for MT6577 and all above compatible timers (GPT) + * "mediatek,mt6765-timer" for MT6765 compatible timers (SYST) +- reg: Should contain location and length for timer register. +- clocks: Should contain system clock. Examples: @@ -21,5 +27,5 @@ Examples: compatible = "mediatek,mt6577-timer"; reg = <0x10008000 0x80>; interrupts = <GIC_SPI 113 IRQ_TYPE_LEVEL_LOW>; - clocks = <&system_clk>, <&rtc_clk>; + clocks = <&system_clk>; }; diff --git a/Documentation/devicetree/bindings/usb/rockchip,dwc3.txt b/Documentation/devicetree/bindings/usb/rockchip,dwc3.txt index 50a31536e975..c8c4b00ecb94 100644 --- a/Documentation/devicetree/bindings/usb/rockchip,dwc3.txt +++ b/Documentation/devicetree/bindings/usb/rockchip,dwc3.txt @@ -16,7 +16,8 @@ A child node must exist to represent the core DWC3 IP block. The name of the node is not important. The content of the node is defined in dwc3.txt. 
Phy documentation is provided in the following places: -Documentation/devicetree/bindings/phy/rockchip,dwc3-usb-phy.txt +Documentation/devicetree/bindings/phy/phy-rockchip-inno-usb2.txt - USB2.0 PHY +Documentation/devicetree/bindings/phy/phy-rockchip-typec.txt - Type-C PHY Example device nodes: diff --git a/Documentation/devicetree/bindings/w1/w1-gpio.txt b/Documentation/devicetree/bindings/w1/w1-gpio.txt index 6e09c35d9f1a..37091902a021 100644 --- a/Documentation/devicetree/bindings/w1/w1-gpio.txt +++ b/Documentation/devicetree/bindings/w1/w1-gpio.txt @@ -15,7 +15,7 @@ Optional properties: Examples: - onewire@0 { + onewire { compatible = "w1-gpio"; gpios = <&gpio 126 0>, <&gpio 105 0>; }; diff --git a/Documentation/driver-api/gpio/consumer.rst b/Documentation/driver-api/gpio/consumer.rst index c71a50d85b50..aa03f389d41d 100644 --- a/Documentation/driver-api/gpio/consumer.rst +++ b/Documentation/driver-api/gpio/consumer.rst @@ -57,7 +57,7 @@ device that displays digits), an additional index argument can be specified:: enum gpiod_flags flags) For a more detailed description of the con_id parameter in the DeviceTree case -see Documentation/gpio/board.txt +see Documentation/driver-api/gpio/board.rst The flags parameter is used to optionally specify a direction and initial value for the GPIO. Values can be: diff --git a/Documentation/driver-api/infrastructure.rst b/Documentation/driver-api/infrastructure.rst index bee1b9a1702f..6172f3cc3d0b 100644 --- a/Documentation/driver-api/infrastructure.rst +++ b/Documentation/driver-api/infrastructure.rst @@ -49,10 +49,10 @@ Device Drivers Base Device Drivers DMA Management ----------------------------- -.. kernel-doc:: drivers/base/dma-coherent.c +.. kernel-doc:: kernel/dma/coherent.c :export: -.. kernel-doc:: drivers/base/dma-mapping.c +.. kernel-doc:: kernel/dma/mapping.c :export: Device drivers PnP support diff --git a/Documentation/features/debug/stackprotector/arch-support.txt b/Documentation/features/debug/stackprotector/arch-support.txt index 74b89a9c8b3a..954ac1c95553 100644 --- a/Documentation/features/debug/stackprotector/arch-support.txt +++ b/Documentation/features/debug/stackprotector/arch-support.txt @@ -1,6 +1,6 @@ # # Feature name: stackprotector -# Kconfig: HAVE_CC_STACKPROTECTOR +# Kconfig: HAVE_STACKPROTECTOR # description: arch supports compiler driven stack overflow protection # ----------------------- diff --git a/Documentation/features/sched/membarrier-sync-core/arch-support.txt b/Documentation/features/sched/membarrier-sync-core/arch-support.txt index dbdf62907703..c7858dd1ea8f 100644 --- a/Documentation/features/sched/membarrier-sync-core/arch-support.txt +++ b/Documentation/features/sched/membarrier-sync-core/arch-support.txt @@ -5,10 +5,10 @@ # # Architecture requirements # -# * arm64 +# * arm/arm64 # -# Rely on eret context synchronization when returning from IPI handler, and -# when returning to user-space. +# Rely on implicit context synchronization as a result of exception return +# when returning from IPI handler, and when returning to user-space. 
# # * x86 # @@ -31,7 +31,7 @@ ----------------------- | alpha: | TODO | | arc: | TODO | - | arm: | TODO | + | arm: | ok | | arm64: | ok | | c6x: | TODO | | h8300: | TODO | diff --git a/Documentation/filesystems/Locking b/Documentation/filesystems/Locking index 2c391338c675..7cca82ed2ed1 100644 --- a/Documentation/filesystems/Locking +++ b/Documentation/filesystems/Locking @@ -64,7 +64,7 @@ prototypes: void (*update_time)(struct inode *, struct timespec *, int); int (*atomic_open)(struct inode *, struct dentry *, struct file *, unsigned open_flag, - umode_t create_mode, int *opened); + umode_t create_mode); int (*tmpfile) (struct inode *, struct dentry *, umode_t); locking rules: @@ -441,8 +441,6 @@ prototypes: int (*iterate) (struct file *, struct dir_context *); int (*iterate_shared) (struct file *, struct dir_context *); __poll_t (*poll) (struct file *, struct poll_table_struct *); - struct wait_queue_head * (*get_poll_head)(struct file *, __poll_t); - __poll_t (*poll_mask) (struct file *, __poll_t); long (*unlocked_ioctl) (struct file *, unsigned int, unsigned long); long (*compat_ioctl) (struct file *, unsigned int, unsigned long); int (*mmap) (struct file *, struct vm_area_struct *); @@ -473,7 +471,7 @@ prototypes: }; locking rules: - All except for ->poll_mask may block. + All may block. ->llseek() locking has moved from llseek to the individual llseek implementations. If your fs is not using generic_file_llseek, you @@ -505,9 +503,6 @@ in sys_read() and friends. the lease within the individual filesystem to record the result of the operation -->poll_mask can be called with or without the waitqueue lock for the waitqueue -returned from ->get_poll_head. - --------------------------- dquot_operations ------------------------------- prototypes: int (*write_dquot) (struct dquot *); diff --git a/Documentation/filesystems/ceph.txt b/Documentation/filesystems/ceph.txt index d7f011ddc150..8bf62240e10d 100644 --- a/Documentation/filesystems/ceph.txt +++ b/Documentation/filesystems/ceph.txt @@ -105,15 +105,13 @@ Mount Options address its connection to the monitor originates from. wsize=X - Specify the maximum write size in bytes. By default there is no - maximum. Ceph will normally size writes based on the file stripe - size. + Specify the maximum write size in bytes. Default: 16 MB. rsize=X - Specify the maximum read size in bytes. Default: 64 MB. + Specify the maximum read size in bytes. Default: 16 MB. rasize=X - Specify the maximum readahead. Default: 8 MB. + Specify the maximum readahead size in bytes. Default: 8 MB. 
mount_timeout=X Specify the timeout value for mount (in seconds), in the case diff --git a/Documentation/filesystems/cifs/AUTHORS b/Documentation/filesystems/cifs/AUTHORS index 9f4f87e16240..75865da2ce14 100644 --- a/Documentation/filesystems/cifs/AUTHORS +++ b/Documentation/filesystems/cifs/AUTHORS @@ -42,9 +42,11 @@ Jeff Layton (many, many fixes, as well as great work on the cifs Kerberos code) Scott Lovenberg Pavel Shilovsky (for great work adding SMB2 support, and various SMB3 features) Aurelien Aptel (for DFS SMB3 work and some key bug fixes) -Ronnie Sahlberg (for SMB3 xattr work and bug fixes) +Ronnie Sahlberg (for SMB3 xattr work, bug fixes, and lots of great work on compounding) Shirish Pargaonkar (for many ACL patches over the years) Sachin Prabhu (many bug fixes, including for reconnect, copy offload and security) +Paulo Alcantara +Long Li (some great work on RDMA, SMB Direct) Test case and Bug Report contributors @@ -58,5 +60,4 @@ mention to the Stanford Checker (SWAT) which pointed out many minor bugs in error paths. Valuable suggestions also have come from Al Viro and Dave Miller. -And thanks to the IBM LTC and Power test teams and SuSE testers for -finding multiple bugs during excellent stress test runs. +And thanks to the IBM LTC and Power test teams and SuSE and Citrix and RedHat testers for finding multiple bugs during excellent stress test runs. diff --git a/Documentation/filesystems/cifs/CHANGES b/Documentation/filesystems/cifs/CHANGES index bc0025cdd1c9..455e1cc494a9 100644 --- a/Documentation/filesystems/cifs/CHANGES +++ b/Documentation/filesystems/cifs/CHANGES @@ -1,3 +1,6 @@ +See https://wiki.samba.org/index.php/LinuxCIFSKernel for +more current information. + Version 1.62 ------------ Add sockopt=TCP_NODELAY mount option. EA (xattr) routines hardened diff --git a/Documentation/filesystems/cifs/TODO b/Documentation/filesystems/cifs/TODO index c5adf149b57f..852499aed64b 100644 --- a/Documentation/filesystems/cifs/TODO +++ b/Documentation/filesystems/cifs/TODO @@ -9,14 +9,14 @@ is a partial list of the known problems and missing features: a) SMB3 (and SMB3.02) missing optional features: - multichannel (started), integration with RDMA - - directory leases (improved metadata caching) - - T10 copy offload (copy chunk, and "Duplicate Extents" ioctl + - directory leases (improved metadata caching), started (root dir only) + - T10 copy offload ie "ODX" (copy chunk, and "Duplicate Extents" ioctl currently the only two server side copy mechanisms supported) b) improved sparse file support c) Directory entry caching relies on a 1 second timer, rather than -using Directory Leases +using Directory Leases, currently only the root file handle is cached longer d) quota support (needs minor kernel change since quota calls to make it to network filesystems or deviceless filesystems) @@ -42,6 +42,8 @@ mount or a per server basis to client UIDs or nobody if no mapping exists. Also better integration with winbind for resolving SID owners k) Add tools to take advantage of more smb3 specific ioctls and features +(passthrough ioctl/fsctl for sending various SMB3 fsctls to the server +is in progress) l) encrypted file support @@ -71,9 +73,8 @@ t) split cifs and smb3 support into separate modules so legacy (and less secure) CIFS dialect can be disabled in environments that don't need it and simplify the code. -u) Finish up SMB3.1.1 dialect support - -v) POSIX Extensions for SMB3.1.1 +v) POSIX Extensions for SMB3.1.1 (started, create and mkdir support added +so far). 
KNOWN BUGS ==================================== @@ -92,8 +93,8 @@ Misc testing to do 1) check out max path names and max path name components against various server types. Try nested symlinks (8 deep). Return max path name in stat -f information -2) Improve xfstest's cifs enablement and adapt xfstests where needed to test -cifs better +2) Improve xfstest's cifs/smb3 enablement and adapt xfstests where needed to test +cifs/smb3 better 3) Additional performance testing and optimization using iozone and similar - there are some easy changes that can be done to parallelize sequential writes, diff --git a/Documentation/filesystems/index.rst b/Documentation/filesystems/index.rst index 53b89d0edc15..46d1b1be3a51 100644 --- a/Documentation/filesystems/index.rst +++ b/Documentation/filesystems/index.rst @@ -71,6 +71,39 @@ Other Functions .. kernel-doc:: fs/block_dev.c :export: +.. kernel-doc:: fs/anon_inodes.c + :export: + +.. kernel-doc:: fs/attr.c + :export: + +.. kernel-doc:: fs/d_path.c + :export: + +.. kernel-doc:: fs/dax.c + :export: + +.. kernel-doc:: fs/direct-io.c + :export: + +.. kernel-doc:: fs/file_table.c + :export: + +.. kernel-doc:: fs/libfs.c + :export: + +.. kernel-doc:: fs/posix_acl.c + :export: + +.. kernel-doc:: fs/stat.c + :export: + +.. kernel-doc:: fs/sync.c + :export: + +.. kernel-doc:: fs/xattr.c + :export: + The proc filesystem =================== diff --git a/Documentation/filesystems/porting b/Documentation/filesystems/porting index 17bb4dc28fae..7b7b845c490a 100644 --- a/Documentation/filesystems/porting +++ b/Documentation/filesystems/porting @@ -602,3 +602,23 @@ in your dentry operations instead. dentry separately, and it now has request_mask and query_flags arguments to specify the fields and sync type requested by statx. Filesystems not supporting any statx-specific features may ignore the new arguments. +-- +[mandatory] + ->atomic_open() calling conventions have changed. Gone is int *opened, + along with FILE_OPENED/FILE_CREATED. In place of those we have + FMODE_OPENED/FMODE_CREATED, set in file->f_mode. Additionally, return + value for 'called finish_no_open(), open it yourself' case has become + 0, not 1. Since finish_no_open() itself is returning 0 now, that part + does not need any changes in ->atomic_open() instances. +-- +[mandatory] + alloc_file() has become static now; two wrappers are to be used instead. + alloc_file_pseudo(inode, vfsmount, name, flags, ops) is for the cases + when dentry needs to be created; that's the majority of old alloc_file() + users. Calling conventions: on success a reference to new struct file + is returned and callers reference to inode is subsumed by that. On + failure, ERR_PTR() is returned and no caller's references are affected, + so the caller needs to drop the inode reference it held. + alloc_file_clone(file, flags, ops) does not affect any caller's references. + On success you get a new struct file sharing the mount/dentry with the + original, on failure - ERR_PTR(). 
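To make the new ->atomic_open() conventions above concrete, here is a hedged sketch of a converted instance; the myfs_* names are invented stand-ins for a filesystem's own lookup/create logic, not code from any in-tree filesystem::

	static int myfs_atomic_open(struct inode *dir, struct dentry *dentry,
				    struct file *file, unsigned int open_flag,
				    umode_t create_mode)
	{
		bool created = false;
		struct inode *inode;

		/* Hypothetical combined lookup/create step. */
		inode = myfs_lookup_or_create(dir, dentry, open_flag,
					      create_mode, &created);
		if (IS_ERR(inode))
			return PTR_ERR(inode);

		if (!inode)	/* can't open atomically: punt to ->open() */
			return finish_no_open(file, dentry);	/* now returns 0 */

		d_instantiate(dentry, inode);
		if (created)	/* replaces the old *opened/FILE_CREATED */
			file->f_mode |= FMODE_CREATED;

		return finish_open(file, dentry, myfs_open);
	}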
diff --git a/Documentation/filesystems/vfs.txt b/Documentation/filesystems/vfs.txt index 829a7b7857a4..85907d5b9c2c 100644 --- a/Documentation/filesystems/vfs.txt +++ b/Documentation/filesystems/vfs.txt @@ -386,7 +386,7 @@ struct inode_operations { ssize_t (*listxattr) (struct dentry *, char *, size_t); void (*update_time)(struct inode *, struct timespec *, int); int (*atomic_open)(struct inode *, struct dentry *, struct file *, - unsigned open_flag, umode_t create_mode, int *opened); + unsigned open_flag, umode_t create_mode); int (*tmpfile) (struct inode *, struct dentry *, umode_t); }; @@ -496,13 +496,15 @@ otherwise noted. atomic_open: called on the last component of an open. Using this optional method the filesystem can look up, possibly create and open the file in - one atomic operation. If it cannot perform this (e.g. the file type - turned out to be wrong) it may signal this by returning 1 instead of - usual 0 or -ve . This method is only called if the last component is - negative or needs lookup. Cached positive dentries are still handled by - f_op->open(). If the file was created, the FILE_CREATED flag should be - set in "opened". In case of O_EXCL the method must only succeed if the - file didn't exist and hence FILE_CREATED shall always be set on success. + one atomic operation. If it wants to leave actual opening to the + caller (e.g. if the file turned out to be a symlink, device, or just + something filesystem won't do atomic open for), it may signal this by + returning finish_no_open(file, dentry). This method is only called if + the last component is negative or needs lookup. Cached positive dentries + are still handled by f_op->open(). If the file was created, + FMODE_CREATED flag should be set in file->f_mode. In case of O_EXCL + the method must only succeed if the file didn't exist and hence FMODE_CREATED + shall always be set on success. tmpfile: called in the end of O_TMPFILE open(). Optional, equivalent to atomically creating, opening and unlinking a file in given directory. @@ -857,8 +859,6 @@ struct file_operations { ssize_t (*write_iter) (struct kiocb *, struct iov_iter *); int (*iterate) (struct file *, struct dir_context *); __poll_t (*poll) (struct file *, struct poll_table_struct *); - struct wait_queue_head * (*get_poll_head)(struct file *, __poll_t); - __poll_t (*poll_mask) (struct file *, __poll_t); long (*unlocked_ioctl) (struct file *, unsigned int, unsigned long); long (*compat_ioctl) (struct file *, unsigned int, unsigned long); int (*mmap) (struct file *, struct vm_area_struct *); @@ -903,17 +903,6 @@ otherwise noted. activity on this file and (optionally) go to sleep until there is activity. Called by the select(2) and poll(2) system calls - get_poll_head: Returns the struct wait_queue_head that callers can - wait on. Callers need to check the returned events using ->poll_mask - once woken. Can return NULL to indicate polling is not supported, - or any error code using the ERR_PTR convention to indicate that a - grave error occured and ->poll_mask shall not be called. - - poll_mask: return the mask of EPOLL* values describing the file descriptor - state. Called either before going to sleep on the waitqueue returned by - get_poll_head, or after it has been woken. If ->get_poll_head and - ->poll_mask are implemented ->poll does not need to be implement. - unlocked_ioctl: called by the ioctl(2) system call. 
compat_ioctl: called by the ioctl(2) system call when 32 bit system calls diff --git a/Documentation/hwmon/ina2xx b/Documentation/hwmon/ina2xx index cfd31d94c872..72d16f08e431 100644 --- a/Documentation/hwmon/ina2xx +++ b/Documentation/hwmon/ina2xx @@ -53,7 +53,7 @@ bus supply voltage. The shunt value in micro-ohms can be set via platform data or device tree at compile-time or via the shunt_resistor attribute in sysfs at run-time. Please -refer to the Documentation/devicetree/bindings/i2c/ina2xx.txt for bindings +refer to the Documentation/devicetree/bindings/hwmon/ina2xx.txt for bindings if the device tree is used. Additionally ina226 supports update_interval attribute as described in diff --git a/Documentation/hwmon/max34440 b/Documentation/hwmon/max34440 index 9ba6587b7657..b2de8fa49273 100644 --- a/Documentation/hwmon/max34440 +++ b/Documentation/hwmon/max34440 @@ -16,6 +16,11 @@ Supported chips: Prefixes: 'max34446' Addresses scanned: - Datasheet: http://datasheets.maximintegrated.com/en/ds/MAX34446.pdf + * Maxim MAX34451 + PMBus 16-Channel V/I Monitor and 12-Channel Sequencer/Marginer + Prefixes: 'max34451' + Addresses scanned: - + Datasheet: http://datasheets.maximintegrated.com/en/ds/MAX34451.pdf * Maxim MAX34460 PMBus 12-Channel Voltage Monitor & Sequencer Prefix: 'max34460' @@ -36,9 +41,10 @@ Description This driver supports hardware monitoring for Maxim MAX34440 PMBus 6-Channel Power-Supply Manager, MAX34441 PMBus 5-Channel Power-Supply Manager and Intelligent Fan Controller, and MAX34446 PMBus Power-Supply Data Logger. -It also supports the MAX34460 and MAX34461 PMBus Voltage Monitor & Sequencers. -The MAX34460 supports 12 voltage channels, and the MAX34461 supports 16 voltage -channels. +It also supports the MAX34451, MAX34460, and MAX34461 PMBus Voltage Monitor & +Sequencers. The MAX34451 supports monitoring voltage or current of 12 channels +based on GIN pins. The MAX34460 supports 12 voltage channels, and the MAX34461 +supports 16 voltage channels. The driver is a client driver to the core PMBus driver. Please see Documentation/hwmon/pmbus for details on PMBus client drivers. @@ -93,7 +99,7 @@ curr[1-6]_max Maximum current. From IOUT_OC_WARN_LIMIT register. curr[1-6]_crit Critical maximum current. From IOUT_OC_FAULT_LIMIT register. curr[1-6]_max_alarm Current high alarm. From IOUT_OC_WARNING status. curr[1-6]_crit_alarm Current critical high alarm. From IOUT_OC_FAULT status. -curr[1-4]_average Historical average current (MAX34446 only). +curr[1-4]_average Historical average current (MAX34446/34451 only). curr[1-6]_highest Historical maximum current. curr[1-6]_reset_history Write any value to reset history. @@ -123,5 +129,7 @@ temp[1-8]_reset_history Write any value to reset history. temp7 and temp8 attributes only exist for MAX34440. MAX34446 only supports temp[1-3]. +MAX34451 supports attribute groups in[1-16] (or curr[1-16] based on input pins) +and temp[1-5]. MAX34460 supports attribute groups in[1-12] and temp[1-5]. MAX34461 supports attribute groups in[1-16] and temp[1-5]. 
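All of the hwmon attributes tabulated above are consumed as plain sysfs reads from userspace; a minimal illustration (the hwmon0/curr1_input path is a placeholder, since hwmon numbering is system-specific)::

	#include <stdio.h>

	int main(void)
	{
		long ma;
		FILE *f = fopen("/sys/class/hwmon/hwmon0/curr1_input", "r");

		/* currX_input values are reported in milliamperes */
		if (!f || fscanf(f, "%ld", &ma) != 1) {
			perror("curr1_input");
			return 1;
		}
		fclose(f);
		printf("current: %ld mA\n", ma);
		return 0;
	}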
diff --git a/Documentation/hwmon/mlxreg-fan b/Documentation/hwmon/mlxreg-fan new file mode 100644 index 000000000000..fc531c6978d4 --- /dev/null +++ b/Documentation/hwmon/mlxreg-fan @@ -0,0 +1,60 @@ +Kernel driver mlxreg-fan +======================== + +Provides FAN control for the following Mellanox systems: +QMB700, equipped with 40x200GbE InfiniBand ports; +MSN3700, equipped with 32x200GbE or 16x400GbE Ethernet ports; +MSN3410, equipped with 6x400GbE plus 48x50GbE Ethernet ports; +MSN3800, equipped with 64x1000GbE Ethernet ports; +These are Top of the Rack systems, equipped with a Mellanox switch +board with Mellanox Quantum or Spectrum-2 devices. +The FAN controller is implemented in programmable device logic. + +The default register offsets within the programmable device are as +follows: +- pwm1 0xe3 +- fan1 (tacho1) 0xe4 +- fan2 (tacho2) 0xe5 +- fan3 (tacho3) 0xe6 +- fan4 (tacho4) 0xe7 +- fan5 (tacho5) 0xe8 +- fan6 (tacho6) 0xe9 +- fan7 (tacho7) 0xea +- fan8 (tacho8) 0xeb +- fan9 (tacho9) 0xec +- fan10 (tacho10) 0xed +- fan11 (tacho11) 0xee +- fan12 (tacho12) 0xef +This setup can be re-programmed with other registers. + +Author: Vadim Pasternak <vadimp@mellanox.com> + +Description +----------- + +The driver implements a simple interface for driving a fan connected to +a PWM output and tachometer inputs. +This driver obtains the PWM and tachometer register locations according to +the system configuration and creates FAN/PWM hwmon objects and a cooling +device. PWM and tachometers are sensed through the on-board programmable +device, which exports its register map. This device can be attached to +any bus type for which register mapping is supported. +A single instance is created with one PWM control, up to 12 tachometers and +one cooling device. There can be as many instances as the programmable device +supports. +The driver exposes the fan to the user space through the hwmon's and +thermal's sysfs interfaces. + +/sys files in hwmon subsystem +----------------------------- + +fan[1-12]_fault - RO files for tachometers TACH1-TACH12 fault indication +fan[1-12]_input - RO files for tachometers TACH1-TACH12 input (in RPM) +pwm1 - RW file for fan[1-12] target duty cycle (0..255) + +/sys files in thermal subsystem +------------------------------- + +cur_state - RW file for current cooling state of the cooling device + (0..max_state) +max_state - RO file for maximum cooling state of the cooling device diff --git a/Documentation/hwmon/npcm750-pwm-fan b/Documentation/hwmon/npcm750-pwm-fan new file mode 100644 index 000000000000..6156ef7398e6 --- /dev/null +++ b/Documentation/hwmon/npcm750-pwm-fan @@ -0,0 +1,22 @@ +Kernel driver npcm750-pwm-fan +============================= + +Supported chips: + NUVOTON NPCM750/730/715/705 + +Authors: + + +Description: +------------ +This driver implements support for the NUVOTON NPCM7XX PWM and Fan Tacho +controller. The PWM controller supports up to 8 PWM outputs. The Fan tacho +controller supports up to 16 tachometer inputs. + +The driver provides the following sensor accesses in sysfs: + +fanX_input ro provides the current fan rotation value in RPM as reported + by the fan to the device. + +pwmX rw gets or sets the PWM fan control value. This is an integer + value between 0 (off) and 255 (full speed). diff --git a/Documentation/hwmon/sysfs-interface b/Documentation/hwmon/sysfs-interface index fc337c317c67..2b9e1005d88b 100644 --- a/Documentation/hwmon/sysfs-interface +++ b/Documentation/hwmon/sysfs-interface @@ -171,6 +171,13 @@ in[0-*]_label Suggested voltage channel label. user-space. 
RO +in[0-*]_enable + Enable or disable the sensors. + When disabled the sensor read will return -ENODATA. + 1: Enable + 0: Disable + RW + cpu[0-*]_vid CPU core reference voltage. Unit: millivolt RO @@ -236,6 +243,13 @@ fan[1-*]_label Suggested fan channel label. In all other cases, the label is provided by user-space. RO +fan[1-*]_enable + Enable or disable the sensors. + When disabled the sensor read will return -ENODATA. + 1: Enable + 0: Disable + RW + Also see the Alarms section for status flags associated with fans. @@ -409,6 +423,13 @@ temp_reset_history Reset temp_lowest and temp_highest for all sensors WO +temp[1-*]_enable + Enable or disable the sensors. + When disabled the sensor read will return -ENODATA. + 1: Enable + 0: Disable + RW + Some chips measure temperature using external thermistors and an ADC, and report the temperature measurement as a voltage. Converting this voltage back to a temperature (or the other way around for limits) requires @@ -468,6 +489,13 @@ curr_reset_history Reset currX_lowest and currX_highest for all sensors WO +curr[1-*]_enable + Enable or disable the sensors. + When disabled the sensor read will return -ENODATA. + 1: Enable + 0: Disable + RW + Also see the Alarms section for status flags associated with currents. ********* @@ -566,6 +594,13 @@ power[1-*]_crit Critical maximum power. Unit: microWatt RW +power[1-*]_enable Enable or disable the sensors. + When disabled the sensor read will return + -ENODATA. + 1: Enable + 0: Disable + RW + Also see the Alarms section for status flags associated with power readings. ********** @@ -576,6 +611,12 @@ energy[1-*]_input Cumulative energy use Unit: microJoule RO +energy[1-*]_enable Enable or disable the sensors. + When disabled the sensor read will return + -ENODATA. + 1: Enable + 0: Disable + RW ************ * Humidity * @@ -586,6 +627,13 @@ humidity[1-*]_input Humidity RO +humidity[1-*]_enable Enable or disable the sensors + When disabled the sensor read will return + -ENODATA. + 1: Enable + 0: Disable + RW + ********** * Alarms * ********** diff --git a/Documentation/kbuild/kbuild.txt b/Documentation/kbuild/kbuild.txt index 6c9c69ec3986..114c7ce7b58d 100644 --- a/Documentation/kbuild/kbuild.txt +++ b/Documentation/kbuild/kbuild.txt @@ -50,6 +50,11 @@ LDFLAGS_MODULE -------------------------------------------------- Additional options used for $(LD) when linking modules. +KBUILD_KCONFIG +-------------------------------------------------- +Set the top-level Kconfig file to the value of this environment +variable. The default name is "Kconfig". + KBUILD_VERBOSE -------------------------------------------------- Set the kbuild verbosity. Can be assigned same values as "V=...". @@ -88,7 +93,8 @@ In most cases the name of the architecture is the same as the directory name found in the arch/ directory. But some architectures such as x86 and sparc have aliases. x86: i386 for 32 bit, x86_64 for 64 bit -sparc: sparc for 32 bit, sparc64 for 64 bit +sh: sh for 32 bit, sh64 for 64 bit +sparc: sparc32 for 32 bit, sparc64 for 64 bit CROSS_COMPILE -------------------------------------------------- @@ -148,15 +154,6 @@ stripped after they are installed. If INSTALL_MOD_STRIP is '1', then the default option --strip-debug will be used. Otherwise, INSTALL_MOD_STRIP value will be used as the options to the strip command. -INSTALL_FW_PATH --------------------------------------------------- -INSTALL_FW_PATH specifies where to install the firmware blobs. 
-The default value is: - - $(INSTALL_MOD_PATH)/lib/firmware - -The value can be overridden in which case the default value is ignored. - INSTALL_HDR_PATH -------------------------------------------------- INSTALL_HDR_PATH specifies where to install user space headers when diff --git a/Documentation/kbuild/kconfig-language.txt b/Documentation/kbuild/kconfig-language.txt index 3534a84d206c..64e0775a62d4 100644 --- a/Documentation/kbuild/kconfig-language.txt +++ b/Documentation/kbuild/kconfig-language.txt @@ -430,6 +430,12 @@ This sets the config program's title bar if the config program chooses to use it. It should be placed at the top of the configuration, before any other statement. +'#' Kconfig source file comment: + +An unquoted '#' character anywhere in a source file line indicates +the beginning of a source file comment. The remainder of that line +is a comment. + Kconfig hints ------------- diff --git a/Documentation/kbuild/kconfig.txt b/Documentation/kbuild/kconfig.txt index 7233118f3a05..68c82914c0f3 100644 --- a/Documentation/kbuild/kconfig.txt +++ b/Documentation/kbuild/kconfig.txt @@ -2,9 +2,9 @@ This file contains some assistance for using "make *config". Use "make help" to list all of the possible configuration targets. -The xconfig ('qconf') and menuconfig ('mconf') programs also -have embedded help text. Be sure to check it for navigation, -search, and other general help text. +The xconfig ('qconf'), menuconfig ('mconf'), and nconfig ('nconf') +programs also have embedded help text. Be sure to check that for +navigation, search, and other general help text. ====================================================================== General @@ -17,13 +17,16 @@ this happens, using a previously working .config file and running for you, so you may find that you need to see what NEW kernel symbols have been introduced. -To see a list of new config symbols when using "make oldconfig", use +To see a list of new config symbols, use cp user/some/old.config .config make listnewconfig and the config program will list any new symbols, one per line. +Alternatively, you can use the brute force method: + + make oldconfig scripts/diffconfig .config.old .config | less ______________________________________________________________________ @@ -160,7 +163,7 @@ Searching in menuconfig: This lists all config symbols that contain "hotplug", e.g., HOTPLUG_CPU, MEMORY_HOTPLUG. - For search help, enter / followed TAB-TAB-TAB (to highlight + For search help, enter / followed by TAB-TAB (to highlight <Help>) and Enter. This will tell you that you can also use regular expressions (regexes) in the search string, so if you are not interested in MEMORY_HOTPLUG, you could try @@ -202,6 +205,39 @@ Example: make MENUCONFIG_MODE=single_menu menuconfig +====================================================================== +nconfig +-------------------------------------------------- + +nconfig is an alternate text-based configurator. It lists function +keys across the bottom of the terminal (window) that execute commands. +You can also just use the corresponding numeric key to execute the +commands unless you are in a data entry window. E.g., instead of F6 +for Save, you can just press 6. + +Use F1 for Global help or F3 for the Short help menu. + +Searching in nconfig: + + You can search either in the menu entry "prompt" strings + or in the configuration symbols. + + Use / to begin a search through the menu entries. This does + not support regular expressions. Use <Down> or <Up> for + Next hit and Previous hit, respectively. 
Use to + terminate the search mode. + + F8 (SymSearch) searches the configuration symbols for the + given string or regular expression (regex). + +NCONFIG_MODE +-------------------------------------------------- +This mode shows all sub-menus in one large tree. + +Example: + make NCONFIG_MODE=single_menu nconfig + + ====================================================================== xconfig -------------------------------------------------- @@ -230,8 +266,7 @@ gconfig Searching in gconfig: - None (gconfig isn't maintained as well as xconfig or menuconfig); - however, gconfig does have a few more viewing choices than - xconfig does. + There is no search command in gconfig. However, gconfig does + have several different viewing choices, modes, and options. ### diff --git a/Documentation/kprobes.txt b/Documentation/kprobes.txt index 22208bf2386d..10f4499e677c 100644 --- a/Documentation/kprobes.txt +++ b/Documentation/kprobes.txt @@ -80,6 +80,26 @@ After the instruction is single-stepped, Kprobes executes the "post_handler," if any, that is associated with the kprobe. Execution then continues with the instruction following the probepoint. +Changing Execution Path +----------------------- + +Since kprobes can probe into a running kernel code, it can change the +register set, including instruction pointer. This operation requires +maximum care, such as keeping the stack frame, recovering the execution +path etc. Since it operates on a running kernel and needs deep knowledge +of computer architecture and concurrent computing, you can easily shoot +your foot. + +If you change the instruction pointer (and set up other related +registers) in pre_handler, you must return !0 so that kprobes stops +single stepping and just returns to the given address. +This also means post_handler should not be called anymore. + +Note that this operation may be harder on some architectures which use +TOC (Table of Contents) for function call, since you have to setup a new +TOC for your function in your module, and recover the old one after +returning from it. + Return Probes ------------- @@ -262,7 +282,7 @@ is optimized, that modification is ignored. Thus, if you want to tweak the kernel's execution path, you need to suppress optimization, using one of the following techniques: -- Specify an empty function for the kprobe's post_handler or break_handler. +- Specify an empty function for the kprobe's post_handler. or @@ -474,7 +494,7 @@ error occurs during registration, all probes in the array, up to the bad probe, are safely unregistered before the register_*probes function returns. -- kps/rps/jps: an array of pointers to ``*probe`` data structures +- kps/rps: an array of pointers to ``*probe`` data structures - num: the number of the array entries. .. note:: @@ -566,12 +586,11 @@ the same handler) may run concurrently on different CPUs. Kprobes does not use mutexes or allocate memory except during registration and unregistration. -Probe handlers are run with preemption disabled. Depending on the -architecture and optimization state, handlers may also run with -interrupts disabled (e.g., kretprobe handlers and optimized kprobe -handlers run without interrupt disabled on x86/x86-64). In any case, -your handler should not yield the CPU (e.g., by attempting to acquire -a semaphore). +Probe handlers are run with preemption disabled or interrupt disabled, +which depends on the architecture and optimization state. (e.g., +kretprobe handlers and optimized kprobe handlers run without interrupt +disabled on x86/x86-64). 
In any case, your handler should not yield +the CPU (e.g., by attempting to acquire a semaphore, or waiting for I/O). Since a return probe is implemented by replacing the return address with the trampoline's address, stack backtraces and calls @@ -724,8 +743,8 @@ migrate your tool to one of the following options: See following documents: - - Documentation/trace/kprobetrace.txt - - Documentation/trace/events.txt + - Documentation/trace/kprobetrace.rst + - Documentation/trace/events.rst - tools/perf/Documentation/perf-probe.txt diff --git a/Documentation/maintainer/pull-requests.rst b/Documentation/maintainer/pull-requests.rst index a19db3458b56..22b271de0304 100644 --- a/Documentation/maintainer/pull-requests.rst +++ b/Documentation/maintainer/pull-requests.rst @@ -41,7 +41,7 @@ named ``char-misc-next``, you would be using the following command:: that will create a signed tag called ``char-misc-4.15-rc1`` based on the last commit in the ``char-misc-next`` branch, and sign it with your gpg key -(see :ref:`Documentation/maintainer/configure_git.rst <configure_git>`). +(see :ref:`Documentation/maintainer/configure-git.rst <configure_git>`). Linus will only accept pull requests based on a signed tag. Other maintainers may differ. diff --git a/Documentation/memory-barriers.txt b/Documentation/memory-barriers.txt index a02d6bbfc9d0..0d8d7ef131e9 100644 --- a/Documentation/memory-barriers.txt +++ b/Documentation/memory-barriers.txt @@ -2179,32 +2179,41 @@ or: event_indicated = 1; wake_up_process(event_daemon); -A write memory barrier is implied by wake_up() and co. if and only if they -wake something up. The barrier occurs before the task state is cleared, and so -sits between the STORE to indicate the event and the STORE to set TASK_RUNNING: +A general memory barrier is executed by wake_up() if it wakes something up. +If it doesn't wake anything up then a memory barrier may or may not be +executed; you must not rely on it. The barrier occurs before the task state +is accessed, in particular, it sits between the STORE to indicate the event +and the STORE to set TASK_RUNNING: - CPU 1 CPU 2 + CPU 1 (Sleeper) CPU 2 (Waker) =============================== =============================== set_current_state(); STORE event_indicated smp_store_mb(); wake_up(); - STORE current->state - STORE current->state - LOAD event_indicated + STORE current->state ... + + LOAD event_indicated if ((LOAD task->state) & TASK_NORMAL) + STORE task->state -To repeat, this write memory barrier is present if and only if something -is actually awakened. To see this, consider the following sequence of -events, where X and Y are both initially zero: +where "task" is the thread being woken up and it equals CPU 1's "current". + +To repeat, a general memory barrier is guaranteed to be executed by wake_up() +if something is actually awakened, but otherwise there is no such guarantee. +To see this, consider the following sequence of events, where X and Y are both +initially zero: CPU 1 CPU 2 =============================== =============================== - X = 1; STORE event_indicated + X = 1; Y = 1; smp_mb(); wake_up(); - Y = 1; wait_event(wq, Y == 1); - wake_up(); load from Y sees 1, no memory barrier - load from X might see 0 + LOAD Y LOAD X + +If a wakeup does occur, one (at least) of the two loads must see 1. If, on +the other hand, a wakeup does not occur, both loads might see 0. -In contrast, if a wakeup does occur, CPU 2's load from X would be guaranteed -to see 1. +wake_up_process() always executes a general memory barrier. 
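Rendered as kernel C rather than the STORE/LOAD notation used above, the sleeper/waker pairing under discussion looks roughly like this (a generic sketch; event_indicated and sleeper_task are placeholders)::

	/* Sleeper */
	for (;;) {
		set_current_state(TASK_INTERRUPTIBLE);
		if (event_indicated)
			break;
		schedule();
	}
	__set_current_state(TASK_RUNNING);

	/* Waker */
	event_indicated = 1;
	wake_up_process(sleeper_task);	/* implies a general memory barrier */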
The barrier again +occurs before the task state is accessed. In particular, if the wake_up() in +the previous snippet were replaced by a call to wake_up_process() then one of +the two loads would be guaranteed to see 1. The available waker functions include: @@ -2224,6 +2233,8 @@ The available waker functions include: wake_up_poll(); wake_up_process(); +In terms of memory ordering, these functions all provide the same guarantees of +a wake_up() (or stronger). [!] Note that the memory barriers implied by the sleeper and the waker do _not_ order multiple stores before the wake-up with respect to loads of those stored diff --git a/Documentation/networking/bonding.txt b/Documentation/networking/bonding.txt index c13214d073a4..d3e5dd26db12 100644 --- a/Documentation/networking/bonding.txt +++ b/Documentation/networking/bonding.txt @@ -1490,7 +1490,7 @@ To remove an ARP target: To configure the interval between learning packet transmits: # echo 12 > /sys/class/net/bond0/bonding/lp_interval - NOTE: the lp_inteval is the number of seconds between instances where + NOTE: the lp_interval is the number of seconds between instances where the bonding driver sends learning packets to each slaves peer switch. The default interval is 1 second. diff --git a/Documentation/networking/can.rst b/Documentation/networking/can.rst index d23c51abf8c6..2fd0b51a8c52 100644 --- a/Documentation/networking/can.rst +++ b/Documentation/networking/can.rst @@ -164,7 +164,7 @@ The Linux network devices (by default) just can handle the transmission and reception of media dependent frames. Due to the arbitration on the CAN bus the transmission of a low prio CAN-ID may be delayed by the reception of a high prio CAN frame. To -reflect the correct [*]_ traffic on the node the loopback of the sent +reflect the correct [#f1]_ traffic on the node the loopback of the sent data has to be performed right after a successful transmission. If the CAN network interface is not capable of performing the loopback for some reason the SocketCAN core can do this task as a fallback solution. @@ -175,7 +175,7 @@ networking behaviour for CAN applications. Due to some requests from the RT-SocketCAN group the loopback optionally may be disabled for each separate socket. See sockopts from the CAN RAW sockets in :ref:`socketcan-raw-sockets`. -.. [*] you really like to have this when you're running analyser +.. [#f1] you really like to have this when you're running analyser tools like 'candump' or 'cansniffer' on the (same) node. diff --git a/Documentation/networking/dpaa2/overview.rst b/Documentation/networking/dpaa2/overview.rst index 79fede4447d6..d638b5a8aadd 100644 --- a/Documentation/networking/dpaa2/overview.rst +++ b/Documentation/networking/dpaa2/overview.rst @@ -1,5 +1,6 @@ .. include:: <isonum.txt> +========================================================= DPAA2 (Data Path Acceleration Architecture Gen2) Overview ========================================================= diff --git a/Documentation/networking/e100.rst b/Documentation/networking/e100.rst index d4d837027925..f81111eba9c5 100644 --- a/Documentation/networking/e100.rst +++ b/Documentation/networking/e100.rst @@ -1,3 +1,4 @@ +============================================================== Linux* Base Driver for the Intel(R) PRO/100 Family of Adapters ============================================================== @@ -46,123 +47,131 @@ Driver Configuration Parameters The default value for each parameter is generally the recommended setting, unless otherwise noted. 
-Rx Descriptors: Number of receive descriptors. A receive descriptor is a data +Rx Descriptors: + Number of receive descriptors. A receive descriptor is a data structure that describes a receive buffer and its attributes to the network controller. The data in the descriptor is used by the controller to write data from the controller to host memory. In the 3.x.x driver the valid range for this parameter is 64-256. The default value is 256. This parameter can be changed using the command:: - ethtool -G eth? rx n + ethtool -G eth? rx n Where n is the number of desired Rx descriptors. -Tx Descriptors: Number of transmit descriptors. A transmit descriptor is a data +Tx Descriptors: + Number of transmit descriptors. A transmit descriptor is a data structure that describes a transmit buffer and its attributes to the network controller. The data in the descriptor is used by the controller to read data from the host memory to the controller. In the 3.x.x driver the valid range for this parameter is 64-256. The default value is 128. This parameter can be changed using the command:: - ethtool -G eth? tx n + ethtool -G eth? tx n Where n is the number of desired Tx descriptors. -Speed/Duplex: The driver auto-negotiates the link speed and duplex settings by +Speed/Duplex: + The driver auto-negotiates the link speed and duplex settings by default. The ethtool utility can be used as follows to force speed/duplex.:: - ethtool -s eth? autoneg off speed {10|100} duplex {full|half} + ethtool -s eth? autoneg off speed {10|100} duplex {full|half} NOTE: setting the speed/duplex to incorrect values will cause the link to fail. -Event Log Message Level: The driver uses the message level flag to log events +Event Log Message Level: + The driver uses the message level flag to log events to syslog. The message level can be set at driver load time. It can also be set using the command:: - ethtool -s eth? msglvl n + ethtool -s eth? msglvl n Additional Configurations ========================= - Configuring the Driver on Different Distributions - ------------------------------------------------- +Configuring the Driver on Different Distributions +------------------------------------------------- - Configuring a network driver to load properly when the system is started is - distribution dependent. Typically, the configuration process involves adding - an alias line to /etc/modprobe.d/*.conf as well as editing other system - startup scripts and/or configuration files. Many popular Linux - distributions ship with tools to make these changes for you. To learn the - proper way to configure a network device for your system, refer to your - distribution documentation. If during this process you are asked for the - driver or module name, the name for the Linux Base Driver for the Intel - PRO/100 Family of Adapters is e100. +Configuring a network driver to load properly when the system is started +is distribution dependent. Typically, the configuration process involves +adding an alias line to `/etc/modprobe.d/*.conf` as well as editing other +system startup scripts and/or configuration files. Many popular Linux +distributions ship with tools to make these changes for you. To learn +the proper way to configure a network device for your system, refer to +your distribution documentation. If during this process you are asked +for the driver or module name, the name for the Linux Base Driver for +the Intel PRO/100 Family of Adapters is e100. 
- As an example, if you install the e100 driver for two PRO/100 adapters - (eth0 and eth1), add the following to a configuration file in /etc/modprobe.d/ +As an example, if you install the e100 driver for two PRO/100 adapters +(eth0 and eth1), add the following to a configuration file in +/etc/modprobe.d/:: alias eth0 e100 alias eth1 e100 - Viewing Link Messages - --------------------- - In order to see link messages and other Intel driver information on your - console, you must set the dmesg level up to six. This can be done by - entering the following on the command line before loading the e100 driver:: +Viewing Link Messages +--------------------- + +In order to see link messages and other Intel driver information on your +console, you must set the dmesg level up to six. This can be done by +entering the following on the command line before loading the e100 +driver:: dmesg -n 6 - If you wish to see all messages issued by the driver, including debug - messages, set the dmesg level to eight. +If you wish to see all messages issued by the driver, including debug +messages, set the dmesg level to eight. - NOTE: This setting is not saved across reboots. +NOTE: This setting is not saved across reboots. +ethtool +------- - ethtool - ------- +The driver utilizes the ethtool interface for driver configuration and +diagnostics, as well as displaying statistical information. The ethtool +version 1.6 or later is required for this functionality. - The driver utilizes the ethtool interface for driver configuration and - diagnostics, as well as displaying statistical information. The ethtool - version 1.6 or later is required for this functionality. +The latest release of ethtool can be found from +https://www.kernel.org/pub/software/network/ethtool/ - The latest release of ethtool can be found from - https://www.kernel.org/pub/software/network/ethtool/ +Enabling Wake on LAN* (WoL) +--------------------------- +WoL is provided through the ethtool* utility. For instructions on +enabling WoL with ethtool, refer to the ethtool man page. WoL will be +enabled on the system during the next shut down or reboot. For this +driver version, in order to enable WoL, the e100 driver must be loaded +when shutting down or rebooting the system. - Enabling Wake on LAN* (WoL) - --------------------------- - WoL is provided through the ethtool* utility. For instructions on enabling - WoL with ethtool, refer to the ethtool man page. +NAPI +---- - WoL will be enabled on the system during the next shut down or reboot. For - this driver version, in order to enable WoL, the e100 driver must be - loaded when shutting down or rebooting the system. +NAPI (Rx polling mode) is supported in the e100 driver. - NAPI - ---- +See https://wiki.linuxfoundation.org/networking/napi for more +information on NAPI. - NAPI (Rx polling mode) is supported in the e100 driver. +Multiple Interfaces on Same Ethernet Broadcast Network +------------------------------------------------------ - See https://wiki.linuxfoundation.org/networking/napi for more information - on NAPI. +Due to the default ARP behavior on Linux, it is not possible to have one +system on two IP networks in the same Ethernet broadcast domain +(non-partitioned switch) behave as expected. All Ethernet interfaces +will respond to IP traffic for any IP address assigned to the system. +This results in unbalanced receive traffic. 
- Multiple Interfaces on Same Ethernet Broadcast Network - ------------------------------------------------------ +If you have multiple interfaces in a server, either turn on ARP +filtering by - Due to the default ARP behavior on Linux, it is not possible to have - one system on two IP networks in the same Ethernet broadcast domain - (non-partitioned switch) behave as expected. All Ethernet interfaces - will respond to IP traffic for any IP address assigned to the system. - This results in unbalanced receive traffic. +(1) entering:: - If you have multiple interfaces in a server, either turn on ARP - filtering by + echo 1 > /proc/sys/net/ipv4/conf/all/arp_filter - (1) entering:: echo 1 > /proc/sys/net/ipv4/conf/all/arp_filter - (this only works if your kernel's version is higher than 2.4.5), or + (this only works if your kernel's version is higher than 2.4.5), or - (2) installing the interfaces in separate broadcast domains (either - in different switches or in a switch partitioned to VLANs). +(2) installing the interfaces in separate broadcast domains (either + in different switches or in a switch partitioned to VLANs). Support diff --git a/Documentation/networking/e1000.rst b/Documentation/networking/e1000.rst index 616848940e63..f10dd4086921 100644 --- a/Documentation/networking/e1000.rst +++ b/Documentation/networking/e1000.rst @@ -1,3 +1,4 @@ +=========================================================== Linux* Base Driver for Intel(R) Ethernet Network Connection =========================================================== @@ -33,7 +34,8 @@ Command Line Parameters The default value for each parameter is generally the recommended setting, unless otherwise noted. -NOTES: For more information about the AutoNeg, Duplex, and Speed +NOTES: + For more information about the AutoNeg, Duplex, and Speed parameters, see the "Speed and Duplex Configuration" section in this document. @@ -44,22 +46,27 @@ NOTES: For more information about the AutoNeg, Duplex, and Speed AutoNeg ------- + (Supported only on adapters with copper connections) -Valid Range: 0x01-0x0F, 0x20-0x2F -Default Value: 0x2F + +:Valid Range: 0x01-0x0F, 0x20-0x2F +:Default Value: 0x2F This parameter is a bit-mask that specifies the speed and duplex settings advertised by the adapter. When this parameter is used, the Speed and Duplex parameters must not be specified. -NOTE: Refer to the Speed and Duplex section of this readme for more +NOTE: + Refer to the Speed and Duplex section of this readme for more information on the AutoNeg parameter. Duplex ------ + (Supported only on adapters with copper connections) -Valid Range: 0-2 (0=auto-negotiate, 1=half, 2=full) -Default Value: 0 + +:Valid Range: 0-2 (0=auto-negotiate, 1=half, 2=full) +:Default Value: 0 This defines the direction in which data is allowed to flow. Can be either one or two-directional. If both Duplex and the link partner are @@ -69,18 +76,22 @@ duplex. FlowControl ----------- -Valid Range: 0-3 (0=none, 1=Rx only, 2=Tx only, 3=Rx&Tx) -Default Value: Reads flow control settings from the EEPROM + +:Valid Range: 0-3 (0=none, 1=Rx only, 2=Tx only, 3=Rx&Tx) +:Default Value: Reads flow control settings from the EEPROM This parameter controls the automatic generation(Tx) and response(Rx) to Ethernet PAUSE frames. 
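For reference, the flow control state that this parameter controls, the same
state "ethtool -a" reports, can also be queried programmatically through the
generic ETHTOOL_GPAUSEPARAM ioctl. A minimal userspace sketch, with the
interface name assumed for the example::

    #include <linux/ethtool.h>
    #include <linux/sockios.h>
    #include <net/if.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
            struct ethtool_pauseparam pp = { .cmd = ETHTOOL_GPAUSEPARAM };
            struct ifreq ifr;
            int fd = socket(AF_INET, SOCK_DGRAM, 0);

            memset(&ifr, 0, sizeof(ifr));
            strncpy(ifr.ifr_name, "eth0", IFNAMSIZ - 1); /* assumed name */
            ifr.ifr_data = (void *)&pp;

            if (fd < 0 || ioctl(fd, SIOCETHTOOL, &ifr) < 0) {
                    perror("ETHTOOL_GPAUSEPARAM");
                    return 1;
            }
            printf("autoneg=%u rx=%u tx=%u\n",
                   pp.autoneg, pp.rx_pause, pp.tx_pause);
            close(fd);
            return 0;
    }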
InterruptThrottleRate --------------------- + (not supported on Intel(R) 82542, 82543 or 82544-based adapters) -Valid Range: 0,1,3,4,100-100000 (0=off, 1=dynamic, 3=dynamic conservative, - 4=simplified balancing) -Default Value: 3 + +:Valid Range: + 0,1,3,4,100-100000 (0=off, 1=dynamic, 3=dynamic conservative, + 4=simplified balancing) +:Default Value: 3 The driver can limit the amount of interrupts per second that the adapter will generate for incoming packets. It does this by writing a value to the @@ -134,13 +145,15 @@ Setting InterruptThrottleRate to 0 turns off any interrupt moderation and may improve small packet latency, but is generally not suitable for bulk throughput traffic. -NOTE: InterruptThrottleRate takes precedence over the TxAbsIntDelay and +NOTE: + InterruptThrottleRate takes precedence over the TxAbsIntDelay and RxAbsIntDelay parameters. In other words, minimizing the receive and/or transmit absolute delays does not force the controller to generate more interrupts than what the Interrupt Throttle Rate allows. -CAUTION: If you are using the Intel(R) PRO/1000 CT Network Connection +CAUTION: + If you are using the Intel(R) PRO/1000 CT Network Connection (controller 82547), setting InterruptThrottleRate to a value greater than 75,000, may hang (stop transmitting) adapters under certain network conditions. If this occurs a NETDEV @@ -150,7 +163,8 @@ CAUTION: If you are using the Intel(R) PRO/1000 CT Network Connection hang, ensure that InterruptThrottleRate is set no greater than 75,000 and is not set to 0. -NOTE: When e1000 is loaded with default settings and multiple adapters +NOTE: + When e1000 is loaded with default settings and multiple adapters are in use simultaneously, the CPU utilization may increase non- linearly. In order to limit the CPU utilization without impacting the overall throughput, we recommend that you load the driver as @@ -167,9 +181,11 @@ NOTE: When e1000 is loaded with default settings and multiple adapters RxDescriptors ------------- -Valid Range: 48-256 for 82542 and 82543-based adapters - 48-4096 for all other supported adapters -Default Value: 256 + +:Valid Range: + - 48-256 for 82542 and 82543-based adapters + - 48-4096 for all other supported adapters +:Default Value: 256 This value specifies the number of receive buffer descriptors allocated by the driver. Increasing this value allows the driver to buffer more @@ -179,15 +195,17 @@ Each descriptor is 16 bytes. A receive buffer is also allocated for each descriptor and can be either 2048, 4096, 8192, or 16384 bytes, depending on the MTU setting. The maximum MTU size is 16110. -NOTE: MTU designates the frame size. It only needs to be set for Jumbo +NOTE: + MTU designates the frame size. It only needs to be set for Jumbo Frames. Depending on the available system resources, the request for a higher number of receive descriptors may be denied. In this case, use a lower number. RxIntDelay ---------- -Valid Range: 0-65535 (0=off) -Default Value: 0 + +:Valid Range: 0-65535 (0=off) +:Default Value: 0 This value delays the generation of receive interrupts in units of 1.024 microseconds. Receive interrupt reduction can improve CPU efficiency if @@ -197,7 +215,8 @@ of TCP traffic. If the system is reporting dropped receives, this value may be set too high, causing the driver to run out of available receive descriptors. 
-CAUTION: When setting RxIntDelay to a value other than 0, adapters may +CAUTION: + When setting RxIntDelay to a value other than 0, adapters may hang (stop transmitting) under certain network conditions. If this occurs a NETDEV WATCHDOG message is logged in the system event log. In addition, the controller is automatically reset, @@ -206,9 +225,11 @@ CAUTION: When setting RxIntDelay to a value other than 0, adapters may RxAbsIntDelay ------------- + (This parameter is supported only on 82540, 82545 and later adapters.) -Valid Range: 0-65535 (0=off) -Default Value: 128 + +:Valid Range: 0-65535 (0=off) +:Default Value: 128 This value, in units of 1.024 microseconds, limits the delay in which a receive interrupt is generated. Useful only if RxIntDelay is non-zero, @@ -219,9 +240,11 @@ conditions. Speed ----- + (This parameter is supported only on adapters with copper connections.) -Valid Settings: 0, 10, 100, 1000 -Default Value: 0 (auto-negotiate at all supported speeds) + +:Valid Settings: 0, 10, 100, 1000 +:Default Value: 0 (auto-negotiate at all supported speeds) Speed forces the line speed to the specified value in megabits per second (Mbps). If this parameter is not specified or is set to 0 and the link @@ -230,22 +253,26 @@ speed. Duplex should also be set when Speed is set to either 10 or 100. TxDescriptors ------------- -Valid Range: 48-256 for 82542 and 82543-based adapters - 48-4096 for all other supported adapters -Default Value: 256 + +:Valid Range: + - 48-256 for 82542 and 82543-based adapters + - 48-4096 for all other supported adapters +:Default Value: 256 This value is the number of transmit descriptors allocated by the driver. Increasing this value allows the driver to queue more transmits. Each descriptor is 16 bytes. -NOTE: Depending on the available system resources, the request for a +NOTE: + Depending on the available system resources, the request for a higher number of transmit descriptors may be denied. In this case, use a lower number. TxIntDelay ---------- -Valid Range: 0-65535 (0=off) -Default Value: 8 + +:Valid Range: 0-65535 (0=off) +:Default Value: 8 This value delays the generation of transmit interrupts in units of 1.024 microseconds. Transmit interrupt reduction can improve CPU @@ -255,9 +282,11 @@ causing the driver to run out of available transmit descriptors. TxAbsIntDelay ------------- + (This parameter is supported only on 82540, 82545 and later adapters.) -Valid Range: 0-65535 (0=off) -Default Value: 32 + +:Valid Range: 0-65535 (0=off) +:Default Value: 32 This value, in units of 1.024 microseconds, limits the delay in which a transmit interrupt is generated. Useful only if TxIntDelay is non-zero, @@ -268,18 +297,21 @@ network conditions. XsumRX ------ + (This parameter is NOT supported on the 82542-based adapter.) -Valid Range: 0-1 -Default Value: 1 + +:Valid Range: 0-1 +:Default Value: 1 A value of '1' indicates that the driver should enable IP checksum offload for received packets (both UDP and TCP) to the adapter hardware. Copybreak --------- -Valid Range: 0-xxxxxxx (0=off) -Default Value: 256 -Usage: modprobe e1000.ko copybreak=128 + +:Valid Range: 0-xxxxxxx (0=off) +:Default Value: 256 +:Usage: modprobe e1000.ko copybreak=128 Driver copies all packets below or equaling this size to a fresh RX buffer before handing it up the stack. 
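As a worked example of the units used by the delay parameters above: they are
expressed in increments of 1.024 microseconds, so the TxIntDelay default of 8
corresponds to roughly 8 * 1.024 = 8.2 microseconds, and the RxAbsIntDelay
default of 128 bounds the receive interrupt delay at about
128 * 1.024 = 131 microseconds.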
@@ -291,8 +323,9 @@ it is also available during runtime at
 
 SmartPowerDownEnable
 --------------------
-Valid Range: 0-1
-Default Value: 0 (disabled)
+
+:Valid Range: 0-1
+:Default Value: 0 (disabled)
 
 Allows PHY to turn off in lower power states. The user can turn off
 this parameter in supported chipsets.
@@ -308,14 +341,14 @@ fiber interface board only links at 1000 Mbps full-duplex.
 
 For copper-based boards, the keywords interact as follows:
 
-  The default operation is auto-negotiate. The board advertises all
+- The default operation is auto-negotiate. The board advertises all
   supported speed and duplex combinations, and it links at the highest
   common speed and duplex mode IF the link partner is set to auto-negotiate.
 
-  If Speed = 1000, limited auto-negotiation is enabled and only 1000 Mbps
+- If Speed = 1000, limited auto-negotiation is enabled and only 1000 Mbps
   is advertised (The 1000BaseT spec requires auto-negotiation.)
 
-  If Speed = 10 or 100, then both Speed and Duplex should be set. Auto-
+- If Speed = 10 or 100, then both Speed and Duplex should be set. Auto-
   negotiation is disabled, and the AutoNeg parameter is ignored. Partner
   SHOULD also be forced.
 
@@ -327,13 +360,15 @@ process.
 The parameter may be specified as either a decimal or hexadecimal value as
 determined by the bitmap below.
 
+============== ====== ====== ======= ======= ====== ====== ======= ======
 Bit position   7      6      5       4       3      2      1       0
 Decimal Value  128    64     32      16      8      4      2       1
 Hex value      80     40     20      10      8      4      2       1
 Speed (Mbps)   N/A    N/A    1000    N/A     100    100    10      10
 Duplex                       Full            Full   Half   Full    Half
+============== ====== ====== ======= ======= ====== ====== ======= ======
 
-Some examples of using AutoNeg:
+Some examples of using AutoNeg::
 
   modprobe e1000 AutoNeg=0x01 (Restricts autonegotiation to 10 Half)
   modprobe e1000 AutoNeg=1 (Same as above)
@@ -354,8 +389,9 @@ previously mentioned to force the adapter to the same speed and duplex.
 Additional Configurations
 =========================
 
-  Jumbo Frames
-  ------------
+Jumbo Frames
+------------
+
   Jumbo Frames support is enabled by changing the MTU to a value larger
   than the default of 1500.  Use the ifconfig command to increase the MTU
   size.  For example::
@@ -367,11 +403,11 @@ Additional Configurations
 
        MTU=9000
 
-   to the file /etc/sysconfig/network-scripts/ifcfg-eth<x>.  This example
-   applies to the Red Hat distributions; other distributions may store this
-   setting in a different location.
+  to the file /etc/sysconfig/network-scripts/ifcfg-eth<x>.  This example
+  applies to the Red Hat distributions; other distributions may store this
+  setting in a different location.
 
-  Notes:
+Notes:
   Degradation in throughput performance may be observed in some Jumbo frames
   environments. If this is observed, increasing the application's socket buffer
   size and/or increasing the /proc/sys/net/ipv4/tcp_*mem entry values may help.
@@ -385,12 +421,14 @@ Additional Configurations
     poor performance or loss of link.
 
   - Adapters based on the Intel(R) 82542 and 82573V/E controller do not
-    support Jumbo Frames. These correspond to the following product names:
+    support Jumbo Frames. These correspond to the following product names::
+
      Intel(R) PRO/1000 Gigabit Server Adapter
      Intel(R) PRO/1000 PM Network Connection
 
-  ethtool
-  -------
+ethtool
+-------
+
   The driver utilizes the ethtool interface for driver configuration and
   diagnostics, as well as displaying statistical information.  The ethtool
   version 1.6 or later is required for this functionality.
@@ -398,8 +436,9 @@ Additional Configurations
   The latest release of ethtool can be found from
   https://www.kernel.org/pub/software/network/ethtool/
 
-  Enabling Wake on LAN* (WoL)
-  ---------------------------
+Enabling Wake on LAN* (WoL)
+---------------------------
+
   WoL is configured through the ethtool* utility.
 
   WoL will be enabled on the system during the next shut down or reboot.
diff --git a/Documentation/networking/strparser.txt b/Documentation/networking/strparser.txt
index 13081b3decef..a7d354ddda7b 100644
--- a/Documentation/networking/strparser.txt
+++ b/Documentation/networking/strparser.txt
@@ -48,7 +48,7 @@ void strp_pause(struct strparser *strp)
      Temporarily pause a stream parser. Message parsing is suspended
      and no new messages are delivered to the upper layer.
 
-void strp_pause(struct strparser *strp)
+void strp_unpause(struct strparser *strp)
 
      Unpause a paused stream parser.
diff --git a/Documentation/riscv/pmu.txt b/Documentation/riscv/pmu.txt
new file mode 100644
index 000000000000..b29f03a6d82f
--- /dev/null
+++ b/Documentation/riscv/pmu.txt
@@ -0,0 +1,249 @@
+Supporting PMUs on RISC-V platforms
+==========================================
+Alan Kao , Mar 2018
+
+Introduction
+------------
+
+As of this writing, perf_event-related features mentioned in The RISC-V ISA
+Privileged Version 1.10 are as follows:
+(please check the manual for more details)
+
+* [m|s]counteren
+* mcycle[h], cycle[h]
+* minstret[h], instret[h]
+* mhpeventx, mhpcounterx[h]
+
+With only this feature set, porting perf would require a lot of work, due to
+the lack of the following general architectural performance-monitoring
+features:
+
+* Enabling/Disabling counters
+  Counters are just free-running all the time in our case.
+* Interrupt caused by counter overflow
+  The spec provides no such feature.
+* Interrupt indicator
+  It is not possible to have many interrupt ports for all counters, so an
+  interrupt indicator is required for software to tell which counter has
+  just overflowed.
+* Writing to counters
+  There will be an SBI to support this since the kernel cannot modify the
+  counters [1].  Alternatively, some vendors are considering a hardware
+  extension that lets M-S-U model machines write counters directly.
+
+This document aims to provide developers with a quick guide on supporting
+their PMUs in the kernel.  The following sections briefly explain perf's
+mechanism and the work that remains.
+
+You may check previous discussions here [1][2].  Also, it might be helpful
+to check the appendix for related kernel structures.
+
+
+1. Initialization
+-----------------
+
+*riscv_pmu* is a global pointer of type *struct riscv_pmu*, which contains
+various methods according to perf's internal convention and PMU-specific
+parameters.  One should declare such an instance to represent the PMU.  By
+default, *riscv_pmu* points to a constant structure *riscv_base_pmu*, which
+has very basic support for a baseline QEMU model.
+
+A developer can then either assign the instance's pointer to *riscv_pmu* so
+that the minimal, already-implemented logic is leveraged, or provide a
+custom *riscv_init_platform_pmu* implementation.
+
+In other words, the existing sources of *riscv_base_pmu* merely provide a
+reference implementation.  Developers can flexibly decide how many parts
+they want to leverage, and in the most extreme case, they can customize
+every function according to their needs.
+
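+A very rough sketch of such an override follows.  Note that the field
+names of *struct riscv_pmu* below are illustrative only, not the actual
+layout; check arch/riscv/include/asm/perf_event.h for the real definition.
+
+static struct pmu vendor_perf_pmu = {
+        /* event_init/add/del/start/stop/read callbacks go here */
+};
+
+static const struct riscv_pmu vendor_pmu = {
+        .pmu          = &vendor_perf_pmu,
+        .num_counters = 4,              /* platform specific */
+};
+
+int __init riscv_init_platform_pmu(void)
+{
+        riscv_pmu = &vendor_pmu;
+        return 0;
+}
+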
+2. Event Initialization
+-----------------------
+
+When a user launches a perf command to monitor some events, it is first
+interpreted by the userspace perf tool into multiple *perf_event_open*
+system calls, each of which then calls into the *event_init* member
+function that was assigned in the previous step.  In *riscv_base_pmu*'s
+case, it is *riscv_event_init*.
+
+The main purpose of this function is to translate the event provided by the
+user into a bitmap, so that HW-related control registers or counters can
+directly be manipulated.  The translation is based on the mappings and
+methods provided in *riscv_pmu*.
+
+Note that some features can be set up at this stage as well:
+
+(1) interrupt setting, which is stated in the next section;
+(2) privilege level setting (user space only, kernel space only, both);
+(3) destructor setting.  Normally it is sufficient to apply *riscv_destroy_event*;
+(4) tweaks for non-sampling events, which will be utilized by functions such as
+*perf_adjust_period*, usually something like the following:
+
+if (!is_sampling_event(event)) {
+        hwc->sample_period = x86_pmu.max_period;
+        hwc->last_period = hwc->sample_period;
+        local64_set(&hwc->period_left, hwc->sample_period);
+}
+
+In the case of *riscv_base_pmu*, only (3) is provided for now.
+
+
+3. Interrupt
+------------
+
+3.1. Interrupt Initialization
+
+This often occurs at the beginning of the *event_init* method.  In common
+practice, this should be a code segment like:
+
+int x86_reserve_hardware(void)
+{
+        int err = 0;
+
+        if (!atomic_inc_not_zero(&pmc_refcount)) {
+                mutex_lock(&pmc_reserve_mutex);
+                if (atomic_read(&pmc_refcount) == 0) {
+                        if (!reserve_pmc_hardware())
+                                err = -EBUSY;
+                        else
+                                reserve_ds_buffers();
+                }
+                if (!err)
+                        atomic_inc(&pmc_refcount);
+                mutex_unlock(&pmc_reserve_mutex);
+        }
+
+        return err;
+}
+
+The magic is in *reserve_pmc_hardware*, which usually does atomic operations
+to make the implemented IRQ accessible through some global function pointer.
+*release_pmc_hardware* serves the opposite purpose, and it is used in the
+event destructors mentioned in the previous section.
+
+(Note: judging from the implementations across architectures, the
+*reserve/release* pair always deals with IRQ settings, so the name
+*pmc_hardware* is somewhat misleading.  It does NOT deal with the binding
+between an event and a physical counter, which will be introduced in the
+next section.)
+
+3.2. IRQ Structure
+
+Basically, an IRQ handler runs the following pseudo code:
+
+for each hardware counter that triggered this overflow
+
+    get the event of this counter
+
+    // the following two steps are defined as *read()*;
+    // check the section Reading/Writing Counters for details.
+    count the delta value since the previous interrupt
+    update the event->count (# of event occurrences) by adding delta, and
+               event->hw.period_left by subtracting delta
+
+    if the event overflows
+        sample data
+        set the counter appropriately for the next overflow
+
+        if the event overflows again too frequently
+            throttle this event
+        fi
+    fi
+
+end for
+
+However, as of this writing, none of the RISC-V implementations has designed
+an interrupt scheme for perf, so the details are to be completed in the
+future.
+
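+A minimal sketch of such a handler, modeled on what other architectures do
+(everything below is illustrative; the iteration and counter helpers are
+hypothetical and nothing like them exists in the RISC-V port yet):
+
+static irqreturn_t pmu_overflow_handler(int irq, void *dev)
+{
+        struct perf_sample_data data;
+        struct perf_event *event;
+        u64 delta;
+
+        for_each_overflowed_counter(event) {        /* hypothetical */
+                delta = read_counter_delta(event);  /* hypothetical */
+                local64_add(delta, &event->count);
+                local64_sub(delta, &event->hw.period_left);
+
+                perf_sample_data_init(&data, 0, event->hw.last_period);
+                if (perf_event_overflow(event, &data, get_irq_regs()))
+                        /* the perf core asked us to throttle this event */
+                        event->pmu->stop(event, 0);
+        }
+        return IRQ_HANDLED;
+}
+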
+4. Reading/Writing Counters
+---------------------------
+
+They seem symmetric, but perf treats them quite differently.  For reading,
+there is a *read* interface in *struct pmu*, but it serves more than just
+reading: depending on the context, the *read* function not only reads the
+content of the counter (event->count), but also updates the period left
+until the next interrupt (event->hw.period_left).
+
+The perf core, however, does not need to write counters directly.  Writing
+counters is hidden behind two abstractions: 1) *pmu->start* literally starts
+counting, so one has to set the counter to a good value for the next
+interrupt; 2) inside the IRQ handler, the counter should be reset to a
+similarly reasonable value.
+
+Reading is not a problem in RISC-V, but writing would need some effort,
+since counters are not allowed to be written by S-mode.
+
+
+5. add()/del()/start()/stop()
+-----------------------------
+
+Basic idea: add()/del() adds/deletes events to/from a PMU, and start()/stop()
+starts/stops the counter of some event in the PMU.  All of them take the same
+arguments: *struct perf_event *event* and *int flag*.
+
+If you consider perf as a state machine, these functions serve as the
+transitions between its states.
+Three states (event->hw.state) are defined:
+
+* PERF_HES_STOPPED:     the counter is stopped
+* PERF_HES_UPTODATE:    the event->count is up-to-date
+* PERF_HES_ARCH:        arch-dependent usage ... we don't need this for now
+
+A normal flow of these state transitions is as follows:
+
+* A user launches a perf event, resulting in a call to *event_init*.
+* When being context-switched in, *add* is called by the perf core, with the
+  flag PERF_EF_START, which means that the event should be started after it
+  is added.  At this stage, a general event is bound to a physical counter,
+  if any.  The state changes to PERF_HES_STOPPED and PERF_HES_UPTODATE,
+  because it is now stopped, and the (software) event count does not need
+  updating.
+** *start* is then called, and the counter is enabled.
+   With the flag PERF_EF_RELOAD, it writes an appropriate value to the
+   counter (check the previous section for details).
+   Nothing is written if the flag does not contain PERF_EF_RELOAD.
+   The state is now reset to none, because the event is neither stopped nor
+   up to date (the counting has already started).
+* When being context-switched out, *del* is called.  It then checks out all
+  the events in the PMU and calls *stop* to update their counts.
+** *stop* is called by *del*
+   and the perf core with the flag PERF_EF_UPDATE, and it often shares the
+   same subroutine as *read*, with the same logic.
+   The state changes to PERF_HES_STOPPED and PERF_HES_UPTODATE, again.
+
+** Life cycle of these two pairs: *add* and *del* are called repeatedly as
+   tasks switch in and out; *start* and *stop* are also called when the perf
+   core needs a quick stop-and-start, for instance, when the interrupt
+   period is being adjusted.
+
+The current implementation is sufficient for now and can be easily extended
+to support more features in the future.
+
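+As a sketch of the state discipline described in this section (again
+illustrative; *write_counter* and *pmu_read* are hypothetical helpers):
+
+static void pmu_start(struct perf_event *event, int flags)
+{
+        if (flags & PERF_EF_RELOAD)
+                write_counter(event, event->hw.sample_period);
+        event->hw.state = 0;    /* neither stopped nor up-to-date */
+}
+
+static void pmu_stop(struct perf_event *event, int flags)
+{
+        event->hw.state |= PERF_HES_STOPPED;
+        if ((flags & PERF_EF_UPDATE) &&
+            !(event->hw.state & PERF_HES_UPTODATE)) {
+                pmu_read(event);  /* fold the counter delta into event->count */
+                event->hw.state |= PERF_HES_UPTODATE;
+        }
+}
+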
+
+A. Related Structures
+---------------------
+
+* struct pmu: include/linux/perf_event.h
+* struct riscv_pmu: arch/riscv/include/asm/perf_event.h
+
+  Both structures are designed to be read-only.
+
+  *struct pmu* defines some function pointer interfaces, and most of them
+take *struct perf_event* as a main argument, dealing with perf events
+according to perf's internal state machine (check kernel/events/core.c for
+details).
+
+  *struct riscv_pmu* defines PMU-specific parameters.  The naming follows
+the convention of all other architectures.
+
+* struct perf_event: include/linux/perf_event.h
+* struct hw_perf_event
+
+  The generic structures that represent a perf event and its
+hardware-related details.
+
+* struct riscv_hw_events: arch/riscv/include/asm/perf_event.h
+
+  The structure that holds the status of events; it has two fixed members:
+the number of events and the array of the events.
+
+
+References
+----------
+
+[1] https://github.com/riscv/riscv-linux/pull/124
+[2] https://groups.google.com/a/groups.riscv.org/forum/#!topic/sw-dev/f19TmCNP6yA
diff --git a/Documentation/sphinx/rstFlatTable.py b/Documentation/sphinx/rstFlatTable.py
index 25feb0d35e7a..2019a55f6b18 100755
--- a/Documentation/sphinx/rstFlatTable.py
+++ b/Documentation/sphinx/rstFlatTable.py
@@ -53,8 +53,6 @@ from docutils.utils import SystemMessagePropagation
 # common globals
 # ==============================================================================
 
-# The version numbering follows numbering of the specification
-# (Documentation/books/kernel-doc-HOWTO).
 __version__  = '1.0'
 
 PY3 = sys.version_info[0] == 3
diff --git a/Documentation/trace/coresight.txt b/Documentation/trace/coresight.txt
index 1d74ad0202b6..efbc832146e7 100644
--- a/Documentation/trace/coresight.txt
+++ b/Documentation/trace/coresight.txt
@@ -426,5 +426,5 @@ root@genericarmv8:~#
 Details on how to use the generic STM API can be found here [2].
 
 [1]. Documentation/ABI/testing/sysfs-bus-coresight-devices-stm
-[2]. Documentation/trace/stm.txt
+[2]. Documentation/trace/stm.rst
 [3]. https://github.com/Linaro/perf-opencsd
diff --git a/Documentation/trace/events.rst b/Documentation/trace/events.rst
index 1afae55dc55c..696dc69b8158 100644
--- a/Documentation/trace/events.rst
+++ b/Documentation/trace/events.rst
@@ -8,7 +8,7 @@ Event Tracing
 1. Introduction
 ===============
 
-Tracepoints (see Documentation/trace/tracepoints.txt) can be used
+Tracepoints (see Documentation/trace/tracepoints.rst) can be used
 without creating custom kernel modules to register probe functions
 using the event tracing infrastructure.
 
diff --git a/Documentation/trace/ftrace-uses.rst b/Documentation/trace/ftrace-uses.rst
index 00283b6dd101..1fbc69894eed 100644
--- a/Documentation/trace/ftrace-uses.rst
+++ b/Documentation/trace/ftrace-uses.rst
@@ -199,7 +199,7 @@ If @buf is NULL and reset is set, all functions will be enabled for tracing.
 The @buf can also be a glob expression to enable all functions that
 match a specific pattern.
 
-See Filter Commands in :file:`Documentation/trace/ftrace.txt`.
+See Filter Commands in :file:`Documentation/trace/ftrace.rst`.
 
 To just trace the schedule function:
diff --git a/Documentation/trace/histogram.txt b/Documentation/trace/histogram.txt
index b13771cb12c1..7ffea6aa22e3 100644
--- a/Documentation/trace/histogram.txt
+++ b/Documentation/trace/histogram.txt
@@ -7,7 +7,7 @@
   Histogram triggers are special event triggers that can be used to
   aggregate trace event data into histograms. For information on
-  trace events and event triggers, see Documentation/trace/events.txt.
+  trace events and event triggers, see Documentation/trace/events.rst.
 
 
 2. Histogram Trigger Command
@@ -1729,35 +1729,35 @@ If a variable isn't a key variable or prefixed with 'vals=', the associated
 event field will be saved in a variable but won't be summed as a
 value:
 
-  # echo 'hist:keys=next_pid:ts1=common_timestamp ... >> event/trigger
+  # echo 'hist:keys=next_pid:ts1=common_timestamp ...' >> event/trigger
 
 Multiple variables can be assigned at the same time.  The below would
 result in both ts0 and b being created as variables, with both
 common_timestamp and field1 additionally being summed as values:
 
-  # echo 'hist:keys=pid:vals=$ts0,$b:ts0=common_timestamp,b=field1 ...
>> \ + # echo 'hist:keys=pid:vals=$ts0,$b:ts0=common_timestamp,b=field1 ...' >> \ event/trigger Note that variable assignments can appear either preceding or following their use. The command below behaves identically to the command above: - # echo 'hist:keys=pid:ts0=common_timestamp,b=field1:vals=$ts0,$b ... >> \ + # echo 'hist:keys=pid:ts0=common_timestamp,b=field1:vals=$ts0,$b ...' >> \ event/trigger Any number of variables not bound to a 'vals=' prefix can also be assigned by simply separating them with colons. Below is the same thing but without the values being summed in the histogram: - # echo 'hist:keys=pid:ts0=common_timestamp:b=field1 ... >> event/trigger + # echo 'hist:keys=pid:ts0=common_timestamp:b=field1 ...' >> event/trigger Variables set as above can be referenced and used in expressions on another event. For example, here's how a latency can be calculated: - # echo 'hist:keys=pid,prio:ts0=common_timestamp ... >> event1/trigger - # echo 'hist:keys=next_pid:wakeup_lat=common_timestamp-$ts0 ... >> event2/trigger + # echo 'hist:keys=pid,prio:ts0=common_timestamp ...' >> event1/trigger + # echo 'hist:keys=next_pid:wakeup_lat=common_timestamp-$ts0 ...' >> event2/trigger In the first line above, the event's timetamp is saved into the variable ts0. In the next line, ts0 is subtracted from the second @@ -1766,7 +1766,7 @@ yet another variable, 'wakeup_lat'. The hist trigger below in turn makes use of the wakeup_lat variable to compute a combined latency using the same key and variable from yet another event: - # echo 'hist:key=pid:wakeupswitch_lat=$wakeup_lat+$switchtime_lat ... >> event3/trigger + # echo 'hist:key=pid:wakeupswitch_lat=$wakeup_lat+$switchtime_lat ...' >> event3/trigger 2.2.2 Synthetic Events ---------------------- @@ -1807,10 +1807,11 @@ the command that defined it with a '!': At this point, there isn't yet an actual 'wakeup_latency' event instantiated in the event subsytem - for this to happen, a 'hist trigger action' needs to be instantiated and bound to actual fields -and variables defined on other events (see Section 6.3.3 below). +and variables defined on other events (see Section 2.2.3 below on +how that is done using hist trigger 'onmatch' action). Once that is +done, the 'wakeup_latency' synthetic event instance is created. -Once that is done, an event instance is created, and a histogram can -be defined using it: +A histogram can now be defined for the new synthetic event: # echo 'hist:keys=pid,prio,lat.log2:sort=pid,lat' >> \ /sys/kernel/debug/tracing/events/synthetic/wakeup_latency/trigger @@ -1960,7 +1961,7 @@ hist trigger specification. back to that pid, the timestamp difference is calculated. If the resulting latency, stored in wakeup_lat, exceeds the current maximum latency, the values specified in the save() fields are - recoreded: + recorded: # echo 'hist:keys=pid:ts0=common_timestamp.usecs \ if comm=="cyclictest"' >> \ diff --git a/Documentation/trace/intel_th.rst b/Documentation/trace/intel_th.rst index 990f13265178..19e2d633f3c7 100644 --- a/Documentation/trace/intel_th.rst +++ b/Documentation/trace/intel_th.rst @@ -38,7 +38,7 @@ description is at Documentation/ABI/testing/sysfs-bus-intel_th-devices-gth. STH registers an stm class device, through which it provides interface to userspace and kernelspace software trace sources. See -Documentation/trace/stm.txt for more information on that. +Documentation/trace/stm.rst for more information on that. 
MSU can be configured to collect trace data into a system memory buffer, which can later on be read from its device nodes via read() or diff --git a/Documentation/trace/tracepoint-analysis.rst b/Documentation/trace/tracepoint-analysis.rst index a4d3ff2e5efb..716326b9f152 100644 --- a/Documentation/trace/tracepoint-analysis.rst +++ b/Documentation/trace/tracepoint-analysis.rst @@ -6,7 +6,7 @@ Notes on Analysing Behaviour Using Events and Tracepoints 1. Introduction =============== -Tracepoints (see Documentation/trace/tracepoints.txt) can be used without +Tracepoints (see Documentation/trace/tracepoints.rst) can be used without creating custom kernel modules to register probe functions using the event tracing infrastructure. @@ -55,7 +55,7 @@ simple case of:: 3.1 System-Wide Event Enabling ------------------------------ -See Documentation/trace/events.txt for a proper description on how events +See Documentation/trace/events.rst for a proper description on how events can be enabled system-wide. A short example of enabling all events related to page allocation would look something like:: @@ -112,7 +112,7 @@ at that point. 3.4 Local Event Enabling ------------------------ -Documentation/trace/ftrace.txt describes how to enable events on a per-thread +Documentation/trace/ftrace.rst describes how to enable events on a per-thread basis using set_ftrace_pid. 3.5 Local Event Enablement with PCL @@ -137,7 +137,7 @@ basis using PCL such as follows. 4. Event Filtering ================== -Documentation/trace/ftrace.txt covers in-depth how to filter events in +Documentation/trace/ftrace.rst covers in-depth how to filter events in ftrace. Obviously using grep and awk of trace_pipe is an option as well as any script reading trace_pipe. diff --git a/Documentation/translations/ja_JP/howto.rst b/Documentation/translations/ja_JP/howto.rst index 8d7ed0cbbf5f..f3116381c26b 100644 --- a/Documentation/translations/ja_JP/howto.rst +++ b/Documentation/translations/ja_JP/howto.rst @@ -1,5 +1,5 @@ NOTE: -This is a version of Documentation/HOWTO translated into Japanese. +This is a version of Documentation/process/howto.rst translated into Japanese. This document is maintained by Tsugikazu Shibata If you find any difference between this document and the original file or a problem with the translation, please contact the maintainer of this file. @@ -109,7 +109,7 @@ linux-api@vger.kernel.org に送ることを勧めます。 ています。 カーネルに関して初めての人はここからスタートすると良い でしょう。 - :ref:`Documentation/Process/changes.rst ` + :ref:`Documentation/process/changes.rst ` このファイルはカーネルをうまく生成(訳注 build )し、走らせるのに最 小限のレベルで必要な数々のソフトウェアパッケージの一覧を示してい ます。 diff --git a/Documentation/translations/ko_KR/howto.rst b/Documentation/translations/ko_KR/howto.rst index 624654bdcd8a..a8197e072599 100644 --- a/Documentation/translations/ko_KR/howto.rst +++ b/Documentation/translations/ko_KR/howto.rst @@ -160,7 +160,7 @@ mtk.manpages@gmail.com의 메인테이너에게 보낼 것을 권장한다. 독특한 행동에 관하여 흔히 있는 오해들과 혼란들을 해소하고 있기 때문이다. - :ref:`Documentation/process/stable_kernel_rules.rst ` + :ref:`Documentation/process/stable-kernel-rules.rst ` 이 문서는 안정적인 커널 배포가 이루어지는 규칙을 설명하고 있으며 여러분들이 이러한 배포들 중 하나에 변경을 하길 원한다면 무엇을 해야 하는지를 설명한다. 
diff --git a/Documentation/translations/ko_KR/memory-barriers.txt b/Documentation/translations/ko_KR/memory-barriers.txt index 921739d00f69..7f01fb1c1084 100644 --- a/Documentation/translations/ko_KR/memory-barriers.txt +++ b/Documentation/translations/ko_KR/memory-barriers.txt @@ -1891,22 +1891,22 @@ Mandatory 배리어들은 SMP 시스템에서도 UP 시스템에서도 SMP 효 /* 소유권을 수정 */ desc->status = DEVICE_OWN; - /* MMIO 를 통해 디바이스에 공지를 하기 전에 메모리를 동기화 */ - wmb(); - /* 업데이트된 디스크립터의 디바이스에 공지 */ writel(DESC_NOTIFY, doorbell); } dma_rmb() 는 디스크립터로부터 데이터를 읽어오기 전에 디바이스가 소유권을 - 내놓았음을 보장하게 하고, dma_wmb() 는 디바이스가 자신이 소유권을 다시 - 가졌음을 보기 전에 디스크립터에 데이터가 쓰였음을 보장합니다. wmb() 는 - 캐시 일관성이 없는 (cache incoherent) MMIO 영역에 쓰기를 시도하기 전에 - 캐시 일관성이 있는 메모리 (cache coherent memory) 쓰기가 완료되었음을 - 보장해주기 위해 필요합니다. - - consistent memory 에 대한 자세한 내용을 위해선 Documentation/DMA-API.txt - 문서를 참고하세요. + 내려놓았을 것을 보장하고, dma_wmb() 는 디바이스가 자신이 소유권을 다시 + 가졌음을 보기 전에 디스크립터에 데이터가 쓰였을 것을 보장합니다. 참고로, + writel() 을 사용하면 캐시 일관성이 있는 메모리 (cache coherent memory) + 쓰기가 MMIO 영역에의 쓰기 전에 완료되었을 것을 보장하므로 writel() 앞에 + wmb() 를 실행할 필요가 없음을 알아두시기 바랍니다. writel() 보다 비용이 + 저렴한 writel_relaxed() 는 이런 보장을 제공하지 않으므로 여기선 사용되지 + 않아야 합니다. + + writel_relaxed() 와 같은 완화된 I/O 접근자들에 대한 자세한 내용을 위해서는 + "커널 I/O 배리어의 효과" 섹션을, consistent memory 에 대한 자세한 내용을 + 위해선 Documentation/DMA-API.txt 문서를 참고하세요. MMIO 쓰기 배리어 diff --git a/Documentation/translations/zh_CN/SubmittingDrivers b/Documentation/translations/zh_CN/SubmittingDrivers index 929385e4b194..15e73562f710 100644 --- a/Documentation/translations/zh_CN/SubmittingDrivers +++ b/Documentation/translations/zh_CN/SubmittingDrivers @@ -107,7 +107,7 @@ Linux 2.6: 程序测试的指导,请参阅 Documentation/power/drivers-testing.txt。有关驱动程序电 源管理问题相对全面的概述,请参阅 - Documentation/power/admin-guide/devices.rst。 + Documentation/driver-api/pm/devices.rst。 管理: 如果一个驱动程序的作者还在进行有效的维护,那么通常除了那 些明显正确且不需要任何检查的补丁以外,其他所有的补丁都会 diff --git a/Documentation/translations/zh_CN/gpio.txt b/Documentation/translations/zh_CN/gpio.txt index 4f8bf30a41dc..4cb1ba8b8fed 100644 --- a/Documentation/translations/zh_CN/gpio.txt +++ b/Documentation/translations/zh_CN/gpio.txt @@ -1,4 +1,4 @@ -Chinese translated version of Documentation/gpio.txt +Chinese translated version of Documentation/gpio If you have any comment or update to the content, please contact the original document maintainer directly. However, if you have a problem @@ -10,7 +10,7 @@ Maintainer: Grant Likely Linus Walleij Chinese maintainer: Fu Wei --------------------------------------------------------------------- -Documentation/gpio.txt 的中文翻译 +Documentation/gpio 的中文翻译 如果想评论或更新本文的内容,请直接联系原文档的维护者。如果你使用英文 交流有困难的话,也可以向中文版维护者求助。如果本翻译更新不及时或者翻 diff --git a/Documentation/translations/zh_CN/io_ordering.txt b/Documentation/translations/zh_CN/io_ordering.txt index e592daf4e014..1f8127bdd415 100644 --- a/Documentation/translations/zh_CN/io_ordering.txt +++ b/Documentation/translations/zh_CN/io_ordering.txt @@ -1,4 +1,4 @@ -Chinese translated version of Documentation/io_orderings.txt +Chinese translated version of Documentation/io_ordering.txt If you have any comment or update to the content, please contact the original document maintainer directly. 
However, if you have a problem
diff --git a/Documentation/translations/zh_CN/magic-number.txt b/Documentation/translations/zh_CN/magic-number.txt
index e9db693c0a23..7159cec04090 100644
--- a/Documentation/translations/zh_CN/magic-number.txt
+++ b/Documentation/translations/zh_CN/magic-number.txt
@@ -1,4 +1,4 @@
-Chinese translated version of Documentation/magic-number.txt
+Chinese translated version of Documentation/process/magic-number.rst
 
 If you have any comment or update to the content, please post to LKML directly.
 However, if you have problem communicating in English you can also ask the
@@ -7,7 +7,7 @@ translation is outdated or there is problem with translation.
 
 Chinese maintainer: Jia Wei Wei
 ---------------------------------------------------------------------
-Documentation/magic-number.txt的中文翻译
+Documentation/process/magic-number.rst的中文翻译
 
 如果想评论或更新本文的内容,请直接发信到LKML。如果你使用英文交流有困难的话,也可
 以向中文版维护者求助。如果本翻译更新不及时或者翻译存在问题,请联系中文版维护者。
diff --git a/Documentation/translations/zh_CN/video4linux/omap3isp.txt b/Documentation/translations/zh_CN/video4linux/omap3isp.txt
index 67ffbf352ae0..e9f29375aa95 100644
--- a/Documentation/translations/zh_CN/video4linux/omap3isp.txt
+++ b/Documentation/translations/zh_CN/video4linux/omap3isp.txt
@@ -1,4 +1,4 @@
-Chinese translated version of Documentation/video4linux/omap3isp.txt
+Chinese translated version of Documentation/media/v4l-drivers/omap3isp.rst
 
 If you have any comment or update to the content, please contact the
 original document maintainer directly.  However, if you have a problem
@@ -11,7 +11,7 @@ Maintainer: Laurent Pinchart
 	    David Cohen
 Chinese maintainer: Fu Wei
 ---------------------------------------------------------------------
-Documentation/video4linux/omap3isp.txt 的中文翻译
+Documentation/media/v4l-drivers/omap3isp.rst 的中文翻译
 
 如果想评论或更新本文的内容,请直接联系原文档的维护者。如果你使用英文
 交流有困难的话,也可以向中文版维护者求助。如果本翻译更新不及时或者翻
diff --git a/Documentation/translations/zh_CN/video4linux/v4l2-framework.txt b/Documentation/translations/zh_CN/video4linux/v4l2-framework.txt
index c77c0f060864..66c7c568bd86 100644
--- a/Documentation/translations/zh_CN/video4linux/v4l2-framework.txt
+++ b/Documentation/translations/zh_CN/video4linux/v4l2-framework.txt
@@ -1,4 +1,4 @@
-Chinese translated version of Documentation/video4linux/v4l2-framework.txt
+Chinese translated version of Documentation/media/media_kapi.rst
 
 If you have any comment or update to the content, please contact the
 original document maintainer directly.  However, if you have a problem
@@ -9,7 +9,7 @@ or if there is a problem with the translation.
 Maintainer: Mauro Carvalho Chehab
 Chinese maintainer: Fu Wei
 ---------------------------------------------------------------------
-Documentation/video4linux/v4l2-framework.txt 的中文翻译
+Documentation/media/media_kapi.rst 的中文翻译
 
 如果想评论或更新本文的内容,请直接联系原文档的维护者。如果你使用英文
 交流有困难的话,也可以向中文版维护者求助。如果本翻译更新不及时或者翻
@@ -777,7 +777,7 @@ v4l2 核心 API 提供了一个处理视频缓冲的标准方法(称为“videob
 线性 DMA(videobuf-dma-contig)以及大多用于 USB 设备的用 vmalloc
 分配的缓冲(videobuf-vmalloc)。
 
-请参阅 Documentation/video4linux/videobuf,以获得更多关于 videobuf
+请参阅 Documentation/media/kapi/v4l2-videobuf.rst,以获得更多关于 videobuf
 层的使用信息。
 
 v4l2_fh 结构体
diff --git a/Documentation/usb/gadget_configfs.txt b/Documentation/usb/gadget_configfs.txt
index 635e57493709..b8cb38a98c19 100644
--- a/Documentation/usb/gadget_configfs.txt
+++ b/Documentation/usb/gadget_configfs.txt
@@ -226,7 +226,7 @@ $ rm configs/<config name>.<number>/<function>
 
 where <config name>.<number> specify the configuration and <function> is
 a symlink to a function being removed from the configuration, e.g.:
 
-$ rm configfs/c.1/ncm.usb0
+$ rm configs/c.1/ncm.usb0
 
 ...
 ...
diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt index 495b7742ab58..cb8db4f9d097 100644 --- a/Documentation/virtual/kvm/api.txt +++ b/Documentation/virtual/kvm/api.txt @@ -4391,6 +4391,22 @@ all such vmexits. Do not enable KVM_FEATURE_PV_UNHALT if you disable HLT exits. +7.14 KVM_CAP_S390_HPAGE_1M + +Architectures: s390 +Parameters: none +Returns: 0 on success, -EINVAL if hpage module parameter was not set + or cmma is enabled + +With this capability the KVM support for memory backing with 1m pages +through hugetlbfs can be enabled for a VM. After the capability is +enabled, cmma can't be enabled anymore and pfmfi and the storage key +interpretation are disabled. If cmma has already been enabled or the +hpage module parameter is not set to 1, -EINVAL is returned. + +While it is generally possible to create a huge page backed VM without +this capability, the VM will not be able to run. + 8. Other capabilities. ---------------------- @@ -4610,7 +4626,7 @@ This capability indicates that kvm will implement the interfaces to handle reset, migration and nested KVM for branch prediction blocking. The stfle facility 82 should not be provided to the guest without this capability. -8.14 KVM_CAP_HYPERV_TLBFLUSH +8.18 KVM_CAP_HYPERV_TLBFLUSH Architectures: x86 diff --git a/Documentation/x86/intel_rdt_ui.txt b/Documentation/x86/intel_rdt_ui.txt index a16aa2113840..f662d3c530e5 100644 --- a/Documentation/x86/intel_rdt_ui.txt +++ b/Documentation/x86/intel_rdt_ui.txt @@ -29,7 +29,11 @@ mount options are: L2 and L3 CDP are controlled seperately. RDT features are orthogonal. A particular system may support only -monitoring, only control, or both monitoring and control. +monitoring, only control, or both monitoring and control. Cache +pseudo-locking is a unique way of using cache control to "pin" or +"lock" data in the cache. Details can be found in +"Cache Pseudo-Locking". + The mount succeeds if either of allocation or monitoring is present, but only those files and directories supported by the system will be created. @@ -65,6 +69,29 @@ related to allocation: some platforms support devices that have their own settings for cache use which can over-ride these bits. +"bit_usage": Annotated capacity bitmasks showing how all + instances of the resource are used. The legend is: + "0" - Corresponding region is unused. When the system's + resources have been allocated and a "0" is found + in "bit_usage" it is a sign that resources are + wasted. + "H" - Corresponding region is used by hardware only + but available for software use. If a resource + has bits set in "shareable_bits" but not all + of these bits appear in the resource groups' + schematas then the bits appearing in + "shareable_bits" but no resource group will + be marked as "H". + "X" - Corresponding region is available for sharing and + used by hardware and software. These are the + bits that appear in "shareable_bits" as + well as a resource group's allocation. + "S" - Corresponding region is used by software + and available for sharing. + "E" - Corresponding region is used exclusively by + one resource group. No sharing allowed. + "P" - Corresponding region is pseudo-locked. No + sharing allowed. Memory bandwitdh(MB) subdirectory contains the following files with respect to allocation: @@ -151,6 +178,9 @@ All groups contain the following files: CPUs to/from this group. As with the tasks file a hierarchy is maintained where MON groups may only include CPUs owned by the parent CTRL_MON group. 
+	When the resource group is in pseudo-locked mode this file will
+	only be readable, reflecting the CPUs associated with the
+	pseudo-locked region.
 
 
 "cpus_list":
@@ -163,6 +193,21 @@ When control is enabled all CTRL_MON groups will also contain:
 	A list of all the resources available to this group.
 	Each resource has its own line and format - see below for details.
 
+"size":
+	Mirrors the display of the "schemata" file to display the size in
+	bytes of each allocation instead of the bits representing the
+	allocation.
+
+"mode":
+	The "mode" of the resource group dictates the sharing of its
+	allocations. A "shareable" resource group allows sharing of its
+	allocations while an "exclusive" resource group does not. A
+	cache pseudo-locked region is created by first writing
+	"pseudo-locksetup" to the "mode" file before writing the cache
+	pseudo-locked region's schemata to the resource group's "schemata"
+	file. On successful pseudo-locked region creation the mode will
+	automatically change to "pseudo-locked".
+
 When monitoring is enabled all MON groups will also contain:
 
 "mon_data":
@@ -379,6 +424,170 @@ L3CODE:0=fffff;1=fffff;2=fffff;3=fffff
 L3DATA:0=fffff;1=fffff;2=3c0;3=fffff
 L3CODE:0=fffff;1=fffff;2=fffff;3=fffff
 
+Cache Pseudo-Locking
+--------------------
+CAT enables a user to specify the amount of cache space that an
+application can fill. Cache pseudo-locking builds on the fact that a
+CPU can still read and write data pre-allocated outside its current
+allocated area on a cache hit. With cache pseudo-locking, data can be
+preloaded into a reserved portion of cache that no application can
+fill, and from that point on will only serve cache hits. The cache
+pseudo-locked memory is made accessible to user space where an
+application can map it into its virtual address space and thus have
+a region of memory with reduced average read latency.
+
+The creation of a cache pseudo-locked region is triggered by a user
+request that is accompanied by the schemata of the region to be
+pseudo-locked. The cache pseudo-locked region is created as follows:
+- Create a CAT allocation CLOSNEW with a CBM matching the schemata
+  from the user of the cache region that will contain the pseudo-locked
+  memory. This region must not overlap with any current CAT allocation/CLOS
+  on the system and no future overlap with this cache region is allowed
+  while the pseudo-locked region exists.
+- Create a contiguous region of memory of the same size as the cache
+  region.
+- Flush the cache, disable hardware prefetchers, disable preemption.
+- Make CLOSNEW the active CLOS and touch the allocated memory to load
+  it into the cache.
+- Set the previous CLOS as active.
+- At this point the closid CLOSNEW can be released - the cache
+  pseudo-locked region is protected as long as its CBM does not appear in
+  any CAT allocation. Even though the cache pseudo-locked region will from
+  this point on not appear in any CBM of any CLOS, an application running
+  with any CLOS will still be able to access the memory in the pseudo-locked
+  region since the region continues to serve cache hits.
+- The contiguous region of memory loaded into the cache is exposed to
+  user-space as a character device.
+
+Cache pseudo-locking increases the probability that data will remain
+in the cache by carefully configuring the CAT feature and controlling
+application behavior. There is no guarantee that data is placed in
+cache. Instructions like INVD, WBINVD, CLFLUSH, etc. can still evict
+“locked” data from cache.
+Power management C-states may shrink or power off cache. Deeper C-states
+will automatically be restricted on pseudo-locked region creation.
+
+It is required that an application using a pseudo-locked region runs
+with affinity to the cores (or a subset of the cores) associated
+with the cache on which the pseudo-locked region resides. A sanity check
+within the code will not allow an application to map pseudo-locked memory
+unless it runs with affinity to cores associated with the cache on which the
+pseudo-locked region resides. The sanity check is only done during the
+initial mmap() handling; there is no enforcement afterwards, and the
+application itself needs to ensure it remains affine to the correct cores.
+
+Pseudo-locking is accomplished in two stages:
+1) During the first stage the system administrator allocates a portion
+   of cache that should be dedicated to pseudo-locking. At this time an
+   equivalent portion of memory is allocated, loaded into the allocated
+   cache portion, and exposed as a character device.
+2) During the second stage a user-space application maps (mmap()) the
+   pseudo-locked memory into its address space.
+
+Cache Pseudo-Locking Interface
+------------------------------
+A pseudo-locked region is created using the resctrl interface as follows:
+
+1) Create a new resource group by creating a new directory in
+   /sys/fs/resctrl.
+2) Change the new resource group's mode to "pseudo-locksetup" by writing
+   "pseudo-locksetup" to the "mode" file.
+3) Write the schemata of the pseudo-locked region to the "schemata" file. All
+   bits within the schemata should be "unused" according to the "bit_usage"
+   file.
+
+On successful pseudo-locked region creation the "mode" file will contain
+"pseudo-locked" and a new character device with the same name as the resource
+group will exist in /dev/pseudo_lock. This character device can be mmap()'ed
+by user space in order to obtain access to the pseudo-locked memory region.
+
+An example of cache pseudo-locked region creation and usage can be found below.
+
+Cache Pseudo-Locking Debugging Interface
+----------------------------------------
+The pseudo-locking debugging interface is enabled by default (if
+CONFIG_DEBUG_FS is enabled) and can be found in /sys/kernel/debug/resctrl.
+
+There is no explicit way for the kernel to test if a provided memory
+location is present in the cache. The pseudo-locking debugging interface uses
+the tracing infrastructure to provide two ways to measure cache residency of
+the pseudo-locked region:
+1) Memory access latency using the pseudo_lock_mem_latency tracepoint. Data
+   from these measurements are best visualized using a hist trigger (see
+   example below). In this test the pseudo-locked region is traversed at
+   a stride of 32 bytes while hardware prefetchers and preemption
+   are disabled. This also provides a substitute visualization of cache
+   hits and misses.
+2) Cache hit and miss measurements using model specific precision counters if
+   available. Depending on the levels of cache on the system the pseudo_lock_l2
+   and pseudo_lock_l3 tracepoints are available.
+   WARNING: triggering this measurement uses from two (for just L2
+   measurements) to four (for L2 and L3 measurements) precision counters on
+   the system; if any other measurements are in progress, the counters and
+   their corresponding event registers will be clobbered.
+
+When a pseudo-locked region is created a new debugfs directory is created for
+it in debugfs as /sys/kernel/debug/resctrl/<newname>.
+A single write-only file, pseudo_lock_measure, is present in this
+directory. The measurement performed on the pseudo-locked region depends
+on the number, 1 or 2, written to this debugfs file. Since the measurements
+are recorded with the tracing infrastructure, the relevant tracepoints need
+to be enabled before the measurement is triggered.
+
+Example of latency debugging interface:
+In this example a pseudo-locked region named "newlock" was created. Here is
+how we can measure the latency in cycles of reading from this region and
+visualize this data with a histogram that is available if CONFIG_HIST_TRIGGERS
+is set:
+# :> /sys/kernel/debug/tracing/trace
+# echo 'hist:keys=latency' > /sys/kernel/debug/tracing/events/resctrl/pseudo_lock_mem_latency/trigger
+# echo 1 > /sys/kernel/debug/tracing/events/resctrl/pseudo_lock_mem_latency/enable
+# echo 1 > /sys/kernel/debug/resctrl/newlock/pseudo_lock_measure
+# echo 0 > /sys/kernel/debug/tracing/events/resctrl/pseudo_lock_mem_latency/enable
+# cat /sys/kernel/debug/tracing/events/resctrl/pseudo_lock_mem_latency/hist
+
+# event histogram
+#
+# trigger info: hist:keys=latency:vals=hitcount:sort=hitcount:size=2048 [active]
+#
+
+{ latency:        456 } hitcount:          1
+{ latency:         50 } hitcount:         83
+{ latency:         36 } hitcount:         96
+{ latency:         44 } hitcount:        174
+{ latency:         48 } hitcount:        195
+{ latency:         46 } hitcount:        262
+{ latency:         42 } hitcount:        693
+{ latency:         40 } hitcount:       3204
+{ latency:         38 } hitcount:       3484
+
+Totals:
+    Hits: 8192
+    Entries: 9
+   Dropped: 0
+
+Example of cache hits/misses debugging:
+In this example a pseudo-locked region named "newlock" was created on the L2
+cache of a platform. Here is how we can obtain details of the cache hits
+and misses using the platform's precision counters.
+
+# :> /sys/kernel/debug/tracing/trace
+# echo 1 > /sys/kernel/debug/tracing/events/resctrl/pseudo_lock_l2/enable
+# echo 2 > /sys/kernel/debug/resctrl/newlock/pseudo_lock_measure
+# echo 0 > /sys/kernel/debug/tracing/events/resctrl/pseudo_lock_l2/enable
+# cat /sys/kernel/debug/tracing/trace
+
+# tracer: nop
+#
+#                              _-----=> irqs-off
+#                             / _----=> need-resched
+#                            | / _---=> hardirq/softirq
+#                            || / _--=> preempt-depth
+#                            ||| /     delay
+#           TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
+#              | |       |   ||||       |         |
+ pseudo_lock_mea-1672  [002] ....  3132.860500: pseudo_lock_l2: hits=4097 miss=0
+
+
 Examples for RDT allocation usage:
 
 Example 1
@@ -502,7 +711,172 @@ siblings and only the real time threads are scheduled on the cores 4-7.
 
 # echo F0 > p0/cpus
 
-4) Locking between applications
+Example 4
+---------
+
+The resource groups in previous examples were all in the default "shareable"
+mode allowing sharing of their cache allocations. If one resource group
+configures a cache allocation then nothing prevents another resource group
+from overlapping with that allocation.
+
+In this example a new exclusive resource group will be created on an L2 CAT
+system with two L2 cache instances that can be configured with an 8-bit
+capacity bitmask. The new exclusive resource group will be configured to use
+25% of each cache instance.
+
 Examples for RDT allocation usage:
 
 Example 1
@@ -502,7 +711,172 @@ siblings and only the real time threads are scheduled on the cores 4-7.
 # echo F0 > p0/cpus
 
-4) Locking between applications
+Example 4
+---------
+
+The resource groups in previous examples were all in the default "shareable"
+mode allowing sharing of their cache allocations. If one resource group
+configures a cache allocation then nothing prevents another resource group
+from overlapping with that allocation.
+
+In this example a new exclusive resource group will be created on an L2 CAT
+system with two L2 cache instances that can be configured with an 8-bit
+capacity bitmask. The new exclusive resource group will be configured to use
+25% of each cache instance.
+
+# mount -t resctrl resctrl /sys/fs/resctrl/
+# cd /sys/fs/resctrl
+
+First, we observe that the default group is configured to allocate to all L2
+cache:
+
+# cat schemata
+L2:0=ff;1=ff
+
+We could attempt to create the new resource group at this point, but it will
+fail because of the overlap with the schemata of the default group:
+# mkdir p0
+# echo 'L2:0=0x3;1=0x3' > p0/schemata
+# cat p0/mode
+shareable
+# echo exclusive > p0/mode
+-sh: echo: write error: Invalid argument
+# cat info/last_cmd_status
+schemata overlaps
+
+To ensure that there is no overlap with another resource group the default
+resource group's schemata has to change, making it possible for the new
+resource group to become exclusive.
+# echo 'L2:0=0xfc;1=0xfc' > schemata
+# echo exclusive > p0/mode
+# grep . p0/*
+p0/cpus:0
+p0/mode:exclusive
+p0/schemata:L2:0=03;1=03
+p0/size:L2:0=262144;1=262144
+
+A newly created resource group will not overlap with an exclusive resource
+group:
+# mkdir p1
+# grep . p1/*
+p1/cpus:0
+p1/mode:shareable
+p1/schemata:L2:0=fc;1=fc
+p1/size:L2:0=786432;1=786432
+
+The bit_usage will reflect how the cache is used:
+# cat info/L2/bit_usage
+0=SSSSSSEE;1=SSSSSSEE
+
+A resource group cannot be forced to overlap with an exclusive resource group:
+# echo 'L2:0=0x1;1=0x1' > p1/schemata
+-sh: echo: write error: Invalid argument
+# cat info/last_cmd_status
+overlaps with exclusive group
+
+Example of Cache Pseudo-Locking
+-------------------------------
+Lock a portion of L2 cache from cache id 1 using CBM 0x3. The pseudo-locked
+region is exposed at /dev/pseudo_lock/newlock and can be provided to an
+application as an argument to mmap().
+
+# mount -t resctrl resctrl /sys/fs/resctrl/
+# cd /sys/fs/resctrl
+
+Ensure that there are bits available that can be pseudo-locked. Since only
+unused bits can be pseudo-locked, the bits to be pseudo-locked need to be
+removed from the default resource group's schemata:
+# cat info/L2/bit_usage
+0=SSSSSSSS;1=SSSSSSSS
+# echo 'L2:1=0xfc' > schemata
+# cat info/L2/bit_usage
+0=SSSSSSSS;1=SSSSSS00
+
+Create a new resource group that will be associated with the pseudo-locked
+region, indicate that it will be used for a pseudo-locked region, and
+configure the requested pseudo-locked region capacity bitmask:
+
+# mkdir newlock
+# echo pseudo-locksetup > newlock/mode
+# echo 'L2:1=0x3' > newlock/schemata
+
+On success the resource group's mode will change to pseudo-locked, the
+bit_usage will reflect the pseudo-locked region, and the character device
+exposing the pseudo-locked region will exist:
+
+# cat newlock/mode
+pseudo-locked
+# cat info/L2/bit_usage
+0=SSSSSSSS;1=SSSSSSPP
+# ls -l /dev/pseudo_lock/newlock
+crw------- 1 root root 243, 0 Apr  3 05:01 /dev/pseudo_lock/newlock
+
+/*
+ * Example code to access one page of pseudo-locked cache region
+ * from user space.
+ */
+#define _GNU_SOURCE
+#include <fcntl.h>
+#include <sched.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include <sys/mman.h>
+
+/*
+ * It is required that the application runs with affinity to only
+ * cores associated with the pseudo-locked region. Here the cpu
+ * is hardcoded for convenience of example.
+ */
+static int cpuid = 2;
+
+int main(int argc, char *argv[])
+{
+	cpu_set_t cpuset;
+	long page_size;
+	void *mapping;
+	int dev_fd;
+	int ret;
+
+	page_size = sysconf(_SC_PAGESIZE);
+
+	CPU_ZERO(&cpuset);
+	CPU_SET(cpuid, &cpuset);
+	ret = sched_setaffinity(0, sizeof(cpuset), &cpuset);
+	if (ret < 0) {
+		perror("sched_setaffinity");
+		exit(EXIT_FAILURE);
+	}
+
+	dev_fd = open("/dev/pseudo_lock/newlock", O_RDWR);
+	if (dev_fd < 0) {
+		perror("open");
+		exit(EXIT_FAILURE);
+	}
+
+	mapping = mmap(0, page_size, PROT_READ | PROT_WRITE, MAP_SHARED,
+		       dev_fd, 0);
+	if (mapping == MAP_FAILED) {
+		perror("mmap");
+		close(dev_fd);
+		exit(EXIT_FAILURE);
+	}
+
+	/* Application interacts with pseudo-locked memory @mapping */
+
+	ret = munmap(mapping, page_size);
+	if (ret < 0) {
+		perror("munmap");
+		close(dev_fd);
+		exit(EXIT_FAILURE);
+	}
+
+	close(dev_fd);
+	exit(EXIT_SUCCESS);
+}
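Since the program pins itself with sched_setaffinity(), it does not need to
be started through a wrapper such as taskset. A plain build is sufficient;
the source file name used here is only a placeholder for this sketch:

# gcc -o lockmem lockmem.c
# ./lockmem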
+
+Locking between applications
+----------------------------
 
 Certain operations on the resctrl filesystem, composed of read/writes
 to/from multiple files, must be atomic.
@@ -510,7 +884,7 @@ to/from multiple files, must be atomic.
 As an example, the allocation of an exclusive reservation of L3 cache
 involves:
 
-  1. Read the cbmmasks from each directory
+  1. Read the cbmmasks from each directory or the per-resource "bit_usage"
   2. Find a contiguous set of bits in the global CBM bitmask that is clear
      in any of the directory cbmmasks
   3. Create a new directory
diff --git a/Documentation/x86/x86_64/boot-options.txt b/Documentation/x86/x86_64/boot-options.txt
index 8d109ef67ab6..ad6d2a80cf05 100644
--- a/Documentation/x86/x86_64/boot-options.txt
+++ b/Documentation/x86/x86_64/boot-options.txt
@@ -92,9 +92,7 @@ APICs
 Timing
 
   notsc
-  Don't use the CPU time stamp counter to read the wall time.
-  This can be used to work around timing problems on multiprocessor systems
-  with not properly synchronized CPUs.
+  Deprecated, use tsc=unstable instead.
 
   nohpet
   Don't use the HPET timer.
@@ -156,6 +154,10 @@ NUMA
   If given as an integer, fills all system RAM with N fake nodes
   interleaved over physical nodes.
 
+  numa=fake=<N>U
+  If given as an integer followed by 'U', it will divide each
+  physical node into N emulated nodes.
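For illustration (values chosen arbitrarily): on a machine with two physical
nodes, booting with

  numa=fake=4U

on the kernel command line divides each physical node into four emulated
nodes, giving eight fake nodes in total.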
+ ACPI acpi=off Don't enable ACPI diff --git a/MAINTAINERS b/MAINTAINERS index cb468a535f32..0a2342770dee 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -581,7 +581,7 @@ W: https://www.infradead.org/~dhowells/kafs/ AGPGART DRIVER M: David Airlie -T: git git://people.freedesktop.org/~airlied/linux (part of drm maint) +T: git git://anongit.freedesktop.org/drm/drm S: Maintained F: drivers/char/agp/ F: include/linux/agp* @@ -1732,7 +1732,8 @@ F: arch/arm/mach-npcm/ F: arch/arm/boot/dts/nuvoton-npcm* F: include/dt-bindings/clock/nuvoton,npcm7xx-clks.h F: drivers/*/*npcm* -F: Documentation/*/*npcm* +F: Documentation/devicetree/bindings/*/*npcm* +F: Documentation/devicetree/bindings/*/*/*npcm* ARM/NUVOTON W90X900 ARM ARCHITECTURE M: Wan ZongShun @@ -2522,7 +2523,7 @@ S: Supported F: drivers/scsi/esas2r ATUSB IEEE 802.15.4 RADIO DRIVER -M: Stefan Schmidt +M: Stefan Schmidt L: linux-wpan@vger.kernel.org S: Maintained F: drivers/net/ieee802154/atusb.c @@ -2970,9 +2971,13 @@ N: bcm585* N: bcm586* N: bcm88312 N: hr2 -F: arch/arm64/boot/dts/broadcom/ns2* +N: stingray +F: arch/arm64/boot/dts/broadcom/northstar2/* +F: arch/arm64/boot/dts/broadcom/stingray/* F: drivers/clk/bcm/clk-ns* +F: drivers/clk/bcm/clk-sr* F: drivers/pinctrl/bcm/pinctrl-ns* +F: include/dt-bindings/clock/bcm-sr* BROADCOM KONA GPIO DRIVER M: Ray Jui @@ -3079,7 +3084,7 @@ M: Clemens Ladisch L: alsa-devel@alsa-project.org (moderated for non-subscribers) T: git git://git.alsa-project.org/alsa-kernel.git S: Maintained -F: Documentation/sound/alsa/Bt87x.txt +F: Documentation/sound/cards/bt87x.rst F: sound/pci/bt87x.c BT8XXGPIO DRIVER @@ -3375,7 +3380,7 @@ M: David Howells M: David Woodhouse L: keyrings@vger.kernel.org S: Maintained -F: Documentation/module-signing.txt +F: Documentation/admin-guide/module-signing.rst F: certs/ F: scripts/sign-file.c F: scripts/extract-cert.c @@ -4359,12 +4364,7 @@ L: iommu@lists.linux-foundation.org T: git git://git.infradead.org/users/hch/dma-mapping.git W: http://git.infradead.org/users/hch/dma-mapping.git S: Supported -F: lib/dma-debug.c -F: lib/dma-direct.c -F: lib/dma-noncoherent.c -F: lib/dma-virt.c -F: drivers/base/dma-mapping.c -F: drivers/base/dma-coherent.c +F: kernel/dma/ F: include/asm-generic/dma-mapping.h F: include/linux/dma-direct.h F: include/linux/dma-mapping.h @@ -4460,6 +4460,7 @@ F: Documentation/blockdev/drbd/ DRIVER CORE, KOBJECTS, DEBUGFS AND SYSFS M: Greg Kroah-Hartman +R: "Rafael J. 
Wysocki" T: git git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/driver-core.git S: Supported F: Documentation/kobject.txt @@ -4513,7 +4514,7 @@ DRM DRIVER FOR ILITEK ILI9225 PANELS M: David Lechner S: Maintained F: drivers/gpu/drm/tinydrm/ili9225.c -F: Documentation/devicetree/bindings/display/ili9225.txt +F: Documentation/devicetree/bindings/display/ilitek,ili9225.txt DRM DRIVER FOR INTEL I810 VIDEO CARDS S: Orphan / Obsolete @@ -4599,13 +4600,13 @@ DRM DRIVER FOR SITRONIX ST7586 PANELS M: David Lechner S: Maintained F: drivers/gpu/drm/tinydrm/st7586.c -F: Documentation/devicetree/bindings/display/st7586.txt +F: Documentation/devicetree/bindings/display/sitronix,st7586.txt DRM DRIVER FOR SITRONIX ST7735R PANELS M: David Lechner S: Maintained F: drivers/gpu/drm/tinydrm/st7735r.c -F: Documentation/devicetree/bindings/display/st7735r.txt +F: Documentation/devicetree/bindings/display/sitronix,st7735r.txt DRM DRIVER FOR TDFX VIDEO CARDS S: Orphan / Obsolete @@ -4630,7 +4631,7 @@ F: include/uapi/drm/vmwgfx_drm.h DRM DRIVERS M: David Airlie L: dri-devel@lists.freedesktop.org -T: git git://people.freedesktop.org/~airlied/linux +T: git git://anongit.freedesktop.org/drm/drm B: https://bugs.freedesktop.org/ C: irc://chat.freenode.net/dri-devel S: Maintained @@ -4638,7 +4639,6 @@ F: drivers/gpu/drm/ F: drivers/gpu/vga/ F: Documentation/devicetree/bindings/display/ F: Documentation/devicetree/bindings/gpu/ -F: Documentation/devicetree/bindings/video/ F: Documentation/gpu/ F: include/drm/ F: include/uapi/drm/ @@ -4683,7 +4683,7 @@ M: Boris Brezillon L: dri-devel@lists.freedesktop.org S: Supported F: drivers/gpu/drm/atmel-hlcdc/ -F: Documentation/devicetree/bindings/drm/atmel/ +F: Documentation/devicetree/bindings/display/atmel/ T: git git://anongit.freedesktop.org/drm/drm-misc DRM DRIVERS FOR BRIDGE CHIPS @@ -4714,7 +4714,7 @@ S: Supported F: drivers/gpu/drm/fsl-dcu/ F: Documentation/devicetree/bindings/display/fsl,dcu.txt F: Documentation/devicetree/bindings/display/fsl,tcon.txt -F: Documentation/devicetree/bindings/display/panel/nec,nl4827hc19_05b.txt +F: Documentation/devicetree/bindings/display/panel/nec,nl4827hc19-05b.txt DRM DRIVERS FOR FREESCALE IMX M: Philipp Zabel @@ -4824,7 +4824,7 @@ M: Eric Anholt S: Supported F: drivers/gpu/drm/v3d/ F: include/uapi/drm/v3d_drm.h -F: Documentation/devicetree/bindings/display/brcm,bcm-v3d.txt +F: Documentation/devicetree/bindings/gpu/brcm,bcm-v3d.txt T: git git://anongit.freedesktop.org/drm/drm-misc DRM DRIVERS FOR VC4 @@ -5444,6 +5444,7 @@ F: drivers/iommu/exynos-iommu.c EZchip NPS platform support M: Vineet Gupta +M: Ofer Levi S: Supported F: arch/arc/plat-eznps F: arch/arc/boot/dts/eznps.dts @@ -5674,7 +5675,7 @@ F: drivers/crypto/caam/ F: Documentation/devicetree/bindings/crypto/fsl-sec4.txt FREESCALE DIU FRAMEBUFFER DRIVER -M: Timur Tabi +M: Timur Tabi L: linux-fbdev@vger.kernel.org S: Maintained F: drivers/video/fbdev/fsl-diu-fb.* @@ -5735,7 +5736,7 @@ M: Madalin Bucur L: netdev@vger.kernel.org S: Maintained F: drivers/net/ethernet/freescale/fman -F: Documentation/devicetree/bindings/powerpc/fsl/fman.txt +F: Documentation/devicetree/bindings/net/fsl-fman.txt FREESCALE QORIQ PTP CLOCK DRIVER M: Yangbo Lu @@ -5774,7 +5775,7 @@ S: Maintained F: drivers/net/wan/fsl_ucc_hdlc* FREESCALE QUICC ENGINE UCC UART DRIVER -M: Timur Tabi +M: Timur Tabi L: linuxppc-dev@lists.ozlabs.org S: Maintained F: drivers/tty/serial/ucc_uart.c @@ -5790,7 +5791,6 @@ F: include/linux/fsl/ FREESCALE SOC FS_ENET DRIVER M: Pantelis Antoniou -M: Vitaly Bordug L: 
linuxppc-dev@lists.ozlabs.org L: netdev@vger.kernel.org S: Maintained @@ -5798,7 +5798,7 @@ F: drivers/net/ethernet/freescale/fs_enet/ F: include/linux/fs_enet_pd.h FREESCALE SOC SOUND DRIVERS -M: Timur Tabi +M: Timur Tabi M: Nicolin Chen M: Xiubo Li R: Fabio Estevam @@ -5930,7 +5930,7 @@ F: Documentation/dev-tools/gcov.rst GDB KERNEL DEBUGGING HELPER SCRIPTS M: Jan Kiszka -M: Kieran Bingham +M: Kieran Bingham S: Supported F: scripts/gdb/ @@ -6501,7 +6501,7 @@ L: linux-mm@kvack.org S: Maintained F: mm/hmm* F: include/linux/hmm* -F: Documentation/vm/hmm.txt +F: Documentation/vm/hmm.rst HOST AP DRIVER M: Jouni Malinen @@ -6909,7 +6909,7 @@ F: drivers/clk/clk-versaclock5.c IEEE 802.15.4 SUBSYSTEM M: Alexander Aring -M: Stefan Schmidt +M: Stefan Schmidt L: linux-wpan@vger.kernel.org W: http://wpan.cakelab.org/ T: git git://git.kernel.org/pub/scm/linux/kernel/git/sschmidt/wpan.git @@ -6966,7 +6966,7 @@ IIO MULTIPLEXER M: Peter Rosin L: linux-iio@vger.kernel.org S: Maintained -F: Documentation/devicetree/bindings/iio/multiplexer/iio-mux.txt +F: Documentation/devicetree/bindings/iio/multiplexer/io-channel-mux.txt F: drivers/iio/multiplexer/iio-mux.c IIO SUBSYSTEM AND DRIVERS @@ -7096,6 +7096,7 @@ F: include/uapi/linux/input.h F: include/uapi/linux/input-event-codes.h F: include/linux/input/ F: Documentation/devicetree/bindings/input/ +F: Documentation/devicetree/bindings/serio/ F: Documentation/input/ INPUT MULTITOUCH (MT) PROTOCOL @@ -7401,7 +7402,7 @@ F: drivers/platform/x86/intel-wmi-thunderbolt.c INTEL(R) TRACE HUB M: Alexander Shishkin S: Supported -F: Documentation/trace/intel_th.txt +F: Documentation/trace/intel_th.rst F: drivers/hwtracing/intel_th/ INTEL(R) TRUSTED EXECUTION TECHNOLOGY (TXT) @@ -7425,7 +7426,7 @@ M: Linus Walleij L: linux-iio@vger.kernel.org S: Maintained F: drivers/iio/gyro/mpu3050* -F: Documentation/devicetree/bindings/iio/gyroscope/inv,mpu3050.txt +F: Documentation/devicetree/bindings/iio/gyroscope/invensense,mpu3050.txt IOC3 ETHERNET DRIVER M: Ralf Baechle @@ -7985,7 +7986,7 @@ F: lib/test_kmod.c F: tools/testing/selftests/kmod/ KPROBES -M: Ananth N Mavinakayanahalli +M: Naveen N. Rao M: Anil S Keshavamurthy M: "David S. Miller" M: Masami Hiramatsu @@ -8316,10 +8317,16 @@ M: Jade Alglave M: Luc Maranget M: "Paul E. 
McKenney" R: Akira Yokosawa +R: Daniel Lustig L: linux-kernel@vger.kernel.org +L: linux-arch@vger.kernel.org S: Supported T: git git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu.git F: tools/memory-model/ +F: Documentation/atomic_bitops.txt +F: Documentation/atomic_t.txt +F: Documentation/core-api/atomic_ops.rst +F: Documentation/core-api/refcount-vs-atomic.rst F: Documentation/memory-barriers.txt LINUX SECURITY MODULE (LSM) FRAMEWORK @@ -8629,7 +8636,7 @@ MARVELL MWIFIEX WIRELESS DRIVER M: Amitkumar Karwar M: Nishant Sarmukadam M: Ganapathi Bhat -M: Xinming Hu +M: Xinming Hu L: linux-wireless@vger.kernel.org S: Maintained F: drivers/net/wireless/marvell/mwifiex/ @@ -8700,7 +8707,7 @@ M: Guenter Roeck L: linux-hwmon@vger.kernel.org S: Maintained F: Documentation/hwmon/max6697 -F: Documentation/devicetree/bindings/i2c/max6697.txt +F: Documentation/devicetree/bindings/hwmon/max6697.txt F: drivers/hwmon/max6697.c F: include/linux/platform_data/max6697.h @@ -9075,12 +9082,12 @@ S: Maintained F: drivers/usb/mtu3/ MEGACHIPS STDPXXXX-GE-B850V3-FW LVDS/DP++ BRIDGES -M: Peter Senna Tschudin +M: Peter Senna Tschudin M: Martin Donnelly M: Martyn Welch S: Maintained F: drivers/gpu/drm/bridge/megachips-stdpxxxx-ge-b850v3-fw.c -F: Documentation/devicetree/bindings/video/bridge/megachips-stdpxxxx-ge-b850v3-fw.txt +F: Documentation/devicetree/bindings/display/bridge/megachips-stdpxxxx-ge-b850v3-fw.txt MEGARAID SCSI/SAS DRIVERS M: Kashyap Desai @@ -9665,7 +9672,7 @@ F: include/uapi/linux/mmc/ MULTIPLEXER SUBSYSTEM M: Peter Rosin S: Maintained -F: Documentation/ABI/testing/mux/sysfs-class-mux* +F: Documentation/ABI/testing/sysfs-class-mux* F: Documentation/devicetree/bindings/mux/ F: include/linux/dt-bindings/mux/ F: include/linux/mux/ @@ -9696,7 +9703,7 @@ MXSFB DRM DRIVER M: Marek Vasut S: Supported F: drivers/gpu/drm/mxsfb/ -F: Documentation/devicetree/bindings/display/mxsfb-drm.txt +F: Documentation/devicetree/bindings/display/mxsfb.txt MYRICOM MYRI-10G 10GbE DRIVER (MYRI10GE) M: Chris Lee @@ -9756,6 +9763,11 @@ L: linux-scsi@vger.kernel.org S: Maintained F: drivers/scsi/NCR_D700.* +NCSI LIBRARY: +M: Samuel Mendoza-Jonas +S: Maintained +F: net/ncsi/ + NCT6775 HARDWARE MONITOR DRIVER M: Guenter Roeck L: linux-hwmon@vger.kernel.org @@ -9882,6 +9894,7 @@ M: Andrew Lunn M: Vivien Didelot M: Florian Fainelli S: Maintained +F: Documentation/devicetree/bindings/net/dsa/ F: net/dsa/ F: include/net/dsa.h F: include/linux/dsa/ @@ -10208,11 +10221,13 @@ F: sound/soc/codecs/sgtl5000* NXP TDA998X DRM DRIVER M: Russell King -S: Supported +S: Maintained T: git git://git.armlinux.org.uk/~rmk/linux-arm.git drm-tda998x-devel T: git git://git.armlinux.org.uk/~rmk/linux-arm.git drm-tda998x-fixes F: drivers/gpu/drm/i2c/tda998x_drv.c F: include/drm/i2c/tda998x.h +F: include/dt-bindings/display/tda998x.h +K: "nxp,tda998x" NXP TFA9879 DRIVER M: Peter Rosin @@ -10244,7 +10259,7 @@ F: arch/powerpc/include/asm/pnv-ocxl.h F: drivers/misc/ocxl/ F: include/misc/ocxl* F: include/uapi/misc/ocxl.h -F: Documentation/accelerators/ocxl.txt +F: Documentation/accelerators/ocxl.rst OMAP AUDIO SUPPORT M: Peter Ujfalusi @@ -10273,18 +10288,16 @@ F: arch/arm/boot/dts/*am5* F: arch/arm/boot/dts/*dra7* OMAP DISPLAY SUBSYSTEM and FRAMEBUFFER SUPPORT (DSS2) -M: Tomi Valkeinen L: linux-omap@vger.kernel.org L: linux-fbdev@vger.kernel.org -S: Maintained +S: Orphan F: drivers/video/fbdev/omap2/ F: Documentation/arm/OMAP/DSS OMAP FRAMEBUFFER SUPPORT -M: Tomi Valkeinen L: linux-fbdev@vger.kernel.org L: linux-omap@vger.kernel.org -S: 
Maintained +S: Orphan F: drivers/video/fbdev/omap/ OMAP GENERAL PURPOSE MEMORY CONTROLLER SUPPORT @@ -10728,7 +10741,7 @@ PARALLEL LCD/KEYPAD PANEL DRIVER M: Willy Tarreau M: Ksenija Stanojevic S: Odd Fixes -F: Documentation/misc-devices/lcd-panel-cgram.txt +F: Documentation/auxdisplay/lcd-panel-cgram.txt F: drivers/misc/panel.c PARALLEL PORT SUBSYSTEM @@ -10885,7 +10898,7 @@ M: Will Deacon L: linux-pci@vger.kernel.org L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) S: Maintained -F: Documentation/devicetree/bindings/pci/controller-generic-pci.txt +F: Documentation/devicetree/bindings/pci/host-generic-pci.txt F: drivers/pci/controller/pci-host-common.c F: drivers/pci/controller/pci-host-generic.c @@ -11066,7 +11079,7 @@ M: Xiaowei Song M: Binghui Wang L: linux-pci@vger.kernel.org S: Maintained -F: Documentation/devicetree/bindings/pci/pcie-kirin.txt +F: Documentation/devicetree/bindings/pci/kirin-pcie.txt F: drivers/pci/controller/dwc/pcie-kirin.c PCIE DRIVER FOR HISILICON STB @@ -11478,6 +11491,15 @@ W: http://wireless.kernel.org/en/users/Drivers/p54 S: Obsolete F: drivers/net/wireless/intersil/prism54/ +PROC FILESYSTEM +R: Alexey Dobriyan +L: linux-kernel@vger.kernel.org +L: linux-fsdevel@vger.kernel.org +S: Maintained +F: fs/proc/ +F: include/linux/proc_fs.h +F: tools/testing/selftests/proc/ + PROC SYSCTL M: "Luis R. Rodriguez" M: Kees Cook @@ -11810,9 +11832,9 @@ F: Documentation/devicetree/bindings/opp/kryo-cpufreq.txt F: drivers/cpufreq/qcom-cpufreq-kryo.c QUALCOMM EMAC GIGABIT ETHERNET DRIVER -M: Timur Tabi +M: Timur Tabi L: netdev@vger.kernel.org -S: Supported +S: Maintained F: drivers/net/ethernet/qualcomm/emac/ QUALCOMM HEXAGON ARCHITECTURE @@ -11823,7 +11845,7 @@ S: Supported F: arch/hexagon/ QUALCOMM HIDMA DRIVER -M: Sinan Kaya +M: Sinan Kaya L: linux-arm-kernel@lists.infradead.org L: linux-arm-msm@vger.kernel.org L: dmaengine@vger.kernel.org @@ -12023,9 +12045,9 @@ T: git git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu.git F: Documentation/RCU/ X: Documentation/RCU/torture.txt F: include/linux/rcu* -X: include/linux/srcu.h +X: include/linux/srcu*.h F: kernel/rcu/ -X: kernel/torture.c +X: kernel/rcu/srcu*.c REAL TIME CLOCK (RTC) SUBSYSTEM M: Alessandro Zummo @@ -12179,7 +12201,7 @@ F: drivers/mtd/nand/raw/r852.h RISC-V ARCHITECTURE M: Palmer Dabbelt -M: Albert Ou +M: Albert Ou L: linux-riscv@lists.infradead.org T: git git://git.kernel.org/pub/scm/linux/kernel/git/palmer/riscv-linux.git S: Supported @@ -12386,7 +12408,6 @@ F: drivers/pci/hotplug/s390_pci_hpc.c S390 VFIO-CCW DRIVER M: Cornelia Huck -M: Dong Jia Shi M: Halil Pasic L: linux-s390@vger.kernel.org L: kvm@vger.kernel.org @@ -12457,7 +12478,7 @@ L: linux-crypto@vger.kernel.org L: linux-samsung-soc@vger.kernel.org S: Maintained F: drivers/crypto/exynos-rng.c -F: Documentation/devicetree/bindings/crypto/samsung,exynos-rng4.txt +F: Documentation/devicetree/bindings/rng/samsung,exynos4-rng.txt SAMSUNG EXYNOS TRUE RANDOM NUMBER GENERATOR (TRNG) DRIVER M: Łukasz Stelmach @@ -12939,6 +12960,14 @@ F: drivers/media/usb/siano/ F: drivers/media/usb/siano/ F: drivers/media/mmc/siano/ +SIFIVE DRIVERS +M: Palmer Dabbelt +L: linux-riscv@lists.infradead.org +T: git git://git.kernel.org/pub/scm/linux/kernel/git/palmer/riscv-linux.git +S: Supported +K: sifive +N: sifive + SILEAD TOUCHSCREEN DRIVER M: Hans de Goede L: linux-input@vger.kernel.org @@ -13054,8 +13083,8 @@ L: linux-kernel@vger.kernel.org W: http://www.rdrop.com/users/paulmck/RCU/ S: Supported T: git 
git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu.git -F: include/linux/srcu.h -F: kernel/rcu/srcu.c +F: include/linux/srcu*.h +F: kernel/rcu/srcu*.c SERIAL LOW-POWER INTER-CHIP MEDIA BUS (SLIMbus) M: Srinivas Kandagatla @@ -13291,7 +13320,7 @@ M: Vinod Koul L: alsa-devel@alsa-project.org (moderated for non-subscribers) T: git git://git.kernel.org/pub/scm/linux/kernel/git/tiwai/sound.git S: Supported -F: Documentation/sound/alsa/compress_offload.txt +F: Documentation/sound/designs/compress-offload.rst F: include/sound/compress_driver.h F: include/uapi/sound/compress_* F: sound/core/compress_offload.c @@ -13312,7 +13341,7 @@ L: alsa-devel@alsa-project.org (moderated for non-subscribers) W: http://alsa-project.org/main/index.php/ASoC S: Supported F: Documentation/devicetree/bindings/sound/ -F: Documentation/sound/alsa/soc/ +F: Documentation/sound/soc/ F: sound/soc/ F: include/sound/soc* @@ -13571,7 +13600,7 @@ F: drivers/*/stm32-*timer* F: drivers/pwm/pwm-stm32* F: include/linux/*/stm32-*tim* F: Documentation/ABI/testing/*timer-stm32 -F: Documentation/devicetree/bindings/*/stm32-*timer +F: Documentation/devicetree/bindings/*/stm32-*timer* F: Documentation/devicetree/bindings/pwm/pwm-stm32* STMMAC ETHERNET DRIVER @@ -13642,7 +13671,7 @@ M: Konrad Rzeszutek Wilk L: iommu@lists.linux-foundation.org T: git git://git.kernel.org/pub/scm/linux/kernel/git/konrad/swiotlb.git S: Supported -F: lib/swiotlb.c +F: kernel/dma/swiotlb.c F: arch/*/kernel/pci-swiotlb.c F: include/linux/swiotlb.h @@ -13794,7 +13823,7 @@ SYSTEM TRACE MODULE CLASS M: Alexander Shishkin S: Maintained T: git git://git.kernel.org/pub/scm/linux/kernel/git/ash/stm.git -F: Documentation/trace/stm.txt +F: Documentation/trace/stm.rst F: drivers/hwtracing/stm/ F: include/linux/stm.h F: include/uapi/linux/stm.h @@ -14414,6 +14443,7 @@ T: git git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu.git F: Documentation/RCU/torture.txt F: kernel/torture.c F: kernel/rcu/rcutorture.c +F: kernel/rcu/rcuperf.c F: kernel/locking/locktorture.c TOSHIBA ACPI EXTRAS DRIVER @@ -14471,7 +14501,7 @@ M: Steven Rostedt M: Ingo Molnar T: git git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git perf/core S: Maintained -F: Documentation/trace/ftrace.txt +F: Documentation/trace/ftrace.rst F: arch/*/*/*/ftrace.h F: arch/*/kernel/ftrace.c F: include/*/ftrace.h @@ -14940,7 +14970,7 @@ M: Heikki Krogerus L: linux-usb@vger.kernel.org S: Maintained F: Documentation/ABI/testing/sysfs-class-typec -F: Documentation/usb/typec.rst +F: Documentation/driver-api/usb/typec.rst F: drivers/usb/typec/ F: include/linux/usb/typec.h @@ -15007,8 +15037,7 @@ F: drivers/media/usb/zr364xx/ USER-MODE LINUX (UML) M: Jeff Dike M: Richard Weinberger -L: user-mode-linux-devel@lists.sourceforge.net -L: user-mode-linux-user@lists.sourceforge.net +L: linux-um@lists.infradead.org W: http://user-mode-linux.sourceforge.net T: git git://git.kernel.org/pub/scm/linux/kernel/git/rw/uml.git S: Maintained @@ -15567,9 +15596,17 @@ M: x86@kernel.org L: linux-kernel@vger.kernel.org T: git git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git x86/core S: Maintained +F: Documentation/devicetree/bindings/x86/ F: Documentation/x86/ F: arch/x86/ +X86 ENTRY CODE +M: Andy Lutomirski +L: linux-kernel@vger.kernel.org +T: git git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git x86/asm +S: Maintained +F: arch/x86/entry/ + X86 MCE INFRASTRUCTURE M: Tony Luck M: Borislav Petkov @@ -15592,7 +15629,7 @@ F: drivers/platform/x86/ F: drivers/platform/olpc/ X86 VDSO -M: Andy Lutomirski 
+M: Andy Lutomirski L: linux-kernel@vger.kernel.org T: git git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git x86/vdso S: Maintained @@ -15770,7 +15807,7 @@ YEALINK PHONE DRIVER M: Henk Vergonet L: usbb2k-api-dev@nongnu.org S: Maintained -F: Documentation/input/yealink.rst +F: Documentation/input/devices/yealink.rst F: drivers/input/misc/yealink.* Z8530 DRIVER FOR AX.25 diff --git a/Makefile b/Makefile index 8a26b5937241..863f58503bee 100644 --- a/Makefile +++ b/Makefile @@ -1,6 +1,6 @@ # SPDX-License-Identifier: GPL-2.0 VERSION = 4 -PATCHLEVEL = 17 +PATCHLEVEL = 18 SUBLEVEL = 0 EXTRAVERSION = NAME = Merciless Moray @@ -353,9 +353,9 @@ CONFIG_SHELL := $(shell if [ -x "$$BASH" ]; then echo $$BASH; \ else if [ -x /bin/bash ]; then echo /bin/bash; \ else echo sh; fi ; fi) -HOST_LFS_CFLAGS := $(shell getconf LFS_CFLAGS) -HOST_LFS_LDFLAGS := $(shell getconf LFS_LDFLAGS) -HOST_LFS_LIBS := $(shell getconf LFS_LIBS) +HOST_LFS_CFLAGS := $(shell getconf LFS_CFLAGS 2>/dev/null) +HOST_LFS_LDFLAGS := $(shell getconf LFS_LDFLAGS 2>/dev/null) +HOST_LFS_LIBS := $(shell getconf LFS_LIBS 2>/dev/null) HOSTCC = gcc HOSTCXX = g++ @@ -507,11 +507,6 @@ ifeq ($(shell $(CONFIG_SHELL) $(srctree)/scripts/gcc-goto.sh $(CC) $(KBUILD_CFLA KBUILD_AFLAGS += -DCC_HAVE_ASM_GOTO endif -ifeq ($(shell $(CONFIG_SHELL) $(srctree)/scripts/cc-can-link.sh $(CC)), y) - CC_CAN_LINK := y - export CC_CAN_LINK -endif - # The expansion should be delayed until arch/$(SRCARCH)/Makefile is included. # Some architectures define CROSS_COMPILE in arch/$(SRCARCH)/Makefile. # CC_VERSION_TEXT is referenced from Kconfig (so it needs export), @@ -1717,6 +1712,6 @@ endif # skip-makefile PHONY += FORCE FORCE: -# Declare the contents of the .PHONY variable as phony. We keep that +# Declare the contents of the PHONY variable as phony. We keep that # information in a variable so we can use it in if_changed and friends. .PHONY: $(PHONY) diff --git a/arch/Kconfig b/arch/Kconfig index c302b3dd0058..1aa59063f1fd 100644 --- a/arch/Kconfig +++ b/arch/Kconfig @@ -403,7 +403,7 @@ config SECCOMP_FILTER in terms of Berkeley Packet Filter programs which implement task-defined system call filtering polices. - See Documentation/prctl/seccomp_filter.txt for details. + See Documentation/userspace-api/seccomp_filter.rst for details. preferred-plugin-hostcc := $(if-success,[ $(gcc-version) -ge 40800 ],$(HOSTCXX),$(HOSTCC)) @@ -549,7 +549,7 @@ config GCC_PLUGIN_RANDSTRUCT_PERFORMANCE in structures. This reduces the performance hit of RANDSTRUCT at the cost of weakened randomization. -config HAVE_CC_STACKPROTECTOR +config HAVE_STACKPROTECTOR bool help An arch should select this symbol if: @@ -560,7 +560,7 @@ config CC_HAS_STACKPROTECTOR_NONE config STACKPROTECTOR bool "Stack Protector buffer overflow detection" - depends on HAVE_CC_STACKPROTECTOR + depends on HAVE_STACKPROTECTOR depends on $(cc-option,-fstack-protector) default y help diff --git a/arch/alpha/Kconfig b/arch/alpha/Kconfig index 0c4805a572c8..04a4a138ed13 100644 --- a/arch/alpha/Kconfig +++ b/arch/alpha/Kconfig @@ -555,11 +555,6 @@ config SMP If you don't know what to do here, say N. 
-config HAVE_DEC_LOCK - bool - depends on SMP - default y - config NR_CPUS int "Maximum number of CPUs (2-32)" range 2 32 diff --git a/arch/alpha/include/asm/atomic.h b/arch/alpha/include/asm/atomic.h index 767bfdd42992..150a1c5d6a2c 100644 --- a/arch/alpha/include/asm/atomic.h +++ b/arch/alpha/include/asm/atomic.h @@ -18,11 +18,11 @@ * To ensure dependency ordering is preserved for the _relaxed and * _release atomics, an smp_read_barrier_depends() is unconditionally * inserted into the _relaxed variants, which are used to build the - * barriered versions. To avoid redundant back-to-back fences, we can - * define the _acquire and _fence versions explicitly. + * barriered versions. Avoid redundant back-to-back fences in the + * _acquire and _fence versions. */ -#define __atomic_op_acquire(op, args...) op##_relaxed(args) -#define __atomic_op_fence __atomic_op_release +#define __atomic_acquire_fence() +#define __atomic_post_full_fence() #define ATOMIC_INIT(i) { (i) } #define ATOMIC64_INIT(i) { (i) } @@ -206,7 +206,7 @@ ATOMIC_OPS(xor, xor) #define atomic_xchg(v, new) (xchg(&((v)->counter), new)) /** - * __atomic_add_unless - add unless the number is a given value + * atomic_fetch_add_unless - add unless the number is a given value * @v: pointer of type atomic_t * @a: the amount to add to v... * @u: ...unless v is equal to u. @@ -214,7 +214,7 @@ ATOMIC_OPS(xor, xor) * Atomically adds @a to @v, so long as it was not @u. * Returns the old value of @v. */ -static __inline__ int __atomic_add_unless(atomic_t *v, int a, int u) +static __inline__ int atomic_fetch_add_unless(atomic_t *v, int a, int u) { int c, new, old; smp_mb(); @@ -235,38 +235,39 @@ static __inline__ int __atomic_add_unless(atomic_t *v, int a, int u) smp_mb(); return old; } - +#define atomic_fetch_add_unless atomic_fetch_add_unless /** - * atomic64_add_unless - add unless the number is a given value + * atomic64_fetch_add_unless - add unless the number is a given value * @v: pointer of type atomic64_t * @a: the amount to add to v... * @u: ...unless v is equal to u. * * Atomically adds @a to @v, so long as it was not @u. - * Returns true iff @v was not @u. + * Returns the old value of @v. 
*/ -static __inline__ int atomic64_add_unless(atomic64_t *v, long a, long u) +static __inline__ long atomic64_fetch_add_unless(atomic64_t *v, long a, long u) { - long c, tmp; + long c, new, old; smp_mb(); __asm__ __volatile__( - "1: ldq_l %[tmp],%[mem]\n" - " cmpeq %[tmp],%[u],%[c]\n" - " addq %[tmp],%[a],%[tmp]\n" + "1: ldq_l %[old],%[mem]\n" + " cmpeq %[old],%[u],%[c]\n" + " addq %[old],%[a],%[new]\n" " bne %[c],2f\n" - " stq_c %[tmp],%[mem]\n" - " beq %[tmp],3f\n" + " stq_c %[new],%[mem]\n" + " beq %[new],3f\n" "2:\n" ".subsection 2\n" "3: br 1b\n" ".previous" - : [tmp] "=&r"(tmp), [c] "=&r"(c) + : [old] "=&r"(old), [new] "=&r"(new), [c] "=&r"(c) : [mem] "m"(*v), [a] "rI"(a), [u] "rI"(u) : "memory"); smp_mb(); - return !c; + return old; } +#define atomic64_fetch_add_unless atomic64_fetch_add_unless /* * atomic64_dec_if_positive - decrement by 1 if old value positive @@ -295,31 +296,6 @@ static inline long atomic64_dec_if_positive(atomic64_t *v) smp_mb(); return old - 1; } - -#define atomic64_inc_not_zero(v) atomic64_add_unless((v), 1, 0) - -#define atomic_add_negative(a, v) (atomic_add_return((a), (v)) < 0) -#define atomic64_add_negative(a, v) (atomic64_add_return((a), (v)) < 0) - -#define atomic_dec_return(v) atomic_sub_return(1,(v)) -#define atomic64_dec_return(v) atomic64_sub_return(1,(v)) - -#define atomic_inc_return(v) atomic_add_return(1,(v)) -#define atomic64_inc_return(v) atomic64_add_return(1,(v)) - -#define atomic_sub_and_test(i,v) (atomic_sub_return((i), (v)) == 0) -#define atomic64_sub_and_test(i,v) (atomic64_sub_return((i), (v)) == 0) - -#define atomic_inc_and_test(v) (atomic_add_return(1, (v)) == 0) -#define atomic64_inc_and_test(v) (atomic64_add_return(1, (v)) == 0) - -#define atomic_dec_and_test(v) (atomic_sub_return(1, (v)) == 0) -#define atomic64_dec_and_test(v) (atomic64_sub_return(1, (v)) == 0) - -#define atomic_inc(v) atomic_add(1,(v)) -#define atomic64_inc(v) atomic64_add(1,(v)) - -#define atomic_dec(v) atomic_sub(1,(v)) -#define atomic64_dec(v) atomic64_sub(1,(v)) +#define atomic64_dec_if_positive atomic64_dec_if_positive #endif /* _ALPHA_ATOMIC_H */ diff --git a/arch/alpha/kernel/osf_sys.c b/arch/alpha/kernel/osf_sys.c index 6e921754c8fc..c210a25dd6da 100644 --- a/arch/alpha/kernel/osf_sys.c +++ b/arch/alpha/kernel/osf_sys.c @@ -1180,13 +1180,10 @@ SYSCALL_DEFINE2(osf_getrusage, int, who, struct rusage32 __user *, ru) SYSCALL_DEFINE4(osf_wait4, pid_t, pid, int __user *, ustatus, int, options, struct rusage32 __user *, ur) { - unsigned int status = 0; struct rusage r; - long err = kernel_wait4(pid, &status, options, &r); + long err = kernel_wait4(pid, ustatus, options, &r); if (err <= 0) return err; - if (put_user(status, ustatus)) - return -EFAULT; if (!ur) return err; if (put_tv_to_tv32(&ur->ru_utime, &r.ru_utime)) diff --git a/arch/alpha/lib/Makefile b/arch/alpha/lib/Makefile index 04f9729de57c..854d5e79979e 100644 --- a/arch/alpha/lib/Makefile +++ b/arch/alpha/lib/Makefile @@ -35,8 +35,6 @@ lib-y = __divqu.o __remqu.o __divlu.o __remlu.o \ callback_srm.o srm_puts.o srm_printk.o \ fls.o -lib-$(CONFIG_SMP) += dec_and_lock.o - # The division routines are built from single source, with different defines. 
AFLAGS___divqu.o = -DDIV AFLAGS___remqu.o = -DREM diff --git a/arch/alpha/lib/dec_and_lock.c b/arch/alpha/lib/dec_and_lock.c deleted file mode 100644 index a117707f57fe..000000000000 --- a/arch/alpha/lib/dec_and_lock.c +++ /dev/null @@ -1,44 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0 -/* - * arch/alpha/lib/dec_and_lock.c - * - * ll/sc version of atomic_dec_and_lock() - * - */ - -#include -#include -#include - - asm (".text \n\ - .global _atomic_dec_and_lock \n\ - .ent _atomic_dec_and_lock \n\ - .align 4 \n\ -_atomic_dec_and_lock: \n\ - .prologue 0 \n\ -1: ldl_l $1, 0($16) \n\ - subl $1, 1, $1 \n\ - beq $1, 2f \n\ - stl_c $1, 0($16) \n\ - beq $1, 4f \n\ - mb \n\ - clr $0 \n\ - ret \n\ -2: br $29, 3f \n\ -3: ldgp $29, 0($29) \n\ - br $atomic_dec_and_lock_1..ng \n\ - .subsection 2 \n\ -4: br 1b \n\ - .previous \n\ - .end _atomic_dec_and_lock"); - -static int __used atomic_dec_and_lock_1(atomic_t *atomic, spinlock_t *lock) -{ - /* Slow path */ - spin_lock(lock); - if (atomic_dec_and_test(atomic)) - return 1; - spin_unlock(lock); - return 0; -} -EXPORT_SYMBOL(_atomic_dec_and_lock); diff --git a/arch/arc/Kconfig b/arch/arc/Kconfig index e81bcd271be7..5151d81476a1 100644 --- a/arch/arc/Kconfig +++ b/arch/arc/Kconfig @@ -50,6 +50,9 @@ config ARC select HAVE_KERNEL_LZMA select ARCH_HAS_PTE_SPECIAL +config ARCH_HAS_CACHE_LINE_SIZE + def_bool y + config MIGHT_HAVE_PCI bool @@ -413,7 +416,7 @@ config ARC_HAS_DIV_REM config ARC_HAS_ACCL_REGS bool "Reg Pair ACCL:ACCH (FPU and/or MPY > 6)" - default n + default y help Depending on the configuration, CPU can contain accumulator reg-pair (also referred to as r58:r59). These can also be used by gcc as GPR so diff --git a/arch/arc/Makefile b/arch/arc/Makefile index d37f49d6a27f..6c1b20dd76ad 100644 --- a/arch/arc/Makefile +++ b/arch/arc/Makefile @@ -16,7 +16,7 @@ endif KBUILD_DEFCONFIG := nsim_700_defconfig -cflags-y += -fno-common -pipe -fno-builtin -D__linux__ +cflags-y += -fno-common -pipe -fno-builtin -mmedium-calls -D__linux__ cflags-$(CONFIG_ISA_ARCOMPACT) += -mA7 cflags-$(CONFIG_ISA_ARCV2) += -mcpu=archs @@ -140,16 +140,3 @@ dtbs: scripts archclean: $(Q)$(MAKE) $(clean)=$(boot) - -# Hacks to enable final link due to absence of link-time branch relexation -# and gcc choosing optimal(shorter) branches at -O3 -# -# vineetg Feb 2010: -mlong-calls switched off for overall kernel build -# However lib/decompress_inflate.o (.init.text) calls -# zlib_inflate_workspacesize (.text) causing relocation errors. 
-# Thus forcing all exten calls in this file to be long calls -export CFLAGS_decompress_inflate.o = -mmedium-calls -export CFLAGS_initramfs.o = -mmedium-calls -ifdef CONFIG_SMP -export CFLAGS_core.o = -mmedium-calls -endif diff --git a/arch/arc/configs/axs101_defconfig b/arch/arc/configs/axs101_defconfig index 09f85154c5a4..a635ea972304 100644 --- a/arch/arc/configs/axs101_defconfig +++ b/arch/arc/configs/axs101_defconfig @@ -11,7 +11,6 @@ CONFIG_NAMESPACES=y # CONFIG_UTS_NS is not set # CONFIG_PID_NS is not set CONFIG_BLK_DEV_INITRD=y -CONFIG_INITRAMFS_SOURCE="../arc_initramfs/" CONFIG_EMBEDDED=y CONFIG_PERF_EVENTS=y # CONFIG_VM_EVENT_COUNTERS is not set diff --git a/arch/arc/configs/axs103_defconfig b/arch/arc/configs/axs103_defconfig index 09fed3ef22b6..aa507e423075 100644 --- a/arch/arc/configs/axs103_defconfig +++ b/arch/arc/configs/axs103_defconfig @@ -11,7 +11,6 @@ CONFIG_NAMESPACES=y # CONFIG_UTS_NS is not set # CONFIG_PID_NS is not set CONFIG_BLK_DEV_INITRD=y -CONFIG_INITRAMFS_SOURCE="../../arc_initramfs_hs/" CONFIG_EMBEDDED=y CONFIG_PERF_EVENTS=y # CONFIG_VM_EVENT_COUNTERS is not set diff --git a/arch/arc/configs/axs103_smp_defconfig b/arch/arc/configs/axs103_smp_defconfig index ea2f6d817d1a..eba07f468654 100644 --- a/arch/arc/configs/axs103_smp_defconfig +++ b/arch/arc/configs/axs103_smp_defconfig @@ -11,7 +11,6 @@ CONFIG_NAMESPACES=y # CONFIG_UTS_NS is not set # CONFIG_PID_NS is not set CONFIG_BLK_DEV_INITRD=y -CONFIG_INITRAMFS_SOURCE="../../arc_initramfs_hs/" CONFIG_EMBEDDED=y CONFIG_PERF_EVENTS=y # CONFIG_VM_EVENT_COUNTERS is not set diff --git a/arch/arc/configs/haps_hs_defconfig b/arch/arc/configs/haps_hs_defconfig index ab231c040efe..098b19fbaa51 100644 --- a/arch/arc/configs/haps_hs_defconfig +++ b/arch/arc/configs/haps_hs_defconfig @@ -11,7 +11,6 @@ CONFIG_NAMESPACES=y # CONFIG_UTS_NS is not set # CONFIG_PID_NS is not set CONFIG_BLK_DEV_INITRD=y -CONFIG_INITRAMFS_SOURCE="../../arc_initramfs_hs/" CONFIG_EXPERT=y CONFIG_PERF_EVENTS=y # CONFIG_COMPAT_BRK is not set diff --git a/arch/arc/configs/haps_hs_smp_defconfig b/arch/arc/configs/haps_hs_smp_defconfig index cf449cbf440d..0104c404d897 100644 --- a/arch/arc/configs/haps_hs_smp_defconfig +++ b/arch/arc/configs/haps_hs_smp_defconfig @@ -11,7 +11,6 @@ CONFIG_NAMESPACES=y # CONFIG_UTS_NS is not set # CONFIG_PID_NS is not set CONFIG_BLK_DEV_INITRD=y -CONFIG_INITRAMFS_SOURCE="../../arc_initramfs_hs/" CONFIG_EMBEDDED=y CONFIG_PERF_EVENTS=y # CONFIG_VM_EVENT_COUNTERS is not set diff --git a/arch/arc/configs/hsdk_defconfig b/arch/arc/configs/hsdk_defconfig index 1b54c72f4296..6491be0ddbc9 100644 --- a/arch/arc/configs/hsdk_defconfig +++ b/arch/arc/configs/hsdk_defconfig @@ -9,7 +9,6 @@ CONFIG_NAMESPACES=y # CONFIG_UTS_NS is not set # CONFIG_PID_NS is not set CONFIG_BLK_DEV_INITRD=y -CONFIG_INITRAMFS_SOURCE="../../arc_initramfs_hs/" CONFIG_EMBEDDED=y CONFIG_PERF_EVENTS=y # CONFIG_VM_EVENT_COUNTERS is not set diff --git a/arch/arc/configs/nsim_700_defconfig b/arch/arc/configs/nsim_700_defconfig index 31c2c70b34a1..99e05cf63fca 100644 --- a/arch/arc/configs/nsim_700_defconfig +++ b/arch/arc/configs/nsim_700_defconfig @@ -11,7 +11,6 @@ CONFIG_NAMESPACES=y # CONFIG_UTS_NS is not set # CONFIG_PID_NS is not set CONFIG_BLK_DEV_INITRD=y -CONFIG_INITRAMFS_SOURCE="../arc_initramfs/" CONFIG_KALLSYMS_ALL=y CONFIG_EMBEDDED=y CONFIG_PERF_EVENTS=y diff --git a/arch/arc/configs/nsim_hs_defconfig b/arch/arc/configs/nsim_hs_defconfig index a578c721d50f..0dc4f9b737e7 100644 --- a/arch/arc/configs/nsim_hs_defconfig +++ 
b/arch/arc/configs/nsim_hs_defconfig @@ -11,7 +11,6 @@ CONFIG_NAMESPACES=y # CONFIG_UTS_NS is not set # CONFIG_PID_NS is not set CONFIG_BLK_DEV_INITRD=y -CONFIG_INITRAMFS_SOURCE="../../arc_initramfs_hs/" CONFIG_KALLSYMS_ALL=y CONFIG_EMBEDDED=y CONFIG_PERF_EVENTS=y diff --git a/arch/arc/configs/nsim_hs_smp_defconfig b/arch/arc/configs/nsim_hs_smp_defconfig index 37d7395f3272..be3c30a15e54 100644 --- a/arch/arc/configs/nsim_hs_smp_defconfig +++ b/arch/arc/configs/nsim_hs_smp_defconfig @@ -9,7 +9,6 @@ CONFIG_NAMESPACES=y # CONFIG_UTS_NS is not set # CONFIG_PID_NS is not set CONFIG_BLK_DEV_INITRD=y -CONFIG_INITRAMFS_SOURCE="../arc_initramfs_hs/" CONFIG_KALLSYMS_ALL=y CONFIG_EMBEDDED=y CONFIG_PERF_EVENTS=y diff --git a/arch/arc/configs/nsimosci_defconfig b/arch/arc/configs/nsimosci_defconfig index 1e1470e2a7f0..3a74b9b21772 100644 --- a/arch/arc/configs/nsimosci_defconfig +++ b/arch/arc/configs/nsimosci_defconfig @@ -11,7 +11,6 @@ CONFIG_NAMESPACES=y # CONFIG_UTS_NS is not set # CONFIG_PID_NS is not set CONFIG_BLK_DEV_INITRD=y -CONFIG_INITRAMFS_SOURCE="../arc_initramfs/" CONFIG_KALLSYMS_ALL=y CONFIG_EMBEDDED=y CONFIG_PERF_EVENTS=y diff --git a/arch/arc/configs/nsimosci_hs_defconfig b/arch/arc/configs/nsimosci_hs_defconfig index 084a6e42685b..ea2834b4dc1d 100644 --- a/arch/arc/configs/nsimosci_hs_defconfig +++ b/arch/arc/configs/nsimosci_hs_defconfig @@ -11,7 +11,6 @@ CONFIG_NAMESPACES=y # CONFIG_UTS_NS is not set # CONFIG_PID_NS is not set CONFIG_BLK_DEV_INITRD=y -CONFIG_INITRAMFS_SOURCE="../arc_initramfs_hs/" CONFIG_KALLSYMS_ALL=y CONFIG_EMBEDDED=y CONFIG_PERF_EVENTS=y diff --git a/arch/arc/configs/nsimosci_hs_smp_defconfig b/arch/arc/configs/nsimosci_hs_smp_defconfig index f36d47990415..80a5a1b4924b 100644 --- a/arch/arc/configs/nsimosci_hs_smp_defconfig +++ b/arch/arc/configs/nsimosci_hs_smp_defconfig @@ -9,7 +9,6 @@ CONFIG_IKCONFIG_PROC=y # CONFIG_UTS_NS is not set # CONFIG_PID_NS is not set CONFIG_BLK_DEV_INITRD=y -CONFIG_INITRAMFS_SOURCE="../arc_initramfs_hs/" CONFIG_PERF_EVENTS=y # CONFIG_COMPAT_BRK is not set CONFIG_KPROBES=y diff --git a/arch/arc/configs/tb10x_defconfig b/arch/arc/configs/tb10x_defconfig index 1aca2e8fd1ba..2cc87f909747 100644 --- a/arch/arc/configs/tb10x_defconfig +++ b/arch/arc/configs/tb10x_defconfig @@ -56,7 +56,6 @@ CONFIG_STMMAC_ETH=y # CONFIG_INPUT is not set # CONFIG_SERIO is not set # CONFIG_VT is not set -CONFIG_DEVPTS_MULTIPLE_INSTANCES=y # CONFIG_LEGACY_PTYS is not set # CONFIG_DEVKMEM is not set CONFIG_SERIAL_8250=y diff --git a/arch/arc/include/asm/atomic.h b/arch/arc/include/asm/atomic.h index 11859287c52a..4e0072730241 100644 --- a/arch/arc/include/asm/atomic.h +++ b/arch/arc/include/asm/atomic.h @@ -187,7 +187,8 @@ static inline int atomic_fetch_##op(int i, atomic_t *v) \ ATOMIC_OPS(add, +=, add) ATOMIC_OPS(sub, -=, sub) -#define atomic_andnot atomic_andnot +#define atomic_andnot atomic_andnot +#define atomic_fetch_andnot atomic_fetch_andnot #undef ATOMIC_OPS #define ATOMIC_OPS(op, c_op, asm_op) \ @@ -296,8 +297,6 @@ ATOMIC_OPS(add, +=, CTOP_INST_AADD_DI_R2_R2_R3) ATOMIC_FETCH_OP(op, c_op, asm_op) ATOMIC_OPS(and, &=, CTOP_INST_AAND_DI_R2_R2_R3) -#define atomic_andnot(mask, v) atomic_and(~(mask), (v)) -#define atomic_fetch_andnot(mask, v) atomic_fetch_and(~(mask), (v)) ATOMIC_OPS(or, |=, CTOP_INST_AOR_DI_R2_R2_R3) ATOMIC_OPS(xor, ^=, CTOP_INST_AXOR_DI_R2_R2_R3) @@ -308,48 +307,6 @@ ATOMIC_OPS(xor, ^=, CTOP_INST_AXOR_DI_R2_R2_R3) #undef ATOMIC_OP_RETURN #undef ATOMIC_OP -/** - * __atomic_add_unless - add unless the number is a given value - * @v: pointer 
of type atomic_t - * @a: the amount to add to v... - * @u: ...unless v is equal to u. - * - * Atomically adds @a to @v, so long as it was not @u. - * Returns the old value of @v - */ -#define __atomic_add_unless(v, a, u) \ -({ \ - int c, old; \ - \ - /* \ - * Explicit full memory barrier needed before/after as \ - * LLOCK/SCOND thmeselves don't provide any such semantics \ - */ \ - smp_mb(); \ - \ - c = atomic_read(v); \ - while (c != (u) && (old = atomic_cmpxchg((v), c, c + (a))) != c)\ - c = old; \ - \ - smp_mb(); \ - \ - c; \ -}) - -#define atomic_inc_not_zero(v) atomic_add_unless((v), 1, 0) - -#define atomic_inc(v) atomic_add(1, v) -#define atomic_dec(v) atomic_sub(1, v) - -#define atomic_inc_and_test(v) (atomic_add_return(1, v) == 0) -#define atomic_dec_and_test(v) (atomic_sub_return(1, v) == 0) -#define atomic_inc_return(v) atomic_add_return(1, (v)) -#define atomic_dec_return(v) atomic_sub_return(1, (v)) -#define atomic_sub_and_test(i, v) (atomic_sub_return(i, v) == 0) - -#define atomic_add_negative(i, v) (atomic_add_return(i, v) < 0) - - #ifdef CONFIG_GENERIC_ATOMIC64 #include @@ -472,7 +429,8 @@ static inline long long atomic64_fetch_##op(long long a, atomic64_t *v) \ ATOMIC64_OP_RETURN(op, op1, op2) \ ATOMIC64_FETCH_OP(op, op1, op2) -#define atomic64_andnot atomic64_andnot +#define atomic64_andnot atomic64_andnot +#define atomic64_fetch_andnot atomic64_fetch_andnot ATOMIC64_OPS(add, add.f, adc) ATOMIC64_OPS(sub, sub.f, sbc) @@ -559,53 +517,43 @@ static inline long long atomic64_dec_if_positive(atomic64_t *v) return val; } +#define atomic64_dec_if_positive atomic64_dec_if_positive /** - * atomic64_add_unless - add unless the number is a given value + * atomic64_fetch_add_unless - add unless the number is a given value * @v: pointer of type atomic64_t * @a: the amount to add to v... * @u: ...unless v is equal to u. * - * if (v != u) { v += a; ret = 1} else {ret = 0} - * Returns 1 iff @v was not @u (i.e. if add actually happened) + * Atomically adds @a to @v, if it was not @u. 
+ * Returns the old value of @v */ -static inline int atomic64_add_unless(atomic64_t *v, long long a, long long u) +static inline long long atomic64_fetch_add_unless(atomic64_t *v, long long a, + long long u) { - long long val; - int op_done; + long long old, temp; smp_mb(); __asm__ __volatile__( "1: llockd %0, [%2] \n" - " mov %1, 1 \n" " brne %L0, %L4, 2f # continue to add since v != u \n" " breq.d %H0, %H4, 3f # return since v == u \n" - " mov %1, 0 \n" "2: \n" - " add.f %L0, %L0, %L3 \n" - " adc %H0, %H0, %H3 \n" - " scondd %0, [%2] \n" + " add.f %L1, %L0, %L3 \n" + " adc %H1, %H0, %H3 \n" + " scondd %1, [%2] \n" " bnz 1b \n" "3: \n" - : "=&r"(val), "=&r" (op_done) + : "=&r"(old), "=&r" (temp) : "r"(&v->counter), "r"(a), "r"(u) : "cc"); /* memory clobber comes from smp_mb() */ smp_mb(); - return op_done; + return old; } - -#define atomic64_add_negative(a, v) (atomic64_add_return((a), (v)) < 0) -#define atomic64_inc(v) atomic64_add(1LL, (v)) -#define atomic64_inc_return(v) atomic64_add_return(1LL, (v)) -#define atomic64_inc_and_test(v) (atomic64_inc_return(v) == 0) -#define atomic64_sub_and_test(a, v) (atomic64_sub_return((a), (v)) == 0) -#define atomic64_dec(v) atomic64_sub(1LL, (v)) -#define atomic64_dec_return(v) atomic64_sub_return(1LL, (v)) -#define atomic64_dec_and_test(v) (atomic64_dec_return((v)) == 0) -#define atomic64_inc_not_zero(v) atomic64_add_unless((v), 1LL, 0LL) +#define atomic64_fetch_add_unless atomic64_fetch_add_unless #endif /* !CONFIG_GENERIC_ATOMIC64 */ diff --git a/arch/arc/include/asm/cache.h b/arch/arc/include/asm/cache.h index 8486f328cc5d..ff7d3232764a 100644 --- a/arch/arc/include/asm/cache.h +++ b/arch/arc/include/asm/cache.h @@ -48,7 +48,9 @@ }) /* Largest line length for either L1 or L2 is 128 bytes */ -#define ARCH_DMA_MINALIGN 128 +#define SMP_CACHE_BYTES 128 +#define cache_line_size() SMP_CACHE_BYTES +#define ARCH_DMA_MINALIGN SMP_CACHE_BYTES extern void arc_cache_init(void); extern char *arc_cache_mumbojumbo(int cpu_id, char *buf, int len); diff --git a/arch/arc/include/asm/delay.h b/arch/arc/include/asm/delay.h index d5da2115d78a..03d6bb0f4e13 100644 --- a/arch/arc/include/asm/delay.h +++ b/arch/arc/include/asm/delay.h @@ -17,8 +17,11 @@ #ifndef __ASM_ARC_UDELAY_H #define __ASM_ARC_UDELAY_H +#include #include /* HZ */ +extern unsigned long loops_per_jiffy; + static inline void __delay(unsigned long loops) { __asm__ __volatile__( diff --git a/arch/arc/include/asm/entry-compact.h b/arch/arc/include/asm/entry-compact.h index ec36d5b6d435..29f3988c9424 100644 --- a/arch/arc/include/asm/entry-compact.h +++ b/arch/arc/include/asm/entry-compact.h @@ -234,6 +234,9 @@ POP gp RESTORE_R12_TO_R0 +#ifdef CONFIG_ARC_CURR_IN_REG + ld r25, [sp, 12] +#endif ld sp, [sp] /* restore original sp */ /* orig_r0, ECR, user_r25 skipped automatically */ .endm @@ -315,6 +318,9 @@ POP gp RESTORE_R12_TO_R0 +#ifdef CONFIG_ARC_CURR_IN_REG + ld r25, [sp, 12] +#endif ld sp, [sp] /* restore original sp */ /* orig_r0, ECR, user_r25 skipped automatically */ .endm diff --git a/arch/arc/include/asm/entry.h b/arch/arc/include/asm/entry.h index 51597f344a62..302b0db8ea2b 100644 --- a/arch/arc/include/asm/entry.h +++ b/arch/arc/include/asm/entry.h @@ -86,9 +86,6 @@ POP r1 POP r0 -#ifdef CONFIG_ARC_CURR_IN_REG - ld r25, [sp, 12] -#endif .endm /*-------------------------------------------------------------- diff --git a/arch/arc/include/asm/kprobes.h b/arch/arc/include/asm/kprobes.h index 2e52d18e6bc7..2c1b479d5aea 100644 --- a/arch/arc/include/asm/kprobes.h +++ 
b/arch/arc/include/asm/kprobes.h @@ -45,8 +45,6 @@ struct prev_kprobe { struct kprobe_ctlblk { unsigned int kprobe_status; - struct pt_regs jprobe_saved_regs; - char jprobes_stack[MAX_STACK_SIZE]; struct prev_kprobe prev_kprobe; }; diff --git a/arch/arc/include/asm/mach_desc.h b/arch/arc/include/asm/mach_desc.h index c28e6c347b49..871f3cb16af9 100644 --- a/arch/arc/include/asm/mach_desc.h +++ b/arch/arc/include/asm/mach_desc.h @@ -34,9 +34,7 @@ struct machine_desc { const char *name; const char **dt_compat; void (*init_early)(void); -#ifdef CONFIG_SMP void (*init_per_cpu)(unsigned int); -#endif void (*init_machine)(void); void (*init_late)(void); diff --git a/arch/arc/include/asm/page.h b/arch/arc/include/asm/page.h index 109baa06831c..09ddddf71cc5 100644 --- a/arch/arc/include/asm/page.h +++ b/arch/arc/include/asm/page.h @@ -105,7 +105,7 @@ typedef pte_t * pgtable_t; #define virt_addr_valid(kaddr) pfn_valid(virt_to_pfn(kaddr)) /* Default Permissions for stack/heaps pages (Non Executable) */ -#define VM_DATA_DEFAULT_FLAGS (VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE) +#define VM_DATA_DEFAULT_FLAGS (VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE | VM_MAYEXEC) #define WANT_PAGE_VIRTUAL 1 diff --git a/arch/arc/include/asm/pgtable.h b/arch/arc/include/asm/pgtable.h index 8ec5599a0957..cf4be70d5892 100644 --- a/arch/arc/include/asm/pgtable.h +++ b/arch/arc/include/asm/pgtable.h @@ -377,7 +377,7 @@ void update_mmu_cache(struct vm_area_struct *vma, unsigned long address, /* Decode a PTE containing swap "identifier "into constituents */ #define __swp_type(pte_lookalike) (((pte_lookalike).val) & 0x1f) -#define __swp_offset(pte_lookalike) ((pte_lookalike).val << 13) +#define __swp_offset(pte_lookalike) ((pte_lookalike).val >> 13) /* NOPs, to keep generic kernel happy */ #define __pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) }) diff --git a/arch/arc/kernel/irq.c b/arch/arc/kernel/irq.c index 538b36afe89e..62b185057c04 100644 --- a/arch/arc/kernel/irq.c +++ b/arch/arc/kernel/irq.c @@ -31,10 +31,10 @@ void __init init_IRQ(void) /* a SMP H/w block could do IPI IRQ request here */ if (plat_smp_ops.init_per_cpu) plat_smp_ops.init_per_cpu(smp_processor_id()); +#endif if (machine_desc->init_per_cpu) machine_desc->init_per_cpu(smp_processor_id()); -#endif } /* diff --git a/arch/arc/kernel/kprobes.c b/arch/arc/kernel/kprobes.c index 42b05046fad9..df35d4c0b0b8 100644 --- a/arch/arc/kernel/kprobes.c +++ b/arch/arc/kernel/kprobes.c @@ -225,24 +225,18 @@ int __kprobes arc_kprobe_handler(unsigned long addr, struct pt_regs *regs) /* If we have no pre-handler or it returned 0, we continue with * normal processing. If we have a pre-handler and it returned - * non-zero - which is expected from setjmp_pre_handler for - * jprobe, we return without single stepping and leave that to - * the break-handler which is invoked by a kprobe from - * jprobe_return + * non-zero - which means user handler setup registers to exit + * to another instruction, we must skip the single stepping. 
*/ if (!p->pre_handler || !p->pre_handler(p, regs)) { setup_singlestep(p, regs); kcb->kprobe_status = KPROBE_HIT_SS; + } else { + reset_current_kprobe(); + preempt_enable_no_resched(); } return 1; - } else if (kprobe_running()) { - p = __this_cpu_read(current_kprobe); - if (p->break_handler && p->break_handler(p, regs)) { - setup_singlestep(p, regs); - kcb->kprobe_status = KPROBE_HIT_SS; - return 1; - } } /* no_kprobe: */ @@ -386,38 +380,6 @@ int __kprobes kprobe_exceptions_notify(struct notifier_block *self, return ret; } -int __kprobes setjmp_pre_handler(struct kprobe *p, struct pt_regs *regs) -{ - struct jprobe *jp = container_of(p, struct jprobe, kp); - struct kprobe_ctlblk *kcb = get_kprobe_ctlblk(); - unsigned long sp_addr = regs->sp; - - kcb->jprobe_saved_regs = *regs; - memcpy(kcb->jprobes_stack, (void *)sp_addr, MIN_STACK_SIZE(sp_addr)); - regs->ret = (unsigned long)(jp->entry); - - return 1; -} - -void __kprobes jprobe_return(void) -{ - __asm__ __volatile__("unimp_s"); - return; -} - -int __kprobes longjmp_break_handler(struct kprobe *p, struct pt_regs *regs) -{ - struct kprobe_ctlblk *kcb = get_kprobe_ctlblk(); - unsigned long sp_addr; - - *regs = kcb->jprobe_saved_regs; - sp_addr = regs->sp; - memcpy((void *)sp_addr, kcb->jprobes_stack, MIN_STACK_SIZE(sp_addr)); - preempt_enable_no_resched(); - - return 1; -} - static void __used kretprobe_trampoline_holder(void) { __asm__ __volatile__(".global kretprobe_trampoline\n" @@ -483,9 +445,7 @@ static int __kprobes trampoline_probe_handler(struct kprobe *p, kretprobe_assert(ri, orig_ret_address, trampoline_address); regs->ret = orig_ret_address; - reset_current_kprobe(); kretprobe_hash_unlock(current, &flags); - preempt_enable_no_resched(); hlist_for_each_entry_safe(ri, tmp, &empty_rp, hlist) { hlist_del(&ri->hlist); diff --git a/arch/arc/kernel/process.c b/arch/arc/kernel/process.c index 5ac3b547453f..4674541eba3f 100644 --- a/arch/arc/kernel/process.c +++ b/arch/arc/kernel/process.c @@ -47,7 +47,8 @@ SYSCALL_DEFINE0(arc_gettls) SYSCALL_DEFINE3(arc_usr_cmpxchg, int *, uaddr, int, expected, int, new) { struct pt_regs *regs = current_pt_regs(); - int uval = -EFAULT; + u32 uval; + int ret; /* * This is only for old cores lacking LLOCK/SCOND, which by defintion @@ -60,23 +61,47 @@ SYSCALL_DEFINE3(arc_usr_cmpxchg, int *, uaddr, int, expected, int, new) /* Z indicates to userspace if operation succeded */ regs->status32 &= ~STATUS_Z_MASK; - if (!access_ok(VERIFY_WRITE, uaddr, sizeof(int))) - return -EFAULT; + ret = access_ok(VERIFY_WRITE, uaddr, sizeof(*uaddr)); + if (!ret) + goto fail; +again: preempt_disable(); - if (__get_user(uval, uaddr)) - goto done; + ret = __get_user(uval, uaddr); + if (ret) + goto fault; - if (uval == expected) { - if (!__put_user(new, uaddr)) - regs->status32 |= STATUS_Z_MASK; - } + if (uval != expected) + goto out; -done: - preempt_enable(); + ret = __put_user(new, uaddr); + if (ret) + goto fault; + + regs->status32 |= STATUS_Z_MASK; +out: + preempt_enable(); return uval; + +fault: + preempt_enable(); + + if (unlikely(ret != -EFAULT)) + goto fail; + + down_read(¤t->mm->mmap_sem); + ret = fixup_user_fault(current, current->mm, (unsigned long) uaddr, + FAULT_FLAG_WRITE, NULL); + up_read(¤t->mm->mmap_sem); + + if (likely(!ret)) + goto again; + +fail: + force_sig(SIGSEGV, current); + return ret; } #ifdef CONFIG_ISA_ARCV2 diff --git a/arch/arc/mm/cache.c b/arch/arc/mm/cache.c index 9dbe645ee127..25c631942500 100644 --- a/arch/arc/mm/cache.c +++ b/arch/arc/mm/cache.c @@ -1038,7 +1038,7 @@ void flush_cache_mm(struct 
mm_struct *mm) void flush_cache_page(struct vm_area_struct *vma, unsigned long u_vaddr, unsigned long pfn) { - unsigned int paddr = pfn << PAGE_SHIFT; + phys_addr_t paddr = pfn << PAGE_SHIFT; u_vaddr &= PAGE_MASK; @@ -1058,8 +1058,9 @@ void flush_anon_page(struct vm_area_struct *vma, struct page *page, unsigned long u_vaddr) { /* TBD: do we really need to clear the kernel mapping */ - __flush_dcache_page(page_address(page), u_vaddr); - __flush_dcache_page(page_address(page), page_address(page)); + __flush_dcache_page((phys_addr_t)page_address(page), u_vaddr); + __flush_dcache_page((phys_addr_t)page_address(page), + (phys_addr_t)page_address(page)); } @@ -1246,6 +1247,16 @@ void __init arc_cache_init_master(void) } } + /* + * Check that SMP_CACHE_BYTES (and hence ARCH_DMA_MINALIGN) is larger + * or equal to any cache line length. + */ + BUILD_BUG_ON_MSG(L1_CACHE_BYTES > SMP_CACHE_BYTES, + "SMP_CACHE_BYTES must be >= any cache line length"); + if (is_isa_arcv2() && (l2_line_sz > SMP_CACHE_BYTES)) + panic("L2 Cache line [%d] > kernel Config [%d]\n", + l2_line_sz, SMP_CACHE_BYTES); + /* Note that SLC disable not formally supported till HS 3.0 */ if (is_isa_arcv2() && l2_line_sz && !slc_enable) arc_slc_disable(); diff --git a/arch/arc/mm/dma.c b/arch/arc/mm/dma.c index 8c1071840979..ec47e6079f5d 100644 --- a/arch/arc/mm/dma.c +++ b/arch/arc/mm/dma.c @@ -129,14 +129,59 @@ int arch_dma_mmap(struct device *dev, struct vm_area_struct *vma, return ret; } +/* + * Cache operations depending on function and direction argument, inspired by + * https://lkml.org/lkml/2018/5/18/979 + * "dma_sync_*_for_cpu and direction=TO_DEVICE (was Re: [PATCH 02/20] + * dma-mapping: provide a generic dma-noncoherent implementation)" + * + * | map == for_device | unmap == for_cpu + * |---------------------------------------------------------------- + * TO_DEV | writeback writeback | none none + * FROM_DEV | invalidate invalidate | invalidate* invalidate* + * BIDIR | writeback+inv writeback+inv | invalidate invalidate + * + * [*] needed for CPU speculative prefetches + * + * NOTE: we don't check the validity of direction argument as it is done in + * upper layer functions (in include/linux/dma-mapping.h) + */ + void arch_sync_dma_for_device(struct device *dev, phys_addr_t paddr, size_t size, enum dma_data_direction dir) { - dma_cache_wback(paddr, size); + switch (dir) { + case DMA_TO_DEVICE: + dma_cache_wback(paddr, size); + break; + + case DMA_FROM_DEVICE: + dma_cache_inv(paddr, size); + break; + + case DMA_BIDIRECTIONAL: + dma_cache_wback_inv(paddr, size); + break; + + default: + break; + } } void arch_sync_dma_for_cpu(struct device *dev, phys_addr_t paddr, size_t size, enum dma_data_direction dir) { - dma_cache_inv(paddr, size); + switch (dir) { + case DMA_TO_DEVICE: + break; + + /* FROM_DEVICE invalidate needed if speculative CPU prefetch only */ + case DMA_FROM_DEVICE: + case DMA_BIDIRECTIONAL: + dma_cache_inv(paddr, size); + break; + + default: + break; + } } diff --git a/arch/arc/plat-eznps/include/plat/ctop.h b/arch/arc/plat-eznps/include/plat/ctop.h index 0c7d11022d0f..4f6a1673b3a6 100644 --- a/arch/arc/plat-eznps/include/plat/ctop.h +++ b/arch/arc/plat-eznps/include/plat/ctop.h @@ -21,6 +21,7 @@ #error "Incorrect ctop.h include" #endif +#include #include /* core auxiliary registers */ @@ -143,6 +144,15 @@ struct nps_host_reg_gim_p_int_dst { }; /* AUX registers definition */ +struct nps_host_reg_aux_dpc { + union { + struct { + u32 ien:1, men:1, hen:1, reserved:29; + }; + u32 value; + }; +}; + struct 
nps_host_reg_aux_udmc { union { struct { diff --git a/arch/arc/plat-eznps/mtm.c b/arch/arc/plat-eznps/mtm.c index 2388de3d09ef..ed0077ef666e 100644 --- a/arch/arc/plat-eznps/mtm.c +++ b/arch/arc/plat-eznps/mtm.c @@ -15,6 +15,8 @@ */ #include +#include +#include #include #include #include @@ -157,10 +159,10 @@ void mtm_enable_core(unsigned int cpu) /* Verify and set the value of the mtm hs counter */ static int __init set_mtm_hs_ctr(char *ctr_str) { - long hs_ctr; + int hs_ctr; int ret; - ret = kstrtol(ctr_str, 0, &hs_ctr); + ret = kstrtoint(ctr_str, 0, &hs_ctr); if (ret || hs_ctr > MT_HS_CNT_MAX || hs_ctr < MT_HS_CNT_MIN) { pr_err("** Invalid @nps_mtm_hs_ctr [%d] needs to be [%d:%d] (incl)\n", diff --git a/arch/arc/plat-hsdk/Kconfig b/arch/arc/plat-hsdk/Kconfig index 19ab3cf98f0f..9356753c2ed8 100644 --- a/arch/arc/plat-hsdk/Kconfig +++ b/arch/arc/plat-hsdk/Kconfig @@ -7,5 +7,8 @@ menuconfig ARC_SOC_HSDK bool "ARC HS Development Kit SOC" + depends on ISA_ARCV2 + select ARC_HAS_ACCL_REGS select CLK_HSDK select RESET_HSDK + select MIGHT_HAVE_PCI diff --git a/arch/arc/plat-hsdk/platform.c b/arch/arc/plat-hsdk/platform.c index 2958aedb649a..2588b842407c 100644 --- a/arch/arc/plat-hsdk/platform.c +++ b/arch/arc/plat-hsdk/platform.c @@ -42,6 +42,66 @@ static void __init hsdk_init_per_cpu(unsigned int cpu) #define SDIO_UHS_REG_EXT (SDIO_BASE + 0x108) #define SDIO_UHS_REG_EXT_DIV_2 (2 << 30) +#define HSDK_GPIO_INTC (ARC_PERIPHERAL_BASE + 0x3000) + +static void __init hsdk_enable_gpio_intc_wire(void) +{ + /* + * Peripherals on CPU Card are wired to cpu intc via intermediate + * DW APB GPIO blocks (mainly for debouncing) + * + * --------------------- + * | snps,archs-intc | + * --------------------- + * | + * ---------------------- + * | snps,archs-idu-intc | + * ---------------------- + * | | | | | + * | [eth] [USB] [... other peripherals] + * | + * ------------------- + * | snps,dw-apb-intc | + * ------------------- + * | | | | + * [Bt] [HAPS] [... other peripherals] + * + * Current implementation of "irq-dw-apb-ictl" driver doesn't work well + * with stacked INTCs. In particular problem happens if its master INTC + * not yet instantiated. See discussion here - + * https://lkml.org/lkml/2015/3/4/755 + * + * So setup the first gpio block as a passive pass thru and hide it from + * DT hardware topology - connect intc directly to cpu intc + * The GPIO "wire" needs to be init nevertheless (here) + * + * One side adv is that peripheral interrupt handling avoids one nested + * intc ISR hop + * + * According to HSDK User's Manual [1], "Table 2 Interrupt Mapping" + * we have the following GPIO input lines used as sources of interrupt: + * - GPIO[0] - Bluetooth interrupt of RS9113 module + * - GPIO[2] - HAPS interrupt (on HapsTrak 3 connector) + * - GPIO[3] - Audio codec (MAX9880A) interrupt + * - GPIO[8-23] - Available on Arduino and PMOD_x headers + * For now there's no use of Arduino and PMOD_x headers in Linux + * use-case so we only enable lines 0, 2 and 3. 
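/*
 * [Editor's aside, not part of the patch] GPIO_INT_CONNECTED_MASK (0x0d),
 * defined just below, is simply the bitmask of the three wired lines the
 * comment enumerates (0: Bluetooth, 2: HAPS, 3: audio codec):
 */
#include <assert.h>

int main(void)
{
	unsigned int mask = (1u << 0) | (1u << 2) | (1u << 3);

	assert(mask == 0x0d);	/* matches GPIO_INT_CONNECTED_MASK */
	return 0;
}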
+ * + * [1] https://github.com/foss-for-synopsys-dwc-arc-processors/ARC-Development-Systems-Forum/wiki/docs/ARC_HSDK_User_Guide.pdf + */ +#define GPIO_INTEN (HSDK_GPIO_INTC + 0x30) +#define GPIO_INTMASK (HSDK_GPIO_INTC + 0x34) +#define GPIO_INTTYPE_LEVEL (HSDK_GPIO_INTC + 0x38) +#define GPIO_INT_POLARITY (HSDK_GPIO_INTC + 0x3c) +#define GPIO_INT_CONNECTED_MASK 0x0d + + iowrite32(0xffffffff, (void __iomem *) GPIO_INTMASK); + iowrite32(~GPIO_INT_CONNECTED_MASK, (void __iomem *) GPIO_INTMASK); + iowrite32(0x00000000, (void __iomem *) GPIO_INTTYPE_LEVEL); + iowrite32(0xffffffff, (void __iomem *) GPIO_INT_POLARITY); + iowrite32(GPIO_INT_CONNECTED_MASK, (void __iomem *) GPIO_INTEN); +} + static void __init hsdk_init_early(void) { /* @@ -62,6 +122,8 @@ static void __init hsdk_init_early(void) * minimum possible div-by-2. */ iowrite32(SDIO_UHS_REG_EXT_DIV_2, (void __iomem *) SDIO_UHS_REG_EXT); + + hsdk_enable_gpio_intc_wire(); } static const char *hsdk_compat[] __initconst = { diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig index 2a78bdef9a24..0f328d639d51 100644 --- a/arch/arm/Kconfig +++ b/arch/arm/Kconfig @@ -8,9 +8,11 @@ config ARM select ARCH_HAS_DEVMEM_IS_ALLOWED select ARCH_HAS_ELF_RANDOMIZE select ARCH_HAS_FORTIFY_SOURCE + select ARCH_HAS_KCOV + select ARCH_HAS_MEMBARRIER_SYNC_CORE select ARCH_HAS_PTE_SPECIAL if ARM_LPAE - select ARCH_HAS_SET_MEMORY select ARCH_HAS_PHYS_TO_DMA + select ARCH_HAS_SET_MEMORY select ARCH_HAS_STRICT_KERNEL_RWX if MMU && !XIP_KERNEL select ARCH_HAS_STRICT_MODULE_RWX if MMU select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST @@ -57,7 +59,6 @@ config ARM select HAVE_ARCH_TRACEHOOK select HAVE_ARM_SMCCC if CPU_V7 select HAVE_EBPF_JIT if !CPU_ENDIAN_BE32 - select HAVE_CC_STACKPROTECTOR select HAVE_CONTEXT_TRACKING select HAVE_C_RECORDMCOUNT select HAVE_DEBUG_KMEMLEAK @@ -92,6 +93,7 @@ config ARM select HAVE_RCU_TABLE_FREE if (SMP && ARM_LPAE) select HAVE_REGS_AND_STACK_ACCESS_API select HAVE_RSEQ + select HAVE_STACKPROTECTOR select HAVE_SYSCALL_TRACEPOINTS select HAVE_UID16 select HAVE_VIRT_CPU_ACCOUNTING_GEN @@ -336,8 +338,8 @@ config ARCH_MULTIPLATFORM select TIMER_OF select COMMON_CLK select GENERIC_CLOCKEVENTS + select GENERIC_IRQ_MULTI_HANDLER select MIGHT_HAVE_PCI - select MULTI_IRQ_HANDLER select PCI_DOMAINS if PCI select SPARSE_IRQ select USE_OF @@ -464,9 +466,9 @@ config ARCH_DOVE bool "Marvell Dove" select CPU_PJ4 select GENERIC_CLOCKEVENTS + select GENERIC_IRQ_MULTI_HANDLER select GPIOLIB select MIGHT_HAVE_PCI - select MULTI_IRQ_HANDLER select MVEBU_MBUS select PINCTRL select PINCTRL_DOVE @@ -511,8 +513,8 @@ config ARCH_LPC32XX select COMMON_CLK select CPU_ARM926T select GENERIC_CLOCKEVENTS + select GENERIC_IRQ_MULTI_HANDLER select GPIOLIB - select MULTI_IRQ_HANDLER select SPARSE_IRQ select USE_OF help @@ -531,11 +533,11 @@ config ARCH_PXA select TIMER_OF select CPU_XSCALE if !CPU_XSC3 select GENERIC_CLOCKEVENTS + select GENERIC_IRQ_MULTI_HANDLER select GPIO_PXA select GPIOLIB select HAVE_IDE select IRQ_DOMAIN - select MULTI_IRQ_HANDLER select PLAT_PXA select SPARSE_IRQ help @@ -571,11 +573,11 @@ config ARCH_SA1100 select CPU_FREQ select CPU_SA1100 select GENERIC_CLOCKEVENTS + select GENERIC_IRQ_MULTI_HANDLER select GPIOLIB select HAVE_IDE select IRQ_DOMAIN select ISA - select MULTI_IRQ_HANDLER select NEED_MACH_MEMORY_H select SPARSE_IRQ help @@ -589,10 +591,10 @@ config ARCH_S3C24XX select GENERIC_CLOCKEVENTS select GPIO_SAMSUNG select GPIOLIB + select GENERIC_IRQ_MULTI_HANDLER select HAVE_S3C2410_I2C if I2C select HAVE_S3C2410_WATCHDOG if 
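/*
 * [Editor's aside, not part of the patch] The MULTI_IRQ_HANDLER ->
 * GENERIC_IRQ_MULTI_HANDLER conversions in this Kconfig hunk move ARM to
 * the generic run-time IRQ entry hook. Simplified, the generic mechanism
 * amounts to one function pointer plus a one-shot setter (a sketch of the
 * kernel/irq behaviour, not a verbatim copy):
 */
struct pt_regs;

static void (*handle_arch_irq)(struct pt_regs *);

static void set_handle_irq(void (*handle_irq)(struct pt_regs *))
{
	if (handle_arch_irq)	/* the real code WARNs and bails here */
		return;
	handle_arch_irq = handle_irq;
}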
WATCHDOG select HAVE_S3C_RTC if RTC_CLASS - select MULTI_IRQ_HANDLER select NEED_MACH_IO_H select SAMSUNG_ATAGS select USE_OF @@ -626,10 +628,10 @@ config ARCH_OMAP1 select CLKSRC_MMIO select GENERIC_CLOCKEVENTS select GENERIC_IRQ_CHIP + select GENERIC_IRQ_MULTI_HANDLER select GPIOLIB select HAVE_IDE select IRQ_DOMAIN - select MULTI_IRQ_HANDLER select NEED_MACH_IO_H if PCCARD select NEED_MACH_MEMORY_H select SPARSE_IRQ @@ -920,11 +922,6 @@ config IWMMXT Enable support for iWMMXt context switching at run time if running on a CPU that supports it. -config MULTI_IRQ_HANDLER - bool - help - Allow each machine to specify it's own IRQ handler at run time. - if !MMU source "arch/arm/Kconfig-nommu" endif @@ -1244,8 +1241,14 @@ config PCI VESA. If you have PCI, say Y, otherwise N. config PCI_DOMAINS - bool + bool "Support for multiple PCI domains" depends on PCI + help + Enable PCI domains kernel management. Say Y if your machine + has a PCI bus hierarchy that requires more than one PCI + domain (aka segment) to be correctly managed. Say N otherwise. + + If you don't know what to do here, say N. config PCI_DOMAINS_GENERIC def_bool PCI_DOMAINS @@ -1301,7 +1304,7 @@ config SMP will run faster if you say N here. See also , - and the SMP-HOWTO available at + and the SMP-HOWTO available at . If you don't know what to do here, say N. diff --git a/arch/arm/Makefile b/arch/arm/Makefile index fc26c3d7b9b6..62ebeae9f837 100644 --- a/arch/arm/Makefile +++ b/arch/arm/Makefile @@ -46,12 +46,12 @@ ifeq ($(CONFIG_CPU_BIG_ENDIAN),y) KBUILD_CPPFLAGS += -mbig-endian CHECKFLAGS += -D__ARMEB__ AS += -EB -LD += -EB +LDFLAGS += -EB else KBUILD_CPPFLAGS += -mlittle-endian CHECKFLAGS += -D__ARMEL__ AS += -EL -LD += -EL +LDFLAGS += -EL endif # diff --git a/arch/arm/boot/compressed/Makefile b/arch/arm/boot/compressed/Makefile index a3c5fbcad4ab..1f5a5ffe7fcf 100644 --- a/arch/arm/boot/compressed/Makefile +++ b/arch/arm/boot/compressed/Makefile @@ -25,6 +25,9 @@ endif GCOV_PROFILE := n +# Prevents link failures: __sanitizer_cov_trace_pc() is not linked in. 
+KCOV_INSTRUMENT := n + # # Architecture dependencies # diff --git a/arch/arm/boot/dts/am335x-bone-common.dtsi b/arch/arm/boot/dts/am335x-bone-common.dtsi index f9e8667f5886..73b514dddf65 100644 --- a/arch/arm/boot/dts/am335x-bone-common.dtsi +++ b/arch/arm/boot/dts/am335x-bone-common.dtsi @@ -168,7 +168,6 @@ AM33XX_IOPAD(0x8f0, PIN_INPUT_PULLUP | MUX_MODE0) /* mmc0_dat3.mmc0_dat3 */ AM33XX_IOPAD(0x904, PIN_INPUT_PULLUP | MUX_MODE0) /* mmc0_cmd.mmc0_cmd */ AM33XX_IOPAD(0x900, PIN_INPUT_PULLUP | MUX_MODE0) /* mmc0_clk.mmc0_clk */ - AM33XX_IOPAD(0x9a0, PIN_INPUT | MUX_MODE4) /* mcasp0_aclkr.mmc0_sdwp */ >; }; diff --git a/arch/arm/boot/dts/am3517.dtsi b/arch/arm/boot/dts/am3517.dtsi index ca294914bbb1..23ea381d363f 100644 --- a/arch/arm/boot/dts/am3517.dtsi +++ b/arch/arm/boot/dts/am3517.dtsi @@ -39,6 +39,8 @@ ti,davinci-ctrl-ram-size = <0x2000>; ti,davinci-rmii-en = /bits/ 8 <1>; local-mac-address = [ 00 00 00 00 00 00 ]; + clocks = <&emac_ick>; + clock-names = "ick"; }; davinci_mdio: ethernet@5c030000 { @@ -49,6 +51,8 @@ bus_freq = <1000000>; #address-cells = <1>; #size-cells = <0>; + clocks = <&emac_fck>; + clock-names = "fck"; }; uart4: serial@4809e000 { @@ -87,6 +91,11 @@ }; }; +/* Table Table 5-79 of the TRM shows 480ab000 is reserved */ +&usb_otg_hs { + status = "disabled"; +}; + &iva { status = "disabled"; }; diff --git a/arch/arm/boot/dts/am437x-sk-evm.dts b/arch/arm/boot/dts/am437x-sk-evm.dts index 440351ad0b80..d4be3fd0b6f4 100644 --- a/arch/arm/boot/dts/am437x-sk-evm.dts +++ b/arch/arm/boot/dts/am437x-sk-evm.dts @@ -610,6 +610,8 @@ touchscreen-size-x = <480>; touchscreen-size-y = <272>; + + wakeup-source; }; tlv320aic3106: tlv320aic3106@1b { diff --git a/arch/arm/boot/dts/armada-385-synology-ds116.dts b/arch/arm/boot/dts/armada-385-synology-ds116.dts index 6782ce481ac9..d8769956cbfc 100644 --- a/arch/arm/boot/dts/armada-385-synology-ds116.dts +++ b/arch/arm/boot/dts/armada-385-synology-ds116.dts @@ -139,7 +139,7 @@ 3700 5 3900 6 4000 7>; - cooling-cells = <2>; + #cooling-cells = <2>; }; gpio-leds { diff --git a/arch/arm/boot/dts/armada-38x.dtsi b/arch/arm/boot/dts/armada-38x.dtsi index 18edc9bc7927..929459c42760 100644 --- a/arch/arm/boot/dts/armada-38x.dtsi +++ b/arch/arm/boot/dts/armada-38x.dtsi @@ -547,7 +547,7 @@ thermal: thermal@e8078 { compatible = "marvell,armada380-thermal"; - reg = <0xe4078 0x4>, <0xe4074 0x4>; + reg = <0xe4078 0x4>, <0xe4070 0x8>; status = "okay"; }; diff --git a/arch/arm/boot/dts/bcm-cygnus.dtsi b/arch/arm/boot/dts/bcm-cygnus.dtsi index 9fe4f5a6379e..2c4df2d2d4a6 100644 --- a/arch/arm/boot/dts/bcm-cygnus.dtsi +++ b/arch/arm/boot/dts/bcm-cygnus.dtsi @@ -216,7 +216,7 @@ reg = <0x18008000 0x100>; #address-cells = <1>; #size-cells = <0>; - interrupts = ; + interrupts = ; clock-frequency = <100000>; status = "disabled"; }; @@ -245,7 +245,7 @@ reg = <0x1800b000 0x100>; #address-cells = <1>; #size-cells = <0>; - interrupts = ; + interrupts = ; clock-frequency = <100000>; status = "disabled"; }; @@ -256,7 +256,7 @@ #interrupt-cells = <1>; interrupt-map-mask = <0 0 0 0>; - interrupt-map = <0 0 0 0 &gic GIC_SPI 100 IRQ_TYPE_NONE>; + interrupt-map = <0 0 0 0 &gic GIC_SPI 100 IRQ_TYPE_LEVEL_HIGH>; linux,pci-domain = <0>; @@ -278,10 +278,10 @@ compatible = "brcm,iproc-msi"; msi-controller; interrupt-parent = <&gic>; - interrupts = , - , - , - ; + interrupts = , + , + , + ; }; }; @@ -291,7 +291,7 @@ #interrupt-cells = <1>; interrupt-map-mask = <0 0 0 0>; - interrupt-map = <0 0 0 0 &gic GIC_SPI 106 IRQ_TYPE_NONE>; + interrupt-map = <0 0 0 0 &gic GIC_SPI 106 
IRQ_TYPE_LEVEL_HIGH>; linux,pci-domain = <1>; @@ -313,10 +313,10 @@ compatible = "brcm,iproc-msi"; msi-controller; interrupt-parent = <&gic>; - interrupts = , - , - , - ; + interrupts = , + , + , + ; }; }; diff --git a/arch/arm/boot/dts/bcm-hr2.dtsi b/arch/arm/boot/dts/bcm-hr2.dtsi index 3f9cedd8011f..3084a7c95733 100644 --- a/arch/arm/boot/dts/bcm-hr2.dtsi +++ b/arch/arm/boot/dts/bcm-hr2.dtsi @@ -264,7 +264,7 @@ reg = <0x38000 0x50>; #address-cells = <1>; #size-cells = <0>; - interrupts = ; + interrupts = ; clock-frequency = <100000>; }; @@ -279,7 +279,7 @@ reg = <0x3b000 0x50>; #address-cells = <1>; #size-cells = <0>; - interrupts = ; + interrupts = ; clock-frequency = <100000>; }; }; @@ -300,7 +300,7 @@ #interrupt-cells = <1>; interrupt-map-mask = <0 0 0 0>; - interrupt-map = <0 0 0 0 &gic GIC_SPI 186 IRQ_TYPE_NONE>; + interrupt-map = <0 0 0 0 &gic GIC_SPI 186 IRQ_TYPE_LEVEL_HIGH>; linux,pci-domain = <0>; @@ -322,10 +322,10 @@ compatible = "brcm,iproc-msi"; msi-controller; interrupt-parent = <&gic>; - interrupts = , - , - , - ; + interrupts = , + , + , + ; brcm,pcie-msi-inten; }; }; @@ -336,7 +336,7 @@ #interrupt-cells = <1>; interrupt-map-mask = <0 0 0 0>; - interrupt-map = <0 0 0 0 &gic GIC_SPI 192 IRQ_TYPE_NONE>; + interrupt-map = <0 0 0 0 &gic GIC_SPI 192 IRQ_TYPE_LEVEL_HIGH>; linux,pci-domain = <1>; @@ -358,10 +358,10 @@ compatible = "brcm,iproc-msi"; msi-controller; interrupt-parent = <&gic>; - interrupts = , - , - , - ; + interrupts = , + , + , + ; brcm,pcie-msi-inten; }; }; diff --git a/arch/arm/boot/dts/bcm-nsp.dtsi b/arch/arm/boot/dts/bcm-nsp.dtsi index dcc55aa84583..09ba85046322 100644 --- a/arch/arm/boot/dts/bcm-nsp.dtsi +++ b/arch/arm/boot/dts/bcm-nsp.dtsi @@ -391,7 +391,7 @@ reg = <0x38000 0x50>; #address-cells = <1>; #size-cells = <0>; - interrupts = ; + interrupts = ; clock-frequency = <100000>; dma-coherent; status = "disabled"; @@ -496,7 +496,7 @@ #interrupt-cells = <1>; interrupt-map-mask = <0 0 0 0>; - interrupt-map = <0 0 0 0 &gic GIC_SPI 131 IRQ_TYPE_NONE>; + interrupt-map = <0 0 0 0 &gic GIC_SPI 131 IRQ_TYPE_LEVEL_HIGH>; linux,pci-domain = <0>; @@ -519,10 +519,10 @@ compatible = "brcm,iproc-msi"; msi-controller; interrupt-parent = <&gic>; - interrupts = , - , - , - ; + interrupts = , + , + , + ; brcm,pcie-msi-inten; }; }; @@ -533,7 +533,7 @@ #interrupt-cells = <1>; interrupt-map-mask = <0 0 0 0>; - interrupt-map = <0 0 0 0 &gic GIC_SPI 137 IRQ_TYPE_NONE>; + interrupt-map = <0 0 0 0 &gic GIC_SPI 137 IRQ_TYPE_LEVEL_HIGH>; linux,pci-domain = <1>; @@ -556,10 +556,10 @@ compatible = "brcm,iproc-msi"; msi-controller; interrupt-parent = <&gic>; - interrupts = , - , - , - ; + interrupts = , + , + , + ; brcm,pcie-msi-inten; }; }; @@ -570,7 +570,7 @@ #interrupt-cells = <1>; interrupt-map-mask = <0 0 0 0>; - interrupt-map = <0 0 0 0 &gic GIC_SPI 143 IRQ_TYPE_NONE>; + interrupt-map = <0 0 0 0 &gic GIC_SPI 143 IRQ_TYPE_LEVEL_HIGH>; linux,pci-domain = <2>; @@ -593,10 +593,10 @@ compatible = "brcm,iproc-msi"; msi-controller; interrupt-parent = <&gic>; - interrupts = , - , - , - ; + interrupts = , + , + , + ; brcm,pcie-msi-inten; }; }; diff --git a/arch/arm/boot/dts/bcm5301x.dtsi b/arch/arm/boot/dts/bcm5301x.dtsi index 9a076c409f4e..ef995e50ee12 100644 --- a/arch/arm/boot/dts/bcm5301x.dtsi +++ b/arch/arm/boot/dts/bcm5301x.dtsi @@ -365,7 +365,7 @@ i2c0: i2c@18009000 { compatible = "brcm,iproc-i2c"; reg = <0x18009000 0x50>; - interrupts = ; + interrupts = ; #address-cells = <1>; #size-cells = <0>; clock-frequency = <100000>; diff --git a/arch/arm/boot/dts/da850.dtsi 
b/arch/arm/boot/dts/da850.dtsi index f6f1597b03df..0f4f817a9e22 100644 --- a/arch/arm/boot/dts/da850.dtsi +++ b/arch/arm/boot/dts/da850.dtsi @@ -549,11 +549,7 @@ gpio-controller; #gpio-cells = <2>; reg = <0x226000 0x1000>; - interrupts = <42 IRQ_TYPE_EDGE_BOTH - 43 IRQ_TYPE_EDGE_BOTH 44 IRQ_TYPE_EDGE_BOTH - 45 IRQ_TYPE_EDGE_BOTH 46 IRQ_TYPE_EDGE_BOTH - 47 IRQ_TYPE_EDGE_BOTH 48 IRQ_TYPE_EDGE_BOTH - 49 IRQ_TYPE_EDGE_BOTH 50 IRQ_TYPE_EDGE_BOTH>; + interrupts = <42 43 44 45 46 47 48 49 50>; ti,ngpio = <144>; ti,davinci-gpio-unbanked = <0>; status = "disabled"; diff --git a/arch/arm/boot/dts/dra7.dtsi b/arch/arm/boot/dts/dra7.dtsi index 9dcd14edc202..e03495a799ce 100644 --- a/arch/arm/boot/dts/dra7.dtsi +++ b/arch/arm/boot/dts/dra7.dtsi @@ -1580,7 +1580,6 @@ dr_mode = "otg"; snps,dis_u3_susphy_quirk; snps,dis_u2_susphy_quirk; - snps,dis_metastability_quirk; }; }; @@ -1608,6 +1607,7 @@ dr_mode = "otg"; snps,dis_u3_susphy_quirk; snps,dis_u2_susphy_quirk; + snps,dis_metastability_quirk; }; }; diff --git a/arch/arm/boot/dts/imx51-zii-rdu1.dts b/arch/arm/boot/dts/imx51-zii-rdu1.dts index df9eca94d812..8a878687197b 100644 --- a/arch/arm/boot/dts/imx51-zii-rdu1.dts +++ b/arch/arm/boot/dts/imx51-zii-rdu1.dts @@ -770,7 +770,7 @@ pinctrl_ts: tsgrp { fsl,pins = < - MX51_PAD_CSI1_D8__GPIO3_12 0x85 + MX51_PAD_CSI1_D8__GPIO3_12 0x04 MX51_PAD_CSI1_D9__GPIO3_13 0x85 >; }; diff --git a/arch/arm/boot/dts/imx6q.dtsi b/arch/arm/boot/dts/imx6q.dtsi index 70483ce72ba6..77f8f030dd07 100644 --- a/arch/arm/boot/dts/imx6q.dtsi +++ b/arch/arm/boot/dts/imx6q.dtsi @@ -90,7 +90,7 @@ clocks = <&clks IMX6Q_CLK_ECSPI5>, <&clks IMX6Q_CLK_ECSPI5>; clock-names = "ipg", "per"; - dmas = <&sdma 11 7 1>, <&sdma 12 7 2>; + dmas = <&sdma 11 8 1>, <&sdma 12 8 2>; dma-names = "rx", "tx"; status = "disabled"; }; diff --git a/arch/arm/boot/dts/imx6qdl-zii-rdu2.dtsi b/arch/arm/boot/dts/imx6qdl-zii-rdu2.dtsi index 19a075aee19e..f14df0baf2ab 100644 --- a/arch/arm/boot/dts/imx6qdl-zii-rdu2.dtsi +++ b/arch/arm/boot/dts/imx6qdl-zii-rdu2.dtsi @@ -692,7 +692,7 @@ dsa,member = <0 0>; eeprom-length = <512>; interrupt-parent = <&gpio6>; - interrupts = <3 IRQ_TYPE_EDGE_FALLING>; + interrupts = <3 IRQ_TYPE_LEVEL_LOW>; interrupt-controller; #interrupt-cells = <2>; diff --git a/arch/arm/boot/dts/imx6sx.dtsi b/arch/arm/boot/dts/imx6sx.dtsi index d8b94f47498b..4e4a55aad5c9 100644 --- a/arch/arm/boot/dts/imx6sx.dtsi +++ b/arch/arm/boot/dts/imx6sx.dtsi @@ -1344,7 +1344,7 @@ ranges = <0x81000000 0 0 0x08f80000 0 0x00010000 /* downstream I/O */ 0x82000000 0 0x08000000 0x08000000 0 0x00f00000>; /* non-prefetchable memory */ num-lanes = <1>; - interrupts = ; + interrupts = ; interrupt-names = "msi"; #interrupt-cells = <1>; interrupt-map-mask = <0 0 0 0x7>; diff --git a/arch/arm/boot/dts/omap4-droid4-xt894.dts b/arch/arm/boot/dts/omap4-droid4-xt894.dts index bdf73cbcec3a..e7c3c563ff8f 100644 --- a/arch/arm/boot/dts/omap4-droid4-xt894.dts +++ b/arch/arm/boot/dts/omap4-droid4-xt894.dts @@ -159,13 +159,7 @@ dais = <&mcbsp2_port>, <&mcbsp3_port>; }; -}; - -&dss { - status = "okay"; -}; -&gpio6 { pwm8: dmtimer-pwm-8 { pinctrl-names = "default"; pinctrl-0 = <&vibrator_direction_pin>; @@ -192,7 +186,10 @@ pwm-names = "enable", "direction"; direction-duty-cycle-ns = <10000000>; }; +}; +&dss { + status = "okay"; }; &dsi1 { diff --git a/arch/arm/boot/dts/socfpga.dtsi b/arch/arm/boot/dts/socfpga.dtsi index 486d4e7433ed..b38f8c240558 100644 --- a/arch/arm/boot/dts/socfpga.dtsi +++ b/arch/arm/boot/dts/socfpga.dtsi @@ -748,13 +748,13 @@ nand0: nand@ff900000 { #address-cells = 
<0x1>; #size-cells = <0x1>; - compatible = "denali,denali-nand-dt"; + compatible = "altr,socfpga-denali-nand"; reg = <0xff900000 0x100000>, <0xffb80000 0x10000>; reg-names = "nand_data", "denali_reg"; interrupts = <0x0 0x90 0x4>; dma-mask = <0xffffffff>; - clocks = <&nand_clk>; + clocks = <&nand_x_clk>; status = "disabled"; }; diff --git a/arch/arm/boot/dts/socfpga_arria10.dtsi b/arch/arm/boot/dts/socfpga_arria10.dtsi index bead79e4b2aa..791ca15c799e 100644 --- a/arch/arm/boot/dts/socfpga_arria10.dtsi +++ b/arch/arm/boot/dts/socfpga_arria10.dtsi @@ -593,8 +593,7 @@ #size-cells = <0>; reg = <0xffda5000 0x100>; interrupts = <0 102 4>; - num-chipselect = <4>; - bus-num = <0>; + num-cs = <4>; /*32bit_access;*/ tx-dma-channel = <&pdma 16>; rx-dma-channel = <&pdma 17>; @@ -633,7 +632,7 @@ nand: nand@ffb90000 { #address-cells = <1>; #size-cells = <1>; - compatible = "denali,denali-nand-dt", "altr,socfpga-denali-nand"; + compatible = "altr,socfpga-denali-nand"; reg = <0xffb90000 0x72000>, <0xffb80000 0x10000>; reg-names = "nand_data", "denali_reg"; diff --git a/arch/arm/common/Makefile b/arch/arm/common/Makefile index 1e9f7af8f70f..3157be413297 100644 --- a/arch/arm/common/Makefile +++ b/arch/arm/common/Makefile @@ -10,7 +10,7 @@ obj-$(CONFIG_DMABOUNCE) += dmabounce.o obj-$(CONFIG_SHARP_LOCOMO) += locomo.o obj-$(CONFIG_SHARP_PARAM) += sharpsl_param.o obj-$(CONFIG_SHARP_SCOOP) += scoop.o -obj-$(CONFIG_SMP) += secure_cntvoff.o +obj-$(CONFIG_CPU_V7) += secure_cntvoff.o obj-$(CONFIG_PCI_HOST_ITE8152) += it8152.o obj-$(CONFIG_MCPM) += mcpm_head.o mcpm_entry.o mcpm_platsmp.o vlock.o CFLAGS_REMOVE_mcpm_entry.o = -pg diff --git a/arch/arm/configs/imx_v4_v5_defconfig b/arch/arm/configs/imx_v4_v5_defconfig index 054591dc9a00..4cd2f4a2bff4 100644 --- a/arch/arm/configs/imx_v4_v5_defconfig +++ b/arch/arm/configs/imx_v4_v5_defconfig @@ -141,9 +141,11 @@ CONFIG_USB_STORAGE=y CONFIG_USB_CHIPIDEA=y CONFIG_USB_CHIPIDEA_UDC=y CONFIG_USB_CHIPIDEA_HOST=y +CONFIG_USB_CHIPIDEA_ULPI=y CONFIG_NOP_USB_XCEIV=y CONFIG_USB_GADGET=y CONFIG_USB_ETH=m +CONFIG_USB_ULPI_BUS=y CONFIG_MMC=y CONFIG_MMC_SDHCI=y CONFIG_MMC_SDHCI_PLTFM=y diff --git a/arch/arm/configs/imx_v6_v7_defconfig b/arch/arm/configs/imx_v6_v7_defconfig index f70507ab91ee..200ebda47e0c 100644 --- a/arch/arm/configs/imx_v6_v7_defconfig +++ b/arch/arm/configs/imx_v6_v7_defconfig @@ -302,6 +302,7 @@ CONFIG_USB_STORAGE=y CONFIG_USB_CHIPIDEA=y CONFIG_USB_CHIPIDEA_UDC=y CONFIG_USB_CHIPIDEA_HOST=y +CONFIG_USB_CHIPIDEA_ULPI=y CONFIG_USB_SERIAL=m CONFIG_USB_SERIAL_GENERIC=y CONFIG_USB_SERIAL_FTDI_SIO=m @@ -338,6 +339,7 @@ CONFIG_USB_GADGETFS=m CONFIG_USB_FUNCTIONFS=m CONFIG_USB_MASS_STORAGE=m CONFIG_USB_G_SERIAL=m +CONFIG_USB_ULPI_BUS=y CONFIG_MMC=y CONFIG_MMC_SDHCI=y CONFIG_MMC_SDHCI_PLTFM=y diff --git a/arch/arm/configs/multi_v7_defconfig b/arch/arm/configs/multi_v7_defconfig index 7e1c543162c3..8f6be1982545 100644 --- a/arch/arm/configs/multi_v7_defconfig +++ b/arch/arm/configs/multi_v7_defconfig @@ -1,5 +1,4 @@ CONFIG_SYSVIPC=y -CONFIG_FHANDLE=y CONFIG_NO_HZ=y CONFIG_HIGH_RES_TIMERS=y CONFIG_CGROUPS=y @@ -10,20 +9,10 @@ CONFIG_MODULES=y CONFIG_MODULE_UNLOAD=y CONFIG_PARTITION_ADVANCED=y CONFIG_CMDLINE_PARTITION=y -CONFIG_ARCH_MULTI_V7=y -# CONFIG_ARCH_MULTI_V5 is not set -# CONFIG_ARCH_MULTI_V4 is not set CONFIG_ARCH_VIRT=y CONFIG_ARCH_ALPINE=y CONFIG_ARCH_ARTPEC=y CONFIG_MACH_ARTPEC6=y -CONFIG_ARCH_MVEBU=y -CONFIG_MACH_ARMADA_370=y -CONFIG_MACH_ARMADA_375=y -CONFIG_MACH_ARMADA_38X=y -CONFIG_MACH_ARMADA_39X=y -CONFIG_MACH_ARMADA_XP=y -CONFIG_MACH_DOVE=y 
CONFIG_ARCH_AT91=y CONFIG_SOC_SAMA5D2=y CONFIG_SOC_SAMA5D3=y @@ -32,9 +21,9 @@ CONFIG_ARCH_BCM=y CONFIG_ARCH_BCM_CYGNUS=y CONFIG_ARCH_BCM_HR2=y CONFIG_ARCH_BCM_NSP=y -CONFIG_ARCH_BCM_21664=y -CONFIG_ARCH_BCM_281XX=y CONFIG_ARCH_BCM_5301X=y +CONFIG_ARCH_BCM_281XX=y +CONFIG_ARCH_BCM_21664=y CONFIG_ARCH_BCM2835=y CONFIG_ARCH_BCM_63XX=y CONFIG_ARCH_BRCMSTB=y @@ -43,14 +32,14 @@ CONFIG_MACH_BERLIN_BG2=y CONFIG_MACH_BERLIN_BG2CD=y CONFIG_MACH_BERLIN_BG2Q=y CONFIG_ARCH_DIGICOLOR=y +CONFIG_ARCH_EXYNOS=y +CONFIG_EXYNOS5420_MCPM=y CONFIG_ARCH_HIGHBANK=y CONFIG_ARCH_HISI=y CONFIG_ARCH_HI3xxx=y -CONFIG_ARCH_HIX5HD2=y CONFIG_ARCH_HIP01=y CONFIG_ARCH_HIP04=y -CONFIG_ARCH_KEYSTONE=y -CONFIG_ARCH_MESON=y +CONFIG_ARCH_HIX5HD2=y CONFIG_ARCH_MXC=y CONFIG_SOC_IMX50=y CONFIG_SOC_IMX51=y @@ -60,29 +49,30 @@ CONFIG_SOC_IMX6SL=y CONFIG_SOC_IMX6SX=y CONFIG_SOC_IMX6UL=y CONFIG_SOC_IMX7D=y -CONFIG_SOC_VF610=y CONFIG_SOC_LS1021A=y +CONFIG_SOC_VF610=y +CONFIG_ARCH_KEYSTONE=y +CONFIG_ARCH_MEDIATEK=y +CONFIG_ARCH_MESON=y +CONFIG_ARCH_MVEBU=y +CONFIG_MACH_ARMADA_370=y +CONFIG_MACH_ARMADA_375=y +CONFIG_MACH_ARMADA_38X=y +CONFIG_MACH_ARMADA_39X=y +CONFIG_MACH_ARMADA_XP=y +CONFIG_MACH_DOVE=y CONFIG_ARCH_OMAP3=y CONFIG_ARCH_OMAP4=y CONFIG_SOC_OMAP5=y CONFIG_SOC_AM33XX=y CONFIG_SOC_AM43XX=y CONFIG_SOC_DRA7XX=y +CONFIG_ARCH_SIRF=y CONFIG_ARCH_QCOM=y -CONFIG_ARCH_MEDIATEK=y CONFIG_ARCH_MSM8X60=y CONFIG_ARCH_MSM8960=y CONFIG_ARCH_MSM8974=y CONFIG_ARCH_ROCKCHIP=y -CONFIG_ARCH_SOCFPGA=y -CONFIG_PLAT_SPEAR=y -CONFIG_ARCH_SPEAR13XX=y -CONFIG_MACH_SPEAR1310=y -CONFIG_MACH_SPEAR1340=y -CONFIG_ARCH_STI=y -CONFIG_ARCH_STM32=y -CONFIG_ARCH_EXYNOS=y -CONFIG_EXYNOS5420_MCPM=y CONFIG_ARCH_RENESAS=y CONFIG_ARCH_EMEV2=y CONFIG_ARCH_R7S72100=y @@ -99,40 +89,33 @@ CONFIG_ARCH_R8A7792=y CONFIG_ARCH_R8A7793=y CONFIG_ARCH_R8A7794=y CONFIG_ARCH_SH73A0=y +CONFIG_ARCH_SOCFPGA=y +CONFIG_PLAT_SPEAR=y +CONFIG_ARCH_SPEAR13XX=y +CONFIG_MACH_SPEAR1310=y +CONFIG_MACH_SPEAR1340=y +CONFIG_ARCH_STI=y +CONFIG_ARCH_STM32=y CONFIG_ARCH_SUNXI=y -CONFIG_ARCH_SIRF=y CONFIG_ARCH_TEGRA=y -CONFIG_ARCH_TEGRA_2x_SOC=y -CONFIG_ARCH_TEGRA_3x_SOC=y -CONFIG_ARCH_TEGRA_114_SOC=y -CONFIG_ARCH_TEGRA_124_SOC=y CONFIG_ARCH_UNIPHIER=y CONFIG_ARCH_U8500=y -CONFIG_MACH_HREFV60=y -CONFIG_MACH_SNOWBALL=y CONFIG_ARCH_VEXPRESS=y CONFIG_ARCH_VEXPRESS_TC2_PM=y CONFIG_ARCH_WM8850=y CONFIG_ARCH_ZYNQ=y -CONFIG_TRUSTED_FOUNDATIONS=y -CONFIG_PCI=y -CONFIG_PCI_HOST_GENERIC=y -CONFIG_PCI_DRA7XX=y -CONFIG_PCI_DRA7XX_EP=y -CONFIG_PCI_KEYSTONE=y -CONFIG_PCI_MSI=y +CONFIG_PCIEPORTBUS=y CONFIG_PCI_MVEBU=y CONFIG_PCI_TEGRA=y CONFIG_PCI_RCAR_GEN2=y CONFIG_PCIE_RCAR=y -CONFIG_PCIEPORTBUS=y +CONFIG_PCI_DRA7XX_EP=y +CONFIG_PCI_KEYSTONE=y CONFIG_PCI_ENDPOINT=y CONFIG_PCI_ENDPOINT_CONFIGFS=y CONFIG_PCI_EPF_TEST=m CONFIG_SMP=y CONFIG_NR_CPUS=16 -CONFIG_HIGHPTE=y -CONFIG_CMA=y CONFIG_SECCOMP=y CONFIG_ARM_APPENDED_DTB=y CONFIG_ARM_ATAG_DTB_COMPAT=y @@ -145,14 +128,14 @@ CONFIG_CPU_FREQ_GOV_POWERSAVE=m CONFIG_CPU_FREQ_GOV_USERSPACE=m CONFIG_CPU_FREQ_GOV_CONSERVATIVE=m CONFIG_CPU_FREQ_GOV_SCHEDUTIL=y +CONFIG_CPUFREQ_DT=y CONFIG_ARM_IMX6Q_CPUFREQ=y CONFIG_QORIQ_CPUFREQ=y CONFIG_CPU_IDLE=y CONFIG_ARM_CPUIDLE=y -CONFIG_NEON=y -CONFIG_KERNEL_MODE_NEON=y CONFIG_ARM_ZYNQ_CPUIDLE=y CONFIG_ARM_EXYNOS_CPUIDLE=y +CONFIG_KERNEL_MODE_NEON=y CONFIG_NET=y CONFIG_PACKET=y CONFIG_UNIX=y @@ -170,23 +153,13 @@ CONFIG_IPV6_MIP6=m CONFIG_IPV6_TUNNEL=m CONFIG_IPV6_MULTIPLE_TABLES=y CONFIG_NET_DSA=m -CONFIG_NET_SWITCHDEV=y CONFIG_CAN=y -CONFIG_CAN_RAW=y -CONFIG_CAN_BCM=y -CONFIG_CAN_DEV=y CONFIG_CAN_AT91=m CONFIG_CAN_FLEXCAN=m 
-CONFIG_CAN_RCAR=m +CONFIG_CAN_SUN4I=y CONFIG_CAN_XILINXCAN=y +CONFIG_CAN_RCAR=m CONFIG_CAN_MCP251X=y -CONFIG_NET_DSA_BCM_SF2=m -CONFIG_B53=m -CONFIG_B53_SPI_DRIVER=m -CONFIG_B53_MDIO_DRIVER=m -CONFIG_B53_MMAP_DRIVER=m -CONFIG_B53_SRAB_DRIVER=m -CONFIG_CAN_SUN4I=y CONFIG_BT=m CONFIG_BT_HCIUART=m CONFIG_BT_HCIUART_BCM=y @@ -199,11 +172,9 @@ CONFIG_RFKILL_INPUT=y CONFIG_RFKILL_GPIO=y CONFIG_DEVTMPFS=y CONFIG_DEVTMPFS_MOUNT=y -CONFIG_DMA_CMA=y CONFIG_CMA_SIZE_MBYTES=64 CONFIG_OMAP_OCP2SCP=y CONFIG_SIMPLE_PM_BUS=y -CONFIG_SUNXI_RSB=y CONFIG_MTD=y CONFIG_MTD_CMDLINE_PARTS=y CONFIG_MTD_BLOCK=y @@ -236,7 +207,6 @@ CONFIG_PCI_ENDPOINT_TEST=m CONFIG_EEPROM_AT24=y CONFIG_BLK_DEV_SD=y CONFIG_BLK_DEV_SR=y -CONFIG_SCSI_MULTI_LUN=y CONFIG_ATA=y CONFIG_SATA_AHCI=y CONFIG_SATA_AHCI_PLATFORM=y @@ -251,14 +221,20 @@ CONFIG_SATA_MV=y CONFIG_SATA_RCAR=y CONFIG_NETDEVICES=y CONFIG_VIRTIO_NET=y -CONFIG_HIX5HD2_GMAC=y +CONFIG_B53_SPI_DRIVER=m +CONFIG_B53_MDIO_DRIVER=m +CONFIG_B53_MMAP_DRIVER=m +CONFIG_B53_SRAB_DRIVER=m +CONFIG_NET_DSA_BCM_SF2=m CONFIG_SUN4I_EMAC=y -CONFIG_MACB=y CONFIG_BCMGENET=m CONFIG_BGMAC_BCMA=y CONFIG_SYSTEMPORT=m +CONFIG_MACB=y CONFIG_NET_CALXEDA_XGMAC=y CONFIG_GIANFAR=y +CONFIG_HIX5HD2_GMAC=y +CONFIG_E1000E=y CONFIG_IGB=y CONFIG_MV643XX_ETH=y CONFIG_MVNETA=y @@ -268,19 +244,17 @@ CONFIG_R8169=y CONFIG_SH_ETH=y CONFIG_SMSC911X=y CONFIG_STMMAC_ETH=y -CONFIG_STMMAC_PLATFORM=y CONFIG_DWMAC_DWC_QOS_ETH=y CONFIG_TI_CPSW=y CONFIG_XILINX_EMACLITE=y CONFIG_AT803X_PHY=y -CONFIG_MARVELL_PHY=y -CONFIG_SMSC_PHY=y CONFIG_BROADCOM_PHY=y CONFIG_ICPLUS_PHY=y -CONFIG_REALTEK_PHY=y +CONFIG_MARVELL_PHY=y CONFIG_MICREL_PHY=y -CONFIG_FIXED_PHY=y +CONFIG_REALTEK_PHY=y CONFIG_ROCKCHIP_PHY=y +CONFIG_SMSC_PHY=y CONFIG_USB_PEGASUS=y CONFIG_USB_RTL8152=m CONFIG_USB_LAN78XX=m @@ -288,29 +262,29 @@ CONFIG_USB_USBNET=y CONFIG_USB_NET_SMSC75XX=y CONFIG_USB_NET_SMSC95XX=y CONFIG_BRCMFMAC=m -CONFIG_RT2X00=m -CONFIG_RT2800USB=m CONFIG_MWIFIEX=m CONFIG_MWIFIEX_SDIO=m +CONFIG_RT2X00=m +CONFIG_RT2800USB=m CONFIG_INPUT_JOYDEV=y CONFIG_INPUT_EVDEV=y CONFIG_KEYBOARD_QT1070=m CONFIG_KEYBOARD_GPIO=y CONFIG_KEYBOARD_TEGRA=y -CONFIG_KEYBOARD_SPEAR=y +CONFIG_KEYBOARD_SAMSUNG=m CONFIG_KEYBOARD_ST_KEYSCAN=y +CONFIG_KEYBOARD_SPEAR=y CONFIG_KEYBOARD_CROS_EC=m -CONFIG_KEYBOARD_SAMSUNG=m CONFIG_MOUSE_PS2_ELANTECH=y CONFIG_MOUSE_CYAPA=m CONFIG_MOUSE_ELAN_I2C=y CONFIG_INPUT_TOUCHSCREEN=y CONFIG_TOUCHSCREEN_ATMEL_MXT=m CONFIG_TOUCHSCREEN_MMS114=m +CONFIG_TOUCHSCREEN_WM97XX=m CONFIG_TOUCHSCREEN_ST1232=m CONFIG_TOUCHSCREEN_STMPE=y CONFIG_TOUCHSCREEN_SUN4I=y -CONFIG_TOUCHSCREEN_WM97XX=m CONFIG_INPUT_MISC=y CONFIG_INPUT_MAX77693_HAPTIC=m CONFIG_INPUT_MAX8997_HAPTIC=m @@ -327,13 +301,12 @@ CONFIG_SERIAL_8250_DW=y CONFIG_SERIAL_8250_EM=y CONFIG_SERIAL_8250_MT6577=y CONFIG_SERIAL_8250_UNIPHIER=y +CONFIG_SERIAL_OF_PLATFORM=y CONFIG_SERIAL_AMBA_PL011=y CONFIG_SERIAL_AMBA_PL011_CONSOLE=y CONFIG_SERIAL_ATMEL=y CONFIG_SERIAL_ATMEL_CONSOLE=y CONFIG_SERIAL_ATMEL_TTYAT=y -CONFIG_SERIAL_BCM63XX=y -CONFIG_SERIAL_BCM63XX_CONSOLE=y CONFIG_SERIAL_MESON=y CONFIG_SERIAL_MESON_CONSOLE=y CONFIG_SERIAL_SAMSUNG=y @@ -345,15 +318,14 @@ CONFIG_SERIAL_IMX=y CONFIG_SERIAL_IMX_CONSOLE=y CONFIG_SERIAL_SH_SCI=y CONFIG_SERIAL_SH_SCI_NR_UARTS=20 -CONFIG_SERIAL_SH_SCI_CONSOLE=y -CONFIG_SERIAL_SH_SCI_DMA=y CONFIG_SERIAL_MSM=y CONFIG_SERIAL_MSM_CONSOLE=y CONFIG_SERIAL_VT8500=y CONFIG_SERIAL_VT8500_CONSOLE=y -CONFIG_SERIAL_OF_PLATFORM=y CONFIG_SERIAL_OMAP=y CONFIG_SERIAL_OMAP_CONSOLE=y +CONFIG_SERIAL_BCM63XX=y +CONFIG_SERIAL_BCM63XX_CONSOLE=y CONFIG_SERIAL_XILINX_PS_UART=y 
CONFIG_SERIAL_XILINX_PS_UART_CONSOLE=y CONFIG_SERIAL_FSL_LPUART=y @@ -365,12 +337,10 @@ CONFIG_SERIAL_ST_ASC_CONSOLE=y CONFIG_SERIAL_STM32=y CONFIG_SERIAL_STM32_CONSOLE=y CONFIG_SERIAL_DEV_BUS=y -CONFIG_HVC_DRIVER=y CONFIG_VIRTIO_CONSOLE=y +CONFIG_HW_RANDOM=y +CONFIG_HW_RANDOM_ST=y CONFIG_I2C_CHARDEV=y -CONFIG_I2C_DAVINCI=y -CONFIG_I2C_MESON=y -CONFIG_I2C_MUX=y CONFIG_I2C_ARB_GPIO_CHALLENGE=m CONFIG_I2C_MUX_PCA954x=y CONFIG_I2C_MUX_PINCTRL=y @@ -378,12 +348,13 @@ CONFIG_I2C_DEMUX_PINCTRL=y CONFIG_I2C_AT91=m CONFIG_I2C_BCM2835=y CONFIG_I2C_CADENCE=y +CONFIG_I2C_DAVINCI=y CONFIG_I2C_DESIGNWARE_PLATFORM=y CONFIG_I2C_DIGICOLOR=m CONFIG_I2C_EMEV2=m CONFIG_I2C_GPIO=m -CONFIG_I2C_EXYNOS5=y CONFIG_I2C_IMX=y +CONFIG_I2C_MESON=y CONFIG_I2C_MV64XXX=y CONFIG_I2C_RIIC=y CONFIG_I2C_RK3X=y @@ -427,7 +398,6 @@ CONFIG_SPI_SPIDEV=y CONFIG_SPMI=y CONFIG_PINCTRL_AS3722=y CONFIG_PINCTRL_PALMAS=y -CONFIG_PINCTRL_BCM2835=y CONFIG_PINCTRL_APQ8064=y CONFIG_PINCTRL_APQ8084=y CONFIG_PINCTRL_IPQ8064=y @@ -437,25 +407,33 @@ CONFIG_PINCTRL_MSM8X74=y CONFIG_PINCTRL_MSM8916=y CONFIG_PINCTRL_QCOM_SPMI_PMIC=y CONFIG_PINCTRL_QCOM_SSBI_PMIC=y -CONFIG_GPIO_GENERIC_PLATFORM=y CONFIG_GPIO_DAVINCI=y CONFIG_GPIO_DWAPB=y CONFIG_GPIO_EM=y CONFIG_GPIO_RCAR=y +CONFIG_GPIO_SYSCON=y CONFIG_GPIO_UNIPHIER=y CONFIG_GPIO_XILINX=y CONFIG_GPIO_ZYNQ=y CONFIG_GPIO_PCA953X=y CONFIG_GPIO_PCA953X_IRQ=y CONFIG_GPIO_PCF857X=y -CONFIG_GPIO_TWL4030=y CONFIG_GPIO_PALMAS=y -CONFIG_GPIO_SYSCON=y CONFIG_GPIO_TPS6586X=y CONFIG_GPIO_TPS65910=y +CONFIG_GPIO_TWL4030=y +CONFIG_POWER_AVS=y +CONFIG_ROCKCHIP_IODOMAIN=y +CONFIG_POWER_RESET_AS3722=y +CONFIG_POWER_RESET_GPIO=y +CONFIG_POWER_RESET_GPIO_RESTART=y +CONFIG_POWER_RESET_ST=y +CONFIG_POWER_RESET_KEYSTONE=y +CONFIG_POWER_RESET_RMOBILE=y CONFIG_BATTERY_ACT8945A=y CONFIG_BATTERY_CPCAP=m CONFIG_BATTERY_SBS=y +CONFIG_AXP20X_POWER=m CONFIG_BATTERY_MAX17040=m CONFIG_BATTERY_MAX17042=m CONFIG_CHARGER_CPCAP=m @@ -464,15 +442,6 @@ CONFIG_CHARGER_MAX77693=m CONFIG_CHARGER_MAX8997=m CONFIG_CHARGER_MAX8998=m CONFIG_CHARGER_TPS65090=y -CONFIG_AXP20X_POWER=m -CONFIG_POWER_RESET_AS3722=y -CONFIG_POWER_RESET_GPIO=y -CONFIG_POWER_RESET_GPIO_RESTART=y -CONFIG_POWER_RESET_KEYSTONE=y -CONFIG_POWER_RESET_RMOBILE=y -CONFIG_POWER_RESET_ST=y -CONFIG_POWER_AVS=y -CONFIG_ROCKCHIP_IODOMAIN=y CONFIG_SENSORS_IIO_HWMON=y CONFIG_SENSORS_LM90=y CONFIG_SENSORS_LM95245=y @@ -480,14 +449,12 @@ CONFIG_SENSORS_NTC_THERMISTOR=m CONFIG_SENSORS_PWM_FAN=m CONFIG_SENSORS_INA2XX=m CONFIG_CPU_THERMAL=y -CONFIG_BCM2835_THERMAL=m -CONFIG_BRCMSTB_THERMAL=m CONFIG_IMX_THERMAL=y CONFIG_ROCKCHIP_THERMAL=y CONFIG_RCAR_THERMAL=y CONFIG_ARMADA_THERMAL=y -CONFIG_DAVINCI_WATCHDOG=m -CONFIG_EXYNOS_THERMAL=m +CONFIG_BCM2835_THERMAL=m +CONFIG_BRCMSTB_THERMAL=m CONFIG_ST_THERMAL_MEMMAP=y CONFIG_WATCHDOG=y CONFIG_DA9063_WATCHDOG=m @@ -495,20 +462,24 @@ CONFIG_XILINX_WATCHDOG=y CONFIG_ARM_SP805_WATCHDOG=y CONFIG_AT91SAM9X_WATCHDOG=y CONFIG_SAMA5D4_WATCHDOG=y +CONFIG_DW_WATCHDOG=y +CONFIG_DAVINCI_WATCHDOG=m CONFIG_ORION_WATCHDOG=y CONFIG_RN5T618_WATCHDOG=y -CONFIG_ST_LPC_WATCHDOG=y CONFIG_SUNXI_WATCHDOG=y CONFIG_IMX2_WDT=y +CONFIG_ST_LPC_WATCHDOG=y CONFIG_TEGRA_WATCHDOG=m CONFIG_MESON_WATCHDOG=y -CONFIG_DW_WATCHDOG=y CONFIG_DIGICOLOR_WATCHDOG=y CONFIG_RENESAS_WDT=m -CONFIG_BCM2835_WDT=y CONFIG_BCM47XX_WDT=y -CONFIG_BCM7038_WDT=m +CONFIG_BCM2835_WDT=y CONFIG_BCM_KONA_WDT=y +CONFIG_BCM7038_WDT=m +CONFIG_BCMA_HOST_SOC=y +CONFIG_BCMA_DRIVER_GMAC_CMN=y +CONFIG_BCMA_DRIVER_GPIO=y CONFIG_MFD_ACT8945A=y CONFIG_MFD_AS3711=y CONFIG_MFD_AS3722=y @@ -516,7 +487,6 @@ 
CONFIG_MFD_ATMEL_FLEXCOM=y CONFIG_MFD_ATMEL_HLCDC=m CONFIG_MFD_BCM590XX=y CONFIG_MFD_AC100=y -CONFIG_MFD_AXP20X=y CONFIG_MFD_AXP20X_I2C=y CONFIG_MFD_AXP20X_RSB=y CONFIG_MFD_CROS_EC=m @@ -529,11 +499,11 @@ CONFIG_MFD_MAX77693=m CONFIG_MFD_MAX8907=y CONFIG_MFD_MAX8997=y CONFIG_MFD_MAX8998=y -CONFIG_MFD_RK808=y CONFIG_MFD_CPCAP=y CONFIG_MFD_PM8XXX=y CONFIG_MFD_QCOM_RPM=y CONFIG_MFD_SPMI_PMIC=y +CONFIG_MFD_RK808=y CONFIG_MFD_RN5T618=y CONFIG_MFD_SEC_CORE=y CONFIG_MFD_STMPE=y @@ -543,10 +513,10 @@ CONFIG_MFD_TPS65217=y CONFIG_MFD_TPS65218=y CONFIG_MFD_TPS6586X=y CONFIG_MFD_TPS65910=y -CONFIG_REGULATOR_ACT8945A=y -CONFIG_REGULATOR_AB8500=y CONFIG_REGULATOR_ACT8865=y +CONFIG_REGULATOR_ACT8945A=y CONFIG_REGULATOR_ANATOP=y +CONFIG_REGULATOR_AB8500=y CONFIG_REGULATOR_AS3711=y CONFIG_REGULATOR_AS3722=y CONFIG_REGULATOR_AXP20X=y @@ -554,10 +524,7 @@ CONFIG_REGULATOR_BCM590XX=y CONFIG_REGULATOR_CPCAP=y CONFIG_REGULATOR_DA9210=y CONFIG_REGULATOR_FAN53555=y -CONFIG_REGULATOR_RK808=y CONFIG_REGULATOR_GPIO=y -CONFIG_MFD_SYSCON=y -CONFIG_POWER_RESET_SYSCON=y CONFIG_REGULATOR_LP872X=y CONFIG_REGULATOR_MAX14577=m CONFIG_REGULATOR_MAX8907=y @@ -571,7 +538,8 @@ CONFIG_REGULATOR_PALMAS=y CONFIG_REGULATOR_PBIAS=y CONFIG_REGULATOR_PWM=y CONFIG_REGULATOR_QCOM_RPM=y -CONFIG_REGULATOR_QCOM_SMD_RPM=y +CONFIG_REGULATOR_QCOM_SMD_RPM=m +CONFIG_REGULATOR_RK808=y CONFIG_REGULATOR_RN5T618=y CONFIG_REGULATOR_S2MPS11=y CONFIG_REGULATOR_S5M8767=y @@ -592,18 +560,17 @@ CONFIG_MEDIA_CEC_SUPPORT=y CONFIG_MEDIA_CONTROLLER=y CONFIG_VIDEO_V4L2_SUBDEV_API=y CONFIG_MEDIA_USB_SUPPORT=y -CONFIG_USB_VIDEO_CLASS=y -CONFIG_USB_GSPCA=y +CONFIG_USB_VIDEO_CLASS=m CONFIG_V4L_PLATFORM_DRIVERS=y CONFIG_SOC_CAMERA=m CONFIG_SOC_CAMERA_PLATFORM=m -CONFIG_VIDEO_RCAR_VIN=m -CONFIG_VIDEO_ATMEL_ISI=m CONFIG_VIDEO_SAMSUNG_EXYNOS4_IS=m CONFIG_VIDEO_S5P_FIMC=m CONFIG_VIDEO_S5P_MIPI_CSIS=m CONFIG_VIDEO_EXYNOS_FIMC_LITE=m CONFIG_VIDEO_EXYNOS4_FIMC_IS=m +CONFIG_VIDEO_RCAR_VIN=m +CONFIG_VIDEO_ATMEL_ISI=m CONFIG_V4L_MEM2MEM_DRIVERS=y CONFIG_VIDEO_SAMSUNG_S5P_JPEG=m CONFIG_VIDEO_SAMSUNG_S5P_MFC=m @@ -614,19 +581,15 @@ CONFIG_VIDEO_STI_DELTA=m CONFIG_VIDEO_RENESAS_JPU=m CONFIG_VIDEO_RENESAS_VSP1=m CONFIG_V4L_TEST_DRIVERS=y +CONFIG_VIDEO_VIVID=m CONFIG_CEC_PLATFORM_DRIVERS=y CONFIG_VIDEO_SAMSUNG_S5P_CEC=m # CONFIG_MEDIA_SUBDRV_AUTOSELECT is not set CONFIG_VIDEO_ADV7180=m CONFIG_VIDEO_ML86V7667=m CONFIG_DRM=y -CONFIG_DRM_I2C_ADV7511=m -CONFIG_DRM_I2C_ADV7511_AUDIO=y # CONFIG_DRM_I2C_CH7006 is not set # CONFIG_DRM_I2C_SIL164 is not set -CONFIG_DRM_DUMB_VGA_DAC=m -CONFIG_DRM_NXP_PTN3460=m -CONFIG_DRM_PARADE_PS8622=m CONFIG_DRM_NOUVEAU=m CONFIG_DRM_EXYNOS=m CONFIG_DRM_EXYNOS_FIMD=y @@ -645,13 +608,18 @@ CONFIG_DRM_RCAR_LVDS=y CONFIG_DRM_SUN4I=m CONFIG_DRM_FSL_DCU=m CONFIG_DRM_TEGRA=y +CONFIG_DRM_PANEL_SIMPLE=y CONFIG_DRM_PANEL_SAMSUNG_LD9040=m CONFIG_DRM_PANEL_SAMSUNG_S6E63J0X03=m CONFIG_DRM_PANEL_SAMSUNG_S6E8AA0=m -CONFIG_DRM_PANEL_SIMPLE=y +CONFIG_DRM_DUMB_VGA_DAC=m +CONFIG_DRM_NXP_PTN3460=m +CONFIG_DRM_PARADE_PS8622=m CONFIG_DRM_SII9234=m +CONFIG_DRM_I2C_ADV7511=m +CONFIG_DRM_I2C_ADV7511_AUDIO=y CONFIG_DRM_STI=m -CONFIG_DRM_VC4=y +CONFIG_DRM_VC4=m CONFIG_DRM_ETNAVIV=m CONFIG_DRM_MXSFB=m CONFIG_FB_ARMCLCD=y @@ -659,8 +627,6 @@ CONFIG_FB_EFI=y CONFIG_FB_WM8505=y CONFIG_FB_SH_MOBILE_LCDC=y CONFIG_FB_SIMPLE=y -CONFIG_BACKLIGHT_LCD_SUPPORT=y -CONFIG_BACKLIGHT_CLASS_DEVICE=y CONFIG_LCD_PLATFORM=m CONFIG_BACKLIGHT_PWM=y CONFIG_BACKLIGHT_AS3711=y @@ -668,7 +634,6 @@ CONFIG_FRAMEBUFFER_CONSOLE=y CONFIG_FRAMEBUFFER_CONSOLE_ROTATION=y CONFIG_SOUND=m CONFIG_SND=m 
-CONFIG_SND_DYNAMIC_MINORS=y CONFIG_SND_HDA_TEGRA=m CONFIG_SND_HDA_INPUT_BEEP=y CONFIG_SND_HDA_PATCH_LOADER=y @@ -692,7 +657,7 @@ CONFIG_SND_SOC_SNOW=m CONFIG_SND_SOC_ODROID=m CONFIG_SND_SOC_SH4_FSI=m CONFIG_SND_SOC_RCAR=m -CONFIG_SND_SIMPLE_SCU_CARD=m +CONFIG_SND_SOC_STI=m CONFIG_SND_SUN4I_CODEC=m CONFIG_SND_SOC_TEGRA=m CONFIG_SND_SOC_TEGRA20_I2S=m @@ -703,31 +668,25 @@ CONFIG_SND_SOC_TEGRA_WM8903=m CONFIG_SND_SOC_TEGRA_WM9712=m CONFIG_SND_SOC_TEGRA_TRIMSLICE=m CONFIG_SND_SOC_TEGRA_ALC5632=m -CONFIG_SND_SOC_CPCAP=m CONFIG_SND_SOC_TEGRA_MAX98090=m CONFIG_SND_SOC_AK4642=m +CONFIG_SND_SOC_CPCAP=m CONFIG_SND_SOC_SGTL5000=m CONFIG_SND_SOC_SPDIF=m -CONFIG_SND_SOC_WM8978=m -CONFIG_SND_SOC_STI=m CONFIG_SND_SOC_STI_SAS=m -CONFIG_SND_SIMPLE_CARD=m +CONFIG_SND_SOC_WM8978=m +CONFIG_SND_SIMPLE_SCU_CARD=m CONFIG_USB=y CONFIG_USB_OTG=y CONFIG_USB_XHCI_HCD=y CONFIG_USB_XHCI_MVEBU=y -CONFIG_USB_XHCI_RCAR=m CONFIG_USB_XHCI_TEGRA=m CONFIG_USB_EHCI_HCD=y -CONFIG_USB_EHCI_MSM=m -CONFIG_USB_EHCI_EXYNOS=y -CONFIG_USB_EHCI_TEGRA=y CONFIG_USB_EHCI_HCD_STI=y -CONFIG_USB_EHCI_HCD_PLATFORM=y -CONFIG_USB_ISP1760=y +CONFIG_USB_EHCI_TEGRA=y +CONFIG_USB_EHCI_EXYNOS=y CONFIG_USB_OHCI_HCD=y CONFIG_USB_OHCI_HCD_STI=y -CONFIG_USB_OHCI_HCD_PLATFORM=y CONFIG_USB_OHCI_EXYNOS=m CONFIG_USB_R8A66597_HCD=m CONFIG_USB_RENESAS_USBHS=m @@ -746,18 +705,18 @@ CONFIG_USB_TI_CPPI41_DMA=y CONFIG_USB_TUSB_OMAP_DMA=y CONFIG_USB_DWC3=y CONFIG_USB_DWC2=y -CONFIG_USB_HSIC_USB3503=y CONFIG_USB_CHIPIDEA=y CONFIG_USB_CHIPIDEA_UDC=y CONFIG_USB_CHIPIDEA_HOST=y +CONFIG_USB_ISP1760=y +CONFIG_USB_HSIC_USB3503=y CONFIG_AB8500_USB=y -CONFIG_KEYSTONE_USB_PHY=y +CONFIG_KEYSTONE_USB_PHY=m CONFIG_NOP_USB_XCEIV=m CONFIG_AM335X_PHY_USB=m CONFIG_TWL6030_USB=m CONFIG_USB_GPIO_VBUS=y CONFIG_USB_ISP1301=y -CONFIG_USB_MSM_OTG=m CONFIG_USB_MXS_PHY=y CONFIG_USB_GADGET=y CONFIG_USB_FSL_USB2=y @@ -793,21 +752,20 @@ CONFIG_MMC_SDHCI_OF_ESDHC=y CONFIG_MMC_SDHCI_ESDHC_IMX=y CONFIG_MMC_SDHCI_DOVE=y CONFIG_MMC_SDHCI_TEGRA=y +CONFIG_MMC_SDHCI_S3C=y CONFIG_MMC_SDHCI_PXAV3=y CONFIG_MMC_SDHCI_SPEAR=y -CONFIG_MMC_SDHCI_S3C=y CONFIG_MMC_SDHCI_S3C_DMA=y CONFIG_MMC_SDHCI_BCM_KONA=y +CONFIG_MMC_MESON_MX_SDIO=y CONFIG_MMC_SDHCI_ST=y CONFIG_MMC_OMAP=y CONFIG_MMC_OMAP_HS=y CONFIG_MMC_ATMELMCI=y CONFIG_MMC_SDHCI_MSM=y -CONFIG_MMC_MESON_MX_SDIO=y CONFIG_MMC_MVSDIO=y CONFIG_MMC_SDHI=y CONFIG_MMC_DW=y -CONFIG_MMC_DW_PLTFM=y CONFIG_MMC_DW_EXYNOS=y CONFIG_MMC_DW_ROCKCHIP=y CONFIG_MMC_SH_MMCIF=y @@ -847,94 +805,85 @@ CONFIG_RTC_DRV_MAX77686=y CONFIG_RTC_DRV_RK808=m CONFIG_RTC_DRV_RS5C372=m CONFIG_RTC_DRV_BQ32K=m -CONFIG_RTC_DRV_PALMAS=y -CONFIG_RTC_DRV_ST_LPC=y CONFIG_RTC_DRV_TWL4030=y +CONFIG_RTC_DRV_PALMAS=y CONFIG_RTC_DRV_TPS6586X=y CONFIG_RTC_DRV_TPS65910=y CONFIG_RTC_DRV_S35390A=m CONFIG_RTC_DRV_RX8581=m CONFIG_RTC_DRV_EM3027=y +CONFIG_RTC_DRV_S5M=m CONFIG_RTC_DRV_DA9063=m CONFIG_RTC_DRV_EFI=m CONFIG_RTC_DRV_DIGICOLOR=m -CONFIG_RTC_DRV_S5M=m CONFIG_RTC_DRV_S3C=m CONFIG_RTC_DRV_PL031=y CONFIG_RTC_DRV_AT91RM9200=m CONFIG_RTC_DRV_AT91SAM9=m CONFIG_RTC_DRV_VT8500=y -CONFIG_RTC_DRV_SUN6I=y CONFIG_RTC_DRV_SUNXI=y CONFIG_RTC_DRV_MV=y CONFIG_RTC_DRV_TEGRA=y +CONFIG_RTC_DRV_ST_LPC=y CONFIG_RTC_DRV_CPCAP=m CONFIG_DMADEVICES=y -CONFIG_DW_DMAC=y CONFIG_AT_HDMAC=y CONFIG_AT_XDMAC=y +CONFIG_DMA_BCM2835=y +CONFIG_DMA_SUN6I=y CONFIG_FSL_EDMA=y +CONFIG_IMX_DMA=y +CONFIG_IMX_SDMA=y CONFIG_MV_XOR=y +CONFIG_MXS_DMA=y +CONFIG_PL330_DMA=y +CONFIG_SIRF_DMA=y +CONFIG_STE_DMA40=y +CONFIG_ST_FDMA=m CONFIG_TEGRA20_APB_DMA=y +CONFIG_XILINX_DMA=y +CONFIG_QCOM_BAM_DMA=y +CONFIG_DW_DMAC=y CONFIG_SH_DMAE=y 
CONFIG_RCAR_DMAC=y CONFIG_RENESAS_USB_DMAC=m -CONFIG_STE_DMA40=y -CONFIG_SIRF_DMA=y -CONFIG_TI_EDMA=y -CONFIG_PL330_DMA=y -CONFIG_IMX_SDMA=y -CONFIG_IMX_DMA=y -CONFIG_MXS_DMA=y -CONFIG_DMA_BCM2835=y -CONFIG_DMA_OMAP=y -CONFIG_QCOM_BAM_DMA=y -CONFIG_XILINX_DMA=y -CONFIG_DMA_SUN6I=y -CONFIG_ST_FDMA=m +CONFIG_VIRTIO_PCI=y +CONFIG_VIRTIO_MMIO=y CONFIG_STAGING=y -CONFIG_SENSORS_ISL29018=y -CONFIG_SENSORS_ISL29028=y CONFIG_MFD_NVEC=y CONFIG_KEYBOARD_NVEC=y CONFIG_SERIO_NVEC_PS2=y CONFIG_NVEC_POWER=y CONFIG_NVEC_PAZ00=y -CONFIG_BCMA=y -CONFIG_BCMA_HOST_SOC=y -CONFIG_BCMA_DRIVER_GMAC_CMN=y -CONFIG_BCMA_DRIVER_GPIO=y -CONFIG_QCOM_GSBI=y -CONFIG_QCOM_PM=y -CONFIG_QCOM_SMEM=y -CONFIG_QCOM_SMD_RPM=y -CONFIG_QCOM_SMP2P=y -CONFIG_QCOM_SMSM=y -CONFIG_QCOM_WCNSS_CTRL=m -CONFIG_ROCKCHIP_PM_DOMAINS=y -CONFIG_COMMON_CLK_QCOM=y -CONFIG_QCOM_CLK_RPM=y -CONFIG_CHROME_PLATFORMS=y CONFIG_STAGING_BOARD=y -CONFIG_CROS_EC_CHARDEV=m CONFIG_COMMON_CLK_MAX77686=y CONFIG_COMMON_CLK_RK808=m CONFIG_COMMON_CLK_S2MPS11=m +CONFIG_COMMON_CLK_QCOM=y +CONFIG_QCOM_CLK_RPM=y CONFIG_APQ_MMCC_8084=y CONFIG_MSM_GCC_8660=y CONFIG_MSM_MMCC_8960=y CONFIG_MSM_MMCC_8974=y -CONFIG_HWSPINLOCK_QCOM=y +CONFIG_BCM2835_MBOX=y CONFIG_ROCKCHIP_IOMMU=y CONFIG_TEGRA_IOMMU_GART=y CONFIG_TEGRA_IOMMU_SMMU=y CONFIG_REMOTEPROC=m CONFIG_ST_REMOTEPROC=m CONFIG_RPMSG_VIRTIO=m +CONFIG_RASPBERRYPI_POWER=y +CONFIG_QCOM_GSBI=y +CONFIG_QCOM_PM=y +CONFIG_QCOM_SMD_RPM=m +CONFIG_QCOM_WCNSS_CTRL=m +CONFIG_ROCKCHIP_PM_DOMAINS=y +CONFIG_ARCH_TEGRA_2x_SOC=y +CONFIG_ARCH_TEGRA_3x_SOC=y +CONFIG_ARCH_TEGRA_114_SOC=y +CONFIG_ARCH_TEGRA_124_SOC=y CONFIG_PM_DEVFREQ=y CONFIG_ARM_TEGRA_DEVFREQ=m -CONFIG_MEMORY=y -CONFIG_EXTCON=y CONFIG_TI_AEMIF=y CONFIG_IIO=y CONFIG_IIO_SW_TRIGGER=y @@ -947,56 +896,54 @@ CONFIG_VF610_ADC=m CONFIG_XILINX_XADC=y CONFIG_MPU3050_I2C=y CONFIG_CM36651=m +CONFIG_SENSORS_ISL29018=y +CONFIG_SENSORS_ISL29028=y CONFIG_AK8975=y -CONFIG_RASPBERRYPI_POWER=y CONFIG_IIO_HRTIMER_TRIGGER=y CONFIG_PWM=y CONFIG_PWM_ATMEL=m CONFIG_PWM_ATMEL_HLCDC_PWM=m CONFIG_PWM_ATMEL_TCB=m +CONFIG_PWM_BCM2835=y +CONFIG_PWM_BRCMSTB=m CONFIG_PWM_FSL_FTM=m CONFIG_PWM_MESON=m CONFIG_PWM_RCAR=m CONFIG_PWM_RENESAS_TPU=y CONFIG_PWM_ROCKCHIP=m CONFIG_PWM_SAMSUNG=m +CONFIG_PWM_STI=y CONFIG_PWM_SUN4I=y CONFIG_PWM_TEGRA=y CONFIG_PWM_VT8500=y +CONFIG_KEYSTONE_IRQ=y +CONFIG_PHY_SUN4I_USB=y +CONFIG_PHY_SUN9I_USB=y CONFIG_PHY_HIX5HD2_SATA=y -CONFIG_E1000E=y -CONFIG_PWM_STI=y -CONFIG_PWM_BCM2835=y -CONFIG_PWM_BRCMSTB=m -CONFIG_PHY_DM816X_USB=m -CONFIG_OMAP_USB2=y -CONFIG_TI_PIPE3=y -CONFIG_TWL4030_USB=m +CONFIG_PHY_BERLIN_SATA=y CONFIG_PHY_BERLIN_USB=y CONFIG_PHY_CPCAP_USB=m -CONFIG_PHY_BERLIN_SATA=y +CONFIG_PHY_QCOM_APQ8064_SATA=m +CONFIG_PHY_RCAR_GEN2=m CONFIG_PHY_ROCKCHIP_DP=m CONFIG_PHY_ROCKCHIP_USB=y -CONFIG_PHY_QCOM_APQ8064_SATA=m +CONFIG_PHY_SAMSUNG_USB2=m CONFIG_PHY_MIPHY28LP=y -CONFIG_PHY_RCAR_GEN2=m CONFIG_PHY_STIH407_USB=y CONFIG_PHY_STM32_USBPHYC=y -CONFIG_PHY_SUN4I_USB=y -CONFIG_PHY_SUN9I_USB=y -CONFIG_PHY_SAMSUNG_USB2=m CONFIG_PHY_TEGRA_XUSB=y -CONFIG_PHY_BRCM_SATA=y -CONFIG_NVMEM=y +CONFIG_PHY_DM816X_USB=m +CONFIG_OMAP_USB2=y +CONFIG_TI_PIPE3=y +CONFIG_TWL4030_USB=m CONFIG_NVMEM_IMX_OCOTP=y CONFIG_NVMEM_SUNXI_SID=y CONFIG_NVMEM_VF610_OCOTP=y -CONFIG_BCM2835_MBOX=y CONFIG_RASPBERRYPI_FIRMWARE=y -CONFIG_EFI_VARS=m -CONFIG_EFI_CAPSULE_LOADER=m CONFIG_BCM47XX_NVRAM=y CONFIG_BCM47XX_SPROM=y +CONFIG_EFI_VARS=m +CONFIG_EFI_CAPSULE_LOADER=m CONFIG_EXT4_FS=y CONFIG_AUTOFS4_FS=y CONFIG_MSDOS_FS=y @@ -1004,7 +951,6 @@ CONFIG_VFAT_FS=y CONFIG_NTFS_FS=y CONFIG_TMPFS_POSIX_ACL=y 
CONFIG_UBIFS_FS=y -CONFIG_TMPFS=y CONFIG_SQUASHFS=y CONFIG_SQUASHFS_LZO=y CONFIG_SQUASHFS_XZ=y @@ -1020,13 +966,7 @@ CONFIG_NLS_CODEPAGE_437=y CONFIG_NLS_ISO8859_1=y CONFIG_NLS_UTF8=y CONFIG_PRINTK_TIME=y -CONFIG_DEBUG_FS=y CONFIG_MAGIC_SYSRQ=y -CONFIG_LOCKUP_DETECTOR=y -CONFIG_CPUFREQ_DT=y -CONFIG_KEYSTONE_IRQ=y -CONFIG_HW_RANDOM=y -CONFIG_HW_RANDOM_ST=y CONFIG_CRYPTO_USER=m CONFIG_CRYPTO_USER_API_HASH=m CONFIG_CRYPTO_USER_API_SKCIPHER=m @@ -1035,27 +975,19 @@ CONFIG_CRYPTO_USER_API_AEAD=m CONFIG_CRYPTO_DEV_MARVELL_CESA=m CONFIG_CRYPTO_DEV_EXYNOS_RNG=m CONFIG_CRYPTO_DEV_S5P=m +CONFIG_CRYPTO_DEV_ATMEL_AES=m +CONFIG_CRYPTO_DEV_ATMEL_TDES=m +CONFIG_CRYPTO_DEV_ATMEL_SHA=m CONFIG_CRYPTO_DEV_SUN4I_SS=m CONFIG_CRYPTO_DEV_ROCKCHIP=m CONFIG_ARM_CRYPTO=y -CONFIG_CRYPTO_SHA1_ARM=m CONFIG_CRYPTO_SHA1_ARM_NEON=m CONFIG_CRYPTO_SHA1_ARM_CE=m CONFIG_CRYPTO_SHA2_ARM_CE=m -CONFIG_CRYPTO_SHA256_ARM=m CONFIG_CRYPTO_SHA512_ARM=m CONFIG_CRYPTO_AES_ARM=m CONFIG_CRYPTO_AES_ARM_BS=m CONFIG_CRYPTO_AES_ARM_CE=m -CONFIG_CRYPTO_CHACHA20_NEON=m -CONFIG_CRYPTO_CRC32_ARM_CE=m -CONFIG_CRYPTO_CRCT10DIF_ARM_CE=m CONFIG_CRYPTO_GHASH_ARM_CE=m -CONFIG_CRYPTO_DEV_ATMEL_AES=m -CONFIG_CRYPTO_DEV_ATMEL_TDES=m -CONFIG_CRYPTO_DEV_ATMEL_SHA=m -CONFIG_VIDEO_VIVID=m -CONFIG_VIRTIO=y -CONFIG_VIRTIO_PCI=y -CONFIG_VIRTIO_PCI_LEGACY=y -CONFIG_VIRTIO_MMIO=y +CONFIG_CRYPTO_CRC32_ARM_CE=m +CONFIG_CRYPTO_CHACHA20_NEON=m diff --git a/arch/arm/crypto/speck-neon-core.S b/arch/arm/crypto/speck-neon-core.S index 3c1e203e53b9..57caa742016e 100644 --- a/arch/arm/crypto/speck-neon-core.S +++ b/arch/arm/crypto/speck-neon-core.S @@ -272,9 +272,11 @@ * Allocate stack space to store 128 bytes worth of tweaks. For * performance, this space is aligned to a 16-byte boundary so that we * can use the load/store instructions that declare 16-byte alignment. + * For Thumb2 compatibility, don't do the 'bic' directly on 'sp'. 
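/*
 * [Editor's aside, not part of the patch] The speck-neon change just below
 * exists because, as the new comment notes, doing the 'bic' directly on SP
 * is not Thumb2-friendly, so the aligned value is computed in a scratch
 * register (r12) and only then moved into SP. The same arithmetic in C:
 */
#include <stdint.h>

static void *align_stack_down(void *sp, unsigned int reserve)
{
	uintptr_t a = (uintptr_t)sp - reserve;	/* sub r12, sp, #128 */

	a &= ~(uintptr_t)0xf;			/* bic r12, #0xf */
	return (void *)a;			/* mov sp, r12 */
}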
*/ - sub sp, #128 - bic sp, #0xf + sub r12, sp, #128 + bic r12, #0xf + mov sp, r12 .if \n == 64 // Load first tweak diff --git a/arch/arm/firmware/Makefile b/arch/arm/firmware/Makefile index a71f16536b6c..6e41336b0bc4 100644 --- a/arch/arm/firmware/Makefile +++ b/arch/arm/firmware/Makefile @@ -1 +1,4 @@ obj-$(CONFIG_TRUSTED_FOUNDATIONS) += trusted_foundations.o + +# tf_generic_smc() fails to build with -fsanitize-coverage=trace-pc +KCOV_INSTRUMENT := n diff --git a/arch/arm/include/asm/assembler.h b/arch/arm/include/asm/assembler.h index 0cd4dccbae78..b17ee03d280b 100644 --- a/arch/arm/include/asm/assembler.h +++ b/arch/arm/include/asm/assembler.h @@ -460,6 +460,10 @@ THUMB( orr \reg , \reg , #PSR_T_BIT ) adds \tmp, \addr, #\size - 1 sbcccs \tmp, \tmp, \limit bcs \bad +#ifdef CONFIG_CPU_SPECTRE + movcs \addr, #0 + csdb +#endif #endif .endm diff --git a/arch/arm/include/asm/atomic.h b/arch/arm/include/asm/atomic.h index 66d0e215a773..f74756641410 100644 --- a/arch/arm/include/asm/atomic.h +++ b/arch/arm/include/asm/atomic.h @@ -130,7 +130,7 @@ static inline int atomic_cmpxchg_relaxed(atomic_t *ptr, int old, int new) } #define atomic_cmpxchg_relaxed atomic_cmpxchg_relaxed -static inline int __atomic_add_unless(atomic_t *v, int a, int u) +static inline int atomic_fetch_add_unless(atomic_t *v, int a, int u) { int oldval, newval; unsigned long tmp; @@ -156,6 +156,7 @@ static inline int __atomic_add_unless(atomic_t *v, int a, int u) return oldval; } +#define atomic_fetch_add_unless atomic_fetch_add_unless #else /* ARM_ARCH_6 */ @@ -215,15 +216,7 @@ static inline int atomic_cmpxchg(atomic_t *v, int old, int new) return ret; } -static inline int __atomic_add_unless(atomic_t *v, int a, int u) -{ - int c, old; - - c = atomic_read(v); - while (c != u && (old = atomic_cmpxchg((v), c, c + a)) != c) - c = old; - return c; -} +#define atomic_fetch_andnot atomic_fetch_andnot #endif /* __LINUX_ARM_ARCH__ */ @@ -254,17 +247,6 @@ ATOMIC_OPS(xor, ^=, eor) #define atomic_xchg(v, new) (xchg(&((v)->counter), new)) -#define atomic_inc(v) atomic_add(1, v) -#define atomic_dec(v) atomic_sub(1, v) - -#define atomic_inc_and_test(v) (atomic_add_return(1, v) == 0) -#define atomic_dec_and_test(v) (atomic_sub_return(1, v) == 0) -#define atomic_inc_return_relaxed(v) (atomic_add_return_relaxed(1, v)) -#define atomic_dec_return_relaxed(v) (atomic_sub_return_relaxed(1, v)) -#define atomic_sub_and_test(i, v) (atomic_sub_return(i, v) == 0) - -#define atomic_add_negative(i,v) (atomic_add_return(i, v) < 0) - #ifndef CONFIG_GENERIC_ATOMIC64 typedef struct { long long counter; @@ -494,12 +476,13 @@ static inline long long atomic64_dec_if_positive(atomic64_t *v) return result; } +#define atomic64_dec_if_positive atomic64_dec_if_positive -static inline int atomic64_add_unless(atomic64_t *v, long long a, long long u) +static inline long long atomic64_fetch_add_unless(atomic64_t *v, long long a, + long long u) { - long long val; + long long oldval, newval; unsigned long tmp; - int ret = 1; smp_mb(); prefetchw(&v->counter); @@ -508,33 +491,23 @@ static inline int atomic64_add_unless(atomic64_t *v, long long a, long long u) "1: ldrexd %0, %H0, [%4]\n" " teq %0, %5\n" " teqeq %H0, %H5\n" -" moveq %1, #0\n" " beq 2f\n" -" adds %Q0, %Q0, %Q6\n" -" adc %R0, %R0, %R6\n" -" strexd %2, %0, %H0, [%4]\n" +" adds %Q1, %Q0, %Q6\n" +" adc %R1, %R0, %R6\n" +" strexd %2, %1, %H1, [%4]\n" " teq %2, #0\n" " bne 1b\n" "2:" - : "=&r" (val), "+r" (ret), "=&r" (tmp), "+Qo" (v->counter) + : "=&r" (oldval), "=&r" (newval), "=&r" (tmp), "+Qo" (v->counter) : 
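/*
 * [Editor's aside, not part of the patch] The atomic.h hunks above and
 * below convert add_unless() into fetch_add_unless(), which returns the
 * value observed before the (conditional) addition instead of a success
 * flag. Its generic semantics, sketched with C11 atomics:
 */
#include <stdatomic.h>

static int fetch_add_unless(atomic_int *v, int a, int u)
{
	int c = atomic_load(v);

	/* Retry until we either observe u or manage to add a. */
	while (c != u && !atomic_compare_exchange_weak(v, &c, c + a))
		;
	return c;	/* old value; the add happened iff c != u */
}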
"r" (&v->counter), "r" (u), "r" (a) : "cc"); - if (ret) + if (oldval != u) smp_mb(); - return ret; + return oldval; } - -#define atomic64_add_negative(a, v) (atomic64_add_return((a), (v)) < 0) -#define atomic64_inc(v) atomic64_add(1LL, (v)) -#define atomic64_inc_return_relaxed(v) atomic64_add_return_relaxed(1LL, (v)) -#define atomic64_inc_and_test(v) (atomic64_inc_return(v) == 0) -#define atomic64_sub_and_test(a, v) (atomic64_sub_return((a), (v)) == 0) -#define atomic64_dec(v) atomic64_sub(1LL, (v)) -#define atomic64_dec_return_relaxed(v) atomic64_sub_return_relaxed(1LL, (v)) -#define atomic64_dec_and_test(v) (atomic64_dec_return((v)) == 0) -#define atomic64_inc_not_zero(v) atomic64_add_unless((v), 1LL, 0LL) +#define atomic64_fetch_add_unless atomic64_fetch_add_unless #endif /* !CONFIG_GENERIC_ATOMIC64 */ #endif diff --git a/arch/arm/include/asm/bitops.h b/arch/arm/include/asm/bitops.h index 4cab9bb823fb..c92e42a5c8f7 100644 --- a/arch/arm/include/asm/bitops.h +++ b/arch/arm/include/asm/bitops.h @@ -215,7 +215,6 @@ extern int _find_next_bit_be(const unsigned long *p, int size, int offset); #if __LINUX_ARM_ARCH__ < 5 -#include #include #include #include @@ -223,93 +222,20 @@ extern int _find_next_bit_be(const unsigned long *p, int size, int offset); #else -static inline int constant_fls(int x) -{ - int r = 32; - - if (!x) - return 0; - if (!(x & 0xffff0000u)) { - x <<= 16; - r -= 16; - } - if (!(x & 0xff000000u)) { - x <<= 8; - r -= 8; - } - if (!(x & 0xf0000000u)) { - x <<= 4; - r -= 4; - } - if (!(x & 0xc0000000u)) { - x <<= 2; - r -= 2; - } - if (!(x & 0x80000000u)) { - x <<= 1; - r -= 1; - } - return r; -} - -/* - * On ARMv5 and above those functions can be implemented around the - * clz instruction for much better code efficiency. __clz returns - * the number of leading zeros, zero input will return 32, and - * 0x80000000 will return 0. - */ -static inline unsigned int __clz(unsigned int x) -{ - unsigned int ret; - - asm("clz\t%0, %1" : "=r" (ret) : "r" (x)); - - return ret; -} - -/* - * fls() returns zero if the input is zero, otherwise returns the bit - * position of the last set bit, where the LSB is 1 and MSB is 32. - */ -static inline int fls(int x) -{ - if (__builtin_constant_p(x)) - return constant_fls(x); - - return 32 - __clz(x); -} - -/* - * __fls() returns the bit position of the last bit set, where the - * LSB is 0 and MSB is 31. Zero input is undefined. - */ -static inline unsigned long __fls(unsigned long x) -{ - return fls(x) - 1; -} - -/* - * ffs() returns zero if the input was zero, otherwise returns the bit - * position of the first set bit, where the LSB is 1 and MSB is 32. - */ -static inline int ffs(int x) -{ - return fls(x & -x); -} - /* - * __ffs() returns the bit position of the first bit set, where the - * LSB is 0 and MSB is 31. Zero input is undefined. + * On ARMv5 and above, the gcc built-ins may rely on the clz instruction + * and produce optimal inlined code in all cases. On ARMv7 it is even + * better by also using the rbit instruction. 
*/ -static inline unsigned long __ffs(unsigned long x) -{ - return ffs(x) - 1; -} - -#define ffz(x) __ffs( ~(x) ) +#include +#include +#include +#include #endif +#include + #include #include diff --git a/arch/arm/include/asm/cacheflush.h b/arch/arm/include/asm/cacheflush.h index 869080bedb89..ec1a5fd0d294 100644 --- a/arch/arm/include/asm/cacheflush.h +++ b/arch/arm/include/asm/cacheflush.h @@ -35,7 +35,7 @@ * Start addresses are inclusive and end addresses are exclusive; * start addresses should be rounded down, end addresses up. * - * See Documentation/cachetlb.txt for more information. + * See Documentation/core-api/cachetlb.rst for more information. * Please note that the implementation of these, and the required * effects are cache-type (VIVT/VIPT/PIPT) specific. * diff --git a/arch/arm/include/asm/efi.h b/arch/arm/include/asm/efi.h index 17f1f1a814ff..38badaae8d9d 100644 --- a/arch/arm/include/asm/efi.h +++ b/arch/arm/include/asm/efi.h @@ -58,6 +58,9 @@ void efi_virtmap_unload(void); #define efi_call_runtime(f, ...) sys_table_arg->runtime->f(__VA_ARGS__) #define efi_is_64bit() (false) +#define efi_table_attr(table, attr, instance) \ + ((table##_t *)instance)->attr + #define efi_call_proto(protocol, f, instance, ...) \ ((protocol##_t *)instance)->f(instance, ##__VA_ARGS__) diff --git a/arch/arm/include/asm/hw_breakpoint.h b/arch/arm/include/asm/hw_breakpoint.h index e46e4e7bdba3..ac54c06764e6 100644 --- a/arch/arm/include/asm/hw_breakpoint.h +++ b/arch/arm/include/asm/hw_breakpoint.h @@ -111,14 +111,17 @@ static inline void decode_ctrl_reg(u32 reg, asm volatile("mcr p14, 0, %0, " #N "," #M ", " #OP2 : : "r" (VAL));\ } while (0) +struct perf_event_attr; struct notifier_block; struct perf_event; struct pmu; extern int arch_bp_generic_fields(struct arch_hw_breakpoint_ctrl ctrl, int *gen_len, int *gen_type); -extern int arch_check_bp_in_kernelspace(struct perf_event *bp); -extern int arch_validate_hwbkpt_settings(struct perf_event *bp); +extern int arch_check_bp_in_kernelspace(struct arch_hw_breakpoint *hw); +extern int hw_breakpoint_arch_parse(struct perf_event *bp, + const struct perf_event_attr *attr, + struct arch_hw_breakpoint *hw); extern int hw_breakpoint_exceptions_notify(struct notifier_block *unused, unsigned long val, void *data); diff --git a/arch/arm/include/asm/irq.h b/arch/arm/include/asm/irq.h index b6f319606e30..c883fcbe93b6 100644 --- a/arch/arm/include/asm/irq.h +++ b/arch/arm/include/asm/irq.h @@ -31,11 +31,6 @@ extern void asm_do_IRQ(unsigned int, struct pt_regs *); void handle_IRQ(unsigned int, struct pt_regs *); void init_IRQ(void); -#ifdef CONFIG_MULTI_IRQ_HANDLER -extern void (*handle_arch_irq)(struct pt_regs *); -extern void set_handle_irq(void (*handle_irq)(struct pt_regs *)); -#endif - #ifdef CONFIG_SMP extern void arch_trigger_cpumask_backtrace(const cpumask_t *mask, bool exclude_self); diff --git a/arch/arm/include/asm/kprobes.h b/arch/arm/include/asm/kprobes.h index 59655459da59..82290f212d8e 100644 --- a/arch/arm/include/asm/kprobes.h +++ b/arch/arm/include/asm/kprobes.h @@ -44,8 +44,6 @@ struct prev_kprobe { struct kprobe_ctlblk { unsigned int kprobe_status; struct prev_kprobe prev_kprobe; - struct pt_regs jprobe_saved_regs; - char jprobes_stack[MAX_STACK_SIZE]; }; void arch_remove_kprobe(struct kprobe *); diff --git a/arch/arm/include/asm/mach/arch.h b/arch/arm/include/asm/mach/arch.h index 5c1ad11aa392..bb8851208e17 100644 --- a/arch/arm/include/asm/mach/arch.h +++ b/arch/arm/include/asm/mach/arch.h @@ -59,7 +59,7 @@ struct machine_desc { void 
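/*
 * [Editor's aside, not part of the patch] The open-coded fls()/ffs()
 * removed above are replaced by the asm-generic builtin-based headers
 * (the #include targets were lost in this capture). The equivalence they
 * rely on, as a user-space check:
 */
#include <assert.h>

static int fls_via_clz(int x)
{
	/* fls(0) == 0; otherwise 32 minus the number of leading zeros. */
	return x ? 32 - __builtin_clz((unsigned int)x) : 0;
}

int main(void)
{
	assert(fls_via_clz(0) == 0);
	assert(fls_via_clz(1) == 1);
	assert(fls_via_clz((int)0x80000000u) == 32);
	return 0;
}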
(*init_time)(void); void (*init_machine)(void); void (*init_late)(void); -#ifdef CONFIG_MULTI_IRQ_HANDLER +#ifdef CONFIG_GENERIC_IRQ_MULTI_HANDLER void (*handle_irq)(struct pt_regs *); #endif void (*restart)(enum reboot_mode, const char *); diff --git a/arch/arm/include/asm/mach/time.h b/arch/arm/include/asm/mach/time.h index 0f79e4dec7f9..4ac3a019a46f 100644 --- a/arch/arm/include/asm/mach/time.h +++ b/arch/arm/include/asm/mach/time.h @@ -13,7 +13,6 @@ extern void timer_tick(void); typedef void (*clock_access_fn)(struct timespec64 *); -extern int register_persistent_clock(clock_access_fn read_boot, - clock_access_fn read_persistent); +extern int register_persistent_clock(clock_access_fn read_persistent); #endif diff --git a/arch/arm/include/asm/probes.h b/arch/arm/include/asm/probes.h index 1e5b9bb92270..991c9127c650 100644 --- a/arch/arm/include/asm/probes.h +++ b/arch/arm/include/asm/probes.h @@ -51,7 +51,6 @@ struct arch_probes_insn { * We assume one instruction can consume at most 64 bytes stack, which is * 'push {r0-r15}'. Instructions consume more or unknown stack space like * 'str r0, [sp, #-80]' and 'str r0, [sp, r1]' should be prohibit to probe. - * Both kprobe and jprobe use this macro. */ #define MAX_STACK_SIZE 64 diff --git a/arch/arm/include/asm/thread_info.h b/arch/arm/include/asm/thread_info.h index e71cc35de163..9b37b6ab27fe 100644 --- a/arch/arm/include/asm/thread_info.h +++ b/arch/arm/include/asm/thread_info.h @@ -123,8 +123,8 @@ struct user_vfp_exc; extern int vfp_preserve_user_clear_hwstate(struct user_vfp __user *, struct user_vfp_exc __user *); -extern int vfp_restore_user_hwstate(struct user_vfp __user *, - struct user_vfp_exc __user *); +extern int vfp_restore_user_hwstate(struct user_vfp *, + struct user_vfp_exc *); #endif /* diff --git a/arch/arm/include/asm/tlb.h b/arch/arm/include/asm/tlb.h index d5562f9ce600..f854148c8d7c 100644 --- a/arch/arm/include/asm/tlb.h +++ b/arch/arm/include/asm/tlb.h @@ -292,5 +292,13 @@ static inline void tlb_remove_check_page_size_change(struct mmu_gather *tlb, { } +static inline void tlb_flush_remove_tables(struct mm_struct *mm) +{ +} + +static inline void tlb_flush_remove_tables_local(void *arg) +{ +} + #endif /* CONFIG_MMU */ #endif diff --git a/arch/arm/include/asm/uaccess.h b/arch/arm/include/asm/uaccess.h index 3d614e90c19f..5451e1f05a19 100644 --- a/arch/arm/include/asm/uaccess.h +++ b/arch/arm/include/asm/uaccess.h @@ -84,6 +84,13 @@ static inline void set_fs(mm_segment_t fs) : "cc"); \ flag; }) +/* + * This is a type: either unsigned long, if the argument fits into + * that type, or otherwise unsigned long long. + */ +#define __inttype(x) \ + __typeof__(__builtin_choose_expr(sizeof(x) > sizeof(0UL), 0ULL, 0UL)) + /* * Single-value transfer routines. They automatically use the right * size if we just have the right pointer type. Note that the functions @@ -153,7 +160,7 @@ extern int __get_user_64t_4(void *); ({ \ unsigned long __limit = current_thread_info()->addr_limit - 1; \ register typeof(*(p)) __user *__p asm("r0") = (p); \ - register typeof(x) __r2 asm("r2"); \ + register __inttype(x) __r2 asm("r2"); \ register unsigned long __l asm("r1") = __limit; \ register int __e asm("r0"); \ unsigned int __ua_flags = uaccess_save_and_enable(); \ @@ -243,6 +250,16 @@ static inline void set_fs(mm_segment_t fs) #define user_addr_max() \ (uaccess_kernel() ? 
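/*
 * [Editor's aside, not part of the patch] The __inttype() macro added in
 * the uaccess.h hunk above picks, at compile time, a register type wide
 * enough for the access. Its behaviour can be checked in user space with
 * the same GCC-style builtins:
 */
#include <assert.h>

#define __inttype(x) \
	__typeof__(__builtin_choose_expr(sizeof(x) > sizeof(0UL), 0ULL, 0UL))

int main(void)
{
	char c = 0;
	long long ll = 0;

	assert(sizeof(__inttype(c)) == sizeof(unsigned long));
	assert(sizeof(__inttype(ll)) == sizeof(unsigned long long));
	return 0;
}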
~0UL : get_fs()) +#ifdef CONFIG_CPU_SPECTRE +/* + * When mitigating Spectre variant 1, it is not worth fixing the non- + * verifying accessors, because we need to add verification of the + * address space there. Force these to use the standard get_user() + * version instead. + */ +#define __get_user(x, ptr) get_user(x, ptr) +#else + /* * The "__xxx" versions of the user access functions do not verify the * address space - it must have been done previously with a separate @@ -259,12 +276,6 @@ static inline void set_fs(mm_segment_t fs) __gu_err; \ }) -#define __get_user_error(x, ptr, err) \ -({ \ - __get_user_err((x), (ptr), err); \ - (void) 0; \ -}) - #define __get_user_err(x, ptr, err) \ do { \ unsigned long __gu_addr = (unsigned long)(ptr); \ @@ -324,6 +335,7 @@ do { \ #define __get_user_asm_word(x, addr, err) \ __get_user_asm(x, addr, err, ldr) +#endif #define __put_user_switch(x, ptr, __err, __fn) \ diff --git a/arch/arm/kernel/entry-armv.S b/arch/arm/kernel/entry-armv.S index 179a9f6bd1e3..e85a3af9ddeb 100644 --- a/arch/arm/kernel/entry-armv.S +++ b/arch/arm/kernel/entry-armv.S @@ -22,7 +22,7 @@ #include #include #include -#ifndef CONFIG_MULTI_IRQ_HANDLER +#ifndef CONFIG_GENERIC_IRQ_MULTI_HANDLER #include #endif #include @@ -39,7 +39,7 @@ * Interrupt handling. */ .macro irq_handler -#ifdef CONFIG_MULTI_IRQ_HANDLER +#ifdef CONFIG_GENERIC_IRQ_MULTI_HANDLER ldr r1, =handle_arch_irq mov r0, sp badr lr, 9997f @@ -1226,9 +1226,3 @@ vector_addrexcptn: .globl cr_alignment cr_alignment: .space 4 - -#ifdef CONFIG_MULTI_IRQ_HANDLER - .globl handle_arch_irq -handle_arch_irq: - .space 4 -#endif diff --git a/arch/arm/kernel/entry-common.S b/arch/arm/kernel/entry-common.S index 106a1466518d..746565a876dc 100644 --- a/arch/arm/kernel/entry-common.S +++ b/arch/arm/kernel/entry-common.S @@ -48,6 +48,7 @@ saved_pc .req lr * from those features make this path too inefficient. */ ret_fast_syscall: +__ret_fast_syscall: UNWIND(.fnstart ) UNWIND(.cantunwind ) disable_irq_notrace @ disable interrupts @@ -78,6 +79,7 @@ fast_work_pending: * call. */ ret_fast_syscall: +__ret_fast_syscall: UNWIND(.fnstart ) UNWIND(.cantunwind ) str r0, [sp, #S_R0 + S_OFF]! @ save returned r0 @@ -255,7 +257,7 @@ local_restart: tst r10, #_TIF_SYSCALL_WORK @ are we tracing syscalls? bne __sys_trace - invoke_syscall tbl, scno, r10, ret_fast_syscall + invoke_syscall tbl, scno, r10, __ret_fast_syscall add r1, sp, #S_OFF 2: cmp scno, #(__ARM_NR_BASE - __NR_SYSCALL_BASE) diff --git a/arch/arm/kernel/head-nommu.S b/arch/arm/kernel/head-nommu.S index dd546d65a383..ec29de250076 100644 --- a/arch/arm/kernel/head-nommu.S +++ b/arch/arm/kernel/head-nommu.S @@ -53,7 +53,11 @@ ENTRY(stext) THUMB(1: ) #endif - setmode PSR_F_BIT | PSR_I_BIT | SVC_MODE, r9 @ ensure svc mode +#ifdef CONFIG_ARM_VIRT_EXT + bl __hyp_stub_install +#endif + @ ensure svc mode and all interrupts masked + safe_svcmode_maskall r9 @ and irqs disabled #if defined(CONFIG_CPU_CP15) mrc p15, 0, r9, c0, c0 @ get processor id @@ -89,7 +93,11 @@ ENTRY(secondary_startup) * the processor type - there is no need to check the machine type * as it has already been validated by the primary processor. 
*/ - setmode PSR_F_BIT | PSR_I_BIT | SVC_MODE, r9 +#ifdef CONFIG_ARM_VIRT_EXT + bl __hyp_stub_install_secondary +#endif + safe_svcmode_maskall r9 + #ifndef CONFIG_CPU_CP15 ldr r9, =CONFIG_PROCESSOR_ID #else @@ -177,7 +185,7 @@ M_CLASS(streq r3, [r12, #PMSAv8_MAIR1]) bic r0, r0, #CR_I #endif mcr p15, 0, r0, c1, c0, 0 @ write control reg - isb + instr_sync #elif defined (CONFIG_CPU_V7M) #ifdef CONFIG_ARM_MPU ldreq r3, [r12, MPU_CTRL] diff --git a/arch/arm/kernel/hw_breakpoint.c b/arch/arm/kernel/hw_breakpoint.c index 629e25152c0d..1d5fbf1d1c67 100644 --- a/arch/arm/kernel/hw_breakpoint.c +++ b/arch/arm/kernel/hw_breakpoint.c @@ -456,14 +456,13 @@ static int get_hbp_len(u8 hbp_len) /* * Check whether bp virtual address is in kernel space. */ -int arch_check_bp_in_kernelspace(struct perf_event *bp) +int arch_check_bp_in_kernelspace(struct arch_hw_breakpoint *hw) { unsigned int len; unsigned long va; - struct arch_hw_breakpoint *info = counter_arch_bp(bp); - va = info->address; - len = get_hbp_len(info->ctrl.len); + va = hw->address; + len = get_hbp_len(hw->ctrl.len); return (va >= TASK_SIZE) && ((va + len - 1) >= TASK_SIZE); } @@ -518,42 +517,42 @@ int arch_bp_generic_fields(struct arch_hw_breakpoint_ctrl ctrl, /* * Construct an arch_hw_breakpoint from a perf_event. */ -static int arch_build_bp_info(struct perf_event *bp) +static int arch_build_bp_info(struct perf_event *bp, + const struct perf_event_attr *attr, + struct arch_hw_breakpoint *hw) { - struct arch_hw_breakpoint *info = counter_arch_bp(bp); - /* Type */ - switch (bp->attr.bp_type) { + switch (attr->bp_type) { case HW_BREAKPOINT_X: - info->ctrl.type = ARM_BREAKPOINT_EXECUTE; + hw->ctrl.type = ARM_BREAKPOINT_EXECUTE; break; case HW_BREAKPOINT_R: - info->ctrl.type = ARM_BREAKPOINT_LOAD; + hw->ctrl.type = ARM_BREAKPOINT_LOAD; break; case HW_BREAKPOINT_W: - info->ctrl.type = ARM_BREAKPOINT_STORE; + hw->ctrl.type = ARM_BREAKPOINT_STORE; break; case HW_BREAKPOINT_RW: - info->ctrl.type = ARM_BREAKPOINT_LOAD | ARM_BREAKPOINT_STORE; + hw->ctrl.type = ARM_BREAKPOINT_LOAD | ARM_BREAKPOINT_STORE; break; default: return -EINVAL; } /* Len */ - switch (bp->attr.bp_len) { + switch (attr->bp_len) { case HW_BREAKPOINT_LEN_1: - info->ctrl.len = ARM_BREAKPOINT_LEN_1; + hw->ctrl.len = ARM_BREAKPOINT_LEN_1; break; case HW_BREAKPOINT_LEN_2: - info->ctrl.len = ARM_BREAKPOINT_LEN_2; + hw->ctrl.len = ARM_BREAKPOINT_LEN_2; break; case HW_BREAKPOINT_LEN_4: - info->ctrl.len = ARM_BREAKPOINT_LEN_4; + hw->ctrl.len = ARM_BREAKPOINT_LEN_4; break; case HW_BREAKPOINT_LEN_8: - info->ctrl.len = ARM_BREAKPOINT_LEN_8; - if ((info->ctrl.type != ARM_BREAKPOINT_EXECUTE) + hw->ctrl.len = ARM_BREAKPOINT_LEN_8; + if ((hw->ctrl.type != ARM_BREAKPOINT_EXECUTE) && max_watchpoint_len >= 8) break; default: @@ -566,24 +565,24 @@ static int arch_build_bp_info(struct perf_event *bp) * by the hardware and must be aligned to the appropriate number of * bytes. 
*/ - if (info->ctrl.type == ARM_BREAKPOINT_EXECUTE && - info->ctrl.len != ARM_BREAKPOINT_LEN_2 && - info->ctrl.len != ARM_BREAKPOINT_LEN_4) + if (hw->ctrl.type == ARM_BREAKPOINT_EXECUTE && + hw->ctrl.len != ARM_BREAKPOINT_LEN_2 && + hw->ctrl.len != ARM_BREAKPOINT_LEN_4) return -EINVAL; /* Address */ - info->address = bp->attr.bp_addr; + hw->address = attr->bp_addr; /* Privilege */ - info->ctrl.privilege = ARM_BREAKPOINT_USER; - if (arch_check_bp_in_kernelspace(bp)) - info->ctrl.privilege |= ARM_BREAKPOINT_PRIV; + hw->ctrl.privilege = ARM_BREAKPOINT_USER; + if (arch_check_bp_in_kernelspace(hw)) + hw->ctrl.privilege |= ARM_BREAKPOINT_PRIV; /* Enabled? */ - info->ctrl.enabled = !bp->attr.disabled; + hw->ctrl.enabled = !attr->disabled; /* Mismatch */ - info->ctrl.mismatch = 0; + hw->ctrl.mismatch = 0; return 0; } @@ -591,9 +590,10 @@ static int arch_build_bp_info(struct perf_event *bp) /* * Validate the arch-specific HW Breakpoint register settings. */ -int arch_validate_hwbkpt_settings(struct perf_event *bp) +int hw_breakpoint_arch_parse(struct perf_event *bp, + const struct perf_event_attr *attr, + struct arch_hw_breakpoint *hw) { - struct arch_hw_breakpoint *info = counter_arch_bp(bp); int ret = 0; u32 offset, alignment_mask = 0x3; @@ -602,14 +602,14 @@ int arch_validate_hwbkpt_settings(struct perf_event *bp) return -ENODEV; /* Build the arch_hw_breakpoint. */ - ret = arch_build_bp_info(bp); + ret = arch_build_bp_info(bp, attr, hw); if (ret) goto out; /* Check address alignment. */ - if (info->ctrl.len == ARM_BREAKPOINT_LEN_8) + if (hw->ctrl.len == ARM_BREAKPOINT_LEN_8) alignment_mask = 0x7; - offset = info->address & alignment_mask; + offset = hw->address & alignment_mask; switch (offset) { case 0: /* Aligned */ @@ -617,19 +617,19 @@ int arch_validate_hwbkpt_settings(struct perf_event *bp) case 1: case 2: /* Allow halfword watchpoints and breakpoints. */ - if (info->ctrl.len == ARM_BREAKPOINT_LEN_2) + if (hw->ctrl.len == ARM_BREAKPOINT_LEN_2) break; case 3: /* Allow single byte watchpoint. */ - if (info->ctrl.len == ARM_BREAKPOINT_LEN_1) + if (hw->ctrl.len == ARM_BREAKPOINT_LEN_1) break; default: ret = -EINVAL; goto out; } - info->address &= ~alignment_mask; - info->ctrl.len <<= offset; + hw->address &= ~alignment_mask; + hw->ctrl.len <<= offset; if (is_default_overflow_handler(bp)) { /* @@ -640,7 +640,7 @@ int arch_validate_hwbkpt_settings(struct perf_event *bp) return -EINVAL; /* We don't allow mismatch breakpoints in kernel space. */ - if (arch_check_bp_in_kernelspace(bp)) + if (arch_check_bp_in_kernelspace(hw)) return -EPERM; /* @@ -655,8 +655,8 @@ int arch_validate_hwbkpt_settings(struct perf_event *bp) * reports them. 
*/ if (!debug_exception_updates_fsr() && - (info->ctrl.type == ARM_BREAKPOINT_LOAD || - info->ctrl.type == ARM_BREAKPOINT_STORE)) + (hw->ctrl.type == ARM_BREAKPOINT_LOAD || + hw->ctrl.type == ARM_BREAKPOINT_STORE)) return -EINVAL; } diff --git a/arch/arm/kernel/irq.c b/arch/arm/kernel/irq.c index ece04a457486..9908dacf9229 100644 --- a/arch/arm/kernel/irq.c +++ b/arch/arm/kernel/irq.c @@ -102,16 +102,6 @@ void __init init_IRQ(void) uniphier_cache_init(); } -#ifdef CONFIG_MULTI_IRQ_HANDLER -void __init set_handle_irq(void (*handle_irq)(struct pt_regs *)) -{ - if (handle_arch_irq) - return; - - handle_arch_irq = handle_irq; -} -#endif - #ifdef CONFIG_SPARSE_IRQ int __init arch_probe_nr_irqs(void) { diff --git a/arch/arm/kernel/process.c b/arch/arm/kernel/process.c index 225d1c58d2de..d9c299133111 100644 --- a/arch/arm/kernel/process.c +++ b/arch/arm/kernel/process.c @@ -338,6 +338,7 @@ static struct vm_area_struct gate_vma = { static int __init gate_vma_init(void) { + vma_init(&gate_vma, NULL); gate_vma.vm_page_prot = PAGE_READONLY_EXEC; return 0; } diff --git a/arch/arm/kernel/setup.c b/arch/arm/kernel/setup.c index 35ca494c028c..4c249cb261f3 100644 --- a/arch/arm/kernel/setup.c +++ b/arch/arm/kernel/setup.c @@ -1145,7 +1145,7 @@ void __init setup_arch(char **cmdline_p) reserve_crashkernel(); -#ifdef CONFIG_MULTI_IRQ_HANDLER +#ifdef CONFIG_GENERIC_IRQ_MULTI_HANDLER handle_arch_irq = mdesc->handle_irq; #endif diff --git a/arch/arm/kernel/signal.c b/arch/arm/kernel/signal.c index f09e9d66d605..b8f766cf3a90 100644 --- a/arch/arm/kernel/signal.c +++ b/arch/arm/kernel/signal.c @@ -150,22 +150,18 @@ static int preserve_vfp_context(struct vfp_sigframe __user *frame) static int restore_vfp_context(char __user **auxp) { - struct vfp_sigframe __user *frame = - (struct vfp_sigframe __user *)*auxp; - unsigned long magic; - unsigned long size; - int err = 0; - - __get_user_error(magic, &frame->magic, err); - __get_user_error(size, &frame->size, err); + struct vfp_sigframe frame; + int err; + err = __copy_from_user(&frame, *auxp, sizeof(frame)); if (err) - return -EFAULT; - if (magic != VFP_MAGIC || size != VFP_STORAGE_SIZE) + return err; + + if (frame.magic != VFP_MAGIC || frame.size != VFP_STORAGE_SIZE) return -EINVAL; - *auxp += size; - return vfp_restore_user_hwstate(&frame->ufp, &frame->ufp_exc); + *auxp += sizeof(frame); + return vfp_restore_user_hwstate(&frame.ufp, &frame.ufp_exc); } #endif @@ -176,6 +172,7 @@ static int restore_vfp_context(char __user **auxp) static int restore_sigframe(struct pt_regs *regs, struct sigframe __user *sf) { + struct sigcontext context; char __user *aux; sigset_t set; int err; @@ -184,23 +181,26 @@ static int restore_sigframe(struct pt_regs *regs, struct sigframe __user *sf) if (err == 0) set_current_blocked(&set); - __get_user_error(regs->ARM_r0, &sf->uc.uc_mcontext.arm_r0, err); - __get_user_error(regs->ARM_r1, &sf->uc.uc_mcontext.arm_r1, err); - __get_user_error(regs->ARM_r2, &sf->uc.uc_mcontext.arm_r2, err); - __get_user_error(regs->ARM_r3, &sf->uc.uc_mcontext.arm_r3, err); - __get_user_error(regs->ARM_r4, &sf->uc.uc_mcontext.arm_r4, err); - __get_user_error(regs->ARM_r5, &sf->uc.uc_mcontext.arm_r5, err); - __get_user_error(regs->ARM_r6, &sf->uc.uc_mcontext.arm_r6, err); - __get_user_error(regs->ARM_r7, &sf->uc.uc_mcontext.arm_r7, err); - __get_user_error(regs->ARM_r8, &sf->uc.uc_mcontext.arm_r8, err); - __get_user_error(regs->ARM_r9, &sf->uc.uc_mcontext.arm_r9, err); - __get_user_error(regs->ARM_r10, &sf->uc.uc_mcontext.arm_r10, err); - 
__get_user_error(regs->ARM_fp, &sf->uc.uc_mcontext.arm_fp, err); - __get_user_error(regs->ARM_ip, &sf->uc.uc_mcontext.arm_ip, err); - __get_user_error(regs->ARM_sp, &sf->uc.uc_mcontext.arm_sp, err); - __get_user_error(regs->ARM_lr, &sf->uc.uc_mcontext.arm_lr, err); - __get_user_error(regs->ARM_pc, &sf->uc.uc_mcontext.arm_pc, err); - __get_user_error(regs->ARM_cpsr, &sf->uc.uc_mcontext.arm_cpsr, err); + err |= __copy_from_user(&context, &sf->uc.uc_mcontext, sizeof(context)); + if (err == 0) { + regs->ARM_r0 = context.arm_r0; + regs->ARM_r1 = context.arm_r1; + regs->ARM_r2 = context.arm_r2; + regs->ARM_r3 = context.arm_r3; + regs->ARM_r4 = context.arm_r4; + regs->ARM_r5 = context.arm_r5; + regs->ARM_r6 = context.arm_r6; + regs->ARM_r7 = context.arm_r7; + regs->ARM_r8 = context.arm_r8; + regs->ARM_r9 = context.arm_r9; + regs->ARM_r10 = context.arm_r10; + regs->ARM_fp = context.arm_fp; + regs->ARM_ip = context.arm_ip; + regs->ARM_sp = context.arm_sp; + regs->ARM_lr = context.arm_lr; + regs->ARM_pc = context.arm_pc; + regs->ARM_cpsr = context.arm_cpsr; + } err |= !valid_user_regs(regs); @@ -544,7 +544,7 @@ static void handle_signal(struct ksignal *ksig, struct pt_regs *regs) * Increment event counter and perform fixup for the pre-signal * frame. */ - rseq_signal_deliver(regs); + rseq_signal_deliver(ksig, regs); /* * Set up the stack frame @@ -666,7 +666,7 @@ do_work_pending(struct pt_regs *regs, unsigned int thread_flags, int syscall) } else { clear_thread_flag(TIF_NOTIFY_RESUME); tracehook_notify_resume(regs); - rseq_handle_notify_resume(regs); + rseq_handle_notify_resume(NULL, regs); } } local_irq_disable(); diff --git a/arch/arm/kernel/sys_oabi-compat.c b/arch/arm/kernel/sys_oabi-compat.c index 1df21a61e379..f0dd4b6ebb63 100644 --- a/arch/arm/kernel/sys_oabi-compat.c +++ b/arch/arm/kernel/sys_oabi-compat.c @@ -329,9 +329,11 @@ asmlinkage long sys_oabi_semtimedop(int semid, return -ENOMEM; err = 0; for (i = 0; i < nsops; i++) { - __get_user_error(sops[i].sem_num, &tsops->sem_num, err); - __get_user_error(sops[i].sem_op, &tsops->sem_op, err); - __get_user_error(sops[i].sem_flg, &tsops->sem_flg, err); + struct oabi_sembuf osb; + err |= __copy_from_user(&osb, tsops, sizeof(osb)); + sops[i].sem_num = osb.sem_num; + sops[i].sem_op = osb.sem_op; + sops[i].sem_flg = osb.sem_flg; tsops++; } if (timeout) { diff --git a/arch/arm/kernel/time.c b/arch/arm/kernel/time.c index cf2701cb0de8..078b259ead4e 100644 --- a/arch/arm/kernel/time.c +++ b/arch/arm/kernel/time.c @@ -83,29 +83,18 @@ static void dummy_clock_access(struct timespec64 *ts) } static clock_access_fn __read_persistent_clock = dummy_clock_access; -static clock_access_fn __read_boot_clock = dummy_clock_access; void read_persistent_clock64(struct timespec64 *ts) { __read_persistent_clock(ts); } -void read_boot_clock64(struct timespec64 *ts) -{ - __read_boot_clock(ts); -} - -int __init register_persistent_clock(clock_access_fn read_boot, - clock_access_fn read_persistent) +int __init register_persistent_clock(clock_access_fn read_persistent) { /* Only allow the clockaccess functions to be registered once */ - if (__read_persistent_clock == dummy_clock_access && - __read_boot_clock == dummy_clock_access) { - if (read_boot) - __read_boot_clock = read_boot; + if (__read_persistent_clock == dummy_clock_access) { if (read_persistent) __read_persistent_clock = read_persistent; - return 0; } diff --git a/arch/arm/kvm/hyp/Makefile b/arch/arm/kvm/hyp/Makefile index 7fc0638f263a..d2b5ec9c4b92 100644 --- a/arch/arm/kvm/hyp/Makefile +++ 
b/arch/arm/kvm/hyp/Makefile @@ -23,3 +23,11 @@ obj-$(CONFIG_KVM_ARM_HOST) += hyp-entry.o obj-$(CONFIG_KVM_ARM_HOST) += switch.o CFLAGS_switch.o += $(CFLAGS_ARMV7VE) obj-$(CONFIG_KVM_ARM_HOST) += s2-setup.o + +# KVM code is run at a different exception level with a different map, so +# compiler instrumentation that inserts callbacks or checks into the code may +# cause crashes. Just disable it. +GCOV_PROFILE := n +KASAN_SANITIZE := n +UBSAN_SANITIZE := n +KCOV_INSTRUMENT := n diff --git a/arch/arm/lib/copy_from_user.S b/arch/arm/lib/copy_from_user.S index 7a4b06049001..a826df3d3814 100644 --- a/arch/arm/lib/copy_from_user.S +++ b/arch/arm/lib/copy_from_user.S @@ -90,6 +90,15 @@ .text ENTRY(arm_copy_from_user) +#ifdef CONFIG_CPU_SPECTRE + get_thread_info r3 + ldr r3, [r3, #TI_ADDR_LIMIT] + adds ip, r1, r2 @ ip=addr+size + sub r3, r3, #1 @ addr_limit - 1 + cmpcc ip, r3 @ if (addr+size > addr_limit - 1) + movcs r1, #0 @ addr = NULL + csdb +#endif #include "copy_template.S" diff --git a/arch/arm/mach-bcm/Kconfig b/arch/arm/mach-bcm/Kconfig index c46a728df44e..25aac6ee2ab1 100644 --- a/arch/arm/mach-bcm/Kconfig +++ b/arch/arm/mach-bcm/Kconfig @@ -20,6 +20,7 @@ config ARCH_BCM_IPROC select GPIOLIB select ARM_AMBA select PINCTRL + select PCI_DOMAINS if PCI help This enables support for systems based on Broadcom IPROC architected SoCs. The IPROC complex contains one or more ARM CPUs along with common diff --git a/arch/arm/mach-davinci/board-da850-evm.c b/arch/arm/mach-davinci/board-da850-evm.c index e22fb40e34bc..6d5beb11bd96 100644 --- a/arch/arm/mach-davinci/board-da850-evm.c +++ b/arch/arm/mach-davinci/board-da850-evm.c @@ -774,7 +774,7 @@ static struct gpiod_lookup_table mmc_gpios_table = { GPIO_LOOKUP("davinci_gpio.0", DA850_MMCSD_CD_PIN, "cd", GPIO_ACTIVE_LOW), GPIO_LOOKUP("davinci_gpio.0", DA850_MMCSD_WP_PIN, "wp", - GPIO_ACTIVE_LOW), + GPIO_ACTIVE_HIGH), }, }; diff --git a/arch/arm/mach-omap2/omap-smp.c b/arch/arm/mach-omap2/omap-smp.c index 69df3620eca5..1c73694c871a 100644 --- a/arch/arm/mach-omap2/omap-smp.c +++ b/arch/arm/mach-omap2/omap-smp.c @@ -109,6 +109,45 @@ void omap5_erratum_workaround_801819(void) static inline void omap5_erratum_workaround_801819(void) { } #endif +#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR +/* + * Configure ACR and enable ACTLR[0] (Enable invalidates of BTB with + * ICIALLU) to activate the workaround for the secondary core. + * NOTE: it is assumed that the primary core's configuration is done + * by the boot loader (kernel will detect a misconfiguration and complain + * if this is not done). + * + * In General Purpose (GP) devices, ACR bit settings can only be done + * by ROM code in "secure world" using the smc call and there is no + * option to update the "firmware" on such devices. This also works for + * High Security (HS) devices, as a backup option in case the + * "update" is not done in the "security firmware". + */ +static void omap5_secondary_harden_predictor(void) +{ + u32 acr, acr_mask; + + asm volatile ("mrc p15, 0, %0, c1, c0, 1" : "=r" (acr)); + + /* + * ACTLR[0] (Enable invalidates of BTB with ICIALLU) + */ + acr_mask = BIT(0); + + /* Do we already have it done.. 
if yes, skip expensive smc */ + if ((acr & acr_mask) == acr_mask) + return; + + acr |= acr_mask; + omap_smc1(OMAP5_DRA7_MON_SET_ACR_INDEX, acr); + + pr_debug("%s: ARM ACR setup for CVE_2017_5715 applied on CPU%d\n", + __func__, smp_processor_id()); +} +#else +static inline void omap5_secondary_harden_predictor(void) { } +#endif + static void omap4_secondary_init(unsigned int cpu) { /* @@ -131,6 +170,8 @@ static void omap4_secondary_init(unsigned int cpu) set_cntfreq(); /* Configure ACR to disable streaming WA for 801819 */ omap5_erratum_workaround_801819(); + /* Enable ACR to allow for ICUALLU workaround */ + omap5_secondary_harden_predictor(); } /* diff --git a/arch/arm/mach-pxa/irq.c b/arch/arm/mach-pxa/irq.c index 9c10248fadcc..4e8c2116808e 100644 --- a/arch/arm/mach-pxa/irq.c +++ b/arch/arm/mach-pxa/irq.c @@ -185,7 +185,7 @@ static int pxa_irq_suspend(void) { int i; - for (i = 0; i < pxa_internal_irq_nr / 32; i++) { + for (i = 0; i < DIV_ROUND_UP(pxa_internal_irq_nr, 32); i++) { void __iomem *base = irq_base(i); saved_icmr[i] = __raw_readl(base + ICMR); @@ -204,7 +204,7 @@ static void pxa_irq_resume(void) { int i; - for (i = 0; i < pxa_internal_irq_nr / 32; i++) { + for (i = 0; i < DIV_ROUND_UP(pxa_internal_irq_nr, 32); i++) { void __iomem *base = irq_base(i); __raw_writel(saved_icmr[i], base + ICMR); diff --git a/arch/arm/mach-rpc/ecard.c b/arch/arm/mach-rpc/ecard.c index 39aef4876ed4..04b2f22c2739 100644 --- a/arch/arm/mach-rpc/ecard.c +++ b/arch/arm/mach-rpc/ecard.c @@ -212,7 +212,7 @@ static DEFINE_MUTEX(ecard_mutex); */ static void ecard_init_pgtables(struct mm_struct *mm) { - struct vm_area_struct vma; + struct vm_area_struct vma = TLB_FLUSH_VMA(mm, VM_EXEC); /* We want to set up the page tables for the following mapping: * Virtual Physical @@ -237,9 +237,6 @@ static void ecard_init_pgtables(struct mm_struct *mm) memcpy(dst_pgd, src_pgd, sizeof(pgd_t) * (EASI_SIZE / PGDIR_SIZE)); - vma.vm_flags = VM_EXEC; - vma.vm_mm = mm; - flush_tlb_range(&vma, IO_START, IO_START + IO_SIZE); flush_tlb_range(&vma, EASI_START, EASI_START + EASI_SIZE); } diff --git a/arch/arm/mach-socfpga/Kconfig b/arch/arm/mach-socfpga/Kconfig index d0f62eacf59d..4adb901dd5eb 100644 --- a/arch/arm/mach-socfpga/Kconfig +++ b/arch/arm/mach-socfpga/Kconfig @@ -10,6 +10,7 @@ menuconfig ARCH_SOCFPGA select HAVE_ARM_SCU select HAVE_ARM_TWD if SMP select MFD_SYSCON + select PCI_DOMAINS if PCI if ARCH_SOCFPGA config SOCFPGA_SUSPEND diff --git a/arch/arm/mm/Kconfig b/arch/arm/mm/Kconfig index 96a7b6cf459b..b169e580bf82 100644 --- a/arch/arm/mm/Kconfig +++ b/arch/arm/mm/Kconfig @@ -702,7 +702,6 @@ config ARM_THUMBEE config ARM_VIRT_EXT bool - depends on MMU default y if CPU_V7 help Enable the kernel to make use of the ARM Virtualization diff --git a/arch/arm/mm/init.c b/arch/arm/mm/init.c index c186474422f3..0cc8e04295a4 100644 --- a/arch/arm/mm/init.c +++ b/arch/arm/mm/init.c @@ -736,20 +736,29 @@ static int __mark_rodata_ro(void *unused) return 0; } +static int kernel_set_to_readonly __read_mostly; + void mark_rodata_ro(void) { + kernel_set_to_readonly = 1; stop_machine(__mark_rodata_ro, NULL, NULL); debug_checkwx(); } void set_kernel_text_rw(void) { + if (!kernel_set_to_readonly) + return; + set_section_perms(ro_perms, ARRAY_SIZE(ro_perms), false, current->active_mm); } void set_kernel_text_ro(void) { + if (!kernel_set_to_readonly) + return; + set_section_perms(ro_perms, ARRAY_SIZE(ro_perms), true, current->active_mm); } diff --git a/arch/arm/mm/nommu.c b/arch/arm/mm/nommu.c index 5dd6c58d653b..7d67c70bbded 100644 --- 
a/arch/arm/mm/nommu.c +++ b/arch/arm/mm/nommu.c @@ -53,7 +53,8 @@ static inline bool security_extensions_enabled(void) { /* Check CPUID Identification Scheme before ID_PFR1 read */ if ((read_cpuid_id() & 0x000f0000) == 0x000f0000) - return !!cpuid_feature_extract(CPUID_EXT_PFR1, 4); + return cpuid_feature_extract(CPUID_EXT_PFR1, 4) || + cpuid_feature_extract(CPUID_EXT_PFR1, 20); return 0; } diff --git a/arch/arm/mm/tcm.h b/arch/arm/mm/tcm.h index 8015ad434a40..24101925fe64 100644 --- a/arch/arm/mm/tcm.h +++ b/arch/arm/mm/tcm.h @@ -11,7 +11,7 @@ void __init tcm_init(void); #else /* No TCM support, just blank inlines to be optimized out */ -inline void tcm_init(void) +static inline void tcm_init(void) { } #endif diff --git a/arch/arm/net/bpf_jit_32.c b/arch/arm/net/bpf_jit_32.c index 6e8b71613039..f6a62ae44a65 100644 --- a/arch/arm/net/bpf_jit_32.c +++ b/arch/arm/net/bpf_jit_32.c @@ -1844,7 +1844,7 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog) /* there are 2 passes here */ bpf_jit_dump(prog->len, image_size, 2, ctx.target); - set_memory_ro((unsigned long)header, header->pages); + bpf_jit_binary_lock_ro(header); prog->bpf_func = (void *)ctx.target; prog->jited = 1; prog->jited_len = image_size; diff --git a/arch/arm/plat-omap/counter_32k.c b/arch/arm/plat-omap/counter_32k.c index 2438b96004c1..fcc5bfec8bd1 100644 --- a/arch/arm/plat-omap/counter_32k.c +++ b/arch/arm/plat-omap/counter_32k.c @@ -110,7 +110,7 @@ int __init omap_init_clocksource_32k(void __iomem *vbase) } sched_clock_register(omap_32k_read_sched_clock, 32, 32768); - register_persistent_clock(NULL, omap_read_persistent_clock64); + register_persistent_clock(omap_read_persistent_clock64); pr_info("OMAP clocksource: 32k_counter at 32768 Hz\n"); return 0; diff --git a/arch/arm/probes/kprobes/core.c b/arch/arm/probes/kprobes/core.c index e90cc8a08186..f8bd523d64d1 100644 --- a/arch/arm/probes/kprobes/core.c +++ b/arch/arm/probes/kprobes/core.c @@ -47,9 +47,6 @@ (unsigned long)(addr) + \ (size)) -/* Used as a marker in ARM_pc to note when we're in a jprobe. */ -#define JPROBE_MAGIC_ADDR 0xffffffff - DEFINE_PER_CPU(struct kprobe *, current_kprobe) = NULL; DEFINE_PER_CPU(struct kprobe_ctlblk, kprobe_ctlblk); @@ -289,8 +286,8 @@ void __kprobes kprobe_handler(struct pt_regs *regs) break; case KPROBE_REENTER: /* A nested probe was hit in FIQ; it is a BUG */ - pr_warn("Unrecoverable kprobe detected at %p.\n", - p->addr); + pr_warn("Unrecoverable kprobe detected.\n"); + dump_kprobe(p); /* fall through */ default: /* impossible cases */ @@ -303,10 +300,10 @@ void __kprobes kprobe_handler(struct pt_regs *regs) /* * If we have no pre-handler or it returned 0, we - * continue with normal processing. If we have a - * pre-handler and it returned non-zero, it prepped - * for calling the break_handler below on re-entry, - * so get out doing nothing more here. + * continue with normal processing. If we have a + * pre-handler and it returned non-zero, it will + * modify the execution path, so there is no need for + * single-stepping. Let's just reset the current kprobe and exit. */ if (!p->pre_handler || !p->pre_handler(p, regs)) { kcb->kprobe_status = KPROBE_HIT_SS; @@ -315,20 +312,9 @@ void __kprobes kprobe_handler(struct pt_regs *regs) kcb->kprobe_status = KPROBE_HIT_SSDONE; p->post_handler(p, regs, 0); } - reset_current_kprobe(); - } - } - } else if (cur) { - /* We probably hit a jprobe. Call its break handler. 
*/ - if (cur->break_handler && cur->break_handler(cur, regs)) { - kcb->kprobe_status = KPROBE_HIT_SS; - singlestep(cur, regs, kcb); - if (cur->post_handler) { - kcb->kprobe_status = KPROBE_HIT_SSDONE; - cur->post_handler(cur, regs, 0); } + reset_current_kprobe(); } - reset_current_kprobe(); } else { /* * The probe was removed and a race is in progress. @@ -521,117 +507,6 @@ void __kprobes arch_prepare_kretprobe(struct kretprobe_instance *ri, regs->ARM_lr = (unsigned long)&kretprobe_trampoline; } -int __kprobes setjmp_pre_handler(struct kprobe *p, struct pt_regs *regs) -{ - struct jprobe *jp = container_of(p, struct jprobe, kp); - struct kprobe_ctlblk *kcb = get_kprobe_ctlblk(); - long sp_addr = regs->ARM_sp; - long cpsr; - - kcb->jprobe_saved_regs = *regs; - memcpy(kcb->jprobes_stack, (void *)sp_addr, MIN_STACK_SIZE(sp_addr)); - regs->ARM_pc = (long)jp->entry; - - cpsr = regs->ARM_cpsr | PSR_I_BIT; -#ifdef CONFIG_THUMB2_KERNEL - /* Set correct Thumb state in cpsr */ - if (regs->ARM_pc & 1) - cpsr |= PSR_T_BIT; - else - cpsr &= ~PSR_T_BIT; -#endif - regs->ARM_cpsr = cpsr; - - preempt_disable(); - return 1; -} - -void __kprobes jprobe_return(void) -{ - struct kprobe_ctlblk *kcb = get_kprobe_ctlblk(); - - __asm__ __volatile__ ( - /* - * Setup an empty pt_regs. Fill SP and PC fields as - * they're needed by longjmp_break_handler. - * - * We allocate some slack between the original SP and start of - * our fabricated regs. To be precise we want to have worst case - * covered which is STMFD with all 16 regs so we allocate 2 * - * sizeof(struct_pt_regs)). - * - * This is to prevent any simulated instruction from writing - * over the regs when they are accessing the stack. - */ -#ifdef CONFIG_THUMB2_KERNEL - "sub r0, %0, %1 \n\t" - "mov sp, r0 \n\t" -#else - "sub sp, %0, %1 \n\t" -#endif - "ldr r0, ="__stringify(JPROBE_MAGIC_ADDR)"\n\t" - "str %0, [sp, %2] \n\t" - "str r0, [sp, %3] \n\t" - "mov r0, sp \n\t" - "bl kprobe_handler \n\t" - - /* - * Return to the context saved by setjmp_pre_handler - * and restored by longjmp_break_handler. - */ -#ifdef CONFIG_THUMB2_KERNEL - "ldr lr, [sp, %2] \n\t" /* lr = saved sp */ - "ldrd r0, r1, [sp, %5] \n\t" /* r0,r1 = saved lr,pc */ - "ldr r2, [sp, %4] \n\t" /* r2 = saved psr */ - "stmdb lr!, {r0, r1, r2} \n\t" /* push saved lr and */ - /* rfe context */ - "ldmia sp, {r0 - r12} \n\t" - "mov sp, lr \n\t" - "ldr lr, [sp], #4 \n\t" - "rfeia sp! 
\n\t" -#else - "ldr r0, [sp, %4] \n\t" - "msr cpsr_cxsf, r0 \n\t" - "ldmia sp, {r0 - pc} \n\t" -#endif - : - : "r" (kcb->jprobe_saved_regs.ARM_sp), - "I" (sizeof(struct pt_regs) * 2), - "J" (offsetof(struct pt_regs, ARM_sp)), - "J" (offsetof(struct pt_regs, ARM_pc)), - "J" (offsetof(struct pt_regs, ARM_cpsr)), - "J" (offsetof(struct pt_regs, ARM_lr)) - : "memory", "cc"); -} - -int __kprobes longjmp_break_handler(struct kprobe *p, struct pt_regs *regs) -{ - struct kprobe_ctlblk *kcb = get_kprobe_ctlblk(); - long stack_addr = kcb->jprobe_saved_regs.ARM_sp; - long orig_sp = regs->ARM_sp; - struct jprobe *jp = container_of(p, struct jprobe, kp); - - if (regs->ARM_pc == JPROBE_MAGIC_ADDR) { - if (orig_sp != stack_addr) { - struct pt_regs *saved_regs = - (struct pt_regs *)kcb->jprobe_saved_regs.ARM_sp; - printk("current sp %lx does not match saved sp %lx\n", - orig_sp, stack_addr); - printk("Saved registers for jprobe %p\n", jp); - show_regs(saved_regs); - printk("Current registers\n"); - show_regs(regs); - BUG(); - } - *regs = kcb->jprobe_saved_regs; - memcpy((void *)stack_addr, kcb->jprobes_stack, - MIN_STACK_SIZE(stack_addr)); - preempt_enable_no_resched(); - return 1; - } - return 0; -} - int __kprobes arch_trampoline_kprobe(struct kprobe *p) { return 0; diff --git a/arch/arm/probes/kprobes/test-core.c b/arch/arm/probes/kprobes/test-core.c index 14db14152909..cc237fa9b90f 100644 --- a/arch/arm/probes/kprobes/test-core.c +++ b/arch/arm/probes/kprobes/test-core.c @@ -1461,7 +1461,6 @@ fail: print_registers(&result_regs); if (mem) { - pr_err("current_stack=%p\n", current_stack); pr_err("expected_memory:\n"); print_memory(expected_memory, mem_size); pr_err("result_memory:\n"); diff --git a/arch/arm/vdso/Makefile b/arch/arm/vdso/Makefile index bb4118213fee..f4efff9d3afb 100644 --- a/arch/arm/vdso/Makefile +++ b/arch/arm/vdso/Makefile @@ -30,6 +30,9 @@ CFLAGS_vgettimeofday.o = -O2 # Disable gcov profiling for VDSO code GCOV_PROFILE := n +# Prevents link failures: __sanitizer_cov_trace_pc() is not linked in. +KCOV_INSTRUMENT := n + # Force dependency $(obj)/vdso.o : $(obj)/vdso.so diff --git a/arch/arm/vfp/Makefile b/arch/arm/vfp/Makefile index a81404c09d5d..94516c40ebd3 100644 --- a/arch/arm/vfp/Makefile +++ b/arch/arm/vfp/Makefile @@ -8,8 +8,5 @@ # asflags-y := -DDEBUG KBUILD_AFLAGS :=$(KBUILD_AFLAGS:-msoft-float=-Wa,-mfpu=softvfp+vfp -mfloat-abi=soft) -LDFLAGS +=--no-warn-mismatch -obj-y += vfp.o - -vfp-$(CONFIG_VFP) += vfpmodule.o entry.o vfphw.o vfpsingle.o vfpdouble.o +obj-y += vfpmodule.o entry.o vfphw.o vfpsingle.o vfpdouble.o diff --git a/arch/arm/vfp/vfpmodule.c b/arch/arm/vfp/vfpmodule.c index 35d0f823e823..dc7e6b50ef67 100644 --- a/arch/arm/vfp/vfpmodule.c +++ b/arch/arm/vfp/vfpmodule.c @@ -596,13 +596,11 @@ int vfp_preserve_user_clear_hwstate(struct user_vfp __user *ufp, } /* Sanitise and restore the current VFP state from the provided structures. */ -int vfp_restore_user_hwstate(struct user_vfp __user *ufp, - struct user_vfp_exc __user *ufp_exc) +int vfp_restore_user_hwstate(struct user_vfp *ufp, struct user_vfp_exc *ufp_exc) { struct thread_info *thread = current_thread_info(); struct vfp_hard_struct *hwstate = &thread->vfpstate.hard; unsigned long fpexc; - int err = 0; /* Disable VFP to avoid corrupting the new thread state. */ vfp_flush_hwstate(thread); @@ -611,17 +609,16 @@ int vfp_restore_user_hwstate(struct user_vfp __user *ufp, * Copy the floating point registers. There can be unused * registers see asm/hwcap.h for details. 
*/ - err |= __copy_from_user(&hwstate->fpregs, &ufp->fpregs, - sizeof(hwstate->fpregs)); + memcpy(&hwstate->fpregs, &ufp->fpregs, sizeof(hwstate->fpregs)); /* * Copy the status and control register. */ - __get_user_error(hwstate->fpscr, &ufp->fpscr, err); + hwstate->fpscr = ufp->fpscr; /* * Sanitise and restore the exception registers. */ - __get_user_error(fpexc, &ufp_exc->fpexc, err); + fpexc = ufp_exc->fpexc; /* Ensure the VFP is enabled. */ fpexc |= FPEXC_EN; @@ -630,10 +627,10 @@ int vfp_restore_user_hwstate(struct user_vfp __user *ufp, fpexc &= ~(FPEXC_EX | FPEXC_FP2V); hwstate->fpexc = fpexc; - __get_user_error(hwstate->fpinst, &ufp_exc->fpinst, err); - __get_user_error(hwstate->fpinst2, &ufp_exc->fpinst2, err); + hwstate->fpinst = ufp_exc->fpinst; + hwstate->fpinst2 = ufp_exc->fpinst2; - return err ? -EFAULT : 0; + return 0; } /* diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c index 8073625371f5..07060e5b5864 100644 --- a/arch/arm/xen/enlighten.c +++ b/arch/arm/xen/enlighten.c @@ -59,6 +59,9 @@ struct xen_memory_region xen_extra_mem[XEN_EXTRA_MEM_MAX_REGIONS] __initdata; static __read_mostly unsigned int xen_events_irq; +uint32_t xen_start_flags; +EXPORT_SYMBOL(xen_start_flags); + int xen_remap_domain_gfn_array(struct vm_area_struct *vma, unsigned long addr, xen_pfn_t *gfn, int nr, @@ -293,9 +296,7 @@ void __init xen_early_init(void) xen_setup_features(); if (xen_feature(XENFEAT_dom0)) - xen_start_info->flags |= SIF_INITDOMAIN|SIF_PRIVILEGED; - else - xen_start_info->flags &= ~(SIF_INITDOMAIN|SIF_PRIVILEGED); + xen_start_flags |= SIF_INITDOMAIN|SIF_PRIVILEGED; if (!console_set_on_cmdline && !xen_initial_domain()) add_preferred_console("hvc", 0, NULL); diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig index 14f204c45450..3d1011957823 100644 --- a/arch/arm64/Kconfig +++ b/arch/arm64/Kconfig @@ -74,6 +74,7 @@ config ARM64 select GENERIC_CPU_AUTOPROBE select GENERIC_EARLY_IOREMAP select GENERIC_IDLE_POLL_SETUP + select GENERIC_IRQ_MULTI_HANDLER select GENERIC_IRQ_PROBE select GENERIC_IRQ_SHOW select GENERIC_IRQ_SHOW_LEVEL @@ -103,7 +104,6 @@ config ARM64 select HAVE_ARM_SMCCC select HAVE_EBPF_JIT select HAVE_C_RECORDMCOUNT - select HAVE_CC_STACKPROTECTOR select HAVE_CMPXCHG_DOUBLE select HAVE_CMPXCHG_LOCAL select HAVE_CONTEXT_TRACKING @@ -128,6 +128,7 @@ config ARM64 select HAVE_PERF_USER_STACK_DUMP select HAVE_REGS_AND_STACK_ACCESS_API select HAVE_RCU_TABLE_FREE + select HAVE_STACKPROTECTOR select HAVE_SYSCALL_TRACEPOINTS select HAVE_KPROBES select HAVE_KRETPROBES @@ -264,9 +265,6 @@ config ARCH_SUPPORTS_UPROBES config ARCH_PROC_KCORE_TEXT def_bool y -config MULTI_IRQ_HANDLER - def_bool y - source "init/Kconfig" source "kernel/Kconfig.freezer" diff --git a/arch/arm64/Makefile b/arch/arm64/Makefile index 45272266dafb..e7101b19d590 100644 --- a/arch/arm64/Makefile +++ b/arch/arm64/Makefile @@ -10,7 +10,7 @@ # # Copyright (C) 1995-2001 by Russell King -LDFLAGS_vmlinux :=-p --no-undefined -X +LDFLAGS_vmlinux :=--no-undefined -X CPPFLAGS_vmlinux.lds = -DTEXT_OFFSET=$(TEXT_OFFSET) GZFLAGS :=-9 @@ -60,15 +60,15 @@ ifeq ($(CONFIG_CPU_BIG_ENDIAN), y) KBUILD_CPPFLAGS += -mbig-endian CHECKFLAGS += -D__AARCH64EB__ AS += -EB -LD += -EB -LDFLAGS += -maarch64linuxb +# We must use the linux target here, since distributions don't tend to package +# the ELF linker scripts with binutils, and this results in a build failure. 
+LDFLAGS += -EB -maarch64linuxb UTS_MACHINE := aarch64_be else KBUILD_CPPFLAGS += -mlittle-endian CHECKFLAGS += -D__AARCH64EL__ AS += -EL -LD += -EL -LDFLAGS += -maarch64linux +LDFLAGS += -EL -maarch64linux # See comment above UTS_MACHINE := aarch64 endif diff --git a/arch/arm64/boot/dts/altera/socfpga_stratix10.dtsi b/arch/arm64/boot/dts/altera/socfpga_stratix10.dtsi index e6b059378dc0..67dac595dc72 100644 --- a/arch/arm64/boot/dts/altera/socfpga_stratix10.dtsi +++ b/arch/arm64/boot/dts/altera/socfpga_stratix10.dtsi @@ -309,8 +309,7 @@ interrupts = <0 99 4>; resets = <&rst SPIM0_RESET>; reg-io-width = <4>; - num-chipselect = <4>; - bus-num = <0>; + num-cs = <4>; status = "disabled"; }; @@ -322,8 +321,7 @@ interrupts = <0 100 4>; resets = <&rst SPIM1_RESET>; reg-io-width = <4>; - num-chipselect = <4>; - bus-num = <0>; + num-cs = <4>; status = "disabled"; }; diff --git a/arch/arm64/boot/dts/amlogic/meson-axg-s400.dts b/arch/arm64/boot/dts/amlogic/meson-axg-s400.dts index 4b3331fbfe39..dff9b15eb3c0 100644 --- a/arch/arm64/boot/dts/amlogic/meson-axg-s400.dts +++ b/arch/arm64/boot/dts/amlogic/meson-axg-s400.dts @@ -66,9 +66,22 @@ ðmac { status = "okay"; - phy-mode = "rgmii"; pinctrl-0 = <ð_rgmii_y_pins>; pinctrl-names = "default"; + phy-handle = <ð_phy0>; + phy-mode = "rgmii"; + + mdio { + compatible = "snps,dwmac-mdio"; + #address-cells = <1>; + #size-cells = <0>; + + eth_phy0: ethernet-phy@0 { + /* Realtek RTL8211F (0x001cc916) */ + reg = <0>; + eee-broken-1000t; + }; + }; }; &uart_A { diff --git a/arch/arm64/boot/dts/amlogic/meson-axg.dtsi b/arch/arm64/boot/dts/amlogic/meson-axg.dtsi index fee87737a201..67d7115e4eff 100644 --- a/arch/arm64/boot/dts/amlogic/meson-axg.dtsi +++ b/arch/arm64/boot/dts/amlogic/meson-axg.dtsi @@ -132,7 +132,7 @@ sd_emmc_b: sd@5000 { compatible = "amlogic,meson-axg-mmc"; - reg = <0x0 0x5000 0x0 0x2000>; + reg = <0x0 0x5000 0x0 0x800>; interrupts = ; status = "disabled"; clocks = <&clkc CLKID_SD_EMMC_B>, @@ -144,7 +144,7 @@ sd_emmc_c: mmc@7000 { compatible = "amlogic,meson-axg-mmc"; - reg = <0x0 0x7000 0x0 0x2000>; + reg = <0x0 0x7000 0x0 0x800>; interrupts = ; status = "disabled"; clocks = <&clkc CLKID_SD_EMMC_C>, diff --git a/arch/arm64/boot/dts/amlogic/meson-gx.dtsi b/arch/arm64/boot/dts/amlogic/meson-gx.dtsi index 3c31e21cbed7..b8dc4dbb391b 100644 --- a/arch/arm64/boot/dts/amlogic/meson-gx.dtsi +++ b/arch/arm64/boot/dts/amlogic/meson-gx.dtsi @@ -35,6 +35,12 @@ no-map; }; + /* Alternate 3 MiB reserved for ARM Trusted Firmware (BL31) */ + secmon_reserved_alt: secmon@5000000 { + reg = <0x0 0x05000000 0x0 0x300000>; + no-map; + }; + linux,cma { compatible = "shared-dma-pool"; reusable; @@ -457,21 +463,21 @@ sd_emmc_a: mmc@70000 { compatible = "amlogic,meson-gx-mmc", "amlogic,meson-gxbb-mmc"; - reg = <0x0 0x70000 0x0 0x2000>; + reg = <0x0 0x70000 0x0 0x800>; interrupts = ; status = "disabled"; }; sd_emmc_b: mmc@72000 { compatible = "amlogic,meson-gx-mmc", "amlogic,meson-gxbb-mmc"; - reg = <0x0 0x72000 0x0 0x2000>; + reg = <0x0 0x72000 0x0 0x800>; interrupts = ; status = "disabled"; }; sd_emmc_c: mmc@74000 { compatible = "amlogic,meson-gx-mmc", "amlogic,meson-gxbb-mmc"; - reg = <0x0 0x74000 0x0 0x2000>; + reg = <0x0 0x74000 0x0 0x800>; interrupts = ; status = "disabled"; }; diff --git a/arch/arm64/boot/dts/amlogic/meson-gxl-mali.dtsi b/arch/arm64/boot/dts/amlogic/meson-gxl-mali.dtsi index eb327664a4d8..6aaafff674f9 100644 --- a/arch/arm64/boot/dts/amlogic/meson-gxl-mali.dtsi +++ b/arch/arm64/boot/dts/amlogic/meson-gxl-mali.dtsi @@ -6,7 +6,7 @@ &apb { mali: gpu@c0000 { 
- compatible = "amlogic,meson-gxbb-mali", "arm,mali-450"; + compatible = "amlogic,meson-gxl-mali", "arm,mali-450"; reg = <0x0 0xc0000 0x0 0x40000>; interrupts = , , diff --git a/arch/arm64/boot/dts/amlogic/meson-gxl-s905x-libretech-cc.dts b/arch/arm64/boot/dts/amlogic/meson-gxl-s905x-libretech-cc.dts index 3e3eb31748a3..f63bceb88caa 100644 --- a/arch/arm64/boot/dts/amlogic/meson-gxl-s905x-libretech-cc.dts +++ b/arch/arm64/boot/dts/amlogic/meson-gxl-s905x-libretech-cc.dts @@ -234,9 +234,6 @@ bus-width = <4>; cap-sd-highspeed; - sd-uhs-sdr12; - sd-uhs-sdr25; - sd-uhs-sdr50; max-frequency = <100000000>; disable-wp; diff --git a/arch/arm64/boot/dts/amlogic/meson-gxl-s905x-p212.dtsi b/arch/arm64/boot/dts/amlogic/meson-gxl-s905x-p212.dtsi index 0cfd701809de..a1b31013ab6e 100644 --- a/arch/arm64/boot/dts/amlogic/meson-gxl-s905x-p212.dtsi +++ b/arch/arm64/boot/dts/amlogic/meson-gxl-s905x-p212.dtsi @@ -189,3 +189,10 @@ &usb0 { status = "okay"; }; + +&usb2_phy0 { + /* + * HDMI_5V is also used as supply for the USB VBUS. + */ + phy-supply = <&hdmi_5v>; +}; diff --git a/arch/arm64/boot/dts/amlogic/meson-gxl.dtsi b/arch/arm64/boot/dts/amlogic/meson-gxl.dtsi index 27538eea547b..c87a80e9bcc6 100644 --- a/arch/arm64/boot/dts/amlogic/meson-gxl.dtsi +++ b/arch/arm64/boot/dts/amlogic/meson-gxl.dtsi @@ -13,14 +13,6 @@ / { compatible = "amlogic,meson-gxl"; - reserved-memory { - /* Alternate 3 MiB reserved for ARM Trusted Firmware (BL31) */ - secmon_reserved_alt: secmon@5000000 { - reg = <0x0 0x05000000 0x0 0x300000>; - no-map; - }; - }; - soc { usb0: usb@c9000000 { status = "disabled"; diff --git a/arch/arm64/boot/dts/broadcom/northstar2/ns2.dtsi b/arch/arm64/boot/dts/broadcom/northstar2/ns2.dtsi index 4a2a6af8e752..4057197048dc 100644 --- a/arch/arm64/boot/dts/broadcom/northstar2/ns2.dtsi +++ b/arch/arm64/boot/dts/broadcom/northstar2/ns2.dtsi @@ -118,7 +118,7 @@ #interrupt-cells = <1>; interrupt-map-mask = <0 0 0 0>; - interrupt-map = <0 0 0 0 &gic 0 GIC_SPI 281 IRQ_TYPE_NONE>; + interrupt-map = <0 0 0 0 &gic 0 GIC_SPI 281 IRQ_TYPE_LEVEL_HIGH>; linux,pci-domain = <0>; @@ -149,7 +149,7 @@ #interrupt-cells = <1>; interrupt-map-mask = <0 0 0 0>; - interrupt-map = <0 0 0 0 &gic 0 GIC_SPI 305 IRQ_TYPE_NONE>; + interrupt-map = <0 0 0 0 &gic 0 GIC_SPI 305 IRQ_TYPE_LEVEL_HIGH>; linux,pci-domain = <4>; @@ -566,7 +566,7 @@ reg = <0x66080000 0x100>; #address-cells = <1>; #size-cells = <0>; - interrupts = ; + interrupts = ; clock-frequency = <100000>; status = "disabled"; }; @@ -594,7 +594,7 @@ reg = <0x660b0000 0x100>; #address-cells = <1>; #size-cells = <0>; - interrupts = ; + interrupts = ; clock-frequency = <100000>; status = "disabled"; }; diff --git a/arch/arm64/boot/dts/broadcom/stingray/bcm958742k.dts b/arch/arm64/boot/dts/broadcom/stingray/bcm958742k.dts index eb6f08cdbd79..77efa28c4dd5 100644 --- a/arch/arm64/boot/dts/broadcom/stingray/bcm958742k.dts +++ b/arch/arm64/boot/dts/broadcom/stingray/bcm958742k.dts @@ -43,6 +43,10 @@ enet-phy-lane-swap; }; +&sdio0 { + mmc-ddr-1_8v; +}; + &uart2 { status = "okay"; }; diff --git a/arch/arm64/boot/dts/broadcom/stingray/bcm958742t.dts b/arch/arm64/boot/dts/broadcom/stingray/bcm958742t.dts index 5084b037320f..55ba495ef56e 100644 --- a/arch/arm64/boot/dts/broadcom/stingray/bcm958742t.dts +++ b/arch/arm64/boot/dts/broadcom/stingray/bcm958742t.dts @@ -42,3 +42,7 @@ &gphy0 { enet-phy-lane-swap; }; + +&sdio0 { + mmc-ddr-1_8v; +}; diff --git a/arch/arm64/boot/dts/broadcom/stingray/stingray.dtsi b/arch/arm64/boot/dts/broadcom/stingray/stingray.dtsi index 
99aaff0b6d72..b203152ad67c 100644 --- a/arch/arm64/boot/dts/broadcom/stingray/stingray.dtsi +++ b/arch/arm64/boot/dts/broadcom/stingray/stingray.dtsi @@ -409,7 +409,7 @@ reg = <0x000b0000 0x100>; #address-cells = <1>; #size-cells = <0>; - interrupts = ; + interrupts = ; clock-frequency = <100000>; status = "disabled"; }; @@ -453,7 +453,7 @@ reg = <0x000e0000 0x100>; #address-cells = <1>; #size-cells = <0>; - interrupts = ; + interrupts = ; clock-frequency = <100000>; status = "disabled"; }; diff --git a/arch/arm64/boot/dts/hisilicon/hi3660-hikey960.dts b/arch/arm64/boot/dts/hisilicon/hi3660-hikey960.dts index c6999624ed8a..68c5a6c819ae 100644 --- a/arch/arm64/boot/dts/hisilicon/hi3660-hikey960.dts +++ b/arch/arm64/boot/dts/hisilicon/hi3660-hikey960.dts @@ -585,6 +585,8 @@ vmmc-supply = <&wlan_en>; ti,non-removable; non-removable; + cap-power-off-card; + keep-power-in-suspend; #address-cells = <0x1>; #size-cells = <0x0>; status = "ok"; diff --git a/arch/arm64/boot/dts/hisilicon/hi6220-hikey.dts b/arch/arm64/boot/dts/hisilicon/hi6220-hikey.dts index edb4ee0b8896..7f12624f6c8e 100644 --- a/arch/arm64/boot/dts/hisilicon/hi6220-hikey.dts +++ b/arch/arm64/boot/dts/hisilicon/hi6220-hikey.dts @@ -322,6 +322,8 @@ dwmmc_2: dwmmc2@f723f000 { bus-width = <0x4>; non-removable; + cap-power-off-card; + keep-power-in-suspend; vmmc-supply = <®_vdd_3v3>; mmc-pwrseq = <&wl1835_pwrseq>; diff --git a/arch/arm64/boot/dts/marvell/armada-cp110.dtsi b/arch/arm64/boot/dts/marvell/armada-cp110.dtsi index 7dabe25f6774..1c6ff8197a88 100644 --- a/arch/arm64/boot/dts/marvell/armada-cp110.dtsi +++ b/arch/arm64/boot/dts/marvell/armada-cp110.dtsi @@ -149,7 +149,7 @@ CP110_LABEL(icu): interrupt-controller@1e0000 { compatible = "marvell,cp110-icu"; - reg = <0x1e0000 0x10>; + reg = <0x1e0000 0x440>; #interrupt-cells = <3>; interrupt-controller; msi-parent = <&gicp>; diff --git a/arch/arm64/boot/dts/qcom/apq8096-db820c.dtsi b/arch/arm64/boot/dts/qcom/apq8096-db820c.dtsi index 0f829db33efe..4d5ef01f43a3 100644 --- a/arch/arm64/boot/dts/qcom/apq8096-db820c.dtsi +++ b/arch/arm64/boot/dts/qcom/apq8096-db820c.dtsi @@ -75,7 +75,7 @@ serial@75b1000 { label = "LS-UART0"; - status = "okay"; + status = "disabled"; pinctrl-names = "default", "sleep"; pinctrl-0 = <&blsp2_uart2_4pins_default>; pinctrl-1 = <&blsp2_uart2_4pins_sleep>; diff --git a/arch/arm64/boot/dts/qcom/msm8916.dtsi b/arch/arm64/boot/dts/qcom/msm8916.dtsi index 650f356f69ca..c2625d15a8c0 100644 --- a/arch/arm64/boot/dts/qcom/msm8916.dtsi +++ b/arch/arm64/boot/dts/qcom/msm8916.dtsi @@ -1191,14 +1191,14 @@ port@0 { reg = <0>; - etf_out: endpoint { + etf_in: endpoint { slave-mode; remote-endpoint = <&funnel0_out>; }; }; port@1 { reg = <0>; - etf_in: endpoint { + etf_out: endpoint { remote-endpoint = <&replicator_in>; }; }; diff --git a/arch/arm64/boot/dts/socionext/uniphier-ld11-global.dts b/arch/arm64/boot/dts/socionext/uniphier-ld11-global.dts index 9b4dc41703e3..ae3b5adf32df 100644 --- a/arch/arm64/boot/dts/socionext/uniphier-ld11-global.dts +++ b/arch/arm64/boot/dts/socionext/uniphier-ld11-global.dts @@ -54,7 +54,7 @@ sound { compatible = "audio-graph-card"; label = "UniPhier LD11"; - widgets = "Headphone", "Headphone Jack"; + widgets = "Headphone", "Headphones"; dais = <&i2s_port2 &i2s_port3 &i2s_port4 diff --git a/arch/arm64/boot/dts/socionext/uniphier-ld20-global.dts b/arch/arm64/boot/dts/socionext/uniphier-ld20-global.dts index fe6608ea3277..7919233c9ce2 100644 --- a/arch/arm64/boot/dts/socionext/uniphier-ld20-global.dts +++ 
b/arch/arm64/boot/dts/socionext/uniphier-ld20-global.dts @@ -54,7 +54,7 @@ sound { compatible = "audio-graph-card"; label = "UniPhier LD20"; - widgets = "Headphone", "Headphone Jack"; + widgets = "Headphone", "Headphones"; dais = <&i2s_port2 &i2s_port3 &i2s_port4 diff --git a/arch/arm64/configs/defconfig b/arch/arm64/configs/defconfig index 3cfa8ca26738..f9a186f6af8a 100644 --- a/arch/arm64/configs/defconfig +++ b/arch/arm64/configs/defconfig @@ -47,6 +47,7 @@ CONFIG_ARCH_MVEBU=y CONFIG_ARCH_QCOM=y CONFIG_ARCH_ROCKCHIP=y CONFIG_ARCH_SEATTLE=y +CONFIG_ARCH_SYNQUACER=y CONFIG_ARCH_RENESAS=y CONFIG_ARCH_R8A7795=y CONFIG_ARCH_R8A7796=y @@ -58,7 +59,6 @@ CONFIG_ARCH_R8A77995=y CONFIG_ARCH_STRATIX10=y CONFIG_ARCH_TEGRA=y CONFIG_ARCH_SPRD=y -CONFIG_ARCH_SYNQUACER=y CONFIG_ARCH_THUNDER=y CONFIG_ARCH_THUNDER2=y CONFIG_ARCH_UNIPHIER=y @@ -67,25 +67,23 @@ CONFIG_ARCH_XGENE=y CONFIG_ARCH_ZX=y CONFIG_ARCH_ZYNQMP=y CONFIG_PCI=y -CONFIG_HOTPLUG_PCI_PCIE=y CONFIG_PCI_IOV=y CONFIG_HOTPLUG_PCI=y CONFIG_HOTPLUG_PCI_ACPI=y -CONFIG_PCI_LAYERSCAPE=y -CONFIG_PCI_HISI=y -CONFIG_PCIE_QCOM=y -CONFIG_PCIE_KIRIN=y -CONFIG_PCIE_ARMADA_8K=y -CONFIG_PCIE_HISI_STB=y CONFIG_PCI_AARDVARK=y CONFIG_PCI_TEGRA=y CONFIG_PCIE_RCAR=y -CONFIG_PCIE_ROCKCHIP=y -CONFIG_PCIE_ROCKCHIP_HOST=m CONFIG_PCI_HOST_GENERIC=y CONFIG_PCI_XGENE=y CONFIG_PCI_HOST_THUNDER_PEM=y CONFIG_PCI_HOST_THUNDER_ECAM=y +CONFIG_PCIE_ROCKCHIP_HOST=m +CONFIG_PCI_LAYERSCAPE=y +CONFIG_PCI_HISI=y +CONFIG_PCIE_QCOM=y +CONFIG_PCIE_ARMADA_8K=y +CONFIG_PCIE_KIRIN=y +CONFIG_PCIE_HISI_STB=y CONFIG_ARM64_VA_BITS_48=y CONFIG_SCHED_MC=y CONFIG_NUMA=y @@ -104,8 +102,6 @@ CONFIG_HIBERNATION=y CONFIG_WQ_POWER_EFFICIENT_DEFAULT=y CONFIG_ARM_CPUIDLE=y CONFIG_CPU_FREQ=y -CONFIG_CPU_FREQ_GOV_ATTR_SET=y -CONFIG_CPU_FREQ_GOV_COMMON=y CONFIG_CPU_FREQ_STAT=y CONFIG_CPU_FREQ_GOV_POWERSAVE=m CONFIG_CPU_FREQ_GOV_USERSPACE=y @@ -113,11 +109,11 @@ CONFIG_CPU_FREQ_GOV_ONDEMAND=y CONFIG_CPU_FREQ_GOV_CONSERVATIVE=m CONFIG_CPU_FREQ_GOV_SCHEDUTIL=y CONFIG_CPUFREQ_DT=y +CONFIG_ACPI_CPPC_CPUFREQ=m CONFIG_ARM_ARMADA_37XX_CPUFREQ=y CONFIG_ARM_BIG_LITTLE_CPUFREQ=y CONFIG_ARM_SCPI_CPUFREQ=y CONFIG_ARM_TEGRA186_CPUFREQ=y -CONFIG_ACPI_CPPC_CPUFREQ=m CONFIG_NET=y CONFIG_PACKET=y CONFIG_UNIX=y @@ -236,11 +232,6 @@ CONFIG_SMSC911X=y CONFIG_SNI_AVE=y CONFIG_SNI_NETSEC=y CONFIG_STMMAC_ETH=m -CONFIG_DWMAC_IPQ806X=m -CONFIG_DWMAC_MESON=m -CONFIG_DWMAC_ROCKCHIP=m -CONFIG_DWMAC_SUNXI=m -CONFIG_DWMAC_SUN8I=m CONFIG_MDIO_BUS_MUX_MMIOREG=y CONFIG_AT803X_PHY=m CONFIG_MARVELL_PHY=m @@ -269,8 +260,8 @@ CONFIG_WL18XX=m CONFIG_WLCORE_SDIO=m CONFIG_INPUT_EVDEV=y CONFIG_KEYBOARD_ADC=m -CONFIG_KEYBOARD_CROS_EC=y CONFIG_KEYBOARD_GPIO=y +CONFIG_KEYBOARD_CROS_EC=y CONFIG_INPUT_TOUCHSCREEN=y CONFIG_TOUCHSCREEN_ATMEL_MXT=m CONFIG_INPUT_MISC=y @@ -296,17 +287,13 @@ CONFIG_SERIAL_SAMSUNG=y CONFIG_SERIAL_SAMSUNG_CONSOLE=y CONFIG_SERIAL_TEGRA=y CONFIG_SERIAL_SH_SCI=y -CONFIG_SERIAL_SH_SCI_NR_UARTS=11 -CONFIG_SERIAL_SH_SCI_CONSOLE=y CONFIG_SERIAL_MSM=y CONFIG_SERIAL_MSM_CONSOLE=y CONFIG_SERIAL_XILINX_PS_UART=y CONFIG_SERIAL_XILINX_PS_UART_CONSOLE=y CONFIG_SERIAL_MVEBU_UART=y CONFIG_SERIAL_DEV_BUS=y -CONFIG_SERIAL_DEV_CTRL_TTYPORT=y CONFIG_VIRTIO_CONSOLE=y -CONFIG_I2C_HID=m CONFIG_I2C_CHARDEV=y CONFIG_I2C_MUX=y CONFIG_I2C_MUX_PCA954x=y @@ -325,26 +312,26 @@ CONFIG_I2C_RCAR=y CONFIG_I2C_CROS_EC_TUNNEL=y CONFIG_SPI=y CONFIG_SPI_ARMADA_3700=y -CONFIG_SPI_MESON_SPICC=m -CONFIG_SPI_MESON_SPIFC=m CONFIG_SPI_BCM2835=m CONFIG_SPI_BCM2835AUX=m +CONFIG_SPI_MESON_SPICC=m +CONFIG_SPI_MESON_SPIFC=m CONFIG_SPI_ORION=y CONFIG_SPI_PL022=y 
-CONFIG_SPI_QUP=y CONFIG_SPI_ROCKCHIP=y +CONFIG_SPI_QUP=y CONFIG_SPI_S3C64XX=y CONFIG_SPI_SPIDEV=m CONFIG_SPMI=y -CONFIG_PINCTRL_IPQ8074=y CONFIG_PINCTRL_SINGLE=y CONFIG_PINCTRL_MAX77620=y +CONFIG_PINCTRL_IPQ8074=y CONFIG_PINCTRL_MSM8916=y CONFIG_PINCTRL_MSM8994=y CONFIG_PINCTRL_MSM8996=y -CONFIG_PINCTRL_MT7622=y CONFIG_PINCTRL_QDF2XXX=y CONFIG_PINCTRL_QCOM_SPMI_PMIC=y +CONFIG_PINCTRL_MT7622=y CONFIG_GPIO_DWAPB=y CONFIG_GPIO_MB86S7X=y CONFIG_GPIO_PL061=y @@ -368,13 +355,13 @@ CONFIG_SENSORS_INA2XX=m CONFIG_THERMAL_GOV_POWER_ALLOCATOR=y CONFIG_CPU_THERMAL=y CONFIG_THERMAL_EMULATION=y +CONFIG_ROCKCHIP_THERMAL=m +CONFIG_RCAR_GEN3_THERMAL=y CONFIG_ARMADA_THERMAL=y CONFIG_BRCMSTB_THERMAL=m CONFIG_EXYNOS_THERMAL=y -CONFIG_RCAR_GEN3_THERMAL=y -CONFIG_QCOM_TSENS=y -CONFIG_ROCKCHIP_THERMAL=m CONFIG_TEGRA_BPMP_THERMAL=m +CONFIG_QCOM_TSENS=y CONFIG_UNIPHIER_THERMAL=y CONFIG_WATCHDOG=y CONFIG_S3C2410_WATCHDOG=y @@ -395,9 +382,9 @@ CONFIG_MFD_MAX77620=y CONFIG_MFD_SPMI_PMIC=y CONFIG_MFD_RK808=y CONFIG_MFD_SEC_CORE=y +CONFIG_REGULATOR_FIXED_VOLTAGE=y CONFIG_REGULATOR_AXP20X=y CONFIG_REGULATOR_FAN53555=y -CONFIG_REGULATOR_FIXED_VOLTAGE=y CONFIG_REGULATOR_GPIO=y CONFIG_REGULATOR_HI6421V530=y CONFIG_REGULATOR_HI655X=y @@ -407,16 +394,15 @@ CONFIG_REGULATOR_QCOM_SMD_RPM=y CONFIG_REGULATOR_QCOM_SPMI=y CONFIG_REGULATOR_RK808=y CONFIG_REGULATOR_S2MPS11=y +CONFIG_RC_CORE=m +CONFIG_RC_DECODERS=y +CONFIG_RC_DEVICES=y +CONFIG_IR_MESON=m CONFIG_MEDIA_SUPPORT=m CONFIG_MEDIA_CAMERA_SUPPORT=y CONFIG_MEDIA_ANALOG_TV_SUPPORT=y CONFIG_MEDIA_DIGITAL_TV_SUPPORT=y CONFIG_MEDIA_CONTROLLER=y -CONFIG_MEDIA_RC_SUPPORT=y -CONFIG_RC_CORE=m -CONFIG_RC_DEVICES=y -CONFIG_RC_DECODERS=y -CONFIG_IR_MESON=m CONFIG_VIDEO_V4L2_SUBDEV_API=y # CONFIG_DVB_NET is not set CONFIG_V4L_MEM2MEM_DRIVERS=y @@ -441,8 +427,7 @@ CONFIG_ROCKCHIP_DW_HDMI=y CONFIG_ROCKCHIP_DW_MIPI_DSI=y CONFIG_ROCKCHIP_INNO_HDMI=y CONFIG_DRM_RCAR_DU=m -CONFIG_DRM_RCAR_LVDS=y -CONFIG_DRM_RCAR_VSP=y +CONFIG_DRM_RCAR_LVDS=m CONFIG_DRM_TEGRA=m CONFIG_DRM_PANEL_SIMPLE=m CONFIG_DRM_I2C_ADV7511=m @@ -455,7 +440,6 @@ CONFIG_FB_ARMCLCD=y CONFIG_BACKLIGHT_GENERIC=m CONFIG_BACKLIGHT_PWM=m CONFIG_BACKLIGHT_LP855X=m -CONFIG_FRAMEBUFFER_CONSOLE=y CONFIG_LOGO=y # CONFIG_LOGO_LINUX_MONO is not set # CONFIG_LOGO_LINUX_VGA16 is not set @@ -468,6 +452,7 @@ CONFIG_SND_SOC_RCAR=m CONFIG_SND_SOC_AK4613=m CONFIG_SND_SIMPLE_CARD=m CONFIG_SND_AUDIO_GRAPH_CARD=m +CONFIG_I2C_HID=m CONFIG_USB=y CONFIG_USB_OTG=y CONFIG_USB_XHCI_HCD=y @@ -501,12 +486,12 @@ CONFIG_MMC_BLOCK_MINORS=32 CONFIG_MMC_ARMMMCI=y CONFIG_MMC_SDHCI=y CONFIG_MMC_SDHCI_ACPI=y -CONFIG_MMC_SDHCI_F_SDH30=y CONFIG_MMC_SDHCI_PLTFM=y CONFIG_MMC_SDHCI_OF_ARASAN=y CONFIG_MMC_SDHCI_OF_ESDHC=y CONFIG_MMC_SDHCI_CADENCE=y CONFIG_MMC_SDHCI_TEGRA=y +CONFIG_MMC_SDHCI_F_SDH30=y CONFIG_MMC_MESON_GX=y CONFIG_MMC_SDHCI_MSM=y CONFIG_MMC_SPI=y @@ -524,11 +509,11 @@ CONFIG_LEDS_CLASS=y CONFIG_LEDS_GPIO=y CONFIG_LEDS_PWM=y CONFIG_LEDS_SYSCON=y +CONFIG_LEDS_TRIGGER_DISK=y CONFIG_LEDS_TRIGGER_HEARTBEAT=y CONFIG_LEDS_TRIGGER_CPU=y CONFIG_LEDS_TRIGGER_DEFAULT_ON=y CONFIG_LEDS_TRIGGER_PANIC=y -CONFIG_LEDS_TRIGGER_DISK=y CONFIG_EDAC=y CONFIG_EDAC_GHES=y CONFIG_RTC_CLASS=y @@ -537,13 +522,13 @@ CONFIG_RTC_DRV_RK808=m CONFIG_RTC_DRV_S5M=y CONFIG_RTC_DRV_DS3232=y CONFIG_RTC_DRV_EFI=y +CONFIG_RTC_DRV_CROS_EC=y CONFIG_RTC_DRV_S3C=y CONFIG_RTC_DRV_PL031=y CONFIG_RTC_DRV_SUN6I=y CONFIG_RTC_DRV_ARMADA38X=y CONFIG_RTC_DRV_TEGRA=y CONFIG_RTC_DRV_XGENE=y -CONFIG_RTC_DRV_CROS_EC=y CONFIG_DMADEVICES=y CONFIG_DMA_BCM2835=m CONFIG_K3_DMA=y @@ -579,7 +564,6 @@ 
CONFIG_HWSPINLOCK_QCOM=y CONFIG_ARM_MHU=y CONFIG_PLATFORM_MHU=y CONFIG_BCM2835_MBOX=y -CONFIG_HI6220_MBOX=y CONFIG_QCOM_APCS_IPC=y CONFIG_ROCKCHIP_IOMMU=y CONFIG_TEGRA_IOMMU_SMMU=y @@ -602,7 +586,6 @@ CONFIG_DEVFREQ_GOV_SIMPLE_ONDEMAND=y CONFIG_EXTCON_USB_GPIO=y CONFIG_EXTCON_USBC_CROS_EC=y CONFIG_MEMORY=y -CONFIG_TEGRA_MC=y CONFIG_IIO=y CONFIG_EXYNOS_ADC=y CONFIG_ROCKCHIP_SARADC=m @@ -618,27 +601,27 @@ CONFIG_PWM_RCAR=m CONFIG_PWM_ROCKCHIP=y CONFIG_PWM_SAMSUNG=y CONFIG_PWM_TEGRA=m +CONFIG_PHY_XGENE=y +CONFIG_PHY_SUN4I_USB=y +CONFIG_PHY_HI6220_USB=y CONFIG_PHY_HISTB_COMBPHY=y CONFIG_PHY_HISI_INNO_USB2=y -CONFIG_PHY_RCAR_GEN3_USB2=y -CONFIG_PHY_RCAR_GEN3_USB3=m -CONFIG_PHY_HI6220_USB=y -CONFIG_PHY_QCOM_USB_HS=y -CONFIG_PHY_SUN4I_USB=y CONFIG_PHY_MVEBU_CP110_COMPHY=y CONFIG_PHY_QCOM_QMP=m -CONFIG_PHY_ROCKCHIP_INNO_USB2=y +CONFIG_PHY_QCOM_USB_HS=y +CONFIG_PHY_RCAR_GEN3_USB2=y +CONFIG_PHY_RCAR_GEN3_USB3=m CONFIG_PHY_ROCKCHIP_EMMC=y +CONFIG_PHY_ROCKCHIP_INNO_USB2=y CONFIG_PHY_ROCKCHIP_PCIE=m CONFIG_PHY_ROCKCHIP_TYPEC=y -CONFIG_PHY_XGENE=y CONFIG_PHY_TEGRA_XUSB=y CONFIG_QCOM_L2_PMU=y CONFIG_QCOM_L3_PMU=y -CONFIG_MESON_EFUSE=m CONFIG_QCOM_QFPROM=y CONFIG_ROCKCHIP_EFUSE=y CONFIG_UNIPHIER_EFUSE=y +CONFIG_MESON_EFUSE=m CONFIG_TEE=y CONFIG_OPTEE=y CONFIG_ARM_SCPI_PROTOCOL=y @@ -647,7 +630,6 @@ CONFIG_EFI_CAPSULE_LOADER=y CONFIG_ACPI=y CONFIG_ACPI_APEI=y CONFIG_ACPI_APEI_GHES=y -CONFIG_ACPI_APEI_PCIEAER=y CONFIG_ACPI_APEI_MEMORY_FAILURE=y CONFIG_ACPI_APEI_EINJ=y CONFIG_EXT2_FS=y @@ -682,7 +664,6 @@ CONFIG_DEBUG_INFO=y CONFIG_DEBUG_FS=y CONFIG_MAGIC_SYSRQ=y CONFIG_DEBUG_KERNEL=y -CONFIG_LOCKUP_DETECTOR=y # CONFIG_SCHED_DEBUG is not set # CONFIG_DEBUG_PREEMPT is not set # CONFIG_FTRACE is not set @@ -691,20 +672,15 @@ CONFIG_SECURITY=y CONFIG_CRYPTO_ECHAINIV=y CONFIG_CRYPTO_ANSI_CPRNG=y CONFIG_ARM64_CRYPTO=y -CONFIG_CRYPTO_SHA256_ARM64=m -CONFIG_CRYPTO_SHA512_ARM64=m CONFIG_CRYPTO_SHA1_ARM64_CE=y CONFIG_CRYPTO_SHA2_ARM64_CE=y +CONFIG_CRYPTO_SHA512_ARM64_CE=m +CONFIG_CRYPTO_SHA3_ARM64=m +CONFIG_CRYPTO_SM3_ARM64_CE=m CONFIG_CRYPTO_GHASH_ARM64_CE=y CONFIG_CRYPTO_CRCT10DIF_ARM64_CE=m CONFIG_CRYPTO_CRC32_ARM64_CE=m -CONFIG_CRYPTO_AES_ARM64=m -CONFIG_CRYPTO_AES_ARM64_CE=m CONFIG_CRYPTO_AES_ARM64_CE_CCM=y CONFIG_CRYPTO_AES_ARM64_CE_BLK=y -CONFIG_CRYPTO_AES_ARM64_NEON_BLK=m CONFIG_CRYPTO_CHACHA20_NEON=m CONFIG_CRYPTO_AES_ARM64_BS=m -CONFIG_CRYPTO_SHA512_ARM64_CE=m -CONFIG_CRYPTO_SHA3_ARM64=m -CONFIG_CRYPTO_SM3_ARM64_CE=m diff --git a/arch/arm64/crypto/aes-ce-ccm-core.S b/arch/arm64/crypto/aes-ce-ccm-core.S index 88f5aef7934c..e3a375c4cb83 100644 --- a/arch/arm64/crypto/aes-ce-ccm-core.S +++ b/arch/arm64/crypto/aes-ce-ccm-core.S @@ -19,33 +19,24 @@ * u32 *macp, u8 const rk[], u32 rounds); */ ENTRY(ce_aes_ccm_auth_data) - frame_push 7 - - mov x19, x0 - mov x20, x1 - mov x21, x2 - mov x22, x3 - mov x23, x4 - mov x24, x5 - - ldr w25, [x22] /* leftover from prev round? */ + ldr w8, [x3] /* leftover from prev round? */ ld1 {v0.16b}, [x0] /* load mac */ - cbz w25, 1f - sub w25, w25, #16 + cbz w8, 1f + sub w8, w8, #16 eor v1.16b, v1.16b, v1.16b -0: ldrb w7, [x20], #1 /* get 1 byte of input */ - subs w21, w21, #1 - add w25, w25, #1 +0: ldrb w7, [x1], #1 /* get 1 byte of input */ + subs w2, w2, #1 + add w8, w8, #1 ins v1.b[0], w7 ext v1.16b, v1.16b, v1.16b, #1 /* rotate in the input bytes */ beq 8f /* out of input? */ - cbnz w25, 0b + cbnz w8, 0b eor v0.16b, v0.16b, v1.16b -1: ld1 {v3.4s}, [x23] /* load first round key */ - prfm pldl1strm, [x20] - cmp w24, #12 /* which key size? 
*/ - add x6, x23, #16 - sub w7, w24, #2 /* modified # of rounds */ +1: ld1 {v3.4s}, [x4] /* load first round key */ + prfm pldl1strm, [x1] + cmp w5, #12 /* which key size? */ + add x6, x4, #16 + sub w7, w5, #2 /* modified # of rounds */ bmi 2f bne 5f mov v5.16b, v3.16b @@ -64,43 +55,33 @@ ENTRY(ce_aes_ccm_auth_data) ld1 {v5.4s}, [x6], #16 /* load next round key */ bpl 3b aese v0.16b, v4.16b - subs w21, w21, #16 /* last data? */ + subs w2, w2, #16 /* last data? */ eor v0.16b, v0.16b, v5.16b /* final round */ bmi 6f - ld1 {v1.16b}, [x20], #16 /* load next input block */ + ld1 {v1.16b}, [x1], #16 /* load next input block */ eor v0.16b, v0.16b, v1.16b /* xor with mac */ - beq 6f - - if_will_cond_yield_neon - st1 {v0.16b}, [x19] /* store mac */ - do_cond_yield_neon - ld1 {v0.16b}, [x19] /* reload mac */ - endif_yield_neon - - b 1b -6: st1 {v0.16b}, [x19] /* store mac */ + bne 1b +6: st1 {v0.16b}, [x0] /* store mac */ beq 10f - adds w21, w21, #16 + adds w2, w2, #16 beq 10f - mov w25, w21 -7: ldrb w7, [x20], #1 + mov w8, w2 +7: ldrb w7, [x1], #1 umov w6, v0.b[0] eor w6, w6, w7 - strb w6, [x19], #1 - subs w21, w21, #1 + strb w6, [x0], #1 + subs w2, w2, #1 beq 10f ext v0.16b, v0.16b, v0.16b, #1 /* rotate out the mac bytes */ b 7b -8: mov w7, w25 - add w25, w25, #16 +8: mov w7, w8 + add w8, w8, #16 9: ext v1.16b, v1.16b, v1.16b, #1 adds w7, w7, #1 bne 9b eor v0.16b, v0.16b, v1.16b - st1 {v0.16b}, [x19] -10: str w25, [x22] - - frame_pop + st1 {v0.16b}, [x0] +10: str w8, [x3] ret ENDPROC(ce_aes_ccm_auth_data) @@ -145,29 +126,19 @@ ENTRY(ce_aes_ccm_final) ENDPROC(ce_aes_ccm_final) .macro aes_ccm_do_crypt,enc - frame_push 8 - - mov x19, x0 - mov x20, x1 - mov x21, x2 - mov x22, x3 - mov x23, x4 - mov x24, x5 - mov x25, x6 - - ldr x26, [x25, #8] /* load lower ctr */ - ld1 {v0.16b}, [x24] /* load mac */ -CPU_LE( rev x26, x26 ) /* keep swabbed ctr in reg */ + ldr x8, [x6, #8] /* load lower ctr */ + ld1 {v0.16b}, [x5] /* load mac */ +CPU_LE( rev x8, x8 ) /* keep swabbed ctr in reg */ 0: /* outer loop */ - ld1 {v1.8b}, [x25] /* load upper ctr */ - prfm pldl1strm, [x20] - add x26, x26, #1 - rev x9, x26 - cmp w23, #12 /* which key size? */ - sub w7, w23, #2 /* get modified # of rounds */ + ld1 {v1.8b}, [x6] /* load upper ctr */ + prfm pldl1strm, [x1] + add x8, x8, #1 + rev x9, x8 + cmp w4, #12 /* which key size? */ + sub w7, w4, #2 /* get modified # of rounds */ ins v1.d[1], x9 /* no carry in lower ctr */ - ld1 {v3.4s}, [x22] /* load first round key */ - add x10, x22, #16 + ld1 {v3.4s}, [x3] /* load first round key */ + add x10, x3, #16 bmi 1f bne 4f mov v5.16b, v3.16b @@ -194,9 +165,9 @@ CPU_LE( rev x26, x26 ) /* keep swabbed ctr in reg */ bpl 2b aese v0.16b, v4.16b aese v1.16b, v4.16b - subs w21, w21, #16 - bmi 7f /* partial block? */ - ld1 {v2.16b}, [x20], #16 /* load next input block */ + subs w2, w2, #16 + bmi 6f /* partial block? 
*/ + ld1 {v2.16b}, [x1], #16 /* load next input block */ .if \enc == 1 eor v2.16b, v2.16b, v5.16b /* final round enc+mac */ eor v1.16b, v1.16b, v2.16b /* xor with crypted ctr */ @@ -205,29 +176,18 @@ CPU_LE( rev x26, x26 ) /* keep swabbed ctr in reg */ eor v1.16b, v2.16b, v5.16b /* final round enc */ .endif eor v0.16b, v0.16b, v2.16b /* xor mac with pt ^ rk[last] */ - st1 {v1.16b}, [x19], #16 /* write output block */ - beq 5f - - if_will_cond_yield_neon - st1 {v0.16b}, [x24] /* store mac */ - do_cond_yield_neon - ld1 {v0.16b}, [x24] /* reload mac */ - endif_yield_neon - - b 0b -5: -CPU_LE( rev x26, x26 ) - st1 {v0.16b}, [x24] /* store mac */ - str x26, [x25, #8] /* store lsb end of ctr (BE) */ - -6: frame_pop - ret - -7: eor v0.16b, v0.16b, v5.16b /* final round mac */ + st1 {v1.16b}, [x0], #16 /* write output block */ + bne 0b +CPU_LE( rev x8, x8 ) + st1 {v0.16b}, [x5] /* store mac */ + str x8, [x6, #8] /* store lsb end of ctr (BE) */ +5: ret + +6: eor v0.16b, v0.16b, v5.16b /* final round mac */ eor v1.16b, v1.16b, v5.16b /* final round enc */ - st1 {v0.16b}, [x24] /* store mac */ - add w21, w21, #16 /* process partial tail block */ -8: ldrb w9, [x20], #1 /* get 1 byte of input */ + st1 {v0.16b}, [x5] /* store mac */ + add w2, w2, #16 /* process partial tail block */ +7: ldrb w9, [x1], #1 /* get 1 byte of input */ umov w6, v1.b[0] /* get top crypted ctr byte */ umov w7, v0.b[0] /* get top mac byte */ .if \enc == 1 @@ -237,13 +197,13 @@ CPU_LE( rev x26, x26 ) eor w9, w9, w6 eor w7, w7, w9 .endif - strb w9, [x19], #1 /* store out byte */ - strb w7, [x24], #1 /* store mac byte */ - subs w21, w21, #1 - beq 6b + strb w9, [x0], #1 /* store out byte */ + strb w7, [x5], #1 /* store mac byte */ + subs w2, w2, #1 + beq 5b ext v0.16b, v0.16b, v0.16b, #1 /* shift out mac byte */ ext v1.16b, v1.16b, v1.16b, #1 /* shift out ctr byte */ - b 8b + b 7b .endm /* diff --git a/arch/arm64/crypto/aes-glue.c b/arch/arm64/crypto/aes-glue.c index 253188fb8cb0..e3e50950a863 100644 --- a/arch/arm64/crypto/aes-glue.c +++ b/arch/arm64/crypto/aes-glue.c @@ -223,8 +223,8 @@ static int ctr_encrypt(struct skcipher_request *req) kernel_neon_begin(); aes_ctr_encrypt(walk.dst.virt.addr, walk.src.virt.addr, (u8 *)ctx->key_enc, rounds, blocks, walk.iv); - err = skcipher_walk_done(&walk, walk.nbytes % AES_BLOCK_SIZE); kernel_neon_end(); + err = skcipher_walk_done(&walk, walk.nbytes % AES_BLOCK_SIZE); } if (walk.nbytes) { u8 __aligned(8) tail[AES_BLOCK_SIZE]; diff --git a/arch/arm64/crypto/ghash-ce-core.S b/arch/arm64/crypto/ghash-ce-core.S index dcffb9e77589..c723647b37db 100644 --- a/arch/arm64/crypto/ghash-ce-core.S +++ b/arch/arm64/crypto/ghash-ce-core.S @@ -322,55 +322,41 @@ ENDPROC(pmull_ghash_update_p8) .endm .macro pmull_gcm_do_crypt, enc - frame_push 10 + ld1 {SHASH.2d}, [x4] + ld1 {XL.2d}, [x1] + ldr x8, [x5, #8] // load lower counter - mov x19, x0 - mov x20, x1 - mov x21, x2 - mov x22, x3 - mov x23, x4 - mov x24, x5 - mov x25, x6 - mov x26, x7 - .if \enc == 1 - ldr x27, [sp, #96] // first stacked arg - .endif - - ldr x28, [x24, #8] // load lower counter -CPU_LE( rev x28, x28 ) - -0: mov x0, x25 - load_round_keys w26, x0 - ld1 {SHASH.2d}, [x23] - ld1 {XL.2d}, [x20] + load_round_keys w7, x6 movi MASK.16b, #0xe1 ext SHASH2.16b, SHASH.16b, SHASH.16b, #8 +CPU_LE( rev x8, x8 ) shl MASK.2d, MASK.2d, #57 eor SHASH2.16b, SHASH2.16b, SHASH.16b .if \enc == 1 - ld1 {KS.16b}, [x27] + ldr x10, [sp] + ld1 {KS.16b}, [x10] .endif -1: ld1 {CTR.8b}, [x24] // load upper counter - ld1 {INP.16b}, [x22], #16 - rev x9, x28 - add x28, x28, 
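
[A note on the crypto hunks above: with the if_will_cond_yield_neon scaffolding removed, the CCM core no longer needs frame_push/frame_pop and moves back from the callee-saved x19-x28 registers to the caller-clobbered x0-x8 range. The one-line ctr_encrypt() change is a correctness fix in its own right: skcipher_walk_done() may sleep, so it has to run after kernel_neon_end(). A minimal sketch of the resulting walk pattern, assuming the 4.19-era skcipher walk API; aes_ctr_encrypt() is the asm helper the glue code already calls with this signature, and tail handling is omitted:

	static int ctr_encrypt_sketch(struct skcipher_request *req)
	{
		struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
		struct crypto_aes_ctx *ctx = crypto_skcipher_ctx(tfm);
		struct skcipher_walk walk;
		unsigned int blocks;
		int err, rounds = 6 + ctx->key_length / 4;

		err = skcipher_walk_virt(&walk, req, false);

		while ((blocks = walk.nbytes / AES_BLOCK_SIZE)) {
			kernel_neon_begin();	/* SIMD work only inside this section */
			aes_ctr_encrypt(walk.dst.virt.addr, walk.src.virt.addr,
					(u8 *)ctx->key_enc, rounds, blocks, walk.iv);
			kernel_neon_end();
			/* may sleep, so it runs with NEON released */
			err = skcipher_walk_done(&walk, walk.nbytes % AES_BLOCK_SIZE);
		}
		return err;
	}
]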
#1 - sub w19, w19, #1 +0: ld1 {CTR.8b}, [x5] // load upper counter + ld1 {INP.16b}, [x3], #16 + rev x9, x8 + add x8, x8, #1 + sub w0, w0, #1 ins CTR.d[1], x9 // set lower counter .if \enc == 1 eor INP.16b, INP.16b, KS.16b // encrypt input - st1 {INP.16b}, [x21], #16 + st1 {INP.16b}, [x2], #16 .endif rev64 T1.16b, INP.16b - cmp w26, #12 - b.ge 4f // AES-192/256? + cmp w7, #12 + b.ge 2f // AES-192/256? -2: enc_round CTR, v21 +1: enc_round CTR, v21 ext T2.16b, XL.16b, XL.16b, #8 ext IN1.16b, T1.16b, T1.16b, #8 @@ -425,39 +411,27 @@ CPU_LE( rev x28, x28 ) .if \enc == 0 eor INP.16b, INP.16b, KS.16b - st1 {INP.16b}, [x21], #16 + st1 {INP.16b}, [x2], #16 .endif - cbz w19, 3f + cbnz w0, 0b - if_will_cond_yield_neon - st1 {XL.2d}, [x20] - .if \enc == 1 - st1 {KS.16b}, [x27] - .endif - do_cond_yield_neon - b 0b - endif_yield_neon +CPU_LE( rev x8, x8 ) + st1 {XL.2d}, [x1] + str x8, [x5, #8] // store lower counter - b 1b - -3: st1 {XL.2d}, [x20] .if \enc == 1 - st1 {KS.16b}, [x27] + st1 {KS.16b}, [x10] .endif -CPU_LE( rev x28, x28 ) - str x28, [x24, #8] // store lower counter - - frame_pop ret -4: b.eq 5f // AES-192? +2: b.eq 3f // AES-192? enc_round CTR, v17 enc_round CTR, v18 -5: enc_round CTR, v19 +3: enc_round CTR, v19 enc_round CTR, v20 - b 2b + b 1b .endm /* diff --git a/arch/arm64/crypto/ghash-ce-glue.c b/arch/arm64/crypto/ghash-ce-glue.c index 7cf0b1aa6ea8..8a10f1d7199a 100644 --- a/arch/arm64/crypto/ghash-ce-glue.c +++ b/arch/arm64/crypto/ghash-ce-glue.c @@ -488,9 +488,13 @@ static int gcm_decrypt(struct aead_request *req) err = skcipher_walk_done(&walk, walk.nbytes % AES_BLOCK_SIZE); } - if (walk.nbytes) - pmull_gcm_encrypt_block(iv, iv, NULL, + if (walk.nbytes) { + kernel_neon_begin(); + pmull_gcm_encrypt_block(iv, iv, ctx->aes_key.key_enc, num_rounds(&ctx->aes_key)); + kernel_neon_end(); + } + } else { __aes_arm64_encrypt(ctx->aes_key.key_enc, tag, iv, num_rounds(&ctx->aes_key)); diff --git a/arch/arm64/include/asm/alternative.h b/arch/arm64/include/asm/alternative.h index a91933b1e2e6..4b650ec1d7dd 100644 --- a/arch/arm64/include/asm/alternative.h +++ b/arch/arm64/include/asm/alternative.h @@ -28,7 +28,12 @@ typedef void (*alternative_cb_t)(struct alt_instr *alt, __le32 *origptr, __le32 *updptr, int nr_inst); void __init apply_alternatives_all(void); -void apply_alternatives(void *start, size_t length); + +#ifdef CONFIG_MODULES +void apply_alternatives_module(void *start, size_t length); +#else +static inline void apply_alternatives_module(void *start, size_t length) { } +#endif #define ALTINSTR_ENTRY(feature,cb) \ " .word 661b - .\n" /* label */ \ diff --git a/arch/arm64/include/asm/atomic.h b/arch/arm64/include/asm/atomic.h index c0235e0ff849..9bca54dda75c 100644 --- a/arch/arm64/include/asm/atomic.h +++ b/arch/arm64/include/asm/atomic.h @@ -40,17 +40,6 @@ #include -#define ___atomic_add_unless(v, a, u, sfx) \ -({ \ - typeof((v)->counter) c, old; \ - \ - c = atomic##sfx##_read(v); \ - while (c != (u) && \ - (old = atomic##sfx##_cmpxchg((v), c, c + (a))) != c) \ - c = old; \ - c; \ - }) - #define ATOMIC_INIT(i) { (i) } #define atomic_read(v) READ_ONCE((v)->counter) @@ -61,21 +50,11 @@ #define atomic_add_return_release atomic_add_return_release #define atomic_add_return atomic_add_return -#define atomic_inc_return_relaxed(v) atomic_add_return_relaxed(1, (v)) -#define atomic_inc_return_acquire(v) atomic_add_return_acquire(1, (v)) -#define atomic_inc_return_release(v) atomic_add_return_release(1, (v)) -#define atomic_inc_return(v) atomic_add_return(1, (v)) - #define 
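
[The gcm_decrypt() fix above is easy to miss: the tail-block call used to pass a NULL key pointer, relying on the round keys still being live in the vector registers from the main loop, and it ran outside any kernel_neon section. With the yield framework gone neither assumption holds, so the helper now receives its key material explicitly and is bracketed properly (shape taken from the hunk above):

	if (walk.nbytes) {
		kernel_neon_begin();
		pmull_gcm_encrypt_block(iv, iv, ctx->aes_key.key_enc,
					num_rounds(&ctx->aes_key));
		kernel_neon_end();
	}
]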
atomic_sub_return_relaxed atomic_sub_return_relaxed #define atomic_sub_return_acquire atomic_sub_return_acquire #define atomic_sub_return_release atomic_sub_return_release #define atomic_sub_return atomic_sub_return -#define atomic_dec_return_relaxed(v) atomic_sub_return_relaxed(1, (v)) -#define atomic_dec_return_acquire(v) atomic_sub_return_acquire(1, (v)) -#define atomic_dec_return_release(v) atomic_sub_return_release(1, (v)) -#define atomic_dec_return(v) atomic_sub_return(1, (v)) - #define atomic_fetch_add_relaxed atomic_fetch_add_relaxed #define atomic_fetch_add_acquire atomic_fetch_add_acquire #define atomic_fetch_add_release atomic_fetch_add_release @@ -119,13 +98,6 @@ cmpxchg_release(&((v)->counter), (old), (new)) #define atomic_cmpxchg(v, old, new) cmpxchg(&((v)->counter), (old), (new)) -#define atomic_inc(v) atomic_add(1, (v)) -#define atomic_dec(v) atomic_sub(1, (v)) -#define atomic_inc_and_test(v) (atomic_inc_return(v) == 0) -#define atomic_dec_and_test(v) (atomic_dec_return(v) == 0) -#define atomic_sub_and_test(i, v) (atomic_sub_return((i), (v)) == 0) -#define atomic_add_negative(i, v) (atomic_add_return((i), (v)) < 0) -#define __atomic_add_unless(v, a, u) ___atomic_add_unless(v, a, u,) #define atomic_andnot atomic_andnot /* @@ -140,21 +112,11 @@ #define atomic64_add_return_release atomic64_add_return_release #define atomic64_add_return atomic64_add_return -#define atomic64_inc_return_relaxed(v) atomic64_add_return_relaxed(1, (v)) -#define atomic64_inc_return_acquire(v) atomic64_add_return_acquire(1, (v)) -#define atomic64_inc_return_release(v) atomic64_add_return_release(1, (v)) -#define atomic64_inc_return(v) atomic64_add_return(1, (v)) - #define atomic64_sub_return_relaxed atomic64_sub_return_relaxed #define atomic64_sub_return_acquire atomic64_sub_return_acquire #define atomic64_sub_return_release atomic64_sub_return_release #define atomic64_sub_return atomic64_sub_return -#define atomic64_dec_return_relaxed(v) atomic64_sub_return_relaxed(1, (v)) -#define atomic64_dec_return_acquire(v) atomic64_sub_return_acquire(1, (v)) -#define atomic64_dec_return_release(v) atomic64_sub_return_release(1, (v)) -#define atomic64_dec_return(v) atomic64_sub_return(1, (v)) - #define atomic64_fetch_add_relaxed atomic64_fetch_add_relaxed #define atomic64_fetch_add_acquire atomic64_fetch_add_acquire #define atomic64_fetch_add_release atomic64_fetch_add_release @@ -195,16 +157,9 @@ #define atomic64_cmpxchg_release atomic_cmpxchg_release #define atomic64_cmpxchg atomic_cmpxchg -#define atomic64_inc(v) atomic64_add(1, (v)) -#define atomic64_dec(v) atomic64_sub(1, (v)) -#define atomic64_inc_and_test(v) (atomic64_inc_return(v) == 0) -#define atomic64_dec_and_test(v) (atomic64_dec_return(v) == 0) -#define atomic64_sub_and_test(i, v) (atomic64_sub_return((i), (v)) == 0) -#define atomic64_add_negative(i, v) (atomic64_add_return((i), (v)) < 0) -#define atomic64_add_unless(v, a, u) (___atomic_add_unless(v, a, u, 64) != u) #define atomic64_andnot atomic64_andnot -#define atomic64_inc_not_zero(v) atomic64_add_unless((v), 1, 0) +#define atomic64_dec_if_positive atomic64_dec_if_positive #endif #endif diff --git a/arch/arm64/include/asm/bitops.h b/arch/arm64/include/asm/bitops.h index 9c19594ce7cb..10d536b1af74 100644 --- a/arch/arm64/include/asm/bitops.h +++ b/arch/arm64/include/asm/bitops.h @@ -17,22 +17,11 @@ #define __ASM_BITOPS_H #include -#include #ifndef _LINUX_BITOPS_H #error only can be included directly #endif -/* - * Little endian assembly atomic bitops. 
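
[The atomic.h diet above is one architecture's share of a tree-wide series: each arch now supplies only the core operations, and <linux/atomic.h> derives the rest. Roughly what the generic fallbacks that replace the removed definitions look like, simplified from 4.19's include/linux/atomic.h:

	#ifndef atomic_fetch_add_unless
	static inline int atomic_fetch_add_unless(atomic_t *v, int a, int u)
	{
		int c = atomic_read(v);

		do {
			if (unlikely(c == u))
				break;
		} while (!atomic_try_cmpxchg(v, &c, c + a));

		return c;
	}
	#endif

	#ifndef atomic_inc
	#define atomic_inc(v)		atomic_add(1, (v))
	#endif

	#ifndef atomic_dec_and_test
	#define atomic_dec_and_test(v)	(atomic_dec_return(v) == 0)
	#endif
]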
- */ -extern void set_bit(int nr, volatile unsigned long *p); -extern void clear_bit(int nr, volatile unsigned long *p); -extern void change_bit(int nr, volatile unsigned long *p); -extern int test_and_set_bit(int nr, volatile unsigned long *p); -extern int test_and_clear_bit(int nr, volatile unsigned long *p); -extern int test_and_change_bit(int nr, volatile unsigned long *p); - #include #include #include @@ -44,15 +33,11 @@ extern int test_and_change_bit(int nr, volatile unsigned long *p); #include #include -#include +#include +#include #include #include - -/* - * Ext2 is defined to use little-endian byte ordering. - */ -#define ext2_set_bit_atomic(lock, nr, p) test_and_set_bit_le(nr, p) -#define ext2_clear_bit_atomic(lock, nr, p) test_and_clear_bit_le(nr, p) +#include #endif /* __ASM_BITOPS_H */ diff --git a/arch/arm64/include/asm/cacheflush.h b/arch/arm64/include/asm/cacheflush.h index 0094c6653b06..d264a7274811 100644 --- a/arch/arm64/include/asm/cacheflush.h +++ b/arch/arm64/include/asm/cacheflush.h @@ -36,7 +36,7 @@ * Start addresses are inclusive and end addresses are exclusive; start * addresses should be rounded down, end addresses up. * - * See Documentation/cachetlb.txt for more information. Please note that + * See Documentation/core-api/cachetlb.rst for more information. Please note that * the implementation assumes non-aliasing VIPT D-cache and (aliasing) * VIPT I-cache. * diff --git a/arch/arm64/include/asm/efi.h b/arch/arm64/include/asm/efi.h index 192d791f1103..7ed320895d1f 100644 --- a/arch/arm64/include/asm/efi.h +++ b/arch/arm64/include/asm/efi.h @@ -87,6 +87,9 @@ static inline unsigned long efi_get_max_initrd_addr(unsigned long dram_base, #define efi_call_runtime(f, ...) sys_table_arg->runtime->f(__VA_ARGS__) #define efi_is_64bit() (true) +#define efi_table_attr(table, attr, instance) \ + ((table##_t *)instance)->attr + #define efi_call_proto(protocol, f, instance, ...) 
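
[Same story for bitops.h: the extern declarations above disappear because arm64 switches to asm-generic/bitops/atomic.h, where the atomic bitops are built on the atomic_long_*() operations (and so pick up LSE atomics for free); the hand-written lib/bitops.S is deleted further down in this patch. Simplified from the generic header:

	static inline void set_bit(unsigned int nr, volatile unsigned long *p)
	{
		p += BIT_WORD(nr);
		atomic_long_or(BIT_MASK(nr), (atomic_long_t *)p);
	}

	static inline int test_and_set_bit(unsigned int nr, volatile unsigned long *p)
	{
		long old;
		unsigned long mask = BIT_MASK(nr);

		p += BIT_WORD(nr);
		old = atomic_long_fetch_or(mask, (atomic_long_t *)p);
		return !!(old & mask);
	}
]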
\ ((protocol##_t *)instance)->f(instance, ##__VA_ARGS__) diff --git a/arch/arm64/include/asm/hw_breakpoint.h b/arch/arm64/include/asm/hw_breakpoint.h index 41770766d964..6a53e59ced95 100644 --- a/arch/arm64/include/asm/hw_breakpoint.h +++ b/arch/arm64/include/asm/hw_breakpoint.h @@ -119,13 +119,16 @@ static inline void decode_ctrl_reg(u32 reg, struct task_struct; struct notifier_block; +struct perf_event_attr; struct perf_event; struct pmu; extern int arch_bp_generic_fields(struct arch_hw_breakpoint_ctrl ctrl, int *gen_len, int *gen_type, int *offset); -extern int arch_check_bp_in_kernelspace(struct perf_event *bp); -extern int arch_validate_hwbkpt_settings(struct perf_event *bp); +extern int arch_check_bp_in_kernelspace(struct arch_hw_breakpoint *hw); +extern int hw_breakpoint_arch_parse(struct perf_event *bp, + const struct perf_event_attr *attr, + struct arch_hw_breakpoint *hw); extern int hw_breakpoint_exceptions_notify(struct notifier_block *unused, unsigned long val, void *data); diff --git a/arch/arm64/include/asm/irq.h b/arch/arm64/include/asm/irq.h index a0fee6985e6a..b2b0c6405eb0 100644 --- a/arch/arm64/include/asm/irq.h +++ b/arch/arm64/include/asm/irq.h @@ -8,8 +8,6 @@ struct pt_regs; -extern void set_handle_irq(void (*handle_irq)(struct pt_regs *)); - static inline int nr_legacy_irqs(void) { return 0; diff --git a/arch/arm64/include/asm/kprobes.h b/arch/arm64/include/asm/kprobes.h index 6deb8d726041..d5a44cf859e9 100644 --- a/arch/arm64/include/asm/kprobes.h +++ b/arch/arm64/include/asm/kprobes.h @@ -48,7 +48,6 @@ struct kprobe_ctlblk { unsigned long saved_irqflag; struct prev_kprobe prev_kprobe; struct kprobe_step_ctx ss_ctx; - struct pt_regs jprobe_saved_regs; }; void arch_remove_kprobe(struct kprobe *); diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index fda9a8ca48be..fe8777b12f86 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -306,6 +306,7 @@ struct kvm_vcpu_arch { #define KVM_ARM64_FP_ENABLED (1 << 1) /* guest FP regs loaded */ #define KVM_ARM64_FP_HOST (1 << 2) /* host FP regs loaded */ #define KVM_ARM64_HOST_SVE_IN_USE (1 << 3) /* backup for host TIF_SVE */ +#define KVM_ARM64_HOST_SVE_ENABLED (1 << 4) /* SVE enabled for EL0 */ #define vcpu_gp_regs(v) (&(v)->arch.ctxt.gp_regs) diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h index 9f82d6b53851..1bdeca8918a6 100644 --- a/arch/arm64/include/asm/pgtable.h +++ b/arch/arm64/include/asm/pgtable.h @@ -224,10 +224,8 @@ static inline void set_pte(pte_t *ptep, pte_t pte) * Only if the new pte is valid and kernel, otherwise TLB maintenance * or update_mmu_cache() have the necessary barriers. */ - if (pte_valid_not_user(pte)) { + if (pte_valid_not_user(pte)) dsb(ishst); - isb(); - } } extern void __sync_icache_dcache(pte_t pteval); @@ -434,7 +432,6 @@ static inline void set_pmd(pmd_t *pmdp, pmd_t pmd) { WRITE_ONCE(*pmdp, pmd); dsb(ishst); - isb(); } static inline void pmd_clear(pmd_t *pmdp) @@ -485,7 +482,6 @@ static inline void set_pud(pud_t *pudp, pud_t pud) { WRITE_ONCE(*pudp, pud); dsb(ishst); - isb(); } static inline void pud_clear(pud_t *pudp) diff --git a/arch/arm64/include/asm/simd.h b/arch/arm64/include/asm/simd.h index fa8b3fe932e6..6495cc51246f 100644 --- a/arch/arm64/include/asm/simd.h +++ b/arch/arm64/include/asm/simd.h @@ -29,20 +29,15 @@ DECLARE_PER_CPU(bool, kernel_neon_busy); static __must_check inline bool may_use_simd(void) { /* - * The raw_cpu_read() is racy if called with preemption enabled. 
- * This is not a bug: kernel_neon_busy is only set when - * preemption is disabled, so we cannot migrate to another CPU - * while it is set, nor can we migrate to a CPU where it is set. - * So, if we find it clear on some CPU then we're guaranteed to - * find it clear on any CPU we could migrate to. - * - * If we are in between kernel_neon_begin()...kernel_neon_end(), - * the flag will be set, but preemption is also disabled, so we - * can't migrate to another CPU and spuriously see it become - * false. + * kernel_neon_busy is only set while preemption is disabled, + * and is clear whenever preemption is enabled. Since + * this_cpu_read() is atomic w.r.t. preemption, kernel_neon_busy + * cannot change under our feet -- if it's set we cannot be + * migrated, and if it's clear we cannot be migrated to a CPU + * where it is set. */ return !in_irq() && !irqs_disabled() && !in_nmi() && - !raw_cpu_read(kernel_neon_busy); + !this_cpu_read(kernel_neon_busy); } #else /* ! CONFIG_KERNEL_MODE_NEON */ diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h index 6171178075dc..a8f84812c6e8 100644 --- a/arch/arm64/include/asm/sysreg.h +++ b/arch/arm64/include/asm/sysreg.h @@ -728,6 +728,17 @@ asm( asm volatile("msr_s " __stringify(r) ", %x0" : : "rZ" (__val)); \ } while (0) +/* + * Modify bits in a sysreg. Bits in the clear mask are zeroed, then bits in the + * set mask are set. Other bits are left as-is. + */ +#define sysreg_clear_set(sysreg, clear, set) do { \ + u64 __scs_val = read_sysreg(sysreg); \ + u64 __scs_new = (__scs_val & ~(u64)(clear)) | (set); \ + if (__scs_new != __scs_val) \ + write_sysreg(__scs_new, sysreg); \ +} while (0) + static inline void config_sctlr_el1(u32 clear, u32 set) { u32 val; diff --git a/arch/arm64/include/asm/tlb.h b/arch/arm64/include/asm/tlb.h index ffdaea7954bb..0ad1cf233470 100644 --- a/arch/arm64/include/asm/tlb.h +++ b/arch/arm64/include/asm/tlb.h @@ -37,7 +37,7 @@ static inline void __tlb_remove_table(void *_table) static inline void tlb_flush(struct mmu_gather *tlb) { - struct vm_area_struct vma = { .vm_mm = tlb->mm, }; + struct vm_area_struct vma = TLB_FLUSH_VMA(tlb->mm, 0); /* * The ASID allocator will either invalidate the ASID or mark diff --git a/arch/arm64/kernel/alternative.c b/arch/arm64/kernel/alternative.c index 5c4bce4ac381..36fb069fd049 100644 --- a/arch/arm64/kernel/alternative.c +++ b/arch/arm64/kernel/alternative.c @@ -122,7 +122,30 @@ static void patch_alternative(struct alt_instr *alt, } } -static void __apply_alternatives(void *alt_region, bool use_linear_alias) +/* + * We provide our own, private D-cache cleaning function so that we don't + * accidentally call into the cache.S code, which is patched by us at + * runtime. 
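
[The simd.h hunk above swaps raw_cpu_read() for this_cpu_read() and tightens the comment; the invariant being relied on is that kernel_neon_busy can only be observed as set from a context that cannot migrate. A typical caller pattern, sketched; crc32_pmull() is a hypothetical NEON helper, crc32_le() the real generic fallback:

	static void crc_update(u32 *crc, const u8 *data, unsigned int len)
	{
		if (may_use_simd()) {
			kernel_neon_begin();
			*crc = crc32_pmull(*crc, data, len);	/* hypothetical */
			kernel_neon_end();
		} else {
			*crc = crc32_le(*crc, data, len);	/* scalar fallback */
		}
	}
]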
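
[The new sysreg_clear_set() helper above does a read-modify-write of a system register and skips the register write entirely when nothing changes. Its first user is the KVM FP/SVE fix later in this patch:

	if (vcpu->arch.flags & KVM_ARM64_HOST_SVE_ENABLED)
		sysreg_clear_set(CPACR_EL1, 0, CPACR_EL1_ZEN_EL0EN);
	else
		sysreg_clear_set(CPACR_EL1, CPACR_EL1_ZEN_EL0EN, 0);
]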
+ */ +static void clean_dcache_range_nopatch(u64 start, u64 end) +{ + u64 cur, d_size, ctr_el0; + + ctr_el0 = read_sanitised_ftr_reg(SYS_CTR_EL0); + d_size = 4 << cpuid_feature_extract_unsigned_field(ctr_el0, + CTR_DMINLINE_SHIFT); + cur = start & ~(d_size - 1); + do { + /* + * We must clean+invalidate to the PoC in order to avoid + * Cortex-A53 errata 826319, 827319, 824069 and 819472 + * (this corresponds to ARM64_WORKAROUND_CLEAN_CACHE) + */ + asm volatile("dc civac, %0" : : "r" (cur) : "memory"); + } while (cur += d_size, cur < end); +} + +static void __apply_alternatives(void *alt_region, bool is_module) { struct alt_instr *alt; struct alt_region *region = alt_region; @@ -145,7 +168,7 @@ static void __apply_alternatives(void *alt_region, bool use_linear_alias) pr_info_once("patching kernel code\n"); origptr = ALT_ORIG_PTR(alt); - updptr = use_linear_alias ? lm_alias(origptr) : origptr; + updptr = is_module ? origptr : lm_alias(origptr); nr_inst = alt->orig_len / AARCH64_INSN_SIZE; if (alt->cpufeature < ARM64_CB_PATCH) @@ -155,8 +178,20 @@ static void __apply_alternatives(void *alt_region, bool use_linear_alias) alt_cb(alt, origptr, updptr, nr_inst); - flush_icache_range((uintptr_t)origptr, - (uintptr_t)(origptr + nr_inst)); + if (!is_module) { + clean_dcache_range_nopatch((u64)origptr, + (u64)(origptr + nr_inst)); + } + } + + /* + * The core module code takes care of cache maintenance in + * flush_module_icache(). + */ + if (!is_module) { + dsb(ish); + __flush_icache_all(); + isb(); } } @@ -178,7 +213,7 @@ static int __apply_alternatives_multi_stop(void *unused) isb(); } else { BUG_ON(alternatives_applied); - __apply_alternatives(®ion, true); + __apply_alternatives(®ion, false); /* Barriers provided by the cache flushing */ WRITE_ONCE(alternatives_applied, 1); } @@ -192,12 +227,14 @@ void __init apply_alternatives_all(void) stop_machine(__apply_alternatives_multi_stop, NULL, cpu_online_mask); } -void apply_alternatives(void *start, size_t length) +#ifdef CONFIG_MODULES +void apply_alternatives_module(void *start, size_t length) { struct alt_region region = { .begin = start, .end = start + length, }; - __apply_alternatives(®ion, false); + __apply_alternatives(®ion, true); } +#endif diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c index d2856b129097..c6d80743f4ed 100644 --- a/arch/arm64/kernel/cpufeature.c +++ b/arch/arm64/kernel/cpufeature.c @@ -937,7 +937,7 @@ static int __init parse_kpti(char *str) __kpti_forced = enabled ? 
1 : -1; return 0; } -__setup("kpti=", parse_kpti); +early_param("kpti", parse_kpti); #endif /* CONFIG_UNMAP_KERNEL_AT_EL0 */ #ifdef CONFIG_ARM64_HW_AFDBM @@ -1351,9 +1351,9 @@ static void __update_cpu_capabilities(const struct arm64_cpu_capabilities *caps, static void update_cpu_capabilities(u16 scope_mask) { - __update_cpu_capabilities(arm64_features, scope_mask, "detected:"); __update_cpu_capabilities(arm64_errata, scope_mask, "enabling workaround for"); + __update_cpu_capabilities(arm64_features, scope_mask, "detected:"); } static int __enable_cpu_capability(void *arg) @@ -1408,8 +1408,8 @@ __enable_cpu_capabilities(const struct arm64_cpu_capabilities *caps, static void __init enable_cpu_capabilities(u16 scope_mask) { - __enable_cpu_capabilities(arm64_features, scope_mask); __enable_cpu_capabilities(arm64_errata, scope_mask); + __enable_cpu_capabilities(arm64_features, scope_mask); } /* diff --git a/arch/arm64/kernel/hw_breakpoint.c b/arch/arm64/kernel/hw_breakpoint.c index 413dbe530da8..8c9644376326 100644 --- a/arch/arm64/kernel/hw_breakpoint.c +++ b/arch/arm64/kernel/hw_breakpoint.c @@ -343,14 +343,13 @@ static int get_hbp_len(u8 hbp_len) /* * Check whether bp virtual address is in kernel space. */ -int arch_check_bp_in_kernelspace(struct perf_event *bp) +int arch_check_bp_in_kernelspace(struct arch_hw_breakpoint *hw) { unsigned int len; unsigned long va; - struct arch_hw_breakpoint *info = counter_arch_bp(bp); - va = info->address; - len = get_hbp_len(info->ctrl.len); + va = hw->address; + len = get_hbp_len(hw->ctrl.len); return (va >= TASK_SIZE) && ((va + len - 1) >= TASK_SIZE); } @@ -421,53 +420,53 @@ int arch_bp_generic_fields(struct arch_hw_breakpoint_ctrl ctrl, /* * Construct an arch_hw_breakpoint from a perf_event. */ -static int arch_build_bp_info(struct perf_event *bp) +static int arch_build_bp_info(struct perf_event *bp, + const struct perf_event_attr *attr, + struct arch_hw_breakpoint *hw) { - struct arch_hw_breakpoint *info = counter_arch_bp(bp); - /* Type */ - switch (bp->attr.bp_type) { + switch (attr->bp_type) { case HW_BREAKPOINT_X: - info->ctrl.type = ARM_BREAKPOINT_EXECUTE; + hw->ctrl.type = ARM_BREAKPOINT_EXECUTE; break; case HW_BREAKPOINT_R: - info->ctrl.type = ARM_BREAKPOINT_LOAD; + hw->ctrl.type = ARM_BREAKPOINT_LOAD; break; case HW_BREAKPOINT_W: - info->ctrl.type = ARM_BREAKPOINT_STORE; + hw->ctrl.type = ARM_BREAKPOINT_STORE; break; case HW_BREAKPOINT_RW: - info->ctrl.type = ARM_BREAKPOINT_LOAD | ARM_BREAKPOINT_STORE; + hw->ctrl.type = ARM_BREAKPOINT_LOAD | ARM_BREAKPOINT_STORE; break; default: return -EINVAL; } /* Len */ - switch (bp->attr.bp_len) { + switch (attr->bp_len) { case HW_BREAKPOINT_LEN_1: - info->ctrl.len = ARM_BREAKPOINT_LEN_1; + hw->ctrl.len = ARM_BREAKPOINT_LEN_1; break; case HW_BREAKPOINT_LEN_2: - info->ctrl.len = ARM_BREAKPOINT_LEN_2; + hw->ctrl.len = ARM_BREAKPOINT_LEN_2; break; case HW_BREAKPOINT_LEN_3: - info->ctrl.len = ARM_BREAKPOINT_LEN_3; + hw->ctrl.len = ARM_BREAKPOINT_LEN_3; break; case HW_BREAKPOINT_LEN_4: - info->ctrl.len = ARM_BREAKPOINT_LEN_4; + hw->ctrl.len = ARM_BREAKPOINT_LEN_4; break; case HW_BREAKPOINT_LEN_5: - info->ctrl.len = ARM_BREAKPOINT_LEN_5; + hw->ctrl.len = ARM_BREAKPOINT_LEN_5; break; case HW_BREAKPOINT_LEN_6: - info->ctrl.len = ARM_BREAKPOINT_LEN_6; + hw->ctrl.len = ARM_BREAKPOINT_LEN_6; break; case HW_BREAKPOINT_LEN_7: - info->ctrl.len = ARM_BREAKPOINT_LEN_7; + hw->ctrl.len = ARM_BREAKPOINT_LEN_7; break; case HW_BREAKPOINT_LEN_8: - info->ctrl.len = ARM_BREAKPOINT_LEN_8; + hw->ctrl.len = ARM_BREAKPOINT_LEN_8; 
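
[Two ordering fixes live in the cpufeature.c hunks above: errata workarounds are now detected and enabled before features, and kpti= becomes an early_param(). The latter matters because the UNMAP_KERNEL_AT_EL0 decision is taken during early boot-CPU feature evaluation, before ordinary __setup() parameters are processed. The full handler, reconstructed here for context; only the __setup/early_param line is changed by the patch, and the strtobool() body is assumed unchanged:

	static int __init parse_kpti(char *str)
	{
		bool enabled;
		int ret = strtobool(str, &enabled);

		if (ret)
			return ret;

		__kpti_forced = enabled ? 1 : -1;
		return 0;
	}
	early_param("kpti", parse_kpti);
]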
break; default: return -EINVAL; @@ -478,37 +477,37 @@ static int arch_build_bp_info(struct perf_event *bp) * AArch32 also requires breakpoints of length 2 for Thumb. * Watchpoints can be of length 1, 2, 4 or 8 bytes. */ - if (info->ctrl.type == ARM_BREAKPOINT_EXECUTE) { + if (hw->ctrl.type == ARM_BREAKPOINT_EXECUTE) { if (is_compat_bp(bp)) { - if (info->ctrl.len != ARM_BREAKPOINT_LEN_2 && - info->ctrl.len != ARM_BREAKPOINT_LEN_4) + if (hw->ctrl.len != ARM_BREAKPOINT_LEN_2 && + hw->ctrl.len != ARM_BREAKPOINT_LEN_4) return -EINVAL; - } else if (info->ctrl.len != ARM_BREAKPOINT_LEN_4) { + } else if (hw->ctrl.len != ARM_BREAKPOINT_LEN_4) { /* * FIXME: Some tools (I'm looking at you perf) assume * that breakpoints should be sizeof(long). This * is nonsense. For now, we fix up the parameter * but we should probably return -EINVAL instead. */ - info->ctrl.len = ARM_BREAKPOINT_LEN_4; + hw->ctrl.len = ARM_BREAKPOINT_LEN_4; } } /* Address */ - info->address = bp->attr.bp_addr; + hw->address = attr->bp_addr; /* * Privilege * Note that we disallow combined EL0/EL1 breakpoints because * that would complicate the stepping code. */ - if (arch_check_bp_in_kernelspace(bp)) - info->ctrl.privilege = AARCH64_BREAKPOINT_EL1; + if (arch_check_bp_in_kernelspace(hw)) + hw->ctrl.privilege = AARCH64_BREAKPOINT_EL1; else - info->ctrl.privilege = AARCH64_BREAKPOINT_EL0; + hw->ctrl.privilege = AARCH64_BREAKPOINT_EL0; /* Enabled? */ - info->ctrl.enabled = !bp->attr.disabled; + hw->ctrl.enabled = !attr->disabled; return 0; } @@ -516,14 +515,15 @@ static int arch_build_bp_info(struct perf_event *bp) /* * Validate the arch-specific HW Breakpoint register settings. */ -int arch_validate_hwbkpt_settings(struct perf_event *bp) +int hw_breakpoint_arch_parse(struct perf_event *bp, + const struct perf_event_attr *attr, + struct arch_hw_breakpoint *hw) { - struct arch_hw_breakpoint *info = counter_arch_bp(bp); int ret; u64 alignment_mask, offset; /* Build the arch_hw_breakpoint. */ - ret = arch_build_bp_info(bp); + ret = arch_build_bp_info(bp, attr, hw); if (ret) return ret; @@ -537,42 +537,42 @@ int arch_validate_hwbkpt_settings(struct perf_event *bp) * that here. */ if (is_compat_bp(bp)) { - if (info->ctrl.len == ARM_BREAKPOINT_LEN_8) + if (hw->ctrl.len == ARM_BREAKPOINT_LEN_8) alignment_mask = 0x7; else alignment_mask = 0x3; - offset = info->address & alignment_mask; + offset = hw->address & alignment_mask; switch (offset) { case 0: /* Aligned */ break; case 1: /* Allow single byte watchpoint. */ - if (info->ctrl.len == ARM_BREAKPOINT_LEN_1) + if (hw->ctrl.len == ARM_BREAKPOINT_LEN_1) break; case 2: /* Allow halfword watchpoints and breakpoints. */ - if (info->ctrl.len == ARM_BREAKPOINT_LEN_2) + if (hw->ctrl.len == ARM_BREAKPOINT_LEN_2) break; default: return -EINVAL; } } else { - if (info->ctrl.type == ARM_BREAKPOINT_EXECUTE) + if (hw->ctrl.type == ARM_BREAKPOINT_EXECUTE) alignment_mask = 0x3; else alignment_mask = 0x7; - offset = info->address & alignment_mask; + offset = hw->address & alignment_mask; } - info->address &= ~alignment_mask; - info->ctrl.len <<= offset; + hw->address &= ~alignment_mask; + hw->ctrl.len <<= offset; /* * Disallow per-task kernel breakpoints since these would * complicate the stepping code. 
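
[The hw_breakpoint rework above threads perf_event_attr and arch_hw_breakpoint through the arch hooks explicitly instead of digging both out of the perf_event; it is part of a cross-arch cleanup in this cycle. The address/length folding at the end of hw_breakpoint_arch_parse() deserves a worked example (non-compat watchpoint, so the alignment mask is 0x7):

	u64 address = 0x1003;			/* 1-byte watchpoint target */
	u64 alignment_mask = 0x7;
	u64 offset = address & alignment_mask;	/* == 3 */
	u8 len = 0x1;				/* ARM_BREAKPOINT_LEN_1: BAS bit 0 */

	address &= ~alignment_mask;		/* == 0x1000, the doubleword base */
	len <<= offset;				/* == 0x8: BAS now selects byte 3 */
]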
*/ - if (info->ctrl.privilege == AARCH64_BREAKPOINT_EL1 && bp->hw.target) + if (hw->ctrl.privilege == AARCH64_BREAKPOINT_EL1 && bp->hw.target) return -EINVAL; return 0; diff --git a/arch/arm64/kernel/irq.c b/arch/arm64/kernel/irq.c index 60e5fc661f74..780a12f59a8f 100644 --- a/arch/arm64/kernel/irq.c +++ b/arch/arm64/kernel/irq.c @@ -42,16 +42,6 @@ int arch_show_interrupts(struct seq_file *p, int prec) return 0; } -void (*handle_arch_irq)(struct pt_regs *) = NULL; - -void __init set_handle_irq(void (*handle_irq)(struct pt_regs *)) -{ - if (handle_arch_irq) - return; - - handle_arch_irq = handle_irq; -} - #ifdef CONFIG_VMAP_STACK static void init_irq_stacks(void) { diff --git a/arch/arm64/kernel/module.c b/arch/arm64/kernel/module.c index 155fd91e78f4..f0f27aeefb73 100644 --- a/arch/arm64/kernel/module.c +++ b/arch/arm64/kernel/module.c @@ -448,9 +448,8 @@ int module_finalize(const Elf_Ehdr *hdr, const char *secstrs = (void *)hdr + sechdrs[hdr->e_shstrndx].sh_offset; for (s = sechdrs, se = sechdrs + hdr->e_shnum; s < se; s++) { - if (strcmp(".altinstructions", secstrs + s->sh_name) == 0) { - apply_alternatives((void *)s->sh_addr, s->sh_size); - } + if (strcmp(".altinstructions", secstrs + s->sh_name) == 0) + apply_alternatives_module((void *)s->sh_addr, s->sh_size); #ifdef CONFIG_ARM64_MODULE_PLTS if (IS_ENABLED(CONFIG_DYNAMIC_FTRACE) && !strcmp(".text.ftrace_trampoline", secstrs + s->sh_name)) diff --git a/arch/arm64/kernel/probes/kprobes.c b/arch/arm64/kernel/probes/kprobes.c index d849d9804011..e78c3ef04d95 100644 --- a/arch/arm64/kernel/probes/kprobes.c +++ b/arch/arm64/kernel/probes/kprobes.c @@ -275,7 +275,7 @@ static int __kprobes reenter_kprobe(struct kprobe *p, break; case KPROBE_HIT_SS: case KPROBE_REENTER: - pr_warn("Unrecoverable kprobe detected at %p.\n", p->addr); + pr_warn("Unrecoverable kprobe detected.\n"); dump_kprobe(p); BUG(); break; @@ -395,9 +395,9 @@ static void __kprobes kprobe_handler(struct pt_regs *regs) /* * If we have no pre-handler or it returned 0, we * continue with normal processing. If we have a - * pre-handler and it returned non-zero, it prepped - * for calling the break_handler below on re-entry, - * so get out doing nothing more here. + * pre-handler and it returned non-zero, it will + * modify the execution path and no need to single + * stepping. Let's just reset current kprobe and exit. * * pre_handler can hit a breakpoint and can step thru * before return, keep PSTATE D-flag enabled until @@ -405,16 +405,8 @@ static void __kprobes kprobe_handler(struct pt_regs *regs) */ if (!p->pre_handler || !p->pre_handler(p, regs)) { setup_singlestep(p, regs, kcb, 0); - return; - } - } - } else if ((le32_to_cpu(*(kprobe_opcode_t *) addr) == - BRK64_OPCODE_KPROBES) && cur_kprobe) { - /* We probably hit a jprobe. Call its break handler. */ - if (cur_kprobe->break_handler && - cur_kprobe->break_handler(cur_kprobe, regs)) { - setup_singlestep(cur_kprobe, regs, kcb, 0); - return; + } else + reset_current_kprobe(); } } /* @@ -465,74 +457,6 @@ kprobe_breakpoint_handler(struct pt_regs *regs, unsigned int esr) return DBG_HOOK_HANDLED; } -int __kprobes setjmp_pre_handler(struct kprobe *p, struct pt_regs *regs) -{ - struct jprobe *jp = container_of(p, struct jprobe, kp); - struct kprobe_ctlblk *kcb = get_kprobe_ctlblk(); - - kcb->jprobe_saved_regs = *regs; - /* - * Since we can't be sure where in the stack frame "stacked" - * pass-by-value arguments are stored we just don't try to - * duplicate any of the stack. 
Do not use jprobes on functions that - * use more than 64 bytes (after padding each to an 8 byte boundary) - * of arguments, or pass individual arguments larger than 16 bytes. - */ - - instruction_pointer_set(regs, (unsigned long) jp->entry); - preempt_disable(); - pause_graph_tracing(); - return 1; -} - -void __kprobes jprobe_return(void) -{ - struct kprobe_ctlblk *kcb = get_kprobe_ctlblk(); - - /* - * Jprobe handler return by entering break exception, - * encoded same as kprobe, but with following conditions - * -a special PC to identify it from the other kprobes. - * -restore stack addr to original saved pt_regs - */ - asm volatile(" mov sp, %0 \n" - "jprobe_return_break: brk %1 \n" - : - : "r" (kcb->jprobe_saved_regs.sp), - "I" (BRK64_ESR_KPROBES) - : "memory"); - - unreachable(); -} - -int __kprobes longjmp_break_handler(struct kprobe *p, struct pt_regs *regs) -{ - struct kprobe_ctlblk *kcb = get_kprobe_ctlblk(); - long stack_addr = kcb->jprobe_saved_regs.sp; - long orig_sp = kernel_stack_pointer(regs); - struct jprobe *jp = container_of(p, struct jprobe, kp); - extern const char jprobe_return_break[]; - - if (instruction_pointer(regs) != (u64) jprobe_return_break) - return 0; - - if (orig_sp != stack_addr) { - struct pt_regs *saved_regs = - (struct pt_regs *)kcb->jprobe_saved_regs.sp; - pr_err("current sp %lx does not match saved sp %lx\n", - orig_sp, stack_addr); - pr_err("Saved registers for jprobe %p\n", jp); - __show_regs(saved_regs); - pr_err("Current registers\n"); - __show_regs(regs); - BUG(); - } - unpause_graph_tracing(); - *regs = kcb->jprobe_saved_regs; - preempt_enable_no_resched(); - return 1; -} - bool arch_within_kprobe_blacklist(unsigned long addr) { if ((addr >= (unsigned long)__kprobes_text_start && diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c index f3e2e3aec0b0..2faa9863d2e5 100644 --- a/arch/arm64/kernel/smp.c +++ b/arch/arm64/kernel/smp.c @@ -179,7 +179,7 @@ int __cpu_up(unsigned int cpu, struct task_struct *idle) * This is the secondary CPU boot entry. We're using this CPUs * idle thread stack, but a set of temporary page tables. 
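
[The block deleted above (setjmp_pre_handler(), jprobe_return(), longjmp_break_handler()) is arm64's share of the tree-wide jprobes removal; the ia64 equivalent follows below. Former jprobes users can usually switch to a plain kprobe pre-handler and read arguments out of pt_regs. A hedged sketch; on arm64 the first integer argument arrives in regs->regs[0], and "_do_fork" is just an example target:

	static int entry_pre_handler(struct kprobe *p, struct pt_regs *regs)
	{
		pr_info("%s entered, arg0=0x%llx\n",
			p->symbol_name, regs->regs[0]);
		return 0;	/* continue with normal single-stepping */
	}

	static struct kprobe kp = {
		.symbol_name	= "_do_fork",	/* example target */
		.pre_handler	= entry_pre_handler,
	};

Register with register_kprobe(&kp) from module init and tear down with unregister_kprobe(&kp).]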
*/ -asmlinkage void secondary_start_kernel(void) +asmlinkage notrace void secondary_start_kernel(void) { u64 mpidr = read_cpuid_mpidr() & MPIDR_HWID_BITMASK; struct mm_struct *mm = &init_mm; diff --git a/arch/arm64/kvm/fpsimd.c b/arch/arm64/kvm/fpsimd.c index dc6ecfa5a2d2..aac7808ce216 100644 --- a/arch/arm64/kvm/fpsimd.c +++ b/arch/arm64/kvm/fpsimd.c @@ -5,13 +5,14 @@ * Copyright 2018 Arm Limited * Author: Dave Martin */ -#include +#include #include #include #include #include #include #include +#include /* * Called on entry to KVM_RUN unless this vcpu previously ran at least @@ -61,10 +62,16 @@ void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu) { BUG_ON(!current->mm); - vcpu->arch.flags &= ~(KVM_ARM64_FP_ENABLED | KVM_ARM64_HOST_SVE_IN_USE); + vcpu->arch.flags &= ~(KVM_ARM64_FP_ENABLED | + KVM_ARM64_HOST_SVE_IN_USE | + KVM_ARM64_HOST_SVE_ENABLED); vcpu->arch.flags |= KVM_ARM64_FP_HOST; + if (test_thread_flag(TIF_SVE)) vcpu->arch.flags |= KVM_ARM64_HOST_SVE_IN_USE; + + if (read_sysreg(cpacr_el1) & CPACR_EL1_ZEN_EL0EN) + vcpu->arch.flags |= KVM_ARM64_HOST_SVE_ENABLED; } /* @@ -92,19 +99,30 @@ void kvm_arch_vcpu_ctxsync_fp(struct kvm_vcpu *vcpu) */ void kvm_arch_vcpu_put_fp(struct kvm_vcpu *vcpu) { - local_bh_disable(); + unsigned long flags; - update_thread_flag(TIF_SVE, - vcpu->arch.flags & KVM_ARM64_HOST_SVE_IN_USE); + local_irq_save(flags); if (vcpu->arch.flags & KVM_ARM64_FP_ENABLED) { /* Clean guest FP state to memory and invalidate cpu view */ fpsimd_save(); fpsimd_flush_cpu_state(); - } else if (!test_thread_flag(TIF_FOREIGN_FPSTATE)) { - /* Ensure user trap controls are correctly restored */ - fpsimd_bind_task_to_cpu(); + } else if (system_supports_sve()) { + /* + * The FPSIMD/SVE state in the CPU has not been touched, and we + * have SVE (and VHE): CPACR_EL1 (alias CPTR_EL2) has been + * reset to CPACR_EL1_DEFAULT by the Hyp code, disabling SVE + * for EL0. To avoid spurious traps, restore the trap state + * seen by kvm_arch_vcpu_load_fp(): + */ + if (vcpu->arch.flags & KVM_ARM64_HOST_SVE_ENABLED) + sysreg_clear_set(CPACR_EL1, 0, CPACR_EL1_ZEN_EL0EN); + else + sysreg_clear_set(CPACR_EL1, CPACR_EL1_ZEN_EL0EN, 0); } - local_bh_enable(); + update_thread_flag(TIF_SVE, + vcpu->arch.flags & KVM_ARM64_HOST_SVE_IN_USE); + + local_irq_restore(flags); } diff --git a/arch/arm64/lib/Makefile b/arch/arm64/lib/Makefile index 137710f4dac3..68755fd70dcf 100644 --- a/arch/arm64/lib/Makefile +++ b/arch/arm64/lib/Makefile @@ -1,5 +1,5 @@ # SPDX-License-Identifier: GPL-2.0 -lib-y := bitops.o clear_user.o delay.o copy_from_user.o \ +lib-y := clear_user.o delay.o copy_from_user.o \ copy_to_user.o copy_in_user.o copy_page.o \ clear_page.o memchr.o memcpy.o memmove.o memset.o \ memcmp.o strcmp.o strncmp.o strlen.o strnlen.o \ diff --git a/arch/arm64/lib/bitops.S b/arch/arm64/lib/bitops.S deleted file mode 100644 index 43ac736baa5b..000000000000 --- a/arch/arm64/lib/bitops.S +++ /dev/null @@ -1,76 +0,0 @@ -/* - * Based on arch/arm/lib/bitops.h - * - * Copyright (C) 2013 ARM Ltd. - * - * This program is free software; you can redistribute it and/or modify - * it under the terms of the GNU General Public License version 2 as - * published by the Free Software Foundation. - * - * This program is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the - * GNU General Public License for more details. 
- * - * You should have received a copy of the GNU General Public License - * along with this program. If not, see . - */ - -#include -#include -#include - -/* - * x0: bits 5:0 bit offset - * bits 31:6 word offset - * x1: address - */ - .macro bitop, name, llsc, lse -ENTRY( \name ) - and w3, w0, #63 // Get bit offset - eor w0, w0, w3 // Clear low bits - mov x2, #1 - add x1, x1, x0, lsr #3 // Get word offset -alt_lse " prfm pstl1strm, [x1]", "nop" - lsl x3, x2, x3 // Create mask - -alt_lse "1: ldxr x2, [x1]", "\lse x3, [x1]" -alt_lse " \llsc x2, x2, x3", "nop" -alt_lse " stxr w0, x2, [x1]", "nop" -alt_lse " cbnz w0, 1b", "nop" - - ret -ENDPROC(\name ) - .endm - - .macro testop, name, llsc, lse -ENTRY( \name ) - and w3, w0, #63 // Get bit offset - eor w0, w0, w3 // Clear low bits - mov x2, #1 - add x1, x1, x0, lsr #3 // Get word offset -alt_lse " prfm pstl1strm, [x1]", "nop" - lsl x4, x2, x3 // Create mask - -alt_lse "1: ldxr x2, [x1]", "\lse x4, x2, [x1]" - lsr x0, x2, x3 -alt_lse " \llsc x2, x2, x4", "nop" -alt_lse " stlxr w5, x2, [x1]", "nop" -alt_lse " cbnz w5, 1b", "nop" -alt_lse " dmb ish", "nop" - - and x0, x0, #1 - ret -ENDPROC(\name ) - .endm - -/* - * Atomic bit operations. - */ - bitop change_bit, eor, steor - bitop clear_bit, bic, stclr - bitop set_bit, orr, stset - - testop test_and_change_bit, eor, ldeoral - testop test_and_clear_bit, bic, ldclral - testop test_and_set_bit, orr, ldsetal diff --git a/arch/arm64/mm/dma-mapping.c b/arch/arm64/mm/dma-mapping.c index 49e217ac7e1e..61e93f0b5482 100644 --- a/arch/arm64/mm/dma-mapping.c +++ b/arch/arm64/mm/dma-mapping.c @@ -583,13 +583,14 @@ static void *__iommu_alloc_attrs(struct device *dev, size_t size, size >> PAGE_SHIFT); return NULL; } - if (!coherent) - __dma_flush_area(page_to_virt(page), iosize); - addr = dma_common_contiguous_remap(page, size, VM_USERMAP, prot, __builtin_return_address(0)); - if (!addr) { + if (addr) { + memset(addr, 0, size); + if (!coherent) + __dma_flush_area(page_to_virt(page), iosize); + } else { iommu_dma_unmap_page(dev, *handle, iosize, 0, attrs); dma_release_from_contiguous(dev, page, size >> PAGE_SHIFT); diff --git a/arch/arm64/mm/hugetlbpage.c b/arch/arm64/mm/hugetlbpage.c index ecc6818191df..192b3ba07075 100644 --- a/arch/arm64/mm/hugetlbpage.c +++ b/arch/arm64/mm/hugetlbpage.c @@ -108,7 +108,6 @@ static pte_t get_clear_flush(struct mm_struct *mm, unsigned long pgsize, unsigned long ncontig) { - struct vm_area_struct vma = { .vm_mm = mm }; pte_t orig_pte = huge_ptep_get(ptep); bool valid = pte_valid(orig_pte); unsigned long i, saddr = addr; @@ -125,8 +124,10 @@ static pte_t get_clear_flush(struct mm_struct *mm, orig_pte = pte_mkdirty(orig_pte); } - if (valid) + if (valid) { + struct vm_area_struct vma = TLB_FLUSH_VMA(mm, 0); flush_tlb_range(&vma, saddr, addr); + } return orig_pte; } @@ -145,7 +146,7 @@ static void clear_flush(struct mm_struct *mm, unsigned long pgsize, unsigned long ncontig) { - struct vm_area_struct vma = { .vm_mm = mm }; + struct vm_area_struct vma = TLB_FLUSH_VMA(mm, 0); unsigned long i, saddr = addr; for (i = 0; i < ncontig; i++, addr += pgsize, ptep++) diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c index 1b18b4722420..9abf8a1e7b25 100644 --- a/arch/arm64/mm/init.c +++ b/arch/arm64/mm/init.c @@ -310,7 +310,7 @@ static void __init arm64_memory_present(void) } #endif -static phys_addr_t memory_limit = (phys_addr_t)ULLONG_MAX; +static phys_addr_t memory_limit = PHYS_ADDR_MAX; /* * Limit the memory size that was specified via FDT. 
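
[Two independent fixes sit in the hunks above. __iommu_alloc_attrs() now zeroes the buffer through the remapped address, and performs the zeroing and cache flush only when the remap actually succeeded. The hugetlb change, like the tlb.h and ia64 ones elsewhere in this patch, replaces a hand-rolled on-stack vma with the new TLB_FLUSH_VMA() initializer, which fills in exactly the fields flush_tlb_range() consumes (vm_mm and vm_flags) and zero-initializes the rest:

	struct vm_area_struct vma = TLB_FLUSH_VMA(mm, 0);

	flush_tlb_range(&vma, start, end);
]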
@@ -401,7 +401,7 @@ void __init arm64_memblock_init(void) * high up in memory, add back the kernel region that must be accessible * via the linear mapping. */ - if (memory_limit != (phys_addr_t)ULLONG_MAX) { + if (memory_limit != PHYS_ADDR_MAX) { memblock_mem_limit_remove_map(memory_limit); memblock_add(__pa_symbol(_text), (u64)(_end - _text)); } @@ -611,11 +611,13 @@ void __init mem_init(void) BUILD_BUG_ON(TASK_SIZE_32 > TASK_SIZE_64); #endif +#ifdef CONFIG_SPARSEMEM_VMEMMAP /* * Make sure we chose the upper bound of sizeof(struct page) - * correctly. + * correctly when sizing the VMEMMAP array. */ BUILD_BUG_ON(sizeof(struct page) > (1 << STRUCT_PAGE_MAX_SHIFT)); +#endif if (PAGE_SIZE >= 16384 && get_num_physpages() <= 128) { extern int sysctl_overcommit_memory; @@ -666,7 +668,7 @@ __setup("keepinitrd", keepinitrd_setup); */ static int dump_mem_limit(struct notifier_block *self, unsigned long v, void *p) { - if (memory_limit != (phys_addr_t)ULLONG_MAX) { + if (memory_limit != PHYS_ADDR_MAX) { pr_emerg("Memory Limit: %llu MB\n", memory_limit >> 20); } else { pr_emerg("Memory Limit: none\n"); diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c index 493ff75670ff..8ae5d7ae4af3 100644 --- a/arch/arm64/mm/mmu.c +++ b/arch/arm64/mm/mmu.c @@ -977,12 +977,12 @@ int pmd_clear_huge(pmd_t *pmdp) return 1; } -int pud_free_pmd_page(pud_t *pud) +int pud_free_pmd_page(pud_t *pud, unsigned long addr) { return pud_none(*pud); } -int pmd_free_pte_page(pmd_t *pmd) +int pmd_free_pte_page(pmd_t *pmd, unsigned long addr) { return pmd_none(*pmd); } diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S index 5f9a73a4452c..03646e6a2ef4 100644 --- a/arch/arm64/mm/proc.S +++ b/arch/arm64/mm/proc.S @@ -217,8 +217,9 @@ ENDPROC(idmap_cpu_replace_ttbr1) .macro __idmap_kpti_put_pgtable_ent_ng, type orr \type, \type, #PTE_NG // Same bit for blocks and pages - str \type, [cur_\()\type\()p] // Update the entry and ensure it - dc civac, cur_\()\type\()p // is visible to all CPUs. + str \type, [cur_\()\type\()p] // Update the entry and ensure + dmb sy // that it is visible to all + dc civac, cur_\()\type\()p // CPUs. .endm /* diff --git a/arch/h8300/include/asm/atomic.h b/arch/h8300/include/asm/atomic.h index 941e7554e886..c6b6a06231b2 100644 --- a/arch/h8300/include/asm/atomic.h +++ b/arch/h8300/include/asm/atomic.h @@ -2,8 +2,10 @@ #ifndef __ARCH_H8300_ATOMIC__ #define __ARCH_H8300_ATOMIC__ +#include #include #include +#include /* * Atomic operations that C can't guarantee us. 
Useful for @@ -15,8 +17,6 @@ #define atomic_read(v) READ_ONCE((v)->counter) #define atomic_set(v, i) WRITE_ONCE(((v)->counter), (i)) -#include - #define ATOMIC_OP_RETURN(op, c_op) \ static inline int atomic_##op##_return(int i, atomic_t *v) \ { \ @@ -69,18 +69,6 @@ ATOMIC_OPS(sub, -=) #undef ATOMIC_OP_RETURN #undef ATOMIC_OP -#define atomic_add_negative(a, v) (atomic_add_return((a), (v)) < 0) -#define atomic_sub_and_test(i, v) (atomic_sub_return(i, v) == 0) - -#define atomic_inc_return(v) atomic_add_return(1, v) -#define atomic_dec_return(v) atomic_sub_return(1, v) - -#define atomic_inc(v) (void)atomic_inc_return(v) -#define atomic_inc_and_test(v) (atomic_inc_return(v) == 0) - -#define atomic_dec(v) (void)atomic_dec_return(v) -#define atomic_dec_and_test(v) (atomic_dec_return(v) == 0) - static inline int atomic_cmpxchg(atomic_t *v, int old, int new) { int ret; @@ -94,7 +82,7 @@ static inline int atomic_cmpxchg(atomic_t *v, int old, int new) return ret; } -static inline int __atomic_add_unless(atomic_t *v, int a, int u) +static inline int atomic_fetch_add_unless(atomic_t *v, int a, int u) { int ret; h8300flags flags; @@ -106,5 +94,6 @@ static inline int __atomic_add_unless(atomic_t *v, int a, int u) arch_local_irq_restore(flags); return ret; } +#define atomic_fetch_add_unless atomic_fetch_add_unless #endif /* __ARCH_H8300_ATOMIC __ */ diff --git a/arch/hexagon/include/asm/atomic.h b/arch/hexagon/include/asm/atomic.h index fb3dfb2a667e..311b9894ccc8 100644 --- a/arch/hexagon/include/asm/atomic.h +++ b/arch/hexagon/include/asm/atomic.h @@ -164,7 +164,7 @@ ATOMIC_OPS(xor) #undef ATOMIC_OP /** - * __atomic_add_unless - add unless the number is a given value + * atomic_fetch_add_unless - add unless the number is a given value * @v: pointer to value * @a: amount to add * @u: unless value is equal to u @@ -173,7 +173,7 @@ ATOMIC_OPS(xor) * */ -static inline int __atomic_add_unless(atomic_t *v, int a, int u) +static inline int atomic_fetch_add_unless(atomic_t *v, int a, int u) { int __oldval; register int tmp; @@ -196,18 +196,6 @@ static inline int __atomic_add_unless(atomic_t *v, int a, int u) ); return __oldval; } - -#define atomic_inc_not_zero(v) atomic_add_unless((v), 1, 0) - -#define atomic_inc(v) atomic_add(1, (v)) -#define atomic_dec(v) atomic_sub(1, (v)) - -#define atomic_inc_and_test(v) (atomic_add_return(1, (v)) == 0) -#define atomic_dec_and_test(v) (atomic_sub_return(1, (v)) == 0) -#define atomic_sub_and_test(i, v) (atomic_sub_return(i, (v)) == 0) -#define atomic_add_negative(i, v) (atomic_add_return(i, (v)) < 0) - -#define atomic_inc_return(v) (atomic_add_return(1, v)) -#define atomic_dec_return(v) (atomic_sub_return(1, v)) +#define atomic_fetch_add_unless atomic_fetch_add_unless #endif diff --git a/arch/hexagon/include/asm/pgtable.h b/arch/hexagon/include/asm/pgtable.h index aef02f7ca8aa..65125d0b02dd 100644 --- a/arch/hexagon/include/asm/pgtable.h +++ b/arch/hexagon/include/asm/pgtable.h @@ -30,7 +30,6 @@ /* A handy thing to have if one has the RAM. 
Declared in head.S */ extern unsigned long empty_zero_page; -extern unsigned long zero_page_mask; /* * The PTE model described here is that of the Hexagon Virtual Machine, diff --git a/arch/hexagon/kernel/setup.c b/arch/hexagon/kernel/setup.c index 6981949f5df3..dc8c7e75b5d1 100644 --- a/arch/hexagon/kernel/setup.c +++ b/arch/hexagon/kernel/setup.c @@ -66,7 +66,7 @@ void __init setup_arch(char **cmdline_p) */ __vmsetvec(_K_VM_event_vector); - printk(KERN_INFO "PHYS_OFFSET=0x%08x\n", PHYS_OFFSET); + printk(KERN_INFO "PHYS_OFFSET=0x%08lx\n", PHYS_OFFSET); /* * Simulator has a few differences from the hardware. diff --git a/arch/hexagon/mm/init.c b/arch/hexagon/mm/init.c index 192584d5ac2f..1495d45e472d 100644 --- a/arch/hexagon/mm/init.c +++ b/arch/hexagon/mm/init.c @@ -39,9 +39,6 @@ unsigned long __phys_offset; /* physical kernel offset >> 12 */ /* Set as variable to limit PMD copies */ int max_kernel_seg = 0x303; -/* think this should be (page_size-1) the way it's used...*/ -unsigned long zero_page_mask; - /* indicate pfn's of high memory */ unsigned long highstart_pfn, highend_pfn; diff --git a/arch/ia64/include/asm/atomic.h b/arch/ia64/include/asm/atomic.h index 2524fb60fbc2..206530d0751b 100644 --- a/arch/ia64/include/asm/atomic.h +++ b/arch/ia64/include/asm/atomic.h @@ -215,91 +215,10 @@ ATOMIC64_FETCH_OP(xor, ^) (cmpxchg(&((v)->counter), old, new)) #define atomic64_xchg(v, new) (xchg(&((v)->counter), new)) -static __inline__ int __atomic_add_unless(atomic_t *v, int a, int u) -{ - int c, old; - c = atomic_read(v); - for (;;) { - if (unlikely(c == (u))) - break; - old = atomic_cmpxchg((v), c, c + (a)); - if (likely(old == c)) - break; - c = old; - } - return c; -} - - -static __inline__ long atomic64_add_unless(atomic64_t *v, long a, long u) -{ - long c, old; - c = atomic64_read(v); - for (;;) { - if (unlikely(c == (u))) - break; - old = atomic64_cmpxchg((v), c, c + (a)); - if (likely(old == c)) - break; - c = old; - } - return c != (u); -} - -#define atomic64_inc_not_zero(v) atomic64_add_unless((v), 1, 0) - -static __inline__ long atomic64_dec_if_positive(atomic64_t *v) -{ - long c, old, dec; - c = atomic64_read(v); - for (;;) { - dec = c - 1; - if (unlikely(dec < 0)) - break; - old = atomic64_cmpxchg((v), c, dec); - if (likely(old == c)) - break; - c = old; - } - return dec; -} - -/* - * Atomically add I to V and return TRUE if the resulting value is - * negative. 
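
[The h8300 and hexagon conversions above, and the ia64 removals around here, are the same atomics consolidation applied to arm64 earlier in this patch: __atomic_add_unless() is renamed to atomic_fetch_add_unless() and advertised via a #define, and the remaining helpers are derived generically, for example (simplified from <linux/atomic.h>):

	static inline bool atomic_add_unless(atomic_t *v, int a, int u)
	{
		return atomic_fetch_add_unless(v, a, u) != u;
	}

	#define atomic_inc_not_zero(v)	atomic_add_unless((v), 1, 0)
]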
- */ -static __inline__ int -atomic_add_negative (int i, atomic_t *v) -{ - return atomic_add_return(i, v) < 0; -} - -static __inline__ long -atomic64_add_negative (__s64 i, atomic64_t *v) -{ - return atomic64_add_return(i, v) < 0; -} - -#define atomic_dec_return(v) atomic_sub_return(1, (v)) -#define atomic_inc_return(v) atomic_add_return(1, (v)) -#define atomic64_dec_return(v) atomic64_sub_return(1, (v)) -#define atomic64_inc_return(v) atomic64_add_return(1, (v)) - -#define atomic_sub_and_test(i,v) (atomic_sub_return((i), (v)) == 0) -#define atomic_dec_and_test(v) (atomic_sub_return(1, (v)) == 0) -#define atomic_inc_and_test(v) (atomic_add_return(1, (v)) == 0) -#define atomic64_sub_and_test(i,v) (atomic64_sub_return((i), (v)) == 0) -#define atomic64_dec_and_test(v) (atomic64_sub_return(1, (v)) == 0) -#define atomic64_inc_and_test(v) (atomic64_add_return(1, (v)) == 0) - #define atomic_add(i,v) (void)atomic_add_return((i), (v)) #define atomic_sub(i,v) (void)atomic_sub_return((i), (v)) -#define atomic_inc(v) atomic_add(1, (v)) -#define atomic_dec(v) atomic_sub(1, (v)) #define atomic64_add(i,v) (void)atomic64_add_return((i), (v)) #define atomic64_sub(i,v) (void)atomic64_sub_return((i), (v)) -#define atomic64_inc(v) atomic64_add(1, (v)) -#define atomic64_dec(v) atomic64_sub(1, (v)) #endif /* _ASM_IA64_ATOMIC_H */ diff --git a/arch/ia64/include/asm/kprobes.h b/arch/ia64/include/asm/kprobes.h index 0302b3664789..580356a2eea6 100644 --- a/arch/ia64/include/asm/kprobes.h +++ b/arch/ia64/include/asm/kprobes.h @@ -82,8 +82,6 @@ struct prev_kprobe { #define ARCH_PREV_KPROBE_SZ 2 struct kprobe_ctlblk { unsigned long kprobe_status; - struct pt_regs jprobe_saved_regs; - unsigned long jprobes_saved_stacked_regs[MAX_PARAM_RSE_SIZE]; unsigned long *bsp; unsigned long cfm; atomic_t prev_kprobe_index; diff --git a/arch/ia64/include/asm/tlb.h b/arch/ia64/include/asm/tlb.h index 44f0ac0df308..516355a774bf 100644 --- a/arch/ia64/include/asm/tlb.h +++ b/arch/ia64/include/asm/tlb.h @@ -115,12 +115,11 @@ ia64_tlb_flush_mmu_tlbonly(struct mmu_gather *tlb, unsigned long start, unsigned flush_tlb_all(); } else { /* - * XXX fix me: flush_tlb_range() should take an mm pointer instead of a - * vma pointer. + * flush_tlb_range() takes a vma instead of a mm pointer because + * some architectures want the vm_flags for ITLB/DTLB flush. */ - struct vm_area_struct vma; + struct vm_area_struct vma = TLB_FLUSH_VMA(tlb->mm, 0); - vma.vm_mm = tlb->mm; /* flush the address range from the tlb: */ flush_tlb_range(&vma, start, end); /* now flush the virt. page-table area mapping the address range: */ diff --git a/arch/ia64/include/uapi/asm/break.h b/arch/ia64/include/uapi/asm/break.h index 5d742bcb0018..4ca110f0a94b 100644 --- a/arch/ia64/include/uapi/asm/break.h +++ b/arch/ia64/include/uapi/asm/break.h @@ -14,7 +14,6 @@ */ #define __IA64_BREAK_KDB 0x80100 #define __IA64_BREAK_KPROBE 0x81000 /* .. 
0x81fff */ -#define __IA64_BREAK_JPROBE 0x82000 /* * OS-specific break numbers: diff --git a/arch/ia64/kernel/Makefile b/arch/ia64/kernel/Makefile index 498f3da3f225..d0c0ccdd656a 100644 --- a/arch/ia64/kernel/Makefile +++ b/arch/ia64/kernel/Makefile @@ -25,7 +25,7 @@ obj-$(CONFIG_NUMA) += numa.o obj-$(CONFIG_PERFMON) += perfmon_default_smpl.o obj-$(CONFIG_IA64_CYCLONE) += cyclone.o obj-$(CONFIG_IA64_MCA_RECOVERY) += mca_recovery.o -obj-$(CONFIG_KPROBES) += kprobes.o jprobes.o +obj-$(CONFIG_KPROBES) += kprobes.o obj-$(CONFIG_DYNAMIC_FTRACE) += ftrace.o obj-$(CONFIG_KEXEC) += machine_kexec.o relocate_kernel.o crash.o obj-$(CONFIG_CRASH_DUMP) += crash_dump.o diff --git a/arch/ia64/kernel/jprobes.S b/arch/ia64/kernel/jprobes.S deleted file mode 100644 index f69389c7be1d..000000000000 --- a/arch/ia64/kernel/jprobes.S +++ /dev/null @@ -1,90 +0,0 @@ -/* - * Jprobe specific operations - * - * This program is free software; you can redistribute it and/or modify - * it under the terms of the GNU General Public License as published by - * the Free Software Foundation; either version 2 of the License, or - * (at your option) any later version. - * - * This program is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the - * GNU General Public License for more details. - * - * You should have received a copy of the GNU General Public License - * along with this program; if not, write to the Free Software - * Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. - * - * Copyright (C) Intel Corporation, 2005 - * - * 2005-May Rusty Lynch and Anil S Keshavamurthy - * initial implementation - * - * Jprobes (a.k.a. "jump probes" which is built on-top of kprobes) allow a - * probe to be inserted into the beginning of a function call. The fundamental - * difference between a jprobe and a kprobe is the jprobe handler is executed - * in the same context as the target function, while the kprobe handlers - * are executed in interrupt context. - * - * For jprobes we initially gain control by placing a break point in the - * first instruction of the targeted function. When we catch that specific - * break, we: - * * set the return address to our jprobe_inst_return() function - * * jump to the jprobe handler function - * - * Since we fixed up the return address, the jprobe handler will return to our - * jprobe_inst_return() function, giving us control again. At this point we - * are back in the parents frame marker, so we do yet another call to our - * jprobe_break() function to fix up the frame marker as it would normally - * exist in the target function. - * - * Our jprobe_return function then transfers control back to kprobes.c by - * executing a break instruction using one of our reserved numbers. When we - * catch that break in kprobes.c, we continue like we do for a normal kprobe - * by single stepping the emulated instruction, and then returning execution - * to the correct location. 
- */ -#include -#include - - /* - * void jprobe_break(void) - */ - .section .kprobes.text, "ax" -ENTRY(jprobe_break) - break.m __IA64_BREAK_JPROBE -END(jprobe_break) - - /* - * void jprobe_inst_return(void) - */ -GLOBAL_ENTRY(jprobe_inst_return) - br.call.sptk.many b0=jprobe_break -END(jprobe_inst_return) - -GLOBAL_ENTRY(invalidate_stacked_regs) - movl r16=invalidate_restore_cfm - ;; - mov b6=r16 - ;; - br.ret.sptk.many b6 - ;; -invalidate_restore_cfm: - mov r16=ar.rsc - ;; - mov ar.rsc=r0 - ;; - loadrs - ;; - mov ar.rsc=r16 - ;; - br.cond.sptk.many rp -END(invalidate_stacked_regs) - -GLOBAL_ENTRY(flush_register_stack) - // flush dirty regs to backing store (must be first in insn group) - flushrs - ;; - br.ret.sptk.many rp -END(flush_register_stack) - diff --git a/arch/ia64/kernel/kprobes.c b/arch/ia64/kernel/kprobes.c index f5f3a5e6fcd1..aa41bd5cf9b7 100644 --- a/arch/ia64/kernel/kprobes.c +++ b/arch/ia64/kernel/kprobes.c @@ -35,8 +35,6 @@ #include #include -extern void jprobe_inst_return(void); - DEFINE_PER_CPU(struct kprobe *, current_kprobe) = NULL; DEFINE_PER_CPU(struct kprobe_ctlblk, kprobe_ctlblk); @@ -480,12 +478,9 @@ int __kprobes trampoline_probe_handler(struct kprobe *p, struct pt_regs *regs) */ break; } - kretprobe_assert(ri, orig_ret_address, trampoline_address); - reset_current_kprobe(); kretprobe_hash_unlock(current, &flags); - preempt_enable_no_resched(); hlist_for_each_entry_safe(ri, tmp, &empty_rp, hlist) { hlist_del(&ri->hlist); @@ -819,14 +814,6 @@ static int __kprobes pre_kprobes_handler(struct die_args *args) prepare_ss(p, regs); kcb->kprobe_status = KPROBE_REENTER; return 1; - } else if (args->err == __IA64_BREAK_JPROBE) { - /* - * jprobe instrumented function just completed - */ - p = __this_cpu_read(current_kprobe); - if (p->break_handler && p->break_handler(p, regs)) { - goto ss_probe; - } } else if (!is_ia64_break_inst(regs)) { /* The breakpoint instruction was removed by * another cpu right after we hit, no further @@ -861,15 +848,12 @@ static int __kprobes pre_kprobes_handler(struct die_args *args) set_current_kprobe(p, kcb); kcb->kprobe_status = KPROBE_HIT_ACTIVE; - if (p->pre_handler && p->pre_handler(p, regs)) - /* - * Our pre-handler is specifically requesting that we just - * do a return. This is used for both the jprobe pre-handler - * and the kretprobe trampoline - */ + if (p->pre_handler && p->pre_handler(p, regs)) { + reset_current_kprobe(); + preempt_enable_no_resched(); return 1; + } -ss_probe: #if !defined(CONFIG_PREEMPT) if (p->ainsn.inst_flag == INST_FLAG_BOOSTABLE && !p->post_handler) { /* Boost up -- we can execute copied instructions directly */ @@ -992,7 +976,6 @@ int __kprobes kprobe_exceptions_notify(struct notifier_block *self, case DIE_BREAK: /* err is break number from ia64_bad_break() */ if ((args->err >> 12) == (__IA64_BREAK_KPROBE >> 12) - || args->err == __IA64_BREAK_JPROBE || args->err == 0) if (pre_kprobes_handler(args)) ret = NOTIFY_STOP; @@ -1040,74 +1023,6 @@ unsigned long arch_deref_entry_point(void *entry) return ((struct fnptr *)entry)->ip; } -int __kprobes setjmp_pre_handler(struct kprobe *p, struct pt_regs *regs) -{ - struct jprobe *jp = container_of(p, struct jprobe, kp); - unsigned long addr = arch_deref_entry_point(jp->entry); - struct kprobe_ctlblk *kcb = get_kprobe_ctlblk(); - struct param_bsp_cfm pa; - int bytes; - - /* - * Callee owns the argument space and could overwrite it, eg - * tail call optimization. 
-	 * So to be absolutely safe, we save the argument space before
-	 * transferring control to the instrumented jprobe function, which
-	 * runs in process context.
-	 */
-	pa.ip = regs->cr_iip;
-	unw_init_running(ia64_get_bsp_cfm, &pa);
-	bytes = (char *)ia64_rse_skip_regs(pa.bsp, pa.cfm & 0x3f)
-				- (char *)pa.bsp;
-	memcpy( kcb->jprobes_saved_stacked_regs,
-		pa.bsp,
-		bytes );
-	kcb->bsp = pa.bsp;
-	kcb->cfm = pa.cfm;
-
-	/* save architectural state */
-	kcb->jprobe_saved_regs = *regs;
-
-	/* after rfi, execute the jprobe instrumented function */
-	regs->cr_iip = addr & ~0xFULL;
-	ia64_psr(regs)->ri = addr & 0xf;
-	regs->r1 = ((struct fnptr *)(jp->entry))->gp;
-
-	/*
-	 * fix the return address to our jprobe_inst_return() function
-	 * in the jprobes.S file
-	 */
-	regs->b0 = ((struct fnptr *)(jprobe_inst_return))->ip;
-
-	return 1;
-}
-
-/* ia64 does not need this */
-void __kprobes jprobe_return(void)
-{
-}
-
-int __kprobes longjmp_break_handler(struct kprobe *p, struct pt_regs *regs)
-{
-	struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
-	int bytes;
-
-	/* restoring architectural state */
-	*regs = kcb->jprobe_saved_regs;
-
-	/* restoring the original argument space */
-	flush_register_stack();
-	bytes = (char *)ia64_rse_skip_regs(kcb->bsp, kcb->cfm & 0x3f)
-				- (char *)kcb->bsp;
-	memcpy( kcb->bsp,
-		kcb->jprobes_saved_stacked_regs,
-		bytes );
-	invalidate_stacked_regs();
-
-	preempt_enable_no_resched();
-	return 1;
-}
-
 static struct kprobe trampoline_p = {
 	.pre_handler = trampoline_probe_handler
 };

diff --git a/arch/ia64/kernel/perfmon.c b/arch/ia64/kernel/perfmon.c
index 3b38c717008a..46bff1661836 100644
--- a/arch/ia64/kernel/perfmon.c
+++ b/arch/ia64/kernel/perfmon.c
@@ -2278,17 +2278,15 @@ pfm_smpl_buffer_alloc(struct task_struct *task, struct file *filp, pfm_context_t
 	DPRINT(("smpl_buf @%p\n", smpl_buf));

 	/* allocate vma */
-	vma = kmem_cache_zalloc(vm_area_cachep, GFP_KERNEL);
+	vma = vm_area_alloc(mm);
 	if (!vma) {
 		DPRINT(("Cannot allocate vma\n"));
 		goto error_kmem;
 	}
-	INIT_LIST_HEAD(&vma->anon_vma_chain);

 	/*
 	 * partially initialize the vma for the sampling buffer
 	 */
-	vma->vm_mm = mm;
 	vma->vm_file = get_file(filp);
 	vma->vm_flags = VM_READ|VM_MAYREAD|VM_DONTEXPAND|VM_DONTDUMP;
 	vma->vm_page_prot = PAGE_READONLY; /* XXX may need to change */
@@ -2346,7 +2344,7 @@ pfm_smpl_buffer_alloc(struct task_struct *task, struct file *filp, pfm_context_t
 	return 0;

 error:
-	kmem_cache_free(vm_area_cachep, vma);
+	vm_area_free(vma);
 error_kmem:
 	pfm_rvfree(smpl_buf, size);
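Both the perfmon.c hunk above and the init.c hunks below replace open-coded vm_area_struct setup (kmem_cache_zalloc() on vm_area_cachep plus manual vm_mm and anon_vma_chain initialization) with the vm_area_alloc()/vma_set_anonymous()/vm_area_free() helpers. A minimal sketch of the new pattern as of this kernel (map_one_page() is a hypothetical wrapper, error handling abbreviated):

	#include <linux/mm.h>

	/* Allocate, initialize, and insert one anonymous page-sized VMA. */
	static int map_one_page(struct mm_struct *mm, unsigned long addr)
	{
		struct vm_area_struct *vma;

		vma = vm_area_alloc(mm);	/* was: kmem_cache_zalloc(vm_area_cachep, ...) */
		if (!vma)
			return -ENOMEM;

		vma_set_anonymous(vma);		/* was: open-coded vm_mm/anon_vma_chain setup */
		vma->vm_start = addr & PAGE_MASK;
		vma->vm_end = vma->vm_start + PAGE_SIZE;
		vma->vm_flags = VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE;
		vma->vm_page_prot = vm_get_page_prot(vma->vm_flags);

		down_write(&mm->mmap_sem);
		if (insert_vm_struct(mm, vma)) {
			up_write(&mm->mmap_sem);
			vm_area_free(vma);	/* was: kmem_cache_free(vm_area_cachep, vma) */
			return -ENOMEM;
		}
		up_write(&mm->mmap_sem);

		return 0;
	}

Centralizing allocation this way keeps vm_area_struct initialization in one place, so new fields do not have to be initialized at every call site; that is why the INIT_LIST_HEAD() and vma->vm_mm assignments simply disappear from the callers in these hunks.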
diff --git a/arch/ia64/mm/init.c b/arch/ia64/mm/init.c
index 18278b448530..3b85c3ecac38 100644
--- a/arch/ia64/mm/init.c
+++ b/arch/ia64/mm/init.c
@@ -114,10 +114,9 @@ ia64_init_addr_space (void)
 	 * the problem.  When the process attempts to write to the register backing store
 	 * for the first time, it will get a SEGFAULT in this case.
 	 */
-	vma = kmem_cache_zalloc(vm_area_cachep, GFP_KERNEL);
+	vma = vm_area_alloc(current->mm);
 	if (vma) {
-		INIT_LIST_HEAD(&vma->anon_vma_chain);
-		vma->vm_mm = current->mm;
+		vma_set_anonymous(vma);
 		vma->vm_start = current->thread.rbs_bot & PAGE_MASK;
 		vma->vm_end = vma->vm_start + PAGE_SIZE;
 		vma->vm_flags = VM_DATA_DEFAULT_FLAGS|VM_GROWSUP|VM_ACCOUNT;
@@ -125,7 +124,7 @@ ia64_init_addr_space (void)
 		down_write(&current->mm->mmap_sem);
 		if (insert_vm_struct(current->mm, vma)) {
 			up_write(&current->mm->mmap_sem);
-			kmem_cache_free(vm_area_cachep, vma);
+			vm_area_free(vma);
 			return;
 		}
 		up_write(&current->mm->mmap_sem);
@@ -133,10 +132,9 @@ ia64_init_addr_space (void)

 	/* map NaT-page at address zero to speed up speculative dereferencing of NULL: */
 	if (!(current->personality & MMAP_PAGE_ZERO)) {
-		vma = kmem_cache_zalloc(vm_area_cachep, GFP_KERNEL);
+		vma = vm_area_alloc(current->mm);
 		if (vma) {
-			INIT_LIST_HEAD(&vma->anon_vma_chain);
-			vma->vm_mm = current->mm;
+			vma_set_anonymous(vma);
 			vma->vm_end = PAGE_SIZE;
 			vma->vm_page_prot = __pgprot(pgprot_val(PAGE_READONLY) | _PAGE_MA_NAT);
 			vma->vm_flags = VM_READ | VM_MAYREAD | VM_IO |
@@ -144,7 +142,7 @@ ia64_init_addr_space (void)
 			down_write(&current->mm->mmap_sem);
 			if (insert_vm_struct(current->mm, vma)) {
 				up_write(&current->mm->mmap_sem);
-				kmem_cache_free(vm_area_cachep, vma);
+				vm_area_free(vma);
 				return;
 			}
 			up_write(&current->mm->mmap_sem);
@@ -277,7 +275,7 @@ static struct vm_area_struct gate_vma;

 static int __init gate_vma_init(void)
 {
-	gate_vma.vm_mm = NULL;
+	vma_init(&gate_vma, NULL);
 	gate_vma.vm_start = FIXADDR_USER_START;
 	gate_vma.vm_end = FIXADDR_USER_END;
 	gate_vma.vm_flags = VM_READ | VM_MAYREAD | VM_EXEC | VM_MAYEXEC;

diff --git a/arch/m68k/Kconfig b/arch/m68k/Kconfig
index 785612b576f7..b29f93774d95 100644
--- a/arch/m68k/Kconfig
+++ b/arch/m68k/Kconfig
@@ -2,6 +2,7 @@ config M68K
 	bool
 	default y
+	select ARCH_HAS_SYNC_DMA_FOR_DEVICE if HAS_DMA
 	select ARCH_MIGHT_HAVE_PC_PARPORT if ISA
 	select ARCH_NO_COHERENT_DMA_MMAP if !MMU
 	select HAVE_IDE
@@ -24,6 +25,10 @@ config M68K
 	select MODULES_USE_ELF_RELA
 	select OLD_SIGSUSPEND3
 	select OLD_SIGACTION
+	select DMA_NONCOHERENT_OPS if HAS_DMA
+	select HAVE_MEMBLOCK
+	select ARCH_DISCARD_MEMBLOCK
+	select NO_BOOTMEM

 config CPU_BIG_ENDIAN
 	def_bool y

diff --git a/arch/m68k/apollo/config.c b/arch/m68k/apollo/config.c
index b2a6bc63f8cd..aef8d42e078d 100644
--- a/arch/m68k/apollo/config.c
+++ b/arch/m68k/apollo/config.c
@@ -31,7 +31,6 @@ extern void dn_sched_init(irq_handler_t handler);
 extern void dn_init_IRQ(void);
 extern u32 dn_gettimeoffset(void);
 extern int dn_dummy_hwclk(int, struct rtc_time *);
-extern int dn_dummy_set_clock_mmss(unsigned long);
 extern void dn_dummy_reset(void);
 #ifdef CONFIG_HEARTBEAT
 static void dn_heartbeat(int on);
@@ -156,7 +155,6 @@ void __init config_apollo(void)
 	arch_gettimeoffset = dn_gettimeoffset;
 	mach_max_dma_address = 0xffffffff;
 	mach_hwclk = dn_dummy_hwclk; /* */
-	mach_set_clock_mmss = dn_dummy_set_clock_mmss; /* */
 	mach_reset = dn_dummy_reset; /* */
 #ifdef CONFIG_HEARTBEAT
 	mach_heartbeat = dn_heartbeat;
@@ -240,12 +238,6 @@ int dn_dummy_hwclk(int op, struct rtc_time *t)
 {
 }

-int dn_dummy_set_clock_mmss(unsigned long nowtime)
-{
-	pr_info("set_clock_mmss\n");
-	return 0;
-}
-
 void dn_dummy_reset(void)
 {
 	dn_serial_print("The end !\n");

diff --git a/arch/m68k/atari/config.c b/arch/m68k/atari/config.c
index 565c6f06ab0b..bd96702a1ad0 100644
--- a/arch/m68k/atari/config.c
+++ b/arch/m68k/atari/config.c
@@ -81,9 +81,6 @@ extern void atari_sched_init(irq_handler_t);
 extern u32 atari_gettimeoffset(void);
extern int atari_mste_hwclk (int, struct rtc_time *); extern int atari_tt_hwclk (int, struct rtc_time *); -extern int atari_mste_set_clock_mmss (unsigned long); -extern int atari_tt_set_clock_mmss (unsigned long); - /* ++roman: This is a more elaborate test for an SCC chip, since the plain * Medusa board generates DTACK at the SCC's standard addresses, but a SCC @@ -362,13 +359,11 @@ void __init config_atari(void) ATARIHW_SET(TT_CLK); pr_cont(" TT_CLK"); mach_hwclk = atari_tt_hwclk; - mach_set_clock_mmss = atari_tt_set_clock_mmss; } if (hwreg_present(&mste_rtc.sec_ones)) { ATARIHW_SET(MSTE_CLK); pr_cont(" MSTE_CLK"); mach_hwclk = atari_mste_hwclk; - mach_set_clock_mmss = atari_mste_set_clock_mmss; } if (!MACH_IS_MEDUSA && hwreg_present(&dma_wd.fdc_speed) && hwreg_write(&dma_wd.fdc_speed, 0)) { diff --git a/arch/m68k/atari/time.c b/arch/m68k/atari/time.c index c549b48174ec..9cca64286464 100644 --- a/arch/m68k/atari/time.c +++ b/arch/m68k/atari/time.c @@ -285,69 +285,6 @@ int atari_tt_hwclk( int op, struct rtc_time *t ) return( 0 ); } - -int atari_mste_set_clock_mmss (unsigned long nowtime) -{ - short real_seconds = nowtime % 60, real_minutes = (nowtime / 60) % 60; - struct MSTE_RTC val; - unsigned char rtc_minutes; - - mste_read(&val); - rtc_minutes= val.min_ones + val.min_tens * 10; - if ((rtc_minutes < real_minutes - ? real_minutes - rtc_minutes - : rtc_minutes - real_minutes) < 30) - { - val.sec_ones = real_seconds % 10; - val.sec_tens = real_seconds / 10; - val.min_ones = real_minutes % 10; - val.min_tens = real_minutes / 10; - mste_write(&val); - } - else - return -1; - return 0; -} - -int atari_tt_set_clock_mmss (unsigned long nowtime) -{ - int retval = 0; - short real_seconds = nowtime % 60, real_minutes = (nowtime / 60) % 60; - unsigned char save_control, save_freq_select, rtc_minutes; - - save_control = RTC_READ (RTC_CONTROL); /* tell the clock it's being set */ - RTC_WRITE (RTC_CONTROL, save_control | RTC_SET); - - save_freq_select = RTC_READ (RTC_FREQ_SELECT); /* stop and reset prescaler */ - RTC_WRITE (RTC_FREQ_SELECT, save_freq_select | RTC_DIV_RESET2); - - rtc_minutes = RTC_READ (RTC_MINUTES); - if (!(save_control & RTC_DM_BINARY)) - rtc_minutes = bcd2bin(rtc_minutes); - - /* Since we're only adjusting minutes and seconds, don't interfere - with hour overflow. This avoids messing with unknown time zones - but requires your RTC not to be off by more than 30 minutes. */ - if ((rtc_minutes < real_minutes - ? 
real_minutes - rtc_minutes - : rtc_minutes - real_minutes) < 30) - { - if (!(save_control & RTC_DM_BINARY)) - { - real_seconds = bin2bcd(real_seconds); - real_minutes = bin2bcd(real_minutes); - } - RTC_WRITE (RTC_SECONDS, real_seconds); - RTC_WRITE (RTC_MINUTES, real_minutes); - } - else - retval = -1; - - RTC_WRITE (RTC_FREQ_SELECT, save_freq_select); - RTC_WRITE (RTC_CONTROL, save_control); - return retval; -} - /* * Local variables: * c-indent-level: 4 diff --git a/arch/m68k/bvme6000/config.c b/arch/m68k/bvme6000/config.c index 2cfff4765040..143ee9fa3893 100644 --- a/arch/m68k/bvme6000/config.c +++ b/arch/m68k/bvme6000/config.c @@ -41,7 +41,6 @@ static void bvme6000_get_model(char *model); extern void bvme6000_sched_init(irq_handler_t handler); extern u32 bvme6000_gettimeoffset(void); extern int bvme6000_hwclk (int, struct rtc_time *); -extern int bvme6000_set_clock_mmss (unsigned long); extern void bvme6000_reset (void); void bvme6000_set_vectors (void); @@ -113,7 +112,6 @@ void __init config_bvme6000(void) mach_init_IRQ = bvme6000_init_IRQ; arch_gettimeoffset = bvme6000_gettimeoffset; mach_hwclk = bvme6000_hwclk; - mach_set_clock_mmss = bvme6000_set_clock_mmss; mach_reset = bvme6000_reset; mach_get_model = bvme6000_get_model; @@ -305,46 +303,3 @@ int bvme6000_hwclk(int op, struct rtc_time *t) return 0; } - -/* - * Set the minutes and seconds from seconds value 'nowtime'. Fail if - * clock is out by > 30 minutes. Logic lifted from atari code. - * Algorithm is to wait for the 10ms register to change, and then to - * wait a short while, and then set it. - */ - -int bvme6000_set_clock_mmss (unsigned long nowtime) -{ - int retval = 0; - short real_seconds = nowtime % 60, real_minutes = (nowtime / 60) % 60; - unsigned char rtc_minutes, rtc_tenms; - volatile RtcPtr_t rtc = (RtcPtr_t)BVME_RTC_BASE; - unsigned char msr = rtc->msr & 0xc0; - unsigned long flags; - volatile int i; - - rtc->msr = 0; /* Ensure clock accessible */ - rtc_minutes = bcd2bin (rtc->bcd_min); - - if ((rtc_minutes < real_minutes - ? 
real_minutes - rtc_minutes - : rtc_minutes - real_minutes) < 30) - { - local_irq_save(flags); - rtc_tenms = rtc->bcd_tenms; - while (rtc_tenms == rtc->bcd_tenms) - ; - for (i = 0; i < 1000; i++) - ; - rtc->bcd_min = bin2bcd(real_minutes); - rtc->bcd_sec = bin2bcd(real_seconds); - local_irq_restore(flags); - } - else - retval = -1; - - rtc->msr = msr; - - return retval; -} - diff --git a/arch/m68k/configs/amiga_defconfig b/arch/m68k/configs/amiga_defconfig index a874e54404d1..1d5483f6e457 100644 --- a/arch/m68k/configs/amiga_defconfig +++ b/arch/m68k/configs/amiga_defconfig @@ -52,6 +52,7 @@ CONFIG_UNIX_DIAG=m CONFIG_TLS=m CONFIG_XFRM_MIGRATE=y CONFIG_NET_KEY=y +CONFIG_XDP_SOCKETS=y CONFIG_INET=y CONFIG_IP_PNP=y CONFIG_IP_PNP_DHCP=y @@ -98,18 +99,14 @@ CONFIG_NF_CONNTRACK_SANE=m CONFIG_NF_CONNTRACK_SIP=m CONFIG_NF_CONNTRACK_TFTP=m CONFIG_NF_TABLES=m +CONFIG_NF_TABLES_SET=m CONFIG_NF_TABLES_INET=y CONFIG_NF_TABLES_NETDEV=y -CONFIG_NFT_EXTHDR=m -CONFIG_NFT_META=m -CONFIG_NFT_RT=m CONFIG_NFT_NUMGEN=m CONFIG_NFT_CT=m CONFIG_NFT_FLOW_OFFLOAD=m -CONFIG_NFT_SET_RBTREE=m -CONFIG_NFT_SET_HASH=m -CONFIG_NFT_SET_BITMAP=m CONFIG_NFT_COUNTER=m +CONFIG_NFT_CONNLIMIT=m CONFIG_NFT_LOG=m CONFIG_NFT_LIMIT=m CONFIG_NFT_MASQ=m @@ -122,6 +119,7 @@ CONFIG_NFT_REJECT=m CONFIG_NFT_COMPAT=m CONFIG_NFT_HASH=m CONFIG_NFT_FIB_INET=m +CONFIG_NFT_SOCKET=m CONFIG_NFT_DUP_NETDEV=m CONFIG_NFT_FWD_NETDEV=m CONFIG_NFT_FIB_NETDEV=m @@ -200,7 +198,6 @@ CONFIG_IP_SET_HASH_NETPORT=m CONFIG_IP_SET_HASH_NETIFACE=m CONFIG_IP_SET_LIST_SET=m CONFIG_NF_CONNTRACK_IPV4=m -CONFIG_NF_SOCKET_IPV4=m CONFIG_NFT_CHAIN_ROUTE_IPV4=m CONFIG_NFT_DUP_IPV4=m CONFIG_NFT_FIB_IPV4=m @@ -231,7 +228,6 @@ CONFIG_IP_NF_ARPTABLES=m CONFIG_IP_NF_ARPFILTER=m CONFIG_IP_NF_ARP_MANGLE=m CONFIG_NF_CONNTRACK_IPV6=m -CONFIG_NF_SOCKET_IPV6=m CONFIG_NFT_CHAIN_ROUTE_IPV6=m CONFIG_NFT_CHAIN_NAT_IPV6=m CONFIG_NFT_MASQ_IPV6=m @@ -260,7 +256,6 @@ CONFIG_IP6_NF_NAT=m CONFIG_IP6_NF_TARGET_MASQUERADE=m CONFIG_IP6_NF_TARGET_NPT=m CONFIG_NF_TABLES_BRIDGE=y -CONFIG_NFT_BRIDGE_META=m CONFIG_NFT_BRIDGE_REJECT=m CONFIG_NF_LOG_BRIDGE=m CONFIG_BRIDGE_NF_EBTABLES=m @@ -301,6 +296,7 @@ CONFIG_6LOWPAN_GHC_EXT_HDR_FRAG=m CONFIG_6LOWPAN_GHC_EXT_HDR_ROUTE=m CONFIG_DNS_RESOLVER=y CONFIG_BATMAN_ADV=m +# CONFIG_BATMAN_ADV_BATMAN_V is not set CONFIG_BATMAN_ADV_DAT=y CONFIG_BATMAN_ADV_NC=y CONFIG_BATMAN_ADV_MCAST=y @@ -356,6 +352,7 @@ CONFIG_A2091_SCSI=y CONFIG_GVP11_SCSI=y CONFIG_SCSI_A4000T=y CONFIG_SCSI_ZORRO7XX=y +CONFIG_SCSI_ZORRO_ESP=y CONFIG_MD=y CONFIG_MD_LINEAR=m CONFIG_BLK_DEV_DM=m @@ -363,6 +360,7 @@ CONFIG_DM_UNSTRIPED=m CONFIG_DM_CRYPT=m CONFIG_DM_SNAPSHOT=m CONFIG_DM_THIN_PROVISIONING=m +CONFIG_DM_WRITECACHE=m CONFIG_DM_ERA=m CONFIG_DM_MIRROR=m CONFIG_DM_RAID=m @@ -402,8 +400,8 @@ CONFIG_A2065=y CONFIG_ARIADNE=y # CONFIG_NET_VENDOR_AQUANTIA is not set # CONFIG_NET_VENDOR_ARC is not set -# CONFIG_NET_CADENCE is not set # CONFIG_NET_VENDOR_BROADCOM is not set +# CONFIG_NET_CADENCE is not set # CONFIG_NET_VENDOR_CIRRUS is not set # CONFIG_NET_VENDOR_CORTINA is not set # CONFIG_NET_VENDOR_EZCHIP is not set @@ -412,8 +410,10 @@ CONFIG_ARIADNE=y # CONFIG_NET_VENDOR_INTEL is not set # CONFIG_NET_VENDOR_MARVELL is not set # CONFIG_NET_VENDOR_MICREL is not set +# CONFIG_NET_VENDOR_MICROSEMI is not set # CONFIG_NET_VENDOR_NETRONOME is not set # CONFIG_NET_VENDOR_NI is not set +CONFIG_XSURF100=y CONFIG_HYDRA=y CONFIG_APNE=y CONFIG_ZORRO8390=y @@ -426,9 +426,9 @@ CONFIG_ZORRO8390=y # CONFIG_NET_VENDOR_SMSC is not set # CONFIG_NET_VENDOR_SOCIONEXT is not set # CONFIG_NET_VENDOR_STMICRO is 
not set +# CONFIG_NET_VENDOR_SYNOPSYS is not set # CONFIG_NET_VENDOR_VIA is not set # CONFIG_NET_VENDOR_WIZNET is not set -# CONFIG_NET_VENDOR_SYNOPSYS is not set CONFIG_PPP=m CONFIG_PPP_BSDCOMP=m CONFIG_PPP_DEFLATE=m @@ -478,6 +478,7 @@ CONFIG_HIDRAW=y CONFIG_UHID=m # CONFIG_HID_GENERIC is not set # CONFIG_HID_ITE is not set +# CONFIG_HID_REDRAGON is not set # CONFIG_USB_SUPPORT is not set CONFIG_RTC_CLASS=y # CONFIG_RTC_NVMEM is not set @@ -499,7 +500,7 @@ CONFIG_FS_ENCRYPTION=m CONFIG_FANOTIFY=y CONFIG_QUOTA_NETLINK_INTERFACE=y # CONFIG_PRINT_QUOTA_WARNING is not set -CONFIG_AUTOFS4_FS=m +CONFIG_AUTOFS_FS=m CONFIG_FUSE_FS=m CONFIG_CUSE=m CONFIG_OVERLAY_FS=m @@ -600,6 +601,7 @@ CONFIG_TEST_KSTRTOX=m CONFIG_TEST_PRINTF=m CONFIG_TEST_BITMAP=m CONFIG_TEST_UUID=m +CONFIG_TEST_OVERFLOW=m CONFIG_TEST_RHASHTABLE=m CONFIG_TEST_HASH=m CONFIG_TEST_USER_COPY=m @@ -622,6 +624,11 @@ CONFIG_CRYPTO_CRYPTD=m CONFIG_CRYPTO_MCRYPTD=m CONFIG_CRYPTO_TEST=m CONFIG_CRYPTO_CHACHA20POLY1305=m +CONFIG_CRYPTO_AEGIS128=m +CONFIG_CRYPTO_AEGIS128L=m +CONFIG_CRYPTO_AEGIS256=m +CONFIG_CRYPTO_MORUS640=m +CONFIG_CRYPTO_MORUS1280=m CONFIG_CRYPTO_CFB=m CONFIG_CRYPTO_LRW=m CONFIG_CRYPTO_PCBC=m @@ -657,6 +664,7 @@ CONFIG_CRYPTO_LZO=m CONFIG_CRYPTO_842=m CONFIG_CRYPTO_LZ4=m CONFIG_CRYPTO_LZ4HC=m +CONFIG_CRYPTO_ZSTD=m CONFIG_CRYPTO_ANSI_CPRNG=m CONFIG_CRYPTO_DRBG_HASH=y CONFIG_CRYPTO_DRBG_CTR=y diff --git a/arch/m68k/configs/apollo_defconfig b/arch/m68k/configs/apollo_defconfig index 8ce39e23aa42..52a0af127951 100644 --- a/arch/m68k/configs/apollo_defconfig +++ b/arch/m68k/configs/apollo_defconfig @@ -50,6 +50,7 @@ CONFIG_UNIX_DIAG=m CONFIG_TLS=m CONFIG_XFRM_MIGRATE=y CONFIG_NET_KEY=y +CONFIG_XDP_SOCKETS=y CONFIG_INET=y CONFIG_IP_PNP=y CONFIG_IP_PNP_DHCP=y @@ -96,18 +97,14 @@ CONFIG_NF_CONNTRACK_SANE=m CONFIG_NF_CONNTRACK_SIP=m CONFIG_NF_CONNTRACK_TFTP=m CONFIG_NF_TABLES=m +CONFIG_NF_TABLES_SET=m CONFIG_NF_TABLES_INET=y CONFIG_NF_TABLES_NETDEV=y -CONFIG_NFT_EXTHDR=m -CONFIG_NFT_META=m -CONFIG_NFT_RT=m CONFIG_NFT_NUMGEN=m CONFIG_NFT_CT=m CONFIG_NFT_FLOW_OFFLOAD=m -CONFIG_NFT_SET_RBTREE=m -CONFIG_NFT_SET_HASH=m -CONFIG_NFT_SET_BITMAP=m CONFIG_NFT_COUNTER=m +CONFIG_NFT_CONNLIMIT=m CONFIG_NFT_LOG=m CONFIG_NFT_LIMIT=m CONFIG_NFT_MASQ=m @@ -120,6 +117,7 @@ CONFIG_NFT_REJECT=m CONFIG_NFT_COMPAT=m CONFIG_NFT_HASH=m CONFIG_NFT_FIB_INET=m +CONFIG_NFT_SOCKET=m CONFIG_NFT_DUP_NETDEV=m CONFIG_NFT_FWD_NETDEV=m CONFIG_NFT_FIB_NETDEV=m @@ -198,7 +196,6 @@ CONFIG_IP_SET_HASH_NETPORT=m CONFIG_IP_SET_HASH_NETIFACE=m CONFIG_IP_SET_LIST_SET=m CONFIG_NF_CONNTRACK_IPV4=m -CONFIG_NF_SOCKET_IPV4=m CONFIG_NFT_CHAIN_ROUTE_IPV4=m CONFIG_NFT_DUP_IPV4=m CONFIG_NFT_FIB_IPV4=m @@ -229,7 +226,6 @@ CONFIG_IP_NF_ARPTABLES=m CONFIG_IP_NF_ARPFILTER=m CONFIG_IP_NF_ARP_MANGLE=m CONFIG_NF_CONNTRACK_IPV6=m -CONFIG_NF_SOCKET_IPV6=m CONFIG_NFT_CHAIN_ROUTE_IPV6=m CONFIG_NFT_CHAIN_NAT_IPV6=m CONFIG_NFT_MASQ_IPV6=m @@ -258,7 +254,6 @@ CONFIG_IP6_NF_NAT=m CONFIG_IP6_NF_TARGET_MASQUERADE=m CONFIG_IP6_NF_TARGET_NPT=m CONFIG_NF_TABLES_BRIDGE=y -CONFIG_NFT_BRIDGE_META=m CONFIG_NFT_BRIDGE_REJECT=m CONFIG_NF_LOG_BRIDGE=m CONFIG_BRIDGE_NF_EBTABLES=m @@ -299,6 +294,7 @@ CONFIG_6LOWPAN_GHC_EXT_HDR_FRAG=m CONFIG_6LOWPAN_GHC_EXT_HDR_ROUTE=m CONFIG_DNS_RESOLVER=y CONFIG_BATMAN_ADV=m +# CONFIG_BATMAN_ADV_BATMAN_V is not set CONFIG_BATMAN_ADV_DAT=y CONFIG_BATMAN_ADV_NC=y CONFIG_BATMAN_ADV_MCAST=y @@ -345,6 +341,7 @@ CONFIG_DM_UNSTRIPED=m CONFIG_DM_CRYPT=m CONFIG_DM_SNAPSHOT=m CONFIG_DM_THIN_PROVISIONING=m +CONFIG_DM_WRITECACHE=m CONFIG_DM_ERA=m CONFIG_DM_MIRROR=m CONFIG_DM_RAID=m @@ 
-381,14 +378,15 @@ CONFIG_VETH=m # CONFIG_NET_VENDOR_AMAZON is not set # CONFIG_NET_VENDOR_AQUANTIA is not set # CONFIG_NET_VENDOR_ARC is not set -# CONFIG_NET_CADENCE is not set # CONFIG_NET_VENDOR_BROADCOM is not set +# CONFIG_NET_CADENCE is not set # CONFIG_NET_VENDOR_CORTINA is not set # CONFIG_NET_VENDOR_EZCHIP is not set # CONFIG_NET_VENDOR_HUAWEI is not set # CONFIG_NET_VENDOR_INTEL is not set # CONFIG_NET_VENDOR_MARVELL is not set # CONFIG_NET_VENDOR_MICREL is not set +# CONFIG_NET_VENDOR_MICROSEMI is not set # CONFIG_NET_VENDOR_NATSEMI is not set # CONFIG_NET_VENDOR_NETRONOME is not set # CONFIG_NET_VENDOR_NI is not set @@ -400,9 +398,9 @@ CONFIG_VETH=m # CONFIG_NET_VENDOR_SOLARFLARE is not set # CONFIG_NET_VENDOR_SOCIONEXT is not set # CONFIG_NET_VENDOR_STMICRO is not set +# CONFIG_NET_VENDOR_SYNOPSYS is not set # CONFIG_NET_VENDOR_VIA is not set # CONFIG_NET_VENDOR_WIZNET is not set -# CONFIG_NET_VENDOR_SYNOPSYS is not set CONFIG_PPP=m CONFIG_PPP_BSDCOMP=m CONFIG_PPP_DEFLATE=m @@ -440,6 +438,7 @@ CONFIG_HIDRAW=y CONFIG_UHID=m # CONFIG_HID_GENERIC is not set # CONFIG_HID_ITE is not set +# CONFIG_HID_REDRAGON is not set # CONFIG_USB_SUPPORT is not set CONFIG_RTC_CLASS=y # CONFIG_RTC_NVMEM is not set @@ -458,7 +457,7 @@ CONFIG_FS_ENCRYPTION=m CONFIG_FANOTIFY=y CONFIG_QUOTA_NETLINK_INTERFACE=y # CONFIG_PRINT_QUOTA_WARNING is not set -CONFIG_AUTOFS4_FS=m +CONFIG_AUTOFS_FS=m CONFIG_FUSE_FS=m CONFIG_CUSE=m CONFIG_OVERLAY_FS=m @@ -559,6 +558,7 @@ CONFIG_TEST_KSTRTOX=m CONFIG_TEST_PRINTF=m CONFIG_TEST_BITMAP=m CONFIG_TEST_UUID=m +CONFIG_TEST_OVERFLOW=m CONFIG_TEST_RHASHTABLE=m CONFIG_TEST_HASH=m CONFIG_TEST_USER_COPY=m @@ -581,6 +581,11 @@ CONFIG_CRYPTO_CRYPTD=m CONFIG_CRYPTO_MCRYPTD=m CONFIG_CRYPTO_TEST=m CONFIG_CRYPTO_CHACHA20POLY1305=m +CONFIG_CRYPTO_AEGIS128=m +CONFIG_CRYPTO_AEGIS128L=m +CONFIG_CRYPTO_AEGIS256=m +CONFIG_CRYPTO_MORUS640=m +CONFIG_CRYPTO_MORUS1280=m CONFIG_CRYPTO_CFB=m CONFIG_CRYPTO_LRW=m CONFIG_CRYPTO_PCBC=m @@ -616,6 +621,7 @@ CONFIG_CRYPTO_LZO=m CONFIG_CRYPTO_842=m CONFIG_CRYPTO_LZ4=m CONFIG_CRYPTO_LZ4HC=m +CONFIG_CRYPTO_ZSTD=m CONFIG_CRYPTO_ANSI_CPRNG=m CONFIG_CRYPTO_DRBG_HASH=y CONFIG_CRYPTO_DRBG_CTR=y diff --git a/arch/m68k/configs/atari_defconfig b/arch/m68k/configs/atari_defconfig index 346c4e75edf8..b3103e51268a 100644 --- a/arch/m68k/configs/atari_defconfig +++ b/arch/m68k/configs/atari_defconfig @@ -50,6 +50,7 @@ CONFIG_UNIX_DIAG=m CONFIG_TLS=m CONFIG_XFRM_MIGRATE=y CONFIG_NET_KEY=y +CONFIG_XDP_SOCKETS=y CONFIG_INET=y CONFIG_IP_PNP=y CONFIG_IP_PNP_DHCP=y @@ -96,18 +97,14 @@ CONFIG_NF_CONNTRACK_SANE=m CONFIG_NF_CONNTRACK_SIP=m CONFIG_NF_CONNTRACK_TFTP=m CONFIG_NF_TABLES=m +CONFIG_NF_TABLES_SET=m CONFIG_NF_TABLES_INET=y CONFIG_NF_TABLES_NETDEV=y -CONFIG_NFT_EXTHDR=m -CONFIG_NFT_META=m -CONFIG_NFT_RT=m CONFIG_NFT_NUMGEN=m CONFIG_NFT_CT=m CONFIG_NFT_FLOW_OFFLOAD=m -CONFIG_NFT_SET_RBTREE=m -CONFIG_NFT_SET_HASH=m -CONFIG_NFT_SET_BITMAP=m CONFIG_NFT_COUNTER=m +CONFIG_NFT_CONNLIMIT=m CONFIG_NFT_LOG=m CONFIG_NFT_LIMIT=m CONFIG_NFT_MASQ=m @@ -120,6 +117,7 @@ CONFIG_NFT_REJECT=m CONFIG_NFT_COMPAT=m CONFIG_NFT_HASH=m CONFIG_NFT_FIB_INET=m +CONFIG_NFT_SOCKET=m CONFIG_NFT_DUP_NETDEV=m CONFIG_NFT_FWD_NETDEV=m CONFIG_NFT_FIB_NETDEV=m @@ -198,7 +196,6 @@ CONFIG_IP_SET_HASH_NETPORT=m CONFIG_IP_SET_HASH_NETIFACE=m CONFIG_IP_SET_LIST_SET=m CONFIG_NF_CONNTRACK_IPV4=m -CONFIG_NF_SOCKET_IPV4=m CONFIG_NFT_CHAIN_ROUTE_IPV4=m CONFIG_NFT_DUP_IPV4=m CONFIG_NFT_FIB_IPV4=m @@ -229,7 +226,6 @@ CONFIG_IP_NF_ARPTABLES=m CONFIG_IP_NF_ARPFILTER=m CONFIG_IP_NF_ARP_MANGLE=m 
CONFIG_NF_CONNTRACK_IPV6=m -CONFIG_NF_SOCKET_IPV6=m CONFIG_NFT_CHAIN_ROUTE_IPV6=m CONFIG_NFT_CHAIN_NAT_IPV6=m CONFIG_NFT_MASQ_IPV6=m @@ -258,7 +254,6 @@ CONFIG_IP6_NF_NAT=m CONFIG_IP6_NF_TARGET_MASQUERADE=m CONFIG_IP6_NF_TARGET_NPT=m CONFIG_NF_TABLES_BRIDGE=y -CONFIG_NFT_BRIDGE_META=m CONFIG_NFT_BRIDGE_REJECT=m CONFIG_NF_LOG_BRIDGE=m CONFIG_BRIDGE_NF_EBTABLES=m @@ -299,6 +294,7 @@ CONFIG_6LOWPAN_GHC_EXT_HDR_FRAG=m CONFIG_6LOWPAN_GHC_EXT_HDR_ROUTE=m CONFIG_DNS_RESOLVER=y CONFIG_BATMAN_ADV=m +# CONFIG_BATMAN_ADV_BATMAN_V is not set CONFIG_BATMAN_ADV_DAT=y CONFIG_BATMAN_ADV_NC=y CONFIG_BATMAN_ADV_MCAST=y @@ -354,6 +350,7 @@ CONFIG_DM_UNSTRIPED=m CONFIG_DM_CRYPT=m CONFIG_DM_SNAPSHOT=m CONFIG_DM_THIN_PROVISIONING=m +CONFIG_DM_WRITECACHE=m CONFIG_DM_ERA=m CONFIG_DM_MIRROR=m CONFIG_DM_RAID=m @@ -391,14 +388,15 @@ CONFIG_VETH=m CONFIG_ATARILANCE=y # CONFIG_NET_VENDOR_AQUANTIA is not set # CONFIG_NET_VENDOR_ARC is not set -# CONFIG_NET_CADENCE is not set # CONFIG_NET_VENDOR_BROADCOM is not set +# CONFIG_NET_CADENCE is not set # CONFIG_NET_VENDOR_CORTINA is not set # CONFIG_NET_VENDOR_EZCHIP is not set # CONFIG_NET_VENDOR_HUAWEI is not set # CONFIG_NET_VENDOR_INTEL is not set # CONFIG_NET_VENDOR_MARVELL is not set # CONFIG_NET_VENDOR_MICREL is not set +# CONFIG_NET_VENDOR_MICROSEMI is not set # CONFIG_NET_VENDOR_NETRONOME is not set # CONFIG_NET_VENDOR_NI is not set CONFIG_NE2000=y @@ -411,9 +409,9 @@ CONFIG_NE2000=y CONFIG_SMC91X=y # CONFIG_NET_VENDOR_SOCIONEXT is not set # CONFIG_NET_VENDOR_STMICRO is not set +# CONFIG_NET_VENDOR_SYNOPSYS is not set # CONFIG_NET_VENDOR_VIA is not set # CONFIG_NET_VENDOR_WIZNET is not set -# CONFIG_NET_VENDOR_SYNOPSYS is not set CONFIG_PPP=m CONFIG_PPP_BSDCOMP=m CONFIG_PPP_DEFLATE=m @@ -480,7 +478,7 @@ CONFIG_FS_ENCRYPTION=m CONFIG_FANOTIFY=y CONFIG_QUOTA_NETLINK_INTERFACE=y # CONFIG_PRINT_QUOTA_WARNING is not set -CONFIG_AUTOFS4_FS=m +CONFIG_AUTOFS_FS=m CONFIG_FUSE_FS=m CONFIG_CUSE=m CONFIG_OVERLAY_FS=m @@ -581,6 +579,7 @@ CONFIG_TEST_KSTRTOX=m CONFIG_TEST_PRINTF=m CONFIG_TEST_BITMAP=m CONFIG_TEST_UUID=m +CONFIG_TEST_OVERFLOW=m CONFIG_TEST_RHASHTABLE=m CONFIG_TEST_HASH=m CONFIG_TEST_USER_COPY=m @@ -603,6 +602,11 @@ CONFIG_CRYPTO_CRYPTD=m CONFIG_CRYPTO_MCRYPTD=m CONFIG_CRYPTO_TEST=m CONFIG_CRYPTO_CHACHA20POLY1305=m +CONFIG_CRYPTO_AEGIS128=m +CONFIG_CRYPTO_AEGIS128L=m +CONFIG_CRYPTO_AEGIS256=m +CONFIG_CRYPTO_MORUS640=m +CONFIG_CRYPTO_MORUS1280=m CONFIG_CRYPTO_CFB=m CONFIG_CRYPTO_LRW=m CONFIG_CRYPTO_PCBC=m @@ -638,6 +642,7 @@ CONFIG_CRYPTO_LZO=m CONFIG_CRYPTO_842=m CONFIG_CRYPTO_LZ4=m CONFIG_CRYPTO_LZ4HC=m +CONFIG_CRYPTO_ZSTD=m CONFIG_CRYPTO_ANSI_CPRNG=m CONFIG_CRYPTO_DRBG_HASH=y CONFIG_CRYPTO_DRBG_CTR=y diff --git a/arch/m68k/configs/bvme6000_defconfig b/arch/m68k/configs/bvme6000_defconfig index fca9c7aa71a3..fb7d651a4cab 100644 --- a/arch/m68k/configs/bvme6000_defconfig +++ b/arch/m68k/configs/bvme6000_defconfig @@ -48,6 +48,7 @@ CONFIG_UNIX_DIAG=m CONFIG_TLS=m CONFIG_XFRM_MIGRATE=y CONFIG_NET_KEY=y +CONFIG_XDP_SOCKETS=y CONFIG_INET=y CONFIG_IP_PNP=y CONFIG_IP_PNP_DHCP=y @@ -94,18 +95,14 @@ CONFIG_NF_CONNTRACK_SANE=m CONFIG_NF_CONNTRACK_SIP=m CONFIG_NF_CONNTRACK_TFTP=m CONFIG_NF_TABLES=m +CONFIG_NF_TABLES_SET=m CONFIG_NF_TABLES_INET=y CONFIG_NF_TABLES_NETDEV=y -CONFIG_NFT_EXTHDR=m -CONFIG_NFT_META=m -CONFIG_NFT_RT=m CONFIG_NFT_NUMGEN=m CONFIG_NFT_CT=m CONFIG_NFT_FLOW_OFFLOAD=m -CONFIG_NFT_SET_RBTREE=m -CONFIG_NFT_SET_HASH=m -CONFIG_NFT_SET_BITMAP=m CONFIG_NFT_COUNTER=m +CONFIG_NFT_CONNLIMIT=m CONFIG_NFT_LOG=m CONFIG_NFT_LIMIT=m CONFIG_NFT_MASQ=m @@ -118,6 +115,7 
@@ CONFIG_NFT_REJECT=m CONFIG_NFT_COMPAT=m CONFIG_NFT_HASH=m CONFIG_NFT_FIB_INET=m +CONFIG_NFT_SOCKET=m CONFIG_NFT_DUP_NETDEV=m CONFIG_NFT_FWD_NETDEV=m CONFIG_NFT_FIB_NETDEV=m @@ -196,7 +194,6 @@ CONFIG_IP_SET_HASH_NETPORT=m CONFIG_IP_SET_HASH_NETIFACE=m CONFIG_IP_SET_LIST_SET=m CONFIG_NF_CONNTRACK_IPV4=m -CONFIG_NF_SOCKET_IPV4=m CONFIG_NFT_CHAIN_ROUTE_IPV4=m CONFIG_NFT_DUP_IPV4=m CONFIG_NFT_FIB_IPV4=m @@ -227,7 +224,6 @@ CONFIG_IP_NF_ARPTABLES=m CONFIG_IP_NF_ARPFILTER=m CONFIG_IP_NF_ARP_MANGLE=m CONFIG_NF_CONNTRACK_IPV6=m -CONFIG_NF_SOCKET_IPV6=m CONFIG_NFT_CHAIN_ROUTE_IPV6=m CONFIG_NFT_CHAIN_NAT_IPV6=m CONFIG_NFT_MASQ_IPV6=m @@ -256,7 +252,6 @@ CONFIG_IP6_NF_NAT=m CONFIG_IP6_NF_TARGET_MASQUERADE=m CONFIG_IP6_NF_TARGET_NPT=m CONFIG_NF_TABLES_BRIDGE=y -CONFIG_NFT_BRIDGE_META=m CONFIG_NFT_BRIDGE_REJECT=m CONFIG_NF_LOG_BRIDGE=m CONFIG_BRIDGE_NF_EBTABLES=m @@ -297,6 +292,7 @@ CONFIG_6LOWPAN_GHC_EXT_HDR_FRAG=m CONFIG_6LOWPAN_GHC_EXT_HDR_ROUTE=m CONFIG_DNS_RESOLVER=y CONFIG_BATMAN_ADV=m +# CONFIG_BATMAN_ADV_BATMAN_V is not set CONFIG_BATMAN_ADV_DAT=y CONFIG_BATMAN_ADV_NC=y CONFIG_BATMAN_ADV_MCAST=y @@ -344,6 +340,7 @@ CONFIG_DM_UNSTRIPED=m CONFIG_DM_CRYPT=m CONFIG_DM_SNAPSHOT=m CONFIG_DM_THIN_PROVISIONING=m +CONFIG_DM_WRITECACHE=m CONFIG_DM_ERA=m CONFIG_DM_MIRROR=m CONFIG_DM_RAID=m @@ -380,14 +377,15 @@ CONFIG_VETH=m # CONFIG_NET_VENDOR_AMAZON is not set # CONFIG_NET_VENDOR_AQUANTIA is not set # CONFIG_NET_VENDOR_ARC is not set -# CONFIG_NET_CADENCE is not set # CONFIG_NET_VENDOR_BROADCOM is not set +# CONFIG_NET_CADENCE is not set # CONFIG_NET_VENDOR_CORTINA is not set # CONFIG_NET_VENDOR_EZCHIP is not set # CONFIG_NET_VENDOR_HUAWEI is not set CONFIG_BVME6000_NET=y # CONFIG_NET_VENDOR_MARVELL is not set # CONFIG_NET_VENDOR_MICREL is not set +# CONFIG_NET_VENDOR_MICROSEMI is not set # CONFIG_NET_VENDOR_NATSEMI is not set # CONFIG_NET_VENDOR_NETRONOME is not set # CONFIG_NET_VENDOR_NI is not set @@ -399,9 +397,9 @@ CONFIG_BVME6000_NET=y # CONFIG_NET_VENDOR_SOLARFLARE is not set # CONFIG_NET_VENDOR_SOCIONEXT is not set # CONFIG_NET_VENDOR_STMICRO is not set +# CONFIG_NET_VENDOR_SYNOPSYS is not set # CONFIG_NET_VENDOR_VIA is not set # CONFIG_NET_VENDOR_WIZNET is not set -# CONFIG_NET_VENDOR_SYNOPSYS is not set CONFIG_PPP=m CONFIG_PPP_BSDCOMP=m CONFIG_PPP_DEFLATE=m @@ -433,6 +431,7 @@ CONFIG_HIDRAW=y CONFIG_UHID=m # CONFIG_HID_GENERIC is not set # CONFIG_HID_ITE is not set +# CONFIG_HID_REDRAGON is not set # CONFIG_USB_SUPPORT is not set CONFIG_RTC_CLASS=y # CONFIG_RTC_NVMEM is not set @@ -450,7 +449,7 @@ CONFIG_FS_ENCRYPTION=m CONFIG_FANOTIFY=y CONFIG_QUOTA_NETLINK_INTERFACE=y # CONFIG_PRINT_QUOTA_WARNING is not set -CONFIG_AUTOFS4_FS=m +CONFIG_AUTOFS_FS=m CONFIG_FUSE_FS=m CONFIG_CUSE=m CONFIG_OVERLAY_FS=m @@ -551,6 +550,7 @@ CONFIG_TEST_KSTRTOX=m CONFIG_TEST_PRINTF=m CONFIG_TEST_BITMAP=m CONFIG_TEST_UUID=m +CONFIG_TEST_OVERFLOW=m CONFIG_TEST_RHASHTABLE=m CONFIG_TEST_HASH=m CONFIG_TEST_USER_COPY=m @@ -573,6 +573,11 @@ CONFIG_CRYPTO_CRYPTD=m CONFIG_CRYPTO_MCRYPTD=m CONFIG_CRYPTO_TEST=m CONFIG_CRYPTO_CHACHA20POLY1305=m +CONFIG_CRYPTO_AEGIS128=m +CONFIG_CRYPTO_AEGIS128L=m +CONFIG_CRYPTO_AEGIS256=m +CONFIG_CRYPTO_MORUS640=m +CONFIG_CRYPTO_MORUS1280=m CONFIG_CRYPTO_CFB=m CONFIG_CRYPTO_LRW=m CONFIG_CRYPTO_PCBC=m @@ -608,6 +613,7 @@ CONFIG_CRYPTO_LZO=m CONFIG_CRYPTO_842=m CONFIG_CRYPTO_LZ4=m CONFIG_CRYPTO_LZ4HC=m +CONFIG_CRYPTO_ZSTD=m CONFIG_CRYPTO_ANSI_CPRNG=m CONFIG_CRYPTO_DRBG_HASH=y CONFIG_CRYPTO_DRBG_CTR=y diff --git a/arch/m68k/configs/hp300_defconfig b/arch/m68k/configs/hp300_defconfig index 
f9eab174915c..6b37f5537c39 100644 --- a/arch/m68k/configs/hp300_defconfig +++ b/arch/m68k/configs/hp300_defconfig @@ -50,6 +50,7 @@ CONFIG_UNIX_DIAG=m CONFIG_TLS=m CONFIG_XFRM_MIGRATE=y CONFIG_NET_KEY=y +CONFIG_XDP_SOCKETS=y CONFIG_INET=y CONFIG_IP_PNP=y CONFIG_IP_PNP_DHCP=y @@ -96,18 +97,14 @@ CONFIG_NF_CONNTRACK_SANE=m CONFIG_NF_CONNTRACK_SIP=m CONFIG_NF_CONNTRACK_TFTP=m CONFIG_NF_TABLES=m +CONFIG_NF_TABLES_SET=m CONFIG_NF_TABLES_INET=y CONFIG_NF_TABLES_NETDEV=y -CONFIG_NFT_EXTHDR=m -CONFIG_NFT_META=m -CONFIG_NFT_RT=m CONFIG_NFT_NUMGEN=m CONFIG_NFT_CT=m CONFIG_NFT_FLOW_OFFLOAD=m -CONFIG_NFT_SET_RBTREE=m -CONFIG_NFT_SET_HASH=m -CONFIG_NFT_SET_BITMAP=m CONFIG_NFT_COUNTER=m +CONFIG_NFT_CONNLIMIT=m CONFIG_NFT_LOG=m CONFIG_NFT_LIMIT=m CONFIG_NFT_MASQ=m @@ -120,6 +117,7 @@ CONFIG_NFT_REJECT=m CONFIG_NFT_COMPAT=m CONFIG_NFT_HASH=m CONFIG_NFT_FIB_INET=m +CONFIG_NFT_SOCKET=m CONFIG_NFT_DUP_NETDEV=m CONFIG_NFT_FWD_NETDEV=m CONFIG_NFT_FIB_NETDEV=m @@ -198,7 +196,6 @@ CONFIG_IP_SET_HASH_NETPORT=m CONFIG_IP_SET_HASH_NETIFACE=m CONFIG_IP_SET_LIST_SET=m CONFIG_NF_CONNTRACK_IPV4=m -CONFIG_NF_SOCKET_IPV4=m CONFIG_NFT_CHAIN_ROUTE_IPV4=m CONFIG_NFT_DUP_IPV4=m CONFIG_NFT_FIB_IPV4=m @@ -229,7 +226,6 @@ CONFIG_IP_NF_ARPTABLES=m CONFIG_IP_NF_ARPFILTER=m CONFIG_IP_NF_ARP_MANGLE=m CONFIG_NF_CONNTRACK_IPV6=m -CONFIG_NF_SOCKET_IPV6=m CONFIG_NFT_CHAIN_ROUTE_IPV6=m CONFIG_NFT_CHAIN_NAT_IPV6=m CONFIG_NFT_MASQ_IPV6=m @@ -258,7 +254,6 @@ CONFIG_IP6_NF_NAT=m CONFIG_IP6_NF_TARGET_MASQUERADE=m CONFIG_IP6_NF_TARGET_NPT=m CONFIG_NF_TABLES_BRIDGE=y -CONFIG_NFT_BRIDGE_META=m CONFIG_NFT_BRIDGE_REJECT=m CONFIG_NF_LOG_BRIDGE=m CONFIG_BRIDGE_NF_EBTABLES=m @@ -299,6 +294,7 @@ CONFIG_6LOWPAN_GHC_EXT_HDR_FRAG=m CONFIG_6LOWPAN_GHC_EXT_HDR_ROUTE=m CONFIG_DNS_RESOLVER=y CONFIG_BATMAN_ADV=m +# CONFIG_BATMAN_ADV_BATMAN_V is not set CONFIG_BATMAN_ADV_DAT=y CONFIG_BATMAN_ADV_NC=y CONFIG_BATMAN_ADV_MCAST=y @@ -345,6 +341,7 @@ CONFIG_DM_UNSTRIPED=m CONFIG_DM_CRYPT=m CONFIG_DM_SNAPSHOT=m CONFIG_DM_THIN_PROVISIONING=m +CONFIG_DM_WRITECACHE=m CONFIG_DM_ERA=m CONFIG_DM_MIRROR=m CONFIG_DM_RAID=m @@ -382,14 +379,15 @@ CONFIG_VETH=m CONFIG_HPLANCE=y # CONFIG_NET_VENDOR_AQUANTIA is not set # CONFIG_NET_VENDOR_ARC is not set -# CONFIG_NET_CADENCE is not set # CONFIG_NET_VENDOR_BROADCOM is not set +# CONFIG_NET_CADENCE is not set # CONFIG_NET_VENDOR_CORTINA is not set # CONFIG_NET_VENDOR_EZCHIP is not set # CONFIG_NET_VENDOR_HUAWEI is not set # CONFIG_NET_VENDOR_INTEL is not set # CONFIG_NET_VENDOR_MARVELL is not set # CONFIG_NET_VENDOR_MICREL is not set +# CONFIG_NET_VENDOR_MICROSEMI is not set # CONFIG_NET_VENDOR_NATSEMI is not set # CONFIG_NET_VENDOR_NETRONOME is not set # CONFIG_NET_VENDOR_NI is not set @@ -401,9 +399,9 @@ CONFIG_HPLANCE=y # CONFIG_NET_VENDOR_SOLARFLARE is not set # CONFIG_NET_VENDOR_SOCIONEXT is not set # CONFIG_NET_VENDOR_STMICRO is not set +# CONFIG_NET_VENDOR_SYNOPSYS is not set # CONFIG_NET_VENDOR_VIA is not set # CONFIG_NET_VENDOR_WIZNET is not set -# CONFIG_NET_VENDOR_SYNOPSYS is not set CONFIG_PPP=m CONFIG_PPP_BSDCOMP=m CONFIG_PPP_DEFLATE=m @@ -443,6 +441,7 @@ CONFIG_HIDRAW=y CONFIG_UHID=m # CONFIG_HID_GENERIC is not set # CONFIG_HID_ITE is not set +# CONFIG_HID_REDRAGON is not set # CONFIG_USB_SUPPORT is not set CONFIG_RTC_CLASS=y # CONFIG_RTC_NVMEM is not set @@ -460,7 +459,7 @@ CONFIG_FS_ENCRYPTION=m CONFIG_FANOTIFY=y CONFIG_QUOTA_NETLINK_INTERFACE=y # CONFIG_PRINT_QUOTA_WARNING is not set -CONFIG_AUTOFS4_FS=m +CONFIG_AUTOFS_FS=m CONFIG_FUSE_FS=m CONFIG_CUSE=m CONFIG_OVERLAY_FS=m @@ -561,6 +560,7 @@ 
CONFIG_TEST_KSTRTOX=m CONFIG_TEST_PRINTF=m CONFIG_TEST_BITMAP=m CONFIG_TEST_UUID=m +CONFIG_TEST_OVERFLOW=m CONFIG_TEST_RHASHTABLE=m CONFIG_TEST_HASH=m CONFIG_TEST_USER_COPY=m @@ -583,6 +583,11 @@ CONFIG_CRYPTO_CRYPTD=m CONFIG_CRYPTO_MCRYPTD=m CONFIG_CRYPTO_TEST=m CONFIG_CRYPTO_CHACHA20POLY1305=m +CONFIG_CRYPTO_AEGIS128=m +CONFIG_CRYPTO_AEGIS128L=m +CONFIG_CRYPTO_AEGIS256=m +CONFIG_CRYPTO_MORUS640=m +CONFIG_CRYPTO_MORUS1280=m CONFIG_CRYPTO_CFB=m CONFIG_CRYPTO_LRW=m CONFIG_CRYPTO_PCBC=m @@ -618,6 +623,7 @@ CONFIG_CRYPTO_LZO=m CONFIG_CRYPTO_842=m CONFIG_CRYPTO_LZ4=m CONFIG_CRYPTO_LZ4HC=m +CONFIG_CRYPTO_ZSTD=m CONFIG_CRYPTO_ANSI_CPRNG=m CONFIG_CRYPTO_DRBG_HASH=y CONFIG_CRYPTO_DRBG_CTR=y diff --git a/arch/m68k/configs/mac_defconfig b/arch/m68k/configs/mac_defconfig index b52e597899eb..930cc2965a11 100644 --- a/arch/m68k/configs/mac_defconfig +++ b/arch/m68k/configs/mac_defconfig @@ -49,6 +49,7 @@ CONFIG_UNIX_DIAG=m CONFIG_TLS=m CONFIG_XFRM_MIGRATE=y CONFIG_NET_KEY=y +CONFIG_XDP_SOCKETS=y CONFIG_INET=y CONFIG_IP_PNP=y CONFIG_IP_PNP_DHCP=y @@ -95,18 +96,14 @@ CONFIG_NF_CONNTRACK_SANE=m CONFIG_NF_CONNTRACK_SIP=m CONFIG_NF_CONNTRACK_TFTP=m CONFIG_NF_TABLES=m +CONFIG_NF_TABLES_SET=m CONFIG_NF_TABLES_INET=y CONFIG_NF_TABLES_NETDEV=y -CONFIG_NFT_EXTHDR=m -CONFIG_NFT_META=m -CONFIG_NFT_RT=m CONFIG_NFT_NUMGEN=m CONFIG_NFT_CT=m CONFIG_NFT_FLOW_OFFLOAD=m -CONFIG_NFT_SET_RBTREE=m -CONFIG_NFT_SET_HASH=m -CONFIG_NFT_SET_BITMAP=m CONFIG_NFT_COUNTER=m +CONFIG_NFT_CONNLIMIT=m CONFIG_NFT_LOG=m CONFIG_NFT_LIMIT=m CONFIG_NFT_MASQ=m @@ -119,6 +116,7 @@ CONFIG_NFT_REJECT=m CONFIG_NFT_COMPAT=m CONFIG_NFT_HASH=m CONFIG_NFT_FIB_INET=m +CONFIG_NFT_SOCKET=m CONFIG_NFT_DUP_NETDEV=m CONFIG_NFT_FWD_NETDEV=m CONFIG_NFT_FIB_NETDEV=m @@ -197,7 +195,6 @@ CONFIG_IP_SET_HASH_NETPORT=m CONFIG_IP_SET_HASH_NETIFACE=m CONFIG_IP_SET_LIST_SET=m CONFIG_NF_CONNTRACK_IPV4=m -CONFIG_NF_SOCKET_IPV4=m CONFIG_NFT_CHAIN_ROUTE_IPV4=m CONFIG_NFT_DUP_IPV4=m CONFIG_NFT_FIB_IPV4=m @@ -228,7 +225,6 @@ CONFIG_IP_NF_ARPTABLES=m CONFIG_IP_NF_ARPFILTER=m CONFIG_IP_NF_ARP_MANGLE=m CONFIG_NF_CONNTRACK_IPV6=m -CONFIG_NF_SOCKET_IPV6=m CONFIG_NFT_CHAIN_ROUTE_IPV6=m CONFIG_NFT_CHAIN_NAT_IPV6=m CONFIG_NFT_MASQ_IPV6=m @@ -257,7 +253,6 @@ CONFIG_IP6_NF_NAT=m CONFIG_IP6_NF_TARGET_MASQUERADE=m CONFIG_IP6_NF_TARGET_NPT=m CONFIG_NF_TABLES_BRIDGE=y -CONFIG_NFT_BRIDGE_META=m CONFIG_NFT_BRIDGE_REJECT=m CONFIG_NF_LOG_BRIDGE=m CONFIG_BRIDGE_NF_EBTABLES=m @@ -301,6 +296,7 @@ CONFIG_6LOWPAN_GHC_EXT_HDR_FRAG=m CONFIG_6LOWPAN_GHC_EXT_HDR_ROUTE=m CONFIG_DNS_RESOLVER=y CONFIG_BATMAN_ADV=m +# CONFIG_BATMAN_ADV_BATMAN_V is not set CONFIG_BATMAN_ADV_DAT=y CONFIG_BATMAN_ADV_NC=y CONFIG_BATMAN_ADV_MCAST=y @@ -354,6 +350,7 @@ CONFIG_DM_UNSTRIPED=m CONFIG_DM_CRYPT=m CONFIG_DM_SNAPSHOT=m CONFIG_DM_THIN_PROVISIONING=m +CONFIG_DM_WRITECACHE=m CONFIG_DM_ERA=m CONFIG_DM_MIRROR=m CONFIG_DM_RAID=m @@ -398,8 +395,8 @@ CONFIG_VETH=m CONFIG_MACMACE=y # CONFIG_NET_VENDOR_AQUANTIA is not set # CONFIG_NET_VENDOR_ARC is not set -# CONFIG_NET_CADENCE is not set # CONFIG_NET_VENDOR_BROADCOM is not set +# CONFIG_NET_CADENCE is not set CONFIG_MAC89x0=y # CONFIG_NET_VENDOR_CORTINA is not set # CONFIG_NET_VENDOR_EZCHIP is not set @@ -407,6 +404,7 @@ CONFIG_MAC89x0=y # CONFIG_NET_VENDOR_INTEL is not set # CONFIG_NET_VENDOR_MARVELL is not set # CONFIG_NET_VENDOR_MICREL is not set +# CONFIG_NET_VENDOR_MICROSEMI is not set CONFIG_MACSONIC=y # CONFIG_NET_VENDOR_NETRONOME is not set # CONFIG_NET_VENDOR_NI is not set @@ -420,9 +418,9 @@ CONFIG_MAC8390=y # CONFIG_NET_VENDOR_SMSC is not set # 
CONFIG_NET_VENDOR_SOCIONEXT is not set # CONFIG_NET_VENDOR_STMICRO is not set +# CONFIG_NET_VENDOR_SYNOPSYS is not set # CONFIG_NET_VENDOR_VIA is not set # CONFIG_NET_VENDOR_WIZNET is not set -# CONFIG_NET_VENDOR_SYNOPSYS is not set CONFIG_PPP=m CONFIG_PPP_BSDCOMP=m CONFIG_PPP_DEFLATE=m @@ -465,6 +463,7 @@ CONFIG_HIDRAW=y CONFIG_UHID=m # CONFIG_HID_GENERIC is not set # CONFIG_HID_ITE is not set +# CONFIG_HID_REDRAGON is not set # CONFIG_USB_SUPPORT is not set CONFIG_RTC_CLASS=y # CONFIG_RTC_NVMEM is not set @@ -482,7 +481,7 @@ CONFIG_FS_ENCRYPTION=m CONFIG_FANOTIFY=y CONFIG_QUOTA_NETLINK_INTERFACE=y # CONFIG_PRINT_QUOTA_WARNING is not set -CONFIG_AUTOFS4_FS=m +CONFIG_AUTOFS_FS=m CONFIG_FUSE_FS=m CONFIG_CUSE=m CONFIG_OVERLAY_FS=m @@ -583,6 +582,7 @@ CONFIG_TEST_KSTRTOX=m CONFIG_TEST_PRINTF=m CONFIG_TEST_BITMAP=m CONFIG_TEST_UUID=m +CONFIG_TEST_OVERFLOW=m CONFIG_TEST_RHASHTABLE=m CONFIG_TEST_HASH=m CONFIG_TEST_USER_COPY=m @@ -605,6 +605,11 @@ CONFIG_CRYPTO_CRYPTD=m CONFIG_CRYPTO_MCRYPTD=m CONFIG_CRYPTO_TEST=m CONFIG_CRYPTO_CHACHA20POLY1305=m +CONFIG_CRYPTO_AEGIS128=m +CONFIG_CRYPTO_AEGIS128L=m +CONFIG_CRYPTO_AEGIS256=m +CONFIG_CRYPTO_MORUS640=m +CONFIG_CRYPTO_MORUS1280=m CONFIG_CRYPTO_CFB=m CONFIG_CRYPTO_LRW=m CONFIG_CRYPTO_PCBC=m @@ -640,6 +645,7 @@ CONFIG_CRYPTO_LZO=m CONFIG_CRYPTO_842=m CONFIG_CRYPTO_LZ4=m CONFIG_CRYPTO_LZ4HC=m +CONFIG_CRYPTO_ZSTD=m CONFIG_CRYPTO_ANSI_CPRNG=m CONFIG_CRYPTO_DRBG_HASH=y CONFIG_CRYPTO_DRBG_CTR=y diff --git a/arch/m68k/configs/multi_defconfig b/arch/m68k/configs/multi_defconfig index 2a84eeec5b02..e7dd25300127 100644 --- a/arch/m68k/configs/multi_defconfig +++ b/arch/m68k/configs/multi_defconfig @@ -59,6 +59,7 @@ CONFIG_UNIX_DIAG=m CONFIG_TLS=m CONFIG_XFRM_MIGRATE=y CONFIG_NET_KEY=y +CONFIG_XDP_SOCKETS=y CONFIG_INET=y CONFIG_IP_PNP=y CONFIG_IP_PNP_DHCP=y @@ -105,18 +106,14 @@ CONFIG_NF_CONNTRACK_SANE=m CONFIG_NF_CONNTRACK_SIP=m CONFIG_NF_CONNTRACK_TFTP=m CONFIG_NF_TABLES=m +CONFIG_NF_TABLES_SET=m CONFIG_NF_TABLES_INET=y CONFIG_NF_TABLES_NETDEV=y -CONFIG_NFT_EXTHDR=m -CONFIG_NFT_META=m -CONFIG_NFT_RT=m CONFIG_NFT_NUMGEN=m CONFIG_NFT_CT=m CONFIG_NFT_FLOW_OFFLOAD=m -CONFIG_NFT_SET_RBTREE=m -CONFIG_NFT_SET_HASH=m -CONFIG_NFT_SET_BITMAP=m CONFIG_NFT_COUNTER=m +CONFIG_NFT_CONNLIMIT=m CONFIG_NFT_LOG=m CONFIG_NFT_LIMIT=m CONFIG_NFT_MASQ=m @@ -129,6 +126,7 @@ CONFIG_NFT_REJECT=m CONFIG_NFT_COMPAT=m CONFIG_NFT_HASH=m CONFIG_NFT_FIB_INET=m +CONFIG_NFT_SOCKET=m CONFIG_NFT_DUP_NETDEV=m CONFIG_NFT_FWD_NETDEV=m CONFIG_NFT_FIB_NETDEV=m @@ -207,7 +205,6 @@ CONFIG_IP_SET_HASH_NETPORT=m CONFIG_IP_SET_HASH_NETIFACE=m CONFIG_IP_SET_LIST_SET=m CONFIG_NF_CONNTRACK_IPV4=m -CONFIG_NF_SOCKET_IPV4=m CONFIG_NFT_CHAIN_ROUTE_IPV4=m CONFIG_NFT_DUP_IPV4=m CONFIG_NFT_FIB_IPV4=m @@ -238,7 +235,6 @@ CONFIG_IP_NF_ARPTABLES=m CONFIG_IP_NF_ARPFILTER=m CONFIG_IP_NF_ARP_MANGLE=m CONFIG_NF_CONNTRACK_IPV6=m -CONFIG_NF_SOCKET_IPV6=m CONFIG_NFT_CHAIN_ROUTE_IPV6=m CONFIG_NFT_CHAIN_NAT_IPV6=m CONFIG_NFT_MASQ_IPV6=m @@ -267,7 +263,6 @@ CONFIG_IP6_NF_NAT=m CONFIG_IP6_NF_TARGET_MASQUERADE=m CONFIG_IP6_NF_TARGET_NPT=m CONFIG_NF_TABLES_BRIDGE=y -CONFIG_NFT_BRIDGE_META=m CONFIG_NFT_BRIDGE_REJECT=m CONFIG_NF_LOG_BRIDGE=m CONFIG_BRIDGE_NF_EBTABLES=m @@ -311,6 +306,7 @@ CONFIG_6LOWPAN_GHC_EXT_HDR_FRAG=m CONFIG_6LOWPAN_GHC_EXT_HDR_ROUTE=m CONFIG_DNS_RESOLVER=y CONFIG_BATMAN_ADV=m +# CONFIG_BATMAN_ADV_BATMAN_V is not set CONFIG_BATMAN_ADV_DAT=y CONFIG_BATMAN_ADV_NC=y CONFIG_BATMAN_ADV_MCAST=y @@ -373,6 +369,7 @@ CONFIG_A2091_SCSI=y CONFIG_GVP11_SCSI=y CONFIG_SCSI_A4000T=y CONFIG_SCSI_ZORRO7XX=y 
+CONFIG_SCSI_ZORRO_ESP=y CONFIG_ATARI_SCSI=y CONFIG_MAC_SCSI=y CONFIG_SCSI_MAC_ESP=y @@ -387,6 +384,7 @@ CONFIG_DM_UNSTRIPED=m CONFIG_DM_CRYPT=m CONFIG_DM_SNAPSHOT=m CONFIG_DM_THIN_PROVISIONING=m +CONFIG_DM_WRITECACHE=m CONFIG_DM_ERA=m CONFIG_DM_MIRROR=m CONFIG_DM_RAID=m @@ -438,8 +436,8 @@ CONFIG_SUN3LANCE=y CONFIG_MACMACE=y # CONFIG_NET_VENDOR_AQUANTIA is not set # CONFIG_NET_VENDOR_ARC is not set -# CONFIG_NET_CADENCE is not set # CONFIG_NET_VENDOR_BROADCOM is not set +# CONFIG_NET_CADENCE is not set CONFIG_MAC89x0=y # CONFIG_NET_VENDOR_CORTINA is not set # CONFIG_NET_VENDOR_EZCHIP is not set @@ -449,9 +447,11 @@ CONFIG_BVME6000_NET=y CONFIG_MVME16x_NET=y # CONFIG_NET_VENDOR_MARVELL is not set # CONFIG_NET_VENDOR_MICREL is not set +# CONFIG_NET_VENDOR_MICROSEMI is not set CONFIG_MACSONIC=y # CONFIG_NET_VENDOR_NETRONOME is not set # CONFIG_NET_VENDOR_NI is not set +CONFIG_XSURF100=y CONFIG_HYDRA=y CONFIG_MAC8390=y CONFIG_NE2000=y @@ -466,9 +466,9 @@ CONFIG_ZORRO8390=y CONFIG_SMC91X=y # CONFIG_NET_VENDOR_SOCIONEXT is not set # CONFIG_NET_VENDOR_STMICRO is not set +# CONFIG_NET_VENDOR_SYNOPSYS is not set # CONFIG_NET_VENDOR_VIA is not set # CONFIG_NET_VENDOR_WIZNET is not set -# CONFIG_NET_VENDOR_SYNOPSYS is not set CONFIG_PLIP=m CONFIG_PPP=m CONFIG_PPP_BSDCOMP=m @@ -533,6 +533,7 @@ CONFIG_HIDRAW=y CONFIG_UHID=m # CONFIG_HID_GENERIC is not set # CONFIG_HID_ITE is not set +# CONFIG_HID_REDRAGON is not set # CONFIG_USB_SUPPORT is not set CONFIG_RTC_CLASS=y # CONFIG_RTC_NVMEM is not set @@ -562,7 +563,7 @@ CONFIG_FS_ENCRYPTION=m CONFIG_FANOTIFY=y CONFIG_QUOTA_NETLINK_INTERFACE=y # CONFIG_PRINT_QUOTA_WARNING is not set -CONFIG_AUTOFS4_FS=m +CONFIG_AUTOFS_FS=m CONFIG_FUSE_FS=m CONFIG_CUSE=m CONFIG_OVERLAY_FS=m @@ -663,6 +664,7 @@ CONFIG_TEST_KSTRTOX=m CONFIG_TEST_PRINTF=m CONFIG_TEST_BITMAP=m CONFIG_TEST_UUID=m +CONFIG_TEST_OVERFLOW=m CONFIG_TEST_RHASHTABLE=m CONFIG_TEST_HASH=m CONFIG_TEST_USER_COPY=m @@ -685,6 +687,11 @@ CONFIG_CRYPTO_CRYPTD=m CONFIG_CRYPTO_MCRYPTD=m CONFIG_CRYPTO_TEST=m CONFIG_CRYPTO_CHACHA20POLY1305=m +CONFIG_CRYPTO_AEGIS128=m +CONFIG_CRYPTO_AEGIS128L=m +CONFIG_CRYPTO_AEGIS256=m +CONFIG_CRYPTO_MORUS640=m +CONFIG_CRYPTO_MORUS1280=m CONFIG_CRYPTO_CFB=m CONFIG_CRYPTO_LRW=m CONFIG_CRYPTO_PCBC=m @@ -720,6 +727,7 @@ CONFIG_CRYPTO_LZO=m CONFIG_CRYPTO_842=m CONFIG_CRYPTO_LZ4=m CONFIG_CRYPTO_LZ4HC=m +CONFIG_CRYPTO_ZSTD=m CONFIG_CRYPTO_ANSI_CPRNG=m CONFIG_CRYPTO_DRBG_HASH=y CONFIG_CRYPTO_DRBG_CTR=y diff --git a/arch/m68k/configs/mvme147_defconfig b/arch/m68k/configs/mvme147_defconfig index 476e69994340..b383327fd77a 100644 --- a/arch/m68k/configs/mvme147_defconfig +++ b/arch/m68k/configs/mvme147_defconfig @@ -47,6 +47,7 @@ CONFIG_UNIX_DIAG=m CONFIG_TLS=m CONFIG_XFRM_MIGRATE=y CONFIG_NET_KEY=y +CONFIG_XDP_SOCKETS=y CONFIG_INET=y CONFIG_IP_PNP=y CONFIG_IP_PNP_DHCP=y @@ -93,18 +94,14 @@ CONFIG_NF_CONNTRACK_SANE=m CONFIG_NF_CONNTRACK_SIP=m CONFIG_NF_CONNTRACK_TFTP=m CONFIG_NF_TABLES=m +CONFIG_NF_TABLES_SET=m CONFIG_NF_TABLES_INET=y CONFIG_NF_TABLES_NETDEV=y -CONFIG_NFT_EXTHDR=m -CONFIG_NFT_META=m -CONFIG_NFT_RT=m CONFIG_NFT_NUMGEN=m CONFIG_NFT_CT=m CONFIG_NFT_FLOW_OFFLOAD=m -CONFIG_NFT_SET_RBTREE=m -CONFIG_NFT_SET_HASH=m -CONFIG_NFT_SET_BITMAP=m CONFIG_NFT_COUNTER=m +CONFIG_NFT_CONNLIMIT=m CONFIG_NFT_LOG=m CONFIG_NFT_LIMIT=m CONFIG_NFT_MASQ=m @@ -117,6 +114,7 @@ CONFIG_NFT_REJECT=m CONFIG_NFT_COMPAT=m CONFIG_NFT_HASH=m CONFIG_NFT_FIB_INET=m +CONFIG_NFT_SOCKET=m CONFIG_NFT_DUP_NETDEV=m CONFIG_NFT_FWD_NETDEV=m CONFIG_NFT_FIB_NETDEV=m @@ -195,7 +193,6 @@ CONFIG_IP_SET_HASH_NETPORT=m 
CONFIG_IP_SET_HASH_NETIFACE=m CONFIG_IP_SET_LIST_SET=m CONFIG_NF_CONNTRACK_IPV4=m -CONFIG_NF_SOCKET_IPV4=m CONFIG_NFT_CHAIN_ROUTE_IPV4=m CONFIG_NFT_DUP_IPV4=m CONFIG_NFT_FIB_IPV4=m @@ -226,7 +223,6 @@ CONFIG_IP_NF_ARPTABLES=m CONFIG_IP_NF_ARPFILTER=m CONFIG_IP_NF_ARP_MANGLE=m CONFIG_NF_CONNTRACK_IPV6=m -CONFIG_NF_SOCKET_IPV6=m CONFIG_NFT_CHAIN_ROUTE_IPV6=m CONFIG_NFT_CHAIN_NAT_IPV6=m CONFIG_NFT_MASQ_IPV6=m @@ -255,7 +251,6 @@ CONFIG_IP6_NF_NAT=m CONFIG_IP6_NF_TARGET_MASQUERADE=m CONFIG_IP6_NF_TARGET_NPT=m CONFIG_NF_TABLES_BRIDGE=y -CONFIG_NFT_BRIDGE_META=m CONFIG_NFT_BRIDGE_REJECT=m CONFIG_NF_LOG_BRIDGE=m CONFIG_BRIDGE_NF_EBTABLES=m @@ -296,6 +291,7 @@ CONFIG_6LOWPAN_GHC_EXT_HDR_FRAG=m CONFIG_6LOWPAN_GHC_EXT_HDR_ROUTE=m CONFIG_DNS_RESOLVER=y CONFIG_BATMAN_ADV=m +# CONFIG_BATMAN_ADV_BATMAN_V is not set CONFIG_BATMAN_ADV_DAT=y CONFIG_BATMAN_ADV_NC=y CONFIG_BATMAN_ADV_MCAST=y @@ -343,6 +339,7 @@ CONFIG_DM_UNSTRIPED=m CONFIG_DM_CRYPT=m CONFIG_DM_SNAPSHOT=m CONFIG_DM_THIN_PROVISIONING=m +CONFIG_DM_WRITECACHE=m CONFIG_DM_ERA=m CONFIG_DM_MIRROR=m CONFIG_DM_RAID=m @@ -380,14 +377,15 @@ CONFIG_VETH=m CONFIG_MVME147_NET=y # CONFIG_NET_VENDOR_AQUANTIA is not set # CONFIG_NET_VENDOR_ARC is not set -# CONFIG_NET_CADENCE is not set # CONFIG_NET_VENDOR_BROADCOM is not set +# CONFIG_NET_CADENCE is not set # CONFIG_NET_VENDOR_CORTINA is not set # CONFIG_NET_VENDOR_EZCHIP is not set # CONFIG_NET_VENDOR_HUAWEI is not set # CONFIG_NET_VENDOR_INTEL is not set # CONFIG_NET_VENDOR_MARVELL is not set # CONFIG_NET_VENDOR_MICREL is not set +# CONFIG_NET_VENDOR_MICROSEMI is not set # CONFIG_NET_VENDOR_NATSEMI is not set # CONFIG_NET_VENDOR_NETRONOME is not set # CONFIG_NET_VENDOR_NI is not set @@ -399,9 +397,9 @@ CONFIG_MVME147_NET=y # CONFIG_NET_VENDOR_SOLARFLARE is not set # CONFIG_NET_VENDOR_SOCIONEXT is not set # CONFIG_NET_VENDOR_STMICRO is not set +# CONFIG_NET_VENDOR_SYNOPSYS is not set # CONFIG_NET_VENDOR_VIA is not set # CONFIG_NET_VENDOR_WIZNET is not set -# CONFIG_NET_VENDOR_SYNOPSYS is not set CONFIG_PPP=m CONFIG_PPP_BSDCOMP=m CONFIG_PPP_DEFLATE=m @@ -433,6 +431,7 @@ CONFIG_HIDRAW=y CONFIG_UHID=m # CONFIG_HID_GENERIC is not set # CONFIG_HID_ITE is not set +# CONFIG_HID_REDRAGON is not set # CONFIG_USB_SUPPORT is not set CONFIG_RTC_CLASS=y # CONFIG_RTC_NVMEM is not set @@ -450,7 +449,7 @@ CONFIG_FS_ENCRYPTION=m CONFIG_FANOTIFY=y CONFIG_QUOTA_NETLINK_INTERFACE=y # CONFIG_PRINT_QUOTA_WARNING is not set -CONFIG_AUTOFS4_FS=m +CONFIG_AUTOFS_FS=m CONFIG_FUSE_FS=m CONFIG_CUSE=m CONFIG_OVERLAY_FS=m @@ -551,6 +550,7 @@ CONFIG_TEST_KSTRTOX=m CONFIG_TEST_PRINTF=m CONFIG_TEST_BITMAP=m CONFIG_TEST_UUID=m +CONFIG_TEST_OVERFLOW=m CONFIG_TEST_RHASHTABLE=m CONFIG_TEST_HASH=m CONFIG_TEST_USER_COPY=m @@ -573,6 +573,11 @@ CONFIG_CRYPTO_CRYPTD=m CONFIG_CRYPTO_MCRYPTD=m CONFIG_CRYPTO_TEST=m CONFIG_CRYPTO_CHACHA20POLY1305=m +CONFIG_CRYPTO_AEGIS128=m +CONFIG_CRYPTO_AEGIS128L=m +CONFIG_CRYPTO_AEGIS256=m +CONFIG_CRYPTO_MORUS640=m +CONFIG_CRYPTO_MORUS1280=m CONFIG_CRYPTO_CFB=m CONFIG_CRYPTO_LRW=m CONFIG_CRYPTO_PCBC=m @@ -608,6 +613,7 @@ CONFIG_CRYPTO_LZO=m CONFIG_CRYPTO_842=m CONFIG_CRYPTO_LZ4=m CONFIG_CRYPTO_LZ4HC=m +CONFIG_CRYPTO_ZSTD=m CONFIG_CRYPTO_ANSI_CPRNG=m CONFIG_CRYPTO_DRBG_HASH=y CONFIG_CRYPTO_DRBG_CTR=y diff --git a/arch/m68k/configs/mvme16x_defconfig b/arch/m68k/configs/mvme16x_defconfig index 1477cda9146e..9783d3deb9e9 100644 --- a/arch/m68k/configs/mvme16x_defconfig +++ b/arch/m68k/configs/mvme16x_defconfig @@ -48,6 +48,7 @@ CONFIG_UNIX_DIAG=m CONFIG_TLS=m CONFIG_XFRM_MIGRATE=y CONFIG_NET_KEY=y +CONFIG_XDP_SOCKETS=y 
CONFIG_INET=y CONFIG_IP_PNP=y CONFIG_IP_PNP_DHCP=y @@ -94,18 +95,14 @@ CONFIG_NF_CONNTRACK_SANE=m CONFIG_NF_CONNTRACK_SIP=m CONFIG_NF_CONNTRACK_TFTP=m CONFIG_NF_TABLES=m +CONFIG_NF_TABLES_SET=m CONFIG_NF_TABLES_INET=y CONFIG_NF_TABLES_NETDEV=y -CONFIG_NFT_EXTHDR=m -CONFIG_NFT_META=m -CONFIG_NFT_RT=m CONFIG_NFT_NUMGEN=m CONFIG_NFT_CT=m CONFIG_NFT_FLOW_OFFLOAD=m -CONFIG_NFT_SET_RBTREE=m -CONFIG_NFT_SET_HASH=m -CONFIG_NFT_SET_BITMAP=m CONFIG_NFT_COUNTER=m +CONFIG_NFT_CONNLIMIT=m CONFIG_NFT_LOG=m CONFIG_NFT_LIMIT=m CONFIG_NFT_MASQ=m @@ -118,6 +115,7 @@ CONFIG_NFT_REJECT=m CONFIG_NFT_COMPAT=m CONFIG_NFT_HASH=m CONFIG_NFT_FIB_INET=m +CONFIG_NFT_SOCKET=m CONFIG_NFT_DUP_NETDEV=m CONFIG_NFT_FWD_NETDEV=m CONFIG_NFT_FIB_NETDEV=m @@ -196,7 +194,6 @@ CONFIG_IP_SET_HASH_NETPORT=m CONFIG_IP_SET_HASH_NETIFACE=m CONFIG_IP_SET_LIST_SET=m CONFIG_NF_CONNTRACK_IPV4=m -CONFIG_NF_SOCKET_IPV4=m CONFIG_NFT_CHAIN_ROUTE_IPV4=m CONFIG_NFT_DUP_IPV4=m CONFIG_NFT_FIB_IPV4=m @@ -227,7 +224,6 @@ CONFIG_IP_NF_ARPTABLES=m CONFIG_IP_NF_ARPFILTER=m CONFIG_IP_NF_ARP_MANGLE=m CONFIG_NF_CONNTRACK_IPV6=m -CONFIG_NF_SOCKET_IPV6=m CONFIG_NFT_CHAIN_ROUTE_IPV6=m CONFIG_NFT_CHAIN_NAT_IPV6=m CONFIG_NFT_MASQ_IPV6=m @@ -256,7 +252,6 @@ CONFIG_IP6_NF_NAT=m CONFIG_IP6_NF_TARGET_MASQUERADE=m CONFIG_IP6_NF_TARGET_NPT=m CONFIG_NF_TABLES_BRIDGE=y -CONFIG_NFT_BRIDGE_META=m CONFIG_NFT_BRIDGE_REJECT=m CONFIG_NF_LOG_BRIDGE=m CONFIG_BRIDGE_NF_EBTABLES=m @@ -297,6 +292,7 @@ CONFIG_6LOWPAN_GHC_EXT_HDR_FRAG=m CONFIG_6LOWPAN_GHC_EXT_HDR_ROUTE=m CONFIG_DNS_RESOLVER=y CONFIG_BATMAN_ADV=m +# CONFIG_BATMAN_ADV_BATMAN_V is not set CONFIG_BATMAN_ADV_DAT=y CONFIG_BATMAN_ADV_NC=y CONFIG_BATMAN_ADV_MCAST=y @@ -344,6 +340,7 @@ CONFIG_DM_UNSTRIPED=m CONFIG_DM_CRYPT=m CONFIG_DM_SNAPSHOT=m CONFIG_DM_THIN_PROVISIONING=m +CONFIG_DM_WRITECACHE=m CONFIG_DM_ERA=m CONFIG_DM_MIRROR=m CONFIG_DM_RAID=m @@ -380,14 +377,15 @@ CONFIG_VETH=m # CONFIG_NET_VENDOR_AMAZON is not set # CONFIG_NET_VENDOR_AQUANTIA is not set # CONFIG_NET_VENDOR_ARC is not set -# CONFIG_NET_CADENCE is not set # CONFIG_NET_VENDOR_BROADCOM is not set +# CONFIG_NET_CADENCE is not set # CONFIG_NET_VENDOR_CORTINA is not set # CONFIG_NET_VENDOR_EZCHIP is not set # CONFIG_NET_VENDOR_HUAWEI is not set CONFIG_MVME16x_NET=y # CONFIG_NET_VENDOR_MARVELL is not set # CONFIG_NET_VENDOR_MICREL is not set +# CONFIG_NET_VENDOR_MICROSEMI is not set # CONFIG_NET_VENDOR_NATSEMI is not set # CONFIG_NET_VENDOR_NETRONOME is not set # CONFIG_NET_VENDOR_NI is not set @@ -399,9 +397,9 @@ CONFIG_MVME16x_NET=y # CONFIG_NET_VENDOR_SOLARFLARE is not set # CONFIG_NET_VENDOR_SOCIONEXT is not set # CONFIG_NET_VENDOR_STMICRO is not set +# CONFIG_NET_VENDOR_SYNOPSYS is not set # CONFIG_NET_VENDOR_VIA is not set # CONFIG_NET_VENDOR_WIZNET is not set -# CONFIG_NET_VENDOR_SYNOPSYS is not set CONFIG_PPP=m CONFIG_PPP_BSDCOMP=m CONFIG_PPP_DEFLATE=m @@ -433,6 +431,7 @@ CONFIG_HIDRAW=y CONFIG_UHID=m # CONFIG_HID_GENERIC is not set # CONFIG_HID_ITE is not set +# CONFIG_HID_REDRAGON is not set # CONFIG_USB_SUPPORT is not set CONFIG_RTC_CLASS=y # CONFIG_RTC_NVMEM is not set @@ -450,7 +449,7 @@ CONFIG_FS_ENCRYPTION=m CONFIG_FANOTIFY=y CONFIG_QUOTA_NETLINK_INTERFACE=y # CONFIG_PRINT_QUOTA_WARNING is not set -CONFIG_AUTOFS4_FS=m +CONFIG_AUTOFS_FS=m CONFIG_FUSE_FS=m CONFIG_CUSE=m CONFIG_OVERLAY_FS=m @@ -551,6 +550,7 @@ CONFIG_TEST_KSTRTOX=m CONFIG_TEST_PRINTF=m CONFIG_TEST_BITMAP=m CONFIG_TEST_UUID=m +CONFIG_TEST_OVERFLOW=m CONFIG_TEST_RHASHTABLE=m CONFIG_TEST_HASH=m CONFIG_TEST_USER_COPY=m @@ -573,6 +573,11 @@ CONFIG_CRYPTO_CRYPTD=m 
CONFIG_CRYPTO_MCRYPTD=m CONFIG_CRYPTO_TEST=m CONFIG_CRYPTO_CHACHA20POLY1305=m +CONFIG_CRYPTO_AEGIS128=m +CONFIG_CRYPTO_AEGIS128L=m +CONFIG_CRYPTO_AEGIS256=m +CONFIG_CRYPTO_MORUS640=m +CONFIG_CRYPTO_MORUS1280=m CONFIG_CRYPTO_CFB=m CONFIG_CRYPTO_LRW=m CONFIG_CRYPTO_PCBC=m @@ -608,6 +613,7 @@ CONFIG_CRYPTO_LZO=m CONFIG_CRYPTO_842=m CONFIG_CRYPTO_LZ4=m CONFIG_CRYPTO_LZ4HC=m +CONFIG_CRYPTO_ZSTD=m CONFIG_CRYPTO_ANSI_CPRNG=m CONFIG_CRYPTO_DRBG_HASH=y CONFIG_CRYPTO_DRBG_CTR=y diff --git a/arch/m68k/configs/q40_defconfig b/arch/m68k/configs/q40_defconfig index b3a543dc48a0..a35d10ee10cb 100644 --- a/arch/m68k/configs/q40_defconfig +++ b/arch/m68k/configs/q40_defconfig @@ -48,6 +48,7 @@ CONFIG_UNIX_DIAG=m CONFIG_TLS=m CONFIG_XFRM_MIGRATE=y CONFIG_NET_KEY=y +CONFIG_XDP_SOCKETS=y CONFIG_INET=y CONFIG_IP_PNP=y CONFIG_IP_PNP_DHCP=y @@ -94,18 +95,14 @@ CONFIG_NF_CONNTRACK_SANE=m CONFIG_NF_CONNTRACK_SIP=m CONFIG_NF_CONNTRACK_TFTP=m CONFIG_NF_TABLES=m +CONFIG_NF_TABLES_SET=m CONFIG_NF_TABLES_INET=y CONFIG_NF_TABLES_NETDEV=y -CONFIG_NFT_EXTHDR=m -CONFIG_NFT_META=m -CONFIG_NFT_RT=m CONFIG_NFT_NUMGEN=m CONFIG_NFT_CT=m CONFIG_NFT_FLOW_OFFLOAD=m -CONFIG_NFT_SET_RBTREE=m -CONFIG_NFT_SET_HASH=m -CONFIG_NFT_SET_BITMAP=m CONFIG_NFT_COUNTER=m +CONFIG_NFT_CONNLIMIT=m CONFIG_NFT_LOG=m CONFIG_NFT_LIMIT=m CONFIG_NFT_MASQ=m @@ -118,6 +115,7 @@ CONFIG_NFT_REJECT=m CONFIG_NFT_COMPAT=m CONFIG_NFT_HASH=m CONFIG_NFT_FIB_INET=m +CONFIG_NFT_SOCKET=m CONFIG_NFT_DUP_NETDEV=m CONFIG_NFT_FWD_NETDEV=m CONFIG_NFT_FIB_NETDEV=m @@ -196,7 +194,6 @@ CONFIG_IP_SET_HASH_NETPORT=m CONFIG_IP_SET_HASH_NETIFACE=m CONFIG_IP_SET_LIST_SET=m CONFIG_NF_CONNTRACK_IPV4=m -CONFIG_NF_SOCKET_IPV4=m CONFIG_NFT_CHAIN_ROUTE_IPV4=m CONFIG_NFT_DUP_IPV4=m CONFIG_NFT_FIB_IPV4=m @@ -227,7 +224,6 @@ CONFIG_IP_NF_ARPTABLES=m CONFIG_IP_NF_ARPFILTER=m CONFIG_IP_NF_ARP_MANGLE=m CONFIG_NF_CONNTRACK_IPV6=m -CONFIG_NF_SOCKET_IPV6=m CONFIG_NFT_CHAIN_ROUTE_IPV6=m CONFIG_NFT_CHAIN_NAT_IPV6=m CONFIG_NFT_MASQ_IPV6=m @@ -256,7 +252,6 @@ CONFIG_IP6_NF_NAT=m CONFIG_IP6_NF_TARGET_MASQUERADE=m CONFIG_IP6_NF_TARGET_NPT=m CONFIG_NF_TABLES_BRIDGE=y -CONFIG_NFT_BRIDGE_META=m CONFIG_NFT_BRIDGE_REJECT=m CONFIG_NF_LOG_BRIDGE=m CONFIG_BRIDGE_NF_EBTABLES=m @@ -297,6 +292,7 @@ CONFIG_6LOWPAN_GHC_EXT_HDR_FRAG=m CONFIG_6LOWPAN_GHC_EXT_HDR_ROUTE=m CONFIG_DNS_RESOLVER=y CONFIG_BATMAN_ADV=m +# CONFIG_BATMAN_ADV_BATMAN_V is not set CONFIG_BATMAN_ADV_DAT=y CONFIG_BATMAN_ADV_NC=y CONFIG_BATMAN_ADV_MCAST=y @@ -350,6 +346,7 @@ CONFIG_DM_UNSTRIPED=m CONFIG_DM_CRYPT=m CONFIG_DM_SNAPSHOT=m CONFIG_DM_THIN_PROVISIONING=m +CONFIG_DM_WRITECACHE=m CONFIG_DM_ERA=m CONFIG_DM_MIRROR=m CONFIG_DM_RAID=m @@ -388,8 +385,8 @@ CONFIG_VETH=m # CONFIG_NET_VENDOR_AMD is not set # CONFIG_NET_VENDOR_AQUANTIA is not set # CONFIG_NET_VENDOR_ARC is not set -# CONFIG_NET_CADENCE is not set # CONFIG_NET_VENDOR_BROADCOM is not set +# CONFIG_NET_CADENCE is not set # CONFIG_NET_VENDOR_CIRRUS is not set # CONFIG_NET_VENDOR_CORTINA is not set # CONFIG_NET_VENDOR_EZCHIP is not set @@ -398,6 +395,7 @@ CONFIG_VETH=m # CONFIG_NET_VENDOR_INTEL is not set # CONFIG_NET_VENDOR_MARVELL is not set # CONFIG_NET_VENDOR_MICREL is not set +# CONFIG_NET_VENDOR_MICROSEMI is not set # CONFIG_NET_VENDOR_NETRONOME is not set # CONFIG_NET_VENDOR_NI is not set CONFIG_NE2000=y @@ -410,9 +408,9 @@ CONFIG_NE2000=y # CONFIG_NET_VENDOR_SMSC is not set # CONFIG_NET_VENDOR_SOCIONEXT is not set # CONFIG_NET_VENDOR_STMICRO is not set +# CONFIG_NET_VENDOR_SYNOPSYS is not set # CONFIG_NET_VENDOR_VIA is not set # CONFIG_NET_VENDOR_WIZNET is not set -# 
CONFIG_NET_VENDOR_SYNOPSYS is not set CONFIG_PLIP=m CONFIG_PPP=m CONFIG_PPP_BSDCOMP=m @@ -455,6 +453,7 @@ CONFIG_HIDRAW=y CONFIG_UHID=m # CONFIG_HID_GENERIC is not set # CONFIG_HID_ITE is not set +# CONFIG_HID_REDRAGON is not set # CONFIG_USB_SUPPORT is not set CONFIG_RTC_CLASS=y # CONFIG_RTC_NVMEM is not set @@ -473,7 +472,7 @@ CONFIG_FS_ENCRYPTION=m CONFIG_FANOTIFY=y CONFIG_QUOTA_NETLINK_INTERFACE=y # CONFIG_PRINT_QUOTA_WARNING is not set -CONFIG_AUTOFS4_FS=m +CONFIG_AUTOFS_FS=m CONFIG_FUSE_FS=m CONFIG_CUSE=m CONFIG_OVERLAY_FS=m @@ -574,6 +573,7 @@ CONFIG_TEST_KSTRTOX=m CONFIG_TEST_PRINTF=m CONFIG_TEST_BITMAP=m CONFIG_TEST_UUID=m +CONFIG_TEST_OVERFLOW=m CONFIG_TEST_RHASHTABLE=m CONFIG_TEST_HASH=m CONFIG_TEST_USER_COPY=m @@ -596,6 +596,11 @@ CONFIG_CRYPTO_CRYPTD=m CONFIG_CRYPTO_MCRYPTD=m CONFIG_CRYPTO_TEST=m CONFIG_CRYPTO_CHACHA20POLY1305=m +CONFIG_CRYPTO_AEGIS128=m +CONFIG_CRYPTO_AEGIS128L=m +CONFIG_CRYPTO_AEGIS256=m +CONFIG_CRYPTO_MORUS640=m +CONFIG_CRYPTO_MORUS1280=m CONFIG_CRYPTO_CFB=m CONFIG_CRYPTO_LRW=m CONFIG_CRYPTO_PCBC=m @@ -631,6 +636,7 @@ CONFIG_CRYPTO_LZO=m CONFIG_CRYPTO_842=m CONFIG_CRYPTO_LZ4=m CONFIG_CRYPTO_LZ4HC=m +CONFIG_CRYPTO_ZSTD=m CONFIG_CRYPTO_ANSI_CPRNG=m CONFIG_CRYPTO_DRBG_HASH=y CONFIG_CRYPTO_DRBG_CTR=y diff --git a/arch/m68k/configs/sun3_defconfig b/arch/m68k/configs/sun3_defconfig index d543ed5dfa96..573bf922d448 100644 --- a/arch/m68k/configs/sun3_defconfig +++ b/arch/m68k/configs/sun3_defconfig @@ -45,6 +45,7 @@ CONFIG_UNIX_DIAG=m CONFIG_TLS=m CONFIG_XFRM_MIGRATE=y CONFIG_NET_KEY=y +CONFIG_XDP_SOCKETS=y CONFIG_INET=y CONFIG_IP_PNP=y CONFIG_IP_PNP_DHCP=y @@ -91,18 +92,14 @@ CONFIG_NF_CONNTRACK_SANE=m CONFIG_NF_CONNTRACK_SIP=m CONFIG_NF_CONNTRACK_TFTP=m CONFIG_NF_TABLES=m +CONFIG_NF_TABLES_SET=m CONFIG_NF_TABLES_INET=y CONFIG_NF_TABLES_NETDEV=y -CONFIG_NFT_EXTHDR=m -CONFIG_NFT_META=m -CONFIG_NFT_RT=m CONFIG_NFT_NUMGEN=m CONFIG_NFT_CT=m CONFIG_NFT_FLOW_OFFLOAD=m -CONFIG_NFT_SET_RBTREE=m -CONFIG_NFT_SET_HASH=m -CONFIG_NFT_SET_BITMAP=m CONFIG_NFT_COUNTER=m +CONFIG_NFT_CONNLIMIT=m CONFIG_NFT_LOG=m CONFIG_NFT_LIMIT=m CONFIG_NFT_MASQ=m @@ -115,6 +112,7 @@ CONFIG_NFT_REJECT=m CONFIG_NFT_COMPAT=m CONFIG_NFT_HASH=m CONFIG_NFT_FIB_INET=m +CONFIG_NFT_SOCKET=m CONFIG_NFT_DUP_NETDEV=m CONFIG_NFT_FWD_NETDEV=m CONFIG_NFT_FIB_NETDEV=m @@ -193,7 +191,6 @@ CONFIG_IP_SET_HASH_NETPORT=m CONFIG_IP_SET_HASH_NETIFACE=m CONFIG_IP_SET_LIST_SET=m CONFIG_NF_CONNTRACK_IPV4=m -CONFIG_NF_SOCKET_IPV4=m CONFIG_NFT_CHAIN_ROUTE_IPV4=m CONFIG_NFT_DUP_IPV4=m CONFIG_NFT_FIB_IPV4=m @@ -224,7 +221,6 @@ CONFIG_IP_NF_ARPTABLES=m CONFIG_IP_NF_ARPFILTER=m CONFIG_IP_NF_ARP_MANGLE=m CONFIG_NF_CONNTRACK_IPV6=m -CONFIG_NF_SOCKET_IPV6=m CONFIG_NFT_CHAIN_ROUTE_IPV6=m CONFIG_NFT_CHAIN_NAT_IPV6=m CONFIG_NFT_MASQ_IPV6=m @@ -253,7 +249,6 @@ CONFIG_IP6_NF_NAT=m CONFIG_IP6_NF_TARGET_MASQUERADE=m CONFIG_IP6_NF_TARGET_NPT=m CONFIG_NF_TABLES_BRIDGE=y -CONFIG_NFT_BRIDGE_META=m CONFIG_NFT_BRIDGE_REJECT=m CONFIG_NF_LOG_BRIDGE=m CONFIG_BRIDGE_NF_EBTABLES=m @@ -294,6 +289,7 @@ CONFIG_6LOWPAN_GHC_EXT_HDR_FRAG=m CONFIG_6LOWPAN_GHC_EXT_HDR_ROUTE=m CONFIG_DNS_RESOLVER=y CONFIG_BATMAN_ADV=m +# CONFIG_BATMAN_ADV_BATMAN_V is not set CONFIG_BATMAN_ADV_DAT=y CONFIG_BATMAN_ADV_NC=y CONFIG_BATMAN_ADV_MCAST=y @@ -341,6 +337,7 @@ CONFIG_DM_UNSTRIPED=m CONFIG_DM_CRYPT=m CONFIG_DM_SNAPSHOT=m CONFIG_DM_THIN_PROVISIONING=m +CONFIG_DM_WRITECACHE=m CONFIG_DM_ERA=m CONFIG_DM_MIRROR=m CONFIG_DM_RAID=m @@ -385,6 +382,7 @@ CONFIG_SUN3LANCE=y CONFIG_SUN3_82586=y # CONFIG_NET_VENDOR_MARVELL is not set # CONFIG_NET_VENDOR_MICREL is not set +# 
CONFIG_NET_VENDOR_MICROSEMI is not set # CONFIG_NET_VENDOR_NATSEMI is not set # CONFIG_NET_VENDOR_NETRONOME is not set # CONFIG_NET_VENDOR_NI is not set @@ -397,9 +395,9 @@ CONFIG_SUN3_82586=y # CONFIG_NET_VENDOR_SOCIONEXT is not set # CONFIG_NET_VENDOR_STMICRO is not set # CONFIG_NET_VENDOR_SUN is not set +# CONFIG_NET_VENDOR_SYNOPSYS is not set # CONFIG_NET_VENDOR_VIA is not set # CONFIG_NET_VENDOR_WIZNET is not set -# CONFIG_NET_VENDOR_SYNOPSYS is not set CONFIG_PPP=m CONFIG_PPP_BSDCOMP=m CONFIG_PPP_DEFLATE=m @@ -435,6 +433,7 @@ CONFIG_HIDRAW=y CONFIG_UHID=m # CONFIG_HID_GENERIC is not set # CONFIG_HID_ITE is not set +# CONFIG_HID_REDRAGON is not set # CONFIG_USB_SUPPORT is not set CONFIG_RTC_CLASS=y # CONFIG_RTC_NVMEM is not set @@ -452,7 +451,7 @@ CONFIG_FS_ENCRYPTION=m CONFIG_FANOTIFY=y CONFIG_QUOTA_NETLINK_INTERFACE=y # CONFIG_PRINT_QUOTA_WARNING is not set -CONFIG_AUTOFS4_FS=m +CONFIG_AUTOFS_FS=m CONFIG_FUSE_FS=m CONFIG_CUSE=m CONFIG_OVERLAY_FS=m @@ -553,6 +552,7 @@ CONFIG_TEST_KSTRTOX=m CONFIG_TEST_PRINTF=m CONFIG_TEST_BITMAP=m CONFIG_TEST_UUID=m +CONFIG_TEST_OVERFLOW=m CONFIG_TEST_RHASHTABLE=m CONFIG_TEST_HASH=m CONFIG_TEST_USER_COPY=m @@ -574,6 +574,11 @@ CONFIG_CRYPTO_CRYPTD=m CONFIG_CRYPTO_MCRYPTD=m CONFIG_CRYPTO_TEST=m CONFIG_CRYPTO_CHACHA20POLY1305=m +CONFIG_CRYPTO_AEGIS128=m +CONFIG_CRYPTO_AEGIS128L=m +CONFIG_CRYPTO_AEGIS256=m +CONFIG_CRYPTO_MORUS640=m +CONFIG_CRYPTO_MORUS1280=m CONFIG_CRYPTO_CFB=m CONFIG_CRYPTO_LRW=m CONFIG_CRYPTO_PCBC=m @@ -609,6 +614,7 @@ CONFIG_CRYPTO_LZO=m CONFIG_CRYPTO_842=m CONFIG_CRYPTO_LZ4=m CONFIG_CRYPTO_LZ4HC=m +CONFIG_CRYPTO_ZSTD=m CONFIG_CRYPTO_ANSI_CPRNG=m CONFIG_CRYPTO_DRBG_HASH=y CONFIG_CRYPTO_DRBG_CTR=y diff --git a/arch/m68k/configs/sun3x_defconfig b/arch/m68k/configs/sun3x_defconfig index a67e54246023..efb27a7fcc55 100644 --- a/arch/m68k/configs/sun3x_defconfig +++ b/arch/m68k/configs/sun3x_defconfig @@ -45,6 +45,7 @@ CONFIG_UNIX_DIAG=m CONFIG_TLS=m CONFIG_XFRM_MIGRATE=y CONFIG_NET_KEY=y +CONFIG_XDP_SOCKETS=y CONFIG_INET=y CONFIG_IP_PNP=y CONFIG_IP_PNP_DHCP=y @@ -91,18 +92,14 @@ CONFIG_NF_CONNTRACK_SANE=m CONFIG_NF_CONNTRACK_SIP=m CONFIG_NF_CONNTRACK_TFTP=m CONFIG_NF_TABLES=m +CONFIG_NF_TABLES_SET=m CONFIG_NF_TABLES_INET=y CONFIG_NF_TABLES_NETDEV=y -CONFIG_NFT_EXTHDR=m -CONFIG_NFT_META=m -CONFIG_NFT_RT=m CONFIG_NFT_NUMGEN=m CONFIG_NFT_CT=m CONFIG_NFT_FLOW_OFFLOAD=m -CONFIG_NFT_SET_RBTREE=m -CONFIG_NFT_SET_HASH=m -CONFIG_NFT_SET_BITMAP=m CONFIG_NFT_COUNTER=m +CONFIG_NFT_CONNLIMIT=m CONFIG_NFT_LOG=m CONFIG_NFT_LIMIT=m CONFIG_NFT_MASQ=m @@ -115,6 +112,7 @@ CONFIG_NFT_REJECT=m CONFIG_NFT_COMPAT=m CONFIG_NFT_HASH=m CONFIG_NFT_FIB_INET=m +CONFIG_NFT_SOCKET=m CONFIG_NFT_DUP_NETDEV=m CONFIG_NFT_FWD_NETDEV=m CONFIG_NFT_FIB_NETDEV=m @@ -193,7 +191,6 @@ CONFIG_IP_SET_HASH_NETPORT=m CONFIG_IP_SET_HASH_NETIFACE=m CONFIG_IP_SET_LIST_SET=m CONFIG_NF_CONNTRACK_IPV4=m -CONFIG_NF_SOCKET_IPV4=m CONFIG_NFT_CHAIN_ROUTE_IPV4=m CONFIG_NFT_DUP_IPV4=m CONFIG_NFT_FIB_IPV4=m @@ -224,7 +221,6 @@ CONFIG_IP_NF_ARPTABLES=m CONFIG_IP_NF_ARPFILTER=m CONFIG_IP_NF_ARP_MANGLE=m CONFIG_NF_CONNTRACK_IPV6=m -CONFIG_NF_SOCKET_IPV6=m CONFIG_NFT_CHAIN_ROUTE_IPV6=m CONFIG_NFT_CHAIN_NAT_IPV6=m CONFIG_NFT_MASQ_IPV6=m @@ -253,7 +249,6 @@ CONFIG_IP6_NF_NAT=m CONFIG_IP6_NF_TARGET_MASQUERADE=m CONFIG_IP6_NF_TARGET_NPT=m CONFIG_NF_TABLES_BRIDGE=y -CONFIG_NFT_BRIDGE_META=m CONFIG_NFT_BRIDGE_REJECT=m CONFIG_NF_LOG_BRIDGE=m CONFIG_BRIDGE_NF_EBTABLES=m @@ -294,6 +289,7 @@ CONFIG_6LOWPAN_GHC_EXT_HDR_FRAG=m CONFIG_6LOWPAN_GHC_EXT_HDR_ROUTE=m CONFIG_DNS_RESOLVER=y CONFIG_BATMAN_ADV=m +# 
CONFIG_BATMAN_ADV_BATMAN_V is not set CONFIG_BATMAN_ADV_DAT=y CONFIG_BATMAN_ADV_NC=y CONFIG_BATMAN_ADV_MCAST=y @@ -341,6 +337,7 @@ CONFIG_DM_UNSTRIPED=m CONFIG_DM_CRYPT=m CONFIG_DM_SNAPSHOT=m CONFIG_DM_THIN_PROVISIONING=m +CONFIG_DM_WRITECACHE=m CONFIG_DM_ERA=m CONFIG_DM_MIRROR=m CONFIG_DM_RAID=m @@ -378,14 +375,15 @@ CONFIG_VETH=m CONFIG_SUN3LANCE=y # CONFIG_NET_VENDOR_AQUANTIA is not set # CONFIG_NET_VENDOR_ARC is not set -# CONFIG_NET_CADENCE is not set # CONFIG_NET_VENDOR_BROADCOM is not set +# CONFIG_NET_CADENCE is not set # CONFIG_NET_VENDOR_CORTINA is not set # CONFIG_NET_VENDOR_EZCHIP is not set # CONFIG_NET_VENDOR_HUAWEI is not set # CONFIG_NET_VENDOR_INTEL is not set # CONFIG_NET_VENDOR_MARVELL is not set # CONFIG_NET_VENDOR_MICREL is not set +# CONFIG_NET_VENDOR_MICROSEMI is not set # CONFIG_NET_VENDOR_NATSEMI is not set # CONFIG_NET_VENDOR_NETRONOME is not set # CONFIG_NET_VENDOR_NI is not set @@ -397,9 +395,9 @@ CONFIG_SUN3LANCE=y # CONFIG_NET_VENDOR_SOLARFLARE is not set # CONFIG_NET_VENDOR_SOCIONEXT is not set # CONFIG_NET_VENDOR_STMICRO is not set +# CONFIG_NET_VENDOR_SYNOPSYS is not set # CONFIG_NET_VENDOR_VIA is not set # CONFIG_NET_VENDOR_WIZNET is not set -# CONFIG_NET_VENDOR_SYNOPSYS is not set CONFIG_PPP=m CONFIG_PPP_BSDCOMP=m CONFIG_PPP_DEFLATE=m @@ -435,6 +433,7 @@ CONFIG_HIDRAW=y CONFIG_UHID=m # CONFIG_HID_GENERIC is not set # CONFIG_HID_ITE is not set +# CONFIG_HID_REDRAGON is not set # CONFIG_USB_SUPPORT is not set CONFIG_RTC_CLASS=y # CONFIG_RTC_NVMEM is not set @@ -452,7 +451,7 @@ CONFIG_FS_ENCRYPTION=m CONFIG_FANOTIFY=y CONFIG_QUOTA_NETLINK_INTERFACE=y # CONFIG_PRINT_QUOTA_WARNING is not set -CONFIG_AUTOFS4_FS=m +CONFIG_AUTOFS_FS=m CONFIG_FUSE_FS=m CONFIG_CUSE=m CONFIG_OVERLAY_FS=m @@ -553,6 +552,7 @@ CONFIG_TEST_KSTRTOX=m CONFIG_TEST_PRINTF=m CONFIG_TEST_BITMAP=m CONFIG_TEST_UUID=m +CONFIG_TEST_OVERFLOW=m CONFIG_TEST_RHASHTABLE=m CONFIG_TEST_HASH=m CONFIG_TEST_USER_COPY=m @@ -575,6 +575,11 @@ CONFIG_CRYPTO_CRYPTD=m CONFIG_CRYPTO_MCRYPTD=m CONFIG_CRYPTO_TEST=m CONFIG_CRYPTO_CHACHA20POLY1305=m +CONFIG_CRYPTO_AEGIS128=m +CONFIG_CRYPTO_AEGIS128L=m +CONFIG_CRYPTO_AEGIS256=m +CONFIG_CRYPTO_MORUS640=m +CONFIG_CRYPTO_MORUS1280=m CONFIG_CRYPTO_CFB=m CONFIG_CRYPTO_LRW=m CONFIG_CRYPTO_PCBC=m @@ -610,6 +615,7 @@ CONFIG_CRYPTO_LZO=m CONFIG_CRYPTO_842=m CONFIG_CRYPTO_LZ4=m CONFIG_CRYPTO_LZ4HC=m +CONFIG_CRYPTO_ZSTD=m CONFIG_CRYPTO_ANSI_CPRNG=m CONFIG_CRYPTO_DRBG_HASH=y CONFIG_CRYPTO_DRBG_CTR=y diff --git a/arch/m68k/include/asm/Kbuild b/arch/m68k/include/asm/Kbuild index 4d8d68c4e3dd..a4b8d3331a9e 100644 --- a/arch/m68k/include/asm/Kbuild +++ b/arch/m68k/include/asm/Kbuild @@ -1,6 +1,7 @@ generic-y += barrier.h generic-y += compat.h generic-y += device.h +generic-y += dma-mapping.h generic-y += emergency-restart.h generic-y += exec.h generic-y += extable.h diff --git a/arch/m68k/include/asm/atomic.h b/arch/m68k/include/asm/atomic.h index e993e2860ee1..47228b0d4163 100644 --- a/arch/m68k/include/asm/atomic.h +++ b/arch/m68k/include/asm/atomic.h @@ -126,11 +126,13 @@ static inline void atomic_inc(atomic_t *v) { __asm__ __volatile__("addql #1,%0" : "+m" (*v)); } +#define atomic_inc atomic_inc static inline void atomic_dec(atomic_t *v) { __asm__ __volatile__("subql #1,%0" : "+m" (*v)); } +#define atomic_dec atomic_dec static inline int atomic_dec_and_test(atomic_t *v) { @@ -138,6 +140,7 @@ static inline int atomic_dec_and_test(atomic_t *v) __asm__ __volatile__("subql #1,%1; seq %0" : "=d" (c), "+m" (*v)); return c != 0; } +#define atomic_dec_and_test atomic_dec_and_test static 
inline int atomic_dec_and_test_lt(atomic_t *v) { @@ -155,6 +158,7 @@ static inline int atomic_inc_and_test(atomic_t *v) __asm__ __volatile__("addql #1,%1; seq %0" : "=d" (c), "+m" (*v)); return c != 0; } +#define atomic_inc_and_test atomic_inc_and_test #ifdef CONFIG_RMW_INSNS @@ -190,9 +194,6 @@ static inline int atomic_xchg(atomic_t *v, int new) #endif /* !CONFIG_RMW_INSNS */ -#define atomic_dec_return(v) atomic_sub_return(1, (v)) -#define atomic_inc_return(v) atomic_add_return(1, (v)) - static inline int atomic_sub_and_test(int i, atomic_t *v) { char c; @@ -201,6 +202,7 @@ static inline int atomic_sub_and_test(int i, atomic_t *v) : ASM_DI (i)); return c != 0; } +#define atomic_sub_and_test atomic_sub_and_test static inline int atomic_add_negative(int i, atomic_t *v) { @@ -210,20 +212,6 @@ static inline int atomic_add_negative(int i, atomic_t *v) : ASM_DI (i)); return c != 0; } - -static __inline__ int __atomic_add_unless(atomic_t *v, int a, int u) -{ - int c, old; - c = atomic_read(v); - for (;;) { - if (unlikely(c == (u))) - break; - old = atomic_cmpxchg((v), c, c + (a)); - if (likely(old == c)) - break; - c = old; - } - return c; -} +#define atomic_add_negative atomic_add_negative #endif /* __ARCH_M68K_ATOMIC __ */ diff --git a/arch/m68k/include/asm/bitops.h b/arch/m68k/include/asm/bitops.h index 93b47b1f6fb4..d979f38af751 100644 --- a/arch/m68k/include/asm/bitops.h +++ b/arch/m68k/include/asm/bitops.h @@ -454,7 +454,7 @@ static inline unsigned long ffz(unsigned long word) */ #if (defined(__mcfisaaplus__) || defined(__mcfisac__)) && \ !defined(CONFIG_M68000) && !defined(CONFIG_MCPU32) -static inline int __ffs(int x) +static inline unsigned long __ffs(unsigned long x) { __asm__ __volatile__ ("bitrev %0; ff1 %0" : "=d" (x) @@ -493,7 +493,11 @@ static inline int ffs(int x) : "dm" (x & -x)); return 32 - cnt; } -#define __ffs(x) (ffs(x) - 1) + +static inline unsigned long __ffs(unsigned long x) +{ + return ffs(x) - 1; +} /* * fls: find last bit set. 
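The #define atomic_inc atomic_inc additions above are how an architecture tells the generic atomic headers which operations it implements itself: the 4.19-era fallbacks in include/linux/atomic.h are each wrapped in an #ifndef on the operation's own name, so the define suppresses the generic version. A minimal sketch of one such guard (the real header covers far more operations than this):

    #ifndef atomic_inc
    /* synthesized only when the arch did not provide (and #define) its own */
    #define atomic_inc(v)	atomic_add(1, (v))
    #endif

The bitops change in the same spirit makes the __ffs()/ffs() relationship explicit: ffs() is 1-based and __ffs() 0-based, so for x = 0x8, ffs(x) is 4 and __ffs(x) is 3, which is exactly the ffs(x) - 1 the new m68k __ffs() returns, now with the conventional unsigned long type.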
@@ -515,12 +519,16 @@ static inline int __fls(int x) #endif +/* Simple test-and-set bit locks */ +#define test_and_set_bit_lock test_and_set_bit +#define clear_bit_unlock clear_bit +#define __clear_bit_unlock clear_bit_unlock + #include #include #include #include #include -#include #endif /* __KERNEL__ */ #endif /* _M68K_BITOPS_H */ diff --git a/arch/m68k/include/asm/dma-mapping.h b/arch/m68k/include/asm/dma-mapping.h deleted file mode 100644 index e3722ed04fbb..000000000000 --- a/arch/m68k/include/asm/dma-mapping.h +++ /dev/null @@ -1,12 +0,0 @@ -/* SPDX-License-Identifier: GPL-2.0 */ -#ifndef _M68K_DMA_MAPPING_H -#define _M68K_DMA_MAPPING_H - -extern const struct dma_map_ops m68k_dma_ops; - -static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus) -{ - return &m68k_dma_ops; -} - -#endif /* _M68K_DMA_MAPPING_H */ diff --git a/arch/m68k/include/asm/io.h b/arch/m68k/include/asm/io.h index ca2849afb087..aabe6420ead2 100644 --- a/arch/m68k/include/asm/io.h +++ b/arch/m68k/include/asm/io.h @@ -1,6 +1,13 @@ /* SPDX-License-Identifier: GPL-2.0 */ +#ifndef _M68K_IO_H +#define _M68K_IO_H + #if defined(__uClinux__) || defined(CONFIG_COLDFIRE) #include #else #include #endif + +#include + +#endif /* _M68K_IO_H */ diff --git a/arch/m68k/include/asm/io_mm.h b/arch/m68k/include/asm/io_mm.h index fe485f4f5fac..782b78f8a048 100644 --- a/arch/m68k/include/asm/io_mm.h +++ b/arch/m68k/include/asm/io_mm.h @@ -16,13 +16,11 @@ * isa_readX(),isa_writeX() are for ISA memory */ -#ifndef _IO_H -#define _IO_H +#ifndef _M68K_IO_MM_H +#define _M68K_IO_MM_H #ifdef __KERNEL__ -#define ARCH_HAS_IOREMAP_WT - #include #include #include @@ -369,40 +367,6 @@ static inline void isa_delay(void) #define writew(val, addr) out_le16((addr), (val)) #endif /* CONFIG_ATARI_ROM_ISA */ -#if !defined(CONFIG_ISA) && !defined(CONFIG_ATARI_ROM_ISA) -/* - * We need to define dummy functions for GENERIC_IOMAP support. - */ -#define inb(port) 0xff -#define inb_p(port) 0xff -#define outb(val,port) ((void)0) -#define outb_p(val,port) ((void)0) -#define inw(port) 0xffff -#define inw_p(port) 0xffff -#define outw(val,port) ((void)0) -#define outw_p(val,port) ((void)0) -#define inl(port) 0xffffffffUL -#define inl_p(port) 0xffffffffUL -#define outl(val,port) ((void)0) -#define outl_p(val,port) ((void)0) - -#define insb(port,buf,nr) ((void)0) -#define outsb(port,buf,nr) ((void)0) -#define insw(port,buf,nr) ((void)0) -#define outsw(port,buf,nr) ((void)0) -#define insl(port,buf,nr) ((void)0) -#define outsl(port,buf,nr) ((void)0) - -/* - * These should be valid on any ioremap()ed region - */ -#define readb(addr) in_8(addr) -#define writeb(val,addr) out_8((addr),(val)) -#define readw(addr) in_le16(addr) -#define writew(val,addr) out_le16((addr),(val)) - -#endif /* !CONFIG_ISA && !CONFIG_ATARI_ROM_ISA */ - #define readl(addr) in_le32(addr) #define writel(val,addr) out_le32((addr),(val)) @@ -444,4 +408,4 @@ static inline void isa_delay(void) #define writew_relaxed(b, addr) writew(b, addr) #define writel_relaxed(b, addr) writel(b, addr) -#endif /* _IO_H */ +#endif /* _M68K_IO_MM_H */ diff --git a/arch/m68k/include/asm/io_no.h b/arch/m68k/include/asm/io_no.h index 83a0a6d449f4..0498192e1d98 100644 --- a/arch/m68k/include/asm/io_no.h +++ b/arch/m68k/include/asm/io_no.h @@ -131,19 +131,7 @@ static inline void writel(u32 value, volatile void __iomem *addr) #define PCI_SPACE_LIMIT PCI_IO_MASK #endif /* CONFIG_PCI */ -/* - * These are defined in kmap.h as static inline functions. 
To maintain - * previous behavior we put these define guards here so io_mm.h doesn't - * see them. - */ -#ifdef CONFIG_MMU -#define memset_io memset_io -#define memcpy_fromio memcpy_fromio -#define memcpy_toio memcpy_toio -#endif - #include #include -#include #endif /* _M68KNOMMU_IO_H */ diff --git a/arch/m68k/include/asm/kmap.h b/arch/m68k/include/asm/kmap.h index 84b8333db8ad..aac7f045f7f0 100644 --- a/arch/m68k/include/asm/kmap.h +++ b/arch/m68k/include/asm/kmap.h @@ -4,6 +4,8 @@ #ifdef CONFIG_MMU +#define ARCH_HAS_IOREMAP_WT + /* Values for nocacheflag and cmode */ #define IOMAP_FULL_CACHING 0 #define IOMAP_NOCACHE_SER 1 @@ -16,6 +18,7 @@ */ extern void __iomem *__ioremap(unsigned long physaddr, unsigned long size, int cacheflag); +#define iounmap iounmap extern void iounmap(void __iomem *addr); extern void __iounmap(void *addr, unsigned long size); @@ -33,31 +36,35 @@ static inline void __iomem *ioremap_nocache(unsigned long physaddr, } #define ioremap_uc ioremap_nocache +#define ioremap_wt ioremap_wt static inline void __iomem *ioremap_wt(unsigned long physaddr, unsigned long size) { return __ioremap(physaddr, size, IOMAP_WRITETHROUGH); } -#define ioremap_fillcache ioremap_fullcache +#define ioremap_fullcache ioremap_fullcache static inline void __iomem *ioremap_fullcache(unsigned long physaddr, unsigned long size) { return __ioremap(physaddr, size, IOMAP_FULL_CACHING); } +#define memset_io memset_io static inline void memset_io(volatile void __iomem *addr, unsigned char val, int count) { __builtin_memset((void __force *) addr, val, count); } +#define memcpy_fromio memcpy_fromio static inline void memcpy_fromio(void *dst, const volatile void __iomem *src, int count) { __builtin_memcpy(dst, (void __force *) src, count); } +#define memcpy_toio memcpy_toio static inline void memcpy_toio(volatile void __iomem *dst, const void *src, int count) { diff --git a/arch/m68k/include/asm/machdep.h b/arch/m68k/include/asm/machdep.h index 1605da48ebf2..49bd3266b4b1 100644 --- a/arch/m68k/include/asm/machdep.h +++ b/arch/m68k/include/asm/machdep.h @@ -22,7 +22,6 @@ extern int (*mach_hwclk)(int, struct rtc_time*); extern unsigned int (*mach_get_ss)(void); extern int (*mach_get_rtc_pll)(struct rtc_pll_info *); extern int (*mach_set_rtc_pll)(struct rtc_pll_info *); -extern int (*mach_set_clock_mmss)(unsigned long); extern void (*mach_reset)( void ); extern void (*mach_halt)( void ); extern void (*mach_power_off)( void ); diff --git a/arch/m68k/include/asm/macintosh.h b/arch/m68k/include/asm/macintosh.h index 9b840c03ebb7..08cee11180e6 100644 --- a/arch/m68k/include/asm/macintosh.h +++ b/arch/m68k/include/asm/macintosh.h @@ -57,7 +57,6 @@ struct mac_model #define MAC_SCSI_IIFX 5 #define MAC_SCSI_DUO 6 #define MAC_SCSI_LC 7 -#define MAC_SCSI_LATE 8 #define MAC_IDE_NONE 0 #define MAC_IDE_QUADRA 1 diff --git a/arch/m68k/include/asm/mcf_pgalloc.h b/arch/m68k/include/asm/mcf_pgalloc.h index 8b707c249026..12fe700632f4 100644 --- a/arch/m68k/include/asm/mcf_pgalloc.h +++ b/arch/m68k/include/asm/mcf_pgalloc.h @@ -44,6 +44,7 @@ extern inline pmd_t *pmd_alloc_kernel(pgd_t *pgd, unsigned long address) static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t page, unsigned long address) { + pgtable_page_dtor(page); __free_page(page); } @@ -74,8 +75,9 @@ static inline struct page *pte_alloc_one(struct mm_struct *mm, return page; } -extern inline void pte_free(struct mm_struct *mm, struct page *page) +static inline void pte_free(struct mm_struct *mm, struct page *page) { + pgtable_page_dtor(page); 
__free_page(page); } diff --git a/arch/m68k/include/asm/page_no.h b/arch/m68k/include/asm/page_no.h index e644c4daf540..6bbe52025de3 100644 --- a/arch/m68k/include/asm/page_no.h +++ b/arch/m68k/include/asm/page_no.h @@ -18,7 +18,7 @@ extern unsigned long memory_end; #define __HAVE_ARCH_ALLOC_ZEROED_USER_HIGHPAGE #define __pa(vaddr) ((unsigned long)(vaddr)) -#define __va(paddr) ((void *)(paddr)) +#define __va(paddr) ((void *)((unsigned long)(paddr))) #define virt_to_pfn(kaddr) (__pa(kaddr) >> PAGE_SHIFT) #define pfn_to_virt(pfn) __va((pfn) << PAGE_SHIFT) diff --git a/arch/m68k/kernel/dma.c b/arch/m68k/kernel/dma.c index 463572c4943f..e99993c57d6b 100644 --- a/arch/m68k/kernel/dma.c +++ b/arch/m68k/kernel/dma.c @@ -6,7 +6,7 @@ #undef DEBUG -#include +#include #include #include #include @@ -19,7 +19,7 @@ #if defined(CONFIG_MMU) && !defined(CONFIG_COLDFIRE) -static void *m68k_dma_alloc(struct device *dev, size_t size, dma_addr_t *handle, +void *arch_dma_alloc(struct device *dev, size_t size, dma_addr_t *handle, gfp_t flag, unsigned long attrs) { struct page *page, **map; @@ -62,7 +62,7 @@ static void *m68k_dma_alloc(struct device *dev, size_t size, dma_addr_t *handle, return addr; } -static void m68k_dma_free(struct device *dev, size_t size, void *addr, +void arch_dma_free(struct device *dev, size_t size, void *addr, dma_addr_t handle, unsigned long attrs) { pr_debug("dma_free_coherent: %p, %x\n", addr, handle); @@ -73,8 +73,8 @@ static void m68k_dma_free(struct device *dev, size_t size, void *addr, #include -static void *m68k_dma_alloc(struct device *dev, size_t size, - dma_addr_t *dma_handle, gfp_t gfp, unsigned long attrs) +void *arch_dma_alloc(struct device *dev, size_t size, dma_addr_t *dma_handle, + gfp_t gfp, unsigned long attrs) { void *ret; @@ -89,7 +89,7 @@ static void *m68k_dma_alloc(struct device *dev, size_t size, return ret; } -static void m68k_dma_free(struct device *dev, size_t size, void *vaddr, +void arch_dma_free(struct device *dev, size_t size, void *vaddr, dma_addr_t dma_handle, unsigned long attrs) { free_pages((unsigned long)vaddr, get_order(size)); @@ -97,8 +97,8 @@ static void m68k_dma_free(struct device *dev, size_t size, void *vaddr, #endif /* CONFIG_MMU && !CONFIG_COLDFIRE */ -static void m68k_dma_sync_single_for_device(struct device *dev, - dma_addr_t handle, size_t size, enum dma_data_direction dir) +void arch_sync_dma_for_device(struct device *dev, phys_addr_t handle, + size_t size, enum dma_data_direction dir) { switch (dir) { case DMA_BIDIRECTIONAL: @@ -115,58 +115,6 @@ static void m68k_dma_sync_single_for_device(struct device *dev, } } -static void m68k_dma_sync_sg_for_device(struct device *dev, - struct scatterlist *sglist, int nents, enum dma_data_direction dir) -{ - int i; - struct scatterlist *sg; - - for_each_sg(sglist, sg, nents, i) { - dma_sync_single_for_device(dev, sg->dma_address, sg->length, - dir); - } -} - -static dma_addr_t m68k_dma_map_page(struct device *dev, struct page *page, - unsigned long offset, size_t size, enum dma_data_direction dir, - unsigned long attrs) -{ - dma_addr_t handle = page_to_phys(page) + offset; - - if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC)) - dma_sync_single_for_device(dev, handle, size, dir); - - return handle; -} - -static int m68k_dma_map_sg(struct device *dev, struct scatterlist *sglist, - int nents, enum dma_data_direction dir, unsigned long attrs) -{ - int i; - struct scatterlist *sg; - - for_each_sg(sglist, sg, nents, i) { - sg->dma_address = sg_phys(sg); - - if (attrs & DMA_ATTR_SKIP_CPU_SYNC) - continue; - - 
dma_sync_single_for_device(dev, sg->dma_address, sg->length, - dir); - } - return nents; -} - -const struct dma_map_ops m68k_dma_ops = { - .alloc = m68k_dma_alloc, - .free = m68k_dma_free, - .map_page = m68k_dma_map_page, - .map_sg = m68k_dma_map_sg, - .sync_single_for_device = m68k_dma_sync_single_for_device, - .sync_sg_for_device = m68k_dma_sync_sg_for_device, -}; -EXPORT_SYMBOL(m68k_dma_ops); - void arch_setup_pdev_archdata(struct platform_device *pdev) { if (pdev->dev.coherent_dma_mask == DMA_MASK_NONE && diff --git a/arch/m68k/kernel/setup_mm.c b/arch/m68k/kernel/setup_mm.c index f35e3ebd6331..5d3596c180f9 100644 --- a/arch/m68k/kernel/setup_mm.c +++ b/arch/m68k/kernel/setup_mm.c @@ -21,6 +21,7 @@ #include #include #include +#include #include #include #include @@ -88,7 +89,6 @@ void (*mach_get_hardware_list) (struct seq_file *m); /* machine dependent timer functions */ int (*mach_hwclk) (int, struct rtc_time*); EXPORT_SYMBOL(mach_hwclk); -int (*mach_set_clock_mmss) (unsigned long); unsigned int (*mach_get_ss)(void); int (*mach_get_rtc_pll)(struct rtc_pll_info *); int (*mach_set_rtc_pll)(struct rtc_pll_info *); @@ -165,6 +165,8 @@ static void __init m68k_parse_bootinfo(const struct bi_record *record) be32_to_cpu(m->addr); m68k_memory[m68k_num_memory].size = be32_to_cpu(m->size); + memblock_add(m68k_memory[m68k_num_memory].addr, + m68k_memory[m68k_num_memory].size); m68k_num_memory++; } else pr_warn("%s: too many memory chunks\n", @@ -224,10 +226,6 @@ static void __init m68k_parse_bootinfo(const struct bi_record *record) void __init setup_arch(char **cmdline_p) { -#ifndef CONFIG_SUN3 - int i; -#endif - /* The bootinfo is located right after the kernel */ if (!CPU_IS_COLDFIRE) m68k_parse_bootinfo((const struct bi_record *)_end); @@ -356,14 +354,9 @@ void __init setup_arch(char **cmdline_p) #endif #ifndef CONFIG_SUN3 - for (i = 1; i < m68k_num_memory; i++) - free_bootmem_node(NODE_DATA(i), m68k_memory[i].addr, - m68k_memory[i].size); #ifdef CONFIG_BLK_DEV_INITRD if (m68k_ramdisk.size) { - reserve_bootmem_node(__virt_to_node(phys_to_virt(m68k_ramdisk.addr)), - m68k_ramdisk.addr, m68k_ramdisk.size, - BOOTMEM_DEFAULT); + memblock_reserve(m68k_ramdisk.addr, m68k_ramdisk.size); initrd_start = (unsigned long)phys_to_virt(m68k_ramdisk.addr); initrd_end = initrd_start + m68k_ramdisk.size; pr_info("initrd: %08lx - %08lx\n", initrd_start, initrd_end); diff --git a/arch/m68k/kernel/setup_no.c b/arch/m68k/kernel/setup_no.c index a98af1018201..cfd5475bfc31 100644 --- a/arch/m68k/kernel/setup_no.c +++ b/arch/m68k/kernel/setup_no.c @@ -28,6 +28,7 @@ #include #include #include +#include #include #include #include @@ -51,7 +52,6 @@ char __initdata command_line[COMMAND_LINE_SIZE]; /* machine dependent timer functions */ void (*mach_sched_init)(irq_handler_t handler) __initdata = NULL; -int (*mach_set_clock_mmss)(unsigned long); int (*mach_hwclk) (int, struct rtc_time*); /* machine dependent reboot functions */ @@ -86,8 +86,6 @@ void (*mach_power_off)(void); void __init setup_arch(char **cmdline_p) { - int bootmap_size; - memory_start = PAGE_ALIGN(_ramstart); memory_end = _ramend; @@ -142,6 +140,8 @@ void __init setup_arch(char **cmdline_p) pr_debug("MEMORY -> ROMFS=0x%p-0x%06lx MEM=0x%06lx-0x%06lx\n ", __bss_stop, memory_start, memory_start, memory_end); + memblock_add(memory_start, memory_end - memory_start); + /* Keep a copy of command line */ *cmdline_p = &command_line[0]; memcpy(boot_command_line, command_line, COMMAND_LINE_SIZE); @@ -158,23 +158,10 @@ void __init setup_arch(char **cmdline_p) 
min_low_pfn = PFN_DOWN(memory_start); max_pfn = max_low_pfn = PFN_DOWN(memory_end); - bootmap_size = init_bootmem_node( - NODE_DATA(0), - min_low_pfn, /* map goes here */ - PFN_DOWN(PAGE_OFFSET), - max_pfn); - /* - * Free the usable memory, we have to make sure we do not free - * the bootmem bitmap so we then reserve it after freeing it :-) - */ - free_bootmem(memory_start, memory_end - memory_start); - reserve_bootmem(memory_start, bootmap_size, BOOTMEM_DEFAULT); - #if defined(CONFIG_UBOOT) && defined(CONFIG_BLK_DEV_INITRD) if ((initrd_start > 0) && (initrd_start < initrd_end) && (initrd_end < memory_end)) - reserve_bootmem(initrd_start, initrd_end - initrd_start, - BOOTMEM_DEFAULT); + memblock_reserve(initrd_start, initrd_end - initrd_start); #endif /* if defined(CONFIG_BLK_DEV_INITRD) */ /* diff --git a/arch/m68k/mac/config.c b/arch/m68k/mac/config.c index e522307db47c..b02d7254b73a 100644 --- a/arch/m68k/mac/config.c +++ b/arch/m68k/mac/config.c @@ -57,7 +57,6 @@ static unsigned long mac_orig_videoaddr; /* Mac specific timer functions */ extern u32 mac_gettimeoffset(void); extern int mac_hwclk(int, struct rtc_time *); -extern int mac_set_clock_mmss(unsigned long); extern void iop_preinit(void); extern void iop_init(void); extern void via_init(void); @@ -158,7 +157,6 @@ void __init config_mac(void) mach_get_model = mac_get_model; arch_gettimeoffset = mac_gettimeoffset; mach_hwclk = mac_hwclk; - mach_set_clock_mmss = mac_set_clock_mmss; mach_reset = mac_reset; mach_halt = mac_poweroff; mach_power_off = mac_poweroff; @@ -709,7 +707,7 @@ static struct mac_model mac_data_table[] = { .name = "PowerBook 520", .adb_type = MAC_ADB_PB2, .via_type = MAC_VIA_QUADRA, - .scsi_type = MAC_SCSI_LATE, + .scsi_type = MAC_SCSI_OLD, .scc_type = MAC_SCC_QUADRA, .ether_type = MAC_ETHER_SONIC, .floppy_type = MAC_FLOPPY_SWIM_ADDR2, @@ -943,18 +941,6 @@ static const struct resource mac_scsi_old_rsrc[] __initconst = { }, }; -static const struct resource mac_scsi_late_rsrc[] __initconst = { - { - .flags = IORESOURCE_IRQ, - .start = IRQ_MAC_SCSI, - .end = IRQ_MAC_SCSI, - }, { - .flags = IORESOURCE_MEM, - .start = 0x50010000, - .end = 0x50011FFF, - }, -}; - static const struct resource mac_scsi_ccl_rsrc[] __initconst = { { .flags = IORESOURCE_IRQ, @@ -1064,11 +1050,6 @@ int __init mac_platform_init(void) platform_device_register_simple("mac_scsi", 0, mac_scsi_old_rsrc, ARRAY_SIZE(mac_scsi_old_rsrc)); break; - case MAC_SCSI_LATE: - /* XXX PDMA support for PowerBook 500 series needs testing */ - platform_device_register_simple("mac_scsi", 0, - mac_scsi_late_rsrc, ARRAY_SIZE(mac_scsi_late_rsrc)); - break; case MAC_SCSI_LC: /* Addresses from Mac LC data in Designing Cards & Drivers 3ed. * Also from the Developer Notes for Classic II, LC III, diff --git a/arch/m68k/mac/misc.c b/arch/m68k/mac/misc.c index c68054361615..19e9d8eef1f2 100644 --- a/arch/m68k/mac/misc.c +++ b/arch/m68k/mac/misc.c @@ -26,33 +26,38 @@ #include -/* Offset between Unix time (1970-based) and Mac time (1904-based) */ +/* + * Offset between Unix time (1970-based) and Mac time (1904-based). Cuda and PMU + * times wrap in 2040. If we need to handle later times, the read_time functions + * need to be changed to interpret wrapped times as post-2040. 
+ */ #define RTC_OFFSET 2082844800 static void (*rom_reset)(void); #ifdef CONFIG_ADB_CUDA -static long cuda_read_time(void) +static time64_t cuda_read_time(void) { struct adb_request req; - long time; + time64_t time; if (cuda_request(&req, NULL, 2, CUDA_PACKET, CUDA_GET_TIME) < 0) return 0; while (!req.complete) cuda_poll(); - time = (req.reply[3] << 24) | (req.reply[4] << 16) | - (req.reply[5] << 8) | req.reply[6]; + time = (u32)((req.reply[3] << 24) | (req.reply[4] << 16) | + (req.reply[5] << 8) | req.reply[6]); + return time - RTC_OFFSET; } -static void cuda_write_time(long data) +static void cuda_write_time(time64_t time) { struct adb_request req; + u32 data = lower_32_bits(time + RTC_OFFSET); - data += RTC_OFFSET; if (cuda_request(&req, NULL, 6, CUDA_PACKET, CUDA_SET_TIME, (data >> 24) & 0xFF, (data >> 16) & 0xFF, (data >> 8) & 0xFF, data & 0xFF) < 0) @@ -86,26 +91,27 @@ static void cuda_write_pram(int offset, __u8 data) #endif /* CONFIG_ADB_CUDA */ #ifdef CONFIG_ADB_PMU68K -static long pmu_read_time(void) +static time64_t pmu_read_time(void) { struct adb_request req; - long time; + time64_t time; if (pmu_request(&req, NULL, 1, PMU_READ_RTC) < 0) return 0; while (!req.complete) pmu_poll(); - time = (req.reply[1] << 24) | (req.reply[2] << 16) | - (req.reply[3] << 8) | req.reply[4]; + time = (u32)((req.reply[1] << 24) | (req.reply[2] << 16) | + (req.reply[3] << 8) | req.reply[4]); + return time - RTC_OFFSET; } -static void pmu_write_time(long data) +static void pmu_write_time(time64_t time) { struct adb_request req; + u32 data = lower_32_bits(time + RTC_OFFSET); - data += RTC_OFFSET; if (pmu_request(&req, NULL, 5, PMU_SET_RTC, (data >> 24) & 0xFF, (data >> 16) & 0xFF, (data >> 8) & 0xFF, data & 0xFF) < 0) @@ -245,11 +251,11 @@ static void via_write_pram(int offset, __u8 data) * is basically any machine with Mac II-style ADB. */ -static long via_read_time(void) +static time64_t via_read_time(void) { union { __u8 cdata[4]; - long idata; + __u32 idata; } result, last_result; int count = 1; @@ -270,7 +276,7 @@ static long via_read_time(void) via_pram_command(0x8D, &result.cdata[0]); if (result.idata == last_result.idata) - return result.idata - RTC_OFFSET; + return (time64_t)result.idata - RTC_OFFSET; if (++count > 10) break; @@ -278,8 +284,8 @@ static long via_read_time(void) last_result.idata = result.idata; } - pr_err("via_read_time: failed to read a stable value; got 0x%08lx then 0x%08lx\n", - last_result.idata, result.idata); + pr_err("%s: failed to read a stable value; got 0x%08x then 0x%08x\n", + __func__, last_result.idata, result.idata); return 0; } @@ -291,11 +297,11 @@ static long via_read_time(void) * is basically any machine with Mac II-style ADB. */ -static void via_write_time(long time) +static void via_write_time(time64_t time) { union { __u8 cdata[4]; - long idata; + __u32 idata; } data; __u8 temp; @@ -304,7 +310,7 @@ static void via_write_time(long time) temp = 0x55; via_pram_command(0x35, &temp); - data.idata = time + RTC_OFFSET; + data.idata = lower_32_bits(time + RTC_OFFSET); via_pram_command(0x01, &data.cdata[3]); via_pram_command(0x05, &data.cdata[2]); via_pram_command(0x09, &data.cdata[1]); @@ -585,12 +591,15 @@ void mac_reset(void) * This function translates seconds since 1970 into a proper date. * * Algorithm cribbed from glibc2.1, __offtime(). + * + * This is roughly same as rtc_time64_to_tm(), which we should probably + * use here, but it's only available when CONFIG_RTC_LIB is enabled. 
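The long-to-time64_t conversions in this file follow the usual y2038 recipe: widen the 32-bit hardware value to time64_t before rebasing it from the 1904 epoch, and truncate explicitly with lower_32_bits() only when writing back to the register. A sketch of the two directions under that assumption (the mac_rtc_* names are illustrative, not from the patch):

    #include <linux/kernel.h>	/* lower_32_bits() */
    #include <linux/time64.h>

    #define RTC_OFFSET 2082844800	/* seconds from 1904-01-01 to 1970-01-01 */

    static time64_t mac_rtc_to_time64(u32 reg)
    {
    	return (time64_t)reg - RTC_OFFSET;	/* widen, then rebase */
    }

    static u32 time64_to_mac_rtc(time64_t t)
    {
    	return lower_32_bits(t + RTC_OFFSET);	/* explicit truncation */
    }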
*/ #define SECS_PER_MINUTE (60) #define SECS_PER_HOUR (SECS_PER_MINUTE * 60) #define SECS_PER_DAY (SECS_PER_HOUR * 24) -static void unmktime(unsigned long time, long offset, +static void unmktime(time64_t time, long offset, int *yearp, int *monp, int *dayp, int *hourp, int *minp, int *secp) { @@ -602,11 +611,10 @@ static void unmktime(unsigned long time, long offset, /* Leap years. */ { 0, 31, 60, 91, 121, 152, 182, 213, 244, 274, 305, 335, 366 } }; - long int days, rem, y, wday, yday; + int days, rem, y, wday, yday; const unsigned short int *ip; - days = time / SECS_PER_DAY; - rem = time % SECS_PER_DAY; + days = div_u64_rem(time, SECS_PER_DAY, &rem); rem += offset; while (rem < 0) { rem += SECS_PER_DAY; @@ -657,7 +665,7 @@ static void unmktime(unsigned long time, long offset, int mac_hwclk(int op, struct rtc_time *t) { - unsigned long now; + time64_t now; if (!op) { /* read */ switch (macintosh_config->adb_type) { @@ -693,8 +701,8 @@ int mac_hwclk(int op, struct rtc_time *t) __func__, t->tm_year + 1900, t->tm_mon + 1, t->tm_mday, t->tm_hour, t->tm_min, t->tm_sec); - now = mktime(t->tm_year + 1900, t->tm_mon + 1, t->tm_mday, - t->tm_hour, t->tm_min, t->tm_sec); + now = mktime64(t->tm_year + 1900, t->tm_mon + 1, t->tm_mday, + t->tm_hour, t->tm_min, t->tm_sec); switch (macintosh_config->adb_type) { case MAC_ADB_IOP: @@ -719,19 +727,3 @@ int mac_hwclk(int op, struct rtc_time *t) } return 0; } - -/* - * Set minutes/seconds in the hardware clock - */ - -int mac_set_clock_mmss (unsigned long nowtime) -{ - struct rtc_time now; - - mac_hwclk(0, &now); - now.tm_sec = nowtime % 60; - now.tm_min = (nowtime / 60) % 60; - mac_hwclk(1, &now); - - return 0; -} diff --git a/arch/m68k/mm/init.c b/arch/m68k/mm/init.c index 8827b7f91402..38e2b272c220 100644 --- a/arch/m68k/mm/init.c +++ b/arch/m68k/mm/init.c @@ -71,7 +71,6 @@ void __init m68k_setup_node(int node) pg_data_table[i] = pg_data_map + node; } #endif - pg_data_map[node].bdata = bootmem_node_data + node; node_set_online(node); } diff --git a/arch/m68k/mm/mcfmmu.c b/arch/m68k/mm/mcfmmu.c index 2925d795d71a..70dde040779b 100644 --- a/arch/m68k/mm/mcfmmu.c +++ b/arch/m68k/mm/mcfmmu.c @@ -14,6 +14,7 @@ #include #include #include +#include #include #include @@ -153,31 +154,31 @@ int cf_tlb_miss(struct pt_regs *regs, int write, int dtlb, int extension_word) void __init cf_bootmem_alloc(void) { - unsigned long start_pfn; unsigned long memstart; /* _rambase and _ramend will be naturally page aligned */ m68k_memory[0].addr = _rambase; m68k_memory[0].size = _ramend - _rambase; + memblock_add(m68k_memory[0].addr, m68k_memory[0].size); + /* compute total pages in system */ num_pages = PFN_DOWN(_ramend - _rambase); /* page numbers */ memstart = PAGE_ALIGN(_ramstart); min_low_pfn = PFN_DOWN(_rambase); - start_pfn = PFN_DOWN(memstart); max_pfn = max_low_pfn = PFN_DOWN(_ramend); high_memory = (void *)_ramend; + /* Reserve kernel text/data/bss */ + memblock_reserve(memstart, memstart - _rambase); + m68k_virt_to_node_shift = fls(_ramend - 1) - 6; module_fixup(NULL, __start_fixup, __stop_fixup); - /* setup bootmem data */ + /* setup node data */ m68k_setup_node(0); - memstart += init_bootmem_node(NODE_DATA(0), start_pfn, - min_low_pfn, max_low_pfn); - free_bootmem_node(NODE_DATA(0), memstart, _ramend - memstart); } /* diff --git a/arch/m68k/mm/motorola.c b/arch/m68k/mm/motorola.c index e490ecc7842c..4e17ecb5928a 100644 --- a/arch/m68k/mm/motorola.c +++ b/arch/m68k/mm/motorola.c @@ -19,6 +19,7 @@ #include #include #include +#include #include #include @@ -208,7 +209,7 
@@ void __init paging_init(void) { unsigned long zones_size[MAX_NR_ZONES] = { 0, }; unsigned long min_addr, max_addr; - unsigned long addr, size, end; + unsigned long addr; int i; #ifdef DEBUG @@ -253,34 +254,20 @@ void __init paging_init(void) min_low_pfn = availmem >> PAGE_SHIFT; max_pfn = max_low_pfn = max_addr >> PAGE_SHIFT; - for (i = 0; i < m68k_num_memory; i++) { - addr = m68k_memory[i].addr; - end = addr + m68k_memory[i].size; - m68k_setup_node(i); - availmem = PAGE_ALIGN(availmem); - availmem += init_bootmem_node(NODE_DATA(i), - availmem >> PAGE_SHIFT, - addr >> PAGE_SHIFT, - end >> PAGE_SHIFT); - } + /* Reserve kernel text/data/bss and the memory allocated in head.S */ + memblock_reserve(m68k_memory[0].addr, availmem - m68k_memory[0].addr); /* * Map the physical memory available into the kernel virtual - * address space. First initialize the bootmem allocator with - * the memory we already mapped, so map_node() has something - * to allocate. + * address space. Make sure memblock will not try to allocate + * pages beyond the memory we already mapped in head.S */ - addr = m68k_memory[0].addr; - size = m68k_memory[0].size; - free_bootmem_node(NODE_DATA(0), availmem, - min(m68k_init_mapped_size, size) - (availmem - addr)); - map_node(0); - if (size > m68k_init_mapped_size) - free_bootmem_node(NODE_DATA(0), addr + m68k_init_mapped_size, - size - m68k_init_mapped_size); - - for (i = 1; i < m68k_num_memory; i++) + memblock_set_bottom_up(true); + + for (i = 0; i < m68k_num_memory; i++) { + m68k_setup_node(i); map_node(i); + } flush_tlb_all(); diff --git a/arch/m68k/mvme147/config.c b/arch/m68k/mvme147/config.c index f8a710fd84cd..adea549d240e 100644 --- a/arch/m68k/mvme147/config.c +++ b/arch/m68k/mvme147/config.c @@ -40,7 +40,6 @@ static void mvme147_get_model(char *model); extern void mvme147_sched_init(irq_handler_t handler); extern u32 mvme147_gettimeoffset(void); extern int mvme147_hwclk (int, struct rtc_time *); -extern int mvme147_set_clock_mmss (unsigned long); extern void mvme147_reset (void); @@ -92,7 +91,6 @@ void __init config_mvme147(void) mach_init_IRQ = mvme147_init_IRQ; arch_gettimeoffset = mvme147_gettimeoffset; mach_hwclk = mvme147_hwclk; - mach_set_clock_mmss = mvme147_set_clock_mmss; mach_reset = mvme147_reset; mach_get_model = mvme147_get_model; @@ -164,8 +162,3 @@ int mvme147_hwclk(int op, struct rtc_time *t) } return 0; } - -int mvme147_set_clock_mmss (unsigned long nowtime) -{ - return 0; -} diff --git a/arch/m68k/mvme16x/config.c b/arch/m68k/mvme16x/config.c index 4ffd9ef98de4..6ee36a5b528d 100644 --- a/arch/m68k/mvme16x/config.c +++ b/arch/m68k/mvme16x/config.c @@ -46,7 +46,6 @@ static void mvme16x_get_model(char *model); extern void mvme16x_sched_init(irq_handler_t handler); extern u32 mvme16x_gettimeoffset(void); extern int mvme16x_hwclk (int, struct rtc_time *); -extern int mvme16x_set_clock_mmss (unsigned long); extern void mvme16x_reset (void); int bcd2int (unsigned char b); @@ -280,7 +279,6 @@ void __init config_mvme16x(void) mach_init_IRQ = mvme16x_init_IRQ; arch_gettimeoffset = mvme16x_gettimeoffset; mach_hwclk = mvme16x_hwclk; - mach_set_clock_mmss = mvme16x_set_clock_mmss; mach_reset = mvme16x_reset; mach_get_model = mvme16x_get_model; mach_get_hardware_list = mvme16x_get_hardware_list; @@ -411,9 +409,3 @@ int mvme16x_hwclk(int op, struct rtc_time *t) } return 0; } - -int mvme16x_set_clock_mmss (unsigned long nowtime) -{ - return 0; -} - diff --git a/arch/m68k/q40/config.c b/arch/m68k/q40/config.c index 71c0867ecf20..96810d91da2b 100644 --- 
a/arch/m68k/q40/config.c +++ b/arch/m68k/q40/config.c @@ -43,7 +43,6 @@ extern void q40_sched_init(irq_handler_t handler); static u32 q40_gettimeoffset(void); static int q40_hwclk(int, struct rtc_time *); static unsigned int q40_get_ss(void); -static int q40_set_clock_mmss(unsigned long); static int q40_get_rtc_pll(struct rtc_pll_info *pll); static int q40_set_rtc_pll(struct rtc_pll_info *pll); @@ -175,7 +174,6 @@ void __init config_q40(void) mach_get_ss = q40_get_ss; mach_get_rtc_pll = q40_get_rtc_pll; mach_set_rtc_pll = q40_set_rtc_pll; - mach_set_clock_mmss = q40_set_clock_mmss; mach_reset = q40_reset; mach_get_model = q40_get_model; @@ -267,34 +265,6 @@ static unsigned int q40_get_ss(void) return bcd2bin(Q40_RTC_SECS); } -/* - * Set the minutes and seconds from seconds value 'nowtime'. Fail if - * clock is out by > 30 minutes. Logic lifted from atari code. - */ - -static int q40_set_clock_mmss(unsigned long nowtime) -{ - int retval = 0; - short real_seconds = nowtime % 60, real_minutes = (nowtime / 60) % 60; - - int rtc_minutes; - - rtc_minutes = bcd2bin(Q40_RTC_MINS); - - if ((rtc_minutes < real_minutes ? - real_minutes - rtc_minutes : - rtc_minutes - real_minutes) < 30) { - Q40_RTC_CTRL |= Q40_RTC_WRITE; - Q40_RTC_MINS = bin2bcd(real_minutes); - Q40_RTC_SECS = bin2bcd(real_seconds); - Q40_RTC_CTRL &= ~(Q40_RTC_WRITE); - } else - retval = -1; - - return retval; -} - - /* get and set PLL calibration of RTC clock */ #define Q40_RTC_PLL_MASK ((1<<5)-1) #define Q40_RTC_PLL_SIGN (1<<5) diff --git a/arch/m68k/sun3/config.c b/arch/m68k/sun3/config.c index 1d28d380e8cc..79a2bb857906 100644 --- a/arch/m68k/sun3/config.c +++ b/arch/m68k/sun3/config.c @@ -123,10 +123,6 @@ static void __init sun3_bootmem_alloc(unsigned long memory_start, availmem = memory_start; m68k_setup_node(0); - availmem += init_bootmem(start_page, num_pages); - availmem = (availmem + (PAGE_SIZE-1)) & PAGE_MASK; - - free_bootmem(__pa(availmem), memory_end - (availmem)); } diff --git a/arch/microblaze/Kconfig.debug b/arch/microblaze/Kconfig.debug index 331a3bb66297..93a737c8d1a6 100644 --- a/arch/microblaze/Kconfig.debug +++ b/arch/microblaze/Kconfig.debug @@ -8,11 +8,4 @@ config TRACE_IRQFLAGS_SUPPORT source "lib/Kconfig.debug" -config HEART_BEAT - bool "Heart beat function for kernel" - default n - help - This option turns on/off heart beat kernel functionality. - First GPIO node is taken. - endmenu diff --git a/arch/microblaze/include/asm/cacheflush.h b/arch/microblaze/include/asm/cacheflush.h index ffea82a16d2c..b091de77b15b 100644 --- a/arch/microblaze/include/asm/cacheflush.h +++ b/arch/microblaze/include/asm/cacheflush.h @@ -19,7 +19,7 @@ #include #include -/* Look at Documentation/cachetlb.txt */ +/* Look at Documentation/core-api/cachetlb.rst */ /* * Cache handling functions. 
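A pattern running through all of the m68k setup changes above is the retirement of the bootmem allocator: instead of sizing and seeding a bootmem bitmap with init_bootmem_node() and then the free_bootmem()/reserve_bootmem() dance, early code now just declares what RAM exists and what is already occupied, and memblock handles allocation. A condensed sketch of the conversion, with placeholder bounds:

    #include <linux/memblock.h>

    static void __init register_boot_memory(phys_addr_t ram_start,
    					    phys_addr_t ram_end,
    					    phys_addr_t first_free)
    {
    	/* all usable RAM, as bootinfo/PROM reported it */
    	memblock_add(ram_start, ram_end - ram_start);

    	/*
    	 * Everything below first_free is taken (kernel text/data and
    	 * early allocations); an initrd is memblock_reserve()d the
    	 * same way, as in the setup_mm.c hunk above.
    	 */
    	memblock_reserve(ram_start, first_free - ram_start);
    }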
diff --git a/arch/microblaze/include/asm/setup.h b/arch/microblaze/include/asm/setup.h index d5384f6f36f7..ce9b7b786156 100644 --- a/arch/microblaze/include/asm/setup.h +++ b/arch/microblaze/include/asm/setup.h @@ -19,15 +19,10 @@ extern char cmd_line[COMMAND_LINE_SIZE]; extern char *klimit; -void microblaze_heartbeat(void); -void microblaze_setup_heartbeat(void); - # ifdef CONFIG_MMU extern void mmu_reset(void); # endif /* CONFIG_MMU */ -extern void of_platform_reset_gpio_probe(void); - void time_init(void); void init_IRQ(void); void machine_early_init(const char *cmdline, unsigned int ram, diff --git a/arch/microblaze/include/asm/unistd.h b/arch/microblaze/include/asm/unistd.h index 9774e1d9507b..a62d09420a47 100644 --- a/arch/microblaze/include/asm/unistd.h +++ b/arch/microblaze/include/asm/unistd.h @@ -38,6 +38,6 @@ #endif /* __ASSEMBLY__ */ -#define __NR_syscalls 399 +#define __NR_syscalls 401 #endif /* _ASM_MICROBLAZE_UNISTD_H */ diff --git a/arch/microblaze/include/uapi/asm/unistd.h b/arch/microblaze/include/uapi/asm/unistd.h index eb156f914793..7a9f16a76413 100644 --- a/arch/microblaze/include/uapi/asm/unistd.h +++ b/arch/microblaze/include/uapi/asm/unistd.h @@ -415,5 +415,7 @@ #define __NR_pkey_alloc 396 #define __NR_pkey_free 397 #define __NR_statx 398 +#define __NR_io_pgetevents 399 +#define __NR_rseq 400 #endif /* _UAPI_ASM_MICROBLAZE_UNISTD_H */ diff --git a/arch/microblaze/kernel/Makefile b/arch/microblaze/kernel/Makefile index 7e99cf6984a1..dd71637437f4 100644 --- a/arch/microblaze/kernel/Makefile +++ b/arch/microblaze/kernel/Makefile @@ -8,7 +8,6 @@ ifdef CONFIG_FUNCTION_TRACER CFLAGS_REMOVE_timer.o = -pg CFLAGS_REMOVE_intc.o = -pg CFLAGS_REMOVE_early_printk.o = -pg -CFLAGS_REMOVE_heartbeat.o = -pg CFLAGS_REMOVE_ftrace.o = -pg CFLAGS_REMOVE_process.o = -pg endif @@ -17,12 +16,11 @@ extra-y := head.o vmlinux.lds obj-y += dma.o exceptions.o \ hw_exception_handler.o irq.o \ - platform.o process.o prom.o ptrace.o \ + process.o prom.o ptrace.o \ reset.o setup.o signal.o sys_microblaze.o timer.o traps.o unwind.o obj-y += cpu/ -obj-$(CONFIG_HEART_BEAT) += heartbeat.o obj-$(CONFIG_MODULES) += microblaze_ksyms.o module.o obj-$(CONFIG_MMU) += misc.o obj-$(CONFIG_STACKTRACE) += stacktrace.o diff --git a/arch/microblaze/kernel/heartbeat.c b/arch/microblaze/kernel/heartbeat.c deleted file mode 100644 index 2022130139d2..000000000000 --- a/arch/microblaze/kernel/heartbeat.c +++ /dev/null @@ -1,72 +0,0 @@ -/* - * Copyright (C) 2007-2009 Michal Simek - * Copyright (C) 2007-2009 PetaLogix - * Copyright (C) 2006 Atmark Techno, Inc. - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. - */ - -#include -#include -#include - -#include -#include -#include - -static unsigned int base_addr; - -void microblaze_heartbeat(void) -{ - static unsigned int cnt, period, dist; - - if (base_addr) { - if (cnt == 0 || cnt == dist) - out_be32(base_addr, 1); - else if (cnt == 7 || cnt == dist + 7) - out_be32(base_addr, 0); - - if (++cnt > period) { - cnt = 0; - /* - * The hyperbolic function below modifies the heartbeat - * period length in dependency of the current (5min) - * load. It goes through the points f(0)=126, f(1)=86, - * f(5)=51, f(inf)->30. 
- */ - period = ((672 << FSHIFT) / (5 * avenrun[0] + - (7 << FSHIFT))) + 30; - dist = period / 4; - } - } -} - -void microblaze_setup_heartbeat(void) -{ - struct device_node *gpio = NULL; - int *prop; - int j; - const char * const gpio_list[] = { - "xlnx,xps-gpio-1.00.a", - NULL - }; - - for (j = 0; gpio_list[j] != NULL; j++) { - gpio = of_find_compatible_node(NULL, NULL, gpio_list[j]); - if (gpio) - break; - } - - if (gpio) { - base_addr = be32_to_cpup(of_get_property(gpio, "reg", NULL)); - base_addr = (unsigned long) ioremap(base_addr, PAGE_SIZE); - pr_notice("Heartbeat GPIO at 0x%x\n", base_addr); - - /* GPIO is configured as output */ - prop = (int *) of_get_property(gpio, "xlnx,is-bidir", NULL); - if (prop) - out_be32(base_addr + 4, 0); - } -} diff --git a/arch/microblaze/kernel/platform.c b/arch/microblaze/kernel/platform.c deleted file mode 100644 index 2540d60610d9..000000000000 --- a/arch/microblaze/kernel/platform.c +++ /dev/null @@ -1,29 +0,0 @@ -/* - * Copyright 2008 Michal Simek - * - * based on virtex.c file - * - * Copyright 2007 Secret Lab Technologies Ltd. - * - * This file is licensed under the terms of the GNU General Public License - * version 2. This program is licensed "as is" without any warranty of any - * kind, whether express or implied. - */ - -#include -#include -#include - -static struct of_device_id xilinx_of_bus_ids[] __initdata = { - { .compatible = "simple-bus", }, - { .compatible = "xlnx,compound", }, - {} -}; - -static int __init microblaze_device_probe(void) -{ - of_platform_bus_probe(NULL, xilinx_of_bus_ids, NULL); - of_platform_reset_gpio_probe(); - return 0; -} -device_initcall(microblaze_device_probe); diff --git a/arch/microblaze/kernel/reset.c b/arch/microblaze/kernel/reset.c index bab4c8330ef4..fcbe1daf6316 100644 --- a/arch/microblaze/kernel/reset.c +++ b/arch/microblaze/kernel/reset.c @@ -18,7 +18,7 @@ static int handle; /* reset pin handle */ static unsigned int reset_val; -void of_platform_reset_gpio_probe(void) +static int of_platform_reset_gpio_probe(void) { int ret; handle = of_get_named_gpio(of_find_node_by_path("/"), @@ -27,13 +27,13 @@ void of_platform_reset_gpio_probe(void) if (!gpio_is_valid(handle)) { pr_info("Skipping unavailable RESET gpio %d (%s)\n", handle, "reset"); - return; + return -ENODEV; } ret = gpio_request(handle, "reset"); if (ret < 0) { pr_info("GPIO pin is already allocated\n"); - return; + return ret; } /* get current setup value */ @@ -51,11 +51,12 @@ void of_platform_reset_gpio_probe(void) pr_info("RESET: Registered gpio device: %d, current val: %d\n", handle, reset_val); - return; + return 0; err: gpio_free(handle); - return; + return ret; } +device_initcall(of_platform_reset_gpio_probe); static void gpio_system_reset(void) diff --git a/arch/microblaze/kernel/syscall_table.S b/arch/microblaze/kernel/syscall_table.S index 56bcf313121f..6ab650593792 100644 --- a/arch/microblaze/kernel/syscall_table.S +++ b/arch/microblaze/kernel/syscall_table.S @@ -400,3 +400,5 @@ ENTRY(sys_call_table) .long sys_pkey_alloc .long sys_pkey_free .long sys_statx + .long sys_io_pgetevents + .long sys_rseq diff --git a/arch/microblaze/kernel/timer.c b/arch/microblaze/kernel/timer.c index 7de941cbbd94..a6683484b3a1 100644 --- a/arch/microblaze/kernel/timer.c +++ b/arch/microblaze/kernel/timer.c @@ -156,9 +156,6 @@ static inline void timer_ack(void) static irqreturn_t timer_interrupt(int irq, void *dev_id) { struct clock_event_device *evt = &clockevent_xilinx_timer; -#ifdef CONFIG_HEART_BEAT - microblaze_heartbeat(); -#endif timer_ack(); 
evt->event_handler(evt); return IRQ_HANDLED; @@ -318,10 +315,6 @@ static int __init xilinx_timer_init(struct device_node *timer) return ret; } -#ifdef CONFIG_HEART_BEAT - microblaze_setup_heartbeat(); -#endif - ret = xilinx_clocksource_init(); if (ret) return ret; diff --git a/arch/mips/Kconfig b/arch/mips/Kconfig index fe98e459a416..642a56e2a1ea 100644 --- a/arch/mips/Kconfig +++ b/arch/mips/Kconfig @@ -16,6 +16,7 @@ config MIPS select BUILDTIME_EXTABLE_SORT select CLONE_BACKWARDS select CPU_PM if CPU_IDLE + select DMA_DIRECT_OPS select GENERIC_ATOMIC64 if !64BIT select GENERIC_CLOCKEVENTS select GENERIC_CMOS_UPDATE @@ -41,7 +42,6 @@ config MIPS select HAVE_ARCH_TRANSPARENT_HUGEPAGE if CPU_SUPPORTS_HUGEPAGES && 64BIT select HAVE_CBPF_JIT if (!64BIT && !CPU_MICROMIPS) select HAVE_EBPF_JIT if (64BIT && !CPU_MICROMIPS) - select HAVE_CC_STACKPROTECTOR select HAVE_CONTEXT_TRACKING select HAVE_COPY_THREAD_TLS select HAVE_C_RECORDMCOUNT @@ -66,6 +66,8 @@ config MIPS select HAVE_OPROFILE select HAVE_PERF_EVENTS select HAVE_REGS_AND_STACK_ACCESS_API + select HAVE_RSEQ + select HAVE_STACKPROTECTOR select HAVE_SYSCALL_TRACEPOINTS select HAVE_VIRT_CPU_ACCOUNTING_GEN if 64BIT || !SMP select IRQ_FORCED_THREADING @@ -96,6 +98,7 @@ config MIPS_GENERIC select HW_HAS_PCI select IRQ_MIPS_CPU select LIBFDT + select MIPS_AUTO_PFN_OFFSET select MIPS_CPU_SCACHE select MIPS_GIC select MIPS_L1_CACHE_SHIFT_7 @@ -192,6 +195,7 @@ config ATH79 select CSRC_R4K select DMA_NONCOHERENT select GPIOLIB + select PINCTRL select HAVE_CLK select COMMON_CLK select CLKDEV_LOOKUP @@ -210,6 +214,8 @@ config ATH79 config BMIPS_GENERIC bool "Broadcom Generic BMIPS kernel" + select ARCH_HAS_SYNC_DMA_FOR_CPU_ALL + select ARCH_HAS_PHYS_TO_DMA select BOOT_RAW select NO_EXCEPT_FILL select USE_OF @@ -437,7 +443,6 @@ config MACH_LOONGSON32 config MACH_LOONGSON64 bool "Loongson-2/3 family of machines" - select ARCH_HAS_PHYS_TO_DMA select SYS_SUPPORTS_ZBOOT help This enables the support of Loongson-2/3 family of machines. 
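The scattered removals of select DMA_COHERENT and the new selects under DMA_NONCOHERENT are the MIPS side of the same conversion applied to m68k earlier in this series: the architecture stops registering a struct dma_map_ops and instead implements a few arch_* hooks that the generic dma-direct/dma-noncoherent code calls. Under the 4.19-era generic code, the per-arch surface reduces to roughly this sketch (the cache-maintenance body is a placeholder for the arch's own primitives):

    #include <linux/dma-noncoherent.h>

    /* invoked by generic dma-direct code before the device touches the buffer */
    void arch_sync_dma_for_device(struct device *dev, phys_addr_t paddr,
    			      size_t size, enum dma_data_direction dir)
    {
    	/* write back and/or invalidate CPU caches for [paddr, paddr + size),
    	 * as dir requires */
    }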
@@ -661,11 +666,11 @@ config SGI_IP22 config SGI_IP27 bool "SGI IP27 (Origin200/2000)" + select ARCH_HAS_PHYS_TO_DMA select FW_ARC select FW_ARC64 select BOOT_ELF64 select DEFAULT_SGI_PARTITION - select DMA_COHERENT select SYS_HAS_EARLY_PRINTK select HW_HAS_PCI select NR_CPUS_DEFAULT_64 @@ -720,6 +725,7 @@ config SGI_IP28 config SGI_IP32 bool "SGI IP32 (O2)" + select ARCH_HAS_PHYS_TO_DMA select FW_ARC select FW_ARC32 select BOOT_ELF32 @@ -742,7 +748,6 @@ config SGI_IP32 config SIBYTE_CRHINE bool "Sibyte BCM91120C-CRhine" select BOOT_ELF32 - select DMA_COHERENT select SIBYTE_BCM1120 select SWAP_IO_SPACE select SYS_HAS_CPU_SB1 @@ -752,7 +757,6 @@ config SIBYTE_CRHINE config SIBYTE_CARMEL bool "Sibyte BCM91120x-Carmel" select BOOT_ELF32 - select DMA_COHERENT select SIBYTE_BCM1120 select SWAP_IO_SPACE select SYS_HAS_CPU_SB1 @@ -762,7 +766,6 @@ config SIBYTE_CARMEL config SIBYTE_CRHONE bool "Sibyte BCM91125C-CRhone" select BOOT_ELF32 - select DMA_COHERENT select SIBYTE_BCM1125 select SWAP_IO_SPACE select SYS_HAS_CPU_SB1 @@ -773,7 +776,6 @@ config SIBYTE_CRHONE config SIBYTE_RHONE bool "Sibyte BCM91125E-Rhone" select BOOT_ELF32 - select DMA_COHERENT select SIBYTE_BCM1125H select SWAP_IO_SPACE select SYS_HAS_CPU_SB1 @@ -783,7 +785,6 @@ config SIBYTE_RHONE config SIBYTE_SWARM bool "Sibyte BCM91250A-SWARM" select BOOT_ELF32 - select DMA_COHERENT select HAVE_PATA_PLATFORM select SIBYTE_SB1250 select SWAP_IO_SPACE @@ -796,7 +797,6 @@ config SIBYTE_SWARM config SIBYTE_LITTLESUR bool "Sibyte BCM91250C2-LittleSur" select BOOT_ELF32 - select DMA_COHERENT select HAVE_PATA_PLATFORM select SIBYTE_SB1250 select SWAP_IO_SPACE @@ -808,7 +808,6 @@ config SIBYTE_LITTLESUR config SIBYTE_SENTOSA bool "Sibyte BCM91250E-Sentosa" select BOOT_ELF32 - select DMA_COHERENT select SIBYTE_SB1250 select SWAP_IO_SPACE select SYS_HAS_CPU_SB1 @@ -818,7 +817,6 @@ config SIBYTE_SENTOSA config SIBYTE_BIGSUR bool "Sibyte BCM91480B-BigSur" select BOOT_ELF32 - select DMA_COHERENT select NR_CPUS_DEFAULT_4 select SIBYTE_BCM1x80 select SWAP_IO_SPACE @@ -894,8 +892,8 @@ config CAVIUM_OCTEON_SOC bool "Cavium Networks Octeon SoC based boards" select CEVT_R4K select ARCH_HAS_PHYS_TO_DMA + select HAS_RAPIDIO select PHYS_ADDR_T_64BIT - select DMA_COHERENT select SYS_SUPPORTS_64BIT_KERNEL select SYS_SUPPORTS_BIG_ENDIAN select EDAC_SUPPORT @@ -944,7 +942,6 @@ config NLM_XLR_BOARD select PHYS_ADDR_T_64BIT select SYS_SUPPORTS_BIG_ENDIAN select SYS_SUPPORTS_HIGHMEM - select DMA_COHERENT select NR_CPUS_DEFAULT_32 select CEVT_R4K select CSRC_R4K @@ -972,7 +969,6 @@ config NLM_XLP_BOARD select SYS_SUPPORTS_BIG_ENDIAN select SYS_SUPPORTS_LITTLE_ENDIAN select SYS_SUPPORTS_HIGHMEM - select DMA_COHERENT select NR_CPUS_DEFAULT_32 select CEVT_R4K select CSRC_R4K @@ -991,7 +987,6 @@ config MIPS_PARAVIRT bool "Para-Virtualized guest system" select CEVT_R4K select CSRC_R4K - select DMA_COHERENT select SYS_SUPPORTS_64BIT_KERNEL select SYS_SUPPORTS_32BIT_KERNEL select SYS_SUPPORTS_BIG_ENDIAN @@ -1117,12 +1112,14 @@ config DMA_PERDEV_COHERENT bool select DMA_MAYBE_COHERENT -config DMA_COHERENT - bool - config DMA_NONCOHERENT bool + select ARCH_HAS_SYNC_DMA_FOR_DEVICE + select ARCH_HAS_SYNC_DMA_FOR_CPU select NEED_DMA_MAP_STATE + select DMA_NONCOHERENT_MMAP + select DMA_NONCOHERENT_CACHE_SYNC + select DMA_NONCOHERENT_OPS config SYS_HAS_EARLY_PRINTK bool @@ -1364,6 +1361,7 @@ choice config CPU_LOONGSON3 bool "Loongson 3 CPU" depends on SYS_HAS_CPU_LOONGSON3 + select ARCH_HAS_PHYS_TO_DMA select CPU_SUPPORTS_64BIT_KERNEL select CPU_SUPPORTS_HIGHMEM select 
CPU_SUPPORTS_HUGEPAGES @@ -1426,7 +1424,8 @@ config CPU_LOONGSON1B select LEDS_GPIO_REGISTER help The Loongson 1B is a 32-bit SoC, which implements the MIPS32 - release 2 instruction set. + Release 1 instruction set and part of the MIPS32 Release 2 + instruction set. config CPU_LOONGSON1C bool "Loongson 1C" @@ -1435,7 +1434,8 @@ config CPU_LOONGSON1C select LEDS_GPIO_REGISTER help The Loongson 1C is a 32-bit SoC, which implements the MIPS32 - release 2 instruction set. + Release 1 instruction set and part of the MIPS32 Release 2 + instruction set. config CPU_MIPS32_R1 bool "MIPS32 Release 1" @@ -1830,11 +1830,12 @@ config CPU_LOONGSON2 select CPU_SUPPORTS_64BIT_KERNEL select CPU_SUPPORTS_HIGHMEM select CPU_SUPPORTS_HUGEPAGES + select ARCH_HAS_PHYS_TO_DMA config CPU_LOONGSON1 bool select CPU_MIPS32 - select CPU_MIPSR2 + select CPU_MIPSR1 select CPU_HAS_PREFETCH select CPU_SUPPORTS_32BIT_KERNEL select CPU_SUPPORTS_HIGHMEM @@ -1978,12 +1979,6 @@ config SYS_HAS_CPU_XLR config SYS_HAS_CPU_XLP bool -config MIPS_MALTA_PM - depends on MIPS_MALTA - depends on PCI - bool - default y - # # CPU may reorder R->R, R->W, W->R, W->W # Reordering beyond LL and SC is handled in WEAK_REORDERING_BEYOND_LLSC @@ -2993,6 +2988,9 @@ config PGTABLE_LEVELS default 3 if 64BIT && !PAGE_SIZE_64KB default 2 +config MIPS_AUTO_PFN_OFFSET + bool + source "init/Kconfig" source "kernel/Kconfig.freezer" @@ -3114,10 +3112,13 @@ config ZONE_DMA32 source "drivers/pcmcia/Kconfig" +config HAS_RAPIDIO + bool + default n + config RAPIDIO tristate "RapidIO support" - depends on PCI - default n + depends on HAS_RAPIDIO || PCI help If you say Y here, the kernel will include drivers and infrastructure code to support RapidIO interconnect devices. diff --git a/arch/mips/Makefile b/arch/mips/Makefile index e2122cca4ae2..5425df002a6b 100644 --- a/arch/mips/Makefile +++ b/arch/mips/Makefile @@ -122,12 +122,22 @@ cflags-y += -ffreestanding # are used, so we kludge that here. A bug has been filed at # http://gcc.gnu.org/bugzilla/show_bug.cgi?id=29413. # +# clang doesn't suffer from these issues and our checks against -dumpmachine +# don't work so well when cross compiling, since without providing --target +# clang's output will be based upon the build machine. So for clang we simply +# unconditionally specify -EB or -EL as appropriate. 
+# +ifeq ($(cc-name),clang) +cflags-$(CONFIG_CPU_BIG_ENDIAN) += -EB +cflags-$(CONFIG_CPU_LITTLE_ENDIAN) += -EL +else undef-all += -UMIPSEB -U_MIPSEB -U__MIPSEB -U__MIPSEB__ undef-all += -UMIPSEL -U_MIPSEL -U__MIPSEL -U__MIPSEL__ predef-be += -DMIPSEB -D_MIPSEB -D__MIPSEB -D__MIPSEB__ predef-le += -DMIPSEL -D_MIPSEL -D__MIPSEL -D__MIPSEL__ cflags-$(CONFIG_CPU_BIG_ENDIAN) += $(shell $(CC) -dumpmachine |grep -q 'mips.*el-.*' && echo -EB $(undef-all) $(predef-be)) cflags-$(CONFIG_CPU_LITTLE_ENDIAN) += $(shell $(CC) -dumpmachine |grep -q 'mips.*el-.*' || echo -EL $(undef-all) $(predef-le)) +endif cflags-$(CONFIG_SB1XXX_CORELIS) += $(call cc-option,-mno-sched-prolog) \ -fno-omit-frame-pointer @@ -155,15 +165,11 @@ cflags-$(CONFIG_CPU_R4300) += -march=r4300 -Wa,--trap cflags-$(CONFIG_CPU_VR41XX) += -march=r4100 -Wa,--trap cflags-$(CONFIG_CPU_R4X00) += -march=r4600 -Wa,--trap cflags-$(CONFIG_CPU_TX49XX) += -march=r4600 -Wa,--trap -cflags-$(CONFIG_CPU_MIPS32_R1) += $(call cc-option,-march=mips32,-mips32 -U_MIPS_ISA -D_MIPS_ISA=_MIPS_ISA_MIPS32) \ - -Wa,-mips32 -Wa,--trap -cflags-$(CONFIG_CPU_MIPS32_R2) += $(call cc-option,-march=mips32r2,-mips32r2 -U_MIPS_ISA -D_MIPS_ISA=_MIPS_ISA_MIPS32) \ - -Wa,-mips32r2 -Wa,--trap +cflags-$(CONFIG_CPU_MIPS32_R1) += -march=mips32 -Wa,--trap +cflags-$(CONFIG_CPU_MIPS32_R2) += -march=mips32r2 -Wa,--trap cflags-$(CONFIG_CPU_MIPS32_R6) += -march=mips32r6 -Wa,--trap -modd-spreg -cflags-$(CONFIG_CPU_MIPS64_R1) += $(call cc-option,-march=mips64,-mips64 -U_MIPS_ISA -D_MIPS_ISA=_MIPS_ISA_MIPS64) \ - -Wa,-mips64 -Wa,--trap -cflags-$(CONFIG_CPU_MIPS64_R2) += $(call cc-option,-march=mips64r2,-mips64r2 -U_MIPS_ISA -D_MIPS_ISA=_MIPS_ISA_MIPS64) \ - -Wa,-mips64r2 -Wa,--trap +cflags-$(CONFIG_CPU_MIPS64_R1) += -march=mips64 -Wa,--trap +cflags-$(CONFIG_CPU_MIPS64_R2) += -march=mips64r2 -Wa,--trap cflags-$(CONFIG_CPU_MIPS64_R6) += -march=mips64r6 -Wa,--trap cflags-$(CONFIG_CPU_R5000) += -march=r5000 -Wa,--trap cflags-$(CONFIG_CPU_R5432) += $(call cc-option,-march=r5400,-march=r5000) \ diff --git a/arch/mips/alchemy/board-gpr.c b/arch/mips/alchemy/board-gpr.c index fa75d75b5ba9..ddff9a02513d 100644 --- a/arch/mips/alchemy/board-gpr.c +++ b/arch/mips/alchemy/board-gpr.c @@ -34,6 +34,7 @@ #include #include #include +#include #include #include #include @@ -60,7 +61,7 @@ void __init prom_init(void) add_memory_region(0, memsize, BOOT_MEM_RAM); } -void prom_putchar(unsigned char c) +void prom_putchar(char c) { alchemy_uart_putchar(AU1000_UART0_PHYS_ADDR, c); } diff --git a/arch/mips/alchemy/board-mtx1.c b/arch/mips/alchemy/board-mtx1.c index aab55aaf3d62..d625e6f99ae7 100644 --- a/arch/mips/alchemy/board-mtx1.c +++ b/arch/mips/alchemy/board-mtx1.c @@ -31,6 +31,7 @@ #include #include #include +#include #include #include #include @@ -58,7 +59,7 @@ void __init prom_init(void) add_memory_region(0, memsize, BOOT_MEM_RAM); } -void prom_putchar(unsigned char c) +void prom_putchar(char c) { alchemy_uart_putchar(AU1000_UART0_PHYS_ADDR, c); } diff --git a/arch/mips/alchemy/board-xxs1500.c b/arch/mips/alchemy/board-xxs1500.c index 0fc53e08a894..5f05b8714385 100644 --- a/arch/mips/alchemy/board-xxs1500.c +++ b/arch/mips/alchemy/board-xxs1500.c @@ -29,6 +29,7 @@ #include #include #include +#include #include #include @@ -55,7 +56,7 @@ void __init prom_init(void) add_memory_region(0, memsize, BOOT_MEM_RAM); } -void prom_putchar(unsigned char c) +void prom_putchar(char c) { alchemy_uart_putchar(AU1000_UART0_PHYS_ADDR, c); } diff --git a/arch/mips/alchemy/devboards/platform.c 
b/arch/mips/alchemy/devboards/platform.c index 203854ddd1bb..8d4b65c3268a 100644 --- a/arch/mips/alchemy/devboards/platform.c +++ b/arch/mips/alchemy/devboards/platform.c @@ -14,6 +14,7 @@ #include #include #include +#include #include #include @@ -36,7 +37,7 @@ void __init prom_init(void) add_memory_region(0, memsize, BOOT_MEM_RAM); } -void prom_putchar(unsigned char c) +void prom_putchar(char c) { if (alchemy_get_cputype() == ALCHEMY_CPU_AU1300) alchemy_uart_putchar(AU1300_UART2_PHYS_ADDR, c); diff --git a/arch/mips/ar7/clock.c b/arch/mips/ar7/clock.c index 0137656107a9..6b64fd96dba8 100644 --- a/arch/mips/ar7/clock.c +++ b/arch/mips/ar7/clock.c @@ -476,3 +476,32 @@ void __init ar7_init_clocks(void) /* adjust vbus clock rate */ vbus_clk.rate = bus_clk.rate / 2; } + +/* dummy functions, should not be called */ +long clk_round_rate(struct clk *clk, unsigned long rate) +{ + WARN_ON(clk); + return 0; +} +EXPORT_SYMBOL(clk_round_rate); + +int clk_set_rate(struct clk *clk, unsigned long rate) +{ + WARN_ON(clk); + return 0; +} +EXPORT_SYMBOL(clk_set_rate); + +int clk_set_parent(struct clk *clk, struct clk *parent) +{ + WARN_ON(clk); + return 0; +} +EXPORT_SYMBOL(clk_set_parent); + +struct clk *clk_get_parent(struct clk *clk) +{ + WARN_ON(clk); + return NULL; +} +EXPORT_SYMBOL(clk_get_parent); diff --git a/arch/mips/ar7/prom.c b/arch/mips/ar7/prom.c index dd53987a690f..2ec8d9ac91ec 100644 --- a/arch/mips/ar7/prom.c +++ b/arch/mips/ar7/prom.c @@ -25,6 +25,7 @@ #include #include #include +#include #include #include @@ -259,10 +260,9 @@ static inline void serial_out(int offset, int value) writel(value, (void *)PORT(offset)); } -int prom_putchar(char c) +void prom_putchar(char c) { while ((serial_in(UART_LSR) & UART_LSR_TEMT) == 0) ; serial_out(UART_TX, c); - return 1; } diff --git a/arch/mips/ath25/Kconfig b/arch/mips/ath25/Kconfig index 7070b4bcd01d..2c1dfd06c366 100644 --- a/arch/mips/ath25/Kconfig +++ b/arch/mips/ath25/Kconfig @@ -12,6 +12,7 @@ config SOC_AR2315 config PCI_AR2315 bool "Atheros AR2315 PCI controller support" depends on SOC_AR2315 + select ARCH_HAS_PHYS_TO_DMA select HW_HAS_PCI select PCI default y diff --git a/arch/mips/ath25/board.c b/arch/mips/ath25/board.c index 6d11ae581ea7..989e71015ee6 100644 --- a/arch/mips/ath25/board.c +++ b/arch/mips/ath25/board.c @@ -146,10 +146,10 @@ int __init ath25_find_config(phys_addr_t base, unsigned long size) pr_info("Fixing up empty mac addresses\n"); config->reset_config_gpio = 0xffff; config->sys_led_gpio = 0xffff; - random_ether_addr(config->wlan0_mac); + eth_random_addr(config->wlan0_mac); config->wlan0_mac[0] &= ~0x06; - random_ether_addr(config->enet0_mac); - random_ether_addr(config->enet1_mac); + eth_random_addr(config->enet0_mac); + eth_random_addr(config->enet1_mac); } } diff --git a/arch/mips/ath25/early_printk.c b/arch/mips/ath25/early_printk.c index 36035b628161..d534761e9cda 100644 --- a/arch/mips/ath25/early_printk.c +++ b/arch/mips/ath25/early_printk.c @@ -9,6 +9,7 @@ #include #include #include +#include #include "devices.h" #include "ar2315_regs.h" @@ -25,7 +26,7 @@ static inline unsigned char prom_uart_rr(void __iomem *base, unsigned reg) return __raw_readl(base + 4 * reg); } -void prom_putchar(unsigned char ch) +void prom_putchar(char ch) { static void __iomem *base; @@ -38,7 +39,7 @@ void prom_putchar(unsigned char ch) while ((prom_uart_rr(base, UART_LSR) & UART_LSR_THRE) == 0) ; - prom_uart_wr(base, UART_TX, ch); + prom_uart_wr(base, UART_TX, (unsigned char)ch); while ((prom_uart_rr(base, UART_LSR) & UART_LSR_THRE) == 0) ; 
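+	/* the second wait above ensures the character has left the
+	 * transmit-holding register before this early console returns */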
} diff --git a/arch/mips/ath79/clock.c b/arch/mips/ath79/clock.c index 6b1000b6a6a6..cf9158e3c2d9 100644 --- a/arch/mips/ath79/clock.c +++ b/arch/mips/ath79/clock.c @@ -355,6 +355,91 @@ static void __init ar934x_clocks_init(void) iounmap(dpll_base); } +static void __init qca953x_clocks_init(void) +{ + unsigned long ref_rate; + unsigned long cpu_rate; + unsigned long ddr_rate; + unsigned long ahb_rate; + u32 pll, out_div, ref_div, nint, frac, clk_ctrl, postdiv; + u32 cpu_pll, ddr_pll; + u32 bootstrap; + + bootstrap = ath79_reset_rr(QCA953X_RESET_REG_BOOTSTRAP); + if (bootstrap & QCA953X_BOOTSTRAP_REF_CLK_40) + ref_rate = 40 * 1000 * 1000; + else + ref_rate = 25 * 1000 * 1000; + + pll = ath79_pll_rr(QCA953X_PLL_CPU_CONFIG_REG); + out_div = (pll >> QCA953X_PLL_CPU_CONFIG_OUTDIV_SHIFT) & + QCA953X_PLL_CPU_CONFIG_OUTDIV_MASK; + ref_div = (pll >> QCA953X_PLL_CPU_CONFIG_REFDIV_SHIFT) & + QCA953X_PLL_CPU_CONFIG_REFDIV_MASK; + nint = (pll >> QCA953X_PLL_CPU_CONFIG_NINT_SHIFT) & + QCA953X_PLL_CPU_CONFIG_NINT_MASK; + frac = (pll >> QCA953X_PLL_CPU_CONFIG_NFRAC_SHIFT) & + QCA953X_PLL_CPU_CONFIG_NFRAC_MASK; + + cpu_pll = nint * ref_rate / ref_div; + cpu_pll += frac * (ref_rate >> 6) / ref_div; + cpu_pll /= (1 << out_div); + + pll = ath79_pll_rr(QCA953X_PLL_DDR_CONFIG_REG); + out_div = (pll >> QCA953X_PLL_DDR_CONFIG_OUTDIV_SHIFT) & + QCA953X_PLL_DDR_CONFIG_OUTDIV_MASK; + ref_div = (pll >> QCA953X_PLL_DDR_CONFIG_REFDIV_SHIFT) & + QCA953X_PLL_DDR_CONFIG_REFDIV_MASK; + nint = (pll >> QCA953X_PLL_DDR_CONFIG_NINT_SHIFT) & + QCA953X_PLL_DDR_CONFIG_NINT_MASK; + frac = (pll >> QCA953X_PLL_DDR_CONFIG_NFRAC_SHIFT) & + QCA953X_PLL_DDR_CONFIG_NFRAC_MASK; + + ddr_pll = nint * ref_rate / ref_div; + ddr_pll += frac * (ref_rate >> 6) / (ref_div << 4); + ddr_pll /= (1 << out_div); + + clk_ctrl = ath79_pll_rr(QCA953X_PLL_CLK_CTRL_REG); + + postdiv = (clk_ctrl >> QCA953X_PLL_CLK_CTRL_CPU_POST_DIV_SHIFT) & + QCA953X_PLL_CLK_CTRL_CPU_POST_DIV_MASK; + + if (clk_ctrl & QCA953X_PLL_CLK_CTRL_CPU_PLL_BYPASS) + cpu_rate = ref_rate; + else if (clk_ctrl & QCA953X_PLL_CLK_CTRL_CPUCLK_FROM_CPUPLL) + cpu_rate = cpu_pll / (postdiv + 1); + else + cpu_rate = ddr_pll / (postdiv + 1); + + postdiv = (clk_ctrl >> QCA953X_PLL_CLK_CTRL_DDR_POST_DIV_SHIFT) & + QCA953X_PLL_CLK_CTRL_DDR_POST_DIV_MASK; + + if (clk_ctrl & QCA953X_PLL_CLK_CTRL_DDR_PLL_BYPASS) + ddr_rate = ref_rate; + else if (clk_ctrl & QCA953X_PLL_CLK_CTRL_DDRCLK_FROM_DDRPLL) + ddr_rate = ddr_pll / (postdiv + 1); + else + ddr_rate = cpu_pll / (postdiv + 1); + + postdiv = (clk_ctrl >> QCA953X_PLL_CLK_CTRL_AHB_POST_DIV_SHIFT) & + QCA953X_PLL_CLK_CTRL_AHB_POST_DIV_MASK; + + if (clk_ctrl & QCA953X_PLL_CLK_CTRL_AHB_PLL_BYPASS) + ahb_rate = ref_rate; + else if (clk_ctrl & QCA953X_PLL_CLK_CTRL_AHBCLK_FROM_DDRPLL) + ahb_rate = ddr_pll / (postdiv + 1); + else + ahb_rate = cpu_pll / (postdiv + 1); + + ath79_add_sys_clkdev("ref", ref_rate); + ath79_add_sys_clkdev("cpu", cpu_rate); + ath79_add_sys_clkdev("ddr", ddr_rate); + ath79_add_sys_clkdev("ahb", ahb_rate); + + clk_add_alias("wdt", NULL, "ref", NULL); + clk_add_alias("uart", NULL, "ref", NULL); +} + static void __init qca955x_clocks_init(void) { unsigned long ref_rate; @@ -440,6 +525,110 @@ static void __init qca955x_clocks_init(void) clk_add_alias("uart", NULL, "ref", NULL); } +static void __init qca956x_clocks_init(void) +{ + unsigned long ref_rate; + unsigned long cpu_rate; + unsigned long ddr_rate; + unsigned long ahb_rate; + u32 pll, out_div, ref_div, nint, hfrac, lfrac, clk_ctrl, postdiv; + u32 cpu_pll, ddr_pll; + u32 bootstrap; + + 
/* + * QCA956x timer init workaround has to be applied right before setting + * up the clock. Else, there will be no jiffies + */ + u32 misc; + + misc = ath79_reset_rr(AR71XX_RESET_REG_MISC_INT_ENABLE); + misc |= MISC_INT_MIPS_SI_TIMERINT_MASK; + ath79_reset_wr(AR71XX_RESET_REG_MISC_INT_ENABLE, misc); + + bootstrap = ath79_reset_rr(QCA956X_RESET_REG_BOOTSTRAP); + if (bootstrap & QCA956X_BOOTSTRAP_REF_CLK_40) + ref_rate = 40 * 1000 * 1000; + else + ref_rate = 25 * 1000 * 1000; + + pll = ath79_pll_rr(QCA956X_PLL_CPU_CONFIG_REG); + out_div = (pll >> QCA956X_PLL_CPU_CONFIG_OUTDIV_SHIFT) & + QCA956X_PLL_CPU_CONFIG_OUTDIV_MASK; + ref_div = (pll >> QCA956X_PLL_CPU_CONFIG_REFDIV_SHIFT) & + QCA956X_PLL_CPU_CONFIG_REFDIV_MASK; + + pll = ath79_pll_rr(QCA956X_PLL_CPU_CONFIG1_REG); + nint = (pll >> QCA956X_PLL_CPU_CONFIG1_NINT_SHIFT) & + QCA956X_PLL_CPU_CONFIG1_NINT_MASK; + hfrac = (pll >> QCA956X_PLL_CPU_CONFIG1_NFRAC_H_SHIFT) & + QCA956X_PLL_CPU_CONFIG1_NFRAC_H_MASK; + lfrac = (pll >> QCA956X_PLL_CPU_CONFIG1_NFRAC_L_SHIFT) & + QCA956X_PLL_CPU_CONFIG1_NFRAC_L_MASK; + + cpu_pll = nint * ref_rate / ref_div; + cpu_pll += (lfrac * ref_rate) / ((ref_div * 25) << 13); + cpu_pll += (hfrac >> 13) * ref_rate / ref_div; + cpu_pll /= (1 << out_div); + + pll = ath79_pll_rr(QCA956X_PLL_DDR_CONFIG_REG); + out_div = (pll >> QCA956X_PLL_DDR_CONFIG_OUTDIV_SHIFT) & + QCA956X_PLL_DDR_CONFIG_OUTDIV_MASK; + ref_div = (pll >> QCA956X_PLL_DDR_CONFIG_REFDIV_SHIFT) & + QCA956X_PLL_DDR_CONFIG_REFDIV_MASK; + pll = ath79_pll_rr(QCA956X_PLL_DDR_CONFIG1_REG); + nint = (pll >> QCA956X_PLL_DDR_CONFIG1_NINT_SHIFT) & + QCA956X_PLL_DDR_CONFIG1_NINT_MASK; + hfrac = (pll >> QCA956X_PLL_DDR_CONFIG1_NFRAC_H_SHIFT) & + QCA956X_PLL_DDR_CONFIG1_NFRAC_H_MASK; + lfrac = (pll >> QCA956X_PLL_DDR_CONFIG1_NFRAC_L_SHIFT) & + QCA956X_PLL_DDR_CONFIG1_NFRAC_L_MASK; + + ddr_pll = nint * ref_rate / ref_div; + ddr_pll += (lfrac * ref_rate) / ((ref_div * 25) << 13); + ddr_pll += (hfrac >> 13) * ref_rate / ref_div; + ddr_pll /= (1 << out_div); + + clk_ctrl = ath79_pll_rr(QCA956X_PLL_CLK_CTRL_REG); + + postdiv = (clk_ctrl >> QCA956X_PLL_CLK_CTRL_CPU_POST_DIV_SHIFT) & + QCA956X_PLL_CLK_CTRL_CPU_POST_DIV_MASK; + + if (clk_ctrl & QCA956X_PLL_CLK_CTRL_CPU_PLL_BYPASS) + cpu_rate = ref_rate; + else if (clk_ctrl & QCA956X_PLL_CLK_CTRL_CPU_DDRCLK_FROM_CPUPLL) + cpu_rate = ddr_pll / (postdiv + 1); + else + cpu_rate = cpu_pll / (postdiv + 1); + + postdiv = (clk_ctrl >> QCA956X_PLL_CLK_CTRL_DDR_POST_DIV_SHIFT) & + QCA956X_PLL_CLK_CTRL_DDR_POST_DIV_MASK; + + if (clk_ctrl & QCA956X_PLL_CLK_CTRL_DDR_PLL_BYPASS) + ddr_rate = ref_rate; + else if (clk_ctrl & QCA956X_PLL_CLK_CTRL_CPU_DDRCLK_FROM_DDRPLL) + ddr_rate = cpu_pll / (postdiv + 1); + else + ddr_rate = ddr_pll / (postdiv + 1); + + postdiv = (clk_ctrl >> QCA956X_PLL_CLK_CTRL_AHB_POST_DIV_SHIFT) & + QCA956X_PLL_CLK_CTRL_AHB_POST_DIV_MASK; + + if (clk_ctrl & QCA956X_PLL_CLK_CTRL_AHB_PLL_BYPASS) + ahb_rate = ref_rate; + else if (clk_ctrl & QCA956X_PLL_CLK_CTRL_AHBCLK_FROM_DDRPLL) + ahb_rate = ddr_pll / (postdiv + 1); + else + ahb_rate = cpu_pll / (postdiv + 1); + + ath79_add_sys_clkdev("ref", ref_rate); + ath79_add_sys_clkdev("cpu", cpu_rate); + ath79_add_sys_clkdev("ddr", ddr_rate); + ath79_add_sys_clkdev("ahb", ahb_rate); + + clk_add_alias("wdt", NULL, "ref", NULL); + clk_add_alias("uart", NULL, "ref", NULL); +} + void __init ath79_clocks_init(void) { if (soc_is_ar71xx()) @@ -450,8 +639,12 @@ void __init ath79_clocks_init(void) ar933x_clocks_init(); else if (soc_is_ar934x()) ar934x_clocks_init(); + else if 
(soc_is_qca953x()) + qca953x_clocks_init(); else if (soc_is_qca955x()) qca955x_clocks_init(); + else if (soc_is_qca956x() || soc_is_tp9343()) + qca956x_clocks_init(); else BUG(); } diff --git a/arch/mips/ath79/common.c b/arch/mips/ath79/common.c index 10a405d593df..cd6055f9e7a0 100644 --- a/arch/mips/ath79/common.c +++ b/arch/mips/ath79/common.c @@ -58,7 +58,7 @@ EXPORT_SYMBOL_GPL(ath79_ddr_ctrl_init); void ath79_ddr_wb_flush(u32 reg) { - void __iomem *flush_reg = ath79_ddr_wb_flush_base + reg; + void __iomem *flush_reg = ath79_ddr_wb_flush_base + (reg * 4); /* Flush the DDR write buffer. */ __raw_writel(0x1, flush_reg); @@ -103,8 +103,12 @@ void ath79_device_reset_set(u32 mask) reg = AR933X_RESET_REG_RESET_MODULE; else if (soc_is_ar934x()) reg = AR934X_RESET_REG_RESET_MODULE; + else if (soc_is_qca953x()) + reg = QCA953X_RESET_REG_RESET_MODULE; else if (soc_is_qca955x()) reg = QCA955X_RESET_REG_RESET_MODULE; + else if (soc_is_qca956x() || soc_is_tp9343()) + reg = QCA956X_RESET_REG_RESET_MODULE; else BUG(); @@ -131,8 +135,12 @@ void ath79_device_reset_clear(u32 mask) reg = AR933X_RESET_REG_RESET_MODULE; else if (soc_is_ar934x()) reg = AR934X_RESET_REG_RESET_MODULE; + else if (soc_is_qca953x()) + reg = QCA953X_RESET_REG_RESET_MODULE; else if (soc_is_qca955x()) reg = QCA955X_RESET_REG_RESET_MODULE; + else if (soc_is_qca956x() || soc_is_tp9343()) + reg = QCA956X_RESET_REG_RESET_MODULE; else BUG(); diff --git a/arch/mips/ath79/early_printk.c b/arch/mips/ath79/early_printk.c index d1adc59af5bf..4b1063117ef7 100644 --- a/arch/mips/ath79/early_printk.c +++ b/arch/mips/ath79/early_printk.c @@ -13,12 +13,13 @@ #include #include #include +#include #include #include #include -static void (*_prom_putchar) (unsigned char); +static void (*_prom_putchar)(char); static inline void prom_putchar_wait(void __iomem *reg, u32 mask, u32 val) { @@ -33,31 +34,72 @@ static inline void prom_putchar_wait(void __iomem *reg, u32 mask, u32 val) #define BOTH_EMPTY (UART_LSR_TEMT | UART_LSR_THRE) -static void prom_putchar_ar71xx(unsigned char ch) +static void prom_putchar_ar71xx(char ch) { void __iomem *base = (void __iomem *)(KSEG1ADDR(AR71XX_UART_BASE)); prom_putchar_wait(base + UART_LSR * 4, BOTH_EMPTY, BOTH_EMPTY); - __raw_writel(ch, base + UART_TX * 4); + __raw_writel((unsigned char)ch, base + UART_TX * 4); prom_putchar_wait(base + UART_LSR * 4, BOTH_EMPTY, BOTH_EMPTY); } -static void prom_putchar_ar933x(unsigned char ch) +static void prom_putchar_ar933x(char ch) { void __iomem *base = (void __iomem *)(KSEG1ADDR(AR933X_UART_BASE)); prom_putchar_wait(base + AR933X_UART_DATA_REG, AR933X_UART_DATA_TX_CSR, AR933X_UART_DATA_TX_CSR); - __raw_writel(AR933X_UART_DATA_TX_CSR | ch, base + AR933X_UART_DATA_REG); + __raw_writel(AR933X_UART_DATA_TX_CSR | (unsigned char)ch, + base + AR933X_UART_DATA_REG); prom_putchar_wait(base + AR933X_UART_DATA_REG, AR933X_UART_DATA_TX_CSR, AR933X_UART_DATA_TX_CSR); } -static void prom_putchar_dummy(unsigned char ch) +static void prom_putchar_dummy(char ch) { /* nothing to do */ } +static void prom_enable_uart(u32 id) +{ + void __iomem *gpio_base; + u32 uart_en; + u32 t; + + switch (id) { + case REV_ID_MAJOR_AR71XX: + uart_en = AR71XX_GPIO_FUNC_UART_EN; + break; + + case REV_ID_MAJOR_AR7240: + case REV_ID_MAJOR_AR7241: + case REV_ID_MAJOR_AR7242: + uart_en = AR724X_GPIO_FUNC_UART_EN; + break; + + case REV_ID_MAJOR_AR913X: + uart_en = AR913X_GPIO_FUNC_UART_EN; + break; + + case REV_ID_MAJOR_AR9330: + case REV_ID_MAJOR_AR9331: + uart_en = AR933X_GPIO_FUNC_UART_EN; + break; + + case 
REV_ID_MAJOR_AR9341: + case REV_ID_MAJOR_AR9342: + case REV_ID_MAJOR_AR9344: + /* TODO */ + default: + return; + } + + gpio_base = (void __iomem *)KSEG1ADDR(AR71XX_GPIO_BASE); + t = __raw_readl(gpio_base + AR71XX_GPIO_REG_FUNC); + t |= uart_en; + __raw_writel(t, gpio_base + AR71XX_GPIO_REG_FUNC); +} + static void prom_putchar_init(void) { void __iomem *base; @@ -76,8 +118,12 @@ static void prom_putchar_init(void) case REV_ID_MAJOR_AR9341: case REV_ID_MAJOR_AR9342: case REV_ID_MAJOR_AR9344: + case REV_ID_MAJOR_QCA9533: + case REV_ID_MAJOR_QCA9533_V2: case REV_ID_MAJOR_QCA9556: case REV_ID_MAJOR_QCA9558: + case REV_ID_MAJOR_TP9343: + case REV_ID_MAJOR_QCA956X: _prom_putchar = prom_putchar_ar71xx; break; @@ -88,11 +134,13 @@ static void prom_putchar_init(void) default: _prom_putchar = prom_putchar_dummy; - break; + return; } + + prom_enable_uart(id); } -void prom_putchar(unsigned char ch) +void prom_putchar(char ch) { if (!_prom_putchar) prom_putchar_init(); diff --git a/arch/mips/ath79/mach-pb44.c b/arch/mips/ath79/mach-pb44.c index 6b2c6f3baefa..75fb96ca61db 100644 --- a/arch/mips/ath79/mach-pb44.c +++ b/arch/mips/ath79/mach-pb44.c @@ -34,7 +34,7 @@ #define PB44_KEYS_DEBOUNCE_INTERVAL (3 * PB44_KEYS_POLL_INTERVAL) static struct gpiod_lookup_table pb44_i2c_gpiod_table = { - .dev_id = "i2c-gpio", + .dev_id = "i2c-gpio.0", .table = { GPIO_LOOKUP_IDX("ath79-gpio", PB44_GPIO_I2C_SDA, NULL, 0, GPIO_ACTIVE_HIGH | GPIO_OPEN_DRAIN), diff --git a/arch/mips/ath79/setup.c b/arch/mips/ath79/setup.c index f206dafbb0a3..4c7a93f4039a 100644 --- a/arch/mips/ath79/setup.c +++ b/arch/mips/ath79/setup.c @@ -40,6 +40,7 @@ static char ath79_sys_type[ATH79_SYS_TYPE_LEN]; static void ath79_restart(char *command) { + local_irq_disable(); ath79_device_reset_set(AR71XX_RESET_FULL_CHIP); for (;;) if (cpu_wait) @@ -59,6 +60,7 @@ static void __init ath79_detect_sys_type(void) u32 major; u32 minor; u32 rev = 0; + u32 ver = 1; id = ath79_reset_rr(AR71XX_RESET_REG_REV_ID); major = id & REV_ID_MAJOR_MASK; @@ -151,6 +153,17 @@ static void __init ath79_detect_sys_type(void) rev = id & AR934X_REV_ID_REVISION_MASK; break; + case REV_ID_MAJOR_QCA9533_V2: + ver = 2; + ath79_soc_rev = 2; + /* drop through */ + + case REV_ID_MAJOR_QCA9533: + ath79_soc = ATH79_SOC_QCA9533; + chip = "9533"; + rev = id & QCA953X_REV_ID_REVISION_MASK; + break; + case REV_ID_MAJOR_QCA9556: ath79_soc = ATH79_SOC_QCA9556; chip = "9556"; @@ -163,14 +176,30 @@ static void __init ath79_detect_sys_type(void) rev = id & QCA955X_REV_ID_REVISION_MASK; break; + case REV_ID_MAJOR_QCA956X: + ath79_soc = ATH79_SOC_QCA956X; + chip = "956X"; + rev = id & QCA956X_REV_ID_REVISION_MASK; + break; + + case REV_ID_MAJOR_TP9343: + ath79_soc = ATH79_SOC_TP9343; + chip = "9343"; + rev = id & QCA956X_REV_ID_REVISION_MASK; + break; + default: panic("ath79: unknown SoC, id:0x%08x", id); } - ath79_soc_rev = rev; + if (ver == 1) + ath79_soc_rev = rev; - if (soc_is_qca955x()) - sprintf(ath79_sys_type, "Qualcomm Atheros QCA%s rev %u", + if (soc_is_qca953x() || soc_is_qca955x() || soc_is_qca956x()) + sprintf(ath79_sys_type, "Qualcomm Atheros QCA%s ver %u rev %u", + chip, ver, rev); + else if (soc_is_tp9343()) + sprintf(ath79_sys_type, "Qualcomm Atheros TP%s rev %u", chip, rev); else sprintf(ath79_sys_type, "Atheros AR%s rev %u", chip, rev); diff --git a/arch/mips/bcm63xx/early_printk.c b/arch/mips/bcm63xx/early_printk.c index 6092226a6d76..9e9ec27c282f 100644 --- a/arch/mips/bcm63xx/early_printk.c +++ b/arch/mips/bcm63xx/early_printk.c @@ -8,6 +8,7 @@ #include #include +#include 
static void wait_xfered(void) { diff --git a/arch/mips/bmips/dma.c b/arch/mips/bmips/dma.c index 6dec30842b2f..3d13c77c125f 100644 --- a/arch/mips/bmips/dma.c +++ b/arch/mips/bmips/dma.c @@ -17,7 +17,7 @@ #include #include #include -#include +#include /* * BCM338x has configurable address translation windows which allow the @@ -40,7 +40,7 @@ static struct bmips_dma_range *bmips_dma_ranges; #define FLUSH_RAC 0x100 -static dma_addr_t bmips_phys_to_dma(struct device *dev, phys_addr_t pa) +dma_addr_t __phys_to_dma(struct device *dev, phys_addr_t pa) { struct bmips_dma_range *r; @@ -52,17 +52,7 @@ static dma_addr_t bmips_phys_to_dma(struct device *dev, phys_addr_t pa) return pa; } -dma_addr_t plat_map_dma_mem(struct device *dev, void *addr, size_t size) -{ - return bmips_phys_to_dma(dev, virt_to_phys(addr)); -} - -dma_addr_t plat_map_dma_mem_page(struct device *dev, struct page *page) -{ - return bmips_phys_to_dma(dev, page_to_phys(page)); -} - -unsigned long plat_dma_addr_to_phys(struct device *dev, dma_addr_t dma_addr) +phys_addr_t __dma_to_phys(struct device *dev, dma_addr_t dma_addr) { struct bmips_dma_range *r; @@ -74,6 +64,22 @@ unsigned long plat_dma_addr_to_phys(struct device *dev, dma_addr_t dma_addr) return dma_addr; } +void arch_sync_dma_for_cpu_all(struct device *dev) +{ + void __iomem *cbr = BMIPS_GET_CBR(); + u32 cfg; + + if (boot_cpu_type() != CPU_BMIPS3300 && + boot_cpu_type() != CPU_BMIPS4350 && + boot_cpu_type() != CPU_BMIPS4380) + return; + + /* Flush stale data out of the readahead cache */ + cfg = __raw_readl(cbr + BMIPS_RAC_CONFIG); + __raw_writel(cfg | 0x100, cbr + BMIPS_RAC_CONFIG); + __raw_readl(cbr + BMIPS_RAC_CONFIG); +} + static int __init bmips_init_dma_ranges(void) { struct device_node *np = diff --git a/arch/mips/bmips/setup.c b/arch/mips/bmips/setup.c index 3b6f687f177c..231fc5ce375e 100644 --- a/arch/mips/bmips/setup.c +++ b/arch/mips/bmips/setup.c @@ -202,13 +202,6 @@ void __init device_tree_init(void) of_node_put(np); } -int __init plat_of_setup(void) -{ - return __dt_register_buses("simple-bus", NULL); -} - -arch_initcall(plat_of_setup); - static int __init plat_dev_init(void) { of_clk_init(NULL); diff --git a/arch/mips/boot/Makefile b/arch/mips/boot/Makefile index c22da16d67b8..35704c28a28b 100644 --- a/arch/mips/boot/Makefile +++ b/arch/mips/boot/Makefile @@ -105,28 +105,29 @@ $(obj)/uImage: $(obj)/uImage.$(suffix-y) # Flattened Image Tree (.itb) images # -targets += vmlinux.itb -targets += vmlinux.gz.itb -targets += vmlinux.bz2.itb -targets += vmlinux.lzma.itb -targets += vmlinux.lzo.itb - ifeq ($(ADDR_BITS),32) - itb_addr_cells = 1 +itb_addr_cells = 1 endif ifeq ($(ADDR_BITS),64) - itb_addr_cells = 2 +itb_addr_cells = 2 endif +targets += vmlinux.its.S + quiet_cmd_its_cat = CAT $@ - cmd_its_cat = cat $^ >$@ + cmd_its_cat = cat $(filter-out $(PHONY), $^) >$@ -$(obj)/vmlinux.its.S: $(addprefix $(srctree)/arch/mips/$(PLATFORM)/,$(ITS_INPUTS)) +$(obj)/vmlinux.its.S: $(addprefix $(srctree)/arch/mips/$(PLATFORM)/,$(ITS_INPUTS)) FORCE $(call if_changed,its_cat) +targets += vmlinux.its +targets += vmlinux.gz.its +targets += vmlinux.bz2.its +targets += vmlinux.lzma.its +targets += vmlinux.lzo.its + quiet_cmd_cpp_its_S = ITS $@ - cmd_cpp_its_S = $(CPP) $(cpp_flags) -P -C -o $@ $< \ - -D__ASSEMBLY__ \ + cmd_cpp_its_S = $(CPP) -P -C -o $@ $< \ -DKERNEL_NAME="\"Linux $(KERNELRELEASE)\"" \ -DVMLINUX_BINARY="\"$(3)\"" \ -DVMLINUX_COMPRESSION="\"$(2)\"" \ @@ -136,19 +137,25 @@ quiet_cmd_cpp_its_S = ITS $@ -DADDR_CELLS=$(itb_addr_cells) $(obj)/vmlinux.its:
$(obj)/vmlinux.its.S $(VMLINUX) FORCE - $(call if_changed_dep,cpp_its_S,none,vmlinux.bin) + $(call if_changed,cpp_its_S,none,vmlinux.bin) $(obj)/vmlinux.gz.its: $(obj)/vmlinux.its.S $(VMLINUX) FORCE - $(call if_changed_dep,cpp_its_S,gzip,vmlinux.bin.gz) + $(call if_changed,cpp_its_S,gzip,vmlinux.bin.gz) $(obj)/vmlinux.bz2.its: $(obj)/vmlinux.its.S $(VMLINUX) FORCE - $(call if_changed_dep,cpp_its_S,bzip2,vmlinux.bin.bz2) + $(call if_changed,cpp_its_S,bzip2,vmlinux.bin.bz2) $(obj)/vmlinux.lzma.its: $(obj)/vmlinux.its.S $(VMLINUX) FORCE - $(call if_changed_dep,cpp_its_S,lzma,vmlinux.bin.lzma) + $(call if_changed,cpp_its_S,lzma,vmlinux.bin.lzma) $(obj)/vmlinux.lzo.its: $(obj)/vmlinux.its.S $(VMLINUX) FORCE - $(call if_changed_dep,cpp_its_S,lzo,vmlinux.bin.lzo) + $(call if_changed,cpp_its_S,lzo,vmlinux.bin.lzo) + +targets += vmlinux.itb +targets += vmlinux.gz.itb +targets += vmlinux.bz2.itb +targets += vmlinux.lzma.itb +targets += vmlinux.lzo.itb quiet_cmd_itb-image = ITB $@ cmd_itb-image = \ @@ -162,14 +169,5 @@ quiet_cmd_itb-image = ITB $@ $(obj)/vmlinux.itb: $(obj)/vmlinux.its $(obj)/vmlinux.bin FORCE $(call if_changed,itb-image,$<) -$(obj)/vmlinux.gz.itb: $(obj)/vmlinux.gz.its $(obj)/vmlinux.bin.gz FORCE - $(call if_changed,itb-image,$<) - -$(obj)/vmlinux.bz2.itb: $(obj)/vmlinux.bz2.its $(obj)/vmlinux.bin.bz2 FORCE - $(call if_changed,itb-image,$<) - -$(obj)/vmlinux.lzma.itb: $(obj)/vmlinux.lzma.its $(obj)/vmlinux.bin.lzma FORCE - $(call if_changed,itb-image,$<) - -$(obj)/vmlinux.lzo.itb: $(obj)/vmlinux.lzo.its $(obj)/vmlinux.bin.lzo FORCE +$(obj)/vmlinux.%.itb: $(obj)/vmlinux.%.its $(obj)/vmlinux.bin.% FORCE $(call if_changed,itb-image,$<) diff --git a/arch/mips/boot/compressed/uart-prom.c b/arch/mips/boot/compressed/uart-prom.c index d6f0fee0a151..a8a0a32e05d1 100644 --- a/arch/mips/boot/compressed/uart-prom.c +++ b/arch/mips/boot/compressed/uart-prom.c @@ -1,6 +1,5 @@ // SPDX-License-Identifier: GPL-2.0 - -extern void prom_putchar(unsigned char ch); +#include void putc(char c) { diff --git a/arch/mips/boot/dts/ingenic/jz4780.dtsi b/arch/mips/boot/dts/ingenic/jz4780.dtsi index aa4e8f75ff5d..ce93d57f1b4d 100644 --- a/arch/mips/boot/dts/ingenic/jz4780.dtsi +++ b/arch/mips/boot/dts/ingenic/jz4780.dtsi @@ -155,6 +155,25 @@ }; }; + spi_gpio { + compatible = "spi-gpio"; + #address-cells = <1>; + #size-cells = <0>; + num-chipselects = <2>; + + gpio-miso = <&gpe 14 0>; + gpio-sck = <&gpe 15 0>; + gpio-mosi = <&gpe 17 0>; + cs-gpios = <&gpe 16 0 + &gpe 18 0>; + + spidev@0 { + compatible = "spidev"; + reg = <0>; + spi-max-frequency = <1000000>; + }; + }; + uart0: serial@10030000 { compatible = "ingenic,jz4780-uart"; reg = <0x10030000 0x100>; diff --git a/arch/mips/boot/dts/mscc/Makefile b/arch/mips/boot/dts/mscc/Makefile index 3c6aed9f5439..9a9bb7ea0503 100644 --- a/arch/mips/boot/dts/mscc/Makefile +++ b/arch/mips/boot/dts/mscc/Makefile @@ -1,3 +1,3 @@ -dtb-$(CONFIG_LEGACY_BOARD_OCELOT) += ocelot_pcb123.dtb +dtb-$(CONFIG_MSCC_OCELOT) += ocelot_pcb123.dtb obj-$(CONFIG_BUILTIN_DTB) += $(addsuffix .o, $(dtb-y)) diff --git a/arch/mips/boot/dts/mscc/ocelot.dtsi b/arch/mips/boot/dts/mscc/ocelot.dtsi index 4f33dbc67348..f7eb612b46ba 100644 --- a/arch/mips/boot/dts/mscc/ocelot.dtsi +++ b/arch/mips/boot/dts/mscc/ocelot.dtsi @@ -91,6 +91,17 @@ status = "disabled"; }; + spi: spi@101000 { + compatible = "mscc,ocelot-spi", "snps,dw-apb-ssi"; + #address-cells = <1>; + #size-cells = <0>; + reg = <0x101000 0x100>, <0x3c 0x18>; + interrupts = <9>; + clocks = <&ahb_clk>; + + status = "disabled"; + }; + 
switch@1010000 { compatible = "mscc,vsc7514-switch"; reg = <0x1010000 0x10000>, @@ -168,6 +179,9 @@ gpio-controller; #gpio-cells = <2>; gpio-ranges = <&gpio 0 0 22>; + interrupt-controller; + interrupts = <13>; + #interrupt-cells = <2>; uart_pins: uart-pins { pins = "GPIO_6", "GPIO_7"; @@ -178,13 +192,18 @@ pins = "GPIO_12", "GPIO_13"; function = "uart2"; }; + + miim1: miim1 { + pins = "GPIO_14", "GPIO_15"; + function = "miim1"; + }; }; mdio0: mdio@107009c { #address-cells = <1>; #size-cells = <0>; compatible = "mscc,ocelot-miim"; - reg = <0x107009c 0x36>, <0x10700f0 0x8>; + reg = <0x107009c 0x24>, <0x10700f0 0x8>; interrupts = <14>; status = "disabled"; @@ -201,5 +220,16 @@ reg = <3>; }; }; + + mdio1: mdio@10700c0 { + #address-cells = <1>; + #size-cells = <0>; + compatible = "mscc,ocelot-miim"; + reg = <0x10700c0 0x24>; + interrupts = <15>; + pinctrl-names = "default"; + pinctrl-0 = <&miim1>; + status = "disabled"; + }; }; }; diff --git a/arch/mips/boot/dts/mscc/ocelot_pcb123.dts b/arch/mips/boot/dts/mscc/ocelot_pcb123.dts index 4ccd65379059..2266027759f9 100644 --- a/arch/mips/boot/dts/mscc/ocelot_pcb123.dts +++ b/arch/mips/boot/dts/mscc/ocelot_pcb123.dts @@ -26,6 +26,16 @@ status = "okay"; }; +&spi { + status = "okay"; + + flash@0 { + compatible = "macronix,mx25l25635f", "jedec,spi-nor"; + spi-max-frequency = <20000000>; + reg = <0>; + }; +}; + &mdio0 { status = "okay"; }; diff --git a/arch/mips/boot/dts/qca/ar9132.dtsi b/arch/mips/boot/dts/qca/ar9132.dtsi index 1fe561c5f90e..61dcfa5b6ca7 100644 --- a/arch/mips/boot/dts/qca/ar9132.dtsi +++ b/arch/mips/boot/dts/qca/ar9132.dtsi @@ -161,7 +161,7 @@ usb_phy: usb-phy { compatible = "qca,ar7100-usb-phy"; - reset-names = "usb-phy", "usb-suspend-override"; + reset-names = "phy", "suspend-override"; resets = <&rst 4>, <&rst 3>; #phy-cells = <0>; diff --git a/arch/mips/boot/dts/qca/ar9132_tl_wr1043nd_v1.dts b/arch/mips/boot/dts/qca/ar9132_tl_wr1043nd_v1.dts index 3931033e47c8..7fccf6357225 100644 --- a/arch/mips/boot/dts/qca/ar9132_tl_wr1043nd_v1.dts +++ b/arch/mips/boot/dts/qca/ar9132_tl_wr1043nd_v1.dts @@ -22,11 +22,10 @@ }; gpio-keys { - compatible = "gpio-keys-polled"; + compatible = "gpio-keys"; #address-cells = <1>; #size-cells = <0>; - poll-interval = <20>; button@0 { label = "reset"; linux,code = ; diff --git a/arch/mips/boot/dts/qca/ar9331.dtsi b/arch/mips/boot/dts/qca/ar9331.dtsi index efd5f0722206..2bae201aa365 100644 --- a/arch/mips/boot/dts/qca/ar9331.dtsi +++ b/arch/mips/boot/dts/qca/ar9331.dtsi @@ -146,7 +146,7 @@ usb_phy: usb-phy { compatible = "qca,ar7100-usb-phy"; - reset-names = "usb-phy", "usb-suspend-override"; + reset-names = "phy", "suspend-override"; resets = <&rst 4>, <&rst 3>; #phy-cells = <0>; diff --git a/arch/mips/boot/dts/qca/ar9331_dpt_module.dts b/arch/mips/boot/dts/qca/ar9331_dpt_module.dts index d4e4502daaa8..e7af2cf5f4c1 100644 --- a/arch/mips/boot/dts/qca/ar9331_dpt_module.dts +++ b/arch/mips/boot/dts/qca/ar9331_dpt_module.dts @@ -29,11 +29,10 @@ }; }; - gpio-keys-polled { - compatible = "gpio-keys-polled"; + gpio-keys { + compatible = "gpio-keys"; #address-cells = <1>; #size-cells = <0>; - poll-interval = <100>; button@0 { label = "reset"; diff --git a/arch/mips/boot/dts/qca/ar9331_dragino_ms14.dts b/arch/mips/boot/dts/qca/ar9331_dragino_ms14.dts index 4f95ccf17c4c..d38aa73f1a2e 100644 --- a/arch/mips/boot/dts/qca/ar9331_dragino_ms14.dts +++ b/arch/mips/boot/dts/qca/ar9331_dragino_ms14.dts @@ -47,11 +47,10 @@ }; }; - gpio-keys-polled { - compatible = "gpio-keys-polled"; + gpio-keys { + compatible = 
"gpio-keys"; #address-cells = <1>; #size-cells = <0>; - poll-interval = <100>; button@0 { label = "jumpstart"; diff --git a/arch/mips/boot/dts/qca/ar9331_omega.dts b/arch/mips/boot/dts/qca/ar9331_omega.dts index f70f79c4d0d5..11778abacf66 100644 --- a/arch/mips/boot/dts/qca/ar9331_omega.dts +++ b/arch/mips/boot/dts/qca/ar9331_omega.dts @@ -29,11 +29,10 @@ }; }; - gpio-keys-polled { - compatible = "gpio-keys-polled"; + gpio-keys { + compatible = "gpio-keys"; #address-cells = <1>; #size-cells = <0>; - poll-interval = <100>; button@0 { label = "reset"; diff --git a/arch/mips/boot/dts/qca/ar9331_tl_mr3020.dts b/arch/mips/boot/dts/qca/ar9331_tl_mr3020.dts index 748131aea22e..c8290d36cfbe 100644 --- a/arch/mips/boot/dts/qca/ar9331_tl_mr3020.dts +++ b/arch/mips/boot/dts/qca/ar9331_tl_mr3020.dts @@ -47,11 +47,10 @@ }; }; - gpio-keys-polled { - compatible = "gpio-keys-polled"; + gpio-keys { + compatible = "gpio-keys"; #address-cells = <1>; #size-cells = <0>; - poll-interval = <100>; button@0 { label = "wps"; diff --git a/arch/mips/boot/ecoff.h b/arch/mips/boot/ecoff.h index b3e73c22c345..5be79ebfc3f8 100644 --- a/arch/mips/boot/ecoff.h +++ b/arch/mips/boot/ecoff.h @@ -2,14 +2,17 @@ /* * Some ECOFF definitions. */ + +#include + typedef struct filehdr { - unsigned short f_magic; /* magic number */ - unsigned short f_nscns; /* number of sections */ - long f_timdat; /* time & date stamp */ - long f_symptr; /* file pointer to symbolic header */ - long f_nsyms; /* sizeof(symbolic hdr) */ - unsigned short f_opthdr; /* sizeof(optional hdr) */ - unsigned short f_flags; /* flags */ + uint16_t f_magic; /* magic number */ + uint16_t f_nscns; /* number of sections */ + int32_t f_timdat; /* time & date stamp */ + int32_t f_symptr; /* file pointer to symbolic header */ + int32_t f_nsyms; /* sizeof(symbolic hdr) */ + uint16_t f_opthdr; /* sizeof(optional hdr) */ + uint16_t f_flags; /* flags */ } FILHDR; #define FILHSZ sizeof(FILHDR) @@ -18,32 +21,32 @@ typedef struct filehdr { typedef struct scnhdr { char s_name[8]; /* section name */ - long s_paddr; /* physical address, aliased s_nlib */ - long s_vaddr; /* virtual address */ - long s_size; /* section size */ - long s_scnptr; /* file ptr to raw data for section */ - long s_relptr; /* file ptr to relocation */ - long s_lnnoptr; /* file ptr to gp histogram */ - unsigned short s_nreloc; /* number of relocation entries */ - unsigned short s_nlnno; /* number of gp histogram entries */ - long s_flags; /* flags */ + int32_t s_paddr; /* physical address, aliased s_nlib */ + int32_t s_vaddr; /* virtual address */ + int32_t s_size; /* section size */ + int32_t s_scnptr; /* file ptr to raw data for section */ + int32_t s_relptr; /* file ptr to relocation */ + int32_t s_lnnoptr; /* file ptr to gp histogram */ + uint16_t s_nreloc; /* number of relocation entries */ + uint16_t s_nlnno; /* number of gp histogram entries */ + int32_t s_flags; /* flags */ } SCNHDR; #define SCNHSZ sizeof(SCNHDR) -#define SCNROUND ((long)16) +#define SCNROUND ((int32_t)16) typedef struct aouthdr { - short magic; /* see above */ - short vstamp; /* version stamp */ - long tsize; /* text size in bytes, padded to DW bdry*/ - long dsize; /* initialized data " " */ - long bsize; /* uninitialized data " " */ - long entry; /* entry pt. 
*/ - long text_start; /* base of text used for this file */ - long data_start; /* base of data used for this file */ - long bss_start; /* base of bss used for this file */ - long gprmask; /* general purpose register mask */ - long cprmask[4]; /* co-processor register masks */ - long gp_value; /* the gp value used for this object */ + int16_t magic; /* see above */ + int16_t vstamp; /* version stamp */ + int32_t tsize; /* text size in bytes, padded to DW bdry*/ + int32_t dsize; /* initialized data " " */ + int32_t bsize; /* uninitialized data " " */ + int32_t entry; /* entry pt. */ + int32_t text_start; /* base of text used for this file */ + int32_t data_start; /* base of data used for this file */ + int32_t bss_start; /* base of bss used for this file */ + int32_t gprmask; /* general purpose register mask */ + int32_t cprmask[4]; /* co-processor register masks */ + int32_t gp_value; /* the gp value used for this object */ } AOUTHDR; #define AOUTHSZ sizeof(AOUTHDR) diff --git a/arch/mips/boot/elf2ecoff.c b/arch/mips/boot/elf2ecoff.c index 266c8137e859..6972b97235da 100644 --- a/arch/mips/boot/elf2ecoff.c +++ b/arch/mips/boot/elf2ecoff.c @@ -43,6 +43,8 @@ #include #include #include +#include +#include #include "ecoff.h" @@ -55,8 +57,8 @@ /* -------------------------------------------------------------------- */ struct sect { - unsigned long vaddr; - unsigned long len; + uint32_t vaddr; + uint32_t len; }; int *symTypeTable; @@ -153,16 +155,16 @@ static char *saveRead(int file, off_t offset, off_t len, char *name) } #define swab16(x) \ - ((unsigned short)( \ - (((unsigned short)(x) & (unsigned short)0x00ffU) << 8) | \ - (((unsigned short)(x) & (unsigned short)0xff00U) >> 8) )) + ((uint16_t)( \ + (((uint16_t)(x) & (uint16_t)0x00ffU) << 8) | \ + (((uint16_t)(x) & (uint16_t)0xff00U) >> 8) )) #define swab32(x) \ ((unsigned int)( \ - (((unsigned int)(x) & (unsigned int)0x000000ffUL) << 24) | \ - (((unsigned int)(x) & (unsigned int)0x0000ff00UL) << 8) | \ - (((unsigned int)(x) & (unsigned int)0x00ff0000UL) >> 8) | \ - (((unsigned int)(x) & (unsigned int)0xff000000UL) >> 24) )) + (((uint32_t)(x) & (uint32_t)0x000000ffUL) << 24) | \ + (((uint32_t)(x) & (uint32_t)0x0000ff00UL) << 8) | \ + (((uint32_t)(x) & (uint32_t)0x00ff0000UL) >> 8) | \ + (((uint32_t)(x) & (uint32_t)0xff000000UL) >> 24) )) static void convert_elf_hdr(Elf32_Ehdr * e) { @@ -274,7 +276,7 @@ int main(int argc, char *argv[]) struct aouthdr eah; struct scnhdr esecs[6]; int infile, outfile; - unsigned long cur_vma = ULONG_MAX; + uint32_t cur_vma = UINT32_MAX; int addflag = 0; int nosecs; @@ -518,7 +520,7 @@ int main(int argc, char *argv[]) for (i = 0; i < nosecs; i++) { printf - ("Section %d: %s phys %lx size %lx file offset %lx\n", + ("Section %d: %s phys %"PRIx32" size %"PRIx32"\t file offset %"PRIx32"\n", i, esecs[i].s_name, esecs[i].s_paddr, esecs[i].s_size, esecs[i].s_scnptr); } @@ -564,17 +566,16 @@ int main(int argc, char *argv[]) the section can be loaded before copying. 
*/ if (ph[i].p_type == PT_LOAD && ph[i].p_filesz) { if (cur_vma != ph[i].p_vaddr) { - unsigned long gap = - ph[i].p_vaddr - cur_vma; + uint32_t gap = ph[i].p_vaddr - cur_vma; char obuf[1024]; if (gap > 65536) { fprintf(stderr, - "Intersegment gap (%ld bytes) too large.\n", + "Intersegment gap (%"PRId32" bytes) too large.\n", gap); exit(1); } fprintf(stderr, - "Warning: %ld byte intersegment gap.\n", + "Warning: %d byte intersegment gap.\n", gap); memset(obuf, 0, sizeof obuf); while (gap) { diff --git a/arch/mips/cavium-octeon/dma-octeon.c b/arch/mips/cavium-octeon/dma-octeon.c index 7b335ab21697..236833be6fbe 100644 --- a/arch/mips/cavium-octeon/dma-octeon.c +++ b/arch/mips/cavium-octeon/dma-octeon.c @@ -11,9 +11,7 @@ * Copyright (C) 2010 Cavium Networks, Inc. */ #include -#include #include -#include #include #include #include @@ -24,10 +22,16 @@ #include #ifdef CONFIG_PCI +#include #include #include #include +struct octeon_dma_map_ops { + dma_addr_t (*phys_to_dma)(struct device *dev, phys_addr_t paddr); + phys_addr_t (*dma_to_phys)(struct device *dev, dma_addr_t daddr); +}; + static dma_addr_t octeon_hole_phys_to_dma(phys_addr_t paddr) { if (paddr >= CVMX_PCIE_BAR1_PHYS_BASE && paddr < (CVMX_PCIE_BAR1_PHYS_BASE + CVMX_PCIE_BAR1_PHYS_SIZE)) @@ -61,6 +65,11 @@ static phys_addr_t octeon_gen1_dma_to_phys(struct device *dev, dma_addr_t daddr) return daddr; } +static const struct octeon_dma_map_ops octeon_gen1_ops = { + .phys_to_dma = octeon_gen1_phys_to_dma, + .dma_to_phys = octeon_gen1_dma_to_phys, +}; + static dma_addr_t octeon_gen2_phys_to_dma(struct device *dev, phys_addr_t paddr) { return octeon_hole_phys_to_dma(paddr); @@ -71,6 +80,11 @@ static phys_addr_t octeon_gen2_dma_to_phys(struct device *dev, dma_addr_t daddr) return octeon_hole_dma_to_phys(daddr); } +static const struct octeon_dma_map_ops octeon_gen2_ops = { + .phys_to_dma = octeon_gen2_phys_to_dma, + .dma_to_phys = octeon_gen2_dma_to_phys, +}; + static dma_addr_t octeon_big_phys_to_dma(struct device *dev, phys_addr_t paddr) { if (paddr >= 0x410000000ull && paddr < 0x420000000ull) @@ -93,6 +107,11 @@ static phys_addr_t octeon_big_dma_to_phys(struct device *dev, dma_addr_t daddr) return daddr; } +static const struct octeon_dma_map_ops octeon_big_ops = { + .phys_to_dma = octeon_big_phys_to_dma, + .dma_to_phys = octeon_big_dma_to_phys, +}; + static dma_addr_t octeon_small_phys_to_dma(struct device *dev, phys_addr_t paddr) { @@ -121,105 +140,51 @@ static phys_addr_t octeon_small_dma_to_phys(struct device *dev, return daddr; } -#endif /* CONFIG_PCI */ - -static dma_addr_t octeon_dma_map_page(struct device *dev, struct page *page, - unsigned long offset, size_t size, enum dma_data_direction direction, - unsigned long attrs) -{ - dma_addr_t daddr = swiotlb_map_page(dev, page, offset, size, - direction, attrs); - mb(); - - return daddr; -} - -static int octeon_dma_map_sg(struct device *dev, struct scatterlist *sg, - int nents, enum dma_data_direction direction, unsigned long attrs) -{ - int r = swiotlb_map_sg_attrs(dev, sg, nents, direction, attrs); - mb(); - return r; -} - -static void octeon_dma_sync_single_for_device(struct device *dev, - dma_addr_t dma_handle, size_t size, enum dma_data_direction direction) -{ - swiotlb_sync_single_for_device(dev, dma_handle, size, direction); - mb(); -} - -static void octeon_dma_sync_sg_for_device(struct device *dev, - struct scatterlist *sg, int nelems, enum dma_data_direction direction) -{ - swiotlb_sync_sg_for_device(dev, sg, nelems, direction); - mb(); -} - -static void 
*octeon_dma_alloc_coherent(struct device *dev, size_t size, - dma_addr_t *dma_handle, gfp_t gfp, unsigned long attrs) -{ - void *ret = swiotlb_alloc(dev, size, dma_handle, gfp, attrs); - - mb(); +static const struct octeon_dma_map_ops octeon_small_ops = { + .phys_to_dma = octeon_small_phys_to_dma, + .dma_to_phys = octeon_small_dma_to_phys, +}; - return ret; -} +static const struct octeon_dma_map_ops *octeon_pci_dma_ops; -static dma_addr_t octeon_unity_phys_to_dma(struct device *dev, phys_addr_t paddr) -{ - return paddr; -} - -static phys_addr_t octeon_unity_dma_to_phys(struct device *dev, dma_addr_t daddr) +void __init octeon_pci_dma_init(void) { - return daddr; + switch (octeon_dma_bar_type) { + case OCTEON_DMA_BAR_TYPE_PCIE: + octeon_pci_dma_ops = &octeon_gen1_ops; + break; + case OCTEON_DMA_BAR_TYPE_PCIE2: + octeon_pci_dma_ops = &octeon_gen2_ops; + break; + case OCTEON_DMA_BAR_TYPE_BIG: + octeon_pci_dma_ops = &octeon_big_ops; + break; + case OCTEON_DMA_BAR_TYPE_SMALL: + octeon_pci_dma_ops = &octeon_small_ops; + break; + default: + BUG(); + } } - -struct octeon_dma_map_ops { - const struct dma_map_ops dma_map_ops; - dma_addr_t (*phys_to_dma)(struct device *dev, phys_addr_t paddr); - phys_addr_t (*dma_to_phys)(struct device *dev, dma_addr_t daddr); -}; +#endif /* CONFIG_PCI */ dma_addr_t __phys_to_dma(struct device *dev, phys_addr_t paddr) { - struct octeon_dma_map_ops *ops = container_of(get_dma_ops(dev), - struct octeon_dma_map_ops, - dma_map_ops); - - return ops->phys_to_dma(dev, paddr); +#ifdef CONFIG_PCI + if (dev && dev_is_pci(dev)) + return octeon_pci_dma_ops->phys_to_dma(dev, paddr); +#endif + return paddr; } -EXPORT_SYMBOL(__phys_to_dma); phys_addr_t __dma_to_phys(struct device *dev, dma_addr_t daddr) { - struct octeon_dma_map_ops *ops = container_of(get_dma_ops(dev), - struct octeon_dma_map_ops, - dma_map_ops); - - return ops->dma_to_phys(dev, daddr); +#ifdef CONFIG_PCI + if (dev && dev_is_pci(dev)) + return octeon_pci_dma_ops->dma_to_phys(dev, daddr); +#endif + return daddr; } -EXPORT_SYMBOL(__dma_to_phys); - -static struct octeon_dma_map_ops octeon_linear_dma_map_ops = { - .dma_map_ops = { - .alloc = octeon_dma_alloc_coherent, - .free = swiotlb_free, - .map_page = octeon_dma_map_page, - .unmap_page = swiotlb_unmap_page, - .map_sg = octeon_dma_map_sg, - .unmap_sg = swiotlb_unmap_sg_attrs, - .sync_single_for_cpu = swiotlb_sync_single_for_cpu, - .sync_single_for_device = octeon_dma_sync_single_for_device, - .sync_sg_for_cpu = swiotlb_sync_sg_for_cpu, - .sync_sg_for_device = octeon_dma_sync_sg_for_device, - .mapping_error = swiotlb_dma_mapping_error, - .dma_supported = swiotlb_dma_supported - }, - .phys_to_dma = octeon_unity_phys_to_dma, - .dma_to_phys = octeon_unity_dma_to_phys -}; char *octeon_swiotlb; @@ -283,52 +248,4 @@ void __init plat_swiotlb_setup(void) if (swiotlb_init_with_tbl(octeon_swiotlb, swiotlb_nslabs, 1) == -ENOMEM) panic("Cannot allocate SWIOTLB buffer"); - - mips_dma_map_ops = &octeon_linear_dma_map_ops.dma_map_ops; } - -#ifdef CONFIG_PCI -static struct octeon_dma_map_ops _octeon_pci_dma_map_ops = { - .dma_map_ops = { - .alloc = octeon_dma_alloc_coherent, - .free = swiotlb_free, - .map_page = octeon_dma_map_page, - .unmap_page = swiotlb_unmap_page, - .map_sg = octeon_dma_map_sg, - .unmap_sg = swiotlb_unmap_sg_attrs, - .sync_single_for_cpu = swiotlb_sync_single_for_cpu, - .sync_single_for_device = octeon_dma_sync_single_for_device, - .sync_sg_for_cpu = swiotlb_sync_sg_for_cpu, - .sync_sg_for_device = octeon_dma_sync_sg_for_device, - .mapping_error = 
swiotlb_dma_mapping_error, - .dma_supported = swiotlb_dma_supported - }, -}; - -const struct dma_map_ops *octeon_pci_dma_map_ops; - -void __init octeon_pci_dma_init(void) -{ - switch (octeon_dma_bar_type) { - case OCTEON_DMA_BAR_TYPE_PCIE2: - _octeon_pci_dma_map_ops.phys_to_dma = octeon_gen2_phys_to_dma; - _octeon_pci_dma_map_ops.dma_to_phys = octeon_gen2_dma_to_phys; - break; - case OCTEON_DMA_BAR_TYPE_PCIE: - _octeon_pci_dma_map_ops.phys_to_dma = octeon_gen1_phys_to_dma; - _octeon_pci_dma_map_ops.dma_to_phys = octeon_gen1_dma_to_phys; - break; - case OCTEON_DMA_BAR_TYPE_BIG: - _octeon_pci_dma_map_ops.phys_to_dma = octeon_big_phys_to_dma; - _octeon_pci_dma_map_ops.dma_to_phys = octeon_big_dma_to_phys; - break; - case OCTEON_DMA_BAR_TYPE_SMALL: - _octeon_pci_dma_map_ops.phys_to_dma = octeon_small_phys_to_dma; - _octeon_pci_dma_map_ops.dma_to_phys = octeon_small_dma_to_phys; - break; - default: - BUG(); - } - octeon_pci_dma_map_ops = &_octeon_pci_dma_map_ops.dma_map_ops; -} -#endif /* CONFIG_PCI */ diff --git a/arch/mips/cavium-octeon/executive/cvmx-helper-rgmii.c b/arch/mips/cavium-octeon/executive/cvmx-helper-rgmii.c index d18ed5af62f4..b8898e2b8a6f 100644 --- a/arch/mips/cavium-octeon/executive/cvmx-helper-rgmii.c +++ b/arch/mips/cavium-octeon/executive/cvmx-helper-rgmii.c @@ -4,7 +4,7 @@ * Contact: support@caviumnetworks.com * This file is part of the OCTEON SDK * - * Copyright (c) 2003-2008 Cavium Networks + * Copyright (C) 2003-2018 Cavium, Inc. * * This file is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License, Version 2, as @@ -42,9 +42,6 @@ #include #include -void __cvmx_interrupt_gmxx_enable(int interface); -void __cvmx_interrupt_asxx_enable(int block); - /** * Probe RGMII ports and determine the number present * diff --git a/arch/mips/cavium-octeon/executive/cvmx-helper-sgmii.c b/arch/mips/cavium-octeon/executive/cvmx-helper-sgmii.c index 578283350776..a176358c5a21 100644 --- a/arch/mips/cavium-octeon/executive/cvmx-helper-sgmii.c +++ b/arch/mips/cavium-octeon/executive/cvmx-helper-sgmii.c @@ -4,7 +4,7 @@ * Contact: support@caviumnetworks.com * This file is part of the OCTEON SDK * - * Copyright (c) 2003-2008 Cavium Networks + * Copyright (C) 2003-2018 Cavium, Inc. * * This file is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License, Version 2, as @@ -39,10 +39,7 @@ #include #include - -void __cvmx_interrupt_gmxx_enable(int interface); -void __cvmx_interrupt_pcsx_intx_en_reg_enable(int index, int block); -void __cvmx_interrupt_pcsxx_int_en_reg_enable(int index); +#include /** * Perform initialization required only once for an SGMII port. diff --git a/arch/mips/cavium-octeon/executive/cvmx-helper-spi.c b/arch/mips/cavium-octeon/executive/cvmx-helper-spi.c index ef16aa00167b..2a574d272671 100644 --- a/arch/mips/cavium-octeon/executive/cvmx-helper-spi.c +++ b/arch/mips/cavium-octeon/executive/cvmx-helper-spi.c @@ -4,7 +4,7 @@ * Contact: support@caviumnetworks.com * This file is part of the OCTEON SDK * - * Copyright (c) 2003-2008 Cavium Networks + * Copyright (C) 2003-2018 Cavium, Inc. 
* * This file is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License, Version 2, as @@ -25,10 +25,6 @@ * Contact Cavium Networks for more information ***********************license end**************************************/ -void __cvmx_interrupt_gmxx_enable(int interface); -void __cvmx_interrupt_spxx_int_msk_enable(int index); -void __cvmx_interrupt_stxx_int_msk_enable(int index); - /* * Functions for SPI initialization, configuration, * and monitoring. @@ -41,6 +37,8 @@ void __cvmx_interrupt_stxx_int_msk_enable(int index); #include #include +#include +#include /* * CVMX_HELPER_SPI_TIMEOUT is used to determine how long the SPI diff --git a/arch/mips/cavium-octeon/executive/cvmx-helper-xaui.c b/arch/mips/cavium-octeon/executive/cvmx-helper-xaui.c index 19d54e02c185..2bb6912a580d 100644 --- a/arch/mips/cavium-octeon/executive/cvmx-helper-xaui.c +++ b/arch/mips/cavium-octeon/executive/cvmx-helper-xaui.c @@ -4,7 +4,7 @@ * Contact: support@caviumnetworks.com * This file is part of the OCTEON SDK * - * Copyright (c) 2003-2008 Cavium Networks + * Copyright (C) 2003-2018 Cavium, Inc. * * This file is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License, Version 2, as @@ -39,12 +39,9 @@ #include #include +#include #include -void __cvmx_interrupt_gmxx_enable(int interface); -void __cvmx_interrupt_pcsx_intx_en_reg_enable(int index, int block); -void __cvmx_interrupt_pcsxx_int_en_reg_enable(int index); - int __cvmx_helper_xaui_enumerate(int interface) { union cvmx_gmxx_hg2_control gmx_hg2_control; diff --git a/arch/mips/cavium-octeon/octeon-irq.c b/arch/mips/cavium-octeon/octeon-irq.c index b3aec101a65d..8272d8c648ca 100644 --- a/arch/mips/cavium-octeon/octeon-irq.c +++ b/arch/mips/cavium-octeon/octeon-irq.c @@ -814,7 +814,7 @@ static int octeon_irq_ciu_set_affinity(struct irq_data *data, pen = &per_cpu(octeon_irq_ciu1_en_mirror, cpu); if (cpumask_test_cpu(cpu, dest) && enable_one) { - enable_one = 0; + enable_one = false; __set_bit(cd->bit, pen); } else { __clear_bit(cd->bit, pen); diff --git a/arch/mips/cavium-octeon/octeon-platform.c b/arch/mips/cavium-octeon/octeon-platform.c index 8505db478904..807cadaf554e 100644 --- a/arch/mips/cavium-octeon/octeon-platform.c +++ b/arch/mips/cavium-octeon/octeon-platform.c @@ -322,6 +322,7 @@ static int __init octeon_ehci_device_init(void) return 0; pd = of_find_device_by_node(ehci_node); + of_node_put(ehci_node); if (!pd) return 0; @@ -384,6 +385,7 @@ static int __init octeon_ohci_device_init(void) return 0; pd = of_find_device_by_node(ohci_node); + of_node_put(ohci_node); if (!pd) return 0; @@ -1067,6 +1069,6 @@ end_led: static int __init octeon_publish_devices(void) { - return of_platform_bus_probe(NULL, octeon_ids, NULL); + return of_platform_populate(NULL, octeon_ids, NULL, NULL); } arch_initcall(octeon_publish_devices); diff --git a/arch/mips/cavium-octeon/setup.c b/arch/mips/cavium-octeon/setup.c index a8034d0dcade..c2426232db06 100644 --- a/arch/mips/cavium-octeon/setup.c +++ b/arch/mips/cavium-octeon/setup.c @@ -36,6 +36,7 @@ #include #include #include +#include #include #include @@ -1108,7 +1109,7 @@ void __init plat_mem_setup(void) * Emit one character to the boot UART. Exported for use by the * watchdog timer. 
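+ * (The old version's int return value was never checked; this patch
+ * drops it tree-wide without touching a single caller, hence the
+ * switch to void below.)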
*/ -int prom_putchar(char c) +void prom_putchar(char c) { uint64_t lsrval; @@ -1119,7 +1120,6 @@ int prom_putchar(char c) /* Write the byte */ cvmx_write_csr(CVMX_MIO_UARTX_THR(octeon_uart), c & 0xffull); - return 1; } EXPORT_SYMBOL(prom_putchar); @@ -1154,11 +1154,7 @@ void __init prom_free_prom_memory(void) } void __init octeon_fill_mac_addresses(void); -int octeon_prune_device_tree(void); -extern const char __appended_dtb; -extern const char __dtb_octeon_3xxx_begin; -extern const char __dtb_octeon_68xx_begin; void __init device_tree_init(void) { const void *fdt; diff --git a/arch/mips/configs/ci20_defconfig b/arch/mips/configs/ci20_defconfig index be23fd25eeaa..030ff9c205fb 100644 --- a/arch/mips/configs/ci20_defconfig +++ b/arch/mips/configs/ci20_defconfig @@ -92,6 +92,8 @@ CONFIG_SERIAL_OF_PLATFORM=y # CONFIG_HW_RANDOM is not set CONFIG_I2C=y CONFIG_I2C_JZ4780=y +CONFIG_SPI=y +CONFIG_SPI_GPIO=y CONFIG_GPIO_SYSFS=y CONFIG_GPIO_INGENIC=y # CONFIG_HWMON is not set diff --git a/arch/mips/configs/generic_defconfig b/arch/mips/configs/generic_defconfig index 26b1cd5ffbf5..684c9dcba126 100644 --- a/arch/mips/configs/generic_defconfig +++ b/arch/mips/configs/generic_defconfig @@ -43,9 +43,6 @@ CONFIG_NETFILTER=y CONFIG_DEVTMPFS=y CONFIG_DEVTMPFS_MOUNT=y CONFIG_SCSI=y -# CONFIG_INPUT_MOUSEDEV is not set -# CONFIG_INPUT_KEYBOARD is not set -# CONFIG_INPUT_MOUSE is not set # CONFIG_SERIO is not set CONFIG_HW_RANDOM=y # CONFIG_HWMON is not set diff --git a/arch/mips/configs/malta_defconfig b/arch/mips/configs/malta_defconfig index df8a9a15ca83..81058295d35f 100644 --- a/arch/mips/configs/malta_defconfig +++ b/arch/mips/configs/malta_defconfig @@ -317,6 +317,7 @@ CONFIG_MOUSE_PS2_ELANTECH=y CONFIG_SERIAL_8250=y CONFIG_SERIAL_8250_CONSOLE=y CONFIG_POWER_RESET=y +CONFIG_POWER_RESET_PIIX4_POWEROFF=y CONFIG_POWER_RESET_SYSCON=y # CONFIG_HWMON is not set CONFIG_FB=y diff --git a/arch/mips/configs/malta_kvm_defconfig b/arch/mips/configs/malta_kvm_defconfig index 14df9ef15d40..5c10cddc39d3 100644 --- a/arch/mips/configs/malta_kvm_defconfig +++ b/arch/mips/configs/malta_kvm_defconfig @@ -328,6 +328,7 @@ CONFIG_INPUT_MOUSEDEV=y CONFIG_SERIAL_8250=y CONFIG_SERIAL_8250_CONSOLE=y CONFIG_POWER_RESET=y +CONFIG_POWER_RESET_PIIX4_POWEROFF=y CONFIG_POWER_RESET_SYSCON=y # CONFIG_HWMON is not set CONFIG_FB=y diff --git a/arch/mips/configs/malta_kvm_guest_defconfig b/arch/mips/configs/malta_kvm_guest_defconfig index 25092e344574..bb694f5065f1 100644 --- a/arch/mips/configs/malta_kvm_guest_defconfig +++ b/arch/mips/configs/malta_kvm_guest_defconfig @@ -330,6 +330,7 @@ CONFIG_INPUT_MOUSEDEV=y CONFIG_SERIAL_8250=y CONFIG_SERIAL_8250_CONSOLE=y CONFIG_POWER_RESET=y +CONFIG_POWER_RESET_PIIX4_POWEROFF=y CONFIG_POWER_RESET_SYSCON=y # CONFIG_HWMON is not set CONFIG_FB=y diff --git a/arch/mips/configs/malta_qemu_32r6_defconfig b/arch/mips/configs/malta_qemu_32r6_defconfig index 210bf609f785..5b5306b80576 100644 --- a/arch/mips/configs/malta_qemu_32r6_defconfig +++ b/arch/mips/configs/malta_qemu_32r6_defconfig @@ -133,6 +133,7 @@ CONFIG_SERIAL_8250=y CONFIG_SERIAL_8250_CONSOLE=y CONFIG_HW_RANDOM=y CONFIG_POWER_RESET=y +CONFIG_POWER_RESET_PIIX4_POWEROFF=y CONFIG_POWER_RESET_SYSCON=y # CONFIG_HWMON is not set CONFIG_FB=y diff --git a/arch/mips/configs/maltaaprp_defconfig b/arch/mips/configs/maltaaprp_defconfig index e5934aa98397..85543599448f 100644 --- a/arch/mips/configs/maltaaprp_defconfig +++ b/arch/mips/configs/maltaaprp_defconfig @@ -133,6 +133,7 @@ CONFIG_SERIAL_8250=y CONFIG_SERIAL_8250_CONSOLE=y CONFIG_HW_RANDOM=y 
CONFIG_POWER_RESET=y +CONFIG_POWER_RESET_PIIX4_POWEROFF=y CONFIG_POWER_RESET_SYSCON=y # CONFIG_HWMON is not set CONFIG_FB=y diff --git a/arch/mips/configs/maltasmvp_defconfig b/arch/mips/configs/maltasmvp_defconfig index cb2ca11c1789..067bb84ac916 100644 --- a/arch/mips/configs/maltasmvp_defconfig +++ b/arch/mips/configs/maltasmvp_defconfig @@ -134,6 +134,7 @@ CONFIG_SERIAL_8250=y CONFIG_SERIAL_8250_CONSOLE=y CONFIG_HW_RANDOM=y CONFIG_POWER_RESET=y +CONFIG_POWER_RESET_PIIX4_POWEROFF=y CONFIG_POWER_RESET_SYSCON=y # CONFIG_HWMON is not set CONFIG_FB=y diff --git a/arch/mips/configs/maltasmvp_eva_defconfig b/arch/mips/configs/maltasmvp_eva_defconfig index be29fcec69fc..dfc78c3172a3 100644 --- a/arch/mips/configs/maltasmvp_eva_defconfig +++ b/arch/mips/configs/maltasmvp_eva_defconfig @@ -137,6 +137,7 @@ CONFIG_SERIAL_8250=y CONFIG_SERIAL_8250_CONSOLE=y CONFIG_HW_RANDOM=y CONFIG_POWER_RESET=y +CONFIG_POWER_RESET_PIIX4_POWEROFF=y CONFIG_POWER_RESET_SYSCON=y # CONFIG_HWMON is not set CONFIG_FB=y diff --git a/arch/mips/configs/maltaup_defconfig b/arch/mips/configs/maltaup_defconfig index 40462d4c90a0..50a2288c69f8 100644 --- a/arch/mips/configs/maltaup_defconfig +++ b/arch/mips/configs/maltaup_defconfig @@ -132,6 +132,7 @@ CONFIG_SERIAL_8250=y CONFIG_SERIAL_8250_CONSOLE=y CONFIG_HW_RANDOM=y CONFIG_POWER_RESET=y +CONFIG_POWER_RESET_PIIX4_POWEROFF=y CONFIG_POWER_RESET_SYSCON=y # CONFIG_HWMON is not set CONFIG_FB=y diff --git a/arch/mips/configs/maltaup_xpa_defconfig b/arch/mips/configs/maltaup_xpa_defconfig index 4e50176cb3df..99a19cf5f9ba 100644 --- a/arch/mips/configs/maltaup_xpa_defconfig +++ b/arch/mips/configs/maltaup_xpa_defconfig @@ -326,6 +326,7 @@ CONFIG_MOUSE_PS2_ELANTECH=y CONFIG_SERIAL_8250=y CONFIG_SERIAL_8250_CONSOLE=y CONFIG_POWER_RESET=y +CONFIG_POWER_RESET_PIIX4_POWEROFF=y CONFIG_POWER_RESET_SYSCON=y # CONFIG_HWMON is not set CONFIG_FB=y diff --git a/arch/mips/fw/arc/arc_con.c b/arch/mips/fw/arc/arc_con.c index 769d4b9ac82e..365e3913231e 100644 --- a/arch/mips/fw/arc/arc_con.c +++ b/arch/mips/fw/arc/arc_con.c @@ -12,6 +12,7 @@ #include #include #include +#include #include static void prom_console_write(struct console *co, const char *s, diff --git a/arch/mips/fw/arc/promlib.c b/arch/mips/fw/arc/promlib.c index 7e8ba5ce95be..be381307fbb0 100644 --- a/arch/mips/fw/arc/promlib.c +++ b/arch/mips/fw/arc/promlib.c @@ -9,6 +9,7 @@ #include #include #include +#include /* * IP22 boardcache is not compatible with board caches. Thus we disable it diff --git a/arch/mips/fw/sni/sniprom.c b/arch/mips/fw/sni/sniprom.c index 6aa264b9856a..8772617b64ce 100644 --- a/arch/mips/fw/sni/sniprom.c +++ b/arch/mips/fw/sni/sniprom.c @@ -19,6 +19,7 @@ #include #include #include +#include /* special SNI prom calls */ /* diff --git a/arch/mips/generic/Kconfig b/arch/mips/generic/Kconfig index ba9b2c8cce68..08e33c6b2539 100644 --- a/arch/mips/generic/Kconfig +++ b/arch/mips/generic/Kconfig @@ -35,13 +35,13 @@ config LEGACY_BOARD_OCELOT depends on LEGACY_BOARD_SEAD3=n select LEGACY_BOARDS select MSCC_OCELOT + select SYS_HAS_EARLY_PRINTK + select USE_GENERIC_EARLY_PRINTK_8250 config MSCC_OCELOT bool select GPIOLIB select MSCC_OCELOT_IRQ - select SYS_HAS_EARLY_PRINTK - select USE_GENERIC_EARLY_PRINTK_8250 comment "FIT/UHI Boards" @@ -65,6 +65,14 @@ config FIT_IMAGE_FDT_XILFPGA Enable this to include the FDT for the MIPSfpga platform from Imagination Technologies in the FIT kernel image. 
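Looking back at the qca953x_clocks_init() code added earlier in this patch, the PLL arithmetic is easier to follow with concrete numbers plugged in. Below is a self-contained sketch of the CPU PLL derivation; the register field values are hypothetical illustrations, not values read from real hardware:

#include <stdint.h>
#include <stdio.h>

/* Re-derivation of the qca953x CPU PLL formula from the patch above,
 * using made-up example field values (not hardware dumps). */
int main(void)
{
	uint32_t ref_rate = 25 * 1000 * 1000;	/* 25 MHz reference clock */
	uint32_t nint = 26, frac = 0;		/* integer/fractional feedback divider */
	uint32_t ref_div = 1, out_div = 0;	/* reference and output dividers */
	uint32_t cpu_pll;

	cpu_pll = nint * ref_rate / ref_div;		/* integer multiple of the reference */
	cpu_pll += frac * (ref_rate >> 6) / ref_div;	/* each NFRAC step adds ref_rate/64 */
	cpu_pll /= (1 << out_div);			/* output divider is a power of two */

	/* prints "cpu_pll = 650000000 Hz" for these example values */
	printf("cpu_pll = %u Hz\n", cpu_pll);
	return 0;
}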
+config FIT_IMAGE_FDT_OCELOT_PCB123 + bool "Include FDT for Microsemi Ocelot PCB123" + select MSCC_OCELOT + help + Enable this to include the FDT for the Ocelot PCB123 platform + from Microsemi in the FIT kernel image. + This requires u-boot on the platform. + config VIRT_BOARD_RANCHU bool "Support Ranchu platform for Android emulator" help diff --git a/arch/mips/generic/Platform b/arch/mips/generic/Platform index 0dd0d5d460a5..879cb80396c8 100644 --- a/arch/mips/generic/Platform +++ b/arch/mips/generic/Platform @@ -16,4 +16,5 @@ all-$(CONFIG_MIPS_GENERIC) := vmlinux.gz.itb its-y := vmlinux.its.S its-$(CONFIG_FIT_IMAGE_FDT_BOSTON) += board-boston.its.S its-$(CONFIG_FIT_IMAGE_FDT_NI169445) += board-ni169445.its.S +its-$(CONFIG_FIT_IMAGE_FDT_OCELOT_PCB123) += board-ocelot_pcb123.its.S its-$(CONFIG_FIT_IMAGE_FDT_XILFPGA) += board-xilfpga.its.S diff --git a/arch/mips/generic/board-ocelot_pcb123.its.S b/arch/mips/generic/board-ocelot_pcb123.its.S new file mode 100644 index 000000000000..5a7d5e1c878a --- /dev/null +++ b/arch/mips/generic/board-ocelot_pcb123.its.S @@ -0,0 +1,23 @@ +/* SPDX-License-Identifier: (GPL-2.0 OR MIT) */ +/ { + images { + fdt@ocelot_pcb123 { + description = "MSCC Ocelot PCB123 Device Tree"; + data = /incbin/("boot/dts/mscc/ocelot_pcb123.dtb"); + type = "flat_dt"; + arch = "mips"; + compression = "none"; + hash@0 { + algo = "sha1"; + }; + }; + }; + + configurations { + conf@ocelot_pcb123 { + description = "Ocelot Linux kernel"; + kernel = "kernel@0"; + fdt = "fdt@ocelot_pcb123"; + }; + }; +}; diff --git a/arch/mips/generic/init.c b/arch/mips/generic/init.c index 5ba6fcc26fa7..a106f8113842 100644 --- a/arch/mips/generic/init.c +++ b/arch/mips/generic/init.c @@ -14,7 +14,6 @@ #include #include #include -#include #include #include @@ -204,22 +203,11 @@ void __init arch_init_irq(void) "mti,cpu-interrupt-controller"); if (!cpu_has_veic && !intc_node) mips_cpu_irq_init(); + of_node_put(intc_node); irqchip_init(); } -static int __init publish_devices(void) -{ - if (!of_have_populated_dt()) - panic("Device-tree not present"); - - if (of_platform_populate(NULL, of_default_bus_match_table, NULL, NULL)) - panic("Failed to populate DT"); - - return 0; -} -arch_initcall(publish_devices); - void __init prom_free_prom_memory(void) { } diff --git a/arch/mips/generic/yamon-dt.c b/arch/mips/generic/yamon-dt.c index b408dac722ac..7ba4ad5cc1d6 100644 --- a/arch/mips/generic/yamon-dt.c +++ b/arch/mips/generic/yamon-dt.c @@ -27,8 +27,6 @@ __init int yamon_dt_append_cmdline(void *fdt) /* find or add chosen node */ chosen_off = fdt_path_offset(fdt, "/chosen"); - if (chosen_off == -FDT_ERR_NOTFOUND) - chosen_off = fdt_path_offset(fdt, "/chosen@0"); if (chosen_off == -FDT_ERR_NOTFOUND) chosen_off = fdt_add_subnode(fdt, 0, "chosen"); if (chosen_off < 0) { @@ -220,8 +218,6 @@ __init int yamon_dt_serial_config(void *fdt) /* find or add chosen node */ chosen_off = fdt_path_offset(fdt, "/chosen"); - if (chosen_off == -FDT_ERR_NOTFOUND) - chosen_off = fdt_path_offset(fdt, "/chosen@0"); if (chosen_off == -FDT_ERR_NOTFOUND) chosen_off = fdt_add_subnode(fdt, 0, "chosen"); if (chosen_off < 0) { diff --git a/arch/mips/include/asm/Kbuild b/arch/mips/include/asm/Kbuild index 45d541baf359..58351e48421e 100644 --- a/arch/mips/include/asm/Kbuild +++ b/arch/mips/include/asm/Kbuild @@ -8,6 +8,7 @@ generic-y += irq_work.h generic-y += local64.h generic-y += mcs_spinlock.h generic-y += mm-arch-hooks.h +generic-y += msi.h generic-y += parport.h generic-y += percpu.h generic-y += preempt.h diff --git 
a/arch/mips/include/asm/atomic.h b/arch/mips/include/asm/atomic.h index 0ab176bdb8e8..0269b3de8b51 100644 --- a/arch/mips/include/asm/atomic.h +++ b/arch/mips/include/asm/atomic.h @@ -22,6 +22,17 @@ #include #include +/* + * Using a branch-likely instruction to check the result of an sc instruction + * works around a bug present in R10000 CPUs prior to revision 3.0 that could + * cause ll-sc sequences to execute non-atomically. + */ +#if R10000_LLSC_WAR +# define __scbeqz "beqzl" +#else +# define __scbeqz "beqz" +#endif + #define ATOMIC_INIT(i) { (i) } /* @@ -44,31 +55,18 @@ #define ATOMIC_OP(op, c_op, asm_op) \ static __inline__ void atomic_##op(int i, atomic_t * v) \ { \ - if (kernel_uses_llsc && R10000_LLSC_WAR) { \ + if (kernel_uses_llsc) { \ int temp; \ \ __asm__ __volatile__( \ - " .set arch=r4000 \n" \ + " .set "MIPS_ISA_LEVEL" \n" \ "1: ll %0, %1 # atomic_" #op " \n" \ " " #asm_op " %0, %2 \n" \ " sc %0, %1 \n" \ - " beqzl %0, 1b \n" \ + "\t" __scbeqz " %0, 1b \n" \ " .set mips0 \n" \ : "=&r" (temp), "+" GCC_OFF_SMALL_ASM() (v->counter) \ : "Ir" (i)); \ - } else if (kernel_uses_llsc) { \ - int temp; \ - \ - do { \ - __asm__ __volatile__( \ - " .set "MIPS_ISA_LEVEL" \n" \ - " ll %0, %1 # atomic_" #op "\n" \ - " " #asm_op " %0, %2 \n" \ - " sc %0, %1 \n" \ - " .set mips0 \n" \ - : "=&r" (temp), "+" GCC_OFF_SMALL_ASM() (v->counter) \ - : "Ir" (i)); \ - } while (unlikely(!temp)); \ } else { \ unsigned long flags; \ \ @@ -83,36 +81,20 @@ static __inline__ int atomic_##op##_return_relaxed(int i, atomic_t * v) \ { \ int result; \ \ - if (kernel_uses_llsc && R10000_LLSC_WAR) { \ + if (kernel_uses_llsc) { \ int temp; \ \ __asm__ __volatile__( \ - " .set arch=r4000 \n" \ + " .set "MIPS_ISA_LEVEL" \n" \ "1: ll %1, %2 # atomic_" #op "_return \n" \ " " #asm_op " %0, %1, %3 \n" \ " sc %0, %2 \n" \ - " beqzl %0, 1b \n" \ + "\t" __scbeqz " %0, 1b \n" \ " " #asm_op " %0, %1, %3 \n" \ " .set mips0 \n" \ : "=&r" (result), "=&r" (temp), \ "+" GCC_OFF_SMALL_ASM() (v->counter) \ : "Ir" (i)); \ - } else if (kernel_uses_llsc) { \ - int temp; \ - \ - do { \ - __asm__ __volatile__( \ - " .set "MIPS_ISA_LEVEL" \n" \ - " ll %1, %2 # atomic_" #op "_return \n" \ - " " #asm_op " %0, %1, %3 \n" \ - " sc %0, %2 \n" \ - " .set mips0 \n" \ - : "=&r" (result), "=&r" (temp), \ - "+" GCC_OFF_SMALL_ASM() (v->counter) \ - : "Ir" (i)); \ - } while (unlikely(!result)); \ - \ - result = temp; result c_op i; \ } else { \ unsigned long flags; \ \ @@ -131,36 +113,20 @@ static __inline__ int atomic_fetch_##op##_relaxed(int i, atomic_t * v) \ { \ int result; \ \ - if (kernel_uses_llsc && R10000_LLSC_WAR) { \ + if (kernel_uses_llsc) { \ int temp; \ \ __asm__ __volatile__( \ - " .set arch=r4000 \n" \ + " .set "MIPS_ISA_LEVEL" \n" \ "1: ll %1, %2 # atomic_fetch_" #op " \n" \ " " #asm_op " %0, %1, %3 \n" \ " sc %0, %2 \n" \ - " beqzl %0, 1b \n" \ + "\t" __scbeqz " %0, 1b \n" \ " move %0, %1 \n" \ " .set mips0 \n" \ : "=&r" (result), "=&r" (temp), \ "+" GCC_OFF_SMALL_ASM() (v->counter) \ : "Ir" (i)); \ - } else if (kernel_uses_llsc) { \ - int temp; \ - \ - do { \ - __asm__ __volatile__( \ - " .set "MIPS_ISA_LEVEL" \n" \ - " ll %1, %2 # atomic_fetch_" #op " \n" \ - " " #asm_op " %0, %1, %3 \n" \ - " sc %0, %2 \n" \ - " .set mips0 \n" \ - : "=&r" (result), "=&r" (temp), \ - "+" GCC_OFF_SMALL_ASM() (v->counter) \ - : "Ir" (i)); \ - } while (unlikely(!result)); \ - \ - result = temp; \ } else { \ unsigned long flags; \ \ @@ -218,38 +184,17 @@ static __inline__ int atomic_sub_if_positive(int i, atomic_t * v) smp_mb__before_llsc(); - if 
(kernel_uses_llsc && R10000_LLSC_WAR) { - int temp; - - __asm__ __volatile__( - " .set arch=r4000 \n" - "1: ll %1, %2 # atomic_sub_if_positive\n" - " subu %0, %1, %3 \n" - " bltz %0, 1f \n" - " sc %0, %2 \n" - " .set noreorder \n" - " beqzl %0, 1b \n" - " subu %0, %1, %3 \n" - " .set reorder \n" - "1: \n" - " .set mips0 \n" - : "=&r" (result), "=&r" (temp), - "+" GCC_OFF_SMALL_ASM() (v->counter) - : "Ir" (i), GCC_OFF_SMALL_ASM() (v->counter) - : "memory"); - } else if (kernel_uses_llsc) { + if (kernel_uses_llsc) { int temp; __asm__ __volatile__( " .set "MIPS_ISA_LEVEL" \n" "1: ll %1, %2 # atomic_sub_if_positive\n" " subu %0, %1, %3 \n" + " move %1, %0 \n" " bltz %0, 1f \n" - " sc %0, %2 \n" - " .set noreorder \n" - " beqz %0, 1b \n" - " subu %0, %1, %3 \n" - " .set reorder \n" + " sc %1, %2 \n" + "\t" __scbeqz " %1, 1b \n" "1: \n" " .set mips0 \n" : "=&r" (result), "=&r" (temp), @@ -274,97 +219,12 @@ static __inline__ int atomic_sub_if_positive(int i, atomic_t * v) #define atomic_cmpxchg(v, o, n) (cmpxchg(&((v)->counter), (o), (n))) #define atomic_xchg(v, new) (xchg(&((v)->counter), (new))) -/** - * __atomic_add_unless - add unless the number is a given value - * @v: pointer of type atomic_t - * @a: the amount to add to v... - * @u: ...unless v is equal to u. - * - * Atomically adds @a to @v, so long as it was not @u. - * Returns the old value of @v. - */ -static __inline__ int __atomic_add_unless(atomic_t *v, int a, int u) -{ - int c, old; - c = atomic_read(v); - for (;;) { - if (unlikely(c == (u))) - break; - old = atomic_cmpxchg((v), c, c + (a)); - if (likely(old == c)) - break; - c = old; - } - return c; -} - -#define atomic_dec_return(v) atomic_sub_return(1, (v)) -#define atomic_inc_return(v) atomic_add_return(1, (v)) - -/* - * atomic_sub_and_test - subtract value from variable and test result - * @i: integer value to subtract - * @v: pointer of type atomic_t - * - * Atomically subtracts @i from @v and returns - * true if the result is zero, or false for all - * other cases. - */ -#define atomic_sub_and_test(i, v) (atomic_sub_return((i), (v)) == 0) - -/* - * atomic_inc_and_test - increment and test - * @v: pointer of type atomic_t - * - * Atomically increments @v by 1 - * and returns true if the result is zero, or false for all - * other cases. - */ -#define atomic_inc_and_test(v) (atomic_inc_return(v) == 0) - -/* - * atomic_dec_and_test - decrement by 1 and test - * @v: pointer of type atomic_t - * - * Atomically decrements @v by 1 and - * returns true if the result is 0, or false for all other - * cases. - */ -#define atomic_dec_and_test(v) (atomic_sub_return(1, (v)) == 0) - /* * atomic_dec_if_positive - decrement by 1 if old value positive * @v: pointer of type atomic_t */ #define atomic_dec_if_positive(v) atomic_sub_if_positive(1, v) -/* - * atomic_inc - increment atomic variable - * @v: pointer of type atomic_t - * - * Atomically increments @v by 1. - */ -#define atomic_inc(v) atomic_add(1, (v)) - -/* - * atomic_dec - decrement and test - * @v: pointer of type atomic_t - * - * Atomically decrements @v by 1. - */ -#define atomic_dec(v) atomic_sub(1, (v)) - -/* - * atomic_add_negative - add and test if negative - * @v: pointer of type atomic_t - * @i: integer value to add - * - * Atomically adds @i to @v and returns true - * if the result is negative, or false when - * result is greater than or equal to zero. 
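For reference, the ATOMIC_OP() macro above now expands, for ATOMIC_OP(add, +=, addu) on an LL/SC-capable CPU without the R10000 erratum (so __scbeqz is plain "beqz"), to roughly the following; this is a sketch of the expansion, not a quote from the preprocessed source:

	static __inline__ void atomic_add(int i, atomic_t *v)
	{
		int temp;

		__asm__ __volatile__(
		"	.set	"MIPS_ISA_LEVEL"		\n"
		"1:	ll	%0, %1		# atomic_add	\n"	/* load-linked */
		"	addu	%0, %2				\n"
		"	sc	%0, %1				\n"	/* store-conditional: %0 = 0 on failure */
		"	beqz	%0, 1b				\n"	/* retry if the line was disturbed */
		"	.set	mips0				\n"
		: "=&r" (temp), "+" GCC_OFF_SMALL_ASM() (v->counter)
		: "Ir" (i));
	}

With branch-likely gone from the fast path, the same one-loop body serves both the erratum and non-erratum cases, which is what lets the old duplicated else-if branches above be deleted.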
- */ -#define atomic_add_negative(i, v) (atomic_add_return(i, (v)) < 0) - #ifdef CONFIG_64BIT #define ATOMIC64_INIT(i) { (i) } @@ -386,31 +246,18 @@ static __inline__ int __atomic_add_unless(atomic_t *v, int a, int u) #define ATOMIC64_OP(op, c_op, asm_op) \ static __inline__ void atomic64_##op(long i, atomic64_t * v) \ { \ - if (kernel_uses_llsc && R10000_LLSC_WAR) { \ + if (kernel_uses_llsc) { \ long temp; \ \ __asm__ __volatile__( \ - " .set arch=r4000 \n" \ + " .set "MIPS_ISA_LEVEL" \n" \ "1: lld %0, %1 # atomic64_" #op " \n" \ " " #asm_op " %0, %2 \n" \ " scd %0, %1 \n" \ - " beqzl %0, 1b \n" \ + "\t" __scbeqz " %0, 1b \n" \ " .set mips0 \n" \ : "=&r" (temp), "+" GCC_OFF_SMALL_ASM() (v->counter) \ : "Ir" (i)); \ - } else if (kernel_uses_llsc) { \ - long temp; \ - \ - do { \ - __asm__ __volatile__( \ - " .set "MIPS_ISA_LEVEL" \n" \ - " lld %0, %1 # atomic64_" #op "\n" \ - " " #asm_op " %0, %2 \n" \ - " scd %0, %1 \n" \ - " .set mips0 \n" \ - : "=&r" (temp), "+" GCC_OFF_SMALL_ASM() (v->counter) \ - : "Ir" (i)); \ - } while (unlikely(!temp)); \ } else { \ unsigned long flags; \ \ @@ -425,37 +272,20 @@ static __inline__ long atomic64_##op##_return_relaxed(long i, atomic64_t * v) \ { \ long result; \ \ - if (kernel_uses_llsc && R10000_LLSC_WAR) { \ + if (kernel_uses_llsc) { \ long temp; \ \ __asm__ __volatile__( \ - " .set arch=r4000 \n" \ + " .set "MIPS_ISA_LEVEL" \n" \ "1: lld %1, %2 # atomic64_" #op "_return\n" \ " " #asm_op " %0, %1, %3 \n" \ " scd %0, %2 \n" \ - " beqzl %0, 1b \n" \ + "\t" __scbeqz " %0, 1b \n" \ " " #asm_op " %0, %1, %3 \n" \ " .set mips0 \n" \ : "=&r" (result), "=&r" (temp), \ "+" GCC_OFF_SMALL_ASM() (v->counter) \ : "Ir" (i)); \ - } else if (kernel_uses_llsc) { \ - long temp; \ - \ - do { \ - __asm__ __volatile__( \ - " .set "MIPS_ISA_LEVEL" \n" \ - " lld %1, %2 # atomic64_" #op "_return\n" \ - " " #asm_op " %0, %1, %3 \n" \ - " scd %0, %2 \n" \ - " .set mips0 \n" \ - : "=&r" (result), "=&r" (temp), \ - "=" GCC_OFF_SMALL_ASM() (v->counter) \ - : "Ir" (i), GCC_OFF_SMALL_ASM() (v->counter) \ - : "memory"); \ - } while (unlikely(!result)); \ - \ - result = temp; result c_op i; \ } else { \ unsigned long flags; \ \ @@ -478,33 +308,16 @@ static __inline__ long atomic64_fetch_##op##_relaxed(long i, atomic64_t * v) \ long temp; \ \ __asm__ __volatile__( \ - " .set arch=r4000 \n" \ + " .set "MIPS_ISA_LEVEL" \n" \ "1: lld %1, %2 # atomic64_fetch_" #op "\n" \ " " #asm_op " %0, %1, %3 \n" \ " scd %0, %2 \n" \ - " beqzl %0, 1b \n" \ + "\t" __scbeqz " %0, 1b \n" \ " move %0, %1 \n" \ " .set mips0 \n" \ : "=&r" (result), "=&r" (temp), \ "+" GCC_OFF_SMALL_ASM() (v->counter) \ : "Ir" (i)); \ - } else if (kernel_uses_llsc) { \ - long temp; \ - \ - do { \ - __asm__ __volatile__( \ - " .set "MIPS_ISA_LEVEL" \n" \ - " lld %1, %2 # atomic64_fetch_" #op "\n" \ - " " #asm_op " %0, %1, %3 \n" \ - " scd %0, %2 \n" \ - " .set mips0 \n" \ - : "=&r" (result), "=&r" (temp), \ - "=" GCC_OFF_SMALL_ASM() (v->counter) \ - : "Ir" (i), GCC_OFF_SMALL_ASM() (v->counter) \ - : "memory"); \ - } while (unlikely(!result)); \ - \ - result = temp; \ } else { \ unsigned long flags; \ \ @@ -563,38 +376,17 @@ static __inline__ long atomic64_sub_if_positive(long i, atomic64_t * v) smp_mb__before_llsc(); - if (kernel_uses_llsc && R10000_LLSC_WAR) { - long temp; - - __asm__ __volatile__( - " .set arch=r4000 \n" - "1: lld %1, %2 # atomic64_sub_if_positive\n" - " dsubu %0, %1, %3 \n" - " bltz %0, 1f \n" - " scd %0, %2 \n" - " .set noreorder \n" - " beqzl %0, 1b \n" - " dsubu %0, %1, %3 \n" - " .set reorder \n" - "1: 
\n" - " .set mips0 \n" - : "=&r" (result), "=&r" (temp), - "=" GCC_OFF_SMALL_ASM() (v->counter) - : "Ir" (i), GCC_OFF_SMALL_ASM() (v->counter) - : "memory"); - } else if (kernel_uses_llsc) { + if (kernel_uses_llsc) { long temp; __asm__ __volatile__( " .set "MIPS_ISA_LEVEL" \n" "1: lld %1, %2 # atomic64_sub_if_positive\n" " dsubu %0, %1, %3 \n" + " move %1, %0 \n" " bltz %0, 1f \n" - " scd %0, %2 \n" - " .set noreorder \n" - " beqz %0, 1b \n" - " dsubu %0, %1, %3 \n" - " .set reorder \n" + " scd %1, %2 \n" + "\t" __scbeqz " %1, 1b \n" "1: \n" " .set mips0 \n" : "=&r" (result), "=&r" (temp), @@ -620,99 +412,12 @@ static __inline__ long atomic64_sub_if_positive(long i, atomic64_t * v) ((__typeof__((v)->counter))cmpxchg(&((v)->counter), (o), (n))) #define atomic64_xchg(v, new) (xchg(&((v)->counter), (new))) -/** - * atomic64_add_unless - add unless the number is a given value - * @v: pointer of type atomic64_t - * @a: the amount to add to v... - * @u: ...unless v is equal to u. - * - * Atomically adds @a to @v, so long as it was not @u. - * Returns true iff @v was not @u. - */ -static __inline__ int atomic64_add_unless(atomic64_t *v, long a, long u) -{ - long c, old; - c = atomic64_read(v); - for (;;) { - if (unlikely(c == (u))) - break; - old = atomic64_cmpxchg((v), c, c + (a)); - if (likely(old == c)) - break; - c = old; - } - return c != (u); -} - -#define atomic64_inc_not_zero(v) atomic64_add_unless((v), 1, 0) - -#define atomic64_dec_return(v) atomic64_sub_return(1, (v)) -#define atomic64_inc_return(v) atomic64_add_return(1, (v)) - -/* - * atomic64_sub_and_test - subtract value from variable and test result - * @i: integer value to subtract - * @v: pointer of type atomic64_t - * - * Atomically subtracts @i from @v and returns - * true if the result is zero, or false for all - * other cases. - */ -#define atomic64_sub_and_test(i, v) (atomic64_sub_return((i), (v)) == 0) - -/* - * atomic64_inc_and_test - increment and test - * @v: pointer of type atomic64_t - * - * Atomically increments @v by 1 - * and returns true if the result is zero, or false for all - * other cases. - */ -#define atomic64_inc_and_test(v) (atomic64_inc_return(v) == 0) - -/* - * atomic64_dec_and_test - decrement by 1 and test - * @v: pointer of type atomic64_t - * - * Atomically decrements @v by 1 and - * returns true if the result is 0, or false for all other - * cases. - */ -#define atomic64_dec_and_test(v) (atomic64_sub_return(1, (v)) == 0) - /* * atomic64_dec_if_positive - decrement by 1 if old value positive * @v: pointer of type atomic64_t */ #define atomic64_dec_if_positive(v) atomic64_sub_if_positive(1, v) -/* - * atomic64_inc - increment atomic variable - * @v: pointer of type atomic64_t - * - * Atomically increments @v by 1. - */ -#define atomic64_inc(v) atomic64_add(1, (v)) - -/* - * atomic64_dec - decrement and test - * @v: pointer of type atomic64_t - * - * Atomically decrements @v by 1. - */ -#define atomic64_dec(v) atomic64_sub(1, (v)) - -/* - * atomic64_add_negative - add and test if negative - * @v: pointer of type atomic64_t - * @i: integer value to add - * - * Atomically adds @i to @v and returns true - * if the result is negative, or false when - * result is greater than or equal to zero. 
- */ -#define atomic64_add_negative(i, v) (atomic64_add_return(i, (v)) < 0) - #endif /* CONFIG_64BIT */ #endif /* _ASM_ATOMIC_H */ diff --git a/arch/mips/include/asm/bmips.h b/arch/mips/include/asm/bmips.h index b3e2975f83d3..bf6a8afd7ad2 100644 --- a/arch/mips/include/asm/bmips.h +++ b/arch/mips/include/asm/bmips.h @@ -123,22 +123,6 @@ static inline void bmips_write_zscm_reg(unsigned int offset, unsigned long data) barrier(); } -static inline void bmips_post_dma_flush(struct device *dev) -{ - void __iomem *cbr = BMIPS_GET_CBR(); - u32 cfg; - - if (boot_cpu_type() != CPU_BMIPS3300 && - boot_cpu_type() != CPU_BMIPS4350 && - boot_cpu_type() != CPU_BMIPS4380) - return; - - /* Flush stale data out of the readahead cache */ - cfg = __raw_readl(cbr + BMIPS_RAC_CONFIG); - __raw_writel(cfg | 0x100, cbr + BMIPS_RAC_CONFIG); - __raw_readl(cbr + BMIPS_RAC_CONFIG); -} - #endif /* !defined(__ASSEMBLY__) */ #endif /* _ASM_BMIPS_H */ diff --git a/arch/mips/include/asm/cpu-features.h b/arch/mips/include/asm/cpu-features.h index 9cdb4e4ce258..0edba3e75747 100644 --- a/arch/mips/include/asm/cpu-features.h +++ b/arch/mips/include/asm/cpu-features.h @@ -14,39 +14,77 @@ #include #include +#define __ase(ase) (cpu_data[0].ases & (ase)) +#define __opt(opt) (cpu_data[0].options & (opt)) + +/* + * Check if MIPS_ISA_REV is >= isa *and* an option or ASE is detected during + * boot (typically by cpu_probe()). + * + * Note that these should only be used in cases where a kernel built for an + * older ISA *cannot* run on a CPU which supports the feature in question. For + * example this may be used for features introduced with MIPSr6, since a kernel + * built for an older ISA cannot run on a MIPSr6 CPU. This should not be used + * for MIPSr2 features however, since a MIPSr1 or earlier kernel might run on a + * MIPSr2 CPU. + */ +#define __isa_ge_and_ase(isa, ase) ((MIPS_ISA_REV >= (isa)) && __ase(ase)) +#define __isa_ge_and_opt(isa, opt) ((MIPS_ISA_REV >= (isa)) && __opt(opt)) + +/* + * Check if MIPS_ISA_REV is >= isa *or* an option or ASE is detected during + * boot (typically by cpu_probe()). + * + * These are for use with features that are optional up until a particular ISA + * revision & then become required. + */ +#define __isa_ge_or_ase(isa, ase) ((MIPS_ISA_REV >= (isa)) || __ase(ase)) +#define __isa_ge_or_opt(isa, opt) ((MIPS_ISA_REV >= (isa)) || __opt(opt)) + +/* + * Check if MIPS_ISA_REV is < isa *and* an option or ASE is detected during + * boot (typically by cpu_probe()). + * + * These are for use with features that are optional up until a particular ISA + * revision & are then removed - ie. no longer present in any CPU implementing + * the given ISA revision. + */ +#define __isa_lt_and_ase(isa, ase) ((MIPS_ISA_REV < (isa)) && __ase(ase)) +#define __isa_lt_and_opt(isa, opt) ((MIPS_ISA_REV < (isa)) && __opt(opt)) + /* * SMP assumption: Options of CPU 0 are a superset of all processors. * This is true for all known MIPS systems. 
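To see what these wrappers buy, consider how one definition from further down in this header evaluates at compile time (a worked example; MIPS16 was removed in MIPSr6, so it uses the __isa_lt_and_ase() form):

	#define cpu_has_mips16	__isa_lt_and_ase(6, MIPS_ASE_MIPS16)

	/*
	 * Built for MIPSr6 (MIPS_ISA_REV == 6):
	 *	cpu_has_mips16 -> ((6 < 6) && __ase(MIPS_ASE_MIPS16)) -> 0
	 * so any "if (cpu_has_mips16)" block is discarded as dead code.
	 *
	 * Built for MIPSr2 (MIPS_ISA_REV == 2):
	 *	cpu_has_mips16 -> ((2 < 6) && __ase(MIPS_ASE_MIPS16))
	 * i.e. the ASE bit probed at boot by cpu_probe() decides at run time.
	 */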
*/ #ifndef cpu_has_tlb -#define cpu_has_tlb (cpu_data[0].options & MIPS_CPU_TLB) +#define cpu_has_tlb __opt(MIPS_CPU_TLB) #endif #ifndef cpu_has_ftlb -#define cpu_has_ftlb (cpu_data[0].options & MIPS_CPU_FTLB) +#define cpu_has_ftlb __opt(MIPS_CPU_FTLB) #endif #ifndef cpu_has_tlbinv -#define cpu_has_tlbinv (cpu_data[0].options & MIPS_CPU_TLBINV) +#define cpu_has_tlbinv __opt(MIPS_CPU_TLBINV) #endif #ifndef cpu_has_segments -#define cpu_has_segments (cpu_data[0].options & MIPS_CPU_SEGMENTS) +#define cpu_has_segments __opt(MIPS_CPU_SEGMENTS) #endif #ifndef cpu_has_eva -#define cpu_has_eva (cpu_data[0].options & MIPS_CPU_EVA) +#define cpu_has_eva __opt(MIPS_CPU_EVA) #endif #ifndef cpu_has_htw -#define cpu_has_htw (cpu_data[0].options & MIPS_CPU_HTW) +#define cpu_has_htw __opt(MIPS_CPU_HTW) #endif #ifndef cpu_has_ldpte -#define cpu_has_ldpte (cpu_data[0].options & MIPS_CPU_LDPTE) +#define cpu_has_ldpte __opt(MIPS_CPU_LDPTE) #endif #ifndef cpu_has_rixiex -#define cpu_has_rixiex (cpu_data[0].options & MIPS_CPU_RIXIEX) +#define cpu_has_rixiex __isa_ge_or_opt(6, MIPS_CPU_RIXIEX) #endif #ifndef cpu_has_maar -#define cpu_has_maar (cpu_data[0].options & MIPS_CPU_MAAR) +#define cpu_has_maar __opt(MIPS_CPU_MAAR) #endif #ifndef cpu_has_rw_llb -#define cpu_has_rw_llb (cpu_data[0].options & MIPS_CPU_RW_LLB) +#define cpu_has_rw_llb __isa_ge_or_opt(6, MIPS_CPU_RW_LLB) #endif /* @@ -59,18 +97,18 @@ #define cpu_has_3kex (!cpu_has_4kex) #endif #ifndef cpu_has_4kex -#define cpu_has_4kex (cpu_data[0].options & MIPS_CPU_4KEX) +#define cpu_has_4kex __isa_ge_or_opt(1, MIPS_CPU_4KEX) #endif #ifndef cpu_has_3k_cache -#define cpu_has_3k_cache (cpu_data[0].options & MIPS_CPU_3K_CACHE) +#define cpu_has_3k_cache __isa_lt_and_opt(1, MIPS_CPU_3K_CACHE) #endif #define cpu_has_6k_cache 0 #define cpu_has_8k_cache 0 #ifndef cpu_has_4k_cache -#define cpu_has_4k_cache (cpu_data[0].options & MIPS_CPU_4K_CACHE) +#define cpu_has_4k_cache __isa_ge_or_opt(1, MIPS_CPU_4K_CACHE) #endif #ifndef cpu_has_tx39_cache -#define cpu_has_tx39_cache (cpu_data[0].options & MIPS_CPU_TX39_CACHE) +#define cpu_has_tx39_cache __opt(MIPS_CPU_TX39_CACHE) #endif #ifndef cpu_has_octeon_cache #define cpu_has_octeon_cache 0 @@ -83,92 +121,92 @@ #define raw_cpu_has_fpu cpu_has_fpu #endif #ifndef cpu_has_32fpr -#define cpu_has_32fpr (cpu_data[0].options & MIPS_CPU_32FPR) +#define cpu_has_32fpr __isa_ge_or_opt(1, MIPS_CPU_32FPR) #endif #ifndef cpu_has_counter -#define cpu_has_counter (cpu_data[0].options & MIPS_CPU_COUNTER) +#define cpu_has_counter __opt(MIPS_CPU_COUNTER) #endif #ifndef cpu_has_watch -#define cpu_has_watch (cpu_data[0].options & MIPS_CPU_WATCH) +#define cpu_has_watch __opt(MIPS_CPU_WATCH) #endif #ifndef cpu_has_divec -#define cpu_has_divec (cpu_data[0].options & MIPS_CPU_DIVEC) +#define cpu_has_divec __isa_ge_or_opt(1, MIPS_CPU_DIVEC) #endif #ifndef cpu_has_vce -#define cpu_has_vce (cpu_data[0].options & MIPS_CPU_VCE) +#define cpu_has_vce __opt(MIPS_CPU_VCE) #endif #ifndef cpu_has_cache_cdex_p -#define cpu_has_cache_cdex_p (cpu_data[0].options & MIPS_CPU_CACHE_CDEX_P) +#define cpu_has_cache_cdex_p __opt(MIPS_CPU_CACHE_CDEX_P) #endif #ifndef cpu_has_cache_cdex_s -#define cpu_has_cache_cdex_s (cpu_data[0].options & MIPS_CPU_CACHE_CDEX_S) +#define cpu_has_cache_cdex_s __opt(MIPS_CPU_CACHE_CDEX_S) #endif #ifndef cpu_has_prefetch -#define cpu_has_prefetch (cpu_data[0].options & MIPS_CPU_PREFETCH) +#define cpu_has_prefetch __isa_ge_or_opt(1, MIPS_CPU_PREFETCH) #endif #ifndef cpu_has_mcheck -#define cpu_has_mcheck (cpu_data[0].options & 
MIPS_CPU_MCHECK) +#define cpu_has_mcheck __isa_ge_or_opt(1, MIPS_CPU_MCHECK) #endif #ifndef cpu_has_ejtag -#define cpu_has_ejtag (cpu_data[0].options & MIPS_CPU_EJTAG) +#define cpu_has_ejtag __opt(MIPS_CPU_EJTAG) #endif #ifndef cpu_has_llsc -#define cpu_has_llsc (cpu_data[0].options & MIPS_CPU_LLSC) +#define cpu_has_llsc __isa_ge_or_opt(1, MIPS_CPU_LLSC) #endif #ifndef cpu_has_bp_ghist -#define cpu_has_bp_ghist (cpu_data[0].options & MIPS_CPU_BP_GHIST) +#define cpu_has_bp_ghist __opt(MIPS_CPU_BP_GHIST) #endif #ifndef kernel_uses_llsc #define kernel_uses_llsc cpu_has_llsc #endif #ifndef cpu_has_guestctl0ext -#define cpu_has_guestctl0ext (cpu_data[0].options & MIPS_CPU_GUESTCTL0EXT) +#define cpu_has_guestctl0ext __opt(MIPS_CPU_GUESTCTL0EXT) #endif #ifndef cpu_has_guestctl1 -#define cpu_has_guestctl1 (cpu_data[0].options & MIPS_CPU_GUESTCTL1) +#define cpu_has_guestctl1 __opt(MIPS_CPU_GUESTCTL1) #endif #ifndef cpu_has_guestctl2 -#define cpu_has_guestctl2 (cpu_data[0].options & MIPS_CPU_GUESTCTL2) +#define cpu_has_guestctl2 __opt(MIPS_CPU_GUESTCTL2) #endif #ifndef cpu_has_guestid -#define cpu_has_guestid (cpu_data[0].options & MIPS_CPU_GUESTID) +#define cpu_has_guestid __opt(MIPS_CPU_GUESTID) #endif #ifndef cpu_has_drg -#define cpu_has_drg (cpu_data[0].options & MIPS_CPU_DRG) +#define cpu_has_drg __opt(MIPS_CPU_DRG) #endif #ifndef cpu_has_mips16 -#define cpu_has_mips16 (cpu_data[0].ases & MIPS_ASE_MIPS16) +#define cpu_has_mips16 __isa_lt_and_ase(6, MIPS_ASE_MIPS16) #endif #ifndef cpu_has_mips16e2 -#define cpu_has_mips16e2 (cpu_data[0].ases & MIPS_ASE_MIPS16E2) +#define cpu_has_mips16e2 __isa_lt_and_ase(6, MIPS_ASE_MIPS16E2) #endif #ifndef cpu_has_mdmx -#define cpu_has_mdmx (cpu_data[0].ases & MIPS_ASE_MDMX) +#define cpu_has_mdmx __isa_lt_and_ase(6, MIPS_ASE_MDMX) #endif #ifndef cpu_has_mips3d -#define cpu_has_mips3d (cpu_data[0].ases & MIPS_ASE_MIPS3D) +#define cpu_has_mips3d __isa_lt_and_ase(6, MIPS_ASE_MIPS3D) #endif #ifndef cpu_has_smartmips -#define cpu_has_smartmips (cpu_data[0].ases & MIPS_ASE_SMARTMIPS) +#define cpu_has_smartmips __isa_lt_and_ase(6, MIPS_ASE_SMARTMIPS) #endif #ifndef cpu_has_rixi -#define cpu_has_rixi (cpu_data[0].options & MIPS_CPU_RIXI) +#define cpu_has_rixi __isa_ge_or_opt(6, MIPS_CPU_RIXI) #endif #ifndef cpu_has_mmips # ifdef CONFIG_SYS_SUPPORTS_MICROMIPS -# define cpu_has_mmips (cpu_data[0].options & MIPS_CPU_MICROMIPS) +# define cpu_has_mmips __opt(MIPS_CPU_MICROMIPS) # else # define cpu_has_mmips 0 # endif #endif #ifndef cpu_has_lpa -#define cpu_has_lpa (cpu_data[0].options & MIPS_CPU_LPA) +#define cpu_has_lpa __opt(MIPS_CPU_LPA) #endif #ifndef cpu_has_mvh -#define cpu_has_mvh (cpu_data[0].options & MIPS_CPU_MVH) +#define cpu_has_mvh __opt(MIPS_CPU_MVH) #endif #ifndef cpu_has_xpa #define cpu_has_xpa (cpu_has_lpa && cpu_has_mvh) @@ -338,32 +376,32 @@ #endif #ifndef cpu_has_dsp -#define cpu_has_dsp (cpu_data[0].ases & MIPS_ASE_DSP) +#define cpu_has_dsp __ase(MIPS_ASE_DSP) #endif #ifndef cpu_has_dsp2 -#define cpu_has_dsp2 (cpu_data[0].ases & MIPS_ASE_DSP2P) +#define cpu_has_dsp2 __ase(MIPS_ASE_DSP2P) #endif #ifndef cpu_has_dsp3 -#define cpu_has_dsp3 (cpu_data[0].ases & MIPS_ASE_DSP3) +#define cpu_has_dsp3 __ase(MIPS_ASE_DSP3) #endif #ifndef cpu_has_mipsmt -#define cpu_has_mipsmt (cpu_data[0].ases & MIPS_ASE_MIPSMT) +#define cpu_has_mipsmt __isa_lt_and_ase(6, MIPS_ASE_MIPSMT) #endif #ifndef cpu_has_vp -#define cpu_has_vp (cpu_data[0].options & MIPS_CPU_VP) +#define cpu_has_vp __isa_ge_and_opt(6, MIPS_CPU_VP) #endif #ifndef cpu_has_userlocal -#define cpu_has_userlocal 
(cpu_data[0].options & MIPS_CPU_ULRI) +#define cpu_has_userlocal __isa_ge_or_opt(6, MIPS_CPU_ULRI) #endif #ifdef CONFIG_32BIT # ifndef cpu_has_nofpuex -# define cpu_has_nofpuex (cpu_data[0].options & MIPS_CPU_NOFPUEX) +# define cpu_has_nofpuex __isa_lt_and_opt(1, MIPS_CPU_NOFPUEX) # endif # ifndef cpu_has_64bits # define cpu_has_64bits (cpu_data[0].isa_level & MIPS_CPU_ISA_64BIT) @@ -405,19 +443,19 @@ #endif #if defined(CONFIG_CPU_MIPSR2_IRQ_VI) && !defined(cpu_has_vint) -# define cpu_has_vint (cpu_data[0].options & MIPS_CPU_VINT) +# define cpu_has_vint __opt(MIPS_CPU_VINT) #elif !defined(cpu_has_vint) # define cpu_has_vint 0 #endif #if defined(CONFIG_CPU_MIPSR2_IRQ_EI) && !defined(cpu_has_veic) -# define cpu_has_veic (cpu_data[0].options & MIPS_CPU_VEIC) +# define cpu_has_veic __opt(MIPS_CPU_VEIC) #elif !defined(cpu_has_veic) # define cpu_has_veic 0 #endif #ifndef cpu_has_inclusive_pcaches -#define cpu_has_inclusive_pcaches (cpu_data[0].options & MIPS_CPU_INCLUSIVE_CACHES) +#define cpu_has_inclusive_pcaches __opt(MIPS_CPU_INCLUSIVE_CACHES) #endif #ifndef cpu_dcache_line_size @@ -438,63 +476,63 @@ #endif #ifndef cpu_has_perf_cntr_intr_bit -#define cpu_has_perf_cntr_intr_bit (cpu_data[0].options & MIPS_CPU_PCI) +#define cpu_has_perf_cntr_intr_bit __opt(MIPS_CPU_PCI) #endif #ifndef cpu_has_vz -#define cpu_has_vz (cpu_data[0].ases & MIPS_ASE_VZ) +#define cpu_has_vz __ase(MIPS_ASE_VZ) #endif #if defined(CONFIG_CPU_HAS_MSA) && !defined(cpu_has_msa) -# define cpu_has_msa (cpu_data[0].ases & MIPS_ASE_MSA) +# define cpu_has_msa __ase(MIPS_ASE_MSA) #elif !defined(cpu_has_msa) # define cpu_has_msa 0 #endif #ifndef cpu_has_ufr -# define cpu_has_ufr (cpu_data[0].options & MIPS_CPU_UFR) +# define cpu_has_ufr __opt(MIPS_CPU_UFR) #endif #ifndef cpu_has_fre -# define cpu_has_fre (cpu_data[0].options & MIPS_CPU_FRE) +# define cpu_has_fre __opt(MIPS_CPU_FRE) #endif #ifndef cpu_has_cdmm -# define cpu_has_cdmm (cpu_data[0].options & MIPS_CPU_CDMM) +# define cpu_has_cdmm __opt(MIPS_CPU_CDMM) #endif #ifndef cpu_has_small_pages -# define cpu_has_small_pages (cpu_data[0].options & MIPS_CPU_SP) +# define cpu_has_small_pages __opt(MIPS_CPU_SP) #endif #ifndef cpu_has_nan_legacy -#define cpu_has_nan_legacy (cpu_data[0].options & MIPS_CPU_NAN_LEGACY) +#define cpu_has_nan_legacy __isa_lt_and_opt(6, MIPS_CPU_NAN_LEGACY) #endif #ifndef cpu_has_nan_2008 -#define cpu_has_nan_2008 (cpu_data[0].options & MIPS_CPU_NAN_2008) +#define cpu_has_nan_2008 __isa_ge_or_opt(6, MIPS_CPU_NAN_2008) #endif #ifndef cpu_has_ebase_wg -# define cpu_has_ebase_wg (cpu_data[0].options & MIPS_CPU_EBASE_WG) +# define cpu_has_ebase_wg __opt(MIPS_CPU_EBASE_WG) #endif #ifndef cpu_has_badinstr -# define cpu_has_badinstr (cpu_data[0].options & MIPS_CPU_BADINSTR) +# define cpu_has_badinstr __isa_ge_or_opt(6, MIPS_CPU_BADINSTR) #endif #ifndef cpu_has_badinstrp -# define cpu_has_badinstrp (cpu_data[0].options & MIPS_CPU_BADINSTRP) +# define cpu_has_badinstrp __isa_ge_or_opt(6, MIPS_CPU_BADINSTRP) #endif #ifndef cpu_has_contextconfig -# define cpu_has_contextconfig (cpu_data[0].options & MIPS_CPU_CTXTC) +# define cpu_has_contextconfig __opt(MIPS_CPU_CTXTC) #endif #ifndef cpu_has_perf -# define cpu_has_perf (cpu_data[0].options & MIPS_CPU_PERF) +# define cpu_has_perf __opt(MIPS_CPU_PERF) #endif -#if defined(CONFIG_SMP) && (MIPS_ISA_REV >= 6) +#ifdef CONFIG_SMP /* * Some systems share FTLB RAMs between threads within a core (siblings in * kernel parlance). 
This means that FTLB entries may become invalid at almost @@ -507,7 +545,7 @@ */ # ifndef cpu_has_shared_ftlb_ram # define cpu_has_shared_ftlb_ram \ - (current_cpu_data.options & MIPS_CPU_SHARED_FTLB_RAM) + __isa_ge_and_opt(6, MIPS_CPU_SHARED_FTLB_RAM) # endif /* @@ -524,9 +562,9 @@ */ # ifndef cpu_has_shared_ftlb_entries # define cpu_has_shared_ftlb_entries \ - (current_cpu_data.options & MIPS_CPU_SHARED_FTLB_ENTRIES) + __isa_ge_and_opt(6, MIPS_CPU_SHARED_FTLB_ENTRIES) # endif -#endif /* SMP && MIPS_ISA_REV >= 6 */ +#endif /* SMP */ #ifndef cpu_has_shared_ftlb_ram # define cpu_has_shared_ftlb_ram 0 @@ -537,7 +575,7 @@ #ifdef CONFIG_MIPS_MT_SMP # define cpu_has_mipsmt_pertccounters \ - (cpu_data[0].options & MIPS_CPU_MT_PER_TC_PERF_COUNTERS) + __isa_lt_and_opt(6, MIPS_CPU_MT_PER_TC_PERF_COUNTERS) #else # define cpu_has_mipsmt_pertccounters 0 #endif /* CONFIG_MIPS_MT_SMP */ diff --git a/arch/mips/include/asm/cpu.h b/arch/mips/include/asm/cpu.h index 5b9d02ef4f60..dacbdb84516a 100644 --- a/arch/mips/include/asm/cpu.h +++ b/arch/mips/include/asm/cpu.h @@ -225,31 +225,32 @@ * Definitions for 7:0 on legacy processors */ -#define PRID_REV_TX4927 0x0022 -#define PRID_REV_TX4937 0x0030 -#define PRID_REV_R4400 0x0040 -#define PRID_REV_R3000A 0x0030 -#define PRID_REV_R3000 0x0020 -#define PRID_REV_R2000A 0x0010 -#define PRID_REV_TX3912 0x0010 -#define PRID_REV_TX3922 0x0030 -#define PRID_REV_TX3927 0x0040 -#define PRID_REV_VR4111 0x0050 -#define PRID_REV_VR4181 0x0050 /* Same as VR4111 */ -#define PRID_REV_VR4121 0x0060 -#define PRID_REV_VR4122 0x0070 -#define PRID_REV_VR4181A 0x0070 /* Same as VR4122 */ -#define PRID_REV_VR4130 0x0080 -#define PRID_REV_34K_V1_0_2 0x0022 -#define PRID_REV_LOONGSON1B 0x0020 -#define PRID_REV_LOONGSON1C 0x0020 /* Same as Loongson-1B */ -#define PRID_REV_LOONGSON2E 0x0002 -#define PRID_REV_LOONGSON2F 0x0003 -#define PRID_REV_LOONGSON3A_R1 0x0005 -#define PRID_REV_LOONGSON3B_R1 0x0006 -#define PRID_REV_LOONGSON3B_R2 0x0007 -#define PRID_REV_LOONGSON3A_R2 0x0008 -#define PRID_REV_LOONGSON3A_R3 0x0009 +#define PRID_REV_TX4927 0x0022 +#define PRID_REV_TX4937 0x0030 +#define PRID_REV_R4400 0x0040 +#define PRID_REV_R3000A 0x0030 +#define PRID_REV_R3000 0x0020 +#define PRID_REV_R2000A 0x0010 +#define PRID_REV_TX3912 0x0010 +#define PRID_REV_TX3922 0x0030 +#define PRID_REV_TX3927 0x0040 +#define PRID_REV_VR4111 0x0050 +#define PRID_REV_VR4181 0x0050 /* Same as VR4111 */ +#define PRID_REV_VR4121 0x0060 +#define PRID_REV_VR4122 0x0070 +#define PRID_REV_VR4181A 0x0070 /* Same as VR4122 */ +#define PRID_REV_VR4130 0x0080 +#define PRID_REV_34K_V1_0_2 0x0022 +#define PRID_REV_LOONGSON1B 0x0020 +#define PRID_REV_LOONGSON1C 0x0020 /* Same as Loongson-1B */ +#define PRID_REV_LOONGSON2E 0x0002 +#define PRID_REV_LOONGSON2F 0x0003 +#define PRID_REV_LOONGSON3A_R1 0x0005 +#define PRID_REV_LOONGSON3B_R1 0x0006 +#define PRID_REV_LOONGSON3B_R2 0x0007 +#define PRID_REV_LOONGSON3A_R2 0x0008 +#define PRID_REV_LOONGSON3A_R3_0 0x0009 +#define PRID_REV_LOONGSON3A_R3_1 0x000d /* * Older processors used to encode processor version and revision in two diff --git a/arch/mips/include/asm/dma-coherence.h b/arch/mips/include/asm/dma-coherence.h index 72d0eab02afc..8eda48748ed5 100644 --- a/arch/mips/include/asm/dma-coherence.h +++ b/arch/mips/include/asm/dma-coherence.h @@ -21,10 +21,10 @@ enum coherent_io_user_state { extern enum coherent_io_user_state coherentio; extern int hw_coherentio; #else -#ifdef CONFIG_DMA_COHERENT -#define coherentio IO_COHERENCE_ENABLED -#else +#ifdef CONFIG_DMA_NONCOHERENT 
#define coherentio IO_COHERENCE_DISABLED +#else +#define coherentio IO_COHERENCE_ENABLED #endif #define hw_coherentio 0 #endif /* CONFIG_DMA_MAYBE_COHERENT */ diff --git a/arch/mips/include/asm/dma-direct.h b/arch/mips/include/asm/dma-direct.h index f32f15530aba..b5c240806e1b 100644 --- a/arch/mips/include/asm/dma-direct.h +++ b/arch/mips/include/asm/dma-direct.h @@ -1 +1,16 @@ -#include +/* SPDX-License-Identifier: GPL-2.0 */ +#ifndef _MIPS_DMA_DIRECT_H +#define _MIPS_DMA_DIRECT_H 1 + +static inline bool dma_capable(struct device *dev, dma_addr_t addr, size_t size) +{ + if (!dev->dma_mask) + return false; + + return addr + size - 1 <= *dev->dma_mask; +} + +dma_addr_t __phys_to_dma(struct device *dev, phys_addr_t paddr); +phys_addr_t __dma_to_phys(struct device *dev, dma_addr_t daddr); + +#endif /* _MIPS_DMA_DIRECT_H */ diff --git a/arch/mips/include/asm/dma-mapping.h b/arch/mips/include/asm/dma-mapping.h index 886e75a383f2..e81c4e97ff1a 100644 --- a/arch/mips/include/asm/dma-mapping.h +++ b/arch/mips/include/asm/dma-mapping.h @@ -2,19 +2,21 @@ #ifndef _ASM_DMA_MAPPING_H #define _ASM_DMA_MAPPING_H -#include -#include -#include +#include -#ifndef CONFIG_SGI_IP27 /* Kludge to fix 2.6.39 build for IP27 */ -#include -#endif - -extern const struct dma_map_ops *mips_dma_map_ops; +extern const struct dma_map_ops jazz_dma_ops; static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus) { - return mips_dma_map_ops; +#if defined(CONFIG_MACH_JAZZ) + return &jazz_dma_ops; +#elif defined(CONFIG_SWIOTLB) + return &swiotlb_dma_ops; +#elif defined(CONFIG_DMA_NONCOHERENT_OPS) + return &dma_noncoherent_ops; +#else + return &dma_direct_ops; +#endif } #define arch_setup_dma_ops arch_setup_dma_ops diff --git a/arch/mips/include/asm/io.h b/arch/mips/include/asm/io.h index a7d0b836f2f7..54c730aed327 100644 --- a/arch/mips/include/asm/io.h +++ b/arch/mips/include/asm/io.h @@ -12,6 +12,8 @@ #ifndef _ASM_IO_H #define _ASM_IO_H +#define ARCH_HAS_IOREMAP_WC + #include #include #include @@ -141,14 +143,14 @@ static inline void * phys_to_virt(unsigned long address) /* * ISA I/O bus memory addresses are 1:1 with the physical address. */ -static inline unsigned long isa_virt_to_bus(volatile void * address) +static inline unsigned long isa_virt_to_bus(volatile void *address) { - return (unsigned long)address - PAGE_OFFSET; + return virt_to_phys(address); } -static inline void * isa_bus_to_virt(unsigned long address) +static inline void *isa_bus_to_virt(unsigned long address) { - return (void *)(address + PAGE_OFFSET); + return phys_to_virt(address); } #define isa_page_to_bus page_to_phys @@ -278,15 +280,25 @@ static inline void __iomem * __ioremap_mode(phys_addr_t offset, unsigned long si #define ioremap_cache ioremap_cachable /* - * These two are MIPS specific ioremap variant. ioremap_cacheable_cow - * requests a cachable mapping, ioremap_uncached_accelerated requests a - * mapping using the uncached accelerated mode which isn't supported on - * all processors. + * ioremap_wc - map bus memory into CPU space + * @offset: bus address of the memory + * @size: size of the resource to map + * + * ioremap_wc performs a platform specific sequence of operations to + * make bus memory CPU accessible via the readb/readw/readl/writeb/ + * writew/writel functions and the other mmio helpers. The returned + * address is not guaranteed to be usable directly as a virtual + * address. + * + * This version of ioremap ensures that the memory is marked uncachable + * but accelerated by means of write-combining feature. 
It is specifically + * useful for PCIe prefetchable windows, which may vastly improve a + * communications performance. If it was determined on boot stage, what + * CPU CCA doesn't support UCA, the method shall fall-back to the + * _CACHE_UNCACHED option (see cpu_probe() method). */ -#define ioremap_cacheable_cow(offset, size) \ - __ioremap_mode((offset), (size), _CACHE_CACHABLE_COW) -#define ioremap_uncached_accelerated(offset, size) \ - __ioremap_mode((offset), (size), _CACHE_UNCACHED_ACCELERATED) +#define ioremap_wc(offset, size) \ + __ioremap_mode((offset), (size), boot_cpu_data.writecombine) static inline void iounmap(const volatile void __iomem *addr) { @@ -414,6 +426,8 @@ static inline type pfx##in##bwlq##p(unsigned long port) \ __val = *__addr; \ slow; \ \ + /* prevent prefetching of coherent DMA data prematurely */ \ + rmb(); \ return pfx##ioswab##bwlq(__addr, __val); \ } @@ -588,7 +602,7 @@ static inline void memcpy_toio(volatile void __iomem *dst, const void *src, int * * This API used to be exported; it now is for arch code internal use only. */ -#if defined(CONFIG_DMA_NONCOHERENT) || defined(CONFIG_DMA_MAYBE_COHERENT) +#ifdef CONFIG_DMA_NONCOHERENT extern void (*_dma_cache_wback_inv)(unsigned long start, unsigned long size); extern void (*_dma_cache_wback)(unsigned long start, unsigned long size); @@ -607,7 +621,7 @@ extern void (*_dma_cache_inv)(unsigned long start, unsigned long size); #define dma_cache_inv(start,size) \ do { (void) (start); (void) (size); } while (0) -#endif /* CONFIG_DMA_NONCOHERENT || CONFIG_DMA_MAYBE_COHERENT */ +#endif /* CONFIG_DMA_NONCOHERENT */ /* * Read a 32-bit register that requires a 64-bit read cycle on the bus. diff --git a/arch/mips/include/asm/kprobes.h b/arch/mips/include/asm/kprobes.h index ad1a99948f27..a72dfbf1babb 100644 --- a/arch/mips/include/asm/kprobes.h +++ b/arch/mips/include/asm/kprobes.h @@ -68,16 +68,6 @@ struct prev_kprobe { unsigned long saved_epc; }; -#define MAX_JPROBES_STACK_SIZE 128 -#define MAX_JPROBES_STACK_ADDR \ - (((unsigned long)current_thread_info()) + THREAD_SIZE - 32 - sizeof(struct pt_regs)) - -#define MIN_JPROBES_STACK_SIZE(ADDR) \ - ((((ADDR) + MAX_JPROBES_STACK_SIZE) > MAX_JPROBES_STACK_ADDR) \ - ? MAX_JPROBES_STACK_ADDR - (ADDR) \ - : MAX_JPROBES_STACK_SIZE) - - #define SKIP_DELAYSLOT 0x0001 /* per-cpu kprobe control block */ @@ -86,12 +76,9 @@ struct kprobe_ctlblk { unsigned long kprobe_old_SR; unsigned long kprobe_saved_SR; unsigned long kprobe_saved_epc; - unsigned long jprobe_saved_sp; - struct pt_regs jprobe_saved_regs; /* Per-thread fields, used while emulating branches */ unsigned long flags; unsigned long target_epc; - u8 jprobes_stack[MAX_JPROBES_STACK_SIZE]; struct prev_kprobe prev_kprobe; }; diff --git a/arch/mips/include/asm/mach-ar7/spaces.h b/arch/mips/include/asm/mach-ar7/spaces.h index 660ab64c0fc9..a004d94dfbdd 100644 --- a/arch/mips/include/asm/mach-ar7/spaces.h +++ b/arch/mips/include/asm/mach-ar7/spaces.h @@ -17,9 +17,6 @@ #define PAGE_OFFSET _AC(0x94000000, UL) #define PHYS_OFFSET _AC(0x14000000, UL) -#define UNCAC_BASE _AC(0xb4000000, UL) /* 0xa0000000 + PHYS_OFFSET */ -#define IO_BASE UNCAC_BASE - #include #endif /* __ASM_AR7_SPACES_H */ diff --git a/arch/mips/include/asm/mach-ath25/dma-coherence.h b/arch/mips/include/asm/mach-ath25/dma-coherence.h deleted file mode 100644 index d5defdde32db..000000000000 --- a/arch/mips/include/asm/mach-ath25/dma-coherence.h +++ /dev/null @@ -1,76 +0,0 @@ -/* - * This file is subject to the terms and conditions of the GNU General Public - * License. 
See the file "COPYING" in the main directory of this archive - * for more details. - * - * Copyright (C) 2006 Ralf Baechle - * Copyright (C) 2007 Felix Fietkau - * - */ -#ifndef __ASM_MACH_ATH25_DMA_COHERENCE_H -#define __ASM_MACH_ATH25_DMA_COHERENCE_H - -#include - -/* - * We need some arbitrary non-zero value to be programmed to the BAR1 register - * of PCI host controller to enable DMA. The same value should be used as the - * offset to calculate the physical address of DMA buffer for PCI devices. - */ -#define AR2315_PCI_HOST_SDRAM_BASEADDR 0x20000000 - -static inline dma_addr_t ath25_dev_offset(struct device *dev) -{ -#ifdef CONFIG_PCI - extern struct bus_type pci_bus_type; - - if (dev && dev->bus == &pci_bus_type) - return AR2315_PCI_HOST_SDRAM_BASEADDR; -#endif - return 0; -} - -static inline dma_addr_t -plat_map_dma_mem(struct device *dev, void *addr, size_t size) -{ - return virt_to_phys(addr) + ath25_dev_offset(dev); -} - -static inline dma_addr_t -plat_map_dma_mem_page(struct device *dev, struct page *page) -{ - return page_to_phys(page) + ath25_dev_offset(dev); -} - -static inline unsigned long -plat_dma_addr_to_phys(struct device *dev, dma_addr_t dma_addr) -{ - return dma_addr - ath25_dev_offset(dev); -} - -static inline void -plat_unmap_dma_mem(struct device *dev, dma_addr_t dma_addr, size_t size, - enum dma_data_direction direction) -{ -} - -static inline int plat_dma_supported(struct device *dev, u64 mask) -{ - return 1; -} - -static inline int plat_device_is_coherent(struct device *dev) -{ -#ifdef CONFIG_DMA_COHERENT - return 1; -#endif -#ifdef CONFIG_DMA_NONCOHERENT - return 0; -#endif -} - -static inline void plat_post_dma_flush(struct device *dev) -{ -} - -#endif /* __ASM_MACH_ATH25_DMA_COHERENCE_H */ diff --git a/arch/mips/include/asm/mach-ath79/ar71xx_regs.h b/arch/mips/include/asm/mach-ath79/ar71xx_regs.h index d99ca862dae3..284b4fa23e03 100644 --- a/arch/mips/include/asm/mach-ath79/ar71xx_regs.h +++ b/arch/mips/include/asm/mach-ath79/ar71xx_regs.h @@ -20,6 +20,10 @@ #include #define AR71XX_APB_BASE 0x18000000 +#define AR71XX_GE0_BASE 0x19000000 +#define AR71XX_GE0_SIZE 0x10000 +#define AR71XX_GE1_BASE 0x1a000000 +#define AR71XX_GE1_SIZE 0x10000 #define AR71XX_EHCI_BASE 0x1b000000 #define AR71XX_EHCI_SIZE 0x1000 #define AR71XX_OHCI_BASE 0x1c000000 @@ -39,6 +43,8 @@ #define AR71XX_PLL_SIZE 0x100 #define AR71XX_RESET_BASE (AR71XX_APB_BASE + 0x00060000) #define AR71XX_RESET_SIZE 0x100 +#define AR71XX_MII_BASE (AR71XX_APB_BASE + 0x00070000) +#define AR71XX_MII_SIZE 0x100 #define AR71XX_PCI_MEM_BASE 0x10000000 #define AR71XX_PCI_MEM_SIZE 0x07000000 @@ -81,18 +87,39 @@ #define AR933X_UART_BASE (AR71XX_APB_BASE + 0x00020000) #define AR933X_UART_SIZE 0x14 +#define AR933X_GMAC_BASE (AR71XX_APB_BASE + 0x00070000) +#define AR933X_GMAC_SIZE 0x04 #define AR933X_WMAC_BASE (AR71XX_APB_BASE + 0x00100000) #define AR933X_WMAC_SIZE 0x20000 #define AR933X_EHCI_BASE 0x1b000000 #define AR933X_EHCI_SIZE 0x1000 +#define AR934X_GMAC_BASE (AR71XX_APB_BASE + 0x00070000) +#define AR934X_GMAC_SIZE 0x14 #define AR934X_WMAC_BASE (AR71XX_APB_BASE + 0x00100000) #define AR934X_WMAC_SIZE 0x20000 #define AR934X_EHCI_BASE 0x1b000000 #define AR934X_EHCI_SIZE 0x200 +#define AR934X_NFC_BASE 0x1b000200 +#define AR934X_NFC_SIZE 0xb8 #define AR934X_SRIF_BASE (AR71XX_APB_BASE + 0x00116000) #define AR934X_SRIF_SIZE 0x1000 +#define QCA953X_GMAC_BASE (AR71XX_APB_BASE + 0x00070000) +#define QCA953X_GMAC_SIZE 0x14 +#define QCA953X_WMAC_BASE (AR71XX_APB_BASE + 0x00100000) +#define QCA953X_WMAC_SIZE 0x20000 +#define 
QCA953X_EHCI_BASE 0x1b000000 +#define QCA953X_EHCI_SIZE 0x200 +#define QCA953X_SRIF_BASE (AR71XX_APB_BASE + 0x00116000) +#define QCA953X_SRIF_SIZE 0x1000 + +#define QCA953X_PCI_CFG_BASE0 0x14000000 +#define QCA953X_PCI_CTRL_BASE0 (AR71XX_APB_BASE + 0x000f0000) +#define QCA953X_PCI_CRP_BASE0 (AR71XX_APB_BASE + 0x000c0000) +#define QCA953X_PCI_MEM_BASE0 0x10000000 +#define QCA953X_PCI_MEM_SIZE 0x02000000 + #define QCA955X_PCI_MEM_BASE0 0x10000000 #define QCA955X_PCI_MEM_BASE1 0x12000000 #define QCA955X_PCI_MEM_SIZE 0x02000000 @@ -106,11 +133,72 @@ #define QCA955X_PCI_CTRL_BASE1 (AR71XX_APB_BASE + 0x00280000) #define QCA955X_PCI_CTRL_SIZE 0x100 +#define QCA955X_GMAC_BASE (AR71XX_APB_BASE + 0x00070000) +#define QCA955X_GMAC_SIZE 0x40 #define QCA955X_WMAC_BASE (AR71XX_APB_BASE + 0x00100000) #define QCA955X_WMAC_SIZE 0x20000 #define QCA955X_EHCI0_BASE 0x1b000000 #define QCA955X_EHCI1_BASE 0x1b400000 #define QCA955X_EHCI_SIZE 0x1000 +#define QCA955X_NFC_BASE 0x1b800200 +#define QCA955X_NFC_SIZE 0xb8 + +#define QCA956X_PCI_MEM_BASE1 0x12000000 +#define QCA956X_PCI_MEM_SIZE 0x02000000 +#define QCA956X_PCI_CFG_BASE1 0x16000000 +#define QCA956X_PCI_CFG_SIZE 0x1000 +#define QCA956X_PCI_CRP_BASE1 (AR71XX_APB_BASE + 0x00250000) +#define QCA956X_PCI_CRP_SIZE 0x1000 +#define QCA956X_PCI_CTRL_BASE1 (AR71XX_APB_BASE + 0x00280000) +#define QCA956X_PCI_CTRL_SIZE 0x100 + +#define QCA956X_WMAC_BASE (AR71XX_APB_BASE + 0x00100000) +#define QCA956X_WMAC_SIZE 0x20000 +#define QCA956X_EHCI0_BASE 0x1b000000 +#define QCA956X_EHCI1_BASE 0x1b400000 +#define QCA956X_EHCI_SIZE 0x200 +#define QCA956X_GMAC_SGMII_BASE (AR71XX_APB_BASE + 0x00070000) +#define QCA956X_GMAC_SGMII_SIZE 0x64 +#define QCA956X_PLL_BASE (AR71XX_APB_BASE + 0x00050000) +#define QCA956X_PLL_SIZE 0x50 +#define QCA956X_GMAC_BASE (AR71XX_APB_BASE + 0x00070000) +#define QCA956X_GMAC_SIZE 0x64 + +/* + * Hidden Registers + */ +#define QCA956X_MAC_CFG_BASE 0xb9000000 +#define QCA956X_MAC_CFG_SIZE 0x64 + +#define QCA956X_MAC_CFG1_REG 0x00 +#define QCA956X_MAC_CFG1_SOFT_RST BIT(31) +#define QCA956X_MAC_CFG1_RX_RST BIT(19) +#define QCA956X_MAC_CFG1_TX_RST BIT(18) +#define QCA956X_MAC_CFG1_LOOPBACK BIT(8) +#define QCA956X_MAC_CFG1_RX_EN BIT(2) +#define QCA956X_MAC_CFG1_TX_EN BIT(0) + +#define QCA956X_MAC_CFG2_REG 0x04 +#define QCA956X_MAC_CFG2_IF_1000 BIT(9) +#define QCA956X_MAC_CFG2_IF_10_100 BIT(8) +#define QCA956X_MAC_CFG2_HUGE_FRAME_EN BIT(5) +#define QCA956X_MAC_CFG2_LEN_CHECK BIT(4) +#define QCA956X_MAC_CFG2_PAD_CRC_EN BIT(2) +#define QCA956X_MAC_CFG2_FDX BIT(0) + +#define QCA956X_MAC_MII_MGMT_CFG_REG 0x20 +#define QCA956X_MGMT_CFG_CLK_DIV_20 0x07 + +#define QCA956X_MAC_FIFO_CFG0_REG 0x48 +#define QCA956X_MAC_FIFO_CFG1_REG 0x4c +#define QCA956X_MAC_FIFO_CFG2_REG 0x50 +#define QCA956X_MAC_FIFO_CFG3_REG 0x54 +#define QCA956X_MAC_FIFO_CFG4_REG 0x58 +#define QCA956X_MAC_FIFO_CFG5_REG 0x5c + +#define QCA956X_DAM_RESET_OFFSET 0xb90001bc +#define QCA956X_DAM_RESET_SIZE 0x4 +#define QCA956X_INLINE_CHKSUM_ENG BIT(27) /* * DDR_CTRL block @@ -149,6 +237,12 @@ #define AR934X_DDR_REG_FLUSH_PCIE 0xa8 #define AR934X_DDR_REG_FLUSH_WMAC 0xac +#define QCA953X_DDR_REG_FLUSH_GE0 0x9c +#define QCA953X_DDR_REG_FLUSH_GE1 0xa0 +#define QCA953X_DDR_REG_FLUSH_USB 0xa4 +#define QCA953X_DDR_REG_FLUSH_PCIE 0xa8 +#define QCA953X_DDR_REG_FLUSH_WMAC 0xac + /* * PLL block */ @@ -166,9 +260,15 @@ #define AR71XX_AHB_DIV_SHIFT 20 #define AR71XX_AHB_DIV_MASK 0x7 +#define AR71XX_ETH0_PLL_SHIFT 17 +#define AR71XX_ETH1_PLL_SHIFT 19 + #define AR724X_PLL_REG_CPU_CONFIG 0x00 #define 
AR724X_PLL_REG_PCIE_CONFIG 0x10 +#define AR724X_PLL_REG_PCIE_CONFIG_PPL_BYPASS BIT(16) +#define AR724X_PLL_REG_PCIE_CONFIG_PPL_RESET BIT(25) + #define AR724X_PLL_FB_SHIFT 0 #define AR724X_PLL_FB_MASK 0x3ff #define AR724X_PLL_REF_DIV_SHIFT 10 @@ -178,6 +278,8 @@ #define AR724X_DDR_DIV_SHIFT 22 #define AR724X_DDR_DIV_MASK 0x3 +#define AR7242_PLL_REG_ETH0_INT_CLOCK 0x2c + #define AR913X_PLL_REG_CPU_CONFIG 0x00 #define AR913X_PLL_REG_ETH_CONFIG 0x04 #define AR913X_PLL_REG_ETH0_INT_CLOCK 0x14 @@ -190,6 +292,9 @@ #define AR913X_AHB_DIV_SHIFT 19 #define AR913X_AHB_DIV_MASK 0x1 +#define AR913X_ETH0_PLL_SHIFT 20 +#define AR913X_ETH1_PLL_SHIFT 22 + #define AR933X_PLL_CPU_CONFIG_REG 0x00 #define AR933X_PLL_CLOCK_CTRL_REG 0x08 @@ -211,6 +316,8 @@ #define AR934X_PLL_CPU_CONFIG_REG 0x00 #define AR934X_PLL_DDR_CONFIG_REG 0x04 #define AR934X_PLL_CPU_DDR_CLK_CTRL_REG 0x08 +#define AR934X_PLL_SWITCH_CLOCK_CONTROL_REG 0x24 +#define AR934X_PLL_ETH_XMII_CONTROL_REG 0x2c #define AR934X_PLL_CPU_CONFIG_NFRAC_SHIFT 0 #define AR934X_PLL_CPU_CONFIG_NFRAC_MASK 0x3f @@ -243,9 +350,52 @@ #define AR934X_PLL_CPU_DDR_CLK_CTRL_DDRCLK_FROM_DDRPLL BIT(21) #define AR934X_PLL_CPU_DDR_CLK_CTRL_AHBCLK_FROM_DDRPLL BIT(24) +#define AR934X_PLL_SWITCH_CLOCK_CONTROL_MDIO_CLK_SEL BIT(6) + +#define QCA953X_PLL_CPU_CONFIG_REG 0x00 +#define QCA953X_PLL_DDR_CONFIG_REG 0x04 +#define QCA953X_PLL_CLK_CTRL_REG 0x08 +#define QCA953X_PLL_SWITCH_CLOCK_CONTROL_REG 0x24 +#define QCA953X_PLL_ETH_XMII_CONTROL_REG 0x2c +#define QCA953X_PLL_ETH_SGMII_CONTROL_REG 0x48 + +#define QCA953X_PLL_CPU_CONFIG_NFRAC_SHIFT 0 +#define QCA953X_PLL_CPU_CONFIG_NFRAC_MASK 0x3f +#define QCA953X_PLL_CPU_CONFIG_NINT_SHIFT 6 +#define QCA953X_PLL_CPU_CONFIG_NINT_MASK 0x3f +#define QCA953X_PLL_CPU_CONFIG_REFDIV_SHIFT 12 +#define QCA953X_PLL_CPU_CONFIG_REFDIV_MASK 0x1f +#define QCA953X_PLL_CPU_CONFIG_OUTDIV_SHIFT 19 +#define QCA953X_PLL_CPU_CONFIG_OUTDIV_MASK 0x7 + +#define QCA953X_PLL_DDR_CONFIG_NFRAC_SHIFT 0 +#define QCA953X_PLL_DDR_CONFIG_NFRAC_MASK 0x3ff +#define QCA953X_PLL_DDR_CONFIG_NINT_SHIFT 10 +#define QCA953X_PLL_DDR_CONFIG_NINT_MASK 0x3f +#define QCA953X_PLL_DDR_CONFIG_REFDIV_SHIFT 16 +#define QCA953X_PLL_DDR_CONFIG_REFDIV_MASK 0x1f +#define QCA953X_PLL_DDR_CONFIG_OUTDIV_SHIFT 23 +#define QCA953X_PLL_DDR_CONFIG_OUTDIV_MASK 0x7 + +#define QCA953X_PLL_CLK_CTRL_CPU_PLL_BYPASS BIT(2) +#define QCA953X_PLL_CLK_CTRL_DDR_PLL_BYPASS BIT(3) +#define QCA953X_PLL_CLK_CTRL_AHB_PLL_BYPASS BIT(4) +#define QCA953X_PLL_CLK_CTRL_CPU_POST_DIV_SHIFT 5 +#define QCA953X_PLL_CLK_CTRL_CPU_POST_DIV_MASK 0x1f +#define QCA953X_PLL_CLK_CTRL_DDR_POST_DIV_SHIFT 10 +#define QCA953X_PLL_CLK_CTRL_DDR_POST_DIV_MASK 0x1f +#define QCA953X_PLL_CLK_CTRL_AHB_POST_DIV_SHIFT 15 +#define QCA953X_PLL_CLK_CTRL_AHB_POST_DIV_MASK 0x1f +#define QCA953X_PLL_CLK_CTRL_CPUCLK_FROM_CPUPLL BIT(20) +#define QCA953X_PLL_CLK_CTRL_DDRCLK_FROM_DDRPLL BIT(21) +#define QCA953X_PLL_CLK_CTRL_AHBCLK_FROM_DDRPLL BIT(24) + #define QCA955X_PLL_CPU_CONFIG_REG 0x00 #define QCA955X_PLL_DDR_CONFIG_REG 0x04 #define QCA955X_PLL_CLK_CTRL_REG 0x08 +#define QCA955X_PLL_ETH_XMII_CONTROL_REG 0x28 +#define QCA955X_PLL_ETH_SGMII_CONTROL_REG 0x48 +#define QCA955X_PLL_ETH_SGMII_SERDES_REG 0x4c #define QCA955X_PLL_CPU_CONFIG_NFRAC_SHIFT 0 #define QCA955X_PLL_CPU_CONFIG_NFRAC_MASK 0x3f @@ -278,6 +428,81 @@ #define QCA955X_PLL_CLK_CTRL_DDRCLK_FROM_DDRPLL BIT(21) #define QCA955X_PLL_CLK_CTRL_AHBCLK_FROM_DDRPLL BIT(24) +#define QCA955X_PLL_ETH_SGMII_SERDES_LOCK_DETECT BIT(2) +#define QCA955X_PLL_ETH_SGMII_SERDES_PLL_REFCLK BIT(1) +#define 
QCA955X_PLL_ETH_SGMII_SERDES_EN_PLL BIT(0) + +#define QCA956X_PLL_CPU_CONFIG_REG 0x00 +#define QCA956X_PLL_CPU_CONFIG1_REG 0x04 +#define QCA956X_PLL_DDR_CONFIG_REG 0x08 +#define QCA956X_PLL_DDR_CONFIG1_REG 0x0c +#define QCA956X_PLL_CLK_CTRL_REG 0x10 +#define QCA956X_PLL_SWITCH_CLOCK_CONTROL_REG 0x28 +#define QCA956X_PLL_ETH_XMII_CONTROL_REG 0x30 +#define QCA956X_PLL_ETH_SGMII_SERDES_REG 0x4c + +#define QCA956X_PLL_CPU_CONFIG_REFDIV_SHIFT 12 +#define QCA956X_PLL_CPU_CONFIG_REFDIV_MASK 0x1f +#define QCA956X_PLL_CPU_CONFIG_OUTDIV_SHIFT 19 +#define QCA956X_PLL_CPU_CONFIG_OUTDIV_MASK 0x7 + +#define QCA956X_PLL_CPU_CONFIG1_NFRAC_L_SHIFT 0 +#define QCA956X_PLL_CPU_CONFIG1_NFRAC_L_MASK 0x1f +#define QCA956X_PLL_CPU_CONFIG1_NFRAC_H_SHIFT 5 +#define QCA956X_PLL_CPU_CONFIG1_NFRAC_H_MASK 0x1fff +#define QCA956X_PLL_CPU_CONFIG1_NINT_SHIFT 18 +#define QCA956X_PLL_CPU_CONFIG1_NINT_MASK 0x1ff + +#define QCA956X_PLL_DDR_CONFIG_REFDIV_SHIFT 16 +#define QCA956X_PLL_DDR_CONFIG_REFDIV_MASK 0x1f +#define QCA956X_PLL_DDR_CONFIG_OUTDIV_SHIFT 23 +#define QCA956X_PLL_DDR_CONFIG_OUTDIV_MASK 0x7 + +#define QCA956X_PLL_DDR_CONFIG1_NFRAC_L_SHIFT 0 +#define QCA956X_PLL_DDR_CONFIG1_NFRAC_L_MASK 0x1f +#define QCA956X_PLL_DDR_CONFIG1_NFRAC_H_SHIFT 5 +#define QCA956X_PLL_DDR_CONFIG1_NFRAC_H_MASK 0x1fff +#define QCA956X_PLL_DDR_CONFIG1_NINT_SHIFT 18 +#define QCA956X_PLL_DDR_CONFIG1_NINT_MASK 0x1ff + +#define QCA956X_PLL_CLK_CTRL_CPU_PLL_BYPASS BIT(2) +#define QCA956X_PLL_CLK_CTRL_DDR_PLL_BYPASS BIT(3) +#define QCA956X_PLL_CLK_CTRL_AHB_PLL_BYPASS BIT(4) +#define QCA956X_PLL_CLK_CTRL_CPU_POST_DIV_SHIFT 5 +#define QCA956X_PLL_CLK_CTRL_CPU_POST_DIV_MASK 0x1f +#define QCA956X_PLL_CLK_CTRL_DDR_POST_DIV_SHIFT 10 +#define QCA956X_PLL_CLK_CTRL_DDR_POST_DIV_MASK 0x1f +#define QCA956X_PLL_CLK_CTRL_AHB_POST_DIV_SHIFT 15 +#define QCA956X_PLL_CLK_CTRL_AHB_POST_DIV_MASK 0x1f +#define QCA956X_PLL_CLK_CTRL_CPU_DDRCLK_FROM_DDRPLL BIT(20) +#define QCA956X_PLL_CLK_CTRL_CPU_DDRCLK_FROM_CPUPLL BIT(21) +#define QCA956X_PLL_CLK_CTRL_AHBCLK_FROM_DDRPLL BIT(24) + +#define QCA956X_PLL_SWITCH_CLOCK_SPARE_I2C_CLK_SELB BIT(5) +#define QCA956X_PLL_SWITCH_CLOCK_SPARE_MDIO_CLK_SEL0_1 BIT(6) +#define QCA956X_PLL_SWITCH_CLOCK_SPARE_UART1_CLK_SEL BIT(7) +#define QCA956X_PLL_SWITCH_CLOCK_SPARE_USB_REFCLK_FREQ_SEL_SHIFT 8 +#define QCA956X_PLL_SWITCH_CLOCK_SPARE_USB_REFCLK_FREQ_SEL_MASK 0xf +#define QCA956X_PLL_SWITCH_CLOCK_SPARE_EN_PLL_TOP BIT(12) +#define QCA956X_PLL_SWITCH_CLOCK_SPARE_MDIO_CLK_SEL0_2 BIT(13) +#define QCA956X_PLL_SWITCH_CLOCK_SPARE_MDIO_CLK_SEL1_1 BIT(14) +#define QCA956X_PLL_SWITCH_CLOCK_SPARE_MDIO_CLK_SEL1_2 BIT(15) +#define QCA956X_PLL_SWITCH_CLOCK_SPARE_SWITCH_FUNC_TST_MODE BIT(16) +#define QCA956X_PLL_SWITCH_CLOCK_SPARE_EEE_ENABLE BIT(17) +#define QCA956X_PLL_SWITCH_CLOCK_SPARE_OEN_CLK125M_PLL BIT(18) +#define QCA956X_PLL_SWITCH_CLOCK_SPARE_SWITCHCLK_SEL BIT(19) + +#define QCA956X_PLL_ETH_XMII_TX_INVERT BIT(1) +#define QCA956X_PLL_ETH_XMII_GIGE BIT(25) +#define QCA956X_PLL_ETH_XMII_RX_DELAY_SHIFT 28 +#define QCA956X_PLL_ETH_XMII_RX_DELAY_MASK 0x3 +#define QCA956X_PLL_ETH_XMII_TX_DELAY_SHIFT 26 +#define QCA956X_PLL_ETH_XMII_TX_DELAY_MASK 3 + +#define QCA956X_PLL_ETH_SGMII_SERDES_LOCK_DETECT BIT(2) +#define QCA956X_PLL_ETH_SGMII_SERDES_PLL_REFCLK BIT(1) +#define QCA956X_PLL_ETH_SGMII_SERDES_EN_PLL BIT(0) + /* * USB_CONFIG block */ @@ -317,10 +542,19 @@ #define AR934X_RESET_REG_BOOTSTRAP 0xb0 #define AR934X_RESET_REG_PCIE_WMAC_INT_STATUS 0xac +#define QCA953X_RESET_REG_RESET_MODULE 0x1c +#define QCA953X_RESET_REG_BOOTSTRAP 0xb0 +#define 
QCA953X_RESET_REG_PCIE_WMAC_INT_STATUS 0xac + #define QCA955X_RESET_REG_RESET_MODULE 0x1c #define QCA955X_RESET_REG_BOOTSTRAP 0xb0 #define QCA955X_RESET_REG_EXT_INT_STATUS 0xac +#define QCA956X_RESET_REG_RESET_MODULE 0x1c +#define QCA956X_RESET_REG_BOOTSTRAP 0xb0 +#define QCA956X_RESET_REG_EXT_INT_STATUS 0xac + +#define MISC_INT_MIPS_SI_TIMERINT_MASK BIT(28) #define MISC_INT_ETHSW BIT(12) #define MISC_INT_TIMER4 BIT(10) #define MISC_INT_TIMER3 BIT(9) @@ -370,16 +604,123 @@ #define AR913X_RESET_USB_HOST BIT(5) #define AR913X_RESET_USB_PHY BIT(4) +#define AR933X_RESET_GE1_MDIO BIT(23) +#define AR933X_RESET_GE0_MDIO BIT(22) +#define AR933X_RESET_GE1_MAC BIT(13) #define AR933X_RESET_WMAC BIT(11) +#define AR933X_RESET_GE0_MAC BIT(9) #define AR933X_RESET_USB_HOST BIT(5) #define AR933X_RESET_USB_PHY BIT(4) #define AR933X_RESET_USBSUS_OVERRIDE BIT(3) +#define AR934X_RESET_HOST BIT(31) +#define AR934X_RESET_SLIC BIT(30) +#define AR934X_RESET_HDMA BIT(29) +#define AR934X_RESET_EXTERNAL BIT(28) +#define AR934X_RESET_RTC BIT(27) +#define AR934X_RESET_PCIE_EP_INT BIT(26) +#define AR934X_RESET_CHKSUM_ACC BIT(25) +#define AR934X_RESET_FULL_CHIP BIT(24) +#define AR934X_RESET_GE1_MDIO BIT(23) +#define AR934X_RESET_GE0_MDIO BIT(22) +#define AR934X_RESET_CPU_NMI BIT(21) +#define AR934X_RESET_CPU_COLD BIT(20) +#define AR934X_RESET_HOST_RESET_INT BIT(19) +#define AR934X_RESET_PCIE_EP BIT(18) +#define AR934X_RESET_UART1 BIT(17) +#define AR934X_RESET_DDR BIT(16) +#define AR934X_RESET_USB_PHY_PLL_PWD_EXT BIT(15) +#define AR934X_RESET_NANDF BIT(14) +#define AR934X_RESET_GE1_MAC BIT(13) +#define AR934X_RESET_ETH_SWITCH_ANALOG BIT(12) #define AR934X_RESET_USB_PHY_ANALOG BIT(11) +#define AR934X_RESET_HOST_DMA_INT BIT(10) +#define AR934X_RESET_GE0_MAC BIT(9) +#define AR934X_RESET_ETH_SWITCH BIT(8) +#define AR934X_RESET_PCIE_PHY BIT(7) +#define AR934X_RESET_PCIE BIT(6) #define AR934X_RESET_USB_HOST BIT(5) #define AR934X_RESET_USB_PHY BIT(4) #define AR934X_RESET_USBSUS_OVERRIDE BIT(3) - +#define AR934X_RESET_LUT BIT(2) +#define AR934X_RESET_MBOX BIT(1) +#define AR934X_RESET_I2S BIT(0) + +#define QCA953X_RESET_USB_EXT_PWR BIT(29) +#define QCA953X_RESET_EXTERNAL BIT(28) +#define QCA953X_RESET_RTC BIT(27) +#define QCA953X_RESET_FULL_CHIP BIT(24) +#define QCA953X_RESET_GE1_MDIO BIT(23) +#define QCA953X_RESET_GE0_MDIO BIT(22) +#define QCA953X_RESET_CPU_NMI BIT(21) +#define QCA953X_RESET_CPU_COLD BIT(20) +#define QCA953X_RESET_DDR BIT(16) +#define QCA953X_RESET_USB_PHY_PLL_PWD_EXT BIT(15) +#define QCA953X_RESET_GE1_MAC BIT(13) +#define QCA953X_RESET_ETH_SWITCH_ANALOG BIT(12) +#define QCA953X_RESET_USB_PHY_ANALOG BIT(11) +#define QCA953X_RESET_GE0_MAC BIT(9) +#define QCA953X_RESET_ETH_SWITCH BIT(8) +#define QCA953X_RESET_PCIE_PHY BIT(7) +#define QCA953X_RESET_PCIE BIT(6) +#define QCA953X_RESET_USB_HOST BIT(5) +#define QCA953X_RESET_USB_PHY BIT(4) +#define QCA953X_RESET_USBSUS_OVERRIDE BIT(3) + +#define QCA955X_RESET_HOST BIT(31) +#define QCA955X_RESET_SLIC BIT(30) +#define QCA955X_RESET_HDMA BIT(29) +#define QCA955X_RESET_EXTERNAL BIT(28) +#define QCA955X_RESET_RTC BIT(27) +#define QCA955X_RESET_PCIE_EP_INT BIT(26) +#define QCA955X_RESET_CHKSUM_ACC BIT(25) +#define QCA955X_RESET_FULL_CHIP BIT(24) +#define QCA955X_RESET_GE1_MDIO BIT(23) +#define QCA955X_RESET_GE0_MDIO BIT(22) +#define QCA955X_RESET_CPU_NMI BIT(21) +#define QCA955X_RESET_CPU_COLD BIT(20) +#define QCA955X_RESET_HOST_RESET_INT BIT(19) +#define QCA955X_RESET_PCIE_EP BIT(18) +#define QCA955X_RESET_UART1 BIT(17) +#define QCA955X_RESET_DDR BIT(16) +#define 
QCA955X_RESET_USB_PHY_PLL_PWD_EXT BIT(15) +#define QCA955X_RESET_NANDF BIT(14) +#define QCA955X_RESET_GE1_MAC BIT(13) +#define QCA955X_RESET_SGMII_ANALOG BIT(12) +#define QCA955X_RESET_USB_PHY_ANALOG BIT(11) +#define QCA955X_RESET_HOST_DMA_INT BIT(10) +#define QCA955X_RESET_GE0_MAC BIT(9) +#define QCA955X_RESET_SGMII BIT(8) +#define QCA955X_RESET_PCIE_PHY BIT(7) +#define QCA955X_RESET_PCIE BIT(6) +#define QCA955X_RESET_USB_HOST BIT(5) +#define QCA955X_RESET_USB_PHY BIT(4) +#define QCA955X_RESET_USBSUS_OVERRIDE BIT(3) +#define QCA955X_RESET_LUT BIT(2) +#define QCA955X_RESET_MBOX BIT(1) +#define QCA955X_RESET_I2S BIT(0) + +#define QCA956X_RESET_EXTERNAL BIT(28) +#define QCA956X_RESET_FULL_CHIP BIT(24) +#define QCA956X_RESET_GE1_MDIO BIT(23) +#define QCA956X_RESET_GE0_MDIO BIT(22) +#define QCA956X_RESET_CPU_NMI BIT(21) +#define QCA956X_RESET_CPU_COLD BIT(20) +#define QCA956X_RESET_DMA BIT(19) +#define QCA956X_RESET_DDR BIT(16) +#define QCA956X_RESET_GE1_MAC BIT(13) +#define QCA956X_RESET_SGMII_ANALOG BIT(12) +#define QCA956X_RESET_USB_PHY_ANALOG BIT(11) +#define QCA956X_RESET_GE0_MAC BIT(9) +#define QCA956X_RESET_SGMII BIT(8) +#define QCA956X_RESET_USB_HOST BIT(5) +#define QCA956X_RESET_USB_PHY BIT(4) +#define QCA956X_RESET_USBSUS_OVERRIDE BIT(3) +#define QCA956X_RESET_SWITCH_ANALOG BIT(2) +#define QCA956X_RESET_SWITCH BIT(0) + +#define AR933X_BOOTSTRAP_MDIO_GPIO_EN BIT(18) +#define AR933X_BOOTSTRAP_EEPBUSY BIT(4) #define AR933X_BOOTSTRAP_REF_CLK_40 BIT(0) #define AR934X_BOOTSTRAP_SW_OPTION8 BIT(23) @@ -398,8 +739,17 @@ #define AR934X_BOOTSTRAP_SDRAM_DISABLED BIT(1) #define AR934X_BOOTSTRAP_DDR1 BIT(0) +#define QCA953X_BOOTSTRAP_SW_OPTION2 BIT(12) +#define QCA953X_BOOTSTRAP_SW_OPTION1 BIT(11) +#define QCA953X_BOOTSTRAP_EJTAG_MODE BIT(5) +#define QCA953X_BOOTSTRAP_REF_CLK_40 BIT(4) +#define QCA953X_BOOTSTRAP_SDRAM_DISABLED BIT(1) +#define QCA953X_BOOTSTRAP_DDR1 BIT(0) + #define QCA955X_BOOTSTRAP_REF_CLK_40 BIT(4) +#define QCA956X_BOOTSTRAP_REF_CLK_40 BIT(2) + #define AR934X_PCIE_WMAC_INT_WMAC_MISC BIT(0) #define AR934X_PCIE_WMAC_INT_WMAC_TX BIT(1) #define AR934X_PCIE_WMAC_INT_WMAC_RXLP BIT(2) @@ -418,6 +768,24 @@ AR934X_PCIE_WMAC_INT_PCIE_RC1 | AR934X_PCIE_WMAC_INT_PCIE_RC2 | \ AR934X_PCIE_WMAC_INT_PCIE_RC3) +#define QCA953X_PCIE_WMAC_INT_WMAC_MISC BIT(0) +#define QCA953X_PCIE_WMAC_INT_WMAC_TX BIT(1) +#define QCA953X_PCIE_WMAC_INT_WMAC_RXLP BIT(2) +#define QCA953X_PCIE_WMAC_INT_WMAC_RXHP BIT(3) +#define QCA953X_PCIE_WMAC_INT_PCIE_RC BIT(4) +#define QCA953X_PCIE_WMAC_INT_PCIE_RC0 BIT(5) +#define QCA953X_PCIE_WMAC_INT_PCIE_RC1 BIT(6) +#define QCA953X_PCIE_WMAC_INT_PCIE_RC2 BIT(7) +#define QCA953X_PCIE_WMAC_INT_PCIE_RC3 BIT(8) +#define QCA953X_PCIE_WMAC_INT_WMAC_ALL \ + (QCA953X_PCIE_WMAC_INT_WMAC_MISC | QCA953X_PCIE_WMAC_INT_WMAC_TX | \ + QCA953X_PCIE_WMAC_INT_WMAC_RXLP | QCA953X_PCIE_WMAC_INT_WMAC_RXHP) + +#define QCA953X_PCIE_WMAC_INT_PCIE_ALL \ + (QCA953X_PCIE_WMAC_INT_PCIE_RC | QCA953X_PCIE_WMAC_INT_PCIE_RC0 | \ + QCA953X_PCIE_WMAC_INT_PCIE_RC1 | QCA953X_PCIE_WMAC_INT_PCIE_RC2 | \ + QCA953X_PCIE_WMAC_INT_PCIE_RC3) + #define QCA955X_EXT_INT_WMAC_MISC BIT(0) #define QCA955X_EXT_INT_WMAC_TX BIT(1) #define QCA955X_EXT_INT_WMAC_RXLP BIT(2) @@ -449,6 +817,37 @@ QCA955X_EXT_INT_PCIE_RC2_INT1 | QCA955X_EXT_INT_PCIE_RC2_INT2 | \ QCA955X_EXT_INT_PCIE_RC2_INT3) +#define QCA956X_EXT_INT_WMAC_MISC BIT(0) +#define QCA956X_EXT_INT_WMAC_TX BIT(1) +#define QCA956X_EXT_INT_WMAC_RXLP BIT(2) +#define QCA956X_EXT_INT_WMAC_RXHP BIT(3) +#define QCA956X_EXT_INT_PCIE_RC1 BIT(4) +#define QCA956X_EXT_INT_PCIE_RC1_INT0 
BIT(5) +#define QCA956X_EXT_INT_PCIE_RC1_INT1 BIT(6) +#define QCA956X_EXT_INT_PCIE_RC1_INT2 BIT(7) +#define QCA956X_EXT_INT_PCIE_RC1_INT3 BIT(8) +#define QCA956X_EXT_INT_PCIE_RC2 BIT(12) +#define QCA956X_EXT_INT_PCIE_RC2_INT0 BIT(13) +#define QCA956X_EXT_INT_PCIE_RC2_INT1 BIT(14) +#define QCA956X_EXT_INT_PCIE_RC2_INT2 BIT(15) +#define QCA956X_EXT_INT_PCIE_RC2_INT3 BIT(16) +#define QCA956X_EXT_INT_USB1 BIT(24) +#define QCA956X_EXT_INT_USB2 BIT(28) + +#define QCA956X_EXT_INT_WMAC_ALL \ + (QCA956X_EXT_INT_WMAC_MISC | QCA956X_EXT_INT_WMAC_TX | \ + QCA956X_EXT_INT_WMAC_RXLP | QCA956X_EXT_INT_WMAC_RXHP) + +#define QCA956X_EXT_INT_PCIE_RC1_ALL \ + (QCA956X_EXT_INT_PCIE_RC1 | QCA956X_EXT_INT_PCIE_RC1_INT0 | \ + QCA956X_EXT_INT_PCIE_RC1_INT1 | QCA956X_EXT_INT_PCIE_RC1_INT2 | \ + QCA956X_EXT_INT_PCIE_RC1_INT3) + +#define QCA956X_EXT_INT_PCIE_RC2_ALL \ + (QCA956X_EXT_INT_PCIE_RC2 | QCA956X_EXT_INT_PCIE_RC2_INT0 | \ + QCA956X_EXT_INT_PCIE_RC2_INT1 | QCA956X_EXT_INT_PCIE_RC2_INT2 | \ + QCA956X_EXT_INT_PCIE_RC2_INT3) + #define REV_ID_MAJOR_MASK 0xfff0 #define REV_ID_MAJOR_AR71XX 0x00a0 #define REV_ID_MAJOR_AR913X 0x00b0 @@ -460,8 +859,12 @@ #define REV_ID_MAJOR_AR9341 0x0120 #define REV_ID_MAJOR_AR9342 0x1120 #define REV_ID_MAJOR_AR9344 0x2120 +#define REV_ID_MAJOR_QCA9533 0x0140 +#define REV_ID_MAJOR_QCA9533_V2 0x0160 #define REV_ID_MAJOR_QCA9556 0x0130 #define REV_ID_MAJOR_QCA9558 0x1130 +#define REV_ID_MAJOR_TP9343 0x0150 +#define REV_ID_MAJOR_QCA956X 0x1150 #define AR71XX_REV_ID_MINOR_MASK 0x3 #define AR71XX_REV_ID_MINOR_AR7130 0x0 @@ -482,8 +885,12 @@ #define AR934X_REV_ID_REVISION_MASK 0xf +#define QCA953X_REV_ID_REVISION_MASK 0xf + #define QCA955X_REV_ID_REVISION_MASK 0xf +#define QCA956X_REV_ID_REVISION_MASK 0xf + /* * SPI block */ @@ -521,15 +928,63 @@ #define AR71XX_GPIO_REG_INT_ENABLE 0x24 #define AR71XX_GPIO_REG_FUNC 0x28 +#define AR934X_GPIO_REG_OUT_FUNC0 0x2c +#define AR934X_GPIO_REG_OUT_FUNC1 0x30 +#define AR934X_GPIO_REG_OUT_FUNC2 0x34 +#define AR934X_GPIO_REG_OUT_FUNC3 0x38 +#define AR934X_GPIO_REG_OUT_FUNC4 0x3c +#define AR934X_GPIO_REG_OUT_FUNC5 0x40 #define AR934X_GPIO_REG_FUNC 0x6c +#define QCA953X_GPIO_REG_OUT_FUNC0 0x2c +#define QCA953X_GPIO_REG_OUT_FUNC1 0x30 +#define QCA953X_GPIO_REG_OUT_FUNC2 0x34 +#define QCA953X_GPIO_REG_OUT_FUNC3 0x38 +#define QCA953X_GPIO_REG_OUT_FUNC4 0x3c +#define QCA953X_GPIO_REG_IN_ENABLE0 0x44 +#define QCA953X_GPIO_REG_FUNC 0x6c + +#define QCA953X_GPIO_OUT_MUX_SPI_CS1 10 +#define QCA953X_GPIO_OUT_MUX_SPI_CS2 11 +#define QCA953X_GPIO_OUT_MUX_SPI_CS0 9 +#define QCA953X_GPIO_OUT_MUX_SPI_CLK 8 +#define QCA953X_GPIO_OUT_MUX_SPI_MOSI 12 +#define QCA953X_GPIO_OUT_MUX_LED_LINK1 41 +#define QCA953X_GPIO_OUT_MUX_LED_LINK2 42 +#define QCA953X_GPIO_OUT_MUX_LED_LINK3 43 +#define QCA953X_GPIO_OUT_MUX_LED_LINK4 44 +#define QCA953X_GPIO_OUT_MUX_LED_LINK5 45 + +#define QCA955X_GPIO_REG_OUT_FUNC0 0x2c +#define QCA955X_GPIO_REG_OUT_FUNC1 0x30 +#define QCA955X_GPIO_REG_OUT_FUNC2 0x34 +#define QCA955X_GPIO_REG_OUT_FUNC3 0x38 +#define QCA955X_GPIO_REG_OUT_FUNC4 0x3c +#define QCA955X_GPIO_REG_OUT_FUNC5 0x40 +#define QCA955X_GPIO_REG_FUNC 0x6c + +#define QCA956X_GPIO_REG_OUT_FUNC0 0x2c +#define QCA956X_GPIO_REG_OUT_FUNC1 0x30 +#define QCA956X_GPIO_REG_OUT_FUNC2 0x34 +#define QCA956X_GPIO_REG_OUT_FUNC3 0x38 +#define QCA956X_GPIO_REG_OUT_FUNC4 0x3c +#define QCA956X_GPIO_REG_OUT_FUNC5 0x40 +#define QCA956X_GPIO_REG_IN_ENABLE0 0x44 +#define QCA956X_GPIO_REG_IN_ENABLE3 0x50 +#define QCA956X_GPIO_REG_FUNC 0x6c + +#define QCA956X_GPIO_OUT_MUX_GE0_MDO 32 +#define 
QCA956X_GPIO_OUT_MUX_GE0_MDC 33 + #define AR71XX_GPIO_COUNT 16 #define AR7240_GPIO_COUNT 18 #define AR7241_GPIO_COUNT 20 #define AR913X_GPIO_COUNT 22 #define AR933X_GPIO_COUNT 30 #define AR934X_GPIO_COUNT 23 +#define QCA953X_GPIO_COUNT 18 #define QCA955X_GPIO_COUNT 24 +#define QCA956X_GPIO_COUNT 23 /* * SRIF block @@ -552,4 +1007,318 @@ #define AR934X_SRIF_DPLL2_OUTDIV_SHIFT 13 #define AR934X_SRIF_DPLL2_OUTDIV_MASK 0x7 +#define QCA953X_SRIF_CPU_DPLL1_REG 0x1c0 +#define QCA953X_SRIF_CPU_DPLL2_REG 0x1c4 +#define QCA953X_SRIF_CPU_DPLL3_REG 0x1c8 + +#define QCA953X_SRIF_DDR_DPLL1_REG 0x240 +#define QCA953X_SRIF_DDR_DPLL2_REG 0x244 +#define QCA953X_SRIF_DDR_DPLL3_REG 0x248 + +#define QCA953X_SRIF_DPLL1_REFDIV_SHIFT 27 +#define QCA953X_SRIF_DPLL1_REFDIV_MASK 0x1f +#define QCA953X_SRIF_DPLL1_NINT_SHIFT 18 +#define QCA953X_SRIF_DPLL1_NINT_MASK 0x1ff +#define QCA953X_SRIF_DPLL1_NFRAC_MASK 0x0003ffff + +#define QCA953X_SRIF_DPLL2_LOCAL_PLL BIT(30) +#define QCA953X_SRIF_DPLL2_OUTDIV_SHIFT 13 +#define QCA953X_SRIF_DPLL2_OUTDIV_MASK 0x7 + +#define AR71XX_GPIO_FUNC_STEREO_EN BIT(17) +#define AR71XX_GPIO_FUNC_SLIC_EN BIT(16) +#define AR71XX_GPIO_FUNC_SPI_CS2_EN BIT(13) +#define AR71XX_GPIO_FUNC_SPI_CS1_EN BIT(12) +#define AR71XX_GPIO_FUNC_UART_EN BIT(8) +#define AR71XX_GPIO_FUNC_USB_OC_EN BIT(4) +#define AR71XX_GPIO_FUNC_USB_CLK_EN BIT(0) + +#define AR724X_GPIO_FUNC_GE0_MII_CLK_EN BIT(19) +#define AR724X_GPIO_FUNC_SPI_EN BIT(18) +#define AR724X_GPIO_FUNC_SPI_CS_EN2 BIT(14) +#define AR724X_GPIO_FUNC_SPI_CS_EN1 BIT(13) +#define AR724X_GPIO_FUNC_CLK_OBS5_EN BIT(12) +#define AR724X_GPIO_FUNC_CLK_OBS4_EN BIT(11) +#define AR724X_GPIO_FUNC_CLK_OBS3_EN BIT(10) +#define AR724X_GPIO_FUNC_CLK_OBS2_EN BIT(9) +#define AR724X_GPIO_FUNC_CLK_OBS1_EN BIT(8) +#define AR724X_GPIO_FUNC_ETH_SWITCH_LED4_EN BIT(7) +#define AR724X_GPIO_FUNC_ETH_SWITCH_LED3_EN BIT(6) +#define AR724X_GPIO_FUNC_ETH_SWITCH_LED2_EN BIT(5) +#define AR724X_GPIO_FUNC_ETH_SWITCH_LED1_EN BIT(4) +#define AR724X_GPIO_FUNC_ETH_SWITCH_LED0_EN BIT(3) +#define AR724X_GPIO_FUNC_UART_RTS_CTS_EN BIT(2) +#define AR724X_GPIO_FUNC_UART_EN BIT(1) +#define AR724X_GPIO_FUNC_JTAG_DISABLE BIT(0) + +#define AR913X_GPIO_FUNC_WMAC_LED_EN BIT(22) +#define AR913X_GPIO_FUNC_EXP_PORT_CS_EN BIT(21) +#define AR913X_GPIO_FUNC_I2S_REFCLKEN BIT(20) +#define AR913X_GPIO_FUNC_I2S_MCKEN BIT(19) +#define AR913X_GPIO_FUNC_I2S1_EN BIT(18) +#define AR913X_GPIO_FUNC_I2S0_EN BIT(17) +#define AR913X_GPIO_FUNC_SLIC_EN BIT(16) +#define AR913X_GPIO_FUNC_UART_RTSCTS_EN BIT(9) +#define AR913X_GPIO_FUNC_UART_EN BIT(8) +#define AR913X_GPIO_FUNC_USB_CLK_EN BIT(4) + +#define AR933X_GPIO_FUNC_SPDIF2TCK BIT(31) +#define AR933X_GPIO_FUNC_SPDIF_EN BIT(30) +#define AR933X_GPIO_FUNC_I2SO_22_18_EN BIT(29) +#define AR933X_GPIO_FUNC_I2S_MCK_EN BIT(27) +#define AR933X_GPIO_FUNC_I2SO_EN BIT(26) +#define AR933X_GPIO_FUNC_ETH_SWITCH_LED_DUPL BIT(25) +#define AR933X_GPIO_FUNC_ETH_SWITCH_LED_COLL BIT(24) +#define AR933X_GPIO_FUNC_ETH_SWITCH_LED_ACT BIT(23) +#define AR933X_GPIO_FUNC_SPI_EN BIT(18) +#define AR933X_GPIO_FUNC_SPI_CS_EN2 BIT(14) +#define AR933X_GPIO_FUNC_SPI_CS_EN1 BIT(13) +#define AR933X_GPIO_FUNC_ETH_SWITCH_LED4_EN BIT(7) +#define AR933X_GPIO_FUNC_ETH_SWITCH_LED3_EN BIT(6) +#define AR933X_GPIO_FUNC_ETH_SWITCH_LED2_EN BIT(5) +#define AR933X_GPIO_FUNC_ETH_SWITCH_LED1_EN BIT(4) +#define AR933X_GPIO_FUNC_ETH_SWITCH_LED0_EN BIT(3) +#define AR933X_GPIO_FUNC_UART_RTS_CTS_EN BIT(2) +#define AR933X_GPIO_FUNC_UART_EN BIT(1) +#define AR933X_GPIO_FUNC_JTAG_DISABLE BIT(0) + +#define AR934X_GPIO_FUNC_CLK_OBS7_EN 
BIT(9) +#define AR934X_GPIO_FUNC_CLK_OBS6_EN BIT(8) +#define AR934X_GPIO_FUNC_CLK_OBS5_EN BIT(7) +#define AR934X_GPIO_FUNC_CLK_OBS4_EN BIT(6) +#define AR934X_GPIO_FUNC_CLK_OBS3_EN BIT(5) +#define AR934X_GPIO_FUNC_CLK_OBS2_EN BIT(4) +#define AR934X_GPIO_FUNC_CLK_OBS1_EN BIT(3) +#define AR934X_GPIO_FUNC_CLK_OBS0_EN BIT(2) +#define AR934X_GPIO_FUNC_JTAG_DISABLE BIT(1) + +#define AR934X_GPIO_OUT_GPIO 0 +#define AR934X_GPIO_OUT_SPI_CS1 7 +#define AR934X_GPIO_OUT_LED_LINK0 41 +#define AR934X_GPIO_OUT_LED_LINK1 42 +#define AR934X_GPIO_OUT_LED_LINK2 43 +#define AR934X_GPIO_OUT_LED_LINK3 44 +#define AR934X_GPIO_OUT_LED_LINK4 45 +#define AR934X_GPIO_OUT_EXT_LNA0 46 +#define AR934X_GPIO_OUT_EXT_LNA1 47 + +#define QCA955X_GPIO_FUNC_CLK_OBS7_EN BIT(9) +#define QCA955X_GPIO_FUNC_CLK_OBS6_EN BIT(8) +#define QCA955X_GPIO_FUNC_CLK_OBS5_EN BIT(7) +#define QCA955X_GPIO_FUNC_CLK_OBS4_EN BIT(6) +#define QCA955X_GPIO_FUNC_CLK_OBS3_EN BIT(5) +#define QCA955X_GPIO_FUNC_CLK_OBS2_EN BIT(4) +#define QCA955X_GPIO_FUNC_CLK_OBS1_EN BIT(3) +#define QCA955X_GPIO_FUNC_JTAG_DISABLE BIT(1) + +#define QCA955X_GPIO_OUT_GPIO 0 +#define QCA955X_MII_EXT_MDI 1 +#define QCA955X_SLIC_DATA_OUT 3 +#define QCA955X_SLIC_PCM_FS 4 +#define QCA955X_SLIC_PCM_CLK 5 +#define QCA955X_SPI_CLK 8 +#define QCA955X_SPI_CS_0 9 +#define QCA955X_SPI_CS_1 10 +#define QCA955X_SPI_CS_2 11 +#define QCA955X_SPI_MISO 12 +#define QCA955X_I2S_CLK 13 +#define QCA955X_I2S_WS 14 +#define QCA955X_I2S_SD 15 +#define QCA955X_I2S_MCK 16 +#define QCA955X_SPDIF_OUT 17 +#define QCA955X_UART1_TD 18 +#define QCA955X_UART1_RTS 19 +#define QCA955X_UART1_RD 20 +#define QCA955X_UART1_CTS 21 +#define QCA955X_UART0_SOUT 22 +#define QCA955X_SPDIF2_OUT 23 +#define QCA955X_LED_SGMII_SPEED0 24 +#define QCA955X_LED_SGMII_SPEED1 25 +#define QCA955X_LED_SGMII_DUPLEX 26 +#define QCA955X_LED_SGMII_LINK_UP 27 +#define QCA955X_SGMII_SPEED0_INVERT 28 +#define QCA955X_SGMII_SPEED1_INVERT 29 +#define QCA955X_SGMII_DUPLEX_INVERT 30 +#define QCA955X_SGMII_LINK_UP_INVERT 31 +#define QCA955X_GE1_MII_MDO 32 +#define QCA955X_GE1_MII_MDC 33 +#define QCA955X_SWCOM2 38 +#define QCA955X_SWCOM3 39 +#define QCA955X_MAC2_GPIO 40 +#define QCA955X_MAC3_GPIO 41 +#define QCA955X_ATT_LED 42 +#define QCA955X_PWR_LED 43 +#define QCA955X_TX_FRAME 44 +#define QCA955X_RX_CLEAR_EXTERNAL 45 +#define QCA955X_LED_NETWORK_EN 46 +#define QCA955X_LED_POWER_EN 47 +#define QCA955X_WMAC_GLUE_WOW 68 +#define QCA955X_RX_CLEAR_EXTENSION 70 +#define QCA955X_CP_NAND_CS1 73 +#define QCA955X_USB_SUSPEND 74 +#define QCA955X_ETH_TX_ERR 75 +#define QCA955X_DDR_DQ_OE 76 +#define QCA955X_CLKREQ_N_EP 77 +#define QCA955X_CLKREQ_N_RC 78 +#define QCA955X_CLK_OBS0 79 +#define QCA955X_CLK_OBS1 80 +#define QCA955X_CLK_OBS2 81 +#define QCA955X_CLK_OBS3 82 +#define QCA955X_CLK_OBS4 83 +#define QCA955X_CLK_OBS5 84 + +/* + * MII_CTRL block + */ +#define AR71XX_MII_REG_MII0_CTRL 0x00 +#define AR71XX_MII_REG_MII1_CTRL 0x04 + +#define AR71XX_MII_CTRL_IF_MASK 3 +#define AR71XX_MII_CTRL_SPEED_SHIFT 4 +#define AR71XX_MII_CTRL_SPEED_MASK 3 +#define AR71XX_MII_CTRL_SPEED_10 0 +#define AR71XX_MII_CTRL_SPEED_100 1 +#define AR71XX_MII_CTRL_SPEED_1000 2 + +#define AR71XX_MII0_CTRL_IF_GMII 0 +#define AR71XX_MII0_CTRL_IF_MII 1 +#define AR71XX_MII0_CTRL_IF_RGMII 2 +#define AR71XX_MII0_CTRL_IF_RMII 3 + +#define AR71XX_MII1_CTRL_IF_RGMII 0 +#define AR71XX_MII1_CTRL_IF_RMII 1 + +/* + * AR933X GMAC interface + */ +#define AR933X_GMAC_REG_ETH_CFG 0x00 + +#define AR933X_ETH_CFG_RGMII_GE0 BIT(0) +#define AR933X_ETH_CFG_MII_GE0 BIT(1) +#define AR933X_ETH_CFG_GMII_GE0 
BIT(2) +#define AR933X_ETH_CFG_MII_GE0_MASTER BIT(3) +#define AR933X_ETH_CFG_MII_GE0_SLAVE BIT(4) +#define AR933X_ETH_CFG_MII_GE0_ERR_EN BIT(5) +#define AR933X_ETH_CFG_SW_PHY_SWAP BIT(7) +#define AR933X_ETH_CFG_SW_PHY_ADDR_SWAP BIT(8) +#define AR933X_ETH_CFG_RMII_GE0 BIT(9) +#define AR933X_ETH_CFG_RMII_GE0_SPD_10 0 +#define AR933X_ETH_CFG_RMII_GE0_SPD_100 BIT(10) + +/* + * AR934X GMAC Interface + */ +#define AR934X_GMAC_REG_ETH_CFG 0x00 + +#define AR934X_ETH_CFG_RGMII_GMAC0 BIT(0) +#define AR934X_ETH_CFG_MII_GMAC0 BIT(1) +#define AR934X_ETH_CFG_GMII_GMAC0 BIT(2) +#define AR934X_ETH_CFG_MII_GMAC0_MASTER BIT(3) +#define AR934X_ETH_CFG_MII_GMAC0_SLAVE BIT(4) +#define AR934X_ETH_CFG_MII_GMAC0_ERR_EN BIT(5) +#define AR934X_ETH_CFG_SW_ONLY_MODE BIT(6) +#define AR934X_ETH_CFG_SW_PHY_SWAP BIT(7) +#define AR934X_ETH_CFG_SW_APB_ACCESS BIT(9) +#define AR934X_ETH_CFG_RMII_GMAC0 BIT(10) +#define AR933X_ETH_CFG_MII_CNTL_SPEED BIT(11) +#define AR934X_ETH_CFG_RMII_GMAC0_MASTER BIT(12) +#define AR933X_ETH_CFG_SW_ACC_MSB_FIRST BIT(13) +#define AR934X_ETH_CFG_RXD_DELAY BIT(14) +#define AR934X_ETH_CFG_RXD_DELAY_MASK 0x3 +#define AR934X_ETH_CFG_RXD_DELAY_SHIFT 14 +#define AR934X_ETH_CFG_RDV_DELAY BIT(16) +#define AR934X_ETH_CFG_RDV_DELAY_MASK 0x3 +#define AR934X_ETH_CFG_RDV_DELAY_SHIFT 16 + +/* + * QCA953X GMAC Interface + */ +#define QCA953X_GMAC_REG_ETH_CFG 0x00 + +#define QCA953X_ETH_CFG_SW_ONLY_MODE BIT(6) +#define QCA953X_ETH_CFG_SW_PHY_SWAP BIT(7) +#define QCA953X_ETH_CFG_SW_APB_ACCESS BIT(9) +#define QCA953X_ETH_CFG_SW_ACC_MSB_FIRST BIT(13) + +/* + * QCA955X GMAC Interface + */ + +#define QCA955X_GMAC_REG_ETH_CFG 0x00 +#define QCA955X_GMAC_REG_SGMII_SERDES 0x18 + +#define QCA955X_ETH_CFG_RGMII_EN BIT(0) +#define QCA955X_ETH_CFG_MII_GE0 BIT(1) +#define QCA955X_ETH_CFG_GMII_GE0 BIT(2) +#define QCA955X_ETH_CFG_MII_GE0_MASTER BIT(3) +#define QCA955X_ETH_CFG_MII_GE0_SLAVE BIT(4) +#define QCA955X_ETH_CFG_GE0_ERR_EN BIT(5) +#define QCA955X_ETH_CFG_GE0_SGMII BIT(6) +#define QCA955X_ETH_CFG_RMII_GE0 BIT(10) +#define QCA955X_ETH_CFG_MII_CNTL_SPEED BIT(11) +#define QCA955X_ETH_CFG_RMII_GE0_MASTER BIT(12) +#define QCA955X_ETH_CFG_RXD_DELAY_MASK 0x3 +#define QCA955X_ETH_CFG_RXD_DELAY_SHIFT 14 +#define QCA955X_ETH_CFG_RDV_DELAY BIT(16) +#define QCA955X_ETH_CFG_RDV_DELAY_MASK 0x3 +#define QCA955X_ETH_CFG_RDV_DELAY_SHIFT 16 +#define QCA955X_ETH_CFG_TXD_DELAY_MASK 0x3 +#define QCA955X_ETH_CFG_TXD_DELAY_SHIFT 18 +#define QCA955X_ETH_CFG_TXE_DELAY_MASK 0x3 +#define QCA955X_ETH_CFG_TXE_DELAY_SHIFT 20 + +#define QCA955X_SGMII_SERDES_LOCK_DETECT_STATUS BIT(15) +#define QCA955X_SGMII_SERDES_RES_CALIBRATION_SHIFT 23 +#define QCA955X_SGMII_SERDES_RES_CALIBRATION_MASK 0xf +/* + * QCA956X GMAC Interface + */ + +#define QCA956X_GMAC_REG_ETH_CFG 0x00 +#define QCA956X_GMAC_REG_SGMII_RESET 0x14 +#define QCA956X_GMAC_REG_SGMII_SERDES 0x18 +#define QCA956X_GMAC_REG_MR_AN_CONTROL 0x1c +#define QCA956X_GMAC_REG_SGMII_CONFIG 0x34 +#define QCA956X_GMAC_REG_SGMII_DEBUG 0x58 + +#define QCA956X_ETH_CFG_RGMII_EN BIT(0) +#define QCA956X_ETH_CFG_GE0_SGMII BIT(6) +#define QCA956X_ETH_CFG_SW_ONLY_MODE BIT(7) +#define QCA956X_ETH_CFG_SW_PHY_SWAP BIT(8) +#define QCA956X_ETH_CFG_SW_PHY_ADDR_SWAP BIT(9) +#define QCA956X_ETH_CFG_SW_APB_ACCESS BIT(10) +#define QCA956X_ETH_CFG_SW_ACC_MSB_FIRST BIT(13) +#define QCA956X_ETH_CFG_RXD_DELAY_MASK 0x3 +#define QCA956X_ETH_CFG_RXD_DELAY_SHIFT 14 +#define QCA956X_ETH_CFG_RDV_DELAY_MASK 0x3 +#define QCA956X_ETH_CFG_RDV_DELAY_SHIFT 16 + +#define QCA956X_SGMII_RESET_RX_CLK_N_RESET 0x0 +#define 
QCA956X_SGMII_RESET_RX_CLK_N BIT(0) +#define QCA956X_SGMII_RESET_TX_CLK_N BIT(1) +#define QCA956X_SGMII_RESET_RX_125M_N BIT(2) +#define QCA956X_SGMII_RESET_TX_125M_N BIT(3) +#define QCA956X_SGMII_RESET_HW_RX_125M_N BIT(4) + +#define QCA956X_SGMII_SERDES_CDR_BW_MASK 0x3 +#define QCA956X_SGMII_SERDES_CDR_BW_SHIFT 1 +#define QCA956X_SGMII_SERDES_TX_DR_CTRL_MASK 0x7 +#define QCA956X_SGMII_SERDES_TX_DR_CTRL_SHIFT 4 +#define QCA956X_SGMII_SERDES_PLL_BW BIT(8) +#define QCA956X_SGMII_SERDES_VCO_FAST BIT(9) +#define QCA956X_SGMII_SERDES_VCO_SLOW BIT(10) +#define QCA956X_SGMII_SERDES_LOCK_DETECT_STATUS BIT(15) +#define QCA956X_SGMII_SERDES_EN_SIGNAL_DETECT BIT(16) +#define QCA956X_SGMII_SERDES_FIBER_SDO BIT(17) +#define QCA956X_SGMII_SERDES_RES_CALIBRATION_SHIFT 23 +#define QCA956X_SGMII_SERDES_RES_CALIBRATION_MASK 0xf +#define QCA956X_SGMII_SERDES_VCO_REG_SHIFT 27 +#define QCA956X_SGMII_SERDES_VCO_REG_MASK 0xf + +#define QCA956X_MR_AN_CONTROL_AN_ENABLE BIT(12) +#define QCA956X_MR_AN_CONTROL_PHY_RESET BIT(15) + +#define QCA956X_SGMII_CONFIG_MODE_CTRL_SHIFT 0 +#define QCA956X_SGMII_CONFIG_MODE_CTRL_MASK 0x7 + #endif /* __ASM_MACH_AR71XX_REGS_H */ diff --git a/arch/mips/include/asm/mach-ath79/ath79.h b/arch/mips/include/asm/mach-ath79/ath79.h index 441faa92c3cd..73dcd63b8243 100644 --- a/arch/mips/include/asm/mach-ath79/ath79.h +++ b/arch/mips/include/asm/mach-ath79/ath79.h @@ -32,8 +32,11 @@ enum ath79_soc_type { ATH79_SOC_AR9341, ATH79_SOC_AR9342, ATH79_SOC_AR9344, + ATH79_SOC_QCA9533, ATH79_SOC_QCA9556, ATH79_SOC_QCA9558, + ATH79_SOC_TP9343, + ATH79_SOC_QCA956X, }; extern enum ath79_soc_type ath79_soc; @@ -100,6 +103,16 @@ static inline int soc_is_ar934x(void) return soc_is_ar9341() || soc_is_ar9342() || soc_is_ar9344(); } +static inline int soc_is_qca9533(void) +{ + return ath79_soc == ATH79_SOC_QCA9533; +} + +static inline int soc_is_qca953x(void) +{ + return soc_is_qca9533(); +} + static inline int soc_is_qca9556(void) { return ath79_soc == ATH79_SOC_QCA9556; @@ -115,6 +128,26 @@ static inline int soc_is_qca955x(void) return soc_is_qca9556() || soc_is_qca9558(); } +static inline int soc_is_tp9343(void) +{ + return ath79_soc == ATH79_SOC_TP9343; +} + +static inline int soc_is_qca9561(void) +{ + return ath79_soc == ATH79_SOC_QCA956X; +} + +static inline int soc_is_qca9563(void) +{ + return ath79_soc == ATH79_SOC_QCA956X; +} + +static inline int soc_is_qca956x(void) +{ + return soc_is_qca9561() || soc_is_qca9563(); +} + void ath79_ddr_wb_flush(unsigned int reg); void ath79_ddr_set_pci_windows(void); @@ -134,6 +167,7 @@ static inline u32 ath79_pll_rr(unsigned reg) static inline void ath79_reset_wr(unsigned reg, u32 val) { __raw_writel(val, ath79_reset_base + reg); + (void) __raw_readl(ath79_reset_base + reg); /* flush */ } static inline u32 ath79_reset_rr(unsigned reg) diff --git a/arch/mips/include/asm/mach-ath79/cpu-feature-overrides.h b/arch/mips/include/asm/mach-ath79/cpu-feature-overrides.h index 0089a740e5ae..026ad90c8ac0 100644 --- a/arch/mips/include/asm/mach-ath79/cpu-feature-overrides.h +++ b/arch/mips/include/asm/mach-ath79/cpu-feature-overrides.h @@ -36,6 +36,7 @@ #define cpu_has_mdmx 0 #define cpu_has_mips3d 0 #define cpu_has_smartmips 0 +#define cpu_has_rixi 0 #define cpu_has_mips32r1 1 #define cpu_has_mips32r2 1 @@ -43,6 +44,7 @@ #define cpu_has_mips64r2 0 #define cpu_has_mipsmt 0 +#define cpu_has_userlocal 0 #define cpu_has_64bits 0 #define cpu_has_64bit_zero_reg 0 @@ -51,5 +53,9 @@ #define cpu_dcache_line_size() 32 #define cpu_icache_line_size() 32 +#define cpu_has_vtag_icache 0 
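A note on consuming the new definitions (the helper below is an illustrative sketch, not part of this patch): unlike the single-bit *_EN flags, fields such as QCA956X_SGMII_SERDES_RES_CALIBRATION are multi-bit values described by a *_MASK/*_SHIFT pair and must be updated read-modify-write. Assuming <linux/io.h> and a mapped GMAC register base:

static void qca956x_sgmii_set_res_cal(void __iomem *gmac_base, u32 cal)
{
	u32 t = __raw_readl(gmac_base + QCA956X_GMAC_REG_SGMII_SERDES);

	/* clear the 4-bit calibration field, then insert the new value */
	t &= ~(QCA956X_SGMII_SERDES_RES_CALIBRATION_MASK <<
	       QCA956X_SGMII_SERDES_RES_CALIBRATION_SHIFT);
	t |= (cal & QCA956X_SGMII_SERDES_RES_CALIBRATION_MASK) <<
	     QCA956X_SGMII_SERDES_RES_CALIBRATION_SHIFT;

	__raw_writel(t, gmac_base + QCA956X_GMAC_REG_SGMII_SERDES);
}

The same pattern applies to the RXD/RDV/TXD delay fields in the ETH_CFG registers. Note also that ath79_reset_wr() above now reads the register back after the write, so the posted write is flushed to the reset block before the caller continues.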
+#define cpu_has_dc_aliases 1 +#define cpu_has_ic_fills_f_dc 0 +#define cpu_has_pindexed_dcache 0 #endif /* __ASM_MACH_ATH79_CPU_FEATURE_OVERRIDES_H */ diff --git a/arch/mips/include/asm/mach-bmips/dma-coherence.h b/arch/mips/include/asm/mach-bmips/dma-coherence.h deleted file mode 100644 index d29781f02285..000000000000 --- a/arch/mips/include/asm/mach-bmips/dma-coherence.h +++ /dev/null @@ -1,54 +0,0 @@ -/* - * Copyright (C) 2006 Ralf Baechle - * Copyright (C) 2009 Broadcom Corporation - * - * This program is free software; you can redistribute it and/or modify - * it under the terms of the GNU General Public License version 2 as - * published by the Free Software Foundation. - * - * This program is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the - * GNU General Public License for more details. - */ - -#ifndef __ASM_MACH_BMIPS_DMA_COHERENCE_H -#define __ASM_MACH_BMIPS_DMA_COHERENCE_H - -#include -#include -#include - -struct device; - -extern dma_addr_t plat_map_dma_mem(struct device *dev, void *addr, size_t size); -extern dma_addr_t plat_map_dma_mem_page(struct device *dev, struct page *page); -extern unsigned long plat_dma_addr_to_phys(struct device *dev, - dma_addr_t dma_addr); - -static inline void plat_unmap_dma_mem(struct device *dev, dma_addr_t dma_addr, - size_t size, enum dma_data_direction direction) -{ -} - -static inline int plat_dma_supported(struct device *dev, u64 mask) -{ - /* - * we fall back to GFP_DMA when the mask isn't all 1s, - * so we can't guarantee allocations that must be - * within a tighter range than GFP_DMA.. - */ - if (mask < DMA_BIT_MASK(24)) - return 0; - - return 1; -} - -static inline int plat_device_is_coherent(struct device *dev) -{ - return 0; -} - -#define plat_post_dma_flush bmips_post_dma_flush - -#endif /* __ASM_MACH_BMIPS_DMA_COHERENCE_H */ diff --git a/arch/mips/include/asm/mach-cavium-octeon/dma-coherence.h b/arch/mips/include/asm/mach-cavium-octeon/dma-coherence.h deleted file mode 100644 index 6eb1ee548b11..000000000000 --- a/arch/mips/include/asm/mach-cavium-octeon/dma-coherence.h +++ /dev/null @@ -1,79 +0,0 @@ -/* - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. - * - * Copyright (C) 2006 Ralf Baechle - * - * - * Similar to mach-generic/dma-coherence.h except - * plat_device_is_coherent hard coded to return 1. 
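With these per-platform dma-coherence.h headers deleted, address translation moves into the generic dma-direct code; the only arch hooks left are the __phys_to_dma()/__dma_to_phys() pair already declared above. A minimal sketch of what a platform with a fixed CPU-to-bus offset provides instead (BUS_DMA_OFFSET is a hypothetical value, for illustration only):

#include <linux/dma-direct.h>

#define BUS_DMA_OFFSET	0x40000000ull	/* hypothetical bus offset */

dma_addr_t __phys_to_dma(struct device *dev, phys_addr_t paddr)
{
	return paddr + BUS_DMA_OFFSET;	/* CPU physical -> bus address */
}

phys_addr_t __dma_to_phys(struct device *dev, dma_addr_t daddr)
{
	return daddr - BUS_DMA_OFFSET;	/* bus address -> CPU physical */
}

dma-direct then calls these for every map/unmap, replacing the plat_map_dma_mem()/plat_dma_addr_to_phys() pairs being removed here.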
- * - */ -#ifndef __ASM_MACH_CAVIUM_OCTEON_DMA_COHERENCE_H -#define __ASM_MACH_CAVIUM_OCTEON_DMA_COHERENCE_H - -#include - -struct device; - -extern void octeon_pci_dma_init(void); - -static inline dma_addr_t plat_map_dma_mem(struct device *dev, void *addr, - size_t size) -{ - BUG(); - return 0; -} - -static inline dma_addr_t plat_map_dma_mem_page(struct device *dev, - struct page *page) -{ - BUG(); - return 0; -} - -static inline unsigned long plat_dma_addr_to_phys(struct device *dev, - dma_addr_t dma_addr) -{ - BUG(); - return 0; -} - -static inline void plat_unmap_dma_mem(struct device *dev, dma_addr_t dma_addr, - size_t size, enum dma_data_direction direction) -{ - BUG(); -} - -static inline int plat_dma_supported(struct device *dev, u64 mask) -{ - BUG(); - return 0; -} - -static inline int plat_device_is_coherent(struct device *dev) -{ - return 1; -} - -static inline void plat_post_dma_flush(struct device *dev) -{ -} - -static inline bool dma_capable(struct device *dev, dma_addr_t addr, size_t size) -{ - if (!dev->dma_mask) - return false; - - return addr + size - 1 <= *dev->dma_mask; -} - -dma_addr_t __phys_to_dma(struct device *dev, phys_addr_t paddr); -phys_addr_t __dma_to_phys(struct device *dev, dma_addr_t daddr); - -struct dma_map_ops; -extern const struct dma_map_ops *octeon_pci_dma_map_ops; -extern char *octeon_swiotlb; - -#endif /* __ASM_MACH_CAVIUM_OCTEON_DMA_COHERENCE_H */ diff --git a/arch/mips/include/asm/mach-generic/dma-coherence.h b/arch/mips/include/asm/mach-generic/dma-coherence.h deleted file mode 100644 index 8ad7a40ca786..000000000000 --- a/arch/mips/include/asm/mach-generic/dma-coherence.h +++ /dev/null @@ -1,73 +0,0 @@ -/* - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. - * - * Copyright (C) 2006 Ralf Baechle - * - */ -#ifndef __ASM_MACH_GENERIC_DMA_COHERENCE_H -#define __ASM_MACH_GENERIC_DMA_COHERENCE_H - -struct device; - -static inline dma_addr_t plat_map_dma_mem(struct device *dev, void *addr, - size_t size) -{ - return virt_to_phys(addr); -} - -static inline dma_addr_t plat_map_dma_mem_page(struct device *dev, - struct page *page) -{ - return page_to_phys(page); -} - -static inline unsigned long plat_dma_addr_to_phys(struct device *dev, - dma_addr_t dma_addr) -{ - return dma_addr; -} - -static inline void plat_unmap_dma_mem(struct device *dev, dma_addr_t dma_addr, - size_t size, enum dma_data_direction direction) -{ -} - -static inline int plat_dma_supported(struct device *dev, u64 mask) -{ - /* - * we fall back to GFP_DMA when the mask isn't all 1s, - * so we can't guarantee allocations that must be - * within a tighter range than GFP_DMA.. 
- */ - if (mask < DMA_BIT_MASK(24)) - return 0; - - return 1; -} - -static inline int plat_device_is_coherent(struct device *dev) -{ -#ifdef CONFIG_DMA_PERDEV_COHERENT - return dev->archdata.dma_coherent; -#else - switch (coherentio) { - default: - case IO_COHERENCE_DEFAULT: - return hw_coherentio; - case IO_COHERENCE_ENABLED: - return 1; - case IO_COHERENCE_DISABLED: - return 0; - } -#endif -} - -#ifndef plat_post_dma_flush -static inline void plat_post_dma_flush(struct device *dev) -{ -} -#endif - -#endif /* __ASM_MACH_GENERIC_DMA_COHERENCE_H */ diff --git a/arch/mips/include/asm/mach-generic/kmalloc.h b/arch/mips/include/asm/mach-generic/kmalloc.h index 74207c7bd00d..649a98338886 100644 --- a/arch/mips/include/asm/mach-generic/kmalloc.h +++ b/arch/mips/include/asm/mach-generic/kmalloc.h @@ -2,8 +2,7 @@ #ifndef __ASM_MACH_GENERIC_KMALLOC_H #define __ASM_MACH_GENERIC_KMALLOC_H - -#ifndef CONFIG_DMA_COHERENT +#ifdef CONFIG_DMA_NONCOHERENT /* * Total overkill for most systems but need as a safe default. * Set this one if any device in the system might do non-coherent DMA. diff --git a/arch/mips/include/asm/mach-generic/spaces.h b/arch/mips/include/asm/mach-generic/spaces.h index 952b0fdfda0e..ee5ebe98f6cf 100644 --- a/arch/mips/include/asm/mach-generic/spaces.h +++ b/arch/mips/include/asm/mach-generic/spaces.h @@ -17,9 +17,13 @@ /* * This gives the physical RAM offset. */ -#ifndef PHYS_OFFSET -#define PHYS_OFFSET _AC(0, UL) -#endif +#ifndef __ASSEMBLY__ +# if defined(CONFIG_MIPS_AUTO_PFN_OFFSET) +# define PHYS_OFFSET ((unsigned long)PFN_PHYS(ARCH_PFN_OFFSET)) +# elif !defined(PHYS_OFFSET) +# define PHYS_OFFSET _AC(0, UL) +# endif +#endif /* __ASSEMBLY__ */ #ifdef CONFIG_32BIT #ifdef CONFIG_KVM_GUEST diff --git a/arch/mips/include/asm/mach-ip27/dma-coherence.h b/arch/mips/include/asm/mach-ip27/dma-coherence.h deleted file mode 100644 index 04d862020ac9..000000000000 --- a/arch/mips/include/asm/mach-ip27/dma-coherence.h +++ /dev/null @@ -1,70 +0,0 @@ -/* - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. - * - * Copyright (C) 2006 Ralf Baechle - * - */ -#ifndef __ASM_MACH_IP27_DMA_COHERENCE_H -#define __ASM_MACH_IP27_DMA_COHERENCE_H - -#include - -#define pdev_to_baddr(pdev, addr) \ - (BRIDGE_CONTROLLER(pdev->bus)->baddr + (addr)) -#define dev_to_baddr(dev, addr) \ - pdev_to_baddr(to_pci_dev(dev), (addr)) - -struct device; - -static inline dma_addr_t plat_map_dma_mem(struct device *dev, void *addr, - size_t size) -{ - dma_addr_t pa = dev_to_baddr(dev, virt_to_phys(addr)); - - return pa; -} - -static inline dma_addr_t plat_map_dma_mem_page(struct device *dev, - struct page *page) -{ - dma_addr_t pa = dev_to_baddr(dev, page_to_phys(page)); - - return pa; -} - -static inline unsigned long plat_dma_addr_to_phys(struct device *dev, - dma_addr_t dma_addr) -{ - return dma_addr & ~(0xffUL << 56); -} - -static inline void plat_unmap_dma_mem(struct device *dev, dma_addr_t dma_addr, - size_t size, enum dma_data_direction direction) -{ -} - -static inline int plat_dma_supported(struct device *dev, u64 mask) -{ - /* - * we fall back to GFP_DMA when the mask isn't all 1s, - * so we can't guarantee allocations that must be - * within a tighter range than GFP_DMA.. 
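The mach-generic/spaces.h hunk above makes PHYS_OFFSET dynamic when CONFIG_MIPS_AUTO_PFN_OFFSET is enabled: it expands to PFN_PHYS(ARCH_PFN_OFFSET), i.e. the first RAM pfn shifted up by PAGE_SHIFT. A worked example with illustrative numbers, assuming 4 KiB pages and RAM starting at pfn 0x8000:

/* PFN_PHYS(x) is ((phys_addr_t)(x) << PAGE_SHIFT), from <linux/pfn.h> */
PHYS_OFFSET == PFN_PHYS(0x8000) == 0x8000UL << 12 == 0x08000000

The expansion involves a C cast, which is why the definition is now guarded by #ifndef __ASSEMBLY__; assembly code no longer gets a PHYS_OFFSET definition from this header.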
- */ - if (mask < DMA_BIT_MASK(24)) - return 0; - - return 1; -} - -static inline void plat_post_dma_flush(struct device *dev) -{ -} - -static inline int plat_device_is_coherent(struct device *dev) -{ - return 1; /* IP27 non-coherent mode is unsupported */ -} - -#endif /* __ASM_MACH_IP27_DMA_COHERENCE_H */ diff --git a/arch/mips/include/asm/mach-ip32/dma-coherence.h b/arch/mips/include/asm/mach-ip32/dma-coherence.h deleted file mode 100644 index 7bdf212587a0..000000000000 --- a/arch/mips/include/asm/mach-ip32/dma-coherence.h +++ /dev/null @@ -1,92 +0,0 @@ -/* - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. - * - * Copyright (C) 2006 Ralf Baechle - * - */ -#ifndef __ASM_MACH_IP32_DMA_COHERENCE_H -#define __ASM_MACH_IP32_DMA_COHERENCE_H - -#include - -struct device; - -/* - * Few notes. - * 1. CPU sees memory as two chunks: 0-256M@0x0, and the rest @0x40000000+256M - * 2. PCI sees memory as one big chunk @0x0 (or we could use 0x40000000 for - * native-endian) - * 3. All other devices see memory as one big chunk at 0x40000000 - * 4. Non-PCI devices will pass NULL as struct device* - * - * Thus we translate differently, depending on device. - */ - -#define RAM_OFFSET_MASK 0x3fffffffUL - -static inline dma_addr_t plat_map_dma_mem(struct device *dev, void *addr, - size_t size) -{ - dma_addr_t pa = virt_to_phys(addr) & RAM_OFFSET_MASK; - - if (dev == NULL) - pa += CRIME_HI_MEM_BASE; - - return pa; -} - -static inline dma_addr_t plat_map_dma_mem_page(struct device *dev, - struct page *page) -{ - dma_addr_t pa; - - pa = page_to_phys(page) & RAM_OFFSET_MASK; - - if (dev == NULL) - pa += CRIME_HI_MEM_BASE; - - return pa; -} - -/* This is almost certainly wrong but it's what dma-ip32.c used to use */ -static inline unsigned long plat_dma_addr_to_phys(struct device *dev, - dma_addr_t dma_addr) -{ - unsigned long addr = dma_addr & RAM_OFFSET_MASK; - - if (dma_addr >= 256*1024*1024) - addr += CRIME_HI_MEM_BASE; - - return addr; -} - -static inline void plat_unmap_dma_mem(struct device *dev, dma_addr_t dma_addr, - size_t size, enum dma_data_direction direction) -{ -} - -static inline int plat_dma_supported(struct device *dev, u64 mask) -{ - /* - * we fall back to GFP_DMA when the mask isn't all 1s, - * so we can't guarantee allocations that must be - * within a tighter range than GFP_DMA.. - */ - if (mask < DMA_BIT_MASK(24)) - return 0; - - return 1; -} - -static inline void plat_post_dma_flush(struct device *dev) -{ -} - -static inline int plat_device_is_coherent(struct device *dev) -{ - return 0; /* IP32 is non-coherent */ -} - -#endif /* __ASM_MACH_IP32_DMA_COHERENCE_H */ diff --git a/arch/mips/include/asm/mach-jazz/dma-coherence.h b/arch/mips/include/asm/mach-jazz/dma-coherence.h deleted file mode 100644 index dc347c25c343..000000000000 --- a/arch/mips/include/asm/mach-jazz/dma-coherence.h +++ /dev/null @@ -1,60 +0,0 @@ -/* - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. 
- * - * Copyright (C) 2006 Ralf Baechle - */ -#ifndef __ASM_MACH_JAZZ_DMA_COHERENCE_H -#define __ASM_MACH_JAZZ_DMA_COHERENCE_H - -#include - -struct device; - -static inline dma_addr_t plat_map_dma_mem(struct device *dev, void *addr, size_t size) -{ - return vdma_alloc(virt_to_phys(addr), size); -} - -static inline dma_addr_t plat_map_dma_mem_page(struct device *dev, - struct page *page) -{ - return vdma_alloc(page_to_phys(page), PAGE_SIZE); -} - -static inline unsigned long plat_dma_addr_to_phys(struct device *dev, - dma_addr_t dma_addr) -{ - return vdma_log2phys(dma_addr); -} - -static inline void plat_unmap_dma_mem(struct device *dev, dma_addr_t dma_addr, - size_t size, enum dma_data_direction direction) -{ - vdma_free(dma_addr); -} - -static inline int plat_dma_supported(struct device *dev, u64 mask) -{ - /* - * we fall back to GFP_DMA when the mask isn't all 1s, - * so we can't guarantee allocations that must be - * within a tighter range than GFP_DMA.. - */ - if (mask < DMA_BIT_MASK(24)) - return 0; - - return 1; -} - -static inline void plat_post_dma_flush(struct device *dev) -{ -} - -static inline int plat_device_is_coherent(struct device *dev) -{ - return 0; -} - -#endif /* __ASM_MACH_JAZZ_DMA_COHERENCE_H */ diff --git a/arch/mips/include/asm/mach-loongson64/dma-coherence.h b/arch/mips/include/asm/mach-loongson64/dma-coherence.h deleted file mode 100644 index 64fc44dec0a8..000000000000 --- a/arch/mips/include/asm/mach-loongson64/dma-coherence.h +++ /dev/null @@ -1,93 +0,0 @@ -/* - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. - * - * Copyright (C) 2006, 07 Ralf Baechle - * Copyright (C) 2007 Lemote, Inc. & Institute of Computing Technology - * Author: Fuxin Zhang, zhangfx@lemote.com - * - */ -#ifndef __ASM_MACH_LOONGSON64_DMA_COHERENCE_H -#define __ASM_MACH_LOONGSON64_DMA_COHERENCE_H - -#ifdef CONFIG_SWIOTLB -#include -#endif - -struct device; - -static inline bool dma_capable(struct device *dev, dma_addr_t addr, size_t size) -{ - if (!dev->dma_mask) - return false; - - return addr + size - 1 <= *dev->dma_mask; -} - -extern dma_addr_t __phys_to_dma(struct device *dev, phys_addr_t paddr); -extern phys_addr_t __dma_to_phys(struct device *dev, dma_addr_t daddr); -static inline dma_addr_t plat_map_dma_mem(struct device *dev, void *addr, - size_t size) -{ -#ifdef CONFIG_CPU_LOONGSON3 - return __phys_to_dma(dev, virt_to_phys(addr)); -#else - return virt_to_phys(addr) | 0x80000000; -#endif -} - -static inline dma_addr_t plat_map_dma_mem_page(struct device *dev, - struct page *page) -{ -#ifdef CONFIG_CPU_LOONGSON3 - return __phys_to_dma(dev, page_to_phys(page)); -#else - return page_to_phys(page) | 0x80000000; -#endif -} - -static inline unsigned long plat_dma_addr_to_phys(struct device *dev, - dma_addr_t dma_addr) -{ -#if defined(CONFIG_CPU_LOONGSON3) && defined(CONFIG_64BIT) - return __dma_to_phys(dev, dma_addr); -#elif defined(CONFIG_CPU_LOONGSON2F) && defined(CONFIG_64BIT) - return (dma_addr > 0x8fffffff) ? dma_addr : (dma_addr & 0x0fffffff); -#else - return dma_addr & 0x7fffffff; -#endif -} - -static inline void plat_unmap_dma_mem(struct device *dev, dma_addr_t dma_addr, - size_t size, enum dma_data_direction direction) -{ -} - -static inline int plat_dma_supported(struct device *dev, u64 mask) -{ - /* - * we fall back to GFP_DMA when the mask isn't all 1s, - * so we can't guarantee allocations that must be - * within a tighter range than GFP_DMA.. 
- */ - if (mask < DMA_BIT_MASK(24)) - return 0; - - return 1; -} - -static inline int plat_device_is_coherent(struct device *dev) -{ -#ifdef CONFIG_DMA_NONCOHERENT - return 0; -#else - return 1; -#endif /* CONFIG_DMA_NONCOHERENT */ -} - -static inline void plat_post_dma_flush(struct device *dev) -{ -} - -#endif /* __ASM_MACH_LOONGSON64_DMA_COHERENCE_H */ diff --git a/arch/mips/include/asm/mach-loongson64/kernel-entry-init.h b/arch/mips/include/asm/mach-loongson64/kernel-entry-init.h index 8393bc548987..312739117bb0 100644 --- a/arch/mips/include/asm/mach-loongson64/kernel-entry-init.h +++ b/arch/mips/include/asm/mach-loongson64/kernel-entry-init.h @@ -19,18 +19,18 @@ .set push .set mips64 /* Set LPA on LOONGSON3 config3 */ - mfc0 t0, $16, 3 + mfc0 t0, CP0_CONFIG3 or t0, (0x1 << 7) - mtc0 t0, $16, 3 + mtc0 t0, CP0_CONFIG3 /* Set ELPA on LOONGSON3 pagegrain */ - mfc0 t0, $5, 1 + mfc0 t0, CP0_PAGEGRAIN or t0, (0x1 << 29) - mtc0 t0, $5, 1 + mtc0 t0, CP0_PAGEGRAIN #ifdef CONFIG_LOONGSON3_ENHANCEMENT /* Enable STFill Buffer */ - mfc0 t0, $16, 6 + mfc0 t0, CP0_CONFIG6 or t0, 0x100 - mtc0 t0, $16, 6 + mtc0 t0, CP0_CONFIG6 #endif _ehb .set pop @@ -45,18 +45,18 @@ .set push .set mips64 /* Set LPA on LOONGSON3 config3 */ - mfc0 t0, $16, 3 + mfc0 t0, CP0_CONFIG3 or t0, (0x1 << 7) - mtc0 t0, $16, 3 + mtc0 t0, CP0_CONFIG3 /* Set ELPA on LOONGSON3 pagegrain */ - mfc0 t0, $5, 1 + mfc0 t0, CP0_PAGEGRAIN or t0, (0x1 << 29) - mtc0 t0, $5, 1 + mtc0 t0, CP0_PAGEGRAIN #ifdef CONFIG_LOONGSON3_ENHANCEMENT /* Enable STFill Buffer */ - mfc0 t0, $16, 6 + mfc0 t0, CP0_CONFIG6 or t0, 0x100 - mtc0 t0, $16, 6 + mtc0 t0, CP0_CONFIG6 #endif _ehb .set pop diff --git a/arch/mips/include/asm/mach-pic32/spaces.h b/arch/mips/include/asm/mach-pic32/spaces.h index 046a0a9aa8b3..a1b9783b76ea 100644 --- a/arch/mips/include/asm/mach-pic32/spaces.h +++ b/arch/mips/include/asm/mach-pic32/spaces.h @@ -16,7 +16,6 @@ #ifdef CONFIG_PIC32MZDA #define PHYS_OFFSET _AC(0x08000000, UL) -#define UNCAC_BASE _AC(0xa8000000, UL) #endif #include diff --git a/arch/mips/include/asm/mipsregs.h b/arch/mips/include/asm/mipsregs.h index ae461d91cd1f..01df9ad62fb8 100644 --- a/arch/mips/include/asm/mipsregs.h +++ b/arch/mips/include/asm/mipsregs.h @@ -16,6 +16,7 @@ #include #include #include +#include #include /* @@ -51,6 +52,7 @@ #define CP0_GLOBALNUMBER $3, 1 #define CP0_CONTEXT $4 #define CP0_PAGEMASK $5 +#define CP0_PAGEGRAIN $5, 1 #define CP0_SEGCTL0 $5, 2 #define CP0_SEGCTL1 $5, 3 #define CP0_SEGCTL2 $5, 4 @@ -77,6 +79,7 @@ #define CP0_CONFIG $16 #define CP0_CONFIG3 $16, 3 #define CP0_CONFIG5 $16, 5 +#define CP0_CONFIG6 $16, 6 #define CP0_LLADDR $17 #define CP0_WATCHLO $18 #define CP0_WATCHHI $19 @@ -1481,32 +1484,38 @@ do { \ #define __write_64bit_c0_split(source, sel, val) \ do { \ - unsigned long long __tmp; \ + unsigned long long __tmp = (val); \ unsigned long __flags; \ \ local_irq_save(__flags); \ - if (sel == 0) \ + if (MIPS_ISA_REV >= 2) \ + __asm__ __volatile__( \ + ".set\tpush\n\t" \ + ".set\t" MIPS_ISA_LEVEL "\n\t" \ + "dins\t%L0, %M0, 32, 32\n\t" \ + "dmtc0\t%L0, " #source ", " #sel "\n\t" \ + ".set\tpop" \ + : "+r" (__tmp)); \ + else if (sel == 0) \ __asm__ __volatile__( \ ".set\tmips64\n\t" \ - "dsll\t%L0, %L1, 32\n\t" \ + "dsll\t%L0, %L0, 32\n\t" \ "dsrl\t%L0, %L0, 32\n\t" \ - "dsll\t%M0, %M1, 32\n\t" \ + "dsll\t%M0, %M0, 32\n\t" \ "or\t%L0, %L0, %M0\n\t" \ "dmtc0\t%L0, " #source "\n\t" \ ".set\tmips0" \ - : "=&r,r" (__tmp) \ - : "r,0" (val)); \ + : "+r" (__tmp)); \ else \ __asm__ __volatile__( \ ".set\tmips64\n\t" \ - "dsll\t%L0, 
%L1, 32\n\t" \ + "dsll\t%L0, %L0, 32\n\t" \ "dsrl\t%L0, %L0, 32\n\t" \ - "dsll\t%M0, %M1, 32\n\t" \ + "dsll\t%M0, %M0, 32\n\t" \ "or\t%L0, %L0, %M0\n\t" \ "dmtc0\t%L0, " #source ", " #sel "\n\t" \ ".set\tmips0" \ - : "=&r,r" (__tmp) \ - : "r,0" (val)); \ + : "+r" (__tmp)); \ local_irq_restore(__flags); \ } while (0) diff --git a/arch/mips/include/asm/mmu_context.h b/arch/mips/include/asm/mmu_context.h index da2004cef2d5..b509371a6b0c 100644 --- a/arch/mips/include/asm/mmu_context.h +++ b/arch/mips/include/asm/mmu_context.h @@ -126,8 +126,6 @@ init_new_context(struct task_struct *tsk, struct mm_struct *mm) for_each_possible_cpu(i) cpu_context(i, mm) = 0; - atomic_set(&mm->context.fp_mode_switching, 0); - mm->context.bd_emupage_allocmap = NULL; spin_lock_init(&mm->context.bd_emupage_lock); init_waitqueue_head(&mm->context.bd_emupage_queue); diff --git a/arch/mips/include/asm/netlogic/xlr/fmn.h b/arch/mips/include/asm/netlogic/xlr/fmn.h index 5604db3d1836..d79c68fa78d9 100644 --- a/arch/mips/include/asm/netlogic/xlr/fmn.h +++ b/arch/mips/include/asm/netlogic/xlr/fmn.h @@ -301,8 +301,6 @@ static inline int nlm_fmn_send(unsigned int size, unsigned int code, for (i = 0; i < 8; i++) { nlm_msgsnd(dest); status = nlm_read_c2_status0(); - if ((status & 0x2) == 1) - pr_info("Send pending fail!\n"); if ((status & 0x4) == 0) return 0; } diff --git a/arch/mips/include/asm/octeon/cvmx-asxx-defs.h b/arch/mips/include/asm/octeon/cvmx-asxx-defs.h index a1e21a3854cf..1eef155979f3 100644 --- a/arch/mips/include/asm/octeon/cvmx-asxx-defs.h +++ b/arch/mips/include/asm/octeon/cvmx-asxx-defs.h @@ -4,7 +4,7 @@ * Contact: support@caviumnetworks.com * This file is part of the OCTEON SDK * - * Copyright (c) 2003-2012 Cavium Networks + * Copyright (C) 2003-2018 Cavium, Inc. * * This file is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License, Version 2, as @@ -55,6 +55,8 @@ #define CVMX_ASXX_TX_HI_WATERX(offset, block_id) (CVMX_ADD_IO_SEG(0x00011800B0000080ull) + (((offset) & 3) + ((block_id) & 1) * 0x1000000ull) * 8) #define CVMX_ASXX_TX_PRT_EN(block_id) (CVMX_ADD_IO_SEG(0x00011800B0000008ull) + ((block_id) & 1) * 0x8000000ull) +void __cvmx_interrupt_asxx_enable(int block); + union cvmx_asxx_gmii_rx_clk_set { uint64_t u64; struct cvmx_asxx_gmii_rx_clk_set_s { diff --git a/arch/mips/include/asm/octeon/cvmx-ciu-defs.h b/arch/mips/include/asm/octeon/cvmx-ciu-defs.h index 6e61792d9248..1d18be8cdddc 100644 --- a/arch/mips/include/asm/octeon/cvmx-ciu-defs.h +++ b/arch/mips/include/asm/octeon/cvmx-ciu-defs.h @@ -1,10014 +1,176 @@ -/***********************license start*************** - * Author: Cavium Networks +/* SPDX-License-Identifier: GPL-2.0 */ +/* Octeon CIU definitions * - * Contact: support@caviumnetworks.com - * This file is part of the OCTEON SDK - * - * Copyright (c) 2003-2012 Cavium Networks - * - * This file is free software; you can redistribute it and/or modify - * it under the terms of the GNU General Public License, Version 2, as - * published by the Free Software Foundation. - * - * This file is distributed in the hope that it will be useful, but - * AS-IS and WITHOUT ANY WARRANTY; without even the implied warranty - * of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE, TITLE, or - * NONINFRINGEMENT. See the GNU General Public License for more - * details. 
- * - * You should have received a copy of the GNU General Public License - * along with this file; if not, write to the Free Software - * Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA - * or visit http://www.gnu.org/licenses/. - * - * This file may also be available under a different license from Cavium. - * Contact Cavium Networks for more information - ***********************license end**************************************/ + * Copyright (C) 2003-2018 Cavium, Inc. + */ #ifndef __CVMX_CIU_DEFS_H__ #define __CVMX_CIU_DEFS_H__ -#define CVMX_CIU_BIST (CVMX_ADD_IO_SEG(0x0001070000000730ull)) -#define CVMX_CIU_BLOCK_INT (CVMX_ADD_IO_SEG(0x00010700000007C0ull)) -#define CVMX_CIU_DINT (CVMX_ADD_IO_SEG(0x0001070000000720ull)) -#define CVMX_CIU_EN2_IOX_INT(offset) (CVMX_ADD_IO_SEG(0x000107000000A600ull) + ((offset) & 1) * 8) -#define CVMX_CIU_EN2_IOX_INT_W1C(offset) (CVMX_ADD_IO_SEG(0x000107000000CE00ull) + ((offset) & 1) * 8) -#define CVMX_CIU_EN2_IOX_INT_W1S(offset) (CVMX_ADD_IO_SEG(0x000107000000AE00ull) + ((offset) & 1) * 8) -#define CVMX_CIU_EN2_PPX_IP2(offset) (CVMX_ADD_IO_SEG(0x000107000000A000ull) + ((offset) & 15) * 8) -#define CVMX_CIU_EN2_PPX_IP2_W1C(offset) (CVMX_ADD_IO_SEG(0x000107000000C800ull) + ((offset) & 15) * 8) -#define CVMX_CIU_EN2_PPX_IP2_W1S(offset) (CVMX_ADD_IO_SEG(0x000107000000A800ull) + ((offset) & 15) * 8) -#define CVMX_CIU_EN2_PPX_IP3(offset) (CVMX_ADD_IO_SEG(0x000107000000A200ull) + ((offset) & 15) * 8) -#define CVMX_CIU_EN2_PPX_IP3_W1C(offset) (CVMX_ADD_IO_SEG(0x000107000000CA00ull) + ((offset) & 15) * 8) -#define CVMX_CIU_EN2_PPX_IP3_W1S(offset) (CVMX_ADD_IO_SEG(0x000107000000AA00ull) + ((offset) & 15) * 8) -#define CVMX_CIU_EN2_PPX_IP4(offset) (CVMX_ADD_IO_SEG(0x000107000000A400ull) + ((offset) & 15) * 8) -#define CVMX_CIU_EN2_PPX_IP4_W1C(offset) (CVMX_ADD_IO_SEG(0x000107000000CC00ull) + ((offset) & 15) * 8) -#define CVMX_CIU_EN2_PPX_IP4_W1S(offset) (CVMX_ADD_IO_SEG(0x000107000000AC00ull) + ((offset) & 15) * 8) -#define CVMX_CIU_FUSE (CVMX_ADD_IO_SEG(0x0001070000000728ull)) -#define CVMX_CIU_GSTOP (CVMX_ADD_IO_SEG(0x0001070000000710ull)) -#define CVMX_CIU_INT33_SUM0 (CVMX_ADD_IO_SEG(0x0001070000000110ull)) -#define CVMX_CIU_INTX_EN0(offset) (CVMX_ADD_IO_SEG(0x0001070000000200ull) + ((offset) & 63) * 16) -#define CVMX_CIU_INTX_EN0_W1C(offset) (CVMX_ADD_IO_SEG(0x0001070000002200ull) + ((offset) & 63) * 16) -#define CVMX_CIU_INTX_EN0_W1S(offset) (CVMX_ADD_IO_SEG(0x0001070000006200ull) + ((offset) & 63) * 16) -#define CVMX_CIU_INTX_EN1(offset) (CVMX_ADD_IO_SEG(0x0001070000000208ull) + ((offset) & 63) * 16) -#define CVMX_CIU_INTX_EN1_W1C(offset) (CVMX_ADD_IO_SEG(0x0001070000002208ull) + ((offset) & 63) * 16) -#define CVMX_CIU_INTX_EN1_W1S(offset) (CVMX_ADD_IO_SEG(0x0001070000006208ull) + ((offset) & 63) * 16) -#define CVMX_CIU_INTX_EN4_0(offset) (CVMX_ADD_IO_SEG(0x0001070000000C80ull) + ((offset) & 15) * 16) -#define CVMX_CIU_INTX_EN4_0_W1C(offset) (CVMX_ADD_IO_SEG(0x0001070000002C80ull) + ((offset) & 15) * 16) -#define CVMX_CIU_INTX_EN4_0_W1S(offset) (CVMX_ADD_IO_SEG(0x0001070000006C80ull) + ((offset) & 15) * 16) -#define CVMX_CIU_INTX_EN4_1(offset) (CVMX_ADD_IO_SEG(0x0001070000000C88ull) + ((offset) & 15) * 16) -#define CVMX_CIU_INTX_EN4_1_W1C(offset) (CVMX_ADD_IO_SEG(0x0001070000002C88ull) + ((offset) & 15) * 16) -#define CVMX_CIU_INTX_EN4_1_W1S(offset) (CVMX_ADD_IO_SEG(0x0001070000006C88ull) + ((offset) & 15) * 16) -#define CVMX_CIU_INTX_SUM0(offset) (CVMX_ADD_IO_SEG(0x0001070000000000ull) + ((offset) & 63) * 8) -#define 
CVMX_CIU_INTX_SUM4(offset) (CVMX_ADD_IO_SEG(0x0001070000000C00ull) + ((offset) & 15) * 8) -#define CVMX_CIU_INT_DBG_SEL (CVMX_ADD_IO_SEG(0x00010700000007D0ull)) -#define CVMX_CIU_INT_SUM1 (CVMX_ADD_IO_SEG(0x0001070000000108ull)) -static inline uint64_t CVMX_CIU_MBOX_CLRX(unsigned long offset) +#include + +#define CVMX_CIU_ADDR(addr, coreid, coremask, offset) \ + (CVMX_ADD_IO_SEG(0x0001070000000000ull + addr##ull) + \ + (((coreid) & (coremask)) * offset)) + +#define CVMX_CIU_EN2_PPX_IP4(c) CVMX_CIU_ADDR(0xA400, c, 0x0F, 8) +#define CVMX_CIU_EN2_PPX_IP4_W1C(c) CVMX_CIU_ADDR(0xCC00, c, 0x0F, 8) +#define CVMX_CIU_EN2_PPX_IP4_W1S(c) CVMX_CIU_ADDR(0xAC00, c, 0x0F, 8) +#define CVMX_CIU_FUSE CVMX_CIU_ADDR(0x0728, 0, 0x00, 0) +#define CVMX_CIU_INT_SUM1 CVMX_CIU_ADDR(0x0108, 0, 0x00, 0) +#define CVMX_CIU_INTX_EN0(c) CVMX_CIU_ADDR(0x0200, c, 0x3F, 16) +#define CVMX_CIU_INTX_EN0_W1C(c) CVMX_CIU_ADDR(0x2200, c, 0x3F, 16) +#define CVMX_CIU_INTX_EN0_W1S(c) CVMX_CIU_ADDR(0x6200, c, 0x3F, 16) +#define CVMX_CIU_INTX_EN1(c) CVMX_CIU_ADDR(0x0208, c, 0x3F, 16) +#define CVMX_CIU_INTX_EN1_W1C(c) CVMX_CIU_ADDR(0x2208, c, 0x3F, 16) +#define CVMX_CIU_INTX_EN1_W1S(c) CVMX_CIU_ADDR(0x6208, c, 0x3F, 16) +#define CVMX_CIU_INTX_SUM0(c) CVMX_CIU_ADDR(0x0000, c, 0x3F, 8) +#define CVMX_CIU_NMI CVMX_CIU_ADDR(0x0718, 0, 0x00, 0) +#define CVMX_CIU_PCI_INTA CVMX_CIU_ADDR(0x0750, 0, 0x00, 0) +#define CVMX_CIU_PP_BIST_STAT CVMX_CIU_ADDR(0x07E0, 0, 0x00, 0) +#define CVMX_CIU_PP_DBG CVMX_CIU_ADDR(0x0708, 0, 0x00, 0) +#define CVMX_CIU_PP_RST CVMX_CIU_ADDR(0x0700, 0, 0x00, 0) +#define CVMX_CIU_QLM0 CVMX_CIU_ADDR(0x0780, 0, 0x00, 0) +#define CVMX_CIU_QLM1 CVMX_CIU_ADDR(0x0788, 0, 0x00, 0) +#define CVMX_CIU_QLM_JTGC CVMX_CIU_ADDR(0x0768, 0, 0x00, 0) +#define CVMX_CIU_QLM_JTGD CVMX_CIU_ADDR(0x0770, 0, 0x00, 0) +#define CVMX_CIU_SOFT_BIST CVMX_CIU_ADDR(0x0738, 0, 0x00, 0) +#define CVMX_CIU_SOFT_PRST1 CVMX_CIU_ADDR(0x0758, 0, 0x00, 0) +#define CVMX_CIU_SOFT_PRST CVMX_CIU_ADDR(0x0748, 0, 0x00, 0) +#define CVMX_CIU_SOFT_RST CVMX_CIU_ADDR(0x0740, 0, 0x00, 0) +#define CVMX_CIU_SUM2_PPX_IP4(c) CVMX_CIU_ADDR(0x8C00, c, 0x0F, 8) +#define CVMX_CIU_TIM_MULTI_CAST CVMX_CIU_ADDR(0xC200, 0, 0x00, 0) +#define CVMX_CIU_TIMX(c) CVMX_CIU_ADDR(0x0480, c, 0x0F, 8) + +static inline uint64_t CVMX_CIU_MBOX_CLRX(unsigned int coreid) { - switch (cvmx_get_octeon_family()) { - case OCTEON_CN30XX & OCTEON_FAMILY_MASK: - return CVMX_ADD_IO_SEG(0x0001070000000680ull) + (offset) * 8; - case OCTEON_CN52XX & OCTEON_FAMILY_MASK: - case OCTEON_CNF71XX & OCTEON_FAMILY_MASK: - case OCTEON_CN61XX & OCTEON_FAMILY_MASK: - return CVMX_ADD_IO_SEG(0x0001070000000680ull) + (offset) * 8; - case OCTEON_CN31XX & OCTEON_FAMILY_MASK: - case OCTEON_CN50XX & OCTEON_FAMILY_MASK: - return CVMX_ADD_IO_SEG(0x0001070000000680ull) + (offset) * 8; - case OCTEON_CN38XX & OCTEON_FAMILY_MASK: - case OCTEON_CN58XX & OCTEON_FAMILY_MASK: - return CVMX_ADD_IO_SEG(0x0001070000000680ull) + (offset) * 8; - case OCTEON_CN56XX & OCTEON_FAMILY_MASK: - return CVMX_ADD_IO_SEG(0x0001070000000680ull) + (offset) * 8; - case OCTEON_CN66XX & OCTEON_FAMILY_MASK: - return CVMX_ADD_IO_SEG(0x0001070000000680ull) + (offset) * 8; - case OCTEON_CN63XX & OCTEON_FAMILY_MASK: - return CVMX_ADD_IO_SEG(0x0001070000000680ull) + (offset) * 8; - case OCTEON_CN68XX & OCTEON_FAMILY_MASK: - return CVMX_ADD_IO_SEG(0x0001070100100600ull) + (offset) * 8; - } - return CVMX_ADD_IO_SEG(0x0001070000000680ull) + (offset) * 8; + if (cvmx_get_octeon_family() == (OCTEON_CN68XX & OCTEON_FAMILY_MASK)) + return CVMX_CIU_ADDR(0x100100600, 
coreid, 0x0F, 8); + else + return CVMX_CIU_ADDR(0x000000680, coreid, 0x0F, 8); } -static inline uint64_t CVMX_CIU_MBOX_SETX(unsigned long offset) +static inline uint64_t CVMX_CIU_MBOX_SETX(unsigned int coreid) { - switch (cvmx_get_octeon_family()) { - case OCTEON_CN30XX & OCTEON_FAMILY_MASK: - return CVMX_ADD_IO_SEG(0x0001070000000600ull) + (offset) * 8; - case OCTEON_CN52XX & OCTEON_FAMILY_MASK: - case OCTEON_CNF71XX & OCTEON_FAMILY_MASK: - case OCTEON_CN61XX & OCTEON_FAMILY_MASK: - return CVMX_ADD_IO_SEG(0x0001070000000600ull) + (offset) * 8; - case OCTEON_CN31XX & OCTEON_FAMILY_MASK: - case OCTEON_CN50XX & OCTEON_FAMILY_MASK: - return CVMX_ADD_IO_SEG(0x0001070000000600ull) + (offset) * 8; - case OCTEON_CN38XX & OCTEON_FAMILY_MASK: - case OCTEON_CN58XX & OCTEON_FAMILY_MASK: - return CVMX_ADD_IO_SEG(0x0001070000000600ull) + (offset) * 8; - case OCTEON_CN56XX & OCTEON_FAMILY_MASK: - return CVMX_ADD_IO_SEG(0x0001070000000600ull) + (offset) * 8; - case OCTEON_CN66XX & OCTEON_FAMILY_MASK: - return CVMX_ADD_IO_SEG(0x0001070000000600ull) + (offset) * 8; - case OCTEON_CN63XX & OCTEON_FAMILY_MASK: - return CVMX_ADD_IO_SEG(0x0001070000000600ull) + (offset) * 8; - case OCTEON_CN68XX & OCTEON_FAMILY_MASK: - return CVMX_ADD_IO_SEG(0x0001070100100400ull) + (offset) * 8; - } - return CVMX_ADD_IO_SEG(0x0001070000000600ull) + (offset) * 8; + if (cvmx_get_octeon_family() == (OCTEON_CN68XX & OCTEON_FAMILY_MASK)) + return CVMX_CIU_ADDR(0x100100400, coreid, 0x0F, 8); + else + return CVMX_CIU_ADDR(0x000000600, coreid, 0x0F, 8); } -#define CVMX_CIU_NMI (CVMX_ADD_IO_SEG(0x0001070000000718ull)) -#define CVMX_CIU_PCI_INTA (CVMX_ADD_IO_SEG(0x0001070000000750ull)) -#define CVMX_CIU_PP_BIST_STAT (CVMX_ADD_IO_SEG(0x00010700000007E0ull)) -#define CVMX_CIU_PP_DBG (CVMX_ADD_IO_SEG(0x0001070000000708ull)) -static inline uint64_t CVMX_CIU_PP_POKEX(unsigned long offset) +static inline uint64_t CVMX_CIU_PP_POKEX(unsigned int coreid) { switch (cvmx_get_octeon_family()) { - case OCTEON_CN30XX & OCTEON_FAMILY_MASK: - return CVMX_ADD_IO_SEG(0x0001070000000580ull) + (offset) * 8; - case OCTEON_CN52XX & OCTEON_FAMILY_MASK: - case OCTEON_CNF71XX & OCTEON_FAMILY_MASK: - case OCTEON_CN61XX & OCTEON_FAMILY_MASK: - case OCTEON_CN70XX & OCTEON_FAMILY_MASK: - return CVMX_ADD_IO_SEG(0x0001070000000580ull) + (offset) * 8; - case OCTEON_CN31XX & OCTEON_FAMILY_MASK: - case OCTEON_CN50XX & OCTEON_FAMILY_MASK: - return CVMX_ADD_IO_SEG(0x0001070000000580ull) + (offset) * 8; - case OCTEON_CN38XX & OCTEON_FAMILY_MASK: - case OCTEON_CN58XX & OCTEON_FAMILY_MASK: - return CVMX_ADD_IO_SEG(0x0001070000000580ull) + (offset) * 8; - case OCTEON_CN56XX & OCTEON_FAMILY_MASK: - return CVMX_ADD_IO_SEG(0x0001070000000580ull) + (offset) * 8; - case OCTEON_CN66XX & OCTEON_FAMILY_MASK: - return CVMX_ADD_IO_SEG(0x0001070000000580ull) + (offset) * 8; - case OCTEON_CN63XX & OCTEON_FAMILY_MASK: - return CVMX_ADD_IO_SEG(0x0001070000000580ull) + (offset) * 8; case OCTEON_CN68XX & OCTEON_FAMILY_MASK: - return CVMX_ADD_IO_SEG(0x0001070100100200ull) + (offset) * 8; + return CVMX_CIU_ADDR(0x100100200, coreid, 0x0F, 8); case OCTEON_CNF75XX & OCTEON_FAMILY_MASK: case OCTEON_CN73XX & OCTEON_FAMILY_MASK: case OCTEON_CN78XX & OCTEON_FAMILY_MASK: - return CVMX_ADD_IO_SEG(0x0001010000030000ull) + (offset) * 8; + return CVMX_CIU_ADDR(0x000030000, coreid, 0x0F, 8) - + 0x60000000000ull; + default: + return CVMX_CIU_ADDR(0x000000580, coreid, 0x0F, 8); } - return CVMX_ADD_IO_SEG(0x0001070000000580ull) + (offset) * 8; } -#define CVMX_CIU_PP_RST 
(CVMX_ADD_IO_SEG(0x0001070000000700ull)) -#define CVMX_CIU_QLM0 (CVMX_ADD_IO_SEG(0x0001070000000780ull)) -#define CVMX_CIU_QLM1 (CVMX_ADD_IO_SEG(0x0001070000000788ull)) -#define CVMX_CIU_QLM2 (CVMX_ADD_IO_SEG(0x0001070000000790ull)) -#define CVMX_CIU_QLM3 (CVMX_ADD_IO_SEG(0x0001070000000798ull)) -#define CVMX_CIU_QLM4 (CVMX_ADD_IO_SEG(0x00010700000007A0ull)) -#define CVMX_CIU_QLM_DCOK (CVMX_ADD_IO_SEG(0x0001070000000760ull)) -#define CVMX_CIU_QLM_JTGC (CVMX_ADD_IO_SEG(0x0001070000000768ull)) -#define CVMX_CIU_QLM_JTGD (CVMX_ADD_IO_SEG(0x0001070000000770ull)) -#define CVMX_CIU_SOFT_BIST (CVMX_ADD_IO_SEG(0x0001070000000738ull)) -#define CVMX_CIU_SOFT_PRST (CVMX_ADD_IO_SEG(0x0001070000000748ull)) -#define CVMX_CIU_SOFT_PRST1 (CVMX_ADD_IO_SEG(0x0001070000000758ull)) -#define CVMX_CIU_SOFT_PRST2 (CVMX_ADD_IO_SEG(0x00010700000007D8ull)) -#define CVMX_CIU_SOFT_PRST3 (CVMX_ADD_IO_SEG(0x00010700000007E0ull)) -#define CVMX_CIU_SOFT_RST (CVMX_ADD_IO_SEG(0x0001070000000740ull)) -#define CVMX_CIU_SUM1_IOX_INT(offset) (CVMX_ADD_IO_SEG(0x0001070000008600ull) + ((offset) & 1) * 8) -#define CVMX_CIU_SUM1_PPX_IP2(offset) (CVMX_ADD_IO_SEG(0x0001070000008000ull) + ((offset) & 15) * 8) -#define CVMX_CIU_SUM1_PPX_IP3(offset) (CVMX_ADD_IO_SEG(0x0001070000008200ull) + ((offset) & 15) * 8) -#define CVMX_CIU_SUM1_PPX_IP4(offset) (CVMX_ADD_IO_SEG(0x0001070000008400ull) + ((offset) & 15) * 8) -#define CVMX_CIU_SUM2_IOX_INT(offset) (CVMX_ADD_IO_SEG(0x0001070000008E00ull) + ((offset) & 1) * 8) -#define CVMX_CIU_SUM2_PPX_IP2(offset) (CVMX_ADD_IO_SEG(0x0001070000008800ull) + ((offset) & 15) * 8) -#define CVMX_CIU_SUM2_PPX_IP3(offset) (CVMX_ADD_IO_SEG(0x0001070000008A00ull) + ((offset) & 15) * 8) -#define CVMX_CIU_SUM2_PPX_IP4(offset) (CVMX_ADD_IO_SEG(0x0001070000008C00ull) + ((offset) & 15) * 8) -#define CVMX_CIU_TIMX(offset) (CVMX_ADD_IO_SEG(0x0001070000000480ull) + ((offset) & 15) * 8) -#define CVMX_CIU_TIM_MULTI_CAST (CVMX_ADD_IO_SEG(0x000107000000C200ull)) -static inline uint64_t CVMX_CIU_WDOGX(unsigned long offset) +static inline uint64_t CVMX_CIU_WDOGX(unsigned int coreid) { switch (cvmx_get_octeon_family()) { - case OCTEON_CN30XX & OCTEON_FAMILY_MASK: - return CVMX_ADD_IO_SEG(0x0001070000000500ull) + (offset) * 8; - case OCTEON_CN52XX & OCTEON_FAMILY_MASK: - case OCTEON_CNF71XX & OCTEON_FAMILY_MASK: - case OCTEON_CN61XX & OCTEON_FAMILY_MASK: - case OCTEON_CN70XX & OCTEON_FAMILY_MASK: - return CVMX_ADD_IO_SEG(0x0001070000000500ull) + (offset) * 8; - case OCTEON_CN31XX & OCTEON_FAMILY_MASK: - case OCTEON_CN50XX & OCTEON_FAMILY_MASK: - return CVMX_ADD_IO_SEG(0x0001070000000500ull) + (offset) * 8; - case OCTEON_CN38XX & OCTEON_FAMILY_MASK: - case OCTEON_CN58XX & OCTEON_FAMILY_MASK: - return CVMX_ADD_IO_SEG(0x0001070000000500ull) + (offset) * 8; - case OCTEON_CN56XX & OCTEON_FAMILY_MASK: - return CVMX_ADD_IO_SEG(0x0001070000000500ull) + (offset) * 8; - case OCTEON_CN66XX & OCTEON_FAMILY_MASK: - return CVMX_ADD_IO_SEG(0x0001070000000500ull) + (offset) * 8; - case OCTEON_CN63XX & OCTEON_FAMILY_MASK: - return CVMX_ADD_IO_SEG(0x0001070000000500ull) + (offset) * 8; case OCTEON_CN68XX & OCTEON_FAMILY_MASK: - return CVMX_ADD_IO_SEG(0x0001070100100000ull) + (offset) * 8; + return CVMX_CIU_ADDR(0x100100000, coreid, 0x0F, 8); case OCTEON_CNF75XX & OCTEON_FAMILY_MASK: case OCTEON_CN73XX & OCTEON_FAMILY_MASK: case OCTEON_CN78XX & OCTEON_FAMILY_MASK: - return CVMX_ADD_IO_SEG(0x0001010000020000ull) + (offset) * 8; + return CVMX_CIU_ADDR(0x000020000, coreid, 0x0F, 8) - + 0x60000000000ull; + default: + return 
CVMX_CIU_ADDR(0x000000500, coreid, 0x0F, 8); } - return CVMX_ADD_IO_SEG(0x0001070000000500ull) + (offset) * 8; } -union cvmx_ciu_bist { - uint64_t u64; - struct cvmx_ciu_bist_s { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_7_63:57; - uint64_t bist:7; -#else - uint64_t bist:7; - uint64_t reserved_7_63:57; -#endif - } s; - struct cvmx_ciu_bist_cn30xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_4_63:60; - uint64_t bist:4; -#else - uint64_t bist:4; - uint64_t reserved_4_63:60; -#endif - } cn30xx; - struct cvmx_ciu_bist_cn30xx cn31xx; - struct cvmx_ciu_bist_cn30xx cn38xx; - struct cvmx_ciu_bist_cn30xx cn38xxp2; - struct cvmx_ciu_bist_cn50xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_2_63:62; - uint64_t bist:2; -#else - uint64_t bist:2; - uint64_t reserved_2_63:62; -#endif - } cn50xx; - struct cvmx_ciu_bist_cn52xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_3_63:61; - uint64_t bist:3; -#else - uint64_t bist:3; - uint64_t reserved_3_63:61; -#endif - } cn52xx; - struct cvmx_ciu_bist_cn52xx cn52xxp1; - struct cvmx_ciu_bist_cn30xx cn56xx; - struct cvmx_ciu_bist_cn30xx cn56xxp1; - struct cvmx_ciu_bist_cn30xx cn58xx; - struct cvmx_ciu_bist_cn30xx cn58xxp1; - struct cvmx_ciu_bist_cn61xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_6_63:58; - uint64_t bist:6; -#else - uint64_t bist:6; - uint64_t reserved_6_63:58; -#endif - } cn61xx; - struct cvmx_ciu_bist_cn63xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_5_63:59; - uint64_t bist:5; -#else - uint64_t bist:5; - uint64_t reserved_5_63:59; -#endif - } cn63xx; - struct cvmx_ciu_bist_cn63xx cn63xxp1; - struct cvmx_ciu_bist_cn61xx cn66xx; - struct cvmx_ciu_bist_s cn68xx; - struct cvmx_ciu_bist_s cn68xxp1; - struct cvmx_ciu_bist_cn61xx cnf71xx; -}; - -union cvmx_ciu_block_int { - uint64_t u64; - struct cvmx_ciu_block_int_s { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_62_63:2; - uint64_t srio3:1; - uint64_t srio2:1; - uint64_t reserved_43_59:17; - uint64_t ptp:1; - uint64_t dpi:1; - uint64_t dfm:1; - uint64_t reserved_34_39:6; - uint64_t srio1:1; - uint64_t srio0:1; - uint64_t reserved_31_31:1; - uint64_t iob:1; - uint64_t reserved_29_29:1; - uint64_t agl:1; - uint64_t reserved_27_27:1; - uint64_t pem1:1; - uint64_t pem0:1; - uint64_t reserved_24_24:1; - uint64_t asxpcs1:1; - uint64_t asxpcs0:1; - uint64_t reserved_21_21:1; - uint64_t pip:1; - uint64_t reserved_18_19:2; - uint64_t lmc0:1; - uint64_t l2c:1; - uint64_t reserved_15_15:1; - uint64_t rad:1; - uint64_t usb:1; - uint64_t pow:1; - uint64_t tim:1; - uint64_t pko:1; - uint64_t ipd:1; - uint64_t reserved_8_8:1; - uint64_t zip:1; - uint64_t dfa:1; - uint64_t fpa:1; - uint64_t key:1; - uint64_t sli:1; - uint64_t gmx1:1; - uint64_t gmx0:1; - uint64_t mio:1; -#else - uint64_t mio:1; - uint64_t gmx0:1; - uint64_t gmx1:1; - uint64_t sli:1; - uint64_t key:1; - uint64_t fpa:1; - uint64_t dfa:1; - uint64_t zip:1; - uint64_t reserved_8_8:1; - uint64_t ipd:1; - uint64_t pko:1; - uint64_t tim:1; - uint64_t pow:1; - uint64_t usb:1; - uint64_t rad:1; - uint64_t reserved_15_15:1; - uint64_t l2c:1; - uint64_t lmc0:1; - uint64_t reserved_18_19:2; - uint64_t pip:1; - uint64_t reserved_21_21:1; - uint64_t asxpcs0:1; - uint64_t asxpcs1:1; - uint64_t reserved_24_24:1; - uint64_t pem0:1; - uint64_t pem1:1; - uint64_t reserved_27_27:1; - uint64_t agl:1; - uint64_t reserved_29_29:1; - uint64_t iob:1; - uint64_t reserved_31_31:1; - uint64_t srio0:1; - uint64_t srio1:1; - uint64_t reserved_34_39:6; - uint64_t dfm:1; - uint64_t dpi:1; - uint64_t ptp:1; - uint64_t 
-		uint64_t srio2:1;
-		uint64_t srio3:1;
-		uint64_t reserved_62_63:2;
-#endif
-	} s;
-	struct cvmx_ciu_block_int_cn61xx {
-#ifdef __BIG_ENDIAN_BITFIELD
-		uint64_t reserved_43_63:21;
-		uint64_t ptp:1;
-		uint64_t dpi:1;
-		uint64_t reserved_31_40:10;
-		uint64_t iob:1;
-		uint64_t reserved_29_29:1;
-		uint64_t agl:1;
-		uint64_t reserved_27_27:1;
-		uint64_t pem1:1;
-		uint64_t pem0:1;
-		uint64_t reserved_24_24:1;
-		uint64_t asxpcs1:1;
-		uint64_t asxpcs0:1;
-		uint64_t reserved_21_21:1;
-		uint64_t pip:1;
-		uint64_t reserved_18_19:2;
-		uint64_t lmc0:1;
-		uint64_t l2c:1;
-		uint64_t reserved_15_15:1;
-		uint64_t rad:1;
-		uint64_t usb:1;
-		uint64_t pow:1;
-		uint64_t tim:1;
-		uint64_t pko:1;
-		uint64_t ipd:1;
-		uint64_t reserved_8_8:1;
-		uint64_t zip:1;
-		uint64_t dfa:1;
-		uint64_t fpa:1;
-		uint64_t key:1;
-		uint64_t sli:1;
-		uint64_t gmx1:1;
-		uint64_t gmx0:1;
-		uint64_t mio:1;
-#else
-		uint64_t mio:1;
-		uint64_t gmx0:1;
-		uint64_t gmx1:1;
-		uint64_t sli:1;
-		uint64_t key:1;
-		uint64_t fpa:1;
-		uint64_t dfa:1;
-		uint64_t zip:1;
-		uint64_t reserved_8_8:1;
-		uint64_t ipd:1;
-		uint64_t pko:1;
-		uint64_t tim:1;
-		uint64_t pow:1;
-		uint64_t usb:1;
-		uint64_t rad:1;
-		uint64_t reserved_15_15:1;
-		uint64_t l2c:1;
-		uint64_t lmc0:1;
-		uint64_t reserved_18_19:2;
-		uint64_t pip:1;
-		uint64_t reserved_21_21:1;
-		uint64_t asxpcs0:1;
-		uint64_t asxpcs1:1;
-		uint64_t reserved_24_24:1;
-		uint64_t pem0:1;
-		uint64_t pem1:1;
-		uint64_t reserved_27_27:1;
-		uint64_t agl:1;
-		uint64_t reserved_29_29:1;
-		uint64_t iob:1;
-		uint64_t reserved_31_40:10;
-		uint64_t dpi:1;
-		uint64_t ptp:1;
-		uint64_t reserved_43_63:21;
-#endif
-	} cn61xx;
-	struct cvmx_ciu_block_int_cn63xx {
-#ifdef __BIG_ENDIAN_BITFIELD
-		uint64_t reserved_43_63:21;
-		uint64_t ptp:1;
-		uint64_t dpi:1;
-		uint64_t dfm:1;
-		uint64_t reserved_34_39:6;
-		uint64_t srio1:1;
-		uint64_t srio0:1;
-		uint64_t reserved_31_31:1;
-		uint64_t iob:1;
-		uint64_t reserved_29_29:1;
-		uint64_t agl:1;
-		uint64_t reserved_27_27:1;
-		uint64_t pem1:1;
-		uint64_t pem0:1;
-		uint64_t reserved_23_24:2;
-		uint64_t asxpcs0:1;
-		uint64_t reserved_21_21:1;
-		uint64_t pip:1;
-		uint64_t reserved_18_19:2;
-		uint64_t lmc0:1;
-		uint64_t l2c:1;
-		uint64_t reserved_15_15:1;
-		uint64_t rad:1;
-		uint64_t usb:1;
-		uint64_t pow:1;
-		uint64_t tim:1;
-		uint64_t pko:1;
-		uint64_t ipd:1;
-		uint64_t reserved_8_8:1;
-		uint64_t zip:1;
-		uint64_t dfa:1;
-		uint64_t fpa:1;
-		uint64_t key:1;
-		uint64_t sli:1;
-		uint64_t reserved_2_2:1;
-		uint64_t gmx0:1;
-		uint64_t mio:1;
-#else
-		uint64_t mio:1;
-		uint64_t gmx0:1;
-		uint64_t reserved_2_2:1;
-		uint64_t sli:1;
-		uint64_t key:1;
-		uint64_t fpa:1;
-		uint64_t dfa:1;
-		uint64_t zip:1;
-		uint64_t reserved_8_8:1;
-		uint64_t ipd:1;
-		uint64_t pko:1;
-		uint64_t tim:1;
-		uint64_t pow:1;
-		uint64_t usb:1;
-		uint64_t rad:1;
-		uint64_t reserved_15_15:1;
-		uint64_t l2c:1;
-		uint64_t lmc0:1;
-		uint64_t reserved_18_19:2;
-		uint64_t pip:1;
-		uint64_t reserved_21_21:1;
-		uint64_t asxpcs0:1;
-		uint64_t reserved_23_24:2;
-		uint64_t pem0:1;
-		uint64_t pem1:1;
-		uint64_t reserved_27_27:1;
-		uint64_t agl:1;
-		uint64_t reserved_29_29:1;
-		uint64_t iob:1;
-		uint64_t reserved_31_31:1;
-		uint64_t srio0:1;
-		uint64_t srio1:1;
-		uint64_t reserved_34_39:6;
-		uint64_t dfm:1;
-		uint64_t dpi:1;
-		uint64_t ptp:1;
-		uint64_t reserved_43_63:21;
-#endif
-	} cn63xx;
-	struct cvmx_ciu_block_int_cn63xx cn63xxp1;
-	struct cvmx_ciu_block_int_cn66xx {
-#ifdef __BIG_ENDIAN_BITFIELD
-		uint64_t reserved_62_63:2;
-		uint64_t srio3:1;
-		uint64_t srio2:1;
-		uint64_t reserved_43_59:17;
-		uint64_t ptp:1;
-		uint64_t dpi:1;
-		uint64_t dfm:1;
-		uint64_t reserved_33_39:7;
-		uint64_t srio0:1;
-		uint64_t reserved_31_31:1;
-		uint64_t iob:1;
-		uint64_t reserved_29_29:1;
-		uint64_t agl:1;
-		uint64_t reserved_27_27:1;
-		uint64_t pem1:1;
-		uint64_t pem0:1;
-		uint64_t reserved_24_24:1;
-		uint64_t asxpcs1:1;
-		uint64_t asxpcs0:1;
-		uint64_t reserved_21_21:1;
-		uint64_t pip:1;
-		uint64_t reserved_18_19:2;
-		uint64_t lmc0:1;
-		uint64_t l2c:1;
-		uint64_t reserved_15_15:1;
-		uint64_t rad:1;
-		uint64_t usb:1;
-		uint64_t pow:1;
-		uint64_t tim:1;
-		uint64_t pko:1;
-		uint64_t ipd:1;
-		uint64_t reserved_8_8:1;
-		uint64_t zip:1;
-		uint64_t dfa:1;
-		uint64_t fpa:1;
-		uint64_t key:1;
-		uint64_t sli:1;
-		uint64_t gmx1:1;
-		uint64_t gmx0:1;
-		uint64_t mio:1;
-#else
-		uint64_t mio:1;
-		uint64_t gmx0:1;
-		uint64_t gmx1:1;
-		uint64_t sli:1;
-		uint64_t key:1;
-		uint64_t fpa:1;
-		uint64_t dfa:1;
-		uint64_t zip:1;
-		uint64_t reserved_8_8:1;
-		uint64_t ipd:1;
-		uint64_t pko:1;
-		uint64_t tim:1;
-		uint64_t pow:1;
-		uint64_t usb:1;
-		uint64_t rad:1;
-		uint64_t reserved_15_15:1;
-		uint64_t l2c:1;
-		uint64_t lmc0:1;
-		uint64_t reserved_18_19:2;
-		uint64_t pip:1;
-		uint64_t reserved_21_21:1;
-		uint64_t asxpcs0:1;
-		uint64_t asxpcs1:1;
-		uint64_t reserved_24_24:1;
-		uint64_t pem0:1;
-		uint64_t pem1:1;
-		uint64_t reserved_27_27:1;
-		uint64_t agl:1;
-		uint64_t reserved_29_29:1;
-		uint64_t iob:1;
-		uint64_t reserved_31_31:1;
-		uint64_t srio0:1;
-		uint64_t reserved_33_39:7;
-		uint64_t dfm:1;
-		uint64_t dpi:1;
-		uint64_t ptp:1;
-		uint64_t reserved_43_59:17;
-		uint64_t srio2:1;
-		uint64_t srio3:1;
-		uint64_t reserved_62_63:2;
-#endif
-	} cn66xx;
-	struct cvmx_ciu_block_int_cnf71xx {
-#ifdef __BIG_ENDIAN_BITFIELD
-		uint64_t reserved_43_63:21;
-		uint64_t ptp:1;
-		uint64_t dpi:1;
-		uint64_t reserved_31_40:10;
-		uint64_t iob:1;
-		uint64_t reserved_27_29:3;
-		uint64_t pem1:1;
-		uint64_t pem0:1;
-		uint64_t reserved_23_24:2;
-		uint64_t asxpcs0:1;
-		uint64_t reserved_21_21:1;
-		uint64_t pip:1;
-		uint64_t reserved_18_19:2;
-		uint64_t lmc0:1;
-		uint64_t l2c:1;
-		uint64_t reserved_15_15:1;
-		uint64_t rad:1;
-		uint64_t usb:1;
-		uint64_t pow:1;
-		uint64_t tim:1;
-		uint64_t pko:1;
-		uint64_t ipd:1;
-		uint64_t reserved_6_8:3;
-		uint64_t fpa:1;
-		uint64_t key:1;
-		uint64_t sli:1;
-		uint64_t reserved_2_2:1;
-		uint64_t gmx0:1;
-		uint64_t mio:1;
-#else
-		uint64_t mio:1;
-		uint64_t gmx0:1;
-		uint64_t reserved_2_2:1;
-		uint64_t sli:1;
-		uint64_t key:1;
-		uint64_t fpa:1;
-		uint64_t reserved_6_8:3;
-		uint64_t ipd:1;
-		uint64_t pko:1;
-		uint64_t tim:1;
-		uint64_t pow:1;
-		uint64_t usb:1;
-		uint64_t rad:1;
-		uint64_t reserved_15_15:1;
-		uint64_t l2c:1;
-		uint64_t lmc0:1;
-		uint64_t reserved_18_19:2;
-		uint64_t pip:1;
-		uint64_t reserved_21_21:1;
-		uint64_t asxpcs0:1;
-		uint64_t reserved_23_24:2;
-		uint64_t pem0:1;
-		uint64_t pem1:1;
-		uint64_t reserved_27_29:3;
-		uint64_t iob:1;
-		uint64_t reserved_31_40:10;
-		uint64_t dpi:1;
-		uint64_t ptp:1;
-		uint64_t reserved_43_63:21;
-#endif
-	} cnf71xx;
-};
-
-union cvmx_ciu_dint {
-	uint64_t u64;
-	struct cvmx_ciu_dint_s {
-#ifdef __BIG_ENDIAN_BITFIELD
-		uint64_t reserved_32_63:32;
-		uint64_t dint:32;
-#else
-		uint64_t dint:32;
-		uint64_t reserved_32_63:32;
-#endif
-	} s;
-	struct cvmx_ciu_dint_cn30xx {
-#ifdef __BIG_ENDIAN_BITFIELD
-		uint64_t reserved_1_63:63;
-		uint64_t dint:1;
-#else
-		uint64_t dint:1;
-		uint64_t reserved_1_63:63;
-#endif
-	} cn30xx;
-	struct cvmx_ciu_dint_cn31xx {
-#ifdef __BIG_ENDIAN_BITFIELD
-		uint64_t reserved_2_63:62;
-		uint64_t dint:2;
-#else
-		uint64_t dint:2;
-		uint64_t
reserved_2_63:62; -#endif - } cn31xx; - struct cvmx_ciu_dint_cn38xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_16_63:48; - uint64_t dint:16; -#else - uint64_t dint:16; - uint64_t reserved_16_63:48; -#endif - } cn38xx; - struct cvmx_ciu_dint_cn38xx cn38xxp2; - struct cvmx_ciu_dint_cn31xx cn50xx; - struct cvmx_ciu_dint_cn52xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_4_63:60; - uint64_t dint:4; -#else - uint64_t dint:4; - uint64_t reserved_4_63:60; -#endif - } cn52xx; - struct cvmx_ciu_dint_cn52xx cn52xxp1; - struct cvmx_ciu_dint_cn56xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_12_63:52; - uint64_t dint:12; -#else - uint64_t dint:12; - uint64_t reserved_12_63:52; -#endif - } cn56xx; - struct cvmx_ciu_dint_cn56xx cn56xxp1; - struct cvmx_ciu_dint_cn38xx cn58xx; - struct cvmx_ciu_dint_cn38xx cn58xxp1; - struct cvmx_ciu_dint_cn52xx cn61xx; - struct cvmx_ciu_dint_cn63xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_6_63:58; - uint64_t dint:6; -#else - uint64_t dint:6; - uint64_t reserved_6_63:58; -#endif - } cn63xx; - struct cvmx_ciu_dint_cn63xx cn63xxp1; - struct cvmx_ciu_dint_cn66xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_10_63:54; - uint64_t dint:10; -#else - uint64_t dint:10; - uint64_t reserved_10_63:54; -#endif - } cn66xx; - struct cvmx_ciu_dint_s cn68xx; - struct cvmx_ciu_dint_s cn68xxp1; - struct cvmx_ciu_dint_cn52xx cnf71xx; -}; - -union cvmx_ciu_en2_iox_int { - uint64_t u64; - struct cvmx_ciu_en2_iox_int_s { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_15_63:49; - uint64_t endor:2; - uint64_t eoi:1; - uint64_t reserved_10_11:2; - uint64_t timer:6; - uint64_t reserved_0_3:4; -#else - uint64_t reserved_0_3:4; - uint64_t timer:6; - uint64_t reserved_10_11:2; - uint64_t eoi:1; - uint64_t endor:2; - uint64_t reserved_15_63:49; -#endif - } s; - struct cvmx_ciu_en2_iox_int_cn61xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_10_63:54; - uint64_t timer:6; - uint64_t reserved_0_3:4; -#else - uint64_t reserved_0_3:4; - uint64_t timer:6; - uint64_t reserved_10_63:54; -#endif - } cn61xx; - struct cvmx_ciu_en2_iox_int_cn61xx cn66xx; - struct cvmx_ciu_en2_iox_int_s cnf71xx; -}; - -union cvmx_ciu_en2_iox_int_w1c { - uint64_t u64; - struct cvmx_ciu_en2_iox_int_w1c_s { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_15_63:49; - uint64_t endor:2; - uint64_t eoi:1; - uint64_t reserved_10_11:2; - uint64_t timer:6; - uint64_t reserved_0_3:4; -#else - uint64_t reserved_0_3:4; - uint64_t timer:6; - uint64_t reserved_10_11:2; - uint64_t eoi:1; - uint64_t endor:2; - uint64_t reserved_15_63:49; -#endif - } s; - struct cvmx_ciu_en2_iox_int_w1c_cn61xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_10_63:54; - uint64_t timer:6; - uint64_t reserved_0_3:4; -#else - uint64_t reserved_0_3:4; - uint64_t timer:6; - uint64_t reserved_10_63:54; -#endif - } cn61xx; - struct cvmx_ciu_en2_iox_int_w1c_cn61xx cn66xx; - struct cvmx_ciu_en2_iox_int_w1c_s cnf71xx; -}; - -union cvmx_ciu_en2_iox_int_w1s { - uint64_t u64; - struct cvmx_ciu_en2_iox_int_w1s_s { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_15_63:49; - uint64_t endor:2; - uint64_t eoi:1; - uint64_t reserved_10_11:2; - uint64_t timer:6; - uint64_t reserved_0_3:4; -#else - uint64_t reserved_0_3:4; - uint64_t timer:6; - uint64_t reserved_10_11:2; - uint64_t eoi:1; - uint64_t endor:2; - uint64_t reserved_15_63:49; -#endif - } s; - struct cvmx_ciu_en2_iox_int_w1s_cn61xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_10_63:54; - uint64_t timer:6; - uint64_t reserved_0_3:4; -#else - uint64_t 
reserved_0_3:4; - uint64_t timer:6; - uint64_t reserved_10_63:54; -#endif - } cn61xx; - struct cvmx_ciu_en2_iox_int_w1s_cn61xx cn66xx; - struct cvmx_ciu_en2_iox_int_w1s_s cnf71xx; -}; - -union cvmx_ciu_en2_ppx_ip2 { - uint64_t u64; - struct cvmx_ciu_en2_ppx_ip2_s { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_15_63:49; - uint64_t endor:2; - uint64_t eoi:1; - uint64_t reserved_10_11:2; - uint64_t timer:6; - uint64_t reserved_0_3:4; -#else - uint64_t reserved_0_3:4; - uint64_t timer:6; - uint64_t reserved_10_11:2; - uint64_t eoi:1; - uint64_t endor:2; - uint64_t reserved_15_63:49; -#endif - } s; - struct cvmx_ciu_en2_ppx_ip2_cn61xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_10_63:54; - uint64_t timer:6; - uint64_t reserved_0_3:4; -#else - uint64_t reserved_0_3:4; - uint64_t timer:6; - uint64_t reserved_10_63:54; -#endif - } cn61xx; - struct cvmx_ciu_en2_ppx_ip2_cn61xx cn66xx; - struct cvmx_ciu_en2_ppx_ip2_s cnf71xx; -}; - -union cvmx_ciu_en2_ppx_ip2_w1c { - uint64_t u64; - struct cvmx_ciu_en2_ppx_ip2_w1c_s { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_15_63:49; - uint64_t endor:2; - uint64_t eoi:1; - uint64_t reserved_10_11:2; - uint64_t timer:6; - uint64_t reserved_0_3:4; -#else - uint64_t reserved_0_3:4; - uint64_t timer:6; - uint64_t reserved_10_11:2; - uint64_t eoi:1; - uint64_t endor:2; - uint64_t reserved_15_63:49; -#endif - } s; - struct cvmx_ciu_en2_ppx_ip2_w1c_cn61xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_10_63:54; - uint64_t timer:6; - uint64_t reserved_0_3:4; -#else - uint64_t reserved_0_3:4; - uint64_t timer:6; - uint64_t reserved_10_63:54; -#endif - } cn61xx; - struct cvmx_ciu_en2_ppx_ip2_w1c_cn61xx cn66xx; - struct cvmx_ciu_en2_ppx_ip2_w1c_s cnf71xx; -}; - -union cvmx_ciu_en2_ppx_ip2_w1s { - uint64_t u64; - struct cvmx_ciu_en2_ppx_ip2_w1s_s { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_15_63:49; - uint64_t endor:2; - uint64_t eoi:1; - uint64_t reserved_10_11:2; - uint64_t timer:6; - uint64_t reserved_0_3:4; -#else - uint64_t reserved_0_3:4; - uint64_t timer:6; - uint64_t reserved_10_11:2; - uint64_t eoi:1; - uint64_t endor:2; - uint64_t reserved_15_63:49; -#endif - } s; - struct cvmx_ciu_en2_ppx_ip2_w1s_cn61xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_10_63:54; - uint64_t timer:6; - uint64_t reserved_0_3:4; -#else - uint64_t reserved_0_3:4; - uint64_t timer:6; - uint64_t reserved_10_63:54; -#endif - } cn61xx; - struct cvmx_ciu_en2_ppx_ip2_w1s_cn61xx cn66xx; - struct cvmx_ciu_en2_ppx_ip2_w1s_s cnf71xx; -}; - -union cvmx_ciu_en2_ppx_ip3 { - uint64_t u64; - struct cvmx_ciu_en2_ppx_ip3_s { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_15_63:49; - uint64_t endor:2; - uint64_t eoi:1; - uint64_t reserved_10_11:2; - uint64_t timer:6; - uint64_t reserved_0_3:4; -#else - uint64_t reserved_0_3:4; - uint64_t timer:6; - uint64_t reserved_10_11:2; - uint64_t eoi:1; - uint64_t endor:2; - uint64_t reserved_15_63:49; -#endif - } s; - struct cvmx_ciu_en2_ppx_ip3_cn61xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_10_63:54; - uint64_t timer:6; - uint64_t reserved_0_3:4; -#else - uint64_t reserved_0_3:4; - uint64_t timer:6; - uint64_t reserved_10_63:54; -#endif - } cn61xx; - struct cvmx_ciu_en2_ppx_ip3_cn61xx cn66xx; - struct cvmx_ciu_en2_ppx_ip3_s cnf71xx; -}; - -union cvmx_ciu_en2_ppx_ip3_w1c { - uint64_t u64; - struct cvmx_ciu_en2_ppx_ip3_w1c_s { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_15_63:49; - uint64_t endor:2; - uint64_t eoi:1; - uint64_t reserved_10_11:2; - uint64_t timer:6; - uint64_t reserved_0_3:4; -#else - uint64_t 
reserved_0_3:4; - uint64_t timer:6; - uint64_t reserved_10_11:2; - uint64_t eoi:1; - uint64_t endor:2; - uint64_t reserved_15_63:49; -#endif - } s; - struct cvmx_ciu_en2_ppx_ip3_w1c_cn61xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_10_63:54; - uint64_t timer:6; - uint64_t reserved_0_3:4; -#else - uint64_t reserved_0_3:4; - uint64_t timer:6; - uint64_t reserved_10_63:54; -#endif - } cn61xx; - struct cvmx_ciu_en2_ppx_ip3_w1c_cn61xx cn66xx; - struct cvmx_ciu_en2_ppx_ip3_w1c_s cnf71xx; -}; - -union cvmx_ciu_en2_ppx_ip3_w1s { - uint64_t u64; - struct cvmx_ciu_en2_ppx_ip3_w1s_s { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_15_63:49; - uint64_t endor:2; - uint64_t eoi:1; - uint64_t reserved_10_11:2; - uint64_t timer:6; - uint64_t reserved_0_3:4; -#else - uint64_t reserved_0_3:4; - uint64_t timer:6; - uint64_t reserved_10_11:2; - uint64_t eoi:1; - uint64_t endor:2; - uint64_t reserved_15_63:49; -#endif - } s; - struct cvmx_ciu_en2_ppx_ip3_w1s_cn61xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_10_63:54; - uint64_t timer:6; - uint64_t reserved_0_3:4; -#else - uint64_t reserved_0_3:4; - uint64_t timer:6; - uint64_t reserved_10_63:54; -#endif - } cn61xx; - struct cvmx_ciu_en2_ppx_ip3_w1s_cn61xx cn66xx; - struct cvmx_ciu_en2_ppx_ip3_w1s_s cnf71xx; -}; - -union cvmx_ciu_en2_ppx_ip4 { - uint64_t u64; - struct cvmx_ciu_en2_ppx_ip4_s { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_15_63:49; - uint64_t endor:2; - uint64_t eoi:1; - uint64_t reserved_10_11:2; - uint64_t timer:6; - uint64_t reserved_0_3:4; -#else - uint64_t reserved_0_3:4; - uint64_t timer:6; - uint64_t reserved_10_11:2; - uint64_t eoi:1; - uint64_t endor:2; - uint64_t reserved_15_63:49; -#endif - } s; - struct cvmx_ciu_en2_ppx_ip4_cn61xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_10_63:54; - uint64_t timer:6; - uint64_t reserved_0_3:4; -#else - uint64_t reserved_0_3:4; - uint64_t timer:6; - uint64_t reserved_10_63:54; -#endif - } cn61xx; - struct cvmx_ciu_en2_ppx_ip4_cn61xx cn66xx; - struct cvmx_ciu_en2_ppx_ip4_s cnf71xx; -}; - -union cvmx_ciu_en2_ppx_ip4_w1c { - uint64_t u64; - struct cvmx_ciu_en2_ppx_ip4_w1c_s { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_15_63:49; - uint64_t endor:2; - uint64_t eoi:1; - uint64_t reserved_10_11:2; - uint64_t timer:6; - uint64_t reserved_0_3:4; -#else - uint64_t reserved_0_3:4; - uint64_t timer:6; - uint64_t reserved_10_11:2; - uint64_t eoi:1; - uint64_t endor:2; - uint64_t reserved_15_63:49; -#endif - } s; - struct cvmx_ciu_en2_ppx_ip4_w1c_cn61xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_10_63:54; - uint64_t timer:6; - uint64_t reserved_0_3:4; -#else - uint64_t reserved_0_3:4; - uint64_t timer:6; - uint64_t reserved_10_63:54; -#endif - } cn61xx; - struct cvmx_ciu_en2_ppx_ip4_w1c_cn61xx cn66xx; - struct cvmx_ciu_en2_ppx_ip4_w1c_s cnf71xx; -}; - -union cvmx_ciu_en2_ppx_ip4_w1s { - uint64_t u64; - struct cvmx_ciu_en2_ppx_ip4_w1s_s { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_15_63:49; - uint64_t endor:2; - uint64_t eoi:1; - uint64_t reserved_10_11:2; - uint64_t timer:6; - uint64_t reserved_0_3:4; -#else - uint64_t reserved_0_3:4; - uint64_t timer:6; - uint64_t reserved_10_11:2; - uint64_t eoi:1; - uint64_t endor:2; - uint64_t reserved_15_63:49; -#endif - } s; - struct cvmx_ciu_en2_ppx_ip4_w1s_cn61xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_10_63:54; - uint64_t timer:6; - uint64_t reserved_0_3:4; -#else - uint64_t reserved_0_3:4; - uint64_t timer:6; - uint64_t reserved_10_63:54; -#endif - } cn61xx; - struct cvmx_ciu_en2_ppx_ip4_w1s_cn61xx 
cn66xx; - struct cvmx_ciu_en2_ppx_ip4_w1s_s cnf71xx; -}; - -union cvmx_ciu_fuse { - uint64_t u64; - struct cvmx_ciu_fuse_s { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_32_63:32; - uint64_t fuse:32; -#else - uint64_t fuse:32; - uint64_t reserved_32_63:32; -#endif - } s; - struct cvmx_ciu_fuse_cn30xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_1_63:63; - uint64_t fuse:1; -#else - uint64_t fuse:1; - uint64_t reserved_1_63:63; -#endif - } cn30xx; - struct cvmx_ciu_fuse_cn31xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_2_63:62; - uint64_t fuse:2; -#else - uint64_t fuse:2; - uint64_t reserved_2_63:62; -#endif - } cn31xx; - struct cvmx_ciu_fuse_cn38xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_16_63:48; - uint64_t fuse:16; -#else - uint64_t fuse:16; - uint64_t reserved_16_63:48; -#endif - } cn38xx; - struct cvmx_ciu_fuse_cn38xx cn38xxp2; - struct cvmx_ciu_fuse_cn31xx cn50xx; - struct cvmx_ciu_fuse_cn52xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_4_63:60; - uint64_t fuse:4; -#else - uint64_t fuse:4; - uint64_t reserved_4_63:60; -#endif - } cn52xx; - struct cvmx_ciu_fuse_cn52xx cn52xxp1; - struct cvmx_ciu_fuse_cn56xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_12_63:52; - uint64_t fuse:12; -#else - uint64_t fuse:12; - uint64_t reserved_12_63:52; -#endif - } cn56xx; - struct cvmx_ciu_fuse_cn56xx cn56xxp1; - struct cvmx_ciu_fuse_cn38xx cn58xx; - struct cvmx_ciu_fuse_cn38xx cn58xxp1; - struct cvmx_ciu_fuse_cn52xx cn61xx; - struct cvmx_ciu_fuse_cn63xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_6_63:58; - uint64_t fuse:6; -#else - uint64_t fuse:6; - uint64_t reserved_6_63:58; -#endif - } cn63xx; - struct cvmx_ciu_fuse_cn63xx cn63xxp1; - struct cvmx_ciu_fuse_cn66xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_10_63:54; - uint64_t fuse:10; -#else - uint64_t fuse:10; - uint64_t reserved_10_63:54; -#endif - } cn66xx; - struct cvmx_ciu_fuse_s cn68xx; - struct cvmx_ciu_fuse_s cn68xxp1; - struct cvmx_ciu_fuse_cn52xx cnf71xx; -}; - -union cvmx_ciu_gstop { - uint64_t u64; - struct cvmx_ciu_gstop_s { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_1_63:63; - uint64_t gstop:1; -#else - uint64_t gstop:1; - uint64_t reserved_1_63:63; -#endif - } s; - struct cvmx_ciu_gstop_s cn30xx; - struct cvmx_ciu_gstop_s cn31xx; - struct cvmx_ciu_gstop_s cn38xx; - struct cvmx_ciu_gstop_s cn38xxp2; - struct cvmx_ciu_gstop_s cn50xx; - struct cvmx_ciu_gstop_s cn52xx; - struct cvmx_ciu_gstop_s cn52xxp1; - struct cvmx_ciu_gstop_s cn56xx; - struct cvmx_ciu_gstop_s cn56xxp1; - struct cvmx_ciu_gstop_s cn58xx; - struct cvmx_ciu_gstop_s cn58xxp1; - struct cvmx_ciu_gstop_s cn61xx; - struct cvmx_ciu_gstop_s cn63xx; - struct cvmx_ciu_gstop_s cn63xxp1; - struct cvmx_ciu_gstop_s cn66xx; - struct cvmx_ciu_gstop_s cn68xx; - struct cvmx_ciu_gstop_s cn68xxp1; - struct cvmx_ciu_gstop_s cnf71xx; -}; - -union cvmx_ciu_intx_en0 { - uint64_t u64; - struct cvmx_ciu_intx_en0_s { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t bootdma:1; - uint64_t mii:1; - uint64_t ipdppthr:1; - uint64_t powiq:1; - uint64_t twsi2:1; - uint64_t mpi:1; - uint64_t pcm:1; - uint64_t usb:1; - uint64_t timer:4; - uint64_t key_zero:1; - uint64_t ipd_drp:1; - uint64_t gmx_drp:2; - uint64_t trace:1; - uint64_t rml:1; - uint64_t twsi:1; - uint64_t reserved_44_44:1; - uint64_t pci_msi:4; - uint64_t pci_int:4; - uint64_t uart:2; - uint64_t mbox:2; - uint64_t gpio:16; - uint64_t workq:16; -#else - uint64_t workq:16; - uint64_t gpio:16; - uint64_t mbox:2; - uint64_t uart:2; - uint64_t pci_int:4; - uint64_t pci_msi:4; - 
uint64_t reserved_44_44:1; - uint64_t twsi:1; - uint64_t rml:1; - uint64_t trace:1; - uint64_t gmx_drp:2; - uint64_t ipd_drp:1; - uint64_t key_zero:1; - uint64_t timer:4; - uint64_t usb:1; - uint64_t pcm:1; - uint64_t mpi:1; - uint64_t twsi2:1; - uint64_t powiq:1; - uint64_t ipdppthr:1; - uint64_t mii:1; - uint64_t bootdma:1; -#endif - } s; - struct cvmx_ciu_intx_en0_cn30xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_59_63:5; - uint64_t mpi:1; - uint64_t pcm:1; - uint64_t usb:1; - uint64_t timer:4; - uint64_t reserved_51_51:1; - uint64_t ipd_drp:1; - uint64_t reserved_49_49:1; - uint64_t gmx_drp:1; - uint64_t reserved_47_47:1; - uint64_t rml:1; - uint64_t twsi:1; - uint64_t reserved_44_44:1; - uint64_t pci_msi:4; - uint64_t pci_int:4; - uint64_t uart:2; - uint64_t mbox:2; - uint64_t gpio:16; - uint64_t workq:16; -#else - uint64_t workq:16; - uint64_t gpio:16; - uint64_t mbox:2; - uint64_t uart:2; - uint64_t pci_int:4; - uint64_t pci_msi:4; - uint64_t reserved_44_44:1; - uint64_t twsi:1; - uint64_t rml:1; - uint64_t reserved_47_47:1; - uint64_t gmx_drp:1; - uint64_t reserved_49_49:1; - uint64_t ipd_drp:1; - uint64_t reserved_51_51:1; - uint64_t timer:4; - uint64_t usb:1; - uint64_t pcm:1; - uint64_t mpi:1; - uint64_t reserved_59_63:5; -#endif - } cn30xx; - struct cvmx_ciu_intx_en0_cn31xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_59_63:5; - uint64_t mpi:1; - uint64_t pcm:1; - uint64_t usb:1; - uint64_t timer:4; - uint64_t reserved_51_51:1; - uint64_t ipd_drp:1; - uint64_t reserved_49_49:1; - uint64_t gmx_drp:1; - uint64_t trace:1; - uint64_t rml:1; - uint64_t twsi:1; - uint64_t reserved_44_44:1; - uint64_t pci_msi:4; - uint64_t pci_int:4; - uint64_t uart:2; - uint64_t mbox:2; - uint64_t gpio:16; - uint64_t workq:16; -#else - uint64_t workq:16; - uint64_t gpio:16; - uint64_t mbox:2; - uint64_t uart:2; - uint64_t pci_int:4; - uint64_t pci_msi:4; - uint64_t reserved_44_44:1; - uint64_t twsi:1; - uint64_t rml:1; - uint64_t trace:1; - uint64_t gmx_drp:1; - uint64_t reserved_49_49:1; - uint64_t ipd_drp:1; - uint64_t reserved_51_51:1; - uint64_t timer:4; - uint64_t usb:1; - uint64_t pcm:1; - uint64_t mpi:1; - uint64_t reserved_59_63:5; -#endif - } cn31xx; - struct cvmx_ciu_intx_en0_cn38xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_56_63:8; - uint64_t timer:4; - uint64_t key_zero:1; - uint64_t ipd_drp:1; - uint64_t gmx_drp:2; - uint64_t trace:1; - uint64_t rml:1; - uint64_t twsi:1; - uint64_t reserved_44_44:1; - uint64_t pci_msi:4; - uint64_t pci_int:4; - uint64_t uart:2; - uint64_t mbox:2; - uint64_t gpio:16; - uint64_t workq:16; -#else - uint64_t workq:16; - uint64_t gpio:16; - uint64_t mbox:2; - uint64_t uart:2; - uint64_t pci_int:4; - uint64_t pci_msi:4; - uint64_t reserved_44_44:1; - uint64_t twsi:1; - uint64_t rml:1; - uint64_t trace:1; - uint64_t gmx_drp:2; - uint64_t ipd_drp:1; - uint64_t key_zero:1; - uint64_t timer:4; - uint64_t reserved_56_63:8; -#endif - } cn38xx; - struct cvmx_ciu_intx_en0_cn38xx cn38xxp2; - struct cvmx_ciu_intx_en0_cn30xx cn50xx; - struct cvmx_ciu_intx_en0_cn52xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t bootdma:1; - uint64_t mii:1; - uint64_t ipdppthr:1; - uint64_t powiq:1; - uint64_t twsi2:1; - uint64_t reserved_57_58:2; - uint64_t usb:1; - uint64_t timer:4; - uint64_t reserved_51_51:1; - uint64_t ipd_drp:1; - uint64_t reserved_49_49:1; - uint64_t gmx_drp:1; - uint64_t trace:1; - uint64_t rml:1; - uint64_t twsi:1; - uint64_t reserved_44_44:1; - uint64_t pci_msi:4; - uint64_t pci_int:4; - uint64_t uart:2; - uint64_t mbox:2; - uint64_t 
gpio:16; - uint64_t workq:16; -#else - uint64_t workq:16; - uint64_t gpio:16; - uint64_t mbox:2; - uint64_t uart:2; - uint64_t pci_int:4; - uint64_t pci_msi:4; - uint64_t reserved_44_44:1; - uint64_t twsi:1; - uint64_t rml:1; - uint64_t trace:1; - uint64_t gmx_drp:1; - uint64_t reserved_49_49:1; - uint64_t ipd_drp:1; - uint64_t reserved_51_51:1; - uint64_t timer:4; - uint64_t usb:1; - uint64_t reserved_57_58:2; - uint64_t twsi2:1; - uint64_t powiq:1; - uint64_t ipdppthr:1; - uint64_t mii:1; - uint64_t bootdma:1; -#endif - } cn52xx; - struct cvmx_ciu_intx_en0_cn52xx cn52xxp1; - struct cvmx_ciu_intx_en0_cn56xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t bootdma:1; - uint64_t mii:1; - uint64_t ipdppthr:1; - uint64_t powiq:1; - uint64_t twsi2:1; - uint64_t reserved_57_58:2; - uint64_t usb:1; - uint64_t timer:4; - uint64_t key_zero:1; - uint64_t ipd_drp:1; - uint64_t gmx_drp:2; - uint64_t trace:1; - uint64_t rml:1; - uint64_t twsi:1; - uint64_t reserved_44_44:1; - uint64_t pci_msi:4; - uint64_t pci_int:4; - uint64_t uart:2; - uint64_t mbox:2; - uint64_t gpio:16; - uint64_t workq:16; -#else - uint64_t workq:16; - uint64_t gpio:16; - uint64_t mbox:2; - uint64_t uart:2; - uint64_t pci_int:4; - uint64_t pci_msi:4; - uint64_t reserved_44_44:1; - uint64_t twsi:1; - uint64_t rml:1; - uint64_t trace:1; - uint64_t gmx_drp:2; - uint64_t ipd_drp:1; - uint64_t key_zero:1; - uint64_t timer:4; - uint64_t usb:1; - uint64_t reserved_57_58:2; - uint64_t twsi2:1; - uint64_t powiq:1; - uint64_t ipdppthr:1; - uint64_t mii:1; - uint64_t bootdma:1; -#endif - } cn56xx; - struct cvmx_ciu_intx_en0_cn56xx cn56xxp1; - struct cvmx_ciu_intx_en0_cn38xx cn58xx; - struct cvmx_ciu_intx_en0_cn38xx cn58xxp1; - struct cvmx_ciu_intx_en0_cn61xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t bootdma:1; - uint64_t mii:1; - uint64_t ipdppthr:1; - uint64_t powiq:1; - uint64_t twsi2:1; - uint64_t mpi:1; - uint64_t pcm:1; - uint64_t usb:1; - uint64_t timer:4; - uint64_t reserved_51_51:1; - uint64_t ipd_drp:1; - uint64_t gmx_drp:2; - uint64_t trace:1; - uint64_t rml:1; - uint64_t twsi:1; - uint64_t reserved_44_44:1; - uint64_t pci_msi:4; - uint64_t pci_int:4; - uint64_t uart:2; - uint64_t mbox:2; - uint64_t gpio:16; - uint64_t workq:16; -#else - uint64_t workq:16; - uint64_t gpio:16; - uint64_t mbox:2; - uint64_t uart:2; - uint64_t pci_int:4; - uint64_t pci_msi:4; - uint64_t reserved_44_44:1; - uint64_t twsi:1; - uint64_t rml:1; - uint64_t trace:1; - uint64_t gmx_drp:2; - uint64_t ipd_drp:1; - uint64_t reserved_51_51:1; - uint64_t timer:4; - uint64_t usb:1; - uint64_t pcm:1; - uint64_t mpi:1; - uint64_t twsi2:1; - uint64_t powiq:1; - uint64_t ipdppthr:1; - uint64_t mii:1; - uint64_t bootdma:1; -#endif - } cn61xx; - struct cvmx_ciu_intx_en0_cn52xx cn63xx; - struct cvmx_ciu_intx_en0_cn52xx cn63xxp1; - struct cvmx_ciu_intx_en0_cn66xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t bootdma:1; - uint64_t mii:1; - uint64_t ipdppthr:1; - uint64_t powiq:1; - uint64_t twsi2:1; - uint64_t mpi:1; - uint64_t reserved_57_57:1; - uint64_t usb:1; - uint64_t timer:4; - uint64_t reserved_51_51:1; - uint64_t ipd_drp:1; - uint64_t gmx_drp:2; - uint64_t trace:1; - uint64_t rml:1; - uint64_t twsi:1; - uint64_t reserved_44_44:1; - uint64_t pci_msi:4; - uint64_t pci_int:4; - uint64_t uart:2; - uint64_t mbox:2; - uint64_t gpio:16; - uint64_t workq:16; -#else - uint64_t workq:16; - uint64_t gpio:16; - uint64_t mbox:2; - uint64_t uart:2; - uint64_t pci_int:4; - uint64_t pci_msi:4; - uint64_t reserved_44_44:1; - uint64_t twsi:1; - uint64_t rml:1; - uint64_t trace:1; - 
uint64_t gmx_drp:2; - uint64_t ipd_drp:1; - uint64_t reserved_51_51:1; - uint64_t timer:4; - uint64_t usb:1; - uint64_t reserved_57_57:1; - uint64_t mpi:1; - uint64_t twsi2:1; - uint64_t powiq:1; - uint64_t ipdppthr:1; - uint64_t mii:1; - uint64_t bootdma:1; -#endif - } cn66xx; - struct cvmx_ciu_intx_en0_cnf71xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t bootdma:1; - uint64_t reserved_62_62:1; - uint64_t ipdppthr:1; - uint64_t powiq:1; - uint64_t twsi2:1; - uint64_t mpi:1; - uint64_t pcm:1; - uint64_t usb:1; - uint64_t timer:4; - uint64_t reserved_51_51:1; - uint64_t ipd_drp:1; - uint64_t reserved_49_49:1; - uint64_t gmx_drp:1; - uint64_t trace:1; - uint64_t rml:1; - uint64_t twsi:1; - uint64_t reserved_44_44:1; - uint64_t pci_msi:4; - uint64_t pci_int:4; - uint64_t uart:2; - uint64_t mbox:2; - uint64_t gpio:16; - uint64_t workq:16; -#else - uint64_t workq:16; - uint64_t gpio:16; - uint64_t mbox:2; - uint64_t uart:2; - uint64_t pci_int:4; - uint64_t pci_msi:4; - uint64_t reserved_44_44:1; - uint64_t twsi:1; - uint64_t rml:1; - uint64_t trace:1; - uint64_t gmx_drp:1; - uint64_t reserved_49_49:1; - uint64_t ipd_drp:1; - uint64_t reserved_51_51:1; - uint64_t timer:4; - uint64_t usb:1; - uint64_t pcm:1; - uint64_t mpi:1; - uint64_t twsi2:1; - uint64_t powiq:1; - uint64_t ipdppthr:1; - uint64_t reserved_62_62:1; - uint64_t bootdma:1; -#endif - } cnf71xx; -}; - -union cvmx_ciu_intx_en0_w1c { - uint64_t u64; - struct cvmx_ciu_intx_en0_w1c_s { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t bootdma:1; - uint64_t mii:1; - uint64_t ipdppthr:1; - uint64_t powiq:1; - uint64_t twsi2:1; - uint64_t mpi:1; - uint64_t pcm:1; - uint64_t usb:1; - uint64_t timer:4; - uint64_t key_zero:1; - uint64_t ipd_drp:1; - uint64_t gmx_drp:2; - uint64_t trace:1; - uint64_t rml:1; - uint64_t twsi:1; - uint64_t reserved_44_44:1; - uint64_t pci_msi:4; - uint64_t pci_int:4; - uint64_t uart:2; - uint64_t mbox:2; - uint64_t gpio:16; - uint64_t workq:16; -#else - uint64_t workq:16; - uint64_t gpio:16; - uint64_t mbox:2; - uint64_t uart:2; - uint64_t pci_int:4; - uint64_t pci_msi:4; - uint64_t reserved_44_44:1; - uint64_t twsi:1; - uint64_t rml:1; - uint64_t trace:1; - uint64_t gmx_drp:2; - uint64_t ipd_drp:1; - uint64_t key_zero:1; - uint64_t timer:4; - uint64_t usb:1; - uint64_t pcm:1; - uint64_t mpi:1; - uint64_t twsi2:1; - uint64_t powiq:1; - uint64_t ipdppthr:1; - uint64_t mii:1; - uint64_t bootdma:1; -#endif - } s; - struct cvmx_ciu_intx_en0_w1c_cn52xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t bootdma:1; - uint64_t mii:1; - uint64_t ipdppthr:1; - uint64_t powiq:1; - uint64_t twsi2:1; - uint64_t reserved_57_58:2; - uint64_t usb:1; - uint64_t timer:4; - uint64_t reserved_51_51:1; - uint64_t ipd_drp:1; - uint64_t reserved_49_49:1; - uint64_t gmx_drp:1; - uint64_t trace:1; - uint64_t rml:1; - uint64_t twsi:1; - uint64_t reserved_44_44:1; - uint64_t pci_msi:4; - uint64_t pci_int:4; - uint64_t uart:2; - uint64_t mbox:2; - uint64_t gpio:16; - uint64_t workq:16; -#else - uint64_t workq:16; - uint64_t gpio:16; - uint64_t mbox:2; - uint64_t uart:2; - uint64_t pci_int:4; - uint64_t pci_msi:4; - uint64_t reserved_44_44:1; - uint64_t twsi:1; - uint64_t rml:1; - uint64_t trace:1; - uint64_t gmx_drp:1; - uint64_t reserved_49_49:1; - uint64_t ipd_drp:1; - uint64_t reserved_51_51:1; - uint64_t timer:4; - uint64_t usb:1; - uint64_t reserved_57_58:2; - uint64_t twsi2:1; - uint64_t powiq:1; - uint64_t ipdppthr:1; - uint64_t mii:1; - uint64_t bootdma:1; -#endif - } cn52xx; - struct cvmx_ciu_intx_en0_w1c_cn56xx { -#ifdef __BIG_ENDIAN_BITFIELD - 
uint64_t bootdma:1; - uint64_t mii:1; - uint64_t ipdppthr:1; - uint64_t powiq:1; - uint64_t twsi2:1; - uint64_t reserved_57_58:2; - uint64_t usb:1; - uint64_t timer:4; - uint64_t key_zero:1; - uint64_t ipd_drp:1; - uint64_t gmx_drp:2; - uint64_t trace:1; - uint64_t rml:1; - uint64_t twsi:1; - uint64_t reserved_44_44:1; - uint64_t pci_msi:4; - uint64_t pci_int:4; - uint64_t uart:2; - uint64_t mbox:2; - uint64_t gpio:16; - uint64_t workq:16; -#else - uint64_t workq:16; - uint64_t gpio:16; - uint64_t mbox:2; - uint64_t uart:2; - uint64_t pci_int:4; - uint64_t pci_msi:4; - uint64_t reserved_44_44:1; - uint64_t twsi:1; - uint64_t rml:1; - uint64_t trace:1; - uint64_t gmx_drp:2; - uint64_t ipd_drp:1; - uint64_t key_zero:1; - uint64_t timer:4; - uint64_t usb:1; - uint64_t reserved_57_58:2; - uint64_t twsi2:1; - uint64_t powiq:1; - uint64_t ipdppthr:1; - uint64_t mii:1; - uint64_t bootdma:1; -#endif - } cn56xx; - struct cvmx_ciu_intx_en0_w1c_cn58xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_56_63:8; - uint64_t timer:4; - uint64_t key_zero:1; - uint64_t ipd_drp:1; - uint64_t gmx_drp:2; - uint64_t trace:1; - uint64_t rml:1; - uint64_t twsi:1; - uint64_t reserved_44_44:1; - uint64_t pci_msi:4; - uint64_t pci_int:4; - uint64_t uart:2; - uint64_t mbox:2; - uint64_t gpio:16; - uint64_t workq:16; -#else - uint64_t workq:16; - uint64_t gpio:16; - uint64_t mbox:2; - uint64_t uart:2; - uint64_t pci_int:4; - uint64_t pci_msi:4; - uint64_t reserved_44_44:1; - uint64_t twsi:1; - uint64_t rml:1; - uint64_t trace:1; - uint64_t gmx_drp:2; - uint64_t ipd_drp:1; - uint64_t key_zero:1; - uint64_t timer:4; - uint64_t reserved_56_63:8; -#endif - } cn58xx; - struct cvmx_ciu_intx_en0_w1c_cn61xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t bootdma:1; - uint64_t mii:1; - uint64_t ipdppthr:1; - uint64_t powiq:1; - uint64_t twsi2:1; - uint64_t mpi:1; - uint64_t pcm:1; - uint64_t usb:1; - uint64_t timer:4; - uint64_t reserved_51_51:1; - uint64_t ipd_drp:1; - uint64_t gmx_drp:2; - uint64_t trace:1; - uint64_t rml:1; - uint64_t twsi:1; - uint64_t reserved_44_44:1; - uint64_t pci_msi:4; - uint64_t pci_int:4; - uint64_t uart:2; - uint64_t mbox:2; - uint64_t gpio:16; - uint64_t workq:16; -#else - uint64_t workq:16; - uint64_t gpio:16; - uint64_t mbox:2; - uint64_t uart:2; - uint64_t pci_int:4; - uint64_t pci_msi:4; - uint64_t reserved_44_44:1; - uint64_t twsi:1; - uint64_t rml:1; - uint64_t trace:1; - uint64_t gmx_drp:2; - uint64_t ipd_drp:1; - uint64_t reserved_51_51:1; - uint64_t timer:4; - uint64_t usb:1; - uint64_t pcm:1; - uint64_t mpi:1; - uint64_t twsi2:1; - uint64_t powiq:1; - uint64_t ipdppthr:1; - uint64_t mii:1; - uint64_t bootdma:1; -#endif - } cn61xx; - struct cvmx_ciu_intx_en0_w1c_cn52xx cn63xx; - struct cvmx_ciu_intx_en0_w1c_cn52xx cn63xxp1; - struct cvmx_ciu_intx_en0_w1c_cn66xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t bootdma:1; - uint64_t mii:1; - uint64_t ipdppthr:1; - uint64_t powiq:1; - uint64_t twsi2:1; - uint64_t mpi:1; - uint64_t reserved_57_57:1; - uint64_t usb:1; - uint64_t timer:4; - uint64_t reserved_51_51:1; - uint64_t ipd_drp:1; - uint64_t gmx_drp:2; - uint64_t trace:1; - uint64_t rml:1; - uint64_t twsi:1; - uint64_t reserved_44_44:1; - uint64_t pci_msi:4; - uint64_t pci_int:4; - uint64_t uart:2; - uint64_t mbox:2; - uint64_t gpio:16; - uint64_t workq:16; -#else - uint64_t workq:16; - uint64_t gpio:16; - uint64_t mbox:2; - uint64_t uart:2; - uint64_t pci_int:4; - uint64_t pci_msi:4; - uint64_t reserved_44_44:1; - uint64_t twsi:1; - uint64_t rml:1; - uint64_t trace:1; - uint64_t gmx_drp:2; - 
uint64_t ipd_drp:1; - uint64_t reserved_51_51:1; - uint64_t timer:4; - uint64_t usb:1; - uint64_t reserved_57_57:1; - uint64_t mpi:1; - uint64_t twsi2:1; - uint64_t powiq:1; - uint64_t ipdppthr:1; - uint64_t mii:1; - uint64_t bootdma:1; -#endif - } cn66xx; - struct cvmx_ciu_intx_en0_w1c_cnf71xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t bootdma:1; - uint64_t reserved_62_62:1; - uint64_t ipdppthr:1; - uint64_t powiq:1; - uint64_t twsi2:1; - uint64_t mpi:1; - uint64_t pcm:1; - uint64_t usb:1; - uint64_t timer:4; - uint64_t reserved_51_51:1; - uint64_t ipd_drp:1; - uint64_t reserved_49_49:1; - uint64_t gmx_drp:1; - uint64_t trace:1; - uint64_t rml:1; - uint64_t twsi:1; - uint64_t reserved_44_44:1; - uint64_t pci_msi:4; - uint64_t pci_int:4; - uint64_t uart:2; - uint64_t mbox:2; - uint64_t gpio:16; - uint64_t workq:16; -#else - uint64_t workq:16; - uint64_t gpio:16; - uint64_t mbox:2; - uint64_t uart:2; - uint64_t pci_int:4; - uint64_t pci_msi:4; - uint64_t reserved_44_44:1; - uint64_t twsi:1; - uint64_t rml:1; - uint64_t trace:1; - uint64_t gmx_drp:1; - uint64_t reserved_49_49:1; - uint64_t ipd_drp:1; - uint64_t reserved_51_51:1; - uint64_t timer:4; - uint64_t usb:1; - uint64_t pcm:1; - uint64_t mpi:1; - uint64_t twsi2:1; - uint64_t powiq:1; - uint64_t ipdppthr:1; - uint64_t reserved_62_62:1; - uint64_t bootdma:1; -#endif - } cnf71xx; -}; - -union cvmx_ciu_intx_en0_w1s { - uint64_t u64; - struct cvmx_ciu_intx_en0_w1s_s { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t bootdma:1; - uint64_t mii:1; - uint64_t ipdppthr:1; - uint64_t powiq:1; - uint64_t twsi2:1; - uint64_t mpi:1; - uint64_t pcm:1; - uint64_t usb:1; - uint64_t timer:4; - uint64_t key_zero:1; - uint64_t ipd_drp:1; - uint64_t gmx_drp:2; - uint64_t trace:1; - uint64_t rml:1; - uint64_t twsi:1; - uint64_t reserved_44_44:1; - uint64_t pci_msi:4; - uint64_t pci_int:4; - uint64_t uart:2; - uint64_t mbox:2; - uint64_t gpio:16; - uint64_t workq:16; -#else - uint64_t workq:16; - uint64_t gpio:16; - uint64_t mbox:2; - uint64_t uart:2; - uint64_t pci_int:4; - uint64_t pci_msi:4; - uint64_t reserved_44_44:1; - uint64_t twsi:1; - uint64_t rml:1; - uint64_t trace:1; - uint64_t gmx_drp:2; - uint64_t ipd_drp:1; - uint64_t key_zero:1; - uint64_t timer:4; - uint64_t usb:1; - uint64_t pcm:1; - uint64_t mpi:1; - uint64_t twsi2:1; - uint64_t powiq:1; - uint64_t ipdppthr:1; - uint64_t mii:1; - uint64_t bootdma:1; -#endif - } s; - struct cvmx_ciu_intx_en0_w1s_cn52xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t bootdma:1; - uint64_t mii:1; - uint64_t ipdppthr:1; - uint64_t powiq:1; - uint64_t twsi2:1; - uint64_t reserved_57_58:2; - uint64_t usb:1; - uint64_t timer:4; - uint64_t reserved_51_51:1; - uint64_t ipd_drp:1; - uint64_t reserved_49_49:1; - uint64_t gmx_drp:1; - uint64_t trace:1; - uint64_t rml:1; - uint64_t twsi:1; - uint64_t reserved_44_44:1; - uint64_t pci_msi:4; - uint64_t pci_int:4; - uint64_t uart:2; - uint64_t mbox:2; - uint64_t gpio:16; - uint64_t workq:16; -#else - uint64_t workq:16; - uint64_t gpio:16; - uint64_t mbox:2; - uint64_t uart:2; - uint64_t pci_int:4; - uint64_t pci_msi:4; - uint64_t reserved_44_44:1; - uint64_t twsi:1; - uint64_t rml:1; - uint64_t trace:1; - uint64_t gmx_drp:1; - uint64_t reserved_49_49:1; - uint64_t ipd_drp:1; - uint64_t reserved_51_51:1; - uint64_t timer:4; - uint64_t usb:1; - uint64_t reserved_57_58:2; - uint64_t twsi2:1; - uint64_t powiq:1; - uint64_t ipdppthr:1; - uint64_t mii:1; - uint64_t bootdma:1; -#endif - } cn52xx; - struct cvmx_ciu_intx_en0_w1s_cn56xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t bootdma:1; - 
uint64_t mii:1; - uint64_t ipdppthr:1; - uint64_t powiq:1; - uint64_t twsi2:1; - uint64_t reserved_57_58:2; - uint64_t usb:1; - uint64_t timer:4; - uint64_t key_zero:1; - uint64_t ipd_drp:1; - uint64_t gmx_drp:2; - uint64_t trace:1; - uint64_t rml:1; - uint64_t twsi:1; - uint64_t reserved_44_44:1; - uint64_t pci_msi:4; - uint64_t pci_int:4; - uint64_t uart:2; - uint64_t mbox:2; - uint64_t gpio:16; - uint64_t workq:16; -#else - uint64_t workq:16; - uint64_t gpio:16; - uint64_t mbox:2; - uint64_t uart:2; - uint64_t pci_int:4; - uint64_t pci_msi:4; - uint64_t reserved_44_44:1; - uint64_t twsi:1; - uint64_t rml:1; - uint64_t trace:1; - uint64_t gmx_drp:2; - uint64_t ipd_drp:1; - uint64_t key_zero:1; - uint64_t timer:4; - uint64_t usb:1; - uint64_t reserved_57_58:2; - uint64_t twsi2:1; - uint64_t powiq:1; - uint64_t ipdppthr:1; - uint64_t mii:1; - uint64_t bootdma:1; -#endif - } cn56xx; - struct cvmx_ciu_intx_en0_w1s_cn58xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_56_63:8; - uint64_t timer:4; - uint64_t key_zero:1; - uint64_t ipd_drp:1; - uint64_t gmx_drp:2; - uint64_t trace:1; - uint64_t rml:1; - uint64_t twsi:1; - uint64_t reserved_44_44:1; - uint64_t pci_msi:4; - uint64_t pci_int:4; - uint64_t uart:2; - uint64_t mbox:2; - uint64_t gpio:16; - uint64_t workq:16; -#else - uint64_t workq:16; - uint64_t gpio:16; - uint64_t mbox:2; - uint64_t uart:2; - uint64_t pci_int:4; - uint64_t pci_msi:4; - uint64_t reserved_44_44:1; - uint64_t twsi:1; - uint64_t rml:1; - uint64_t trace:1; - uint64_t gmx_drp:2; - uint64_t ipd_drp:1; - uint64_t key_zero:1; - uint64_t timer:4; - uint64_t reserved_56_63:8; -#endif - } cn58xx; - struct cvmx_ciu_intx_en0_w1s_cn61xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t bootdma:1; - uint64_t mii:1; - uint64_t ipdppthr:1; - uint64_t powiq:1; - uint64_t twsi2:1; - uint64_t mpi:1; - uint64_t pcm:1; - uint64_t usb:1; - uint64_t timer:4; - uint64_t reserved_51_51:1; - uint64_t ipd_drp:1; - uint64_t gmx_drp:2; - uint64_t trace:1; - uint64_t rml:1; - uint64_t twsi:1; - uint64_t reserved_44_44:1; - uint64_t pci_msi:4; - uint64_t pci_int:4; - uint64_t uart:2; - uint64_t mbox:2; - uint64_t gpio:16; - uint64_t workq:16; -#else - uint64_t workq:16; - uint64_t gpio:16; - uint64_t mbox:2; - uint64_t uart:2; - uint64_t pci_int:4; - uint64_t pci_msi:4; - uint64_t reserved_44_44:1; - uint64_t twsi:1; - uint64_t rml:1; - uint64_t trace:1; - uint64_t gmx_drp:2; - uint64_t ipd_drp:1; - uint64_t reserved_51_51:1; - uint64_t timer:4; - uint64_t usb:1; - uint64_t pcm:1; - uint64_t mpi:1; - uint64_t twsi2:1; - uint64_t powiq:1; - uint64_t ipdppthr:1; - uint64_t mii:1; - uint64_t bootdma:1; -#endif - } cn61xx; - struct cvmx_ciu_intx_en0_w1s_cn52xx cn63xx; - struct cvmx_ciu_intx_en0_w1s_cn52xx cn63xxp1; - struct cvmx_ciu_intx_en0_w1s_cn66xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t bootdma:1; - uint64_t mii:1; - uint64_t ipdppthr:1; - uint64_t powiq:1; - uint64_t twsi2:1; - uint64_t mpi:1; - uint64_t reserved_57_57:1; - uint64_t usb:1; - uint64_t timer:4; - uint64_t reserved_51_51:1; - uint64_t ipd_drp:1; - uint64_t gmx_drp:2; - uint64_t trace:1; - uint64_t rml:1; - uint64_t twsi:1; - uint64_t reserved_44_44:1; - uint64_t pci_msi:4; - uint64_t pci_int:4; - uint64_t uart:2; - uint64_t mbox:2; - uint64_t gpio:16; - uint64_t workq:16; -#else - uint64_t workq:16; - uint64_t gpio:16; - uint64_t mbox:2; - uint64_t uart:2; - uint64_t pci_int:4; - uint64_t pci_msi:4; - uint64_t reserved_44_44:1; - uint64_t twsi:1; - uint64_t rml:1; - uint64_t trace:1; - uint64_t gmx_drp:2; - uint64_t ipd_drp:1; - 
uint64_t reserved_51_51:1; - uint64_t timer:4; - uint64_t usb:1; - uint64_t reserved_57_57:1; - uint64_t mpi:1; - uint64_t twsi2:1; - uint64_t powiq:1; - uint64_t ipdppthr:1; - uint64_t mii:1; - uint64_t bootdma:1; -#endif - } cn66xx; - struct cvmx_ciu_intx_en0_w1s_cnf71xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t bootdma:1; - uint64_t reserved_62_62:1; - uint64_t ipdppthr:1; - uint64_t powiq:1; - uint64_t twsi2:1; - uint64_t mpi:1; - uint64_t pcm:1; - uint64_t usb:1; - uint64_t timer:4; - uint64_t reserved_51_51:1; - uint64_t ipd_drp:1; - uint64_t reserved_49_49:1; - uint64_t gmx_drp:1; - uint64_t trace:1; - uint64_t rml:1; - uint64_t twsi:1; - uint64_t reserved_44_44:1; - uint64_t pci_msi:4; - uint64_t pci_int:4; - uint64_t uart:2; - uint64_t mbox:2; - uint64_t gpio:16; - uint64_t workq:16; -#else - uint64_t workq:16; - uint64_t gpio:16; - uint64_t mbox:2; - uint64_t uart:2; - uint64_t pci_int:4; - uint64_t pci_msi:4; - uint64_t reserved_44_44:1; - uint64_t twsi:1; - uint64_t rml:1; - uint64_t trace:1; - uint64_t gmx_drp:1; - uint64_t reserved_49_49:1; - uint64_t ipd_drp:1; - uint64_t reserved_51_51:1; - uint64_t timer:4; - uint64_t usb:1; - uint64_t pcm:1; - uint64_t mpi:1; - uint64_t twsi2:1; - uint64_t powiq:1; - uint64_t ipdppthr:1; - uint64_t reserved_62_62:1; - uint64_t bootdma:1; -#endif - } cnf71xx; -}; - -union cvmx_ciu_intx_en1 { - uint64_t u64; - struct cvmx_ciu_intx_en1_s { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t rst:1; - uint64_t reserved_62_62:1; - uint64_t srio3:1; - uint64_t srio2:1; - uint64_t reserved_57_59:3; - uint64_t dfm:1; - uint64_t reserved_53_55:3; - uint64_t lmc0:1; - uint64_t srio1:1; - uint64_t srio0:1; - uint64_t pem1:1; - uint64_t pem0:1; - uint64_t ptp:1; - uint64_t agl:1; - uint64_t reserved_41_45:5; - uint64_t dpi_dma:1; - uint64_t reserved_38_39:2; - uint64_t agx1:1; - uint64_t agx0:1; - uint64_t dpi:1; - uint64_t sli:1; - uint64_t usb:1; - uint64_t dfa:1; - uint64_t key:1; - uint64_t rad:1; - uint64_t tim:1; - uint64_t zip:1; - uint64_t pko:1; - uint64_t pip:1; - uint64_t ipd:1; - uint64_t l2c:1; - uint64_t pow:1; - uint64_t fpa:1; - uint64_t iob:1; - uint64_t mio:1; - uint64_t nand:1; - uint64_t mii1:1; - uint64_t usb1:1; - uint64_t uart2:1; - uint64_t wdog:16; -#else - uint64_t wdog:16; - uint64_t uart2:1; - uint64_t usb1:1; - uint64_t mii1:1; - uint64_t nand:1; - uint64_t mio:1; - uint64_t iob:1; - uint64_t fpa:1; - uint64_t pow:1; - uint64_t l2c:1; - uint64_t ipd:1; - uint64_t pip:1; - uint64_t pko:1; - uint64_t zip:1; - uint64_t tim:1; - uint64_t rad:1; - uint64_t key:1; - uint64_t dfa:1; - uint64_t usb:1; - uint64_t sli:1; - uint64_t dpi:1; - uint64_t agx0:1; - uint64_t agx1:1; - uint64_t reserved_38_39:2; - uint64_t dpi_dma:1; - uint64_t reserved_41_45:5; - uint64_t agl:1; - uint64_t ptp:1; - uint64_t pem0:1; - uint64_t pem1:1; - uint64_t srio0:1; - uint64_t srio1:1; - uint64_t lmc0:1; - uint64_t reserved_53_55:3; - uint64_t dfm:1; - uint64_t reserved_57_59:3; - uint64_t srio2:1; - uint64_t srio3:1; - uint64_t reserved_62_62:1; - uint64_t rst:1; -#endif - } s; - struct cvmx_ciu_intx_en1_cn30xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_1_63:63; - uint64_t wdog:1; -#else - uint64_t wdog:1; - uint64_t reserved_1_63:63; -#endif - } cn30xx; - struct cvmx_ciu_intx_en1_cn31xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_2_63:62; - uint64_t wdog:2; -#else - uint64_t wdog:2; - uint64_t reserved_2_63:62; -#endif - } cn31xx; - struct cvmx_ciu_intx_en1_cn38xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_16_63:48; - uint64_t 
wdog:16; -#else - uint64_t wdog:16; - uint64_t reserved_16_63:48; -#endif - } cn38xx; - struct cvmx_ciu_intx_en1_cn38xx cn38xxp2; - struct cvmx_ciu_intx_en1_cn31xx cn50xx; - struct cvmx_ciu_intx_en1_cn52xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_20_63:44; - uint64_t nand:1; - uint64_t mii1:1; - uint64_t usb1:1; - uint64_t uart2:1; - uint64_t reserved_4_15:12; - uint64_t wdog:4; -#else - uint64_t wdog:4; - uint64_t reserved_4_15:12; - uint64_t uart2:1; - uint64_t usb1:1; - uint64_t mii1:1; - uint64_t nand:1; - uint64_t reserved_20_63:44; -#endif - } cn52xx; - struct cvmx_ciu_intx_en1_cn52xxp1 { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_19_63:45; - uint64_t mii1:1; - uint64_t usb1:1; - uint64_t uart2:1; - uint64_t reserved_4_15:12; - uint64_t wdog:4; -#else - uint64_t wdog:4; - uint64_t reserved_4_15:12; - uint64_t uart2:1; - uint64_t usb1:1; - uint64_t mii1:1; - uint64_t reserved_19_63:45; -#endif - } cn52xxp1; - struct cvmx_ciu_intx_en1_cn56xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_12_63:52; - uint64_t wdog:12; -#else - uint64_t wdog:12; - uint64_t reserved_12_63:52; -#endif - } cn56xx; - struct cvmx_ciu_intx_en1_cn56xx cn56xxp1; - struct cvmx_ciu_intx_en1_cn38xx cn58xx; - struct cvmx_ciu_intx_en1_cn38xx cn58xxp1; - struct cvmx_ciu_intx_en1_cn61xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t rst:1; - uint64_t reserved_53_62:10; - uint64_t lmc0:1; - uint64_t reserved_50_51:2; - uint64_t pem1:1; - uint64_t pem0:1; - uint64_t ptp:1; - uint64_t agl:1; - uint64_t reserved_41_45:5; - uint64_t dpi_dma:1; - uint64_t reserved_38_39:2; - uint64_t agx1:1; - uint64_t agx0:1; - uint64_t dpi:1; - uint64_t sli:1; - uint64_t usb:1; - uint64_t dfa:1; - uint64_t key:1; - uint64_t rad:1; - uint64_t tim:1; - uint64_t zip:1; - uint64_t pko:1; - uint64_t pip:1; - uint64_t ipd:1; - uint64_t l2c:1; - uint64_t pow:1; - uint64_t fpa:1; - uint64_t iob:1; - uint64_t mio:1; - uint64_t nand:1; - uint64_t mii1:1; - uint64_t reserved_4_17:14; - uint64_t wdog:4; -#else - uint64_t wdog:4; - uint64_t reserved_4_17:14; - uint64_t mii1:1; - uint64_t nand:1; - uint64_t mio:1; - uint64_t iob:1; - uint64_t fpa:1; - uint64_t pow:1; - uint64_t l2c:1; - uint64_t ipd:1; - uint64_t pip:1; - uint64_t pko:1; - uint64_t zip:1; - uint64_t tim:1; - uint64_t rad:1; - uint64_t key:1; - uint64_t dfa:1; - uint64_t usb:1; - uint64_t sli:1; - uint64_t dpi:1; - uint64_t agx0:1; - uint64_t agx1:1; - uint64_t reserved_38_39:2; - uint64_t dpi_dma:1; - uint64_t reserved_41_45:5; - uint64_t agl:1; - uint64_t ptp:1; - uint64_t pem0:1; - uint64_t pem1:1; - uint64_t reserved_50_51:2; - uint64_t lmc0:1; - uint64_t reserved_53_62:10; - uint64_t rst:1; -#endif - } cn61xx; - struct cvmx_ciu_intx_en1_cn63xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t rst:1; - uint64_t reserved_57_62:6; - uint64_t dfm:1; - uint64_t reserved_53_55:3; - uint64_t lmc0:1; - uint64_t srio1:1; - uint64_t srio0:1; - uint64_t pem1:1; - uint64_t pem0:1; - uint64_t ptp:1; - uint64_t agl:1; - uint64_t reserved_37_45:9; - uint64_t agx0:1; - uint64_t dpi:1; - uint64_t sli:1; - uint64_t usb:1; - uint64_t dfa:1; - uint64_t key:1; - uint64_t rad:1; - uint64_t tim:1; - uint64_t zip:1; - uint64_t pko:1; - uint64_t pip:1; - uint64_t ipd:1; - uint64_t l2c:1; - uint64_t pow:1; - uint64_t fpa:1; - uint64_t iob:1; - uint64_t mio:1; - uint64_t nand:1; - uint64_t mii1:1; - uint64_t reserved_6_17:12; - uint64_t wdog:6; -#else - uint64_t wdog:6; - uint64_t reserved_6_17:12; - uint64_t mii1:1; - uint64_t nand:1; - uint64_t mio:1; - uint64_t iob:1; - uint64_t fpa:1; - 
uint64_t pow:1; - uint64_t l2c:1; - uint64_t ipd:1; - uint64_t pip:1; - uint64_t pko:1; - uint64_t zip:1; - uint64_t tim:1; - uint64_t rad:1; - uint64_t key:1; - uint64_t dfa:1; - uint64_t usb:1; - uint64_t sli:1; - uint64_t dpi:1; - uint64_t agx0:1; - uint64_t reserved_37_45:9; - uint64_t agl:1; - uint64_t ptp:1; - uint64_t pem0:1; - uint64_t pem1:1; - uint64_t srio0:1; - uint64_t srio1:1; - uint64_t lmc0:1; - uint64_t reserved_53_55:3; - uint64_t dfm:1; - uint64_t reserved_57_62:6; - uint64_t rst:1; -#endif - } cn63xx; - struct cvmx_ciu_intx_en1_cn63xx cn63xxp1; - struct cvmx_ciu_intx_en1_cn66xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t rst:1; - uint64_t reserved_62_62:1; - uint64_t srio3:1; - uint64_t srio2:1; - uint64_t reserved_57_59:3; - uint64_t dfm:1; - uint64_t reserved_53_55:3; - uint64_t lmc0:1; - uint64_t reserved_51_51:1; - uint64_t srio0:1; - uint64_t pem1:1; - uint64_t pem0:1; - uint64_t ptp:1; - uint64_t agl:1; - uint64_t reserved_38_45:8; - uint64_t agx1:1; - uint64_t agx0:1; - uint64_t dpi:1; - uint64_t sli:1; - uint64_t usb:1; - uint64_t dfa:1; - uint64_t key:1; - uint64_t rad:1; - uint64_t tim:1; - uint64_t zip:1; - uint64_t pko:1; - uint64_t pip:1; - uint64_t ipd:1; - uint64_t l2c:1; - uint64_t pow:1; - uint64_t fpa:1; - uint64_t iob:1; - uint64_t mio:1; - uint64_t nand:1; - uint64_t mii1:1; - uint64_t reserved_10_17:8; - uint64_t wdog:10; -#else - uint64_t wdog:10; - uint64_t reserved_10_17:8; - uint64_t mii1:1; - uint64_t nand:1; - uint64_t mio:1; - uint64_t iob:1; - uint64_t fpa:1; - uint64_t pow:1; - uint64_t l2c:1; - uint64_t ipd:1; - uint64_t pip:1; - uint64_t pko:1; - uint64_t zip:1; - uint64_t tim:1; - uint64_t rad:1; - uint64_t key:1; - uint64_t dfa:1; - uint64_t usb:1; - uint64_t sli:1; - uint64_t dpi:1; - uint64_t agx0:1; - uint64_t agx1:1; - uint64_t reserved_38_45:8; - uint64_t agl:1; - uint64_t ptp:1; - uint64_t pem0:1; - uint64_t pem1:1; - uint64_t srio0:1; - uint64_t reserved_51_51:1; - uint64_t lmc0:1; - uint64_t reserved_53_55:3; - uint64_t dfm:1; - uint64_t reserved_57_59:3; - uint64_t srio2:1; - uint64_t srio3:1; - uint64_t reserved_62_62:1; - uint64_t rst:1; -#endif - } cn66xx; - struct cvmx_ciu_intx_en1_cnf71xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t rst:1; - uint64_t reserved_53_62:10; - uint64_t lmc0:1; - uint64_t reserved_50_51:2; - uint64_t pem1:1; - uint64_t pem0:1; - uint64_t ptp:1; - uint64_t reserved_41_46:6; - uint64_t dpi_dma:1; - uint64_t reserved_37_39:3; - uint64_t agx0:1; - uint64_t dpi:1; - uint64_t sli:1; - uint64_t usb:1; - uint64_t reserved_32_32:1; - uint64_t key:1; - uint64_t rad:1; - uint64_t tim:1; - uint64_t reserved_28_28:1; - uint64_t pko:1; - uint64_t pip:1; - uint64_t ipd:1; - uint64_t l2c:1; - uint64_t pow:1; - uint64_t fpa:1; - uint64_t iob:1; - uint64_t mio:1; - uint64_t nand:1; - uint64_t reserved_4_18:15; - uint64_t wdog:4; -#else - uint64_t wdog:4; - uint64_t reserved_4_18:15; - uint64_t nand:1; - uint64_t mio:1; - uint64_t iob:1; - uint64_t fpa:1; - uint64_t pow:1; - uint64_t l2c:1; - uint64_t ipd:1; - uint64_t pip:1; - uint64_t pko:1; - uint64_t reserved_28_28:1; - uint64_t tim:1; - uint64_t rad:1; - uint64_t key:1; - uint64_t reserved_32_32:1; - uint64_t usb:1; - uint64_t sli:1; - uint64_t dpi:1; - uint64_t agx0:1; - uint64_t reserved_37_39:3; - uint64_t dpi_dma:1; - uint64_t reserved_41_46:6; - uint64_t ptp:1; - uint64_t pem0:1; - uint64_t pem1:1; - uint64_t reserved_50_51:2; - uint64_t lmc0:1; - uint64_t reserved_53_62:10; - uint64_t rst:1; -#endif - } cnf71xx; -}; - -union cvmx_ciu_intx_en1_w1c { - 
[Remainder of hunk: deletes the rest of the Octeon CIU interrupt-register definitions, line by line. Removed here are union cvmx_ciu_intx_en1_w1c and union cvmx_ciu_intx_en1_w1s (variants s, cn52xx, cn56xx, cn58xx, cn61xx, cn63xx/cn63xxp1, cn66xx, cnf71xx); union cvmx_ciu_intx_en4_0 (additionally cn50xx, cn52xxp1, cn56xxp1, cn58xxp1) and its _w1c/_w1s forms; union cvmx_ciu_intx_en4_1 and its _w1c/_w1s forms; and the beginning of union cvmx_ciu_intx_sum0 (variants cn30xx, cn31xx, cn38xx/cn38xxp2, cn50xx, cn52xx/cn52xxp1, cn56xx/cn56xxp1, cn58xx/cn58xxp1, cn61xx, continuing past this point). Each union pairs a raw uint64_t u64 member with per-chip structs that declare the same bitfields in opposite order under #ifdef __BIG_ENDIAN_BITFIELD / #else: wdog, uart2, usb1, mii1, nand, mio, iob, fpa, pow, l2c, ipd, pip, pko, zip, tim, rad, key, dfa, usb, sli, dpi, agx0/agx1, dpi_dma, agl, ptp, pem0/pem1, srio0-srio3, lmc0, dfm, rst in the EN1/EN4_1 registers; workq, gpio, mbox, uart, pci_int, pci_msi, wdog_sum or key_zero, twsi, rml, trace, gmx_drp, ipd_drp, timer, usb, pcm, mpi, twsi2, powiq, ipdppthr, mii, bootdma in the EN4_0/SUM0 registers; reserved_* fields pad the unused bit positions.]
wdog_sum:1; - uint64_t pci_msi:4; - uint64_t pci_int:4; - uint64_t uart:2; - uint64_t mbox:2; - uint64_t gpio:16; - uint64_t workq:16; -#else - uint64_t workq:16; - uint64_t gpio:16; - uint64_t mbox:2; - uint64_t uart:2; - uint64_t pci_int:4; - uint64_t pci_msi:4; - uint64_t wdog_sum:1; - uint64_t twsi:1; - uint64_t rml:1; - uint64_t trace:1; - uint64_t gmx_drp:2; - uint64_t ipd_drp:1; - uint64_t sum2:1; - uint64_t timer:4; - uint64_t usb:1; - uint64_t pcm:1; - uint64_t mpi:1; - uint64_t twsi2:1; - uint64_t powiq:1; - uint64_t ipdppthr:1; - uint64_t mii:1; - uint64_t bootdma:1; -#endif - } cn61xx; - struct cvmx_ciu_intx_sum0_cn52xx cn63xx; - struct cvmx_ciu_intx_sum0_cn52xx cn63xxp1; - struct cvmx_ciu_intx_sum0_cn66xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t bootdma:1; - uint64_t mii:1; - uint64_t ipdppthr:1; - uint64_t powiq:1; - uint64_t twsi2:1; - uint64_t mpi:1; - uint64_t reserved_57_57:1; - uint64_t usb:1; - uint64_t timer:4; - uint64_t sum2:1; - uint64_t ipd_drp:1; - uint64_t gmx_drp:2; - uint64_t trace:1; - uint64_t rml:1; - uint64_t twsi:1; - uint64_t wdog_sum:1; - uint64_t pci_msi:4; - uint64_t pci_int:4; - uint64_t uart:2; - uint64_t mbox:2; - uint64_t gpio:16; - uint64_t workq:16; -#else - uint64_t workq:16; - uint64_t gpio:16; - uint64_t mbox:2; - uint64_t uart:2; - uint64_t pci_int:4; - uint64_t pci_msi:4; - uint64_t wdog_sum:1; - uint64_t twsi:1; - uint64_t rml:1; - uint64_t trace:1; - uint64_t gmx_drp:2; - uint64_t ipd_drp:1; - uint64_t sum2:1; - uint64_t timer:4; - uint64_t usb:1; - uint64_t reserved_57_57:1; - uint64_t mpi:1; - uint64_t twsi2:1; - uint64_t powiq:1; - uint64_t ipdppthr:1; - uint64_t mii:1; - uint64_t bootdma:1; -#endif - } cn66xx; - struct cvmx_ciu_intx_sum0_cnf71xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t bootdma:1; - uint64_t reserved_62_62:1; - uint64_t ipdppthr:1; - uint64_t powiq:1; - uint64_t twsi2:1; - uint64_t mpi:1; - uint64_t pcm:1; - uint64_t usb:1; - uint64_t timer:4; - uint64_t sum2:1; - uint64_t ipd_drp:1; - uint64_t reserved_49_49:1; - uint64_t gmx_drp:1; - uint64_t trace:1; - uint64_t rml:1; - uint64_t twsi:1; - uint64_t wdog_sum:1; - uint64_t pci_msi:4; - uint64_t pci_int:4; - uint64_t uart:2; - uint64_t mbox:2; - uint64_t gpio:16; - uint64_t workq:16; -#else - uint64_t workq:16; - uint64_t gpio:16; - uint64_t mbox:2; - uint64_t uart:2; - uint64_t pci_int:4; - uint64_t pci_msi:4; - uint64_t wdog_sum:1; - uint64_t twsi:1; - uint64_t rml:1; - uint64_t trace:1; - uint64_t gmx_drp:1; - uint64_t reserved_49_49:1; - uint64_t ipd_drp:1; - uint64_t sum2:1; - uint64_t timer:4; - uint64_t usb:1; - uint64_t pcm:1; - uint64_t mpi:1; - uint64_t twsi2:1; - uint64_t powiq:1; - uint64_t ipdppthr:1; - uint64_t reserved_62_62:1; - uint64_t bootdma:1; -#endif - } cnf71xx; -}; - -union cvmx_ciu_intx_sum4 { - uint64_t u64; - struct cvmx_ciu_intx_sum4_s { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t bootdma:1; - uint64_t mii:1; - uint64_t ipdppthr:1; - uint64_t powiq:1; - uint64_t twsi2:1; - uint64_t mpi:1; - uint64_t pcm:1; - uint64_t usb:1; - uint64_t timer:4; - uint64_t reserved_51_51:1; - uint64_t ipd_drp:1; - uint64_t gmx_drp:2; - uint64_t trace:1; - uint64_t rml:1; - uint64_t twsi:1; - uint64_t wdog_sum:1; - uint64_t pci_msi:4; - uint64_t pci_int:4; - uint64_t uart:2; - uint64_t mbox:2; - uint64_t gpio:16; - uint64_t workq:16; -#else - uint64_t workq:16; - uint64_t gpio:16; - uint64_t mbox:2; - uint64_t uart:2; - uint64_t pci_int:4; - uint64_t pci_msi:4; - uint64_t wdog_sum:1; - uint64_t twsi:1; - uint64_t rml:1; - uint64_t trace:1; - uint64_t gmx_drp:2; 
- uint64_t ipd_drp:1; - uint64_t reserved_51_51:1; - uint64_t timer:4; - uint64_t usb:1; - uint64_t pcm:1; - uint64_t mpi:1; - uint64_t twsi2:1; - uint64_t powiq:1; - uint64_t ipdppthr:1; - uint64_t mii:1; - uint64_t bootdma:1; -#endif - } s; - struct cvmx_ciu_intx_sum4_cn50xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_59_63:5; - uint64_t mpi:1; - uint64_t pcm:1; - uint64_t usb:1; - uint64_t timer:4; - uint64_t reserved_51_51:1; - uint64_t ipd_drp:1; - uint64_t reserved_49_49:1; - uint64_t gmx_drp:1; - uint64_t reserved_47_47:1; - uint64_t rml:1; - uint64_t twsi:1; - uint64_t wdog_sum:1; - uint64_t pci_msi:4; - uint64_t pci_int:4; - uint64_t uart:2; - uint64_t mbox:2; - uint64_t gpio:16; - uint64_t workq:16; -#else - uint64_t workq:16; - uint64_t gpio:16; - uint64_t mbox:2; - uint64_t uart:2; - uint64_t pci_int:4; - uint64_t pci_msi:4; - uint64_t wdog_sum:1; - uint64_t twsi:1; - uint64_t rml:1; - uint64_t reserved_47_47:1; - uint64_t gmx_drp:1; - uint64_t reserved_49_49:1; - uint64_t ipd_drp:1; - uint64_t reserved_51_51:1; - uint64_t timer:4; - uint64_t usb:1; - uint64_t pcm:1; - uint64_t mpi:1; - uint64_t reserved_59_63:5; -#endif - } cn50xx; - struct cvmx_ciu_intx_sum4_cn52xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t bootdma:1; - uint64_t mii:1; - uint64_t ipdppthr:1; - uint64_t powiq:1; - uint64_t twsi2:1; - uint64_t reserved_57_58:2; - uint64_t usb:1; - uint64_t timer:4; - uint64_t reserved_51_51:1; - uint64_t ipd_drp:1; - uint64_t reserved_49_49:1; - uint64_t gmx_drp:1; - uint64_t trace:1; - uint64_t rml:1; - uint64_t twsi:1; - uint64_t wdog_sum:1; - uint64_t pci_msi:4; - uint64_t pci_int:4; - uint64_t uart:2; - uint64_t mbox:2; - uint64_t gpio:16; - uint64_t workq:16; -#else - uint64_t workq:16; - uint64_t gpio:16; - uint64_t mbox:2; - uint64_t uart:2; - uint64_t pci_int:4; - uint64_t pci_msi:4; - uint64_t wdog_sum:1; - uint64_t twsi:1; - uint64_t rml:1; - uint64_t trace:1; - uint64_t gmx_drp:1; - uint64_t reserved_49_49:1; - uint64_t ipd_drp:1; - uint64_t reserved_51_51:1; - uint64_t timer:4; - uint64_t usb:1; - uint64_t reserved_57_58:2; - uint64_t twsi2:1; - uint64_t powiq:1; - uint64_t ipdppthr:1; - uint64_t mii:1; - uint64_t bootdma:1; -#endif - } cn52xx; - struct cvmx_ciu_intx_sum4_cn52xx cn52xxp1; - struct cvmx_ciu_intx_sum4_cn56xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t bootdma:1; - uint64_t mii:1; - uint64_t ipdppthr:1; - uint64_t powiq:1; - uint64_t twsi2:1; - uint64_t reserved_57_58:2; - uint64_t usb:1; - uint64_t timer:4; - uint64_t key_zero:1; - uint64_t ipd_drp:1; - uint64_t gmx_drp:2; - uint64_t trace:1; - uint64_t rml:1; - uint64_t twsi:1; - uint64_t wdog_sum:1; - uint64_t pci_msi:4; - uint64_t pci_int:4; - uint64_t uart:2; - uint64_t mbox:2; - uint64_t gpio:16; - uint64_t workq:16; -#else - uint64_t workq:16; - uint64_t gpio:16; - uint64_t mbox:2; - uint64_t uart:2; - uint64_t pci_int:4; - uint64_t pci_msi:4; - uint64_t wdog_sum:1; - uint64_t twsi:1; - uint64_t rml:1; - uint64_t trace:1; - uint64_t gmx_drp:2; - uint64_t ipd_drp:1; - uint64_t key_zero:1; - uint64_t timer:4; - uint64_t usb:1; - uint64_t reserved_57_58:2; - uint64_t twsi2:1; - uint64_t powiq:1; - uint64_t ipdppthr:1; - uint64_t mii:1; - uint64_t bootdma:1; -#endif - } cn56xx; - struct cvmx_ciu_intx_sum4_cn56xx cn56xxp1; - struct cvmx_ciu_intx_sum4_cn58xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_56_63:8; - uint64_t timer:4; - uint64_t key_zero:1; - uint64_t ipd_drp:1; - uint64_t gmx_drp:2; - uint64_t trace:1; - uint64_t rml:1; - uint64_t twsi:1; - uint64_t wdog_sum:1; - uint64_t 
pci_msi:4; - uint64_t pci_int:4; - uint64_t uart:2; - uint64_t mbox:2; - uint64_t gpio:16; - uint64_t workq:16; -#else - uint64_t workq:16; - uint64_t gpio:16; - uint64_t mbox:2; - uint64_t uart:2; - uint64_t pci_int:4; - uint64_t pci_msi:4; - uint64_t wdog_sum:1; - uint64_t twsi:1; - uint64_t rml:1; - uint64_t trace:1; - uint64_t gmx_drp:2; - uint64_t ipd_drp:1; - uint64_t key_zero:1; - uint64_t timer:4; - uint64_t reserved_56_63:8; -#endif - } cn58xx; - struct cvmx_ciu_intx_sum4_cn58xx cn58xxp1; - struct cvmx_ciu_intx_sum4_cn61xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t bootdma:1; - uint64_t mii:1; - uint64_t ipdppthr:1; - uint64_t powiq:1; - uint64_t twsi2:1; - uint64_t mpi:1; - uint64_t pcm:1; - uint64_t usb:1; - uint64_t timer:4; - uint64_t sum2:1; - uint64_t ipd_drp:1; - uint64_t gmx_drp:2; - uint64_t trace:1; - uint64_t rml:1; - uint64_t twsi:1; - uint64_t wdog_sum:1; - uint64_t pci_msi:4; - uint64_t pci_int:4; - uint64_t uart:2; - uint64_t mbox:2; - uint64_t gpio:16; - uint64_t workq:16; -#else - uint64_t workq:16; - uint64_t gpio:16; - uint64_t mbox:2; - uint64_t uart:2; - uint64_t pci_int:4; - uint64_t pci_msi:4; - uint64_t wdog_sum:1; - uint64_t twsi:1; - uint64_t rml:1; - uint64_t trace:1; - uint64_t gmx_drp:2; - uint64_t ipd_drp:1; - uint64_t sum2:1; - uint64_t timer:4; - uint64_t usb:1; - uint64_t pcm:1; - uint64_t mpi:1; - uint64_t twsi2:1; - uint64_t powiq:1; - uint64_t ipdppthr:1; - uint64_t mii:1; - uint64_t bootdma:1; -#endif - } cn61xx; - struct cvmx_ciu_intx_sum4_cn52xx cn63xx; - struct cvmx_ciu_intx_sum4_cn52xx cn63xxp1; - struct cvmx_ciu_intx_sum4_cn66xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t bootdma:1; - uint64_t mii:1; - uint64_t ipdppthr:1; - uint64_t powiq:1; - uint64_t twsi2:1; - uint64_t mpi:1; - uint64_t reserved_57_57:1; - uint64_t usb:1; - uint64_t timer:4; - uint64_t sum2:1; - uint64_t ipd_drp:1; - uint64_t gmx_drp:2; - uint64_t trace:1; - uint64_t rml:1; - uint64_t twsi:1; - uint64_t wdog_sum:1; - uint64_t pci_msi:4; - uint64_t pci_int:4; - uint64_t uart:2; - uint64_t mbox:2; - uint64_t gpio:16; - uint64_t workq:16; -#else - uint64_t workq:16; - uint64_t gpio:16; - uint64_t mbox:2; - uint64_t uart:2; - uint64_t pci_int:4; - uint64_t pci_msi:4; - uint64_t wdog_sum:1; - uint64_t twsi:1; - uint64_t rml:1; - uint64_t trace:1; - uint64_t gmx_drp:2; - uint64_t ipd_drp:1; - uint64_t sum2:1; - uint64_t timer:4; - uint64_t usb:1; - uint64_t reserved_57_57:1; - uint64_t mpi:1; - uint64_t twsi2:1; - uint64_t powiq:1; - uint64_t ipdppthr:1; - uint64_t mii:1; - uint64_t bootdma:1; -#endif - } cn66xx; - struct cvmx_ciu_intx_sum4_cnf71xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t bootdma:1; - uint64_t reserved_62_62:1; - uint64_t ipdppthr:1; - uint64_t powiq:1; - uint64_t twsi2:1; - uint64_t mpi:1; - uint64_t pcm:1; - uint64_t usb:1; - uint64_t timer:4; - uint64_t sum2:1; - uint64_t ipd_drp:1; - uint64_t reserved_49_49:1; - uint64_t gmx_drp:1; - uint64_t trace:1; - uint64_t rml:1; - uint64_t twsi:1; - uint64_t wdog_sum:1; - uint64_t pci_msi:4; - uint64_t pci_int:4; - uint64_t uart:2; - uint64_t mbox:2; - uint64_t gpio:16; - uint64_t workq:16; -#else - uint64_t workq:16; - uint64_t gpio:16; - uint64_t mbox:2; - uint64_t uart:2; - uint64_t pci_int:4; - uint64_t pci_msi:4; - uint64_t wdog_sum:1; - uint64_t twsi:1; - uint64_t rml:1; - uint64_t trace:1; - uint64_t gmx_drp:1; - uint64_t reserved_49_49:1; - uint64_t ipd_drp:1; - uint64_t sum2:1; - uint64_t timer:4; - uint64_t usb:1; - uint64_t pcm:1; - uint64_t mpi:1; - uint64_t twsi2:1; - uint64_t powiq:1; - uint64_t 
ipdppthr:1; - uint64_t reserved_62_62:1; - uint64_t bootdma:1; -#endif - } cnf71xx; -}; - -union cvmx_ciu_int33_sum0 { - uint64_t u64; - struct cvmx_ciu_int33_sum0_s { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t bootdma:1; - uint64_t mii:1; - uint64_t ipdppthr:1; - uint64_t powiq:1; - uint64_t twsi2:1; - uint64_t mpi:1; - uint64_t pcm:1; - uint64_t usb:1; - uint64_t timer:4; - uint64_t sum2:1; - uint64_t ipd_drp:1; - uint64_t gmx_drp:2; - uint64_t trace:1; - uint64_t rml:1; - uint64_t twsi:1; - uint64_t wdog_sum:1; - uint64_t pci_msi:4; - uint64_t pci_int:4; - uint64_t uart:2; - uint64_t mbox:2; - uint64_t gpio:16; - uint64_t workq:16; -#else - uint64_t workq:16; - uint64_t gpio:16; - uint64_t mbox:2; - uint64_t uart:2; - uint64_t pci_int:4; - uint64_t pci_msi:4; - uint64_t wdog_sum:1; - uint64_t twsi:1; - uint64_t rml:1; - uint64_t trace:1; - uint64_t gmx_drp:2; - uint64_t ipd_drp:1; - uint64_t sum2:1; - uint64_t timer:4; - uint64_t usb:1; - uint64_t pcm:1; - uint64_t mpi:1; - uint64_t twsi2:1; - uint64_t powiq:1; - uint64_t ipdppthr:1; - uint64_t mii:1; - uint64_t bootdma:1; -#endif - } s; - struct cvmx_ciu_int33_sum0_s cn61xx; - struct cvmx_ciu_int33_sum0_cn63xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t bootdma:1; - uint64_t mii:1; - uint64_t ipdppthr:1; - uint64_t powiq:1; - uint64_t twsi2:1; - uint64_t reserved_57_58:2; - uint64_t usb:1; - uint64_t timer:4; - uint64_t reserved_51_51:1; - uint64_t ipd_drp:1; - uint64_t reserved_49_49:1; - uint64_t gmx_drp:1; - uint64_t trace:1; - uint64_t rml:1; - uint64_t twsi:1; - uint64_t wdog_sum:1; - uint64_t pci_msi:4; - uint64_t pci_int:4; - uint64_t uart:2; - uint64_t mbox:2; - uint64_t gpio:16; - uint64_t workq:16; -#else - uint64_t workq:16; - uint64_t gpio:16; - uint64_t mbox:2; - uint64_t uart:2; - uint64_t pci_int:4; - uint64_t pci_msi:4; - uint64_t wdog_sum:1; - uint64_t twsi:1; - uint64_t rml:1; - uint64_t trace:1; - uint64_t gmx_drp:1; - uint64_t reserved_49_49:1; - uint64_t ipd_drp:1; - uint64_t reserved_51_51:1; - uint64_t timer:4; - uint64_t usb:1; - uint64_t reserved_57_58:2; - uint64_t twsi2:1; - uint64_t powiq:1; - uint64_t ipdppthr:1; - uint64_t mii:1; - uint64_t bootdma:1; -#endif - } cn63xx; - struct cvmx_ciu_int33_sum0_cn63xx cn63xxp1; - struct cvmx_ciu_int33_sum0_cn66xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t bootdma:1; - uint64_t mii:1; - uint64_t ipdppthr:1; - uint64_t powiq:1; - uint64_t twsi2:1; - uint64_t mpi:1; - uint64_t reserved_57_57:1; - uint64_t usb:1; - uint64_t timer:4; - uint64_t sum2:1; - uint64_t ipd_drp:1; - uint64_t gmx_drp:2; - uint64_t trace:1; - uint64_t rml:1; - uint64_t twsi:1; - uint64_t wdog_sum:1; - uint64_t pci_msi:4; - uint64_t pci_int:4; - uint64_t uart:2; - uint64_t mbox:2; - uint64_t gpio:16; - uint64_t workq:16; -#else - uint64_t workq:16; - uint64_t gpio:16; - uint64_t mbox:2; - uint64_t uart:2; - uint64_t pci_int:4; - uint64_t pci_msi:4; - uint64_t wdog_sum:1; - uint64_t twsi:1; - uint64_t rml:1; - uint64_t trace:1; - uint64_t gmx_drp:2; - uint64_t ipd_drp:1; - uint64_t sum2:1; - uint64_t timer:4; - uint64_t usb:1; - uint64_t reserved_57_57:1; - uint64_t mpi:1; - uint64_t twsi2:1; - uint64_t powiq:1; - uint64_t ipdppthr:1; - uint64_t mii:1; - uint64_t bootdma:1; -#endif - } cn66xx; - struct cvmx_ciu_int33_sum0_cnf71xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t bootdma:1; - uint64_t reserved_62_62:1; - uint64_t ipdppthr:1; - uint64_t powiq:1; - uint64_t twsi2:1; - uint64_t mpi:1; - uint64_t pcm:1; - uint64_t usb:1; - uint64_t timer:4; - uint64_t sum2:1; - uint64_t ipd_drp:1; - uint64_t 
reserved_49_49:1; - uint64_t gmx_drp:1; - uint64_t trace:1; - uint64_t rml:1; - uint64_t twsi:1; - uint64_t wdog_sum:1; - uint64_t pci_msi:4; - uint64_t pci_int:4; - uint64_t uart:2; - uint64_t mbox:2; - uint64_t gpio:16; - uint64_t workq:16; -#else - uint64_t workq:16; - uint64_t gpio:16; - uint64_t mbox:2; - uint64_t uart:2; - uint64_t pci_int:4; - uint64_t pci_msi:4; - uint64_t wdog_sum:1; - uint64_t twsi:1; - uint64_t rml:1; - uint64_t trace:1; - uint64_t gmx_drp:1; - uint64_t reserved_49_49:1; - uint64_t ipd_drp:1; - uint64_t sum2:1; - uint64_t timer:4; - uint64_t usb:1; - uint64_t pcm:1; - uint64_t mpi:1; - uint64_t twsi2:1; - uint64_t powiq:1; - uint64_t ipdppthr:1; - uint64_t reserved_62_62:1; - uint64_t bootdma:1; -#endif - } cnf71xx; -}; - -union cvmx_ciu_int_dbg_sel { - uint64_t u64; - struct cvmx_ciu_int_dbg_sel_s { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_19_63:45; - uint64_t sel:3; - uint64_t reserved_10_15:6; - uint64_t irq:2; - uint64_t reserved_5_7:3; - uint64_t pp:5; -#else - uint64_t pp:5; - uint64_t reserved_5_7:3; - uint64_t irq:2; - uint64_t reserved_10_15:6; - uint64_t sel:3; - uint64_t reserved_19_63:45; -#endif - } s; - struct cvmx_ciu_int_dbg_sel_cn61xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_19_63:45; - uint64_t sel:3; - uint64_t reserved_10_15:6; - uint64_t irq:2; - uint64_t reserved_4_7:4; - uint64_t pp:4; -#else - uint64_t pp:4; - uint64_t reserved_4_7:4; - uint64_t irq:2; - uint64_t reserved_10_15:6; - uint64_t sel:3; - uint64_t reserved_19_63:45; -#endif - } cn61xx; - struct cvmx_ciu_int_dbg_sel_cn63xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_19_63:45; - uint64_t sel:3; - uint64_t reserved_10_15:6; - uint64_t irq:2; - uint64_t reserved_3_7:5; - uint64_t pp:3; -#else - uint64_t pp:3; - uint64_t reserved_3_7:5; - uint64_t irq:2; - uint64_t reserved_10_15:6; - uint64_t sel:3; - uint64_t reserved_19_63:45; -#endif - } cn63xx; - struct cvmx_ciu_int_dbg_sel_cn61xx cn66xx; - struct cvmx_ciu_int_dbg_sel_s cn68xx; - struct cvmx_ciu_int_dbg_sel_s cn68xxp1; - struct cvmx_ciu_int_dbg_sel_cn61xx cnf71xx; -}; - -union cvmx_ciu_int_sum1 { - uint64_t u64; - struct cvmx_ciu_int_sum1_s { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t rst:1; - uint64_t reserved_62_62:1; - uint64_t srio3:1; - uint64_t srio2:1; - uint64_t reserved_57_59:3; - uint64_t dfm:1; - uint64_t reserved_53_55:3; - uint64_t lmc0:1; - uint64_t srio1:1; - uint64_t srio0:1; - uint64_t pem1:1; - uint64_t pem0:1; - uint64_t ptp:1; - uint64_t agl:1; - uint64_t reserved_38_45:8; - uint64_t agx1:1; - uint64_t agx0:1; - uint64_t dpi:1; - uint64_t sli:1; - uint64_t usb:1; - uint64_t dfa:1; - uint64_t key:1; - uint64_t rad:1; - uint64_t tim:1; - uint64_t zip:1; - uint64_t pko:1; - uint64_t pip:1; - uint64_t ipd:1; - uint64_t l2c:1; - uint64_t pow:1; - uint64_t fpa:1; - uint64_t iob:1; - uint64_t mio:1; - uint64_t nand:1; - uint64_t mii1:1; - uint64_t usb1:1; - uint64_t uart2:1; - uint64_t wdog:16; -#else - uint64_t wdog:16; - uint64_t uart2:1; - uint64_t usb1:1; - uint64_t mii1:1; - uint64_t nand:1; - uint64_t mio:1; - uint64_t iob:1; - uint64_t fpa:1; - uint64_t pow:1; - uint64_t l2c:1; - uint64_t ipd:1; - uint64_t pip:1; - uint64_t pko:1; - uint64_t zip:1; - uint64_t tim:1; - uint64_t rad:1; - uint64_t key:1; - uint64_t dfa:1; - uint64_t usb:1; - uint64_t sli:1; - uint64_t dpi:1; - uint64_t agx0:1; - uint64_t agx1:1; - uint64_t reserved_38_45:8; - uint64_t agl:1; - uint64_t ptp:1; - uint64_t pem0:1; - uint64_t pem1:1; - uint64_t srio0:1; - uint64_t srio1:1; - uint64_t lmc0:1; - uint64_t 
reserved_53_55:3; - uint64_t dfm:1; - uint64_t reserved_57_59:3; - uint64_t srio2:1; - uint64_t srio3:1; - uint64_t reserved_62_62:1; - uint64_t rst:1; -#endif - } s; - struct cvmx_ciu_int_sum1_cn30xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_1_63:63; - uint64_t wdog:1; -#else - uint64_t wdog:1; - uint64_t reserved_1_63:63; -#endif - } cn30xx; - struct cvmx_ciu_int_sum1_cn31xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_2_63:62; - uint64_t wdog:2; -#else - uint64_t wdog:2; - uint64_t reserved_2_63:62; -#endif - } cn31xx; - struct cvmx_ciu_int_sum1_cn38xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_16_63:48; - uint64_t wdog:16; -#else - uint64_t wdog:16; - uint64_t reserved_16_63:48; -#endif - } cn38xx; - struct cvmx_ciu_int_sum1_cn38xx cn38xxp2; - struct cvmx_ciu_int_sum1_cn31xx cn50xx; - struct cvmx_ciu_int_sum1_cn52xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_20_63:44; - uint64_t nand:1; - uint64_t mii1:1; - uint64_t usb1:1; - uint64_t uart2:1; - uint64_t reserved_4_15:12; - uint64_t wdog:4; -#else - uint64_t wdog:4; - uint64_t reserved_4_15:12; - uint64_t uart2:1; - uint64_t usb1:1; - uint64_t mii1:1; - uint64_t nand:1; - uint64_t reserved_20_63:44; -#endif - } cn52xx; - struct cvmx_ciu_int_sum1_cn52xxp1 { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_19_63:45; - uint64_t mii1:1; - uint64_t usb1:1; - uint64_t uart2:1; - uint64_t reserved_4_15:12; - uint64_t wdog:4; -#else - uint64_t wdog:4; - uint64_t reserved_4_15:12; - uint64_t uart2:1; - uint64_t usb1:1; - uint64_t mii1:1; - uint64_t reserved_19_63:45; -#endif - } cn52xxp1; - struct cvmx_ciu_int_sum1_cn56xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_12_63:52; - uint64_t wdog:12; -#else - uint64_t wdog:12; - uint64_t reserved_12_63:52; -#endif - } cn56xx; - struct cvmx_ciu_int_sum1_cn56xx cn56xxp1; - struct cvmx_ciu_int_sum1_cn38xx cn58xx; - struct cvmx_ciu_int_sum1_cn38xx cn58xxp1; - struct cvmx_ciu_int_sum1_cn61xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t rst:1; - uint64_t reserved_53_62:10; - uint64_t lmc0:1; - uint64_t reserved_50_51:2; - uint64_t pem1:1; - uint64_t pem0:1; - uint64_t ptp:1; - uint64_t agl:1; - uint64_t reserved_38_45:8; - uint64_t agx1:1; - uint64_t agx0:1; - uint64_t dpi:1; - uint64_t sli:1; - uint64_t usb:1; - uint64_t dfa:1; - uint64_t key:1; - uint64_t rad:1; - uint64_t tim:1; - uint64_t zip:1; - uint64_t pko:1; - uint64_t pip:1; - uint64_t ipd:1; - uint64_t l2c:1; - uint64_t pow:1; - uint64_t fpa:1; - uint64_t iob:1; - uint64_t mio:1; - uint64_t nand:1; - uint64_t mii1:1; - uint64_t reserved_4_17:14; - uint64_t wdog:4; -#else - uint64_t wdog:4; - uint64_t reserved_4_17:14; - uint64_t mii1:1; - uint64_t nand:1; - uint64_t mio:1; - uint64_t iob:1; - uint64_t fpa:1; - uint64_t pow:1; - uint64_t l2c:1; - uint64_t ipd:1; - uint64_t pip:1; - uint64_t pko:1; - uint64_t zip:1; - uint64_t tim:1; - uint64_t rad:1; - uint64_t key:1; - uint64_t dfa:1; - uint64_t usb:1; - uint64_t sli:1; - uint64_t dpi:1; - uint64_t agx0:1; - uint64_t agx1:1; - uint64_t reserved_38_45:8; - uint64_t agl:1; - uint64_t ptp:1; - uint64_t pem0:1; - uint64_t pem1:1; - uint64_t reserved_50_51:2; - uint64_t lmc0:1; - uint64_t reserved_53_62:10; - uint64_t rst:1; -#endif - } cn61xx; - struct cvmx_ciu_int_sum1_cn63xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t rst:1; - uint64_t reserved_57_62:6; - uint64_t dfm:1; - uint64_t reserved_53_55:3; - uint64_t lmc0:1; - uint64_t srio1:1; - uint64_t srio0:1; - uint64_t pem1:1; - uint64_t pem0:1; - uint64_t ptp:1; - uint64_t agl:1; - uint64_t 
reserved_37_45:9; - uint64_t agx0:1; - uint64_t dpi:1; - uint64_t sli:1; - uint64_t usb:1; - uint64_t dfa:1; - uint64_t key:1; - uint64_t rad:1; - uint64_t tim:1; - uint64_t zip:1; - uint64_t pko:1; - uint64_t pip:1; - uint64_t ipd:1; - uint64_t l2c:1; - uint64_t pow:1; - uint64_t fpa:1; - uint64_t iob:1; - uint64_t mio:1; - uint64_t nand:1; - uint64_t mii1:1; - uint64_t reserved_6_17:12; - uint64_t wdog:6; -#else - uint64_t wdog:6; - uint64_t reserved_6_17:12; - uint64_t mii1:1; - uint64_t nand:1; - uint64_t mio:1; - uint64_t iob:1; - uint64_t fpa:1; - uint64_t pow:1; - uint64_t l2c:1; - uint64_t ipd:1; - uint64_t pip:1; - uint64_t pko:1; - uint64_t zip:1; - uint64_t tim:1; - uint64_t rad:1; - uint64_t key:1; - uint64_t dfa:1; - uint64_t usb:1; - uint64_t sli:1; - uint64_t dpi:1; - uint64_t agx0:1; - uint64_t reserved_37_45:9; - uint64_t agl:1; - uint64_t ptp:1; - uint64_t pem0:1; - uint64_t pem1:1; - uint64_t srio0:1; - uint64_t srio1:1; - uint64_t lmc0:1; - uint64_t reserved_53_55:3; - uint64_t dfm:1; - uint64_t reserved_57_62:6; - uint64_t rst:1; -#endif - } cn63xx; - struct cvmx_ciu_int_sum1_cn63xx cn63xxp1; - struct cvmx_ciu_int_sum1_cn66xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t rst:1; - uint64_t reserved_62_62:1; - uint64_t srio3:1; - uint64_t srio2:1; - uint64_t reserved_57_59:3; - uint64_t dfm:1; - uint64_t reserved_53_55:3; - uint64_t lmc0:1; - uint64_t reserved_51_51:1; - uint64_t srio0:1; - uint64_t pem1:1; - uint64_t pem0:1; - uint64_t ptp:1; - uint64_t agl:1; - uint64_t reserved_38_45:8; - uint64_t agx1:1; - uint64_t agx0:1; - uint64_t dpi:1; - uint64_t sli:1; - uint64_t usb:1; - uint64_t dfa:1; - uint64_t key:1; - uint64_t rad:1; - uint64_t tim:1; - uint64_t zip:1; - uint64_t pko:1; - uint64_t pip:1; - uint64_t ipd:1; - uint64_t l2c:1; - uint64_t pow:1; - uint64_t fpa:1; - uint64_t iob:1; - uint64_t mio:1; - uint64_t nand:1; - uint64_t mii1:1; - uint64_t reserved_10_17:8; - uint64_t wdog:10; -#else - uint64_t wdog:10; - uint64_t reserved_10_17:8; - uint64_t mii1:1; - uint64_t nand:1; - uint64_t mio:1; - uint64_t iob:1; - uint64_t fpa:1; - uint64_t pow:1; - uint64_t l2c:1; - uint64_t ipd:1; - uint64_t pip:1; - uint64_t pko:1; - uint64_t zip:1; - uint64_t tim:1; - uint64_t rad:1; - uint64_t key:1; - uint64_t dfa:1; - uint64_t usb:1; - uint64_t sli:1; - uint64_t dpi:1; - uint64_t agx0:1; - uint64_t agx1:1; - uint64_t reserved_38_45:8; - uint64_t agl:1; - uint64_t ptp:1; - uint64_t pem0:1; - uint64_t pem1:1; - uint64_t srio0:1; - uint64_t reserved_51_51:1; - uint64_t lmc0:1; - uint64_t reserved_53_55:3; - uint64_t dfm:1; - uint64_t reserved_57_59:3; - uint64_t srio2:1; - uint64_t srio3:1; - uint64_t reserved_62_62:1; - uint64_t rst:1; -#endif - } cn66xx; - struct cvmx_ciu_int_sum1_cnf71xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t rst:1; - uint64_t reserved_53_62:10; - uint64_t lmc0:1; - uint64_t reserved_50_51:2; - uint64_t pem1:1; - uint64_t pem0:1; - uint64_t ptp:1; - uint64_t reserved_37_46:10; - uint64_t agx0:1; - uint64_t dpi:1; - uint64_t sli:1; - uint64_t usb:1; - uint64_t reserved_32_32:1; - uint64_t key:1; - uint64_t rad:1; - uint64_t tim:1; - uint64_t reserved_28_28:1; - uint64_t pko:1; - uint64_t pip:1; - uint64_t ipd:1; - uint64_t l2c:1; - uint64_t pow:1; - uint64_t fpa:1; - uint64_t iob:1; - uint64_t mio:1; - uint64_t nand:1; - uint64_t reserved_4_18:15; - uint64_t wdog:4; -#else - uint64_t wdog:4; - uint64_t reserved_4_18:15; - uint64_t nand:1; - uint64_t mio:1; - uint64_t iob:1; - uint64_t fpa:1; - uint64_t pow:1; - uint64_t l2c:1; - uint64_t ipd:1; - 
uint64_t pip:1; - uint64_t pko:1; - uint64_t reserved_28_28:1; - uint64_t tim:1; - uint64_t rad:1; - uint64_t key:1; - uint64_t reserved_32_32:1; - uint64_t usb:1; - uint64_t sli:1; - uint64_t dpi:1; - uint64_t agx0:1; - uint64_t reserved_37_46:10; - uint64_t ptp:1; - uint64_t pem0:1; - uint64_t pem1:1; - uint64_t reserved_50_51:2; - uint64_t lmc0:1; - uint64_t reserved_53_62:10; - uint64_t rst:1; -#endif - } cnf71xx; -}; - -union cvmx_ciu_mbox_clrx { - uint64_t u64; - struct cvmx_ciu_mbox_clrx_s { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_32_63:32; - uint64_t bits:32; -#else - uint64_t bits:32; - uint64_t reserved_32_63:32; -#endif - } s; - struct cvmx_ciu_mbox_clrx_s cn30xx; - struct cvmx_ciu_mbox_clrx_s cn31xx; - struct cvmx_ciu_mbox_clrx_s cn38xx; - struct cvmx_ciu_mbox_clrx_s cn38xxp2; - struct cvmx_ciu_mbox_clrx_s cn50xx; - struct cvmx_ciu_mbox_clrx_s cn52xx; - struct cvmx_ciu_mbox_clrx_s cn52xxp1; - struct cvmx_ciu_mbox_clrx_s cn56xx; - struct cvmx_ciu_mbox_clrx_s cn56xxp1; - struct cvmx_ciu_mbox_clrx_s cn58xx; - struct cvmx_ciu_mbox_clrx_s cn58xxp1; - struct cvmx_ciu_mbox_clrx_s cn61xx; - struct cvmx_ciu_mbox_clrx_s cn63xx; - struct cvmx_ciu_mbox_clrx_s cn63xxp1; - struct cvmx_ciu_mbox_clrx_s cn66xx; - struct cvmx_ciu_mbox_clrx_s cn68xx; - struct cvmx_ciu_mbox_clrx_s cn68xxp1; - struct cvmx_ciu_mbox_clrx_s cnf71xx; -}; - -union cvmx_ciu_mbox_setx { - uint64_t u64; - struct cvmx_ciu_mbox_setx_s { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_32_63:32; - uint64_t bits:32; -#else - uint64_t bits:32; - uint64_t reserved_32_63:32; -#endif - } s; - struct cvmx_ciu_mbox_setx_s cn30xx; - struct cvmx_ciu_mbox_setx_s cn31xx; - struct cvmx_ciu_mbox_setx_s cn38xx; - struct cvmx_ciu_mbox_setx_s cn38xxp2; - struct cvmx_ciu_mbox_setx_s cn50xx; - struct cvmx_ciu_mbox_setx_s cn52xx; - struct cvmx_ciu_mbox_setx_s cn52xxp1; - struct cvmx_ciu_mbox_setx_s cn56xx; - struct cvmx_ciu_mbox_setx_s cn56xxp1; - struct cvmx_ciu_mbox_setx_s cn58xx; - struct cvmx_ciu_mbox_setx_s cn58xxp1; - struct cvmx_ciu_mbox_setx_s cn61xx; - struct cvmx_ciu_mbox_setx_s cn63xx; - struct cvmx_ciu_mbox_setx_s cn63xxp1; - struct cvmx_ciu_mbox_setx_s cn66xx; - struct cvmx_ciu_mbox_setx_s cn68xx; - struct cvmx_ciu_mbox_setx_s cn68xxp1; - struct cvmx_ciu_mbox_setx_s cnf71xx; -}; - -union cvmx_ciu_nmi { - uint64_t u64; - struct cvmx_ciu_nmi_s { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_32_63:32; - uint64_t nmi:32; -#else - uint64_t nmi:32; - uint64_t reserved_32_63:32; -#endif - } s; - struct cvmx_ciu_nmi_cn30xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_1_63:63; - uint64_t nmi:1; -#else - uint64_t nmi:1; - uint64_t reserved_1_63:63; -#endif - } cn30xx; - struct cvmx_ciu_nmi_cn31xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_2_63:62; - uint64_t nmi:2; -#else - uint64_t nmi:2; - uint64_t reserved_2_63:62; -#endif - } cn31xx; - struct cvmx_ciu_nmi_cn38xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_16_63:48; - uint64_t nmi:16; -#else - uint64_t nmi:16; - uint64_t reserved_16_63:48; -#endif - } cn38xx; - struct cvmx_ciu_nmi_cn38xx cn38xxp2; - struct cvmx_ciu_nmi_cn31xx cn50xx; - struct cvmx_ciu_nmi_cn52xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_4_63:60; - uint64_t nmi:4; -#else - uint64_t nmi:4; - uint64_t reserved_4_63:60; -#endif - } cn52xx; - struct cvmx_ciu_nmi_cn52xx cn52xxp1; - struct cvmx_ciu_nmi_cn56xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_12_63:52; - uint64_t nmi:12; -#else - uint64_t nmi:12; - uint64_t reserved_12_63:52; -#endif - } cn56xx; - struct 
cvmx_ciu_nmi_cn56xx cn56xxp1; - struct cvmx_ciu_nmi_cn38xx cn58xx; - struct cvmx_ciu_nmi_cn38xx cn58xxp1; - struct cvmx_ciu_nmi_cn52xx cn61xx; - struct cvmx_ciu_nmi_cn63xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_6_63:58; - uint64_t nmi:6; -#else - uint64_t nmi:6; - uint64_t reserved_6_63:58; -#endif - } cn63xx; - struct cvmx_ciu_nmi_cn63xx cn63xxp1; - struct cvmx_ciu_nmi_cn66xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_10_63:54; - uint64_t nmi:10; -#else - uint64_t nmi:10; - uint64_t reserved_10_63:54; -#endif - } cn66xx; - struct cvmx_ciu_nmi_s cn68xx; - struct cvmx_ciu_nmi_s cn68xxp1; - struct cvmx_ciu_nmi_cn52xx cnf71xx; -}; - -union cvmx_ciu_pci_inta { - uint64_t u64; - struct cvmx_ciu_pci_inta_s { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_2_63:62; - uint64_t intr:2; -#else - uint64_t intr:2; - uint64_t reserved_2_63:62; -#endif - } s; - struct cvmx_ciu_pci_inta_s cn30xx; - struct cvmx_ciu_pci_inta_s cn31xx; - struct cvmx_ciu_pci_inta_s cn38xx; - struct cvmx_ciu_pci_inta_s cn38xxp2; - struct cvmx_ciu_pci_inta_s cn50xx; - struct cvmx_ciu_pci_inta_s cn52xx; - struct cvmx_ciu_pci_inta_s cn52xxp1; - struct cvmx_ciu_pci_inta_s cn56xx; - struct cvmx_ciu_pci_inta_s cn56xxp1; - struct cvmx_ciu_pci_inta_s cn58xx; - struct cvmx_ciu_pci_inta_s cn58xxp1; - struct cvmx_ciu_pci_inta_s cn61xx; - struct cvmx_ciu_pci_inta_s cn63xx; - struct cvmx_ciu_pci_inta_s cn63xxp1; - struct cvmx_ciu_pci_inta_s cn66xx; - struct cvmx_ciu_pci_inta_s cn68xx; - struct cvmx_ciu_pci_inta_s cn68xxp1; - struct cvmx_ciu_pci_inta_s cnf71xx; -}; - -union cvmx_ciu_pp_bist_stat { - uint64_t u64; - struct cvmx_ciu_pp_bist_stat_s { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_32_63:32; - uint64_t pp_bist:32; -#else - uint64_t pp_bist:32; - uint64_t reserved_32_63:32; -#endif - } s; - struct cvmx_ciu_pp_bist_stat_s cn68xx; - struct cvmx_ciu_pp_bist_stat_s cn68xxp1; -}; - -union cvmx_ciu_pp_dbg { - uint64_t u64; - struct cvmx_ciu_pp_dbg_s { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_32_63:32; - uint64_t ppdbg:32; -#else - uint64_t ppdbg:32; - uint64_t reserved_32_63:32; -#endif - } s; - struct cvmx_ciu_pp_dbg_cn30xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_1_63:63; - uint64_t ppdbg:1; -#else - uint64_t ppdbg:1; - uint64_t reserved_1_63:63; -#endif - } cn30xx; - struct cvmx_ciu_pp_dbg_cn31xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_2_63:62; - uint64_t ppdbg:2; -#else - uint64_t ppdbg:2; - uint64_t reserved_2_63:62; -#endif - } cn31xx; - struct cvmx_ciu_pp_dbg_cn38xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_16_63:48; - uint64_t ppdbg:16; -#else - uint64_t ppdbg:16; - uint64_t reserved_16_63:48; -#endif - } cn38xx; - struct cvmx_ciu_pp_dbg_cn38xx cn38xxp2; - struct cvmx_ciu_pp_dbg_cn31xx cn50xx; - struct cvmx_ciu_pp_dbg_cn52xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_4_63:60; - uint64_t ppdbg:4; -#else - uint64_t ppdbg:4; - uint64_t reserved_4_63:60; -#endif - } cn52xx; - struct cvmx_ciu_pp_dbg_cn52xx cn52xxp1; - struct cvmx_ciu_pp_dbg_cn56xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_12_63:52; - uint64_t ppdbg:12; -#else - uint64_t ppdbg:12; - uint64_t reserved_12_63:52; -#endif - } cn56xx; - struct cvmx_ciu_pp_dbg_cn56xx cn56xxp1; - struct cvmx_ciu_pp_dbg_cn38xx cn58xx; - struct cvmx_ciu_pp_dbg_cn38xx cn58xxp1; - struct cvmx_ciu_pp_dbg_cn52xx cn61xx; - struct cvmx_ciu_pp_dbg_cn63xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_6_63:58; - uint64_t ppdbg:6; -#else - uint64_t ppdbg:6; - uint64_t reserved_6_63:58; -#endif - } cn63xx; - 
struct cvmx_ciu_pp_dbg_cn63xx cn63xxp1; - struct cvmx_ciu_pp_dbg_cn66xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_10_63:54; - uint64_t ppdbg:10; -#else - uint64_t ppdbg:10; - uint64_t reserved_10_63:54; -#endif - } cn66xx; - struct cvmx_ciu_pp_dbg_s cn68xx; - struct cvmx_ciu_pp_dbg_s cn68xxp1; - struct cvmx_ciu_pp_dbg_cn52xx cnf71xx; -}; - -union cvmx_ciu_pp_pokex { - uint64_t u64; - struct cvmx_ciu_pp_pokex_s { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t poke:64; -#else - uint64_t poke:64; -#endif - } s; - struct cvmx_ciu_pp_pokex_s cn30xx; - struct cvmx_ciu_pp_pokex_s cn31xx; - struct cvmx_ciu_pp_pokex_s cn38xx; - struct cvmx_ciu_pp_pokex_s cn38xxp2; - struct cvmx_ciu_pp_pokex_s cn50xx; - struct cvmx_ciu_pp_pokex_s cn52xx; - struct cvmx_ciu_pp_pokex_s cn52xxp1; - struct cvmx_ciu_pp_pokex_s cn56xx; - struct cvmx_ciu_pp_pokex_s cn56xxp1; - struct cvmx_ciu_pp_pokex_s cn58xx; - struct cvmx_ciu_pp_pokex_s cn58xxp1; - struct cvmx_ciu_pp_pokex_s cn61xx; - struct cvmx_ciu_pp_pokex_s cn63xx; - struct cvmx_ciu_pp_pokex_s cn63xxp1; - struct cvmx_ciu_pp_pokex_s cn66xx; - struct cvmx_ciu_pp_pokex_s cn68xx; - struct cvmx_ciu_pp_pokex_s cn68xxp1; - struct cvmx_ciu_pp_pokex_s cnf71xx; -}; - -union cvmx_ciu_pp_rst { - uint64_t u64; - struct cvmx_ciu_pp_rst_s { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_32_63:32; - uint64_t rst:31; - uint64_t rst0:1; -#else - uint64_t rst0:1; - uint64_t rst:31; - uint64_t reserved_32_63:32; -#endif - } s; - struct cvmx_ciu_pp_rst_cn30xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_1_63:63; - uint64_t rst0:1; -#else - uint64_t rst0:1; - uint64_t reserved_1_63:63; -#endif - } cn30xx; - struct cvmx_ciu_pp_rst_cn31xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_2_63:62; - uint64_t rst:1; - uint64_t rst0:1; -#else - uint64_t rst0:1; - uint64_t rst:1; - uint64_t reserved_2_63:62; -#endif - } cn31xx; - struct cvmx_ciu_pp_rst_cn38xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_16_63:48; - uint64_t rst:15; - uint64_t rst0:1; -#else - uint64_t rst0:1; - uint64_t rst:15; - uint64_t reserved_16_63:48; -#endif - } cn38xx; - struct cvmx_ciu_pp_rst_cn38xx cn38xxp2; - struct cvmx_ciu_pp_rst_cn31xx cn50xx; - struct cvmx_ciu_pp_rst_cn52xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_4_63:60; - uint64_t rst:3; - uint64_t rst0:1; -#else - uint64_t rst0:1; - uint64_t rst:3; - uint64_t reserved_4_63:60; -#endif - } cn52xx; - struct cvmx_ciu_pp_rst_cn52xx cn52xxp1; - struct cvmx_ciu_pp_rst_cn56xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_12_63:52; - uint64_t rst:11; - uint64_t rst0:1; -#else - uint64_t rst0:1; - uint64_t rst:11; - uint64_t reserved_12_63:52; -#endif - } cn56xx; - struct cvmx_ciu_pp_rst_cn56xx cn56xxp1; - struct cvmx_ciu_pp_rst_cn38xx cn58xx; - struct cvmx_ciu_pp_rst_cn38xx cn58xxp1; - struct cvmx_ciu_pp_rst_cn52xx cn61xx; - struct cvmx_ciu_pp_rst_cn63xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_6_63:58; - uint64_t rst:5; - uint64_t rst0:1; -#else - uint64_t rst0:1; - uint64_t rst:5; - uint64_t reserved_6_63:58; -#endif - } cn63xx; - struct cvmx_ciu_pp_rst_cn63xx cn63xxp1; - struct cvmx_ciu_pp_rst_cn66xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_10_63:54; - uint64_t rst:9; - uint64_t rst0:1; -#else - uint64_t rst0:1; - uint64_t rst:9; - uint64_t reserved_10_63:54; -#endif - } cn66xx; - struct cvmx_ciu_pp_rst_s cn68xx; - struct cvmx_ciu_pp_rst_s cn68xxp1; - struct cvmx_ciu_pp_rst_cn52xx cnf71xx; -}; - -union cvmx_ciu_qlm0 { - uint64_t u64; - struct cvmx_ciu_qlm0_s { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t 
g2bypass:1; - uint64_t reserved_53_62:10; - uint64_t g2deemph:5; - uint64_t reserved_45_47:3; - uint64_t g2margin:5; - uint64_t reserved_32_39:8; - uint64_t txbypass:1; - uint64_t reserved_21_30:10; - uint64_t txdeemph:5; - uint64_t reserved_13_15:3; - uint64_t txmargin:5; - uint64_t reserved_4_7:4; - uint64_t lane_en:4; -#else - uint64_t lane_en:4; - uint64_t reserved_4_7:4; - uint64_t txmargin:5; - uint64_t reserved_13_15:3; - uint64_t txdeemph:5; - uint64_t reserved_21_30:10; - uint64_t txbypass:1; - uint64_t reserved_32_39:8; - uint64_t g2margin:5; - uint64_t reserved_45_47:3; - uint64_t g2deemph:5; - uint64_t reserved_53_62:10; - uint64_t g2bypass:1; -#endif - } s; - struct cvmx_ciu_qlm0_s cn61xx; - struct cvmx_ciu_qlm0_s cn63xx; - struct cvmx_ciu_qlm0_cn63xxp1 { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_32_63:32; - uint64_t txbypass:1; - uint64_t reserved_20_30:11; - uint64_t txdeemph:4; - uint64_t reserved_13_15:3; - uint64_t txmargin:5; - uint64_t reserved_4_7:4; - uint64_t lane_en:4; -#else - uint64_t lane_en:4; - uint64_t reserved_4_7:4; - uint64_t txmargin:5; - uint64_t reserved_13_15:3; - uint64_t txdeemph:4; - uint64_t reserved_20_30:11; - uint64_t txbypass:1; - uint64_t reserved_32_63:32; -#endif - } cn63xxp1; - struct cvmx_ciu_qlm0_s cn66xx; - struct cvmx_ciu_qlm0_cn68xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_32_63:32; - uint64_t txbypass:1; - uint64_t reserved_21_30:10; - uint64_t txdeemph:5; - uint64_t reserved_13_15:3; - uint64_t txmargin:5; - uint64_t reserved_4_7:4; - uint64_t lane_en:4; -#else - uint64_t lane_en:4; - uint64_t reserved_4_7:4; - uint64_t txmargin:5; - uint64_t reserved_13_15:3; - uint64_t txdeemph:5; - uint64_t reserved_21_30:10; - uint64_t txbypass:1; - uint64_t reserved_32_63:32; -#endif - } cn68xx; - struct cvmx_ciu_qlm0_cn68xx cn68xxp1; - struct cvmx_ciu_qlm0_s cnf71xx; -}; - -union cvmx_ciu_qlm1 { - uint64_t u64; - struct cvmx_ciu_qlm1_s { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t g2bypass:1; - uint64_t reserved_53_62:10; - uint64_t g2deemph:5; - uint64_t reserved_45_47:3; - uint64_t g2margin:5; - uint64_t reserved_32_39:8; - uint64_t txbypass:1; - uint64_t reserved_21_30:10; - uint64_t txdeemph:5; - uint64_t reserved_13_15:3; - uint64_t txmargin:5; - uint64_t reserved_4_7:4; - uint64_t lane_en:4; -#else - uint64_t lane_en:4; - uint64_t reserved_4_7:4; - uint64_t txmargin:5; - uint64_t reserved_13_15:3; - uint64_t txdeemph:5; - uint64_t reserved_21_30:10; - uint64_t txbypass:1; - uint64_t reserved_32_39:8; - uint64_t g2margin:5; - uint64_t reserved_45_47:3; - uint64_t g2deemph:5; - uint64_t reserved_53_62:10; - uint64_t g2bypass:1; -#endif - } s; - struct cvmx_ciu_qlm1_s cn61xx; - struct cvmx_ciu_qlm1_s cn63xx; - struct cvmx_ciu_qlm1_cn63xxp1 { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_32_63:32; - uint64_t txbypass:1; - uint64_t reserved_20_30:11; - uint64_t txdeemph:4; - uint64_t reserved_13_15:3; - uint64_t txmargin:5; - uint64_t reserved_4_7:4; - uint64_t lane_en:4; -#else - uint64_t lane_en:4; - uint64_t reserved_4_7:4; - uint64_t txmargin:5; - uint64_t reserved_13_15:3; - uint64_t txdeemph:4; - uint64_t reserved_20_30:11; - uint64_t txbypass:1; - uint64_t reserved_32_63:32; -#endif - } cn63xxp1; - struct cvmx_ciu_qlm1_s cn66xx; - struct cvmx_ciu_qlm1_s cn68xx; - struct cvmx_ciu_qlm1_s cn68xxp1; - struct cvmx_ciu_qlm1_s cnf71xx; -}; - -union cvmx_ciu_qlm2 { - uint64_t u64; - struct cvmx_ciu_qlm2_s { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t g2bypass:1; - uint64_t reserved_53_62:10; - uint64_t g2deemph:5; - uint64_t 
reserved_45_47:3; - uint64_t g2margin:5; - uint64_t reserved_32_39:8; - uint64_t txbypass:1; - uint64_t reserved_21_30:10; - uint64_t txdeemph:5; - uint64_t reserved_13_15:3; - uint64_t txmargin:5; - uint64_t reserved_4_7:4; - uint64_t lane_en:4; -#else - uint64_t lane_en:4; - uint64_t reserved_4_7:4; - uint64_t txmargin:5; - uint64_t reserved_13_15:3; - uint64_t txdeemph:5; - uint64_t reserved_21_30:10; - uint64_t txbypass:1; - uint64_t reserved_32_39:8; - uint64_t g2margin:5; - uint64_t reserved_45_47:3; - uint64_t g2deemph:5; - uint64_t reserved_53_62:10; - uint64_t g2bypass:1; -#endif - } s; - struct cvmx_ciu_qlm2_cn61xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_32_63:32; - uint64_t txbypass:1; - uint64_t reserved_21_30:10; - uint64_t txdeemph:5; - uint64_t reserved_13_15:3; - uint64_t txmargin:5; - uint64_t reserved_4_7:4; - uint64_t lane_en:4; -#else - uint64_t lane_en:4; - uint64_t reserved_4_7:4; - uint64_t txmargin:5; - uint64_t reserved_13_15:3; - uint64_t txdeemph:5; - uint64_t reserved_21_30:10; - uint64_t txbypass:1; - uint64_t reserved_32_63:32; -#endif - } cn61xx; - struct cvmx_ciu_qlm2_cn61xx cn63xx; - struct cvmx_ciu_qlm2_cn63xxp1 { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_32_63:32; - uint64_t txbypass:1; - uint64_t reserved_20_30:11; - uint64_t txdeemph:4; - uint64_t reserved_13_15:3; - uint64_t txmargin:5; - uint64_t reserved_4_7:4; - uint64_t lane_en:4; -#else - uint64_t lane_en:4; - uint64_t reserved_4_7:4; - uint64_t txmargin:5; - uint64_t reserved_13_15:3; - uint64_t txdeemph:4; - uint64_t reserved_20_30:11; - uint64_t txbypass:1; - uint64_t reserved_32_63:32; -#endif - } cn63xxp1; - struct cvmx_ciu_qlm2_cn61xx cn66xx; - struct cvmx_ciu_qlm2_s cn68xx; - struct cvmx_ciu_qlm2_s cn68xxp1; - struct cvmx_ciu_qlm2_cn61xx cnf71xx; -}; -union cvmx_ciu_qlm3 { +union cvmx_ciu_qlm { uint64_t u64; - struct cvmx_ciu_qlm3_s { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t g2bypass:1; - uint64_t reserved_53_62:10; - uint64_t g2deemph:5; - uint64_t reserved_45_47:3; - uint64_t g2margin:5; - uint64_t reserved_32_39:8; - uint64_t txbypass:1; - uint64_t reserved_21_30:10; - uint64_t txdeemph:5; - uint64_t reserved_13_15:3; - uint64_t txmargin:5; - uint64_t reserved_4_7:4; - uint64_t lane_en:4; -#else - uint64_t lane_en:4; - uint64_t reserved_4_7:4; - uint64_t txmargin:5; - uint64_t reserved_13_15:3; - uint64_t txdeemph:5; - uint64_t reserved_21_30:10; - uint64_t txbypass:1; - uint64_t reserved_32_39:8; - uint64_t g2margin:5; - uint64_t reserved_45_47:3; - uint64_t g2deemph:5; - uint64_t reserved_53_62:10; - uint64_t g2bypass:1; -#endif + struct cvmx_ciu_qlm_s { + __BITFIELD_FIELD(uint64_t g2bypass:1, + __BITFIELD_FIELD(uint64_t reserved_53_62:10, + __BITFIELD_FIELD(uint64_t g2deemph:5, + __BITFIELD_FIELD(uint64_t reserved_45_47:3, + __BITFIELD_FIELD(uint64_t g2margin:5, + __BITFIELD_FIELD(uint64_t reserved_32_39:8, + __BITFIELD_FIELD(uint64_t txbypass:1, + __BITFIELD_FIELD(uint64_t reserved_21_30:10, + __BITFIELD_FIELD(uint64_t txdeemph:5, + __BITFIELD_FIELD(uint64_t reserved_13_15:3, + __BITFIELD_FIELD(uint64_t txmargin:5, + __BITFIELD_FIELD(uint64_t reserved_4_7:4, + __BITFIELD_FIELD(uint64_t lane_en:4, + ;))))))))))))) } s; - struct cvmx_ciu_qlm3_s cn68xx; - struct cvmx_ciu_qlm3_s cn68xxp1; -}; - -union cvmx_ciu_qlm4 { - uint64_t u64; - struct cvmx_ciu_qlm4_s { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t g2bypass:1; - uint64_t reserved_53_62:10; - uint64_t g2deemph:5; - uint64_t reserved_45_47:3; - uint64_t g2margin:5; - uint64_t reserved_32_39:8; - uint64_t txbypass:1; - 
uint64_t reserved_21_30:10; - uint64_t txdeemph:5; - uint64_t reserved_13_15:3; - uint64_t txmargin:5; - uint64_t reserved_4_7:4; - uint64_t lane_en:4; -#else - uint64_t lane_en:4; - uint64_t reserved_4_7:4; - uint64_t txmargin:5; - uint64_t reserved_13_15:3; - uint64_t txdeemph:5; - uint64_t reserved_21_30:10; - uint64_t txbypass:1; - uint64_t reserved_32_39:8; - uint64_t g2margin:5; - uint64_t reserved_45_47:3; - uint64_t g2deemph:5; - uint64_t reserved_53_62:10; - uint64_t g2bypass:1; -#endif - } s; - struct cvmx_ciu_qlm4_s cn68xx; - struct cvmx_ciu_qlm4_s cn68xxp1; -}; - -union cvmx_ciu_qlm_dcok { - uint64_t u64; - struct cvmx_ciu_qlm_dcok_s { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_4_63:60; - uint64_t qlm_dcok:4; -#else - uint64_t qlm_dcok:4; - uint64_t reserved_4_63:60; -#endif - } s; - struct cvmx_ciu_qlm_dcok_cn52xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_2_63:62; - uint64_t qlm_dcok:2; -#else - uint64_t qlm_dcok:2; - uint64_t reserved_2_63:62; -#endif - } cn52xx; - struct cvmx_ciu_qlm_dcok_cn52xx cn52xxp1; - struct cvmx_ciu_qlm_dcok_s cn56xx; - struct cvmx_ciu_qlm_dcok_s cn56xxp1; }; union cvmx_ciu_qlm_jtgc { uint64_t u64; struct cvmx_ciu_qlm_jtgc_s { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_17_63:47; - uint64_t bypass_ext:1; - uint64_t reserved_11_15:5; - uint64_t clk_div:3; - uint64_t reserved_7_7:1; - uint64_t mux_sel:3; - uint64_t bypass:4; -#else - uint64_t bypass:4; - uint64_t mux_sel:3; - uint64_t reserved_7_7:1; - uint64_t clk_div:3; - uint64_t reserved_11_15:5; - uint64_t bypass_ext:1; - uint64_t reserved_17_63:47; -#endif + __BITFIELD_FIELD(uint64_t reserved_17_63:47, + __BITFIELD_FIELD(uint64_t bypass_ext:1, + __BITFIELD_FIELD(uint64_t reserved_11_15:5, + __BITFIELD_FIELD(uint64_t clk_div:3, + __BITFIELD_FIELD(uint64_t reserved_7_7:1, + __BITFIELD_FIELD(uint64_t mux_sel:3, + __BITFIELD_FIELD(uint64_t bypass:4, + ;))))))) } s; - struct cvmx_ciu_qlm_jtgc_cn52xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_11_63:53; - uint64_t clk_div:3; - uint64_t reserved_5_7:3; - uint64_t mux_sel:1; - uint64_t reserved_2_3:2; - uint64_t bypass:2; -#else - uint64_t bypass:2; - uint64_t reserved_2_3:2; - uint64_t mux_sel:1; - uint64_t reserved_5_7:3; - uint64_t clk_div:3; - uint64_t reserved_11_63:53; -#endif - } cn52xx; - struct cvmx_ciu_qlm_jtgc_cn52xx cn52xxp1; - struct cvmx_ciu_qlm_jtgc_cn56xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_11_63:53; - uint64_t clk_div:3; - uint64_t reserved_6_7:2; - uint64_t mux_sel:2; - uint64_t bypass:4; -#else - uint64_t bypass:4; - uint64_t mux_sel:2; - uint64_t reserved_6_7:2; - uint64_t clk_div:3; - uint64_t reserved_11_63:53; -#endif - } cn56xx; - struct cvmx_ciu_qlm_jtgc_cn56xx cn56xxp1; - struct cvmx_ciu_qlm_jtgc_cn61xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_11_63:53; - uint64_t clk_div:3; - uint64_t reserved_6_7:2; - uint64_t mux_sel:2; - uint64_t reserved_3_3:1; - uint64_t bypass:3; -#else - uint64_t bypass:3; - uint64_t reserved_3_3:1; - uint64_t mux_sel:2; - uint64_t reserved_6_7:2; - uint64_t clk_div:3; - uint64_t reserved_11_63:53; -#endif - } cn61xx; - struct cvmx_ciu_qlm_jtgc_cn61xx cn63xx; - struct cvmx_ciu_qlm_jtgc_cn61xx cn63xxp1; - struct cvmx_ciu_qlm_jtgc_cn61xx cn66xx; - struct cvmx_ciu_qlm_jtgc_s cn68xx; - struct cvmx_ciu_qlm_jtgc_s cn68xxp1; - struct cvmx_ciu_qlm_jtgc_cn61xx cnf71xx; }; union cvmx_ciu_qlm_jtgd { uint64_t u64; struct cvmx_ciu_qlm_jtgd_s { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t capture:1; - uint64_t shift:1; - uint64_t update:1; - uint64_t 
reserved_45_60:16; - uint64_t select:5; - uint64_t reserved_37_39:3; - uint64_t shft_cnt:5; - uint64_t shft_reg:32; -#else - uint64_t shft_reg:32; - uint64_t shft_cnt:5; - uint64_t reserved_37_39:3; - uint64_t select:5; - uint64_t reserved_45_60:16; - uint64_t update:1; - uint64_t shift:1; - uint64_t capture:1; -#endif - } s; - struct cvmx_ciu_qlm_jtgd_cn52xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t capture:1; - uint64_t shift:1; - uint64_t update:1; - uint64_t reserved_42_60:19; - uint64_t select:2; - uint64_t reserved_37_39:3; - uint64_t shft_cnt:5; - uint64_t shft_reg:32; -#else - uint64_t shft_reg:32; - uint64_t shft_cnt:5; - uint64_t reserved_37_39:3; - uint64_t select:2; - uint64_t reserved_42_60:19; - uint64_t update:1; - uint64_t shift:1; - uint64_t capture:1; -#endif - } cn52xx; - struct cvmx_ciu_qlm_jtgd_cn52xx cn52xxp1; - struct cvmx_ciu_qlm_jtgd_cn56xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t capture:1; - uint64_t shift:1; - uint64_t update:1; - uint64_t reserved_44_60:17; - uint64_t select:4; - uint64_t reserved_37_39:3; - uint64_t shft_cnt:5; - uint64_t shft_reg:32; -#else - uint64_t shft_reg:32; - uint64_t shft_cnt:5; - uint64_t reserved_37_39:3; - uint64_t select:4; - uint64_t reserved_44_60:17; - uint64_t update:1; - uint64_t shift:1; - uint64_t capture:1; -#endif - } cn56xx; - struct cvmx_ciu_qlm_jtgd_cn56xxp1 { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t capture:1; - uint64_t shift:1; - uint64_t update:1; - uint64_t reserved_37_60:24; - uint64_t shft_cnt:5; - uint64_t shft_reg:32; -#else - uint64_t shft_reg:32; - uint64_t shft_cnt:5; - uint64_t reserved_37_60:24; - uint64_t update:1; - uint64_t shift:1; - uint64_t capture:1; -#endif - } cn56xxp1; - struct cvmx_ciu_qlm_jtgd_cn61xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t capture:1; - uint64_t shift:1; - uint64_t update:1; - uint64_t reserved_43_60:18; - uint64_t select:3; - uint64_t reserved_37_39:3; - uint64_t shft_cnt:5; - uint64_t shft_reg:32; -#else - uint64_t shft_reg:32; - uint64_t shft_cnt:5; - uint64_t reserved_37_39:3; - uint64_t select:3; - uint64_t reserved_43_60:18; - uint64_t update:1; - uint64_t shift:1; - uint64_t capture:1; -#endif - } cn61xx; - struct cvmx_ciu_qlm_jtgd_cn61xx cn63xx; - struct cvmx_ciu_qlm_jtgd_cn61xx cn63xxp1; - struct cvmx_ciu_qlm_jtgd_cn61xx cn66xx; - struct cvmx_ciu_qlm_jtgd_s cn68xx; - struct cvmx_ciu_qlm_jtgd_s cn68xxp1; - struct cvmx_ciu_qlm_jtgd_cn61xx cnf71xx; -}; - -union cvmx_ciu_soft_bist { - uint64_t u64; - struct cvmx_ciu_soft_bist_s { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_1_63:63; - uint64_t soft_bist:1; -#else - uint64_t soft_bist:1; - uint64_t reserved_1_63:63; -#endif + __BITFIELD_FIELD(uint64_t capture:1, + __BITFIELD_FIELD(uint64_t shift:1, + __BITFIELD_FIELD(uint64_t update:1, + __BITFIELD_FIELD(uint64_t reserved_45_60:16, + __BITFIELD_FIELD(uint64_t select:5, + __BITFIELD_FIELD(uint64_t reserved_37_39:3, + __BITFIELD_FIELD(uint64_t shft_cnt:5, + __BITFIELD_FIELD(uint64_t shft_reg:32, + ;)))))))) } s; - struct cvmx_ciu_soft_bist_s cn30xx; - struct cvmx_ciu_soft_bist_s cn31xx; - struct cvmx_ciu_soft_bist_s cn38xx; - struct cvmx_ciu_soft_bist_s cn38xxp2; - struct cvmx_ciu_soft_bist_s cn50xx; - struct cvmx_ciu_soft_bist_s cn52xx; - struct cvmx_ciu_soft_bist_s cn52xxp1; - struct cvmx_ciu_soft_bist_s cn56xx; - struct cvmx_ciu_soft_bist_s cn56xxp1; - struct cvmx_ciu_soft_bist_s cn58xx; - struct cvmx_ciu_soft_bist_s cn58xxp1; - struct cvmx_ciu_soft_bist_s cn61xx; - struct cvmx_ciu_soft_bist_s cn63xx; - struct cvmx_ciu_soft_bist_s cn63xxp1; - struct 
cvmx_ciu_soft_bist_s cn66xx; - struct cvmx_ciu_soft_bist_s cn68xx; - struct cvmx_ciu_soft_bist_s cn68xxp1; - struct cvmx_ciu_soft_bist_s cnf71xx; }; union cvmx_ciu_soft_prst { uint64_t u64; struct cvmx_ciu_soft_prst_s { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_3_63:61; - uint64_t host64:1; - uint64_t npi:1; - uint64_t soft_prst:1; -#else - uint64_t soft_prst:1; - uint64_t npi:1; - uint64_t host64:1; - uint64_t reserved_3_63:61; -#endif - } s; - struct cvmx_ciu_soft_prst_s cn30xx; - struct cvmx_ciu_soft_prst_s cn31xx; - struct cvmx_ciu_soft_prst_s cn38xx; - struct cvmx_ciu_soft_prst_s cn38xxp2; - struct cvmx_ciu_soft_prst_s cn50xx; - struct cvmx_ciu_soft_prst_cn52xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_1_63:63; - uint64_t soft_prst:1; -#else - uint64_t soft_prst:1; - uint64_t reserved_1_63:63; -#endif - } cn52xx; - struct cvmx_ciu_soft_prst_cn52xx cn52xxp1; - struct cvmx_ciu_soft_prst_cn52xx cn56xx; - struct cvmx_ciu_soft_prst_cn52xx cn56xxp1; - struct cvmx_ciu_soft_prst_s cn58xx; - struct cvmx_ciu_soft_prst_s cn58xxp1; - struct cvmx_ciu_soft_prst_cn52xx cn61xx; - struct cvmx_ciu_soft_prst_cn52xx cn63xx; - struct cvmx_ciu_soft_prst_cn52xx cn63xxp1; - struct cvmx_ciu_soft_prst_cn52xx cn66xx; - struct cvmx_ciu_soft_prst_cn52xx cn68xx; - struct cvmx_ciu_soft_prst_cn52xx cn68xxp1; - struct cvmx_ciu_soft_prst_cn52xx cnf71xx; -}; - -union cvmx_ciu_soft_prst1 { - uint64_t u64; - struct cvmx_ciu_soft_prst1_s { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_1_63:63; - uint64_t soft_prst:1; -#else - uint64_t soft_prst:1; - uint64_t reserved_1_63:63; -#endif - } s; - struct cvmx_ciu_soft_prst1_s cn52xx; - struct cvmx_ciu_soft_prst1_s cn52xxp1; - struct cvmx_ciu_soft_prst1_s cn56xx; - struct cvmx_ciu_soft_prst1_s cn56xxp1; - struct cvmx_ciu_soft_prst1_s cn61xx; - struct cvmx_ciu_soft_prst1_s cn63xx; - struct cvmx_ciu_soft_prst1_s cn63xxp1; - struct cvmx_ciu_soft_prst1_s cn66xx; - struct cvmx_ciu_soft_prst1_s cn68xx; - struct cvmx_ciu_soft_prst1_s cn68xxp1; - struct cvmx_ciu_soft_prst1_s cnf71xx; -}; - -union cvmx_ciu_soft_prst2 { - uint64_t u64; - struct cvmx_ciu_soft_prst2_s { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_1_63:63; - uint64_t soft_prst:1; -#else - uint64_t soft_prst:1; - uint64_t reserved_1_63:63; -#endif - } s; - struct cvmx_ciu_soft_prst2_s cn66xx; -}; - -union cvmx_ciu_soft_prst3 { - uint64_t u64; - struct cvmx_ciu_soft_prst3_s { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_1_63:63; - uint64_t soft_prst:1; -#else - uint64_t soft_prst:1; - uint64_t reserved_1_63:63; -#endif - } s; - struct cvmx_ciu_soft_prst3_s cn66xx; -}; - -union cvmx_ciu_soft_rst { - uint64_t u64; - struct cvmx_ciu_soft_rst_s { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_1_63:63; - uint64_t soft_rst:1; -#else - uint64_t soft_rst:1; - uint64_t reserved_1_63:63; -#endif - } s; - struct cvmx_ciu_soft_rst_s cn30xx; - struct cvmx_ciu_soft_rst_s cn31xx; - struct cvmx_ciu_soft_rst_s cn38xx; - struct cvmx_ciu_soft_rst_s cn38xxp2; - struct cvmx_ciu_soft_rst_s cn50xx; - struct cvmx_ciu_soft_rst_s cn52xx; - struct cvmx_ciu_soft_rst_s cn52xxp1; - struct cvmx_ciu_soft_rst_s cn56xx; - struct cvmx_ciu_soft_rst_s cn56xxp1; - struct cvmx_ciu_soft_rst_s cn58xx; - struct cvmx_ciu_soft_rst_s cn58xxp1; - struct cvmx_ciu_soft_rst_s cn61xx; - struct cvmx_ciu_soft_rst_s cn63xx; - struct cvmx_ciu_soft_rst_s cn63xxp1; - struct cvmx_ciu_soft_rst_s cn66xx; - struct cvmx_ciu_soft_rst_s cn68xx; - struct cvmx_ciu_soft_rst_s cn68xxp1; - struct cvmx_ciu_soft_rst_s cnf71xx; -}; - -union 
cvmx_ciu_sum1_iox_int { - uint64_t u64; - struct cvmx_ciu_sum1_iox_int_s { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t rst:1; - uint64_t reserved_62_62:1; - uint64_t srio3:1; - uint64_t srio2:1; - uint64_t reserved_57_59:3; - uint64_t dfm:1; - uint64_t reserved_53_55:3; - uint64_t lmc0:1; - uint64_t reserved_51_51:1; - uint64_t srio0:1; - uint64_t pem1:1; - uint64_t pem0:1; - uint64_t ptp:1; - uint64_t agl:1; - uint64_t reserved_41_45:5; - uint64_t dpi_dma:1; - uint64_t reserved_38_39:2; - uint64_t agx1:1; - uint64_t agx0:1; - uint64_t dpi:1; - uint64_t sli:1; - uint64_t usb:1; - uint64_t dfa:1; - uint64_t key:1; - uint64_t rad:1; - uint64_t tim:1; - uint64_t zip:1; - uint64_t pko:1; - uint64_t pip:1; - uint64_t ipd:1; - uint64_t l2c:1; - uint64_t pow:1; - uint64_t fpa:1; - uint64_t iob:1; - uint64_t mio:1; - uint64_t nand:1; - uint64_t mii1:1; - uint64_t reserved_10_17:8; - uint64_t wdog:10; -#else - uint64_t wdog:10; - uint64_t reserved_10_17:8; - uint64_t mii1:1; - uint64_t nand:1; - uint64_t mio:1; - uint64_t iob:1; - uint64_t fpa:1; - uint64_t pow:1; - uint64_t l2c:1; - uint64_t ipd:1; - uint64_t pip:1; - uint64_t pko:1; - uint64_t zip:1; - uint64_t tim:1; - uint64_t rad:1; - uint64_t key:1; - uint64_t dfa:1; - uint64_t usb:1; - uint64_t sli:1; - uint64_t dpi:1; - uint64_t agx0:1; - uint64_t agx1:1; - uint64_t reserved_38_39:2; - uint64_t dpi_dma:1; - uint64_t reserved_41_45:5; - uint64_t agl:1; - uint64_t ptp:1; - uint64_t pem0:1; - uint64_t pem1:1; - uint64_t srio0:1; - uint64_t reserved_51_51:1; - uint64_t lmc0:1; - uint64_t reserved_53_55:3; - uint64_t dfm:1; - uint64_t reserved_57_59:3; - uint64_t srio2:1; - uint64_t srio3:1; - uint64_t reserved_62_62:1; - uint64_t rst:1; -#endif + __BITFIELD_FIELD(uint64_t reserved_3_63:61, + __BITFIELD_FIELD(uint64_t host64:1, + __BITFIELD_FIELD(uint64_t npi:1, + __BITFIELD_FIELD(uint64_t soft_prst:1, + ;)))) } s; - struct cvmx_ciu_sum1_iox_int_cn61xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t rst:1; - uint64_t reserved_53_62:10; - uint64_t lmc0:1; - uint64_t reserved_50_51:2; - uint64_t pem1:1; - uint64_t pem0:1; - uint64_t ptp:1; - uint64_t agl:1; - uint64_t reserved_41_45:5; - uint64_t dpi_dma:1; - uint64_t reserved_38_39:2; - uint64_t agx1:1; - uint64_t agx0:1; - uint64_t dpi:1; - uint64_t sli:1; - uint64_t usb:1; - uint64_t dfa:1; - uint64_t key:1; - uint64_t rad:1; - uint64_t tim:1; - uint64_t zip:1; - uint64_t pko:1; - uint64_t pip:1; - uint64_t ipd:1; - uint64_t l2c:1; - uint64_t pow:1; - uint64_t fpa:1; - uint64_t iob:1; - uint64_t mio:1; - uint64_t nand:1; - uint64_t mii1:1; - uint64_t reserved_4_17:14; - uint64_t wdog:4; -#else - uint64_t wdog:4; - uint64_t reserved_4_17:14; - uint64_t mii1:1; - uint64_t nand:1; - uint64_t mio:1; - uint64_t iob:1; - uint64_t fpa:1; - uint64_t pow:1; - uint64_t l2c:1; - uint64_t ipd:1; - uint64_t pip:1; - uint64_t pko:1; - uint64_t zip:1; - uint64_t tim:1; - uint64_t rad:1; - uint64_t key:1; - uint64_t dfa:1; - uint64_t usb:1; - uint64_t sli:1; - uint64_t dpi:1; - uint64_t agx0:1; - uint64_t agx1:1; - uint64_t reserved_38_39:2; - uint64_t dpi_dma:1; - uint64_t reserved_41_45:5; - uint64_t agl:1; - uint64_t ptp:1; - uint64_t pem0:1; - uint64_t pem1:1; - uint64_t reserved_50_51:2; - uint64_t lmc0:1; - uint64_t reserved_53_62:10; - uint64_t rst:1; -#endif - } cn61xx; - struct cvmx_ciu_sum1_iox_int_cn66xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t rst:1; - uint64_t reserved_62_62:1; - uint64_t srio3:1; - uint64_t srio2:1; - uint64_t reserved_57_59:3; - uint64_t dfm:1; - uint64_t reserved_53_55:3; - uint64_t 
lmc0:1; - uint64_t reserved_51_51:1; - uint64_t srio0:1; - uint64_t pem1:1; - uint64_t pem0:1; - uint64_t ptp:1; - uint64_t agl:1; - uint64_t reserved_38_45:8; - uint64_t agx1:1; - uint64_t agx0:1; - uint64_t dpi:1; - uint64_t sli:1; - uint64_t usb:1; - uint64_t dfa:1; - uint64_t key:1; - uint64_t rad:1; - uint64_t tim:1; - uint64_t zip:1; - uint64_t pko:1; - uint64_t pip:1; - uint64_t ipd:1; - uint64_t l2c:1; - uint64_t pow:1; - uint64_t fpa:1; - uint64_t iob:1; - uint64_t mio:1; - uint64_t nand:1; - uint64_t mii1:1; - uint64_t reserved_10_17:8; - uint64_t wdog:10; -#else - uint64_t wdog:10; - uint64_t reserved_10_17:8; - uint64_t mii1:1; - uint64_t nand:1; - uint64_t mio:1; - uint64_t iob:1; - uint64_t fpa:1; - uint64_t pow:1; - uint64_t l2c:1; - uint64_t ipd:1; - uint64_t pip:1; - uint64_t pko:1; - uint64_t zip:1; - uint64_t tim:1; - uint64_t rad:1; - uint64_t key:1; - uint64_t dfa:1; - uint64_t usb:1; - uint64_t sli:1; - uint64_t dpi:1; - uint64_t agx0:1; - uint64_t agx1:1; - uint64_t reserved_38_45:8; - uint64_t agl:1; - uint64_t ptp:1; - uint64_t pem0:1; - uint64_t pem1:1; - uint64_t srio0:1; - uint64_t reserved_51_51:1; - uint64_t lmc0:1; - uint64_t reserved_53_55:3; - uint64_t dfm:1; - uint64_t reserved_57_59:3; - uint64_t srio2:1; - uint64_t srio3:1; - uint64_t reserved_62_62:1; - uint64_t rst:1; -#endif - } cn66xx; - struct cvmx_ciu_sum1_iox_int_cnf71xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t rst:1; - uint64_t reserved_53_62:10; - uint64_t lmc0:1; - uint64_t reserved_50_51:2; - uint64_t pem1:1; - uint64_t pem0:1; - uint64_t ptp:1; - uint64_t reserved_41_46:6; - uint64_t dpi_dma:1; - uint64_t reserved_37_39:3; - uint64_t agx0:1; - uint64_t dpi:1; - uint64_t sli:1; - uint64_t usb:1; - uint64_t reserved_32_32:1; - uint64_t key:1; - uint64_t rad:1; - uint64_t tim:1; - uint64_t reserved_28_28:1; - uint64_t pko:1; - uint64_t pip:1; - uint64_t ipd:1; - uint64_t l2c:1; - uint64_t pow:1; - uint64_t fpa:1; - uint64_t iob:1; - uint64_t mio:1; - uint64_t nand:1; - uint64_t reserved_4_18:15; - uint64_t wdog:4; -#else - uint64_t wdog:4; - uint64_t reserved_4_18:15; - uint64_t nand:1; - uint64_t mio:1; - uint64_t iob:1; - uint64_t fpa:1; - uint64_t pow:1; - uint64_t l2c:1; - uint64_t ipd:1; - uint64_t pip:1; - uint64_t pko:1; - uint64_t reserved_28_28:1; - uint64_t tim:1; - uint64_t rad:1; - uint64_t key:1; - uint64_t reserved_32_32:1; - uint64_t usb:1; - uint64_t sli:1; - uint64_t dpi:1; - uint64_t agx0:1; - uint64_t reserved_37_39:3; - uint64_t dpi_dma:1; - uint64_t reserved_41_46:6; - uint64_t ptp:1; - uint64_t pem0:1; - uint64_t pem1:1; - uint64_t reserved_50_51:2; - uint64_t lmc0:1; - uint64_t reserved_53_62:10; - uint64_t rst:1; -#endif - } cnf71xx; -}; - -union cvmx_ciu_sum1_ppx_ip2 { - uint64_t u64; - struct cvmx_ciu_sum1_ppx_ip2_s { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t rst:1; - uint64_t reserved_62_62:1; - uint64_t srio3:1; - uint64_t srio2:1; - uint64_t reserved_57_59:3; - uint64_t dfm:1; - uint64_t reserved_53_55:3; - uint64_t lmc0:1; - uint64_t reserved_51_51:1; - uint64_t srio0:1; - uint64_t pem1:1; - uint64_t pem0:1; - uint64_t ptp:1; - uint64_t agl:1; - uint64_t reserved_41_45:5; - uint64_t dpi_dma:1; - uint64_t reserved_38_39:2; - uint64_t agx1:1; - uint64_t agx0:1; - uint64_t dpi:1; - uint64_t sli:1; - uint64_t usb:1; - uint64_t dfa:1; - uint64_t key:1; - uint64_t rad:1; - uint64_t tim:1; - uint64_t zip:1; - uint64_t pko:1; - uint64_t pip:1; - uint64_t ipd:1; - uint64_t l2c:1; - uint64_t pow:1; - uint64_t fpa:1; - uint64_t iob:1; - uint64_t mio:1; - uint64_t nand:1; - 
uint64_t mii1:1; - uint64_t reserved_10_17:8; - uint64_t wdog:10; -#else - uint64_t wdog:10; - uint64_t reserved_10_17:8; - uint64_t mii1:1; - uint64_t nand:1; - uint64_t mio:1; - uint64_t iob:1; - uint64_t fpa:1; - uint64_t pow:1; - uint64_t l2c:1; - uint64_t ipd:1; - uint64_t pip:1; - uint64_t pko:1; - uint64_t zip:1; - uint64_t tim:1; - uint64_t rad:1; - uint64_t key:1; - uint64_t dfa:1; - uint64_t usb:1; - uint64_t sli:1; - uint64_t dpi:1; - uint64_t agx0:1; - uint64_t agx1:1; - uint64_t reserved_38_39:2; - uint64_t dpi_dma:1; - uint64_t reserved_41_45:5; - uint64_t agl:1; - uint64_t ptp:1; - uint64_t pem0:1; - uint64_t pem1:1; - uint64_t srio0:1; - uint64_t reserved_51_51:1; - uint64_t lmc0:1; - uint64_t reserved_53_55:3; - uint64_t dfm:1; - uint64_t reserved_57_59:3; - uint64_t srio2:1; - uint64_t srio3:1; - uint64_t reserved_62_62:1; - uint64_t rst:1; -#endif - } s; - struct cvmx_ciu_sum1_ppx_ip2_cn61xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t rst:1; - uint64_t reserved_53_62:10; - uint64_t lmc0:1; - uint64_t reserved_50_51:2; - uint64_t pem1:1; - uint64_t pem0:1; - uint64_t ptp:1; - uint64_t agl:1; - uint64_t reserved_41_45:5; - uint64_t dpi_dma:1; - uint64_t reserved_38_39:2; - uint64_t agx1:1; - uint64_t agx0:1; - uint64_t dpi:1; - uint64_t sli:1; - uint64_t usb:1; - uint64_t dfa:1; - uint64_t key:1; - uint64_t rad:1; - uint64_t tim:1; - uint64_t zip:1; - uint64_t pko:1; - uint64_t pip:1; - uint64_t ipd:1; - uint64_t l2c:1; - uint64_t pow:1; - uint64_t fpa:1; - uint64_t iob:1; - uint64_t mio:1; - uint64_t nand:1; - uint64_t mii1:1; - uint64_t reserved_4_17:14; - uint64_t wdog:4; -#else - uint64_t wdog:4; - uint64_t reserved_4_17:14; - uint64_t mii1:1; - uint64_t nand:1; - uint64_t mio:1; - uint64_t iob:1; - uint64_t fpa:1; - uint64_t pow:1; - uint64_t l2c:1; - uint64_t ipd:1; - uint64_t pip:1; - uint64_t pko:1; - uint64_t zip:1; - uint64_t tim:1; - uint64_t rad:1; - uint64_t key:1; - uint64_t dfa:1; - uint64_t usb:1; - uint64_t sli:1; - uint64_t dpi:1; - uint64_t agx0:1; - uint64_t agx1:1; - uint64_t reserved_38_39:2; - uint64_t dpi_dma:1; - uint64_t reserved_41_45:5; - uint64_t agl:1; - uint64_t ptp:1; - uint64_t pem0:1; - uint64_t pem1:1; - uint64_t reserved_50_51:2; - uint64_t lmc0:1; - uint64_t reserved_53_62:10; - uint64_t rst:1; -#endif - } cn61xx; - struct cvmx_ciu_sum1_ppx_ip2_cn66xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t rst:1; - uint64_t reserved_62_62:1; - uint64_t srio3:1; - uint64_t srio2:1; - uint64_t reserved_57_59:3; - uint64_t dfm:1; - uint64_t reserved_53_55:3; - uint64_t lmc0:1; - uint64_t reserved_51_51:1; - uint64_t srio0:1; - uint64_t pem1:1; - uint64_t pem0:1; - uint64_t ptp:1; - uint64_t agl:1; - uint64_t reserved_38_45:8; - uint64_t agx1:1; - uint64_t agx0:1; - uint64_t dpi:1; - uint64_t sli:1; - uint64_t usb:1; - uint64_t dfa:1; - uint64_t key:1; - uint64_t rad:1; - uint64_t tim:1; - uint64_t zip:1; - uint64_t pko:1; - uint64_t pip:1; - uint64_t ipd:1; - uint64_t l2c:1; - uint64_t pow:1; - uint64_t fpa:1; - uint64_t iob:1; - uint64_t mio:1; - uint64_t nand:1; - uint64_t mii1:1; - uint64_t reserved_10_17:8; - uint64_t wdog:10; -#else - uint64_t wdog:10; - uint64_t reserved_10_17:8; - uint64_t mii1:1; - uint64_t nand:1; - uint64_t mio:1; - uint64_t iob:1; - uint64_t fpa:1; - uint64_t pow:1; - uint64_t l2c:1; - uint64_t ipd:1; - uint64_t pip:1; - uint64_t pko:1; - uint64_t zip:1; - uint64_t tim:1; - uint64_t rad:1; - uint64_t key:1; - uint64_t dfa:1; - uint64_t usb:1; - uint64_t sli:1; - uint64_t dpi:1; - uint64_t agx0:1; - uint64_t agx1:1; - uint64_t 
reserved_38_45:8; - uint64_t agl:1; - uint64_t ptp:1; - uint64_t pem0:1; - uint64_t pem1:1; - uint64_t srio0:1; - uint64_t reserved_51_51:1; - uint64_t lmc0:1; - uint64_t reserved_53_55:3; - uint64_t dfm:1; - uint64_t reserved_57_59:3; - uint64_t srio2:1; - uint64_t srio3:1; - uint64_t reserved_62_62:1; - uint64_t rst:1; -#endif - } cn66xx; - struct cvmx_ciu_sum1_ppx_ip2_cnf71xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t rst:1; - uint64_t reserved_53_62:10; - uint64_t lmc0:1; - uint64_t reserved_50_51:2; - uint64_t pem1:1; - uint64_t pem0:1; - uint64_t ptp:1; - uint64_t reserved_41_46:6; - uint64_t dpi_dma:1; - uint64_t reserved_37_39:3; - uint64_t agx0:1; - uint64_t dpi:1; - uint64_t sli:1; - uint64_t usb:1; - uint64_t reserved_32_32:1; - uint64_t key:1; - uint64_t rad:1; - uint64_t tim:1; - uint64_t reserved_28_28:1; - uint64_t pko:1; - uint64_t pip:1; - uint64_t ipd:1; - uint64_t l2c:1; - uint64_t pow:1; - uint64_t fpa:1; - uint64_t iob:1; - uint64_t mio:1; - uint64_t nand:1; - uint64_t reserved_4_18:15; - uint64_t wdog:4; -#else - uint64_t wdog:4; - uint64_t reserved_4_18:15; - uint64_t nand:1; - uint64_t mio:1; - uint64_t iob:1; - uint64_t fpa:1; - uint64_t pow:1; - uint64_t l2c:1; - uint64_t ipd:1; - uint64_t pip:1; - uint64_t pko:1; - uint64_t reserved_28_28:1; - uint64_t tim:1; - uint64_t rad:1; - uint64_t key:1; - uint64_t reserved_32_32:1; - uint64_t usb:1; - uint64_t sli:1; - uint64_t dpi:1; - uint64_t agx0:1; - uint64_t reserved_37_39:3; - uint64_t dpi_dma:1; - uint64_t reserved_41_46:6; - uint64_t ptp:1; - uint64_t pem0:1; - uint64_t pem1:1; - uint64_t reserved_50_51:2; - uint64_t lmc0:1; - uint64_t reserved_53_62:10; - uint64_t rst:1; -#endif - } cnf71xx; -}; - -union cvmx_ciu_sum1_ppx_ip3 { - uint64_t u64; - struct cvmx_ciu_sum1_ppx_ip3_s { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t rst:1; - uint64_t reserved_62_62:1; - uint64_t srio3:1; - uint64_t srio2:1; - uint64_t reserved_57_59:3; - uint64_t dfm:1; - uint64_t reserved_53_55:3; - uint64_t lmc0:1; - uint64_t reserved_51_51:1; - uint64_t srio0:1; - uint64_t pem1:1; - uint64_t pem0:1; - uint64_t ptp:1; - uint64_t agl:1; - uint64_t reserved_41_45:5; - uint64_t dpi_dma:1; - uint64_t reserved_38_39:2; - uint64_t agx1:1; - uint64_t agx0:1; - uint64_t dpi:1; - uint64_t sli:1; - uint64_t usb:1; - uint64_t dfa:1; - uint64_t key:1; - uint64_t rad:1; - uint64_t tim:1; - uint64_t zip:1; - uint64_t pko:1; - uint64_t pip:1; - uint64_t ipd:1; - uint64_t l2c:1; - uint64_t pow:1; - uint64_t fpa:1; - uint64_t iob:1; - uint64_t mio:1; - uint64_t nand:1; - uint64_t mii1:1; - uint64_t reserved_10_17:8; - uint64_t wdog:10; -#else - uint64_t wdog:10; - uint64_t reserved_10_17:8; - uint64_t mii1:1; - uint64_t nand:1; - uint64_t mio:1; - uint64_t iob:1; - uint64_t fpa:1; - uint64_t pow:1; - uint64_t l2c:1; - uint64_t ipd:1; - uint64_t pip:1; - uint64_t pko:1; - uint64_t zip:1; - uint64_t tim:1; - uint64_t rad:1; - uint64_t key:1; - uint64_t dfa:1; - uint64_t usb:1; - uint64_t sli:1; - uint64_t dpi:1; - uint64_t agx0:1; - uint64_t agx1:1; - uint64_t reserved_38_39:2; - uint64_t dpi_dma:1; - uint64_t reserved_41_45:5; - uint64_t agl:1; - uint64_t ptp:1; - uint64_t pem0:1; - uint64_t pem1:1; - uint64_t srio0:1; - uint64_t reserved_51_51:1; - uint64_t lmc0:1; - uint64_t reserved_53_55:3; - uint64_t dfm:1; - uint64_t reserved_57_59:3; - uint64_t srio2:1; - uint64_t srio3:1; - uint64_t reserved_62_62:1; - uint64_t rst:1; -#endif - } s; - struct cvmx_ciu_sum1_ppx_ip3_cn61xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t rst:1; - uint64_t reserved_53_62:10; - 
uint64_t lmc0:1; - uint64_t reserved_50_51:2; - uint64_t pem1:1; - uint64_t pem0:1; - uint64_t ptp:1; - uint64_t agl:1; - uint64_t reserved_41_45:5; - uint64_t dpi_dma:1; - uint64_t reserved_38_39:2; - uint64_t agx1:1; - uint64_t agx0:1; - uint64_t dpi:1; - uint64_t sli:1; - uint64_t usb:1; - uint64_t dfa:1; - uint64_t key:1; - uint64_t rad:1; - uint64_t tim:1; - uint64_t zip:1; - uint64_t pko:1; - uint64_t pip:1; - uint64_t ipd:1; - uint64_t l2c:1; - uint64_t pow:1; - uint64_t fpa:1; - uint64_t iob:1; - uint64_t mio:1; - uint64_t nand:1; - uint64_t mii1:1; - uint64_t reserved_4_17:14; - uint64_t wdog:4; -#else - uint64_t wdog:4; - uint64_t reserved_4_17:14; - uint64_t mii1:1; - uint64_t nand:1; - uint64_t mio:1; - uint64_t iob:1; - uint64_t fpa:1; - uint64_t pow:1; - uint64_t l2c:1; - uint64_t ipd:1; - uint64_t pip:1; - uint64_t pko:1; - uint64_t zip:1; - uint64_t tim:1; - uint64_t rad:1; - uint64_t key:1; - uint64_t dfa:1; - uint64_t usb:1; - uint64_t sli:1; - uint64_t dpi:1; - uint64_t agx0:1; - uint64_t agx1:1; - uint64_t reserved_38_39:2; - uint64_t dpi_dma:1; - uint64_t reserved_41_45:5; - uint64_t agl:1; - uint64_t ptp:1; - uint64_t pem0:1; - uint64_t pem1:1; - uint64_t reserved_50_51:2; - uint64_t lmc0:1; - uint64_t reserved_53_62:10; - uint64_t rst:1; -#endif - } cn61xx; - struct cvmx_ciu_sum1_ppx_ip3_cn66xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t rst:1; - uint64_t reserved_62_62:1; - uint64_t srio3:1; - uint64_t srio2:1; - uint64_t reserved_57_59:3; - uint64_t dfm:1; - uint64_t reserved_53_55:3; - uint64_t lmc0:1; - uint64_t reserved_51_51:1; - uint64_t srio0:1; - uint64_t pem1:1; - uint64_t pem0:1; - uint64_t ptp:1; - uint64_t agl:1; - uint64_t reserved_38_45:8; - uint64_t agx1:1; - uint64_t agx0:1; - uint64_t dpi:1; - uint64_t sli:1; - uint64_t usb:1; - uint64_t dfa:1; - uint64_t key:1; - uint64_t rad:1; - uint64_t tim:1; - uint64_t zip:1; - uint64_t pko:1; - uint64_t pip:1; - uint64_t ipd:1; - uint64_t l2c:1; - uint64_t pow:1; - uint64_t fpa:1; - uint64_t iob:1; - uint64_t mio:1; - uint64_t nand:1; - uint64_t mii1:1; - uint64_t reserved_10_17:8; - uint64_t wdog:10; -#else - uint64_t wdog:10; - uint64_t reserved_10_17:8; - uint64_t mii1:1; - uint64_t nand:1; - uint64_t mio:1; - uint64_t iob:1; - uint64_t fpa:1; - uint64_t pow:1; - uint64_t l2c:1; - uint64_t ipd:1; - uint64_t pip:1; - uint64_t pko:1; - uint64_t zip:1; - uint64_t tim:1; - uint64_t rad:1; - uint64_t key:1; - uint64_t dfa:1; - uint64_t usb:1; - uint64_t sli:1; - uint64_t dpi:1; - uint64_t agx0:1; - uint64_t agx1:1; - uint64_t reserved_38_45:8; - uint64_t agl:1; - uint64_t ptp:1; - uint64_t pem0:1; - uint64_t pem1:1; - uint64_t srio0:1; - uint64_t reserved_51_51:1; - uint64_t lmc0:1; - uint64_t reserved_53_55:3; - uint64_t dfm:1; - uint64_t reserved_57_59:3; - uint64_t srio2:1; - uint64_t srio3:1; - uint64_t reserved_62_62:1; - uint64_t rst:1; -#endif - } cn66xx; - struct cvmx_ciu_sum1_ppx_ip3_cnf71xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t rst:1; - uint64_t reserved_53_62:10; - uint64_t lmc0:1; - uint64_t reserved_50_51:2; - uint64_t pem1:1; - uint64_t pem0:1; - uint64_t ptp:1; - uint64_t reserved_41_46:6; - uint64_t dpi_dma:1; - uint64_t reserved_37_39:3; - uint64_t agx0:1; - uint64_t dpi:1; - uint64_t sli:1; - uint64_t usb:1; - uint64_t reserved_32_32:1; - uint64_t key:1; - uint64_t rad:1; - uint64_t tim:1; - uint64_t reserved_28_28:1; - uint64_t pko:1; - uint64_t pip:1; - uint64_t ipd:1; - uint64_t l2c:1; - uint64_t pow:1; - uint64_t fpa:1; - uint64_t iob:1; - uint64_t mio:1; - uint64_t nand:1; - uint64_t 
reserved_4_18:15; - uint64_t wdog:4; -#else - uint64_t wdog:4; - uint64_t reserved_4_18:15; - uint64_t nand:1; - uint64_t mio:1; - uint64_t iob:1; - uint64_t fpa:1; - uint64_t pow:1; - uint64_t l2c:1; - uint64_t ipd:1; - uint64_t pip:1; - uint64_t pko:1; - uint64_t reserved_28_28:1; - uint64_t tim:1; - uint64_t rad:1; - uint64_t key:1; - uint64_t reserved_32_32:1; - uint64_t usb:1; - uint64_t sli:1; - uint64_t dpi:1; - uint64_t agx0:1; - uint64_t reserved_37_39:3; - uint64_t dpi_dma:1; - uint64_t reserved_41_46:6; - uint64_t ptp:1; - uint64_t pem0:1; - uint64_t pem1:1; - uint64_t reserved_50_51:2; - uint64_t lmc0:1; - uint64_t reserved_53_62:10; - uint64_t rst:1; -#endif - } cnf71xx; -}; - -union cvmx_ciu_sum1_ppx_ip4 { - uint64_t u64; - struct cvmx_ciu_sum1_ppx_ip4_s { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t rst:1; - uint64_t reserved_62_62:1; - uint64_t srio3:1; - uint64_t srio2:1; - uint64_t reserved_57_59:3; - uint64_t dfm:1; - uint64_t reserved_53_55:3; - uint64_t lmc0:1; - uint64_t reserved_51_51:1; - uint64_t srio0:1; - uint64_t pem1:1; - uint64_t pem0:1; - uint64_t ptp:1; - uint64_t agl:1; - uint64_t reserved_41_45:5; - uint64_t dpi_dma:1; - uint64_t reserved_38_39:2; - uint64_t agx1:1; - uint64_t agx0:1; - uint64_t dpi:1; - uint64_t sli:1; - uint64_t usb:1; - uint64_t dfa:1; - uint64_t key:1; - uint64_t rad:1; - uint64_t tim:1; - uint64_t zip:1; - uint64_t pko:1; - uint64_t pip:1; - uint64_t ipd:1; - uint64_t l2c:1; - uint64_t pow:1; - uint64_t fpa:1; - uint64_t iob:1; - uint64_t mio:1; - uint64_t nand:1; - uint64_t mii1:1; - uint64_t reserved_10_17:8; - uint64_t wdog:10; -#else - uint64_t wdog:10; - uint64_t reserved_10_17:8; - uint64_t mii1:1; - uint64_t nand:1; - uint64_t mio:1; - uint64_t iob:1; - uint64_t fpa:1; - uint64_t pow:1; - uint64_t l2c:1; - uint64_t ipd:1; - uint64_t pip:1; - uint64_t pko:1; - uint64_t zip:1; - uint64_t tim:1; - uint64_t rad:1; - uint64_t key:1; - uint64_t dfa:1; - uint64_t usb:1; - uint64_t sli:1; - uint64_t dpi:1; - uint64_t agx0:1; - uint64_t agx1:1; - uint64_t reserved_38_39:2; - uint64_t dpi_dma:1; - uint64_t reserved_41_45:5; - uint64_t agl:1; - uint64_t ptp:1; - uint64_t pem0:1; - uint64_t pem1:1; - uint64_t srio0:1; - uint64_t reserved_51_51:1; - uint64_t lmc0:1; - uint64_t reserved_53_55:3; - uint64_t dfm:1; - uint64_t reserved_57_59:3; - uint64_t srio2:1; - uint64_t srio3:1; - uint64_t reserved_62_62:1; - uint64_t rst:1; -#endif - } s; - struct cvmx_ciu_sum1_ppx_ip4_cn61xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t rst:1; - uint64_t reserved_53_62:10; - uint64_t lmc0:1; - uint64_t reserved_50_51:2; - uint64_t pem1:1; - uint64_t pem0:1; - uint64_t ptp:1; - uint64_t agl:1; - uint64_t reserved_41_45:5; - uint64_t dpi_dma:1; - uint64_t reserved_38_39:2; - uint64_t agx1:1; - uint64_t agx0:1; - uint64_t dpi:1; - uint64_t sli:1; - uint64_t usb:1; - uint64_t dfa:1; - uint64_t key:1; - uint64_t rad:1; - uint64_t tim:1; - uint64_t zip:1; - uint64_t pko:1; - uint64_t pip:1; - uint64_t ipd:1; - uint64_t l2c:1; - uint64_t pow:1; - uint64_t fpa:1; - uint64_t iob:1; - uint64_t mio:1; - uint64_t nand:1; - uint64_t mii1:1; - uint64_t reserved_4_17:14; - uint64_t wdog:4; -#else - uint64_t wdog:4; - uint64_t reserved_4_17:14; - uint64_t mii1:1; - uint64_t nand:1; - uint64_t mio:1; - uint64_t iob:1; - uint64_t fpa:1; - uint64_t pow:1; - uint64_t l2c:1; - uint64_t ipd:1; - uint64_t pip:1; - uint64_t pko:1; - uint64_t zip:1; - uint64_t tim:1; - uint64_t rad:1; - uint64_t key:1; - uint64_t dfa:1; - uint64_t usb:1; - uint64_t sli:1; - uint64_t dpi:1; - uint64_t 
agx0:1; - uint64_t agx1:1; - uint64_t reserved_38_39:2; - uint64_t dpi_dma:1; - uint64_t reserved_41_45:5; - uint64_t agl:1; - uint64_t ptp:1; - uint64_t pem0:1; - uint64_t pem1:1; - uint64_t reserved_50_51:2; - uint64_t lmc0:1; - uint64_t reserved_53_62:10; - uint64_t rst:1; -#endif - } cn61xx; - struct cvmx_ciu_sum1_ppx_ip4_cn66xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t rst:1; - uint64_t reserved_62_62:1; - uint64_t srio3:1; - uint64_t srio2:1; - uint64_t reserved_57_59:3; - uint64_t dfm:1; - uint64_t reserved_53_55:3; - uint64_t lmc0:1; - uint64_t reserved_51_51:1; - uint64_t srio0:1; - uint64_t pem1:1; - uint64_t pem0:1; - uint64_t ptp:1; - uint64_t agl:1; - uint64_t reserved_38_45:8; - uint64_t agx1:1; - uint64_t agx0:1; - uint64_t dpi:1; - uint64_t sli:1; - uint64_t usb:1; - uint64_t dfa:1; - uint64_t key:1; - uint64_t rad:1; - uint64_t tim:1; - uint64_t zip:1; - uint64_t pko:1; - uint64_t pip:1; - uint64_t ipd:1; - uint64_t l2c:1; - uint64_t pow:1; - uint64_t fpa:1; - uint64_t iob:1; - uint64_t mio:1; - uint64_t nand:1; - uint64_t mii1:1; - uint64_t reserved_10_17:8; - uint64_t wdog:10; -#else - uint64_t wdog:10; - uint64_t reserved_10_17:8; - uint64_t mii1:1; - uint64_t nand:1; - uint64_t mio:1; - uint64_t iob:1; - uint64_t fpa:1; - uint64_t pow:1; - uint64_t l2c:1; - uint64_t ipd:1; - uint64_t pip:1; - uint64_t pko:1; - uint64_t zip:1; - uint64_t tim:1; - uint64_t rad:1; - uint64_t key:1; - uint64_t dfa:1; - uint64_t usb:1; - uint64_t sli:1; - uint64_t dpi:1; - uint64_t agx0:1; - uint64_t agx1:1; - uint64_t reserved_38_45:8; - uint64_t agl:1; - uint64_t ptp:1; - uint64_t pem0:1; - uint64_t pem1:1; - uint64_t srio0:1; - uint64_t reserved_51_51:1; - uint64_t lmc0:1; - uint64_t reserved_53_55:3; - uint64_t dfm:1; - uint64_t reserved_57_59:3; - uint64_t srio2:1; - uint64_t srio3:1; - uint64_t reserved_62_62:1; - uint64_t rst:1; -#endif - } cn66xx; - struct cvmx_ciu_sum1_ppx_ip4_cnf71xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t rst:1; - uint64_t reserved_53_62:10; - uint64_t lmc0:1; - uint64_t reserved_50_51:2; - uint64_t pem1:1; - uint64_t pem0:1; - uint64_t ptp:1; - uint64_t reserved_41_46:6; - uint64_t dpi_dma:1; - uint64_t reserved_37_39:3; - uint64_t agx0:1; - uint64_t dpi:1; - uint64_t sli:1; - uint64_t usb:1; - uint64_t reserved_32_32:1; - uint64_t key:1; - uint64_t rad:1; - uint64_t tim:1; - uint64_t reserved_28_28:1; - uint64_t pko:1; - uint64_t pip:1; - uint64_t ipd:1; - uint64_t l2c:1; - uint64_t pow:1; - uint64_t fpa:1; - uint64_t iob:1; - uint64_t mio:1; - uint64_t nand:1; - uint64_t reserved_4_18:15; - uint64_t wdog:4; -#else - uint64_t wdog:4; - uint64_t reserved_4_18:15; - uint64_t nand:1; - uint64_t mio:1; - uint64_t iob:1; - uint64_t fpa:1; - uint64_t pow:1; - uint64_t l2c:1; - uint64_t ipd:1; - uint64_t pip:1; - uint64_t pko:1; - uint64_t reserved_28_28:1; - uint64_t tim:1; - uint64_t rad:1; - uint64_t key:1; - uint64_t reserved_32_32:1; - uint64_t usb:1; - uint64_t sli:1; - uint64_t dpi:1; - uint64_t agx0:1; - uint64_t reserved_37_39:3; - uint64_t dpi_dma:1; - uint64_t reserved_41_46:6; - uint64_t ptp:1; - uint64_t pem0:1; - uint64_t pem1:1; - uint64_t reserved_50_51:2; - uint64_t lmc0:1; - uint64_t reserved_53_62:10; - uint64_t rst:1; -#endif - } cnf71xx; -}; - -union cvmx_ciu_sum2_iox_int { - uint64_t u64; - struct cvmx_ciu_sum2_iox_int_s { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_15_63:49; - uint64_t endor:2; - uint64_t eoi:1; - uint64_t reserved_10_11:2; - uint64_t timer:6; - uint64_t reserved_0_3:4; -#else - uint64_t reserved_0_3:4; - uint64_t 
timer:6; - uint64_t reserved_10_11:2; - uint64_t eoi:1; - uint64_t endor:2; - uint64_t reserved_15_63:49; -#endif - } s; - struct cvmx_ciu_sum2_iox_int_cn61xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_10_63:54; - uint64_t timer:6; - uint64_t reserved_0_3:4; -#else - uint64_t reserved_0_3:4; - uint64_t timer:6; - uint64_t reserved_10_63:54; -#endif - } cn61xx; - struct cvmx_ciu_sum2_iox_int_cn61xx cn66xx; - struct cvmx_ciu_sum2_iox_int_s cnf71xx; -}; - -union cvmx_ciu_sum2_ppx_ip2 { - uint64_t u64; - struct cvmx_ciu_sum2_ppx_ip2_s { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_15_63:49; - uint64_t endor:2; - uint64_t eoi:1; - uint64_t reserved_10_11:2; - uint64_t timer:6; - uint64_t reserved_0_3:4; -#else - uint64_t reserved_0_3:4; - uint64_t timer:6; - uint64_t reserved_10_11:2; - uint64_t eoi:1; - uint64_t endor:2; - uint64_t reserved_15_63:49; -#endif - } s; - struct cvmx_ciu_sum2_ppx_ip2_cn61xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_10_63:54; - uint64_t timer:6; - uint64_t reserved_0_3:4; -#else - uint64_t reserved_0_3:4; - uint64_t timer:6; - uint64_t reserved_10_63:54; -#endif - } cn61xx; - struct cvmx_ciu_sum2_ppx_ip2_cn61xx cn66xx; - struct cvmx_ciu_sum2_ppx_ip2_s cnf71xx; -}; - -union cvmx_ciu_sum2_ppx_ip3 { - uint64_t u64; - struct cvmx_ciu_sum2_ppx_ip3_s { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_15_63:49; - uint64_t endor:2; - uint64_t eoi:1; - uint64_t reserved_10_11:2; - uint64_t timer:6; - uint64_t reserved_0_3:4; -#else - uint64_t reserved_0_3:4; - uint64_t timer:6; - uint64_t reserved_10_11:2; - uint64_t eoi:1; - uint64_t endor:2; - uint64_t reserved_15_63:49; -#endif - } s; - struct cvmx_ciu_sum2_ppx_ip3_cn61xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_10_63:54; - uint64_t timer:6; - uint64_t reserved_0_3:4; -#else - uint64_t reserved_0_3:4; - uint64_t timer:6; - uint64_t reserved_10_63:54; -#endif - } cn61xx; - struct cvmx_ciu_sum2_ppx_ip3_cn61xx cn66xx; - struct cvmx_ciu_sum2_ppx_ip3_s cnf71xx; -}; - -union cvmx_ciu_sum2_ppx_ip4 { - uint64_t u64; - struct cvmx_ciu_sum2_ppx_ip4_s { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_15_63:49; - uint64_t endor:2; - uint64_t eoi:1; - uint64_t reserved_10_11:2; - uint64_t timer:6; - uint64_t reserved_0_3:4; -#else - uint64_t reserved_0_3:4; - uint64_t timer:6; - uint64_t reserved_10_11:2; - uint64_t eoi:1; - uint64_t endor:2; - uint64_t reserved_15_63:49; -#endif - } s; - struct cvmx_ciu_sum2_ppx_ip4_cn61xx { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_10_63:54; - uint64_t timer:6; - uint64_t reserved_0_3:4; -#else - uint64_t reserved_0_3:4; - uint64_t timer:6; - uint64_t reserved_10_63:54; -#endif - } cn61xx; - struct cvmx_ciu_sum2_ppx_ip4_cn61xx cn66xx; - struct cvmx_ciu_sum2_ppx_ip4_s cnf71xx; }; union cvmx_ciu_timx { uint64_t u64; struct cvmx_ciu_timx_s { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_37_63:27; - uint64_t one_shot:1; - uint64_t len:36; -#else - uint64_t len:36; - uint64_t one_shot:1; - uint64_t reserved_37_63:27; -#endif - } s; - struct cvmx_ciu_timx_s cn30xx; - struct cvmx_ciu_timx_s cn31xx; - struct cvmx_ciu_timx_s cn38xx; - struct cvmx_ciu_timx_s cn38xxp2; - struct cvmx_ciu_timx_s cn50xx; - struct cvmx_ciu_timx_s cn52xx; - struct cvmx_ciu_timx_s cn52xxp1; - struct cvmx_ciu_timx_s cn56xx; - struct cvmx_ciu_timx_s cn56xxp1; - struct cvmx_ciu_timx_s cn58xx; - struct cvmx_ciu_timx_s cn58xxp1; - struct cvmx_ciu_timx_s cn61xx; - struct cvmx_ciu_timx_s cn63xx; - struct cvmx_ciu_timx_s cn63xxp1; - struct cvmx_ciu_timx_s cn66xx; - struct cvmx_ciu_timx_s 
cn68xx; - struct cvmx_ciu_timx_s cn68xxp1; - struct cvmx_ciu_timx_s cnf71xx; -}; - -union cvmx_ciu_tim_multi_cast { - uint64_t u64; - struct cvmx_ciu_tim_multi_cast_s { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_1_63:63; - uint64_t en:1; -#else - uint64_t en:1; - uint64_t reserved_1_63:63; -#endif + __BITFIELD_FIELD(uint64_t reserved_37_63:27, + __BITFIELD_FIELD(uint64_t one_shot:1, + __BITFIELD_FIELD(uint64_t len:36, + ;))) } s; - struct cvmx_ciu_tim_multi_cast_s cn61xx; - struct cvmx_ciu_tim_multi_cast_s cn66xx; - struct cvmx_ciu_tim_multi_cast_s cnf71xx; }; union cvmx_ciu_wdogx { uint64_t u64; struct cvmx_ciu_wdogx_s { -#ifdef __BIG_ENDIAN_BITFIELD - uint64_t reserved_46_63:18; - uint64_t gstopen:1; - uint64_t dstop:1; - uint64_t cnt:24; - uint64_t len:16; - uint64_t state:2; - uint64_t mode:2; -#else - uint64_t mode:2; - uint64_t state:2; - uint64_t len:16; - uint64_t cnt:24; - uint64_t dstop:1; - uint64_t gstopen:1; - uint64_t reserved_46_63:18; -#endif + __BITFIELD_FIELD(uint64_t reserved_46_63:18, + __BITFIELD_FIELD(uint64_t gstopen:1, + __BITFIELD_FIELD(uint64_t dstop:1, + __BITFIELD_FIELD(uint64_t cnt:24, + __BITFIELD_FIELD(uint64_t len:16, + __BITFIELD_FIELD(uint64_t state:2, + __BITFIELD_FIELD(uint64_t mode:2, + ;))))))) } s; - struct cvmx_ciu_wdogx_s cn30xx; - struct cvmx_ciu_wdogx_s cn31xx; - struct cvmx_ciu_wdogx_s cn38xx; - struct cvmx_ciu_wdogx_s cn38xxp2; - struct cvmx_ciu_wdogx_s cn50xx; - struct cvmx_ciu_wdogx_s cn52xx; - struct cvmx_ciu_wdogx_s cn52xxp1; - struct cvmx_ciu_wdogx_s cn56xx; - struct cvmx_ciu_wdogx_s cn56xxp1; - struct cvmx_ciu_wdogx_s cn58xx; - struct cvmx_ciu_wdogx_s cn58xxp1; - struct cvmx_ciu_wdogx_s cn61xx; - struct cvmx_ciu_wdogx_s cn63xx; - struct cvmx_ciu_wdogx_s cn63xxp1; - struct cvmx_ciu_wdogx_s cn66xx; - struct cvmx_ciu_wdogx_s cn68xx; - struct cvmx_ciu_wdogx_s cn68xxp1; - struct cvmx_ciu_wdogx_s cnf71xx; }; -#endif +#endif /* __CVMX_CIU_DEFS_H__ */ diff --git a/arch/mips/include/asm/octeon/cvmx-gmxx-defs.h b/arch/mips/include/asm/octeon/cvmx-gmxx-defs.h index e347496a33c3..80e4f8358b81 100644 --- a/arch/mips/include/asm/octeon/cvmx-gmxx-defs.h +++ b/arch/mips/include/asm/octeon/cvmx-gmxx-defs.h @@ -4,7 +4,7 @@ * Contact: support@caviumnetworks.com * This file is part of the OCTEON SDK * - * Copyright (c) 2003-2012 Cavium Networks + * Copyright (C) 2003-2018 Cavium, Inc. * * This file is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License, Version 2, as @@ -2070,6 +2070,8 @@ static inline uint64_t CVMX_GMXX_XAUI_EXT_LOOPBACK(unsigned long block_id) return CVMX_ADD_IO_SEG(0x0001180008000540ull) + (block_id) * 0x8000000ull; } +void __cvmx_interrupt_gmxx_enable(int interface); + union cvmx_gmxx_bad_reg { uint64_t u64; struct cvmx_gmxx_bad_reg_s { diff --git a/arch/mips/include/asm/octeon/cvmx-pcsx-defs.h b/arch/mips/include/asm/octeon/cvmx-pcsx-defs.h index a5e8fd861c37..39da7f9d7b3f 100644 --- a/arch/mips/include/asm/octeon/cvmx-pcsx-defs.h +++ b/arch/mips/include/asm/octeon/cvmx-pcsx-defs.h @@ -4,7 +4,7 @@ * Contact: support@caviumnetworks.com * This file is part of the OCTEON SDK * - * Copyright (c) 2003-2012 Cavium Networks + * Copyright (C) 2003-2018 Cavium, Inc. 
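The cvmx_ciu_timx and cvmx_ciu_wdogx conversions above replace each pair of #ifdef __BIG_ENDIAN_BITFIELD declarations with nested __BITFIELD_FIELD() invocations. For reference, a minimal sketch of how such a macro works (the kernel's real definition lives in arch/mips/include/uapi/asm/bitfield.h and tests __MIPSEB__/__MIPSEL__; the stand-alone version below is illustrative):

#ifdef __BIG_ENDIAN_BITFIELD
/* Big endian: emit the fields in the order written (MSB first). */
#define __BITFIELD_FIELD(field, more)	field; more
#else
/* Little endian: recurse first, reversing the field order. */
#define __BITFIELD_FIELD(field, more)	more field;
#endif

/* One declaration now serves both endiannesses: */
struct example_timx {
	__BITFIELD_FIELD(uint64_t reserved_37_63:27,
	__BITFIELD_FIELD(uint64_t one_shot:1,
	__BITFIELD_FIELD(uint64_t len:36,
	;)))
};

On big-endian builds this expands MSB-first, on little-endian LSB-first, matching the two hand-written variants being deleted.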
 *
 * This file is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License, Version 2, as
@@ -334,6 +334,8 @@ static inline uint64_t CVMX_PCSX_TX_RXX_POLARITY_REG(unsigned long offset, unsig
 	return CVMX_ADD_IO_SEG(0x00011800B0001048ull) + ((offset) + (block_id) * 0x20000ull) * 1024;
 }
 
+void __cvmx_interrupt_pcsx_intx_en_reg_enable(int index, int block);
+
 union cvmx_pcsx_anx_adv_reg {
 	uint64_t u64;
 	struct cvmx_pcsx_anx_adv_reg_s {
diff --git a/arch/mips/include/asm/octeon/cvmx-pcsxx-defs.h b/arch/mips/include/asm/octeon/cvmx-pcsxx-defs.h
index b5b45d26f1c5..847dd9dca6ea 100644
--- a/arch/mips/include/asm/octeon/cvmx-pcsxx-defs.h
+++ b/arch/mips/include/asm/octeon/cvmx-pcsxx-defs.h
@@ -4,7 +4,7 @@
  * Contact: support@caviumnetworks.com
  * This file is part of the OCTEON SDK
  *
- * Copyright (c) 2003-2012 Cavium Networks
+ * Copyright (C) 2003-2018 Cavium, Inc.
  *
  * This file is free software; you can redistribute it and/or modify
  * it under the terms of the GNU General Public License, Version 2, as
@@ -268,6 +268,8 @@ static inline uint64_t CVMX_PCSXX_TX_RX_STATES_REG(unsigned long block_id)
 	return CVMX_ADD_IO_SEG(0x00011800B0000830ull) + (block_id) * 0x1000000ull;
 }
 
+void __cvmx_interrupt_pcsxx_int_en_reg_enable(int index);
+
 union cvmx_pcsxx_10gbx_status_reg {
 	uint64_t u64;
 	struct cvmx_pcsxx_10gbx_status_reg_s {
diff --git a/arch/mips/include/asm/octeon/cvmx-spxx-defs.h b/arch/mips/include/asm/octeon/cvmx-spxx-defs.h
index c7d601d9446e..f4c4e8051160 100644
--- a/arch/mips/include/asm/octeon/cvmx-spxx-defs.h
+++ b/arch/mips/include/asm/octeon/cvmx-spxx-defs.h
@@ -4,7 +4,7 @@
  * Contact: support@caviumnetworks.com
  * This file is part of the OCTEON SDK
  *
- * Copyright (c) 2003-2012 Cavium Networks
+ * Copyright (C) 2003-2018 Cavium, Inc.
  *
  * This file is free software; you can redistribute it and/or modify
  * it under the terms of the GNU General Public License, Version 2, as
@@ -45,6 +45,8 @@
 #define CVMX_SPXX_TPA_SEL(block_id) (CVMX_ADD_IO_SEG(0x0001180090000328ull) + ((block_id) & 1) * 0x8000000ull)
 #define CVMX_SPXX_TRN4_CTL(block_id) (CVMX_ADD_IO_SEG(0x0001180090000360ull) + ((block_id) & 1) * 0x8000000ull)
 
+void __cvmx_interrupt_spxx_int_msk_enable(int index);
+
 union cvmx_spxx_bckprs_cnt {
 	uint64_t u64;
 	struct cvmx_spxx_bckprs_cnt_s {
diff --git a/arch/mips/include/asm/octeon/cvmx-stxx-defs.h b/arch/mips/include/asm/octeon/cvmx-stxx-defs.h
index 146354005d3b..3c409a854d91 100644
--- a/arch/mips/include/asm/octeon/cvmx-stxx-defs.h
+++ b/arch/mips/include/asm/octeon/cvmx-stxx-defs.h
@@ -4,7 +4,7 @@
  * Contact: support@caviumnetworks.com
  * This file is part of the OCTEON SDK
  *
- * Copyright (c) 2003-2012 Cavium Networks
+ * Copyright (C) 2003-2018 Cavium, Inc.
  *
 * This file is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License, Version 2, as
@@ -45,6 +45,8 @@
 #define CVMX_STXX_STAT_CTL(block_id) (CVMX_ADD_IO_SEG(0x0001180090000638ull) + ((block_id) & 1) * 0x8000000ull)
 #define CVMX_STXX_STAT_PKT_XMT(block_id) (CVMX_ADD_IO_SEG(0x0001180090000640ull) + ((block_id) & 1) * 0x8000000ull)
 
+void __cvmx_interrupt_stxx_int_msk_enable(int index);
+
 union cvmx_stxx_arb_ctl {
 	uint64_t u64;
 	struct cvmx_stxx_arb_ctl_s {
diff --git a/arch/mips/include/asm/octeon/octeon.h b/arch/mips/include/asm/octeon/octeon.h
index c99c4b6a79f4..60481502826a 100644
--- a/arch/mips/include/asm/octeon/octeon.h
+++ b/arch/mips/include/asm/octeon/octeon.h
@@ -279,13 +279,12 @@ union octeon_cvmemctl {
 	} s;
 };
 
-extern void octeon_write_lcd(const char *s);
 extern void octeon_check_cpu_bist(void);
-extern int octeon_get_boot_uart(void);
 
-struct uart_port;
-extern unsigned int octeon_serial_in(struct uart_port *, int);
-extern void octeon_serial_out(struct uart_port *, int, int);
+int octeon_prune_device_tree(void);
+extern const char __appended_dtb;
+extern const char __dtb_octeon_3xxx_begin;
+extern const char __dtb_octeon_68xx_begin;
 
 /**
 * Write a 32bit value to the Octeon NPI register space
diff --git a/arch/mips/include/asm/octeon/pci-octeon.h b/arch/mips/include/asm/octeon/pci-octeon.h
index 1884609741a8..b12d9a3fbfb6 100644
--- a/arch/mips/include/asm/octeon/pci-octeon.h
+++ b/arch/mips/include/asm/octeon/pci-octeon.h
@@ -63,4 +63,7 @@ enum octeon_dma_bar_type {
  */
 extern enum octeon_dma_bar_type octeon_dma_bar_type;
 
+void octeon_pci_dma_init(void);
+extern char *octeon_swiotlb;
+
 #endif
diff --git a/arch/mips/include/asm/page.h b/arch/mips/include/asm/page.h
index ad461216b5a1..e8cc328fce2d 100644
--- a/arch/mips/include/asm/page.h
+++ b/arch/mips/include/asm/page.h
@@ -80,7 +80,12 @@ extern void build_copy_page(void);
 * used in our early mem init code for all memory models.
 * So always define it.
 */
-#define ARCH_PFN_OFFSET		PFN_UP(PHYS_OFFSET)
+#ifdef CONFIG_MIPS_AUTO_PFN_OFFSET
+extern unsigned long ARCH_PFN_OFFSET;
+# define ARCH_PFN_OFFSET	ARCH_PFN_OFFSET
+#else
+# define ARCH_PFN_OFFSET	PFN_UP(PHYS_OFFSET)
+#endif
 
 extern void clear_page(void * page);
 extern void copy_page(void * to, void * from);
@@ -252,8 +257,8 @@ extern int __virt_addr_valid(const volatile void *kaddr);
 				 ((current->personality & READ_IMPLIES_EXEC) ? \
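The ARCH_PFN_OFFSET hunk above uses a small preprocessor idiom: defining the macro to its own name keeps #ifdef ARCH_PFN_OFFSET working in generic code while every use resolves to a runtime variable. A sketch of the variable side (illustrative; everything except ARCH_PFN_OFFSET and PFN_UP is an assumed name):

/* With CONFIG_MIPS_AUTO_PFN_OFFSET the offset is chosen at boot time. */
unsigned long ARCH_PFN_OFFSET;

static void __init pick_pfn_offset(phys_addr_t ram_start)
{
	/* generic code keeps using the ARCH_PFN_OFFSET "macro" unchanged */
	ARCH_PFN_OFFSET = PFN_UP(ram_start);
}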
VM_EXEC : 0) | \ VM_MAYREAD | VM_MAYWRITE | VM_MAYEXEC) -#define UNCAC_ADDR(addr) ((addr) - PAGE_OFFSET + UNCAC_BASE) -#define CAC_ADDR(addr) ((addr) - UNCAC_BASE + PAGE_OFFSET) +#define UNCAC_ADDR(addr) (UNCAC_BASE + __pa(addr)) +#define CAC_ADDR(addr) ((unsigned long)__va((addr) - UNCAC_BASE)) #include #include diff --git a/arch/mips/include/asm/processor.h b/arch/mips/include/asm/processor.h index af34afbc32d9..b2fa62922d88 100644 --- a/arch/mips/include/asm/processor.h +++ b/arch/mips/include/asm/processor.h @@ -141,7 +141,7 @@ struct mips_fpu_struct { #define NUM_DSP_REGS 6 -typedef __u32 dspreg_t; +typedef unsigned long dspreg_t; struct mips_dsp_state { dspreg_t dspr[NUM_DSP_REGS]; @@ -386,7 +386,20 @@ unsigned long get_wchan(struct task_struct *p); #define KSTK_ESP(tsk) (task_pt_regs(tsk)->regs[29]) #define KSTK_STATUS(tsk) (task_pt_regs(tsk)->cp0_status) +#ifdef CONFIG_CPU_LOONGSON3 +/* + * Loongson-3's SFB (Store-Fill-Buffer) may buffer writes indefinitely when a + * tight read loop is executed, because reads take priority over writes & the + * hardware (incorrectly) doesn't ensure that writes will eventually occur. + * + * Since spin loops of any kind should have a cpu_relax() in them, force an SFB + * flush from cpu_relax() such that any pending writes will become visible as + * expected. + */ +#define cpu_relax() smp_mb() +#else #define cpu_relax() barrier() +#endif /* * Return_address is a replacement for __builtin_return_address(count) diff --git a/arch/mips/include/asm/setup.h b/arch/mips/include/asm/setup.h index d49d247d48a1..bb36a400203d 100644 --- a/arch/mips/include/asm/setup.h +++ b/arch/mips/include/asm/setup.h @@ -2,8 +2,10 @@ #ifndef _MIPS_SETUP_H #define _MIPS_SETUP_H +#include #include +extern void prom_putchar(char); extern void setup_early_printk(void); #ifdef CONFIG_EARLY_PRINTK_8250 diff --git a/arch/mips/include/asm/sgialib.h b/arch/mips/include/asm/sgialib.h index 195db5045ae5..0d9fad5915fe 100644 --- a/arch/mips/include/asm/sgialib.h +++ b/arch/mips/include/asm/sgialib.h @@ -31,7 +31,6 @@ extern int prom_flags; #define PROM_FLAG_DONT_FREE_TEMP 4 /* Simple char-by-char console I/O. */ -extern void prom_putchar(char c); extern char prom_getchar(void); /* Get next memory descriptor after CURR, returns first descriptor diff --git a/arch/mips/include/asm/sim.h b/arch/mips/include/asm/sim.h index 91831800c480..59f31a95facd 100644 --- a/arch/mips/include/asm/sim.h +++ b/arch/mips/include/asm/sim.h @@ -39,8 +39,6 @@ __asm__( \ ".end\t__" #symbol "\n\t" \ ".size\t__" #symbol",. - __" #symbol) -#define nabi_no_regargs - #endif /* CONFIG_32BIT */ #ifdef CONFIG_64BIT @@ -67,16 +65,6 @@ __asm__( \ ".end\t__" #symbol "\n\t" \ ".size\t__" #symbol",. 
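The Loongson-3 cpu_relax() change above is easiest to see in a handshake of the following shape (an illustrative sketch, not code from this patch): the spinning CPU's own earlier store can sit in its Store-Fill-Buffer indefinitely while the read loop runs, so the peer never sees the request and never produces the response.

static volatile int request_ready;
static volatile int response_ready;

static void send_request_and_wait(void)
{
	request_ready = 1;	/* may be held in the SFB ... */
	while (!response_ready)
		cpu_relax();	/* ... unless this is smp_mb(), which
				 * forces the buffered store out */
}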
- __" #symbol) -#define nabi_no_regargs \ - unsigned long __dummy0, \ - unsigned long __dummy1, \ - unsigned long __dummy2, \ - unsigned long __dummy3, \ - unsigned long __dummy4, \ - unsigned long __dummy5, \ - unsigned long __dummy6, \ - unsigned long __dummy7, - #endif /* CONFIG_64BIT */ #endif /* _ASM_SIM_H */ diff --git a/arch/mips/include/asm/smp.h b/arch/mips/include/asm/smp.h index 88ebd83b3bf9..056a6bf13491 100644 --- a/arch/mips/include/asm/smp.h +++ b/arch/mips/include/asm/smp.h @@ -25,7 +25,17 @@ extern cpumask_t cpu_sibling_map[]; extern cpumask_t cpu_core_map[]; extern cpumask_t cpu_foreign_map[]; -#define raw_smp_processor_id() (current_thread_info()->cpu) +static inline int raw_smp_processor_id(void) +{ +#if defined(__VDSO__) + extern int vdso_smp_processor_id(void) + __compiletime_error("VDSO should not call smp_processor_id()"); + return vdso_smp_processor_id(); +#else + return current_thread_info()->cpu; +#endif +} +#define raw_smp_processor_id raw_smp_processor_id /* Map from cpu id to sequential logical cpu number. This will only not be idempotent when cpus failed to come on-line. */ diff --git a/arch/mips/include/asm/txx9/generic.h b/arch/mips/include/asm/txx9/generic.h index 64887d3c7ec3..9a2c47bf3c40 100644 --- a/arch/mips/include/asm/txx9/generic.h +++ b/arch/mips/include/asm/txx9/generic.h @@ -49,7 +49,6 @@ void txx9_spi_init(int busid, unsigned long base, int irq); void txx9_ethaddr_init(unsigned int id, unsigned char *ethaddr); void txx9_sio_init(unsigned long baseaddr, int irq, unsigned int line, unsigned int sclk, int nocts); -void prom_putchar(char c); #ifdef CONFIG_EARLY_PRINTK extern void (*txx9_prom_putchar)(char c); void txx9_sio_putchar_init(unsigned long baseaddr); diff --git a/arch/mips/include/asm/txx9/tx4939.h b/arch/mips/include/asm/txx9/tx4939.h index 6d667087f2aa..00805ac6e9fc 100644 --- a/arch/mips/include/asm/txx9/tx4939.h +++ b/arch/mips/include/asm/txx9/tx4939.h @@ -101,13 +101,6 @@ struct tx4939_irc_reg { struct tx4939_le_reg maskext; }; -struct tx4939_rtc_reg { - __u32 ctl; - __u32 adr; - __u32 dat; - __u32 tbc; -}; - struct tx4939_crypto_reg { struct tx4939_le_reg csr; struct tx4939_le_reg idesptr; @@ -369,26 +362,6 @@ struct tx4939_vpc_desc { #define TX4939_CLKCTR_SIO0RST 0x00000002 #define TX4939_CLKCTR_CYPRST 0x00000001 -/* - * RTC - */ -#define TX4939_RTCCTL_ALME 0x00000080 -#define TX4939_RTCCTL_ALMD 0x00000040 -#define TX4939_RTCCTL_BUSY 0x00000020 - -#define TX4939_RTCCTL_COMMAND 0x00000007 -#define TX4939_RTCCTL_COMMAND_NOP 0x00000000 -#define TX4939_RTCCTL_COMMAND_GETTIME 0x00000001 -#define TX4939_RTCCTL_COMMAND_SETTIME 0x00000002 -#define TX4939_RTCCTL_COMMAND_GETALARM 0x00000003 -#define TX4939_RTCCTL_COMMAND_SETALARM 0x00000004 - -#define TX4939_RTCTBC_PM 0x00000080 -#define TX4939_RTCTBC_COMP 0x0000007f - -#define TX4939_RTC_REG_RAMSIZE 0x00000100 -#define TX4939_RTC_REG_RWBSIZE 0x00000006 - /* * CRYPTO */ @@ -498,8 +471,6 @@ struct tx4939_vpc_desc { #define tx4939_ccfgptr \ ((struct tx4939_ccfg_reg __iomem *)TX4939_CCFG_REG) #define tx4939_sramcptr tx4938_sramcptr -#define tx4939_rtcptr \ - ((struct tx4939_rtc_reg __iomem *)TX4939_RTC_REG) #define tx4939_cryptoptr \ ((struct tx4939_crypto_reg __iomem *)TX4939_CRYPTO_REG) #define tx4939_vpcptr ((struct tx4939_vpc_reg __iomem *)TX4939_VPC_REG) diff --git a/arch/mips/include/uapi/asm/unistd.h b/arch/mips/include/uapi/asm/unistd.h index bb05e9916a5f..f25dd1d83fb7 100644 --- a/arch/mips/include/uapi/asm/unistd.h +++ b/arch/mips/include/uapi/asm/unistd.h @@ -388,17 +388,19 @@ 
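The raw_smp_processor_id() rework above relies on a general trick: a call to a declaration carrying __compiletime_error() fails the build of any translation unit that actually emits it, while all other units compile normally. A stand-alone sketch of the pattern (the function name and guard macro are hypothetical):

static inline int current_cpu_number(void)
{
#ifdef __RESTRICTED_BUILD__	/* e.g. a VDSO-style environment */
	extern int cpu_number_unavailable(void)
		__attribute__((__error__("current_cpu_number() not usable here")));
	return cpu_number_unavailable();
#else
	return 0;	/* stand-in for the real lookup */
#endif
}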
#define __NR_pkey_alloc (__NR_Linux + 364) #define __NR_pkey_free (__NR_Linux + 365) #define __NR_statx (__NR_Linux + 366) +#define __NR_rseq (__NR_Linux + 367) +#define __NR_io_pgetevents (__NR_Linux + 368) /* * Offset of the last Linux o32 flavoured syscall */ -#define __NR_Linux_syscalls 366 +#define __NR_Linux_syscalls 368 #endif /* _MIPS_SIM == _MIPS_SIM_ABI32 */ #define __NR_O32_Linux 4000 -#define __NR_O32_Linux_syscalls 366 +#define __NR_O32_Linux_syscalls 368 #if _MIPS_SIM == _MIPS_SIM_ABI64 @@ -733,16 +735,18 @@ #define __NR_pkey_alloc (__NR_Linux + 324) #define __NR_pkey_free (__NR_Linux + 325) #define __NR_statx (__NR_Linux + 326) +#define __NR_rseq (__NR_Linux + 327) +#define __NR_io_pgetevents (__NR_Linux + 328) /* * Offset of the last Linux 64-bit flavoured syscall */ -#define __NR_Linux_syscalls 326 +#define __NR_Linux_syscalls 328 #endif /* _MIPS_SIM == _MIPS_SIM_ABI64 */ #define __NR_64_Linux 5000 -#define __NR_64_Linux_syscalls 326 +#define __NR_64_Linux_syscalls 328 #if _MIPS_SIM == _MIPS_SIM_NABI32 @@ -1081,15 +1085,17 @@ #define __NR_pkey_alloc (__NR_Linux + 328) #define __NR_pkey_free (__NR_Linux + 329) #define __NR_statx (__NR_Linux + 330) +#define __NR_rseq (__NR_Linux + 331) +#define __NR_io_pgetevents (__NR_Linux + 332) /* * Offset of the last N32 flavoured syscall */ -#define __NR_Linux_syscalls 330 +#define __NR_Linux_syscalls 332 #endif /* _MIPS_SIM == _MIPS_SIM_NABI32 */ #define __NR_N32_Linux 6000 -#define __NR_N32_Linux_syscalls 330 +#define __NR_N32_Linux_syscalls 332 #endif /* _UAPI_ASM_UNISTD_H */ diff --git a/arch/mips/jazz/jazzdma.c b/arch/mips/jazz/jazzdma.c index d626a9a391cc..d31bc2f01208 100644 --- a/arch/mips/jazz/jazzdma.c +++ b/arch/mips/jazz/jazzdma.c @@ -16,6 +16,8 @@ #include #include #include +#include +#include #include #include #include @@ -86,6 +88,7 @@ static int __init vdma_init(void) printk(KERN_INFO "VDMA: R4030 DMA pagetables initialized.\n"); return 0; } +arch_initcall(vdma_init); /* * Allocate DMA pagetables using a simple first-fit algorithm @@ -556,4 +559,140 @@ int vdma_get_enable(int channel) return enable; } -arch_initcall(vdma_init); +static void *jazz_dma_alloc(struct device *dev, size_t size, + dma_addr_t *dma_handle, gfp_t gfp, unsigned long attrs) +{ + void *ret; + + ret = dma_direct_alloc(dev, size, dma_handle, gfp, attrs); + if (!ret) + return NULL; + + *dma_handle = vdma_alloc(virt_to_phys(ret), size); + if (*dma_handle == VDMA_ERROR) { + dma_direct_free(dev, size, ret, *dma_handle, attrs); + return NULL; + } + + if (!(attrs & DMA_ATTR_NON_CONSISTENT)) { + dma_cache_wback_inv((unsigned long)ret, size); + ret = (void *)UNCAC_ADDR(ret); + } + return ret; +} + +static void jazz_dma_free(struct device *dev, size_t size, void *vaddr, + dma_addr_t dma_handle, unsigned long attrs) +{ + vdma_free(dma_handle); + if (!(attrs & DMA_ATTR_NON_CONSISTENT)) + vaddr = (void *)CAC_ADDR((unsigned long)vaddr); + return dma_direct_free(dev, size, vaddr, dma_handle, attrs); +} + +static dma_addr_t jazz_dma_map_page(struct device *dev, struct page *page, + unsigned long offset, size_t size, enum dma_data_direction dir, + unsigned long attrs) +{ + phys_addr_t phys = page_to_phys(page) + offset; + + if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC)) + arch_sync_dma_for_device(dev, phys, size, dir); + return vdma_alloc(phys, size); +} + +static void jazz_dma_unmap_page(struct device *dev, dma_addr_t dma_addr, + size_t size, enum dma_data_direction dir, unsigned long attrs) +{ + if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC)) + arch_sync_dma_for_cpu(dev, 
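jazz_dma_alloc() below depends on the reworked page.h macros, under which UNCAC_ADDR() maps any linear-map address to its uncached alias through __pa(). The non-coherent allocation idiom in isolation (a sketch; alloc_uncached() is a hypothetical helper):

static void *alloc_uncached(size_t size)
{
	void *buf = kmalloc(size, GFP_KERNEL);	/* cached linear-map address */

	if (!buf)
		return NULL;
	/* Write back and discard any stale cached lines first ... */
	dma_cache_wback_inv((unsigned long)buf, size);
	/* ... then hand out the alias at UNCAC_BASE + __pa(buf). */
	return (void *)UNCAC_ADDR(buf);
}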
vdma_log2phys(dma_addr), size, dir); + vdma_free(dma_addr); +} + +static int jazz_dma_map_sg(struct device *dev, struct scatterlist *sglist, + int nents, enum dma_data_direction dir, unsigned long attrs) +{ + int i; + struct scatterlist *sg; + + for_each_sg(sglist, sg, nents, i) { + if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC)) + arch_sync_dma_for_device(dev, sg_phys(sg), sg->length, + dir); + sg->dma_address = vdma_alloc(sg_phys(sg), sg->length); + if (sg->dma_address == VDMA_ERROR) + return 0; + sg_dma_len(sg) = sg->length; + } + + return nents; +} + +static void jazz_dma_unmap_sg(struct device *dev, struct scatterlist *sglist, + int nents, enum dma_data_direction dir, unsigned long attrs) +{ + int i; + struct scatterlist *sg; + + for_each_sg(sglist, sg, nents, i) { + if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC)) + arch_sync_dma_for_cpu(dev, sg_phys(sg), sg->length, + dir); + vdma_free(sg->dma_address); + } +} + +static void jazz_dma_sync_single_for_device(struct device *dev, + dma_addr_t addr, size_t size, enum dma_data_direction dir) +{ + arch_sync_dma_for_device(dev, vdma_log2phys(addr), size, dir); +} + +static void jazz_dma_sync_single_for_cpu(struct device *dev, + dma_addr_t addr, size_t size, enum dma_data_direction dir) +{ + arch_sync_dma_for_cpu(dev, vdma_log2phys(addr), size, dir); +} + +static void jazz_dma_sync_sg_for_device(struct device *dev, + struct scatterlist *sgl, int nents, enum dma_data_direction dir) +{ + struct scatterlist *sg; + int i; + + for_each_sg(sgl, sg, nents, i) + arch_sync_dma_for_device(dev, sg_phys(sg), sg->length, dir); +} + +static void jazz_dma_sync_sg_for_cpu(struct device *dev, + struct scatterlist *sgl, int nents, enum dma_data_direction dir) +{ + struct scatterlist *sg; + int i; + + for_each_sg(sgl, sg, nents, i) + arch_sync_dma_for_cpu(dev, sg_phys(sg), sg->length, dir); +} + +static int jazz_dma_mapping_error(struct device *dev, dma_addr_t dma_addr) +{ + return dma_addr == VDMA_ERROR; +} + +const struct dma_map_ops jazz_dma_ops = { + .alloc = jazz_dma_alloc, + .free = jazz_dma_free, + .mmap = arch_dma_mmap, + .map_page = jazz_dma_map_page, + .unmap_page = jazz_dma_unmap_page, + .map_sg = jazz_dma_map_sg, + .unmap_sg = jazz_dma_unmap_sg, + .sync_single_for_cpu = jazz_dma_sync_single_for_cpu, + .sync_single_for_device = jazz_dma_sync_single_for_device, + .sync_sg_for_cpu = jazz_dma_sync_sg_for_cpu, + .sync_sg_for_device = jazz_dma_sync_sg_for_device, + .dma_supported = dma_direct_supported, + .cache_sync = arch_dma_cache_sync, + .mapping_error = jazz_dma_mapping_error, +}; +EXPORT_SYMBOL(jazz_dma_ops); diff --git a/arch/mips/jazz/setup.c b/arch/mips/jazz/setup.c index 448fd41792e4..1b5e121c3f0d 100644 --- a/arch/mips/jazz/setup.c +++ b/arch/mips/jazz/setup.c @@ -16,6 +16,7 @@ #include #include #include +#include #include #include @@ -136,10 +137,16 @@ static struct resource jazz_esp_rsrc[] = { } }; +static u64 jazz_esp_dma_mask = DMA_BIT_MASK(32); + static struct platform_device jazz_esp_pdev = { .name = "jazz_esp", .num_resources = ARRAY_SIZE(jazz_esp_rsrc), - .resource = jazz_esp_rsrc + .resource = jazz_esp_rsrc, + .dev = { + .dma_mask = &jazz_esp_dma_mask, + .coherent_dma_mask = DMA_BIT_MASK(32), + } }; static struct resource jazz_sonic_rsrc[] = { @@ -155,10 +162,16 @@ static struct resource jazz_sonic_rsrc[] = { } }; +static u64 jazz_sonic_dma_mask = DMA_BIT_MASK(32); + static struct platform_device jazz_sonic_pdev = { .name = "jazzsonic", .num_resources = ARRAY_SIZE(jazz_sonic_rsrc), - .resource = jazz_sonic_rsrc + .resource = jazz_sonic_rsrc, + .dev = { 
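From a driver's point of view the jazz_dma_ops registered above are only ever reached through the generic DMA API, along these lines (sketch; the device pointer and size are placeholders):

static int example_dma_setup(struct device *dev)
{
	dma_addr_t handle;	/* a VDMA logical address on Jazz */
	void *cpu;

	cpu = dma_alloc_coherent(dev, PAGE_SIZE, &handle, GFP_KERNEL);
	if (!cpu)
		return -ENOMEM;
	/* ... program the device with 'handle', access 'cpu' from the CPU ... */
	dma_free_coherent(dev, PAGE_SIZE, cpu, handle);
	return 0;
}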
+ .dma_mask = &jazz_sonic_dma_mask, + .coherent_dma_mask = DMA_BIT_MASK(32), + } }; static struct resource jazz_cmos_rsrc[] = { diff --git a/arch/mips/jz4740/Platform b/arch/mips/jz4740/Platform index 28448d358c10..a2a5a85ea1f9 100644 --- a/arch/mips/jz4740/Platform +++ b/arch/mips/jz4740/Platform @@ -1,4 +1,4 @@ platform-$(CONFIG_MACH_INGENIC) += jz4740/ cflags-$(CONFIG_MACH_INGENIC) += -I$(srctree)/arch/mips/include/asm/mach-jz4740 load-$(CONFIG_MACH_INGENIC) += 0xffffffff80010000 -zload-$(CONFIG_MACH_INGENIC) += 0xffffffff80600000 +zload-$(CONFIG_MACH_INGENIC) += 0xffffffff81000000 diff --git a/arch/mips/kernel/cpu-probe.c b/arch/mips/kernel/cpu-probe.c index b2509c19cfb5..d535fc706a8b 100644 --- a/arch/mips/kernel/cpu-probe.c +++ b/arch/mips/kernel/cpu-probe.c @@ -1849,7 +1849,8 @@ static inline void cpu_probe_loongson(struct cpuinfo_mips *c, unsigned int cpu) set_elf_platform(cpu, "loongson3a"); set_isa(c, MIPS_CPU_ISA_M64R2); break; - case PRID_REV_LOONGSON3A_R3: + case PRID_REV_LOONGSON3A_R3_0: + case PRID_REV_LOONGSON3A_R3_1: c->cputype = CPU_LOONGSON3; __cpu_name[cpu] = "ICT Loongson-3"; set_elf_platform(cpu, "loongson3a"); diff --git a/arch/mips/kernel/early_printk.c b/arch/mips/kernel/early_printk.c index 505cb77d1280..4a1647ddfbd9 100644 --- a/arch/mips/kernel/early_printk.c +++ b/arch/mips/kernel/early_printk.c @@ -14,8 +14,6 @@ #include -extern void prom_putchar(char); - static void early_console_write(struct console *con, const char *s, unsigned n) { while (n-- && *s) { diff --git a/arch/mips/kernel/early_printk_8250.c b/arch/mips/kernel/early_printk_8250.c index 83cea3767556..ea26614afac6 100644 --- a/arch/mips/kernel/early_printk_8250.c +++ b/arch/mips/kernel/early_printk_8250.c @@ -20,6 +20,7 @@ #include #include #include +#include static void __iomem *serial8250_base; static unsigned int serial8250_reg_shift; diff --git a/arch/mips/kernel/entry.S b/arch/mips/kernel/entry.S index 38a302919e6b..d7de8adcfcc8 100644 --- a/arch/mips/kernel/entry.S +++ b/arch/mips/kernel/entry.S @@ -79,6 +79,10 @@ FEXPORT(ret_from_fork) jal schedule_tail # a0 = struct task_struct *prev FEXPORT(syscall_exit) +#ifdef CONFIG_DEBUG_RSEQ + move a0, sp + jal rseq_syscall +#endif local_irq_disable # make sure need_resched and # signals dont change between # sampling and return @@ -141,6 +145,10 @@ work_notifysig: # deal with pending signals and j resume_userspace_check FEXPORT(syscall_exit_partial) +#ifdef CONFIG_DEBUG_RSEQ + move a0, sp + jal rseq_syscall +#endif local_irq_disable # make sure need_resched doesn't # change between and return LONG_L a2, TI_FLAGS($28) # current->work diff --git a/arch/mips/kernel/genex.S b/arch/mips/kernel/genex.S index 37b9383eacd3..6c257b52f57f 100644 --- a/arch/mips/kernel/genex.S +++ b/arch/mips/kernel/genex.S @@ -354,16 +354,56 @@ NESTED(ejtag_debug_handler, PT_SIZE, sp) sll k0, k0, 30 # Check for SDBBP. 
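The jazz_esp and jazz_sonic additions above follow the usual pattern for statically declared platform devices, which have no bus code to set DMA masks on their behalf (generic sketch, hypothetical names):

static u64 example_dma_mask = DMA_BIT_MASK(32);

static struct platform_device example_pdev = {
	.name = "example",
	.dev = {
		.dma_mask = &example_dma_mask,		/* streaming API */
		.coherent_dma_mask = DMA_BIT_MASK(32),	/* coherent API */
	},
};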
bgez k0, ejtag_return +#ifdef CONFIG_SMP +1: PTR_LA k0, ejtag_debug_buffer_spinlock + ll k0, 0(k0) + bnez k0, 1b + PTR_LA k0, ejtag_debug_buffer_spinlock + sc k0, 0(k0) + beqz k0, 1b +# ifdef CONFIG_WEAK_REORDERING_BEYOND_LLSC + sync +# endif + + PTR_LA k0, ejtag_debug_buffer + LONG_S k1, 0(k0) + + ASM_CPUID_MFC0 k1, ASM_SMP_CPUID_REG + PTR_SRL k1, SMP_CPUID_PTRSHIFT + PTR_SLL k1, LONGLOG + PTR_LA k0, ejtag_debug_buffer_per_cpu + PTR_ADDU k0, k1 + + PTR_LA k1, ejtag_debug_buffer + LONG_L k1, 0(k1) + LONG_S k1, 0(k0) + + PTR_LA k0, ejtag_debug_buffer_spinlock + sw zero, 0(k0) +#else PTR_LA k0, ejtag_debug_buffer LONG_S k1, 0(k0) +#endif + SAVE_ALL move a0, sp jal ejtag_exception_handler RESTORE_ALL + +#ifdef CONFIG_SMP + ASM_CPUID_MFC0 k1, ASM_SMP_CPUID_REG + PTR_SRL k1, SMP_CPUID_PTRSHIFT + PTR_SLL k1, LONGLOG + PTR_LA k0, ejtag_debug_buffer_per_cpu + PTR_ADDU k0, k1 + LONG_L k1, 0(k0) +#else PTR_LA k0, ejtag_debug_buffer LONG_L k1, 0(k0) +#endif ejtag_return: + back_to_back_c0_hazard MFC0 k0, CP0_DESAVE .set mips32 deret @@ -377,6 +417,12 @@ ejtag_return: .data EXPORT(ejtag_debug_buffer) .fill LONGSIZE +#ifdef CONFIG_SMP +EXPORT(ejtag_debug_buffer_spinlock) + .fill LONGSIZE +EXPORT(ejtag_debug_buffer_per_cpu) + .fill LONGSIZE * NR_CPUS +#endif .previous __INIT diff --git a/arch/mips/kernel/idle.c b/arch/mips/kernel/idle.c index 7c246b69c545..046846999efd 100644 --- a/arch/mips/kernel/idle.c +++ b/arch/mips/kernel/idle.c @@ -33,21 +33,21 @@ void (*cpu_wait)(void); EXPORT_SYMBOL(cpu_wait); -static void r3081_wait(void) +static void __cpuidle r3081_wait(void) { unsigned long cfg = read_c0_conf(); write_c0_conf(cfg | R30XX_CONF_HALT); local_irq_enable(); } -static void r39xx_wait(void) +static void __cpuidle r39xx_wait(void) { if (!need_resched()) write_c0_conf(read_c0_conf() | TX39_CONF_HALT); local_irq_enable(); } -void r4k_wait(void) +void __cpuidle r4k_wait(void) { local_irq_enable(); __r4k_wait(); @@ -60,7 +60,7 @@ void r4k_wait(void) * interrupt is requested" restriction in the MIPS32/MIPS64 architecture makes * using this version a gamble. */ -void r4k_wait_irqoff(void) +void __cpuidle r4k_wait_irqoff(void) { if (!need_resched()) __asm__( @@ -75,7 +75,7 @@ void r4k_wait_irqoff(void) * The RM7000 variant has to handle erratum 38. The workaround is to not * have any pending stores when the WAIT instruction is executed. */ -static void rm7k_wait_irqoff(void) +static void __cpuidle rm7k_wait_irqoff(void) { if (!need_resched()) __asm__( @@ -96,7 +96,7 @@ static void rm7k_wait_irqoff(void) * since coreclock (and the cp0 counter) stops upon executing it. Only an * interrupt can wake it, so they must be enabled before entering idle modes. 
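The EJTAG handler above hand-rolls its lock with ll/sc because it runs in exception context with only k0/k1 available. In ordinary C the same acquire/release shape would be written as below (a sketch only; the real handler cannot call the atomic or spinlock APIs):

static atomic_t debug_lock = ATOMIC_INIT(0);

static void debug_lock_acquire(void)
{
	/* atomic_cmpxchg() compiles down to an ll/sc retry loop on MIPS */
	while (atomic_cmpxchg(&debug_lock, 0, 1) != 0)
		cpu_relax();
	/* cf. the CONFIG_WEAK_REORDERING_BEYOND_LLSC "sync" above */
	smp_mb__after_atomic();
}

static void debug_lock_release(void)
{
	smp_mb__before_atomic();
	atomic_set(&debug_lock, 0);	/* like the "sw zero, 0(k0)" above */
}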
*/ -static void au1k_wait(void) +static void __cpuidle au1k_wait(void) { unsigned long c0status = read_c0_status() | 1; /* irqs on */ diff --git a/arch/mips/kernel/kprobes.c b/arch/mips/kernel/kprobes.c index f5c8bce70db2..54cd675c5d1d 100644 --- a/arch/mips/kernel/kprobes.c +++ b/arch/mips/kernel/kprobes.c @@ -326,19 +326,13 @@ static int __kprobes kprobe_handler(struct pt_regs *regs) preempt_enable_no_resched(); } return 1; - } else { - if (addr->word != breakpoint_insn.word) { - /* - * The breakpoint instruction was removed by - * another cpu right after we hit, no further - * handling of this interrupt is appropriate - */ - ret = 1; - goto no_kprobe; - } - p = __this_cpu_read(current_kprobe); - if (p->break_handler && p->break_handler(p, regs)) - goto ss_probe; + } else if (addr->word != breakpoint_insn.word) { + /* + * The breakpoint instruction was removed by + * another cpu right after we hit, no further + * handling of this interrupt is appropriate + */ + ret = 1; } goto no_kprobe; } @@ -364,10 +358,11 @@ static int __kprobes kprobe_handler(struct pt_regs *regs) if (p->pre_handler && p->pre_handler(p, regs)) { /* handler has already set things up, so skip ss setup */ + reset_current_kprobe(); + preempt_enable_no_resched(); return 1; } -ss_probe: prepare_singlestep(p, regs, kcb); if (kcb->flags & SKIP_DELAYSLOT) { kcb->kprobe_status = KPROBE_HIT_SSDONE; @@ -468,51 +463,6 @@ int __kprobes kprobe_exceptions_notify(struct notifier_block *self, return ret; } -int __kprobes setjmp_pre_handler(struct kprobe *p, struct pt_regs *regs) -{ - struct jprobe *jp = container_of(p, struct jprobe, kp); - struct kprobe_ctlblk *kcb = get_kprobe_ctlblk(); - - kcb->jprobe_saved_regs = *regs; - kcb->jprobe_saved_sp = regs->regs[29]; - - memcpy(kcb->jprobes_stack, (void *)kcb->jprobe_saved_sp, - MIN_JPROBES_STACK_SIZE(kcb->jprobe_saved_sp)); - - regs->cp0_epc = (unsigned long)(jp->entry); - - return 1; -} - -/* Defined in the inline asm below. */ -void jprobe_return_end(void); - -void __kprobes jprobe_return(void) -{ - /* Assembler quirk necessitates this '0,code' business. */ - asm volatile( - "break 0,%0\n\t" - ".globl jprobe_return_end\n" - "jprobe_return_end:\n" - : : "n" (BRK_KPROBE_BP) : "memory"); -} - -int __kprobes longjmp_break_handler(struct kprobe *p, struct pt_regs *regs) -{ - struct kprobe_ctlblk *kcb = get_kprobe_ctlblk(); - - if (regs->cp0_epc >= (unsigned long)jprobe_return && - regs->cp0_epc <= (unsigned long)jprobe_return_end) { - *regs = kcb->jprobe_saved_regs; - memcpy((void *)kcb->jprobe_saved_sp, kcb->jprobes_stack, - MIN_JPROBES_STACK_SIZE(kcb->jprobe_saved_sp)); - preempt_enable_no_resched(); - - return 1; - } - return 0; -} - /* * Function return probe trampoline: * - init_kprobes() establishes a probepoint here @@ -595,9 +545,7 @@ static int __kprobes trampoline_probe_handler(struct kprobe *p, kretprobe_assert(ri, orig_ret_address, trampoline_address); instruction_pointer(regs) = orig_ret_address; - reset_current_kprobe(); kretprobe_hash_unlock(current, &flags); - preempt_enable_no_resched(); hlist_for_each_entry_safe(ri, tmp, &empty_rp, hlist) { hlist_del(&ri->hlist); diff --git a/arch/mips/kernel/linux32.c b/arch/mips/kernel/linux32.c index 318f1c05c5b3..6b61be486303 100644 --- a/arch/mips/kernel/linux32.c +++ b/arch/mips/kernel/linux32.c @@ -43,17 +43,6 @@ #include #include -/* Use this to get at 32-bit user passed pointers. */ -/* A() macro should be used for places where you e.g. - have some internal variable u32 and just want to get - rid of a compiler warning. 
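The __cpuidle annotations above move these routines into the .cpuidle.text section, letting the core kernel (for instance the NMI backtrace code) recognize a CPU that is merely parked in idle. Any new wait routine would follow the same shape (sketch modeled on the rm7k/r4k variants above):

static void __cpuidle example_wait_irqoff(void)
{
	if (!need_resched())
		__asm__(
		"	.set	push	\n"
		"	.set	mips3	\n"
		"	wait		\n"
		"	.set	pop	\n");
	local_irq_enable();
}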
AA() has to be used in - places where you want to convert a function argument - to 32bit pointer or when you e.g. access pt_regs - structure and want to consider 32bit registers only. - */ -#define A(__x) ((unsigned long)(__x)) -#define AA(__x) ((unsigned long)((int)__x)) - #ifdef __MIPSEB__ #define merge_64(r1, r2) ((((r1) & 0xffffffffUL) << 32) + ((r2) & 0xffffffffUL)) #endif @@ -61,24 +50,6 @@ #define merge_64(r1, r2) ((((r2) & 0xffffffffUL) << 32) + ((r1) & 0xffffffffUL)) #endif -SYSCALL_DEFINE6(32_mmap2, unsigned long, addr, unsigned long, len, - unsigned long, prot, unsigned long, flags, unsigned long, fd, - unsigned long, pgoff) -{ - if (pgoff & (~PAGE_MASK >> 12)) - return -EINVAL; - return ksys_mmap_pgoff(addr, len, prot, flags, fd, - pgoff >> (PAGE_SHIFT-12)); -} - -#define RLIM_INFINITY32 0x7fffffff -#define RESOURCE32(x) ((x > RLIM_INFINITY32) ? RLIM_INFINITY32 : x) - -struct rlimit32 { - int rlim_cur; - int rlim_max; -}; - SYSCALL_DEFINE4(32_truncate64, const char __user *, path, unsigned long, __dummy, unsigned long, a2, unsigned long, a3) { diff --git a/arch/mips/kernel/mcount.S b/arch/mips/kernel/mcount.S index f2ee7e1e3342..cff52b283e03 100644 --- a/arch/mips/kernel/mcount.S +++ b/arch/mips/kernel/mcount.S @@ -119,10 +119,20 @@ NESTED(_mcount, PT_SIZE, ra) EXPORT_SYMBOL(_mcount) PTR_LA t1, ftrace_stub PTR_L t2, ftrace_trace_function /* Prepare t2 for (1) */ - bne t1, t2, static_trace + beq t1, t2, fgraph_trace nop + MCOUNT_SAVE_REGS + + move a0, ra /* arg1: self return address */ + jalr t2 /* (1) call *ftrace_trace_function */ + move a1, AT /* arg2: parent's return address */ + + MCOUNT_RESTORE_REGS + +fgraph_trace: #ifdef CONFIG_FUNCTION_GRAPH_TRACER + PTR_LA t1, ftrace_stub PTR_L t3, ftrace_graph_return bne t1, t3, ftrace_graph_caller nop @@ -131,24 +141,11 @@ EXPORT_SYMBOL(_mcount) bne t1, t3, ftrace_graph_caller nop #endif - b ftrace_stub -#ifdef CONFIG_32BIT - addiu sp, sp, 8 -#else - nop -#endif -static_trace: - MCOUNT_SAVE_REGS - - move a0, ra /* arg1: self return address */ - jalr t2 /* (1) call *ftrace_trace_function */ - move a1, AT /* arg2: parent's return address */ - - MCOUNT_RESTORE_REGS #ifdef CONFIG_32BIT addiu sp, sp, 8 #endif + .globl ftrace_stub ftrace_stub: RETURN_BACK diff --git a/arch/mips/kernel/process.c b/arch/mips/kernel/process.c index 8d85046adcc8..8fc69891e117 100644 --- a/arch/mips/kernel/process.c +++ b/arch/mips/kernel/process.c @@ -29,6 +29,8 @@ #include #include #include +#include +#include #include #include @@ -655,28 +657,42 @@ unsigned long arch_align_stack(unsigned long sp) return sp & ALMASK; } -static void arch_dump_stack(void *info) +static DEFINE_PER_CPU(call_single_data_t, backtrace_csd); +static struct cpumask backtrace_csd_busy; + +static void handle_backtrace(void *info) { - struct pt_regs *regs; + nmi_cpu_backtrace(get_irq_regs()); + cpumask_clear_cpu(smp_processor_id(), &backtrace_csd_busy); +} - regs = get_irq_regs(); +static void raise_backtrace(cpumask_t *mask) +{ + call_single_data_t *csd; + int cpu; - if (regs) - show_regs(regs); + for_each_cpu(cpu, mask) { + /* + * If we previously sent an IPI to the target CPU & it hasn't + * cleared its bit in the busy cpumask then it didn't handle + * our previous IPI & it's not safe for us to reuse the + * call_single_data_t. 
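merge_64(), kept above while the compat mmap2 and rlimit helpers are deleted, reassembles one 64-bit value from the two 32-bit registers an o32 caller passed; schematically (a hypothetical wrapper mirroring sys_32_truncate64 below):

static long truncate64_sketch(const char __user *path,
			      unsigned long __dummy,	/* register-pair padding */
			      unsigned long a2, unsigned long a3)
{
	/* merge_64() selects the high/low halves by endianness */
	return ksys_truncate(path, merge_64(a2, a3));
}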
+ */ + if (cpumask_test_and_set_cpu(cpu, &backtrace_csd_busy)) { + pr_warn("Unable to send backtrace IPI to CPU%u - perhaps it hung?\n", + cpu); + continue; + } - dump_stack(); + csd = &per_cpu(backtrace_csd, cpu); + csd->func = handle_backtrace; + smp_call_function_single_async(cpu, csd); + } } void arch_trigger_cpumask_backtrace(const cpumask_t *mask, bool exclude_self) { - long this_cpu = get_cpu(); - - if (cpumask_test_cpu(this_cpu, mask) && !exclude_self) - dump_stack(); - - smp_call_function_many(mask, arch_dump_stack, NULL, 1); - - put_cpu(); + nmi_trigger_cpumask_backtrace(mask, exclude_self, raise_backtrace); } int mips_get_process_fp_mode(struct task_struct *task) @@ -691,19 +707,25 @@ int mips_get_process_fp_mode(struct task_struct *task) return value; } -static void prepare_for_fp_mode_switch(void *info) +static long prepare_for_fp_mode_switch(void *unused) { - struct mm_struct *mm = info; - - if (current->mm == mm) - lose_fpu(1); + /* + * This is icky, but we use this to simply ensure that all CPUs have + * context switched, regardless of whether they were previously running + * kernel or user code. This ensures that no CPU currently has its FPU + * enabled, or is about to attempt to enable it through any path other + * than enable_restore_fp_context() which will wait appropriately for + * fp_mode_switching to be zero. + */ + return 0; } int mips_set_process_fp_mode(struct task_struct *task, unsigned int value) { const unsigned int known_bits = PR_FP_MODE_FR | PR_FP_MODE_FRE; struct task_struct *t; - int max_users; + struct cpumask process_cpus; + int cpu; /* If nothing to change, return right away, successfully. */ if (value == mips_get_process_fp_mode(task)) @@ -736,35 +758,7 @@ int mips_set_process_fp_mode(struct task_struct *task, unsigned int value) if (!(value & PR_FP_MODE_FR) && raw_cpu_has_fpu && cpu_has_mips_r6) return -EOPNOTSUPP; - /* Proceed with the mode switch */ - preempt_disable(); - - /* Save FP & vector context, then disable FPU & MSA */ - if (task->signal == current->signal) - lose_fpu(1); - - /* Prevent any threads from obtaining live FP context */ - atomic_set(&task->mm->context.fp_mode_switching, 1); - smp_mb__after_atomic(); - - /* - * If there are multiple online CPUs then force any which are running - * threads in this process to lose their FPU context, which they can't - * regain until fp_mode_switching is cleared later. - */ - if (num_online_cpus() > 1) { - /* No need to send an IPI for the local CPU */ - max_users = (task->mm == current->mm) ? 1 : 0; - - if (atomic_read(¤t->mm->mm_users) > max_users) - smp_call_function(prepare_for_fp_mode_switch, - (void *)current->mm, 1); - } - - /* - * There are now no threads of the process with live FP context, so it - * is safe to proceed with the FP mode switch. - */ + /* Indicate the new FP mode in each thread */ for_each_thread(task, t) { /* Update desired FP register width */ if (value & PR_FP_MODE_FR) { @@ -781,9 +775,34 @@ int mips_set_process_fp_mode(struct task_struct *task, unsigned int value) clear_tsk_thread_flag(t, TIF_HYBRID_FPREGS); } - /* Allow threads to use FP again */ - atomic_set(&task->mm->context.fp_mode_switching, 0); - preempt_enable(); + /* + * We need to ensure that all threads in the process have switched mode + * before returning, in order to allow userland to not worry about + * races. We can do this by forcing all CPUs that any thread in the + * process may be running on to schedule something else - in this case + * prepare_for_fp_mode_switch(). 
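
[Aside on raise_backtrace() above: each per-CPU call_single_data_t is guarded by a busy cpumask so the CSD is never requeued while a hung CPU still owns it, which would corrupt the call queue. The claim/release protocol, sketched with C11 atomics standing in for cpumask_test_and_set_cpu()/cpumask_clear_cpu(); NR_CPUS and the call sequence in main() are illustrative.]

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

#define NR_CPUS 4

/* stand-in for backtrace_csd_busy: one claim bit per CPU's CSD */
static atomic_bool csd_busy[NR_CPUS];

static bool raise_backtrace_on(int cpu)
{
	/* test-and-set: if the bit was already set, the previous IPI was
	 * never handled, so reusing the CSD would be unsafe */
	if (atomic_exchange(&csd_busy[cpu], true)) {
		printf("Unable to send backtrace IPI to CPU%d - perhaps it hung?\n",
		       cpu);
		return false;
	}
	/* ...smp_call_function_single_async(cpu, csd) would go here... */
	return true;
}

static void handle_backtrace_on(int cpu)
{
	/* the target CPU dumps its stack, then releases its CSD for reuse */
	atomic_store(&csd_busy[cpu], false);
}

int main(void)
{
	raise_backtrace_on(1);		/* first request: accepted */
	raise_backtrace_on(1);		/* CPU1 never acked: refused */
	handle_backtrace_on(1);
	raise_backtrace_on(1);		/* accepted again */
	return 0;
}
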
+ * + * We begin by generating a mask of all CPUs that any thread in the + * process may be running on. + */ + cpumask_clear(&process_cpus); + for_each_thread(task, t) + cpumask_set_cpu(task_cpu(t), &process_cpus); + + /* + * Now we schedule prepare_for_fp_mode_switch() on each of those CPUs. + * + * The CPUs may have rescheduled already since we switched mode or + * generated the cpumask, but that doesn't matter. If the task in this + * process is scheduled out then our scheduling + * prepare_for_fp_mode_switch() will simply be redundant. If it's + * scheduled in then it will already have picked up the new FP mode + * whilst doing so. + */ + get_online_cpus(); + for_each_cpu_and(cpu, &process_cpus, cpu_online_mask) + work_on_cpu(cpu, prepare_for_fp_mode_switch, NULL); + put_online_cpus(); wake_up_var(&task->mm->context.fp_mode_switching); diff --git a/arch/mips/kernel/ptrace.c b/arch/mips/kernel/ptrace.c index 9f6c3f2aa2e2..e5ba56c01ee0 100644 --- a/arch/mips/kernel/ptrace.c +++ b/arch/mips/kernel/ptrace.c @@ -41,6 +41,7 @@ #include #include #include +#include #include #include #include @@ -589,9 +590,226 @@ static int fpr_set(struct task_struct *target, return err; } +#if defined(CONFIG_32BIT) || defined(CONFIG_MIPS32_O32) + +/* + * Copy the DSP context to the supplied 32-bit NT_MIPS_DSP buffer. + */ +static int dsp32_get(struct task_struct *target, + const struct user_regset *regset, + unsigned int pos, unsigned int count, + void *kbuf, void __user *ubuf) +{ + unsigned int start, num_regs, i; + u32 dspregs[NUM_DSP_REGS + 1]; + + BUG_ON(count % sizeof(u32)); + + if (!cpu_has_dsp) + return -EIO; + + start = pos / sizeof(u32); + num_regs = count / sizeof(u32); + + if (start + num_regs > NUM_DSP_REGS + 1) + return -EIO; + + for (i = start; i < num_regs; i++) + switch (i) { + case 0 ... NUM_DSP_REGS - 1: + dspregs[i] = target->thread.dsp.dspr[i]; + break; + case NUM_DSP_REGS: + dspregs[i] = target->thread.dsp.dspcontrol; + break; + } + return user_regset_copyout(&pos, &count, &kbuf, &ubuf, dspregs, 0, + sizeof(dspregs)); +} + +/* + * Copy the supplied 32-bit NT_MIPS_DSP buffer to the DSP context. + */ +static int dsp32_set(struct task_struct *target, + const struct user_regset *regset, + unsigned int pos, unsigned int count, + const void *kbuf, const void __user *ubuf) +{ + unsigned int start, num_regs, i; + u32 dspregs[NUM_DSP_REGS + 1]; + int err; + + BUG_ON(count % sizeof(u32)); + + if (!cpu_has_dsp) + return -EIO; + + start = pos / sizeof(u32); + num_regs = count / sizeof(u32); + + if (start + num_regs > NUM_DSP_REGS + 1) + return -EIO; + + err = user_regset_copyin(&pos, &count, &kbuf, &ubuf, dspregs, 0, + sizeof(dspregs)); + if (err) + return err; + + for (i = start; i < num_regs; i++) + switch (i) { + case 0 ... NUM_DSP_REGS - 1: + target->thread.dsp.dspr[i] = (s32)dspregs[i]; + break; + case NUM_DSP_REGS: + target->thread.dsp.dspcontrol = (s32)dspregs[i]; + break; + } + + return 0; +} + +#endif /* CONFIG_32BIT || CONFIG_MIPS32_O32 */ + +#ifdef CONFIG_64BIT + +/* + * Copy the DSP context to the supplied 64-bit NT_MIPS_DSP buffer. 
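
[Aside on the FP-mode switch above: the old IPI + fp_mode_switching handshake is replaced by gathering every CPU any thread of the process last ran on, then running a no-op through work_on_cpu() there so each one is forced to context switch. A userspace sketch of just the mask-building step; the task_cpu() values and online mask are invented for illustration.]

#include <stdio.h>

int main(void)
{
	/* assumption: task_cpu() results for four threads of one process */
	int thread_cpu[] = { 0, 2, 2, 5 };
	unsigned long process_cpus = 0;		/* stand-in for the cpumask */
	unsigned long online_mask = 0x2dUL;	/* CPUs 0,2,3,5 online (made up) */

	for (unsigned int i = 0; i < sizeof(thread_cpu) / sizeof(thread_cpu[0]); i++)
		process_cpus |= 1UL << thread_cpu[i];

	/* for_each_cpu_and(cpu, &process_cpus, cpu_online_mask) */
	for (int cpu = 0; cpu < 64; cpu++)
		if (process_cpus & online_mask & (1UL << cpu))
			printf("work_on_cpu(%d, prepare_for_fp_mode_switch, NULL)\n",
			       cpu);
	return 0;
}
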
+ */ +static int dsp64_get(struct task_struct *target, + const struct user_regset *regset, + unsigned int pos, unsigned int count, + void *kbuf, void __user *ubuf) +{ + unsigned int start, num_regs, i; + u64 dspregs[NUM_DSP_REGS + 1]; + + BUG_ON(count % sizeof(u64)); + + if (!cpu_has_dsp) + return -EIO; + + start = pos / sizeof(u64); + num_regs = count / sizeof(u64); + + if (start + num_regs > NUM_DSP_REGS + 1) + return -EIO; + + for (i = start; i < num_regs; i++) + switch (i) { + case 0 ... NUM_DSP_REGS - 1: + dspregs[i] = target->thread.dsp.dspr[i]; + break; + case NUM_DSP_REGS: + dspregs[i] = target->thread.dsp.dspcontrol; + break; + } + return user_regset_copyout(&pos, &count, &kbuf, &ubuf, dspregs, 0, + sizeof(dspregs)); +} + +/* + * Copy the supplied 64-bit NT_MIPS_DSP buffer to the DSP context. + */ +static int dsp64_set(struct task_struct *target, + const struct user_regset *regset, + unsigned int pos, unsigned int count, + const void *kbuf, const void __user *ubuf) +{ + unsigned int start, num_regs, i; + u64 dspregs[NUM_DSP_REGS + 1]; + int err; + + BUG_ON(count % sizeof(u64)); + + if (!cpu_has_dsp) + return -EIO; + + start = pos / sizeof(u64); + num_regs = count / sizeof(u64); + + if (start + num_regs > NUM_DSP_REGS + 1) + return -EIO; + + err = user_regset_copyin(&pos, &count, &kbuf, &ubuf, dspregs, 0, + sizeof(dspregs)); + if (err) + return err; + + for (i = start; i < num_regs; i++) + switch (i) { + case 0 ... NUM_DSP_REGS - 1: + target->thread.dsp.dspr[i] = dspregs[i]; + break; + case NUM_DSP_REGS: + target->thread.dsp.dspcontrol = dspregs[i]; + break; + } + + return 0; +} + +#endif /* CONFIG_64BIT */ + +/* + * Determine whether the DSP context is present. + */ +static int dsp_active(struct task_struct *target, + const struct user_regset *regset) +{ + return cpu_has_dsp ? NUM_DSP_REGS + 1 : -ENODEV; +} + +/* Copy the FP mode setting to the supplied NT_MIPS_FP_MODE buffer. */ +static int fp_mode_get(struct task_struct *target, + const struct user_regset *regset, + unsigned int pos, unsigned int count, + void *kbuf, void __user *ubuf) +{ + int fp_mode; + + fp_mode = mips_get_process_fp_mode(target); + return user_regset_copyout(&pos, &count, &kbuf, &ubuf, &fp_mode, 0, + sizeof(fp_mode)); +} + +/* + * Copy the supplied NT_MIPS_FP_MODE buffer to the FP mode setting. + * + * We optimize for the case where `count % sizeof(int) == 0', which + * is supposed to have been guaranteed by the kernel before calling + * us, e.g. in `ptrace_regset'. We enforce that requirement, so + * that we can safely avoid preinitializing temporaries for partial + * mode writes. 
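
[Aside on the dsp32/dsp64 regset handlers above: pos/count select a byte window over NUM_DSP_REGS data registers plus DSPControl, the standard user_regset windowing. A standalone rendering of the copy-out side. One caveat worth flagging: for the window to honor a nonzero pos, the fill loop must run to start + num_regs; the hunks above stop at num_regs alone, which appears to cover only the pos == 0 calls that ptrace_regset currently generates.]

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define NUM_DSP_REGS 6

static int dsp_window_get(const uint64_t dspr[NUM_DSP_REGS], uint64_t dspcontrol,
			  unsigned int pos, unsigned int count, void *out)
{
	uint64_t regs[NUM_DSP_REGS + 1];
	unsigned int start = pos / sizeof(uint64_t);
	unsigned int num_regs = count / sizeof(uint64_t);

	if (count % sizeof(uint64_t) || start + num_regs > NUM_DSP_REGS + 1)
		return -1;

	/* fill exactly the requested window: [start, start + num_regs) */
	for (unsigned int i = start; i < start + num_regs; i++)
		regs[i] = (i == NUM_DSP_REGS) ? dspcontrol : dspr[i];

	memcpy(out, &regs[start], count);	/* user_regset_copyout() stand-in */
	return 0;
}

int main(void)
{
	uint64_t dspr[NUM_DSP_REGS] = { 1, 2, 3, 4, 5, 6 }, dspctl = 0x55;
	uint64_t buf[2];

	/* window covering the last data register plus DSPControl */
	if (dsp_window_get(dspr, dspctl, 5 * sizeof(uint64_t),
			   2 * sizeof(uint64_t), buf) == 0)
		printf("got %llu and %#llx\n",
		       (unsigned long long)buf[0], (unsigned long long)buf[1]);
	return 0;
}
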
+ */ +static int fp_mode_set(struct task_struct *target, + const struct user_regset *regset, + unsigned int pos, unsigned int count, + const void *kbuf, const void __user *ubuf) +{ + int fp_mode; + int err; + + BUG_ON(count % sizeof(int)); + + if (pos + count > sizeof(fp_mode)) + return -EIO; + + err = user_regset_copyin(&pos, &count, &kbuf, &ubuf, &fp_mode, 0, + sizeof(fp_mode)); + if (err) + return err; + + if (count > 0) + err = mips_set_process_fp_mode(target, fp_mode); + + return err; +} + enum mips_regset { REGSET_GPR, REGSET_FPR, + REGSET_DSP, + REGSET_FP_MODE, }; struct pt_regs_offset { @@ -697,6 +915,23 @@ static const struct user_regset mips_regsets[] = { .get = fpr_get, .set = fpr_set, }, + [REGSET_DSP] = { + .core_note_type = NT_MIPS_DSP, + .n = NUM_DSP_REGS + 1, + .size = sizeof(u32), + .align = sizeof(u32), + .get = dsp32_get, + .set = dsp32_set, + .active = dsp_active, + }, + [REGSET_FP_MODE] = { + .core_note_type = NT_MIPS_FP_MODE, + .n = 1, + .size = sizeof(int), + .align = sizeof(int), + .get = fp_mode_get, + .set = fp_mode_set, + }, }; static const struct user_regset_view user_mips_view = { @@ -728,6 +963,23 @@ static const struct user_regset mips64_regsets[] = { .get = fpr_get, .set = fpr_set, }, + [REGSET_DSP] = { + .core_note_type = NT_MIPS_DSP, + .n = NUM_DSP_REGS + 1, + .size = sizeof(u64), + .align = sizeof(u64), + .get = dsp64_get, + .set = dsp64_set, + .active = dsp_active, + }, + [REGSET_FP_MODE] = { + .core_note_type = NT_MIPS_FP_MODE, + .n = 1, + .size = sizeof(int), + .align = sizeof(int), + .get = fp_mode_get, + .set = fp_mode_set, + }, }; static const struct user_regset_view user_mips64_view = { @@ -856,7 +1108,7 @@ long arch_ptrace(struct task_struct *child, long request, goto out; } dregs = __get_dsp_regs(child); - tmp = (unsigned long) (dregs[addr - DSP_BASE]); + tmp = dregs[addr - DSP_BASE]; break; } case DSP_CONTROL: diff --git a/arch/mips/kernel/ptrace32.c b/arch/mips/kernel/ptrace32.c index 7edc629304c8..bc348d44d151 100644 --- a/arch/mips/kernel/ptrace32.c +++ b/arch/mips/kernel/ptrace32.c @@ -142,7 +142,7 @@ long compat_arch_ptrace(struct task_struct *child, compat_long_t request, goto out; } dregs = __get_dsp_regs(child); - tmp = (unsigned long) (dregs[addr - DSP_BASE]); + tmp = dregs[addr - DSP_BASE]; break; } case DSP_CONTROL: diff --git a/arch/mips/kernel/relocate_kernel.S b/arch/mips/kernel/relocate_kernel.S index c6bbf2165051..419c92197b2f 100644 --- a/arch/mips/kernel/relocate_kernel.S +++ b/arch/mips/kernel/relocate_kernel.S @@ -85,7 +85,7 @@ done: #ifdef CONFIG_CPU_CAVIUM_OCTEON /* We need to flush I-cache before jumping to new kernel. - * Unfortunatelly, this code is cpu-specific. + * Unfortunately, this code is cpu-specific. */ .set push .set noreorder @@ -145,7 +145,7 @@ LEAF(kexec_smp_wait) #endif /* All parameters to new kernel are passed in registers a0-a3. - * kexec_args[0..3] are uses to prepare register values. + * kexec_args[0..3] are used to prepare register values. 
*/ kexec_args: diff --git a/arch/mips/kernel/scall32-o32.S b/arch/mips/kernel/scall32-o32.S index a9a7d78803cd..91d3c8c46097 100644 --- a/arch/mips/kernel/scall32-o32.S +++ b/arch/mips/kernel/scall32-o32.S @@ -590,3 +590,5 @@ EXPORT(sys_call_table) PTR sys_pkey_alloc PTR sys_pkey_free /* 4365 */ PTR sys_statx + PTR sys_rseq + PTR sys_io_pgetevents diff --git a/arch/mips/kernel/scall64-64.S b/arch/mips/kernel/scall64-64.S index 65d5aeeb9bdb..358d9599983d 100644 --- a/arch/mips/kernel/scall64-64.S +++ b/arch/mips/kernel/scall64-64.S @@ -439,4 +439,6 @@ EXPORT(sys_call_table) PTR sys_pkey_alloc PTR sys_pkey_free /* 5325 */ PTR sys_statx + PTR sys_rseq + PTR sys_io_pgetevents .size sys_call_table,.-sys_call_table diff --git a/arch/mips/kernel/scall64-n32.S b/arch/mips/kernel/scall64-n32.S index cbf190ef9e8a..c65eaacc1abf 100644 --- a/arch/mips/kernel/scall64-n32.S +++ b/arch/mips/kernel/scall64-n32.S @@ -434,4 +434,6 @@ EXPORT(sysn32_call_table) PTR sys_pkey_alloc PTR sys_pkey_free PTR sys_statx /* 6330 */ + PTR sys_rseq + PTR compat_sys_io_pgetevents .size sysn32_call_table,.-sysn32_call_table diff --git a/arch/mips/kernel/scall64-o32.S b/arch/mips/kernel/scall64-o32.S index 9ebe3e2403b1..73913f072e39 100644 --- a/arch/mips/kernel/scall64-o32.S +++ b/arch/mips/kernel/scall64-o32.S @@ -583,4 +583,6 @@ EXPORT(sys32_call_table) PTR sys_pkey_alloc PTR sys_pkey_free /* 4365 */ PTR sys_statx + PTR sys_rseq + PTR compat_sys_io_pgetevents .size sys32_call_table,.-sys32_call_table diff --git a/arch/mips/kernel/setup.c b/arch/mips/kernel/setup.c index 563188ac6fa2..c71d1eb7da59 100644 --- a/arch/mips/kernel/setup.c +++ b/arch/mips/kernel/setup.c @@ -36,6 +36,7 @@ #include #include #include +#include #include #include #include @@ -84,6 +85,11 @@ static struct resource bss_resource = { .name = "Kernel bss", }; static void *detect_magic __initdata = detect_memory_region; +#ifdef CONFIG_MIPS_AUTO_PFN_OFFSET +unsigned long ARCH_PFN_OFFSET; +EXPORT_SYMBOL(ARCH_PFN_OFFSET); +#endif + void __init add_memory_region(phys_addr_t start, phys_addr_t size, long type) { int x = boot_mem_map.nr_map; @@ -93,7 +99,7 @@ void __init add_memory_region(phys_addr_t start, phys_addr_t size, long type) * If the region reaches the top of the physical address space, adjust * the size slightly so that (start + size) doesn't overflow */ - if (start + size - 1 == (phys_addr_t)ULLONG_MAX) + if (start + size - 1 == PHYS_ADDR_MAX) --size; /* Sanity check */ @@ -376,7 +382,7 @@ static void __init bootmem_init(void) unsigned long reserved_end; unsigned long mapstart = ~0UL; unsigned long bootmap_size; - phys_addr_t ramstart = (phys_addr_t)ULLONG_MAX; + phys_addr_t ramstart = PHYS_ADDR_MAX; bool bootmap_valid = false; int i; @@ -441,6 +447,12 @@ static void __init bootmem_init(void) mapstart = max(reserved_end, start); } + if (min_low_pfn >= max_low_pfn) + panic("Incorrect memory mapping !!!"); + +#ifdef CONFIG_MIPS_AUTO_PFN_OFFSET + ARCH_PFN_OFFSET = PFN_UP(ramstart); +#else /* * Reserve any memory between the start of RAM and PHYS_OFFSET */ @@ -448,8 +460,6 @@ static void __init bootmem_init(void) add_memory_region(PHYS_OFFSET, ramstart - PHYS_OFFSET, BOOT_MEM_RESERVED); - if (min_low_pfn >= max_low_pfn) - panic("Incorrect memory mapping !!!"); if (min_low_pfn > ARCH_PFN_OFFSET) { pr_info("Wasting %lu bytes for tracking %lu unused pages\n", (min_low_pfn - ARCH_PFN_OFFSET) * sizeof(struct page), @@ -459,6 +469,7 @@ static void __init bootmem_init(void) ARCH_PFN_OFFSET - min_low_pfn); } min_low_pfn = ARCH_PFN_OFFSET; +#endif /* * 
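
[Aside on the CONFIG_MIPS_AUTO_PFN_OFFSET hunks above: the PFN offset is now computed at boot from the lowest RAM address rather than fixed at compile time, so boards whose RAM starts high in the physical map stop wasting mem_map entries below it. The arithmetic, under an assumed 4 KiB page size and a made-up RAM base.]

#include <stdio.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  (1UL << PAGE_SHIFT)
#define PFN_UP(x)  (((x) + PAGE_SIZE - 1) >> PAGE_SHIFT)

int main(void)
{
	unsigned long ramstart = 0x20000000;	/* assumption: RAM at 512 MiB */
	unsigned long arch_pfn_offset = PFN_UP(ramstart);

	/* mem_map[] no longer needs entries for PFNs [0, arch_pfn_offset) */
	printf("ARCH_PFN_OFFSET = %#lx (%lu struct page entries saved)\n",
	       arch_pfn_offset, arch_pfn_offset);
	return 0;
}
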
Determine low and high memory ranges @@ -1055,3 +1066,26 @@ static int __init debugfs_mips(void) } arch_initcall(debugfs_mips); #endif + +#if defined(CONFIG_DMA_MAYBE_COHERENT) && !defined(CONFIG_DMA_PERDEV_COHERENT) +/* User defined DMA coherency from command line. */ +enum coherent_io_user_state coherentio = IO_COHERENCE_DEFAULT; +EXPORT_SYMBOL_GPL(coherentio); +int hw_coherentio = 0; /* Actual hardware supported DMA coherency setting. */ + +static int __init setcoherentio(char *str) +{ + coherentio = IO_COHERENCE_ENABLED; + pr_info("Hardware DMA cache coherency (command line)\n"); + return 0; +} +early_param("coherentio", setcoherentio); + +static int __init setnocoherentio(char *str) +{ + coherentio = IO_COHERENCE_DISABLED; + pr_info("Software DMA cache coherency (command line)\n"); + return 0; +} +early_param("nocoherentio", setnocoherentio); +#endif diff --git a/arch/mips/kernel/signal.c b/arch/mips/kernel/signal.c index 9e224469c788..109ed163a6a6 100644 --- a/arch/mips/kernel/signal.c +++ b/arch/mips/kernel/signal.c @@ -592,13 +592,15 @@ SYSCALL_DEFINE3(sigaction, int, sig, const struct sigaction __user *, act, #endif #ifdef CONFIG_TRAD_SIGNALS -asmlinkage void sys_sigreturn(nabi_no_regargs struct pt_regs regs) +asmlinkage void sys_sigreturn(void) { struct sigframe __user *frame; + struct pt_regs *regs; sigset_t blocked; int sig; - frame = (struct sigframe __user *) regs.regs[29]; + regs = current_pt_regs(); + frame = (struct sigframe __user *)regs->regs[29]; if (!access_ok(VERIFY_READ, frame, sizeof(*frame))) goto badframe; if (__copy_from_user(&blocked, &frame->sf_mask, sizeof(blocked))) @@ -606,7 +608,7 @@ asmlinkage void sys_sigreturn(nabi_no_regargs struct pt_regs regs) set_current_blocked(&blocked); - sig = restore_sigcontext(®s, &frame->sf_sc); + sig = restore_sigcontext(regs, &frame->sf_sc); if (sig < 0) goto badframe; else if (sig) @@ -618,8 +620,8 @@ asmlinkage void sys_sigreturn(nabi_no_regargs struct pt_regs regs) __asm__ __volatile__( "move\t$29, %0\n\t" "j\tsyscall_exit" - :/* no outputs */ - :"r" (®s)); + : /* no outputs */ + : "r" (regs)); /* Unreached */ badframe: @@ -627,13 +629,15 @@ badframe: } #endif /* CONFIG_TRAD_SIGNALS */ -asmlinkage void sys_rt_sigreturn(nabi_no_regargs struct pt_regs regs) +asmlinkage void sys_rt_sigreturn(void) { struct rt_sigframe __user *frame; + struct pt_regs *regs; sigset_t set; int sig; - frame = (struct rt_sigframe __user *) regs.regs[29]; + regs = current_pt_regs(); + frame = (struct rt_sigframe __user *)regs->regs[29]; if (!access_ok(VERIFY_READ, frame, sizeof(*frame))) goto badframe; if (__copy_from_user(&set, &frame->rs_uc.uc_sigmask, sizeof(set))) @@ -641,7 +645,7 @@ asmlinkage void sys_rt_sigreturn(nabi_no_regargs struct pt_regs regs) set_current_blocked(&set); - sig = restore_sigcontext(®s, &frame->rs_uc.uc_mcontext); + sig = restore_sigcontext(regs, &frame->rs_uc.uc_mcontext); if (sig < 0) goto badframe; else if (sig) @@ -656,8 +660,8 @@ asmlinkage void sys_rt_sigreturn(nabi_no_regargs struct pt_regs regs) __asm__ __volatile__( "move\t$29, %0\n\t" "j\tsyscall_exit" - :/* no outputs */ - :"r" (®s)); + : /* no outputs */ + : "r" (regs)); /* Unreached */ badframe: @@ -801,6 +805,8 @@ static void handle_signal(struct ksignal *ksig, struct pt_regs *regs) regs->regs[0] = 0; /* Don't deal with this again. 
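
[Aside on the signal.c hunks above: the nabi_no_regargs trick of receiving struct pt_regs by value is dropped in favor of current_pt_regs(), so restore_sigcontext() scribbles on the real saved registers rather than on a copy at a magic stack position. current_pt_regs() is conventionally derived from the top of the task's kernel stack; a simplified model below (the real MIPS layout subtracts a small pad below the stack top, omitted here, and THREAD_SIZE is an assumption).]

#include <stdio.h>
#include <stdlib.h>

struct pt_regs { unsigned long regs[32]; unsigned long cp0_epc; };

#define THREAD_SIZE (16 * 1024UL)	/* assumption: kernel stack size */

/* user registers are saved at the top of the kernel stack; deriving the
 * pointer beats trusting a by-value copy of the whole frame */
static struct pt_regs *stack_pt_regs(void *stack_base)
{
	return (struct pt_regs *)((char *)stack_base + THREAD_SIZE) - 1;
}

int main(void)
{
	void *stack = malloc(THREAD_SIZE);
	struct pt_regs *regs = stack_pt_regs(stack);

	regs->regs[29] = 0x7fff1234;	/* pretend this is the saved user $sp */
	printf("signal frame would be read from %#lx\n", regs->regs[29]);
	free(stack);
	return 0;
}
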
*/ } + rseq_signal_deliver(ksig, regs); + if (sig_uses_siginfo(&ksig->ka, abi)) ret = abi->setup_rt_frame(vdso + abi->vdso->off_rt_sigreturn, ksig, regs, oldset); @@ -868,6 +874,7 @@ asmlinkage void do_notify_resume(struct pt_regs *regs, void *unused, if (thread_info_flags & _TIF_NOTIFY_RESUME) { clear_thread_flag(TIF_NOTIFY_RESUME); tracehook_notify_resume(regs); + rseq_handle_notify_resume(NULL, regs); } user_enter(); diff --git a/arch/mips/kernel/signal_n32.c b/arch/mips/kernel/signal_n32.c index b672cebb4a1a..8f65aaf9206d 100644 --- a/arch/mips/kernel/signal_n32.c +++ b/arch/mips/kernel/signal_n32.c @@ -64,13 +64,15 @@ struct rt_sigframe_n32 { struct ucontextn32 rs_uc; }; -asmlinkage void sysn32_rt_sigreturn(nabi_no_regargs struct pt_regs regs) +asmlinkage void sysn32_rt_sigreturn(void) { struct rt_sigframe_n32 __user *frame; + struct pt_regs *regs; sigset_t set; int sig; - frame = (struct rt_sigframe_n32 __user *) regs.regs[29]; + regs = current_pt_regs(); + frame = (struct rt_sigframe_n32 __user *)regs->regs[29]; if (!access_ok(VERIFY_READ, frame, sizeof(*frame))) goto badframe; if (__copy_conv_sigset_from_user(&set, &frame->rs_uc.uc_sigmask)) @@ -78,7 +80,7 @@ asmlinkage void sysn32_rt_sigreturn(nabi_no_regargs struct pt_regs regs) set_current_blocked(&set); - sig = restore_sigcontext(®s, &frame->rs_uc.uc_mcontext); + sig = restore_sigcontext(regs, &frame->rs_uc.uc_mcontext); if (sig < 0) goto badframe; else if (sig) @@ -93,8 +95,8 @@ asmlinkage void sysn32_rt_sigreturn(nabi_no_regargs struct pt_regs regs) __asm__ __volatile__( "move\t$29, %0\n\t" "j\tsyscall_exit" - :/* no outputs */ - :"r" (®s)); + : /* no outputs */ + : "r" (regs)); /* Unreached */ badframe: diff --git a/arch/mips/kernel/signal_o32.c b/arch/mips/kernel/signal_o32.c index 2b3572fb5f1b..b6e3ddef48a0 100644 --- a/arch/mips/kernel/signal_o32.c +++ b/arch/mips/kernel/signal_o32.c @@ -151,13 +151,15 @@ static int setup_frame_32(void *sig_return, struct ksignal *ksig, return 0; } -asmlinkage void sys32_rt_sigreturn(nabi_no_regargs struct pt_regs regs) +asmlinkage void sys32_rt_sigreturn(void) { struct rt_sigframe32 __user *frame; + struct pt_regs *regs; sigset_t set; int sig; - frame = (struct rt_sigframe32 __user *) regs.regs[29]; + regs = current_pt_regs(); + frame = (struct rt_sigframe32 __user *)regs->regs[29]; if (!access_ok(VERIFY_READ, frame, sizeof(*frame))) goto badframe; if (__copy_conv_sigset_from_user(&set, &frame->rs_uc.uc_sigmask)) @@ -165,7 +167,7 @@ asmlinkage void sys32_rt_sigreturn(nabi_no_regargs struct pt_regs regs) set_current_blocked(&set); - sig = restore_sigcontext32(®s, &frame->rs_uc.uc_mcontext); + sig = restore_sigcontext32(regs, &frame->rs_uc.uc_mcontext); if (sig < 0) goto badframe; else if (sig) @@ -180,8 +182,8 @@ asmlinkage void sys32_rt_sigreturn(nabi_no_regargs struct pt_regs regs) __asm__ __volatile__( "move\t$29, %0\n\t" "j\tsyscall_exit" - :/* no outputs */ - :"r" (®s)); + : /* no outputs */ + : "r" (regs)); /* Unreached */ badframe: @@ -251,13 +253,15 @@ struct mips_abi mips_abi_32 = { }; -asmlinkage void sys32_sigreturn(nabi_no_regargs struct pt_regs regs) +asmlinkage void sys32_sigreturn(void) { struct sigframe32 __user *frame; + struct pt_regs *regs; sigset_t blocked; int sig; - frame = (struct sigframe32 __user *) regs.regs[29]; + regs = current_pt_regs(); + frame = (struct sigframe32 __user *)regs->regs[29]; if (!access_ok(VERIFY_READ, frame, sizeof(*frame))) goto badframe; if (__copy_conv_sigset_from_user(&blocked, &frame->sf_mask)) @@ -265,7 +269,7 @@ asmlinkage void 
sys32_sigreturn(nabi_no_regargs struct pt_regs regs) set_current_blocked(&blocked); - sig = restore_sigcontext32(®s, &frame->sf_sc); + sig = restore_sigcontext32(regs, &frame->sf_sc); if (sig < 0) goto badframe; else if (sig) @@ -277,8 +281,8 @@ asmlinkage void sys32_sigreturn(nabi_no_regargs struct pt_regs regs) __asm__ __volatile__( "move\t$29, %0\n\t" "j\tsyscall_exit" - :/* no outputs */ - :"r" (®s)); + : /* no outputs */ + : "r" (regs)); /* Unreached */ badframe: diff --git a/arch/mips/kernel/traps.c b/arch/mips/kernel/traps.c index d67fa74622ee..f8871d5b7eb3 100644 --- a/arch/mips/kernel/traps.c +++ b/arch/mips/kernel/traps.c @@ -351,6 +351,7 @@ static void __show_regs(const struct pt_regs *regs) void show_regs(struct pt_regs *regs) { __show_regs((struct pt_regs *)regs); + dump_stack(); } void show_registers(struct pt_regs *regs) @@ -1220,13 +1221,6 @@ static int enable_restore_fp_context(int msa) { int err, was_fpu_owner, prior_msa; - /* - * If an FP mode switch is currently underway, wait for it to - * complete before proceeding. - */ - wait_var_event(¤t->mm->context.fp_mode_switching, - !atomic_read(¤t->mm->context.fp_mode_switching)); - if (!used_math()) { /* First time FP context user. */ preempt_disable(); diff --git a/arch/mips/kvm/mips.c b/arch/mips/kvm/mips.c index 7cd76f93a438..f7ea8e21656b 100644 --- a/arch/mips/kvm/mips.c +++ b/arch/mips/kvm/mips.c @@ -515,7 +515,7 @@ int kvm_vcpu_ioctl_interrupt(struct kvm_vcpu *vcpu, dvcpu->arch.wait = 0; if (swq_has_sleeper(&dvcpu->wq)) - swake_up(&dvcpu->wq); + swake_up_one(&dvcpu->wq); return 0; } @@ -1204,7 +1204,7 @@ static void kvm_mips_comparecount_func(unsigned long data) vcpu->arch.wait = 0; if (swq_has_sleeper(&vcpu->wq)) - swake_up(&vcpu->wq); + swake_up_one(&vcpu->wq); } /* low level hrtimer wake routine */ diff --git a/arch/mips/lantiq/early_printk.c b/arch/mips/lantiq/early_printk.c index 44bccaee822b..c4aa140b7c91 100644 --- a/arch/mips/lantiq/early_printk.c +++ b/arch/mips/lantiq/early_printk.c @@ -8,6 +8,7 @@ #include #include +#include #define ASC_BUF 1024 #define LTQ_ASC_FSTAT ((u32 *)(LTQ_EARLY_ASC + 0x0048)) diff --git a/arch/mips/lantiq/prom.c b/arch/mips/lantiq/prom.c index 9ff7ccde9de0..d984bd5c2ec5 100644 --- a/arch/mips/lantiq/prom.c +++ b/arch/mips/lantiq/prom.c @@ -9,7 +9,6 @@ #include #include #include -#include #include #include @@ -114,10 +113,3 @@ void __init prom_init(void) panic("failed to register_vsmp_smp_ops()"); #endif } - -int __init plat_of_setup(void) -{ - return of_platform_default_populate(NULL, NULL, NULL); -} - -arch_initcall(plat_of_setup); diff --git a/arch/mips/lantiq/xway/dma.c b/arch/mips/lantiq/xway/dma.c index 805b3a6ab2d6..4b9fbb6744ad 100644 --- a/arch/mips/lantiq/xway/dma.c +++ b/arch/mips/lantiq/xway/dma.c @@ -130,10 +130,9 @@ ltq_dma_alloc(struct ltq_dma_channel *ch) unsigned long flags; ch->desc = 0; - ch->desc_base = dma_alloc_coherent(NULL, + ch->desc_base = dma_zalloc_coherent(NULL, LTQ_DESC_NUM * LTQ_DESC_SIZE, &ch->phys, GFP_ATOMIC); - memset(ch->desc_base, 0, LTQ_DESC_NUM * LTQ_DESC_SIZE); spin_lock_irqsave(<q_dma_lock, flags); ltq_dma_w32(ch->nr, LTQ_DMA_CS); diff --git a/arch/mips/lasat/prom.c b/arch/mips/lasat/prom.c index 17e15b50a551..37b8fc5b9ac9 100644 --- a/arch/mips/lasat/prom.c +++ b/arch/mips/lasat/prom.c @@ -13,6 +13,7 @@ #include #include #include +#include #include "at93c.h" #include diff --git a/arch/mips/lib/memset.S b/arch/mips/lib/memset.S index 1cc306520a55..3a6f34ef5ffc 100644 --- a/arch/mips/lib/memset.S +++ b/arch/mips/lib/memset.S @@ -195,6 +195,7 @@ 
#endif #else PTR_SUBU t0, $0, a2 + move a2, zero /* No remaining longs */ PTR_ADDIU t0, 1 STORE_BYTE(0) STORE_BYTE(1) @@ -231,16 +232,25 @@ #ifdef CONFIG_CPU_MIPSR6 .Lbyte_fixup\@: - PTR_SUBU a2, $0, t0 + /* + * unset_bytes = (#bytes - (#unaligned bytes)) - (-#unaligned bytes remaining + 1) + 1 + * a2 = a2 - t0 + 1 + */ + PTR_SUBU a2, t0 jr ra PTR_ADDIU a2, 1 #endif /* CONFIG_CPU_MIPSR6 */ .Lfirst_fixup\@: + /* unset_bytes already in a2 */ jr ra nop .Lfwd_fixup\@: + /* + * unset_bytes = partial_start_addr + #bytes - fault_addr + * a2 = t1 + (a2 & 3f) - $28->task->BUADDR + */ PTR_L t0, TI_TASK($28) andi a2, 0x3f LONG_L t0, THREAD_BUADDR(t0) @@ -249,6 +259,10 @@ LONG_SUBU a2, t0 .Lpartial_fixup\@: + /* + * unset_bytes = partial_end_addr + #bytes - fault_addr + * a2 = a0 + (a2 & STORMASK) - $28->task->BUADDR + */ PTR_L t0, TI_TASK($28) andi a2, STORMASK LONG_L t0, THREAD_BUADDR(t0) @@ -257,10 +271,15 @@ LONG_SUBU a2, t0 .Llast_fixup\@: + /* unset_bytes already in a2 */ jr ra nop .Lsmall_fixup\@: + /* + * unset_bytes = end_addr - current_addr + 1 + * a2 = t1 - a0 + 1 + */ PTR_SUBU a2, t1, a0 jr ra PTR_ADDIU a2, 1 diff --git a/arch/mips/loongson32/Platform b/arch/mips/loongson32/Platform index ffe01c6d0037..a0dbb3b2f2de 100644 --- a/arch/mips/loongson32/Platform +++ b/arch/mips/loongson32/Platform @@ -1,8 +1,4 @@ -cflags-$(CONFIG_CPU_LOONGSON1) += \ - $(call cc-option,-march=mips32r2,-mips32r2 -U_MIPS_ISA -D_MIPS_ISA=_MIPS_ISA_MIPS32) \ - -Wa,-mips32r2 -Wa,--trap - +cflags-$(CONFIG_CPU_LOONGSON1) += -march=mips32 -Wa,--trap platform-$(CONFIG_MACH_LOONGSON32) += loongson32/ cflags-$(CONFIG_MACH_LOONGSON32) += -I$(srctree)/arch/mips/include/asm/mach-loongson32 -load-$(CONFIG_LOONGSON1_LS1B) += 0xffffffff80100000 -load-$(CONFIG_LOONGSON1_LS1C) += 0xffffffff80100000 +load-$(CONFIG_CPU_LOONGSON1) += 0xffffffff80100000 diff --git a/arch/mips/loongson64/Kconfig b/arch/mips/loongson64/Kconfig index c79e6a565572..c865b4b9b775 100644 --- a/arch/mips/loongson64/Kconfig +++ b/arch/mips/loongson64/Kconfig @@ -91,7 +91,6 @@ config LOONGSON_MACH3X select LOONGSON_MC146818 select ZONE_DMA32 select LEFI_FIRMWARE_INTERFACE - select PHYS48_TO_HT40 help Generic Loongson 3 family machines utilize the 3A/3B revision of Loongson processor and RS780/SBX00 chipset. 
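
[Aside on the memset.S fixup comments above: each fault-fixup label must leave "bytes left unset" in a2 for __bzero's caller. Two of the four formulas in plain C, with the register names kept and the addresses invented purely for illustration.]

#include <stdio.h>

int main(void)
{
	/* .Lsmall_fixup: unset_bytes = end_addr - current_addr + 1 */
	unsigned long a0 = 0x1008;	/* store address when the fault hit */
	unsigned long t1 = 0x100f;	/* last byte the small loop had to set */
	printf("small fixup: %lu bytes unset\n", t1 - a0 + 1);

	/* .Lpartial_fixup: unset_bytes = partial_end_addr + #bytes - fault_addr */
	unsigned long fault = 0x1010;		/* THREAD_BUADDR (made up) */
	unsigned long a2_lowbits = 0x3;		/* a2 & STORMASK: trailing bytes */
	unsigned long partial_end = 0x1010;	/* a0 at the partial-word stores */
	printf("partial fixup: %lu bytes unset\n",
	       partial_end + a2_lowbits - fault);
	return 0;
}
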
@@ -130,10 +129,6 @@ config LOONGSON_UART_BASE default y depends on EARLY_PRINTK || SERIAL_8250 -config PHYS48_TO_HT40 - bool - default y if CPU_LOONGSON3 - config LOONGSON_MC146818 bool default n diff --git a/arch/mips/loongson64/common/Makefile b/arch/mips/loongson64/common/Makefile index 8235ac7eac95..57ee03022941 100644 --- a/arch/mips/loongson64/common/Makefile +++ b/arch/mips/loongson64/common/Makefile @@ -6,6 +6,7 @@ obj-y += setup.o init.o cmdline.o env.o time.o reset.o irq.o \ bonito-irq.o mem.o machtype.o platform.o serial.o obj-$(CONFIG_PCI) += pci.o +obj-$(CONFIG_CPU_LOONGSON2) += dma.o # # Serial port support @@ -25,8 +26,3 @@ obj-$(CONFIG_CS5536) += cs5536/ # obj-$(CONFIG_SUSPEND) += pm.o - -# -# Big Memory (SWIOTLB) Support -# -obj-$(CONFIG_SWIOTLB) += dma-swiotlb.o diff --git a/arch/mips/loongson64/common/cs5536/cs5536_ohci.c b/arch/mips/loongson64/common/cs5536/cs5536_ohci.c index f7c905e50dc4..92dc6bafc127 100644 --- a/arch/mips/loongson64/common/cs5536/cs5536_ohci.c +++ b/arch/mips/loongson64/common/cs5536/cs5536_ohci.c @@ -138,7 +138,7 @@ u32 pci_ohci_read_reg(int reg) break; case PCI_OHCI_INT_REG: _rdmsr(DIVIL_MSR_REG(PIC_YSEL_LOW), &hi, &lo); - if ((lo & 0x00000f00) == CS5536_USB_INTR) + if (((lo >> PIC_YSEL_LOW_USB_SHIFT) & 0xf) == CS5536_USB_INTR) conf_data = 1; break; default: diff --git a/arch/mips/loongson64/common/dma-swiotlb.c b/arch/mips/loongson64/common/dma-swiotlb.c deleted file mode 100644 index 6a739f8ae110..000000000000 --- a/arch/mips/loongson64/common/dma-swiotlb.c +++ /dev/null @@ -1,109 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0 -#include -#include -#include -#include -#include -#include - -#include -#include -#include - -static void *loongson_dma_alloc_coherent(struct device *dev, size_t size, - dma_addr_t *dma_handle, gfp_t gfp, unsigned long attrs) -{ - void *ret = swiotlb_alloc(dev, size, dma_handle, gfp, attrs); - - mb(); - return ret; -} - -static dma_addr_t loongson_dma_map_page(struct device *dev, struct page *page, - unsigned long offset, size_t size, - enum dma_data_direction dir, - unsigned long attrs) -{ - dma_addr_t daddr = swiotlb_map_page(dev, page, offset, size, - dir, attrs); - mb(); - return daddr; -} - -static int loongson_dma_map_sg(struct device *dev, struct scatterlist *sg, - int nents, enum dma_data_direction dir, - unsigned long attrs) -{ - int r = swiotlb_map_sg_attrs(dev, sg, nents, dir, attrs); - mb(); - - return r; -} - -static void loongson_dma_sync_single_for_device(struct device *dev, - dma_addr_t dma_handle, size_t size, - enum dma_data_direction dir) -{ - swiotlb_sync_single_for_device(dev, dma_handle, size, dir); - mb(); -} - -static void loongson_dma_sync_sg_for_device(struct device *dev, - struct scatterlist *sg, int nents, - enum dma_data_direction dir) -{ - swiotlb_sync_sg_for_device(dev, sg, nents, dir); - mb(); -} - -static int loongson_dma_supported(struct device *dev, u64 mask) -{ - if (mask > DMA_BIT_MASK(loongson_sysconf.dma_mask_bits)) - return 0; - return swiotlb_dma_supported(dev, mask); -} - -dma_addr_t __phys_to_dma(struct device *dev, phys_addr_t paddr) -{ - long nid; -#ifdef CONFIG_PHYS48_TO_HT40 - /* We extract 2bit node id (bit 44~47, only bit 44~45 used now) from - * Loongson-3's 48bit address space and embed it into 40bit */ - nid = (paddr >> 44) & 0x3; - paddr = ((nid << 44) ^ paddr) | (nid << 37); -#endif - return paddr; -} - -phys_addr_t __dma_to_phys(struct device *dev, dma_addr_t daddr) -{ - long nid; -#ifdef CONFIG_PHYS48_TO_HT40 - /* We extract 2bit node id (bit 44~47, only bit 44~45 
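
[Aside on the cs5536_ohci.c one-liner above: the old test masked the interrupt field in place, (lo & 0x00000f00), but compared it against the unshifted interrupt number, so it could never match. The fix shifts the field down before comparing. A sketch with the shift and interrupt number assumed; the real values live in the cs5536 headers.]

#include <stdio.h>

#define PIC_YSEL_LOW_USB_SHIFT 8	/* assumption: USB field at bits 8..11 */
#define CS5536_USB_INTR        11	/* assumption: an irq number, not a mask */

int main(void)
{
	unsigned int lo = CS5536_USB_INTR << PIC_YSEL_LOW_USB_SHIFT;

	/* old test: field left in place, compared against an unshifted value */
	printf("masked compare:  %d\n", (lo & 0x00000f00) == CS5536_USB_INTR);

	/* fixed test: shift the field down, then mask and compare */
	printf("shifted compare: %d\n",
	       ((lo >> PIC_YSEL_LOW_USB_SHIFT) & 0xf) == CS5536_USB_INTR);
	return 0;
}
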
used now) from - * Loongson-3's 48bit address space and embed it into 40bit */ - nid = (daddr >> 37) & 0x3; - daddr = ((nid << 37) ^ daddr) | (nid << 44); -#endif - return daddr; -} - -static const struct dma_map_ops loongson_dma_map_ops = { - .alloc = loongson_dma_alloc_coherent, - .free = swiotlb_free, - .map_page = loongson_dma_map_page, - .unmap_page = swiotlb_unmap_page, - .map_sg = loongson_dma_map_sg, - .unmap_sg = swiotlb_unmap_sg_attrs, - .sync_single_for_cpu = swiotlb_sync_single_for_cpu, - .sync_single_for_device = loongson_dma_sync_single_for_device, - .sync_sg_for_cpu = swiotlb_sync_sg_for_cpu, - .sync_sg_for_device = loongson_dma_sync_sg_for_device, - .mapping_error = swiotlb_dma_mapping_error, - .dma_supported = loongson_dma_supported, -}; - -void __init plat_swiotlb_setup(void) -{ - swiotlb_init(1); - mips_dma_map_ops = &loongson_dma_map_ops; -} diff --git a/arch/mips/loongson64/common/dma.c b/arch/mips/loongson64/common/dma.c new file mode 100644 index 000000000000..48f04126bde2 --- /dev/null +++ b/arch/mips/loongson64/common/dma.c @@ -0,0 +1,18 @@ +// SPDX-License-Identifier: GPL-2.0 +#include + +dma_addr_t __phys_to_dma(struct device *dev, phys_addr_t paddr) +{ + return paddr | 0x80000000; +} + +phys_addr_t __dma_to_phys(struct device *dev, dma_addr_t dma_addr) +{ +#if defined(CONFIG_CPU_LOONGSON2F) && defined(CONFIG_64BIT) + if (dma_addr > 0x8fffffff) + return dma_addr; + return dma_addr & 0x0fffffff; +#else + return dma_addr & 0x7fffffff; +#endif +} diff --git a/arch/mips/loongson64/common/early_printk.c b/arch/mips/loongson64/common/early_printk.c index 6ca632e529dc..a782e2b24747 100644 --- a/arch/mips/loongson64/common/early_printk.c +++ b/arch/mips/loongson64/common/early_printk.c @@ -10,6 +10,7 @@ * option) any later version. 
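
[Aside on the new loongson64/common/dma.c above: the per-device swiotlb ops are replaced by two straight address translations for Loongson-2, where device-visible DMA addresses sit in a 2 GiB window. A round-trip sketch of the generic (non-2F) case from the hunk; the physical address is made up.]

#include <stdint.h>
#include <stdio.h>

static uint32_t ls2_phys_to_dma(uint32_t paddr)
{
	return paddr | 0x80000000u;	/* place in the device's 2 GiB window */
}

static uint32_t ls2_dma_to_phys(uint32_t daddr)
{
	return daddr & 0x7fffffffu;	/* strip the window bit again */
}

int main(void)
{
	uint32_t paddr = 0x01234567;
	uint32_t daddr = ls2_phys_to_dma(paddr);

	printf("phys %#x -> dma %#x -> phys %#x\n",
	       paddr, daddr, ls2_dma_to_phys(daddr));
	return 0;
}
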
*/ #include +#include #include diff --git a/arch/mips/loongson64/common/env.c b/arch/mips/loongson64/common/env.c index 1e8a955ae5a8..8f68ee02a8c2 100644 --- a/arch/mips/loongson64/common/env.c +++ b/arch/mips/loongson64/common/env.c @@ -198,7 +198,8 @@ void __init prom_init_env(void) break; case PRID_REV_LOONGSON3A_R1: case PRID_REV_LOONGSON3A_R2: - case PRID_REV_LOONGSON3A_R3: + case PRID_REV_LOONGSON3A_R3_0: + case PRID_REV_LOONGSON3A_R3_1: cpu_clock_freq = 900000000; break; case PRID_REV_LOONGSON3B_R1: diff --git a/arch/mips/loongson64/loongson-3/Makefile b/arch/mips/loongson64/loongson-3/Makefile index 44bc1482158b..b5a0c2fa5446 100644 --- a/arch/mips/loongson64/loongson-3/Makefile +++ b/arch/mips/loongson64/loongson-3/Makefile @@ -1,7 +1,7 @@ # # Makefile for Loongson-3 family machines # -obj-y += irq.o cop2-ex.o platform.o acpi_init.o +obj-y += irq.o cop2-ex.o platform.o acpi_init.o dma.o obj-$(CONFIG_SMP) += smp.o diff --git a/arch/mips/loongson64/loongson-3/dma.c b/arch/mips/loongson64/loongson-3/dma.c new file mode 100644 index 000000000000..5e86635f71db --- /dev/null +++ b/arch/mips/loongson64/loongson-3/dma.c @@ -0,0 +1,25 @@ +// SPDX-License-Identifier: GPL-2.0 +#include +#include +#include + +dma_addr_t __phys_to_dma(struct device *dev, phys_addr_t paddr) +{ + /* We extract 2bit node id (bit 44~47, only bit 44~45 used now) from + * Loongson-3's 48bit address space and embed it into 40bit */ + long nid = (paddr >> 44) & 0x3; + return ((nid << 44) ^ paddr) | (nid << 37); +} + +phys_addr_t __dma_to_phys(struct device *dev, dma_addr_t daddr) +{ + /* We extract 2bit node id (bit 44~47, only bit 44~45 used now) from + * Loongson-3's 48bit address space and embed it into 40bit */ + long nid = (daddr >> 37) & 0x3; + return ((nid << 37) ^ daddr) | (nid << 44); +} + +void __init plat_swiotlb_setup(void) +{ + swiotlb_init(1); +} diff --git a/arch/mips/loongson64/loongson-3/smp.c b/arch/mips/loongson64/loongson-3/smp.c index 8501109bb0f0..fea95d003269 100644 --- a/arch/mips/loongson64/loongson-3/smp.c +++ b/arch/mips/loongson64/loongson-3/smp.c @@ -682,7 +682,8 @@ void play_dead(void) (void *)CKSEG1ADDR((unsigned long)loongson3a_r1_play_dead); break; case PRID_REV_LOONGSON3A_R2: - case PRID_REV_LOONGSON3A_R3: + case PRID_REV_LOONGSON3A_R3_0: + case PRID_REV_LOONGSON3A_R3_1: play_dead_at_ckseg1 = (void *)CKSEG1ADDR((unsigned long)loongson3a_r2r3_play_dead); break; diff --git a/arch/mips/mm/Makefile b/arch/mips/mm/Makefile index c463bdad45c7..3e5bb203c95a 100644 --- a/arch/mips/mm/Makefile +++ b/arch/mips/mm/Makefile @@ -3,7 +3,7 @@ # Makefile for the Linux/MIPS-specific parts of the memory manager. 
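
[Aside on the new loongson-3/dma.c above: it carries the node-id folding that used to hide behind CONFIG_PHYS48_TO_HT40. The 2-bit node id is moved from physical address bits 44..45 into bits 37..38 of the 40-bit HT DMA window, and back. A self-contained round-trip check; the node and offset are made up, and the trip only holds when bits 37..38 of the physical address start out clear, which the hardware layout is assumed to guarantee.]

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

static uint64_t ls3_phys_to_dma(uint64_t paddr)
{
	uint64_t nid = (paddr >> 44) & 0x3;
	return ((nid << 44) ^ paddr) | (nid << 37);	/* clear 44..45, set 37..38 */
}

static uint64_t ls3_dma_to_phys(uint64_t daddr)
{
	uint64_t nid = (daddr >> 37) & 0x3;
	return ((nid << 37) ^ daddr) | (nid << 44);	/* clear 37..38, set 44..45 */
}

int main(void)
{
	/* node 2, 1 GiB into that node's memory */
	uint64_t paddr = (2ULL << 44) | 0x40000000;
	uint64_t daddr = ls3_phys_to_dma(paddr);

	printf("phys %#llx -> dma %#llx\n",
	       (unsigned long long)paddr, (unsigned long long)daddr);
	assert(ls3_dma_to_phys(daddr) == paddr);
	return 0;
}
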
# -obj-y += cache.o dma-default.o extable.o fault.o \ +obj-y += cache.o extable.o fault.o \ gup.o init.o mmap.o page.o page-funcs.o \ pgtable.o tlbex.o tlbex-fault.o tlb-funcs.o @@ -17,6 +17,7 @@ obj-$(CONFIG_32BIT) += ioremap.o pgtable-32.o obj-$(CONFIG_64BIT) += pgtable-64.o obj-$(CONFIG_HIGHMEM) += highmem.o obj-$(CONFIG_HUGETLB_PAGE) += hugetlbpage.o +obj-$(CONFIG_DMA_NONCOHERENT) += dma-noncoherent.o obj-$(CONFIG_CPU_R4K_CACHE_TLB) += c-r4k.o cex-gen.o tlb-r4k.o obj-$(CONFIG_CPU_R3000) += c-r3k.o tlb-r3k.o diff --git a/arch/mips/mm/c-r4k.c b/arch/mips/mm/c-r4k.c index e12dfa48b478..a9ef057c79fe 100644 --- a/arch/mips/mm/c-r4k.c +++ b/arch/mips/mm/c-r4k.c @@ -830,12 +830,13 @@ static void r4k_flush_icache_user_range(unsigned long start, unsigned long end) return __r4k_flush_icache_range(start, end, true); } -#if defined(CONFIG_DMA_NONCOHERENT) || defined(CONFIG_DMA_MAYBE_COHERENT) +#ifdef CONFIG_DMA_NONCOHERENT static void r4k_dma_cache_wback_inv(unsigned long addr, unsigned long size) { /* Catch bad driver code */ - BUG_ON(size == 0); + if (WARN_ON(size == 0)) + return; preempt_disable(); if (cpu_has_inclusive_pcaches) { @@ -871,7 +872,8 @@ static void r4k_dma_cache_wback_inv(unsigned long addr, unsigned long size) static void r4k_dma_cache_inv(unsigned long addr, unsigned long size) { /* Catch bad driver code */ - BUG_ON(size == 0); + if (WARN_ON(size == 0)) + return; preempt_disable(); if (cpu_has_inclusive_pcaches) { @@ -904,7 +906,7 @@ static void r4k_dma_cache_inv(unsigned long addr, unsigned long size) bc_inv(addr, size); __sync(); } -#endif /* CONFIG_DMA_NONCOHERENT || CONFIG_DMA_MAYBE_COHERENT */ +#endif /* CONFIG_DMA_NONCOHERENT */ struct flush_cache_sigtramp_args { struct mm_struct *mm; @@ -1505,6 +1507,14 @@ static void probe_pcache(void) if (c->dcache.flags & MIPS_CACHE_PINDEX) c->dcache.flags &= ~MIPS_CACHE_ALIASES; + /* + * In systems with CM the icache fills from L2 or closer caches, and + * thus sees remote stores without needing to write them back any + * further than that. + */ + if (mips_cm_present()) + c->icache.flags |= MIPS_IC_SNOOPS_REMOTE; + switch (current_cpu_type()) { case CPU_20KC: /* diff --git a/arch/mips/mm/cache.c b/arch/mips/mm/cache.c index 0d3c656feba0..70a523151ff3 100644 --- a/arch/mips/mm/cache.c +++ b/arch/mips/mm/cache.c @@ -56,7 +56,7 @@ EXPORT_SYMBOL_GPL(local_flush_data_cache_page); EXPORT_SYMBOL(flush_data_cache_page); EXPORT_SYMBOL(flush_icache_all); -#if defined(CONFIG_DMA_NONCOHERENT) || defined(CONFIG_DMA_MAYBE_COHERENT) +#ifdef CONFIG_DMA_NONCOHERENT /* DMA cache operations. */ void (*_dma_cache_wback_inv)(unsigned long start, unsigned long size); @@ -65,7 +65,7 @@ void (*_dma_cache_inv)(unsigned long start, unsigned long size); EXPORT_SYMBOL(_dma_cache_wback_inv); -#endif /* CONFIG_DMA_NONCOHERENT || CONFIG_DMA_MAYBE_COHERENT */ +#endif /* CONFIG_DMA_NONCOHERENT */ /* * We could optimize the case where the cache argument is not BCACHE but diff --git a/arch/mips/mm/dma-default.c b/arch/mips/mm/dma-default.c deleted file mode 100644 index f9fef0028ca2..000000000000 --- a/arch/mips/mm/dma-default.c +++ /dev/null @@ -1,404 +0,0 @@ -/* - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. - * - * Copyright (C) 2000 Ani Joshi - * Copyright (C) 2000, 2001, 06 Ralf Baechle - * swiped from i386, and cloned for MIPS by Geert, polished by Ralf. 
- */ - -#include -#include -#include -#include -#include -#include -#include -#include -#include - -#include -#include -#include - -#include - -#if defined(CONFIG_DMA_MAYBE_COHERENT) && !defined(CONFIG_DMA_PERDEV_COHERENT) -/* User defined DMA coherency from command line. */ -enum coherent_io_user_state coherentio = IO_COHERENCE_DEFAULT; -EXPORT_SYMBOL_GPL(coherentio); -int hw_coherentio = 0; /* Actual hardware supported DMA coherency setting. */ - -static int __init setcoherentio(char *str) -{ - coherentio = IO_COHERENCE_ENABLED; - pr_info("Hardware DMA cache coherency (command line)\n"); - return 0; -} -early_param("coherentio", setcoherentio); - -static int __init setnocoherentio(char *str) -{ - coherentio = IO_COHERENCE_DISABLED; - pr_info("Software DMA cache coherency (command line)\n"); - return 0; -} -early_param("nocoherentio", setnocoherentio); -#endif - -static inline struct page *dma_addr_to_page(struct device *dev, - dma_addr_t dma_addr) -{ - return pfn_to_page( - plat_dma_addr_to_phys(dev, dma_addr) >> PAGE_SHIFT); -} - -/* - * The affected CPUs below in 'cpu_needs_post_dma_flush()' can - * speculatively fill random cachelines with stale data at any time, - * requiring an extra flush post-DMA. - * - * Warning on the terminology - Linux calls an uncached area coherent; - * MIPS terminology calls memory areas with hardware maintained coherency - * coherent. - * - * Note that the R14000 and R16000 should also be checked for in this - * condition. However this function is only called on non-I/O-coherent - * systems and only the R10000 and R12000 are used in such systems, the - * SGI IP28 Indigo² rsp. SGI IP32 aka O2. - */ -static inline bool cpu_needs_post_dma_flush(struct device *dev) -{ - if (plat_device_is_coherent(dev)) - return false; - - switch (boot_cpu_type()) { - case CPU_R10000: - case CPU_R12000: - case CPU_BMIPS5000: - return true; - - default: - /* - * Presence of MAARs suggests that the CPU supports - * speculatively prefetching data, and therefore requires - * the post-DMA flush/invalidate. 
- */ - return cpu_has_maar; - } -} - -static gfp_t massage_gfp_flags(const struct device *dev, gfp_t gfp) -{ - gfp_t dma_flag; - -#ifdef CONFIG_ISA - if (dev == NULL) - dma_flag = __GFP_DMA; - else -#endif -#if defined(CONFIG_ZONE_DMA32) && defined(CONFIG_ZONE_DMA) - if (dev == NULL || dev->coherent_dma_mask < DMA_BIT_MASK(32)) - dma_flag = __GFP_DMA; - else if (dev->coherent_dma_mask < DMA_BIT_MASK(64)) - dma_flag = __GFP_DMA32; - else -#endif -#if defined(CONFIG_ZONE_DMA32) && !defined(CONFIG_ZONE_DMA) - if (dev == NULL || dev->coherent_dma_mask < DMA_BIT_MASK(64)) - dma_flag = __GFP_DMA32; - else -#endif -#if defined(CONFIG_ZONE_DMA) && !defined(CONFIG_ZONE_DMA32) - if (dev == NULL || - dev->coherent_dma_mask < DMA_BIT_MASK(sizeof(phys_addr_t) * 8)) - dma_flag = __GFP_DMA; - else -#endif - dma_flag = 0; - - /* Don't invoke OOM killer */ - gfp |= __GFP_NORETRY; - - return gfp | dma_flag; -} - -static void *mips_dma_alloc_coherent(struct device *dev, size_t size, - dma_addr_t *dma_handle, gfp_t gfp, unsigned long attrs) -{ - void *ret; - struct page *page = NULL; - unsigned int count = PAGE_ALIGN(size) >> PAGE_SHIFT; - - gfp = massage_gfp_flags(dev, gfp); - - if (IS_ENABLED(CONFIG_DMA_CMA) && gfpflags_allow_blocking(gfp)) - page = dma_alloc_from_contiguous(dev, count, get_order(size), - gfp); - if (!page) - page = alloc_pages(gfp, get_order(size)); - - if (!page) - return NULL; - - ret = page_address(page); - memset(ret, 0, size); - *dma_handle = plat_map_dma_mem(dev, ret, size); - if (!(attrs & DMA_ATTR_NON_CONSISTENT) && - !plat_device_is_coherent(dev)) { - dma_cache_wback_inv((unsigned long) ret, size); - ret = UNCAC_ADDR(ret); - } - - return ret; -} - -static void mips_dma_free_coherent(struct device *dev, size_t size, void *vaddr, - dma_addr_t dma_handle, unsigned long attrs) -{ - unsigned long addr = (unsigned long) vaddr; - unsigned int count = PAGE_ALIGN(size) >> PAGE_SHIFT; - struct page *page = NULL; - - plat_unmap_dma_mem(dev, dma_handle, size, DMA_BIDIRECTIONAL); - - if (!(attrs & DMA_ATTR_NON_CONSISTENT) && !plat_device_is_coherent(dev)) - addr = CAC_ADDR(addr); - - page = virt_to_page((void *) addr); - - if (!dma_release_from_contiguous(dev, page, count)) - __free_pages(page, get_order(size)); -} - -static int mips_dma_mmap(struct device *dev, struct vm_area_struct *vma, - void *cpu_addr, dma_addr_t dma_addr, size_t size, - unsigned long attrs) -{ - unsigned long user_count = vma_pages(vma); - unsigned long count = PAGE_ALIGN(size) >> PAGE_SHIFT; - unsigned long addr = (unsigned long)cpu_addr; - unsigned long off = vma->vm_pgoff; - unsigned long pfn; - int ret = -ENXIO; - - if (!plat_device_is_coherent(dev)) - addr = CAC_ADDR(addr); - - pfn = page_to_pfn(virt_to_page((void *)addr)); - - if (attrs & DMA_ATTR_WRITE_COMBINE) - vma->vm_page_prot = pgprot_writecombine(vma->vm_page_prot); - else - vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot); - - if (dma_mmap_from_dev_coherent(dev, vma, cpu_addr, size, &ret)) - return ret; - - if (off < count && user_count <= (count - off)) { - ret = remap_pfn_range(vma, vma->vm_start, - pfn + off, - user_count << PAGE_SHIFT, - vma->vm_page_prot); - } - - return ret; -} - -static inline void __dma_sync_virtual(void *addr, size_t size, - enum dma_data_direction direction) -{ - switch (direction) { - case DMA_TO_DEVICE: - dma_cache_wback((unsigned long)addr, size); - break; - - case DMA_FROM_DEVICE: - dma_cache_inv((unsigned long)addr, size); - break; - - case DMA_BIDIRECTIONAL: - dma_cache_wback_inv((unsigned long)addr, size); - break; - 
- default: - BUG(); - } -} - -/* - * A single sg entry may refer to multiple physically contiguous - * pages. But we still need to process highmem pages individually. - * If highmem is not configured then the bulk of this loop gets - * optimized out. - */ -static inline void __dma_sync(struct page *page, - unsigned long offset, size_t size, enum dma_data_direction direction) -{ - size_t left = size; - - do { - size_t len = left; - - if (PageHighMem(page)) { - void *addr; - - if (offset + len > PAGE_SIZE) { - if (offset >= PAGE_SIZE) { - page += offset >> PAGE_SHIFT; - offset &= ~PAGE_MASK; - } - len = PAGE_SIZE - offset; - } - - addr = kmap_atomic(page); - __dma_sync_virtual(addr + offset, len, direction); - kunmap_atomic(addr); - } else - __dma_sync_virtual(page_address(page) + offset, - size, direction); - offset = 0; - page++; - left -= len; - } while (left); -} - -static void mips_dma_unmap_page(struct device *dev, dma_addr_t dma_addr, - size_t size, enum dma_data_direction direction, unsigned long attrs) -{ - if (cpu_needs_post_dma_flush(dev) && !(attrs & DMA_ATTR_SKIP_CPU_SYNC)) - __dma_sync(dma_addr_to_page(dev, dma_addr), - dma_addr & ~PAGE_MASK, size, direction); - plat_post_dma_flush(dev); - plat_unmap_dma_mem(dev, dma_addr, size, direction); -} - -static int mips_dma_map_sg(struct device *dev, struct scatterlist *sglist, - int nents, enum dma_data_direction direction, unsigned long attrs) -{ - int i; - struct scatterlist *sg; - - for_each_sg(sglist, sg, nents, i) { - if (!plat_device_is_coherent(dev) && - !(attrs & DMA_ATTR_SKIP_CPU_SYNC)) - __dma_sync(sg_page(sg), sg->offset, sg->length, - direction); -#ifdef CONFIG_NEED_SG_DMA_LENGTH - sg->dma_length = sg->length; -#endif - sg->dma_address = plat_map_dma_mem_page(dev, sg_page(sg)) + - sg->offset; - } - - return nents; -} - -static dma_addr_t mips_dma_map_page(struct device *dev, struct page *page, - unsigned long offset, size_t size, enum dma_data_direction direction, - unsigned long attrs) -{ - if (!plat_device_is_coherent(dev) && !(attrs & DMA_ATTR_SKIP_CPU_SYNC)) - __dma_sync(page, offset, size, direction); - - return plat_map_dma_mem_page(dev, page) + offset; -} - -static void mips_dma_unmap_sg(struct device *dev, struct scatterlist *sglist, - int nhwentries, enum dma_data_direction direction, - unsigned long attrs) -{ - int i; - struct scatterlist *sg; - - for_each_sg(sglist, sg, nhwentries, i) { - if (!plat_device_is_coherent(dev) && - !(attrs & DMA_ATTR_SKIP_CPU_SYNC) && - direction != DMA_TO_DEVICE) - __dma_sync(sg_page(sg), sg->offset, sg->length, - direction); - plat_unmap_dma_mem(dev, sg->dma_address, sg->length, direction); - } -} - -static void mips_dma_sync_single_for_cpu(struct device *dev, - dma_addr_t dma_handle, size_t size, enum dma_data_direction direction) -{ - if (cpu_needs_post_dma_flush(dev)) - __dma_sync(dma_addr_to_page(dev, dma_handle), - dma_handle & ~PAGE_MASK, size, direction); - plat_post_dma_flush(dev); -} - -static void mips_dma_sync_single_for_device(struct device *dev, - dma_addr_t dma_handle, size_t size, enum dma_data_direction direction) -{ - if (!plat_device_is_coherent(dev)) - __dma_sync(dma_addr_to_page(dev, dma_handle), - dma_handle & ~PAGE_MASK, size, direction); -} - -static void mips_dma_sync_sg_for_cpu(struct device *dev, - struct scatterlist *sglist, int nelems, - enum dma_data_direction direction) -{ - int i; - struct scatterlist *sg; - - if (cpu_needs_post_dma_flush(dev)) { - for_each_sg(sglist, sg, nelems, i) { - __dma_sync(sg_page(sg), sg->offset, sg->length, - direction); - } 
- } - plat_post_dma_flush(dev); -} - -static void mips_dma_sync_sg_for_device(struct device *dev, - struct scatterlist *sglist, int nelems, - enum dma_data_direction direction) -{ - int i; - struct scatterlist *sg; - - if (!plat_device_is_coherent(dev)) { - for_each_sg(sglist, sg, nelems, i) { - __dma_sync(sg_page(sg), sg->offset, sg->length, - direction); - } - } -} - -static int mips_dma_supported(struct device *dev, u64 mask) -{ - return plat_dma_supported(dev, mask); -} - -static void mips_dma_cache_sync(struct device *dev, void *vaddr, size_t size, - enum dma_data_direction direction) -{ - BUG_ON(direction == DMA_NONE); - - if (!plat_device_is_coherent(dev)) - __dma_sync_virtual(vaddr, size, direction); -} - -static const struct dma_map_ops mips_default_dma_map_ops = { - .alloc = mips_dma_alloc_coherent, - .free = mips_dma_free_coherent, - .mmap = mips_dma_mmap, - .map_page = mips_dma_map_page, - .unmap_page = mips_dma_unmap_page, - .map_sg = mips_dma_map_sg, - .unmap_sg = mips_dma_unmap_sg, - .sync_single_for_cpu = mips_dma_sync_single_for_cpu, - .sync_single_for_device = mips_dma_sync_single_for_device, - .sync_sg_for_cpu = mips_dma_sync_sg_for_cpu, - .sync_sg_for_device = mips_dma_sync_sg_for_device, - .dma_supported = mips_dma_supported, - .cache_sync = mips_dma_cache_sync, -}; - -const struct dma_map_ops *mips_dma_map_ops = &mips_default_dma_map_ops; -EXPORT_SYMBOL(mips_dma_map_ops); diff --git a/arch/mips/mm/dma-noncoherent.c b/arch/mips/mm/dma-noncoherent.c new file mode 100644 index 000000000000..2aca1236af36 --- /dev/null +++ b/arch/mips/mm/dma-noncoherent.c @@ -0,0 +1,208 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (C) 2000 Ani Joshi + * Copyright (C) 2000, 2001, 06 Ralf Baechle + * swiped from i386, and cloned for MIPS by Geert, polished by Ralf. + */ +#include +#include +#include +#include + +#include +#include +#include +#include + +#ifdef CONFIG_DMA_PERDEV_COHERENT +static inline int dev_is_coherent(struct device *dev) +{ + return dev->archdata.dma_coherent; +} +#else +static inline int dev_is_coherent(struct device *dev) +{ + switch (coherentio) { + default: + case IO_COHERENCE_DEFAULT: + return hw_coherentio; + case IO_COHERENCE_ENABLED: + return 1; + case IO_COHERENCE_DISABLED: + return 0; + } +} +#endif /* CONFIG_DMA_PERDEV_COHERENT */ + +/* + * The affected CPUs below in 'cpu_needs_post_dma_flush()' can speculatively + * fill random cachelines with stale data at any time, requiring an extra + * flush post-DMA. + * + * Warning on the terminology - Linux calls an uncached area coherent; MIPS + * terminology calls memory areas with hardware maintained coherency coherent. + * + * Note that the R14000 and R16000 should also be checked for in this condition. + * However this function is only called on non-I/O-coherent systems and only the + * R10000 and R12000 are used in such systems, the SGI IP28 Indigo² rsp. + * SGI IP32 aka O2. + */ +static inline bool cpu_needs_post_dma_flush(struct device *dev) +{ + if (dev_is_coherent(dev)) + return false; + + switch (boot_cpu_type()) { + case CPU_R10000: + case CPU_R12000: + case CPU_BMIPS5000: + return true; + default: + /* + * Presence of MAARs suggests that the CPU supports + * speculatively prefetching data, and therefore requires + * the post-DMA flush/invalidate. 
+ */ + return cpu_has_maar; + } +} + +void *arch_dma_alloc(struct device *dev, size_t size, + dma_addr_t *dma_handle, gfp_t gfp, unsigned long attrs) +{ + void *ret; + + ret = dma_direct_alloc(dev, size, dma_handle, gfp, attrs); + if (!ret) + return NULL; + + if (!dev_is_coherent(dev) && !(attrs & DMA_ATTR_NON_CONSISTENT)) { + dma_cache_wback_inv((unsigned long) ret, size); + ret = (void *)UNCAC_ADDR(ret); + } + + return ret; +} + +void arch_dma_free(struct device *dev, size_t size, void *cpu_addr, + dma_addr_t dma_addr, unsigned long attrs) +{ + if (!(attrs & DMA_ATTR_NON_CONSISTENT) && !dev_is_coherent(dev)) + cpu_addr = (void *)CAC_ADDR((unsigned long)cpu_addr); + dma_direct_free(dev, size, cpu_addr, dma_addr, attrs); +} + +int arch_dma_mmap(struct device *dev, struct vm_area_struct *vma, + void *cpu_addr, dma_addr_t dma_addr, size_t size, + unsigned long attrs) +{ + unsigned long user_count = vma_pages(vma); + unsigned long count = PAGE_ALIGN(size) >> PAGE_SHIFT; + unsigned long addr = (unsigned long)cpu_addr; + unsigned long off = vma->vm_pgoff; + unsigned long pfn; + int ret = -ENXIO; + + if (!dev_is_coherent(dev)) + addr = CAC_ADDR(addr); + + pfn = page_to_pfn(virt_to_page((void *)addr)); + + if (attrs & DMA_ATTR_WRITE_COMBINE) + vma->vm_page_prot = pgprot_writecombine(vma->vm_page_prot); + else + vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot); + + if (dma_mmap_from_dev_coherent(dev, vma, cpu_addr, size, &ret)) + return ret; + + if (off < count && user_count <= (count - off)) { + ret = remap_pfn_range(vma, vma->vm_start, + pfn + off, + user_count << PAGE_SHIFT, + vma->vm_page_prot); + } + + return ret; +} + +static inline void dma_sync_virt(void *addr, size_t size, + enum dma_data_direction dir) +{ + switch (dir) { + case DMA_TO_DEVICE: + dma_cache_wback((unsigned long)addr, size); + break; + + case DMA_FROM_DEVICE: + dma_cache_inv((unsigned long)addr, size); + break; + + case DMA_BIDIRECTIONAL: + dma_cache_wback_inv((unsigned long)addr, size); + break; + + default: + BUG(); + } +} + +/* + * A single sg entry may refer to multiple physically contiguous pages. But + * we still need to process highmem pages individually. If highmem is not + * configured then the bulk of this loop gets optimized out. 
+ */ +static inline void dma_sync_phys(phys_addr_t paddr, size_t size, + enum dma_data_direction dir) +{ + struct page *page = pfn_to_page(paddr >> PAGE_SHIFT); + unsigned long offset = paddr & ~PAGE_MASK; + size_t left = size; + + do { + size_t len = left; + + if (PageHighMem(page)) { + void *addr; + + if (offset + len > PAGE_SIZE) { + if (offset >= PAGE_SIZE) { + page += offset >> PAGE_SHIFT; + offset &= ~PAGE_MASK; + } + len = PAGE_SIZE - offset; + } + + addr = kmap_atomic(page); + dma_sync_virt(addr + offset, len, dir); + kunmap_atomic(addr); + } else + dma_sync_virt(page_address(page) + offset, size, dir); + offset = 0; + page++; + left -= len; + } while (left); +} + +void arch_sync_dma_for_device(struct device *dev, phys_addr_t paddr, + size_t size, enum dma_data_direction dir) +{ + if (!dev_is_coherent(dev)) + dma_sync_phys(paddr, size, dir); +} + +void arch_sync_dma_for_cpu(struct device *dev, phys_addr_t paddr, + size_t size, enum dma_data_direction dir) +{ + if (cpu_needs_post_dma_flush(dev)) + dma_sync_phys(paddr, size, dir); +} + +void arch_dma_cache_sync(struct device *dev, void *vaddr, size_t size, + enum dma_data_direction direction) +{ + BUG_ON(direction == DMA_NONE); + + if (!dev_is_coherent(dev)) + dma_sync_virt(vaddr, size, direction); +} diff --git a/arch/mips/mm/ioremap.c b/arch/mips/mm/ioremap.c index 1986e09fb457..1601d90b087b 100644 --- a/arch/mips/mm/ioremap.c +++ b/arch/mips/mm/ioremap.c @@ -9,6 +9,7 @@ #include #include #include +#include #include #include #include @@ -98,6 +99,20 @@ static int remap_area_pages(unsigned long address, phys_addr_t phys_addr, return error; } +static int __ioremap_check_ram(unsigned long start_pfn, unsigned long nr_pages, + void *arg) +{ + unsigned long i; + + for (i = 0; i < nr_pages; i++) { + if (pfn_valid(start_pfn + i) && + !PageReserved(pfn_to_page(start_pfn + i))) + return 1; + } + + return 0; +} + /* * Generic mapping function (not visible outside): */ @@ -116,8 +131,8 @@ static int remap_area_pages(unsigned long address, phys_addr_t phys_addr, void __iomem * __ioremap(phys_addr_t phys_addr, phys_addr_t size, unsigned long flags) { + unsigned long offset, pfn, last_pfn; struct vm_struct * area; - unsigned long offset; phys_addr_t last_addr; void * addr; @@ -137,18 +152,16 @@ void __iomem * __ioremap(phys_addr_t phys_addr, phys_addr_t size, unsigned long return (void __iomem *) CKSEG1ADDR(phys_addr); /* - * Don't allow anybody to remap normal RAM that we're using.. + * Don't allow anybody to remap RAM that may be allocated by the page + * allocator, since that could lead to races & data clobbering. 
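
[Aside on the __ioremap() change above: instead of hand-walking struct page flags, it asks walk_system_ram_range() whether any page in the target range belongs to the page allocator and refuses the mapping if so. A miniature of the callback contract (a nonzero return stops the walk and is passed back); the RAM table is made up and the real __ioremap_check_ram additionally tests pfn_valid()/PageReserved().]

#include <stdio.h>

struct ram_range { unsigned long start_pfn, nr_pages; };

typedef int (*range_cb)(unsigned long start_pfn, unsigned long nr_pages,
			void *arg);

/* toy walk_system_ram_range(): call cb for each RAM chunk overlapping
 * [pfn, pfn + nr); a nonzero return aborts the walk */
static int walk_ram(const struct ram_range *tbl, int n,
		    unsigned long pfn, unsigned long nr,
		    void *arg, range_cb cb)
{
	for (int i = 0; i < n; i++) {
		unsigned long s = tbl[i].start_pfn, e = s + tbl[i].nr_pages;
		unsigned long lo = pfn > s ? pfn : s;
		unsigned long hi = (pfn + nr) < e ? (pfn + nr) : e;
		int ret;

		if (lo >= hi)
			continue;
		ret = cb(lo, hi - lo, arg);
		if (ret)
			return ret;
	}
	return 0;
}

static int check_ram(unsigned long start_pfn, unsigned long nr_pages, void *arg)
{
	return 1;	/* any overlap with RAM is enough to refuse the remap */
}

int main(void)
{
	struct ram_range ram[] = { { 0x100, 0x1000 } };	/* made-up memory map */

	printf("remap of MMIO at pfn 0x2000: %s\n",
	       walk_ram(ram, 1, 0x2000, 16, NULL, check_ram) ? "refused" : "ok");
	printf("remap of RAM  at pfn 0x200:  %s\n",
	       walk_ram(ram, 1, 0x200, 16, NULL, check_ram) ? "refused" : "ok");
	return 0;
}
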
*/ - if (phys_addr < virt_to_phys(high_memory)) { - char *t_addr, *t_end; - struct page *page; - - t_addr = __va(phys_addr); - t_end = t_addr + (size - 1); - - for(page = virt_to_page(t_addr); page <= virt_to_page(t_end); page++) - if(!PageReserved(page)) - return NULL; + pfn = PFN_DOWN(phys_addr); + last_pfn = PFN_DOWN(last_addr); + if (walk_system_ram_range(pfn, last_pfn - pfn + 1, NULL, + __ioremap_check_ram) == 1) { + WARN_ONCE(1, "ioremap on RAM at %pa - %pa\n", + &phys_addr, &last_addr); + return NULL; } /* diff --git a/arch/mips/mm/page.c b/arch/mips/mm/page.c index d5d02993aa21..56e4f8bffd4c 100644 --- a/arch/mips/mm/page.c +++ b/arch/mips/mm/page.c @@ -623,21 +623,6 @@ struct dmadscr { u64 pad_b; } ____cacheline_aligned_in_smp page_descr[DM_NUM_CHANNELS]; -void sb1_dma_init(void) -{ - int i; - - for (i = 0; i < DM_NUM_CHANNELS; i++) { - const u64 base_val = CPHYSADDR((unsigned long)&page_descr[i]) | - V_DM_DSCR_BASE_RINGSZ(1); - void *base_reg = IOADDR(A_DM_REGISTER(i, R_DM_DSCR_BASE)); - - __raw_writeq(base_val, base_reg); - __raw_writeq(base_val | M_DM_DSCR_BASE_RESET, base_reg); - __raw_writeq(base_val | M_DM_DSCR_BASE_ENABL, base_reg); - } -} - void clear_page(void *page) { u64 to_phys = CPHYSADDR((unsigned long)page); diff --git a/arch/mips/mm/tlbex.c b/arch/mips/mm/tlbex.c index 79b9f2ad3ff5..49312a14cd17 100644 --- a/arch/mips/mm/tlbex.c +++ b/arch/mips/mm/tlbex.c @@ -1509,7 +1509,7 @@ static void setup_pw(void) #ifdef CONFIG_MIPS_HUGE_TLB_SUPPORT write_c0_pwctl(1 << 6 | psn); #endif - write_c0_kpgd(swapper_pg_dir); + write_c0_kpgd((long)swapper_pg_dir); kscratch_used_mask |= (1 << 7); /* KScratch6 is used for KPGD */ } diff --git a/arch/mips/mm/uasm-micromips.c b/arch/mips/mm/uasm-micromips.c index 9bb6baa45da3..24e5b0d06899 100644 --- a/arch/mips/mm/uasm-micromips.c +++ b/arch/mips/mm/uasm-micromips.c @@ -19,7 +19,6 @@ #include #include #include -#define UASM_ISA _UASM_ISA_MICROMIPS #include #define RS_MASK 0x1f diff --git a/arch/mips/mm/uasm-mips.c b/arch/mips/mm/uasm-mips.c index 9fea6c6bbf49..60ceb93c71a0 100644 --- a/arch/mips/mm/uasm-mips.c +++ b/arch/mips/mm/uasm-mips.c @@ -19,7 +19,6 @@ #include #include #include -#define UASM_ISA _UASM_ISA_CLASSIC #include #define RS_MASK 0x1f diff --git a/arch/mips/mti-malta/Makefile b/arch/mips/mti-malta/Makefile index 63940bdce698..17c7fd471a27 100644 --- a/arch/mips/mti-malta/Makefile +++ b/arch/mips/mti-malta/Makefile @@ -13,11 +13,9 @@ obj-y += malta-init.o obj-y += malta-int.o obj-y += malta-memory.o obj-y += malta-platform.o -obj-y += malta-reset.o obj-y += malta-setup.o obj-y += malta-time.o obj-$(CONFIG_MIPS_CMP) += malta-amon.o -obj-$(CONFIG_MIPS_MALTA_PM) += malta-pm.o CFLAGS_malta-dtshim.o = -I$(src)/../../../scripts/dtc/libfdt diff --git a/arch/mips/mti-malta/malta-pm.c b/arch/mips/mti-malta/malta-pm.c deleted file mode 100644 index efbd659fb602..000000000000 --- a/arch/mips/mti-malta/malta-pm.c +++ /dev/null @@ -1,96 +0,0 @@ -/* - * Copyright (C) 2014 Imagination Technologies - * Author: Paul Burton - * - * This program is free software; you can redistribute it and/or modify it - * under the terms of the GNU General Public License as published by the - * Free Software Foundation; either version 2 of the License, or (at your - * option) any later version. 
- */ - -#include -#include -#include -#include - -#include - -static struct pci_bus *pm_pci_bus; -static resource_size_t pm_io_offset; - -int mips_pm_suspend(unsigned state) -{ - int spec_devid; - u16 sts; - - if (!pm_pci_bus || !pm_io_offset) - return -ENODEV; - - /* Ensure the power button status is clear */ - while (1) { - sts = inw(pm_io_offset + PIIX4_FUNC3IO_PMSTS); - if (!(sts & PIIX4_FUNC3IO_PMSTS_PWRBTN_STS)) - break; - outw(sts, pm_io_offset + PIIX4_FUNC3IO_PMSTS); - } - - /* Enable entry to suspend */ - outw(state | PIIX4_FUNC3IO_PMCNTRL_SUS_EN, - pm_io_offset + PIIX4_FUNC3IO_PMCNTRL); - - /* If the special cycle occurs too soon this doesn't work... */ - mdelay(10); - - /* - * The PIIX4 will enter the suspend state only after seeing a special - * cycle with the correct magic data on the PCI bus. Generate that - * cycle now. - */ - spec_devid = PCI_DEVID(0, PCI_DEVFN(0x1f, 0x7)); - pci_bus_write_config_dword(pm_pci_bus, spec_devid, 0, - PIIX4_SUSPEND_MAGIC); - - /* Give the system some time to power down */ - mdelay(1000); - - return 0; -} - -static int __init malta_pm_setup(void) -{ - struct pci_dev *dev; - int res, io_region = PCI_BRIDGE_RESOURCES; - - /* Find a reference to the PCI bus */ - pm_pci_bus = pci_find_next_bus(NULL); - if (!pm_pci_bus) { - pr_warn("malta-pm: failed to find reference to PCI bus\n"); - return -ENODEV; - } - - /* Find the PIIX4 PM device */ - dev = pci_get_subsys(PCI_VENDOR_ID_INTEL, - PCI_DEVICE_ID_INTEL_82371AB_3, PCI_ANY_ID, - PCI_ANY_ID, NULL); - if (!dev) { - pr_warn("malta-pm: failed to find PIIX4 PM\n"); - return -ENODEV; - } - - /* Request access to the PIIX4 PM IO registers */ - res = pci_request_region(dev, io_region, "PIIX4 PM IO registers"); - if (res) { - pr_warn("malta-pm: failed to request PM IO registers (%d)\n", - res); - pci_dev_put(dev); - return -ENODEV; - } - - /* Find the offset to the PIIX4 PM IO registers */ - pm_io_offset = pci_resource_start(dev, io_region); - - pci_dev_put(dev); - return 0; -} - -late_initcall(malta_pm_setup); diff --git a/arch/mips/mti-malta/malta-reset.c b/arch/mips/mti-malta/malta-reset.c deleted file mode 100644 index dd6f62ad4417..000000000000 --- a/arch/mips/mti-malta/malta-reset.c +++ /dev/null @@ -1,30 +0,0 @@ -/* - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. - * - * Carsten Langgaard, carstenl@mips.com - * Copyright (C) 1999,2000 MIPS Technologies, Inc. All rights reserved. - */ -#include -#include -#include - -#include -#include - -static void mips_machine_power_off(void) -{ - mips_pm_suspend(PIIX4_FUNC3IO_PMCNTRL_SUS_TYP_SOFF); - - pr_info("Failed to power down, resetting\n"); - machine_restart(NULL); -} - -static int __init mips_reboot_setup(void) -{ - pm_power_off = mips_machine_power_off; - - return 0; -} -arch_initcall(mips_reboot_setup); diff --git a/arch/mips/mti-malta/malta-setup.c b/arch/mips/mti-malta/malta-setup.c index 7b63914d2e58..5d4c5e5fbd69 100644 --- a/arch/mips/mti-malta/malta-setup.c +++ b/arch/mips/mti-malta/malta-setup.c @@ -26,6 +26,7 @@ #include #include +#include #include #include #include @@ -144,12 +145,6 @@ static int __init plat_enable_iocoherency(void) static void __init plat_setup_iocoherency(void) { -#ifdef CONFIG_DMA_NONCOHERENT - /* - * Kernel has been configured with software coherency - * but we might choose to turn it off and use hardware - * coherency instead. 
- */ if (plat_enable_iocoherency()) { if (coherentio == IO_COHERENCE_DISABLED) pr_info("Hardware DMA cache coherency disabled\n"); @@ -161,10 +156,6 @@ static void __init plat_setup_iocoherency(void) else pr_info("Software DMA cache coherency enabled\n"); } -#else - if (!plat_enable_iocoherency()) - panic("Hardware DMA cache coherency not supported!"); -#endif } static void __init pci_clock_check(void) @@ -226,29 +217,6 @@ static void __init bonito_quirks_setup(void) pr_info("Enabled Bonito debug mode\n"); } else BONITO_BONGENCFG &= ~BONITO_BONGENCFG_DEBUGMODE; - -#ifdef CONFIG_DMA_COHERENT - if (BONITO_PCICACHECTRL & BONITO_PCICACHECTRL_CPUCOH_PRES) { - BONITO_PCICACHECTRL |= BONITO_PCICACHECTRL_CPUCOH_EN; - pr_info("Enabled Bonito CPU coherency\n"); - - argptr = fw_getcmdline(); - if (strstr(argptr, "iobcuncached")) { - BONITO_PCICACHECTRL &= ~BONITO_PCICACHECTRL_IOBCCOH_EN; - BONITO_PCIMEMBASECFG = BONITO_PCIMEMBASECFG & - ~(BONITO_PCIMEMBASECFG_MEMBASE0_CACHED | - BONITO_PCIMEMBASECFG_MEMBASE1_CACHED); - pr_info("Disabled Bonito IOBC coherency\n"); - } else { - BONITO_PCICACHECTRL |= BONITO_PCICACHECTRL_IOBCCOH_EN; - BONITO_PCIMEMBASECFG |= - (BONITO_PCIMEMBASECFG_MEMBASE0_CACHED | - BONITO_PCIMEMBASECFG_MEMBASE1_CACHED); - pr_info("Enabled Bonito IOBC coherency\n"); - } - } else - panic("Hardware DMA cache coherency not supported"); -#endif } void __init *plat_get_fdt(void) @@ -279,11 +247,6 @@ void __init plat_mem_setup(void) */ enable_dma(4); -#ifdef CONFIG_DMA_COHERENT - if (mips_revision_sconid != MIPS_REVISION_SCON_BONITO) - panic("Hardware DMA cache coherency not supported"); -#endif - if (mips_revision_sconid == MIPS_REVISION_SCON_BONITO) bonito_quirks_setup(); diff --git a/arch/mips/netlogic/common/earlycons.c b/arch/mips/netlogic/common/earlycons.c index 769f93032c53..8f5bc1597550 100644 --- a/arch/mips/netlogic/common/earlycons.c +++ b/arch/mips/netlogic/common/earlycons.c @@ -36,6 +36,7 @@ #include #include +#include #include #include diff --git a/arch/mips/netlogic/xlp/dt.c b/arch/mips/netlogic/xlp/dt.c index 856a6e6d296e..b5ba83f4c646 100644 --- a/arch/mips/netlogic/xlp/dt.c +++ b/arch/mips/netlogic/xlp/dt.c @@ -93,17 +93,3 @@ void __init device_tree_init(void) { unflatten_and_copy_device_tree(); } - -static struct of_device_id __initdata xlp_ids[] = { - { .compatible = "simple-bus", }, - {}, -}; - -int __init xlp8xx_ds_publish_devices(void) -{ - if (!of_have_populated_dt()) - return 0; - return of_platform_bus_probe(NULL, xlp_ids, NULL); -} - -device_initcall(xlp8xx_ds_publish_devices); diff --git a/arch/mips/paravirt/serial.c b/arch/mips/paravirt/serial.c index 02b665c02272..a37b6f9f0ede 100644 --- a/arch/mips/paravirt/serial.c +++ b/arch/mips/paravirt/serial.c @@ -9,16 +9,15 @@ #include #include #include +#include /* * Emit one character to the boot console. 
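Several of the prom_putchar() implementations touched by this series share one polled-UART shape: spin on a line-status bit, then store the byte. A self-contained model of that pattern (the register offsets and the fake MMIO window are invented for the sketch, not any board's real layout):

    #include <stdint.h>
    #include <stdio.h>

    /* Invented register layout, for illustration only. */
    #define UART_TX         0x00
    #define UART_LSR        0x14
    #define UART_LSR_THRE   0x20    /* transmitter can accept a byte */

    /* Fake MMIO window so the sketch runs anywhere; LSR always reads ready. */
    static uint8_t fake_regs[0x20] = { [UART_LSR] = UART_LSR_THRE };
    static volatile uint8_t *uart_base = fake_regs;

    static void prom_putchar(char c)
    {
            /* Poll until the transmit-holding register is empty... */
            while (!(uart_base[UART_LSR] & UART_LSR_THRE))
                    ;
            /* ...then write the byte; real hardware starts transmitting here. */
            uart_base[UART_TX] = (uint8_t)c;
    }

    int main(void)
    {
            prom_putchar('o');
            prom_putchar('k');
            printf("TX register now holds '%c'\n", fake_regs[UART_TX]);
            return 0;
    }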
*/ -int prom_putchar(char c) +void prom_putchar(char c) { kvm_hypercall3(KVM_HC_MIPS_CONSOLE_OUTPUT, 0 /* port 0 */, (unsigned long)&c, 1 /* len == 1 */); - - return 1; } #ifdef CONFIG_VIRTIO_CONSOLE diff --git a/arch/mips/pci/ops-bridge.c b/arch/mips/pci/ops-bridge.c index 57e1463fcd02..a1d2c4ae0d1b 100644 --- a/arch/mips/pci/ops-bridge.c +++ b/arch/mips/pci/ops-bridge.c @@ -167,7 +167,7 @@ oh_my_gawd: static int pci_read_config(struct pci_bus *bus, unsigned int devfn, int where, int size, u32 * value) { - if (bus->number > 0) + if (!pci_is_root_bus(bus)) return pci_conf1_read_config(bus, devfn, where, size, value); return pci_conf0_read_config(bus, devfn, where, size, value); @@ -310,7 +310,7 @@ oh_my_gawd: static int pci_write_config(struct pci_bus *bus, unsigned int devfn, int where, int size, u32 value) { - if (bus->number > 0) + if (!pci_is_root_bus(bus)) return pci_conf1_write_config(bus, devfn, where, size, value); return pci_conf0_write_config(bus, devfn, where, size, value); diff --git a/arch/mips/pci/pci-ar2315.c b/arch/mips/pci/pci-ar2315.c index b4fa6413c4e5..c539d0d2b0cf 100644 --- a/arch/mips/pci/pci-ar2315.c +++ b/arch/mips/pci/pci-ar2315.c @@ -149,6 +149,13 @@ #define AR2315_PCI_HOST_SLOT 3 #define AR2315_PCI_HOST_DEVID ((0xff18 << 16) | PCI_VENDOR_ID_ATHEROS) +/* + * We need some arbitrary non-zero value to be programmed to the BAR1 register + * of PCI host controller to enable DMA. The same value should be used as the + * offset to calculate the physical address of DMA buffer for PCI devices. + */ +#define AR2315_PCI_HOST_SDRAM_BASEADDR 0x20000000 + /* ??? access BAR */ #define AR2315_PCI_HOST_MBAR0 0x10000000 /* RAM access BAR */ @@ -167,6 +174,23 @@ struct ar2315_pci_ctrl { struct resource io_res; }; +static inline dma_addr_t ar2315_dev_offset(struct device *dev) +{ + if (dev && dev_is_pci(dev)) + return AR2315_PCI_HOST_SDRAM_BASEADDR; + return 0; +} + +dma_addr_t __phys_to_dma(struct device *dev, phys_addr_t paddr) +{ + return paddr + ar2315_dev_offset(dev); +} + +phys_addr_t __dma_to_phys(struct device *dev, dma_addr_t dma_addr) +{ + return dma_addr - ar2315_dev_offset(dev); +} + static inline struct ar2315_pci_ctrl *ar2315_pci_bus_to_apc(struct pci_bus *bus) { struct pci_controller *hose = bus->sysdata; diff --git a/arch/mips/pci/pci-ar724x.c b/arch/mips/pci/pci-ar724x.c index 1e23c8d587bd..64b58cc48a91 100644 --- a/arch/mips/pci/pci-ar724x.c +++ b/arch/mips/pci/pci-ar724x.c @@ -12,14 +12,18 @@ #include #include #include +#include #include #include #include +#define AR724X_PCI_REG_APP 0x00 #define AR724X_PCI_REG_RESET 0x18 #define AR724X_PCI_REG_INT_STATUS 0x4c #define AR724X_PCI_REG_INT_MASK 0x50 +#define AR724X_PCI_APP_LTSSM_ENABLE BIT(0) + #define AR724X_PCI_RESET_LINK_UP BIT(0) #define AR724X_PCI_INT_DEV0 BIT(14) @@ -325,6 +329,37 @@ static void ar724x_pci_irq_init(struct ar724x_pci_controller *apc, apc); } +static void ar724x_pci_hw_init(struct ar724x_pci_controller *apc) +{ + u32 ppl, app; + int wait = 0; + + /* deassert PCIe host controller and PCIe PHY reset */ + ath79_device_reset_clear(AR724X_RESET_PCIE); + ath79_device_reset_clear(AR724X_RESET_PCIE_PHY); + + /* remove the reset of the PCIE PLL */ + ppl = ath79_pll_rr(AR724X_PLL_REG_PCIE_CONFIG); + ppl &= ~AR724X_PLL_REG_PCIE_CONFIG_PPL_RESET; + ath79_pll_wr(AR724X_PLL_REG_PCIE_CONFIG, ppl); + + /* deassert bypass for the PCIE PLL */ + ppl = ath79_pll_rr(AR724X_PLL_REG_PCIE_CONFIG); + ppl &= ~AR724X_PLL_REG_PCIE_CONFIG_PPL_BYPASS; + ath79_pll_wr(AR724X_PLL_REG_PCIE_CONFIG, ppl); + + /* set PCIE Application 
Control to ready */ + app = __raw_readl(apc->ctrl_base + AR724X_PCI_REG_APP); + app |= AR724X_PCI_APP_LTSSM_ENABLE; + __raw_writel(app, apc->ctrl_base + AR724X_PCI_REG_APP); + + /* wait up to 100ms for PHY link up */ + do { + mdelay(10); + wait++; + } while (wait < 10 && !ar724x_pci_check_link(apc)); +} + static int ar724x_pci_probe(struct platform_device *pdev) { struct ar724x_pci_controller *apc; @@ -383,6 +418,13 @@ static int ar724x_pci_probe(struct platform_device *pdev) apc->pci_controller.io_resource = &apc->io_res; apc->pci_controller.mem_resource = &apc->mem_res; + /* + * Do the full PCIE Root Complex Initialization Sequence if the PCIe + * host controller is in reset. + */ + if (ath79_reset_rr(AR724X_RESET_REG_RESET_MODULE) & AR724X_RESET_PCIE) + ar724x_pci_hw_init(apc); + apc->link_up = ar724x_pci_check_link(apc); if (!apc->link_up) dev_warn(&pdev->dev, "PCIe link is down\n"); diff --git a/arch/mips/pci/pci-ip27.c b/arch/mips/pci/pci-ip27.c index 0f09eafa5e3a..c94a66070a60 100644 --- a/arch/mips/pci/pci-ip27.c +++ b/arch/mips/pci/pci-ip27.c @@ -11,6 +11,7 @@ #include #include #include +#include #include #include #include @@ -182,6 +183,19 @@ int pcibios_plat_dev_init(struct pci_dev *dev) return 0; } +dma_addr_t __phys_to_dma(struct device *dev, phys_addr_t paddr) +{ + struct pci_dev *pdev = to_pci_dev(dev); + struct bridge_controller *bc = BRIDGE_CONTROLLER(pdev->bus); + + return bc->baddr + paddr; +} + +phys_addr_t __dma_to_phys(struct device *dev, dma_addr_t dma_addr) +{ + return dma_addr & ~(0xffUL << 56); +} + /* * Device might live on a subordinate PCI bus. XXX Walk up the chain of buses * to find the slot number in sense of the bridge device register. @@ -200,17 +214,6 @@ static inline void pci_disable_swapping(struct pci_dev *dev) bridge->b_widget.w_tflush; /* Flush */ } -static inline void pci_enable_swapping(struct pci_dev *dev) -{ - struct bridge_controller *bc = BRIDGE_CONTROLLER(dev->bus); - bridge_t *bridge = bc->base; - int slot = PCI_SLOT(dev->devfn); - - /* Turn on byte swapping */ - bridge->b_device[slot].reg |= BRIDGE_DEV_SWAP_DIR; - bridge->b_widget.w_tflush; /* Flush */ -} - static void pci_fixup_ioc3(struct pci_dev *d) { pci_disable_swapping(d); diff --git a/arch/mips/pci/pci-octeon.c b/arch/mips/pci/pci-octeon.c index 3e92a06fa772..5017d5843c5a 100644 --- a/arch/mips/pci/pci-octeon.c +++ b/arch/mips/pci/pci-octeon.c @@ -21,8 +21,6 @@ #include #include -#include - #define USE_OCTEON_INTERNAL_ARBITER /* @@ -166,8 +164,6 @@ int pcibios_plat_dev_init(struct pci_dev *dev) pci_write_config_dword(dev, pos + PCI_ERR_ROOT_STATUS, dconfig); } - dev->dev.dma_ops = octeon_pci_dma_map_ops; - return 0; } diff --git a/arch/mips/pci/pci.c b/arch/mips/pci/pci.c index 9632436d74d7..c2e94cf5ecda 100644 --- a/arch/mips/pci/pci.c +++ b/arch/mips/pci/pci.c @@ -54,5 +54,5 @@ void pci_resource_to_user(const struct pci_dev *dev, int bar, phys_addr_t size = resource_size(rsrc); *start = fixup_bigphys_addr(rsrc->start, size); - *end = rsrc->start + size; + *end = rsrc->start + size - 1; } diff --git a/arch/mips/pci/pcie-octeon.c b/arch/mips/pci/pcie-octeon.c index 87ba86bd8696..d919a0d813a1 100644 --- a/arch/mips/pci/pcie-octeon.c +++ b/arch/mips/pci/pcie-octeon.c @@ -94,8 +94,6 @@ union cvmx_pcie_address { static int cvmx_pcie_rc_initialize(int pcie_port); -#include - /** * Return the Core virtual base address for PCIe IO access. IOs are * read/written as an offset from this address. 
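The __phys_to_dma()/__dma_to_phys() pairs introduced above are plain affine translations, and the property that matters is that they invert each other for every address a device can see. A throwaway check (the offset constant is invented, standing in for something like bc->baddr):

    #include <assert.h>
    #include <stdint.h>

    typedef uint64_t phys_addr_t;
    typedef uint64_t dma_addr_t;

    #define DEV_DMA_OFFSET 0x20000000ULL    /* invented per-device offset */

    static dma_addr_t phys_to_dma(phys_addr_t paddr)
    {
            return paddr + DEV_DMA_OFFSET;
    }

    static phys_addr_t dma_to_phys(dma_addr_t daddr)
    {
            return daddr - DEV_DMA_OFFSET;
    }

    int main(void)
    {
            phys_addr_t p;

            /* The two hooks must round-trip across the address range. */
            for (p = 0; p < 0x100000; p += 0x1234)
                    assert(dma_to_phys(phys_to_dma(p)) == p);
            return 0;
    }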
@@ -1239,14 +1237,14 @@ static int __cvmx_pcie_rc_initialize_gen2(int pcie_port) /* CN63XX Pass 1.0 errata G-14395 requires the QLM De-emphasis be programmed */ if (OCTEON_IS_MODEL(OCTEON_CN63XX_PASS1_0)) { if (pcie_port) { - union cvmx_ciu_qlm1 ciu_qlm; + union cvmx_ciu_qlm ciu_qlm; ciu_qlm.u64 = cvmx_read_csr(CVMX_CIU_QLM1); ciu_qlm.s.txbypass = 1; ciu_qlm.s.txdeemph = 5; ciu_qlm.s.txmargin = 0x17; cvmx_write_csr(CVMX_CIU_QLM1, ciu_qlm.u64); } else { - union cvmx_ciu_qlm0 ciu_qlm; + union cvmx_ciu_qlm ciu_qlm; ciu_qlm.u64 = cvmx_read_csr(CVMX_CIU_QLM0); ciu_qlm.s.txbypass = 1; ciu_qlm.s.txdeemph = 5; diff --git a/arch/mips/pic32/pic32mzda/early_console.c b/arch/mips/pic32/pic32mzda/early_console.c index d7b783463fac..8ed4961b1271 100644 --- a/arch/mips/pic32/pic32mzda/early_console.c +++ b/arch/mips/pic32/pic32mzda/early_console.c @@ -13,6 +13,7 @@ */ #include #include +#include #include "pic32mzda.h" #include "early_pin.h" @@ -157,7 +158,7 @@ void __init fw_init_early_console(char port) setup_early_console(port, baud); } -int prom_putchar(char c) +void prom_putchar(char c) { if (console_port >= 0) { while (__raw_readl( @@ -166,6 +167,4 @@ int prom_putchar(char c) __raw_writel(c, uart_base + U_TXR(console_port)); } - - return 1; } diff --git a/arch/mips/ralink/early_printk.c b/arch/mips/ralink/early_printk.c index 3c59ffe5f5f5..ecd30ddfb3db 100644 --- a/arch/mips/ralink/early_printk.c +++ b/arch/mips/ralink/early_printk.c @@ -10,6 +10,7 @@ #include #include +#include #ifdef CONFIG_SOC_RT288X #define EARLY_UART_BASE 0x300c00 @@ -68,7 +69,7 @@ static void find_uart_base(void) } } -void prom_putchar(unsigned char ch) +void prom_putchar(char ch) { if (!init_complete) { find_uart_base(); @@ -76,13 +77,13 @@ void prom_putchar(unsigned char ch) } if (IS_ENABLED(CONFIG_SOC_MT7621) || soc_is_mt7628()) { - uart_w32(ch, UART_TX); + uart_w32((unsigned char)ch, UART_TX); while ((uart_r32(UART_REG_LSR) & UART_LSR_THRE) == 0) ; } else { while ((uart_r32(UART_REG_LSR_RT2880) & UART_LSR_THRE) == 0) ; - uart_w32(ch, UART_REG_TX); + uart_w32((unsigned char)ch, UART_REG_TX); while ((uart_r32(UART_REG_LSR_RT2880) & UART_LSR_THRE) == 0) ; } diff --git a/arch/mips/sgi-ip27/ip27-console.c b/arch/mips/sgi-ip27/ip27-console.c index 45fdfbcbd4c6..6bdb48d41276 100644 --- a/arch/mips/sgi-ip27/ip27-console.c +++ b/arch/mips/sgi-ip27/ip27-console.c @@ -7,6 +7,7 @@ */ #include +#include #include #include #include diff --git a/arch/mips/sgi-ip32/Makefile b/arch/mips/sgi-ip32/Makefile index 60f0227425e7..4745cd94df11 100644 --- a/arch/mips/sgi-ip32/Makefile +++ b/arch/mips/sgi-ip32/Makefile @@ -4,4 +4,4 @@ # obj-y += ip32-berr.o ip32-irq.o ip32-platform.o ip32-setup.o ip32-reset.o \ - crime.o ip32-memory.o + crime.o ip32-memory.o ip32-dma.o diff --git a/arch/mips/sgi-ip32/ip32-dma.c b/arch/mips/sgi-ip32/ip32-dma.c new file mode 100644 index 000000000000..fa7b17cb5385 --- /dev/null +++ b/arch/mips/sgi-ip32/ip32-dma.c @@ -0,0 +1,37 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (C) 2006 Ralf Baechle + */ +#include +#include + +/* + * Few notes. + * 1. CPU sees memory as two chunks: 0-256M@0x0, and the rest @0x40000000+256M + * 2. PCI sees memory as one big chunk @0x0 (or we could use 0x40000000 for + * native-endian) + * 3. All other devices see memory as one big chunk at 0x40000000 + * 4. Non-PCI devices will pass NULL as struct device* + * + * Thus we translate differently, depending on device. 
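Given those four memory views, the translation below amounts to masking off the alias bits and adding the high base for non-PCI masters. A worked model with the constants written out (local names, not the kernel's CRIME definitions):

    #include <assert.h>
    #include <stdbool.h>
    #include <stdint.h>

    #define HI_MEM_BASE  0x40000000UL   /* stands in for CRIME_HI_MEM_BASE */
    #define OFFSET_MASK  0x3fffffffUL

    /* dev_is_pci == false models the NULL struct device* case in the patch. */
    static uint64_t phys_to_dma(uint64_t paddr, bool dev_is_pci)
    {
            uint64_t dma = paddr & OFFSET_MASK;

            if (!dev_is_pci)
                    dma += HI_MEM_BASE;
            return dma;
    }

    int main(void)
    {
            /* CPU address 0x40001000 is DMA address 0x1000 for PCI... */
            assert(phys_to_dma(0x40001000, true) == 0x1000);
            /* ...but stays at 0x40001000 for every other bus master. */
            assert(phys_to_dma(0x40001000, false) == 0x40001000);
            return 0;
    }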
+ */ + +#define RAM_OFFSET_MASK 0x3fffffffUL + +dma_addr_t __phys_to_dma(struct device *dev, phys_addr_t paddr) +{ + dma_addr_t dma_addr = paddr & RAM_OFFSET_MASK; + + if (!dev) + dma_addr += CRIME_HI_MEM_BASE; + return dma_addr; +} + +phys_addr_t __dma_to_phys(struct device *dev, dma_addr_t dma_addr) +{ + phys_addr_t paddr = dma_addr & RAM_OFFSET_MASK; + + if (dma_addr >= 256*1024*1024) + paddr += CRIME_HI_MEM_BASE; + return paddr; +} diff --git a/arch/mips/sibyte/Kconfig b/arch/mips/sibyte/Kconfig index f4dbce25bc6a..7ec278d72096 100644 --- a/arch/mips/sibyte/Kconfig +++ b/arch/mips/sibyte/Kconfig @@ -70,7 +70,6 @@ config SIBYTE_BCM1x55 config SIBYTE_SB1xxx_SOC bool - select DMA_COHERENT select IRQ_MIPS_CPU select SWAP_IO_SPACE select SYS_SUPPORTS_32BIT_KERNEL diff --git a/arch/mips/sibyte/common/cfe.c b/arch/mips/sibyte/common/cfe.c index 115399202eab..092fb2a6ec4a 100644 --- a/arch/mips/sibyte/common/cfe.c +++ b/arch/mips/sibyte/common/cfe.c @@ -27,6 +27,7 @@ #include #include +#include #include #include diff --git a/arch/mips/txx9/generic/setup.c b/arch/mips/txx9/generic/setup.c index 1791a44ee570..dde4dc859f79 100644 --- a/arch/mips/txx9/generic/setup.c +++ b/arch/mips/txx9/generic/setup.c @@ -32,6 +32,7 @@ #include #include #include +#include #include #include #include diff --git a/arch/mips/vdso/Makefile b/arch/mips/vdso/Makefile index ce196046ac3e..34605ca21498 100644 --- a/arch/mips/vdso/Makefile +++ b/arch/mips/vdso/Makefile @@ -7,7 +7,13 @@ ccflags-vdso := \ $(filter -I%,$(KBUILD_CFLAGS)) \ $(filter -E%,$(KBUILD_CFLAGS)) \ $(filter -mmicromips,$(KBUILD_CFLAGS)) \ - $(filter -march=%,$(KBUILD_CFLAGS)) + $(filter -march=%,$(KBUILD_CFLAGS)) \ + -D__VDSO__ + +ifeq ($(cc-name),clang) +ccflags-vdso += $(filter --target=%,$(KBUILD_CFLAGS)) +endif + cflags-vdso := $(ccflags-vdso) \ $(filter -W%,$(filter-out -Wa$(comma)%,$(KBUILD_CFLAGS))) \ -O2 -g -fPIC -fno-strict-aliasing -fno-common -fno-builtin -G 0 \ @@ -38,6 +44,7 @@ endif # VDSO linker flags. VDSO_LDFLAGS := \ -Wl,-Bsymbolic -Wl,--no-undefined -Wl,-soname=linux-vdso.so.1 \ + $(addprefix -Wl$(comma),$(filter -E%,$(KBUILD_CFLAGS))) \ -nostdlib -shared \ $(call cc-ldoption, -Wl$(comma)--hash-style=sysv) \ $(call cc-ldoption, -Wl$(comma)--build-id) diff --git a/arch/mips/vdso/genvdso.h b/arch/mips/vdso/genvdso.h index 94334727059a..611b06f01a3c 100644 --- a/arch/mips/vdso/genvdso.h +++ b/arch/mips/vdso/genvdso.h @@ -15,8 +15,6 @@ static inline bool FUNC(patch_vdso)(const char *path, void *vdso) ELF(Shdr) *shdr; char *shstrtab, *name; uint16_t sh_count, sh_entsize, i; - unsigned int local_gotno, symtabno, gotsym; - ELF(Dyn) *dyn = NULL; shdrs = vdso + FUNC(swap_uint)(ehdr->e_shoff); sh_count = swap_uint16(ehdr->e_shnum); @@ -41,9 +39,6 @@ static inline bool FUNC(patch_vdso)(const char *path, void *vdso) "%s: '%s' contains relocation sections\n", program_name, path); return false; - case SHT_DYNAMIC: - dyn = vdso + FUNC(swap_uint)(shdr->sh_offset); - break; } /* Check for existing sections. */ @@ -61,52 +56,6 @@ static inline bool FUNC(patch_vdso)(const char *path, void *vdso) } } - /* - * Ensure the GOT has no entries other than the standard 2, for the same - * reason we check that there's no relocation sections above. - * The standard two entries are: - * - Lazy resolver - * - Module pointer - */ - if (dyn) { - local_gotno = symtabno = gotsym = 0; - - while (FUNC(swap_uint)(dyn->d_tag) != DT_NULL) { - switch (FUNC(swap_uint)(dyn->d_tag)) { - /* - * This member holds the number of local GOT entries. 
- */ - case DT_MIPS_LOCAL_GOTNO: - local_gotno = FUNC(swap_uint)(dyn->d_un.d_val); - break; - /* - * This member holds the number of entries in the - * .dynsym section. - */ - case DT_MIPS_SYMTABNO: - symtabno = FUNC(swap_uint)(dyn->d_un.d_val); - break; - /* - * This member holds the index of the first dynamic - * symbol table entry that corresponds to an entry in - * the GOT. - */ - case DT_MIPS_GOTSYM: - gotsym = FUNC(swap_uint)(dyn->d_un.d_val); - break; - } - - dyn++; - } - - if (local_gotno > 2 || symtabno - gotsym) { - fprintf(stderr, - "%s: '%s' contains unexpected GOT entries\n", - program_name, path); - return false; - } - } - return true; } diff --git a/arch/mips/vr41xx/common/pmu.c b/arch/mips/vr41xx/common/pmu.c index 39a0db3e2b34..16e684b59875 100644 --- a/arch/mips/vr41xx/common/pmu.c +++ b/arch/mips/vr41xx/common/pmu.c @@ -17,6 +17,7 @@ * along with this program; if not, write to the Free Software * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA */ +#include #include #include #include @@ -46,7 +47,7 @@ static void __iomem *pmu_base; #define pmu_read(offset) readw(pmu_base + (offset)) #define pmu_write(offset, value) writew((value), pmu_base + (offset)) -static void vr41xx_cpu_wait(void) +static void __cpuidle vr41xx_cpu_wait(void) { local_irq_disable(); if (!need_resched()) diff --git a/arch/nds32/Kconfig b/arch/nds32/Kconfig index 6aed974276d8..34f7222c5efe 100644 --- a/arch/nds32/Kconfig +++ b/arch/nds32/Kconfig @@ -12,17 +12,17 @@ config NDS32 select CLONE_BACKWARDS select COMMON_CLK select DMA_NONCOHERENT_OPS - select GENERIC_ASHLDI3 - select GENERIC_ASHRDI3 - select GENERIC_LSHRDI3 - select GENERIC_CMPDI2 - select GENERIC_MULDI3 - select GENERIC_UCMPDI2 select GENERIC_ATOMIC64 select GENERIC_CPU_DEVICES select GENERIC_CLOCKEVENTS select GENERIC_IRQ_CHIP select GENERIC_IRQ_SHOW + select GENERIC_LIB_ASHLDI3 + select GENERIC_LIB_ASHRDI3 + select GENERIC_LIB_CMPDI2 + select GENERIC_LIB_LSHRDI3 + select GENERIC_LIB_MULDI3 + select GENERIC_LIB_UCMPDI2 select GENERIC_STRNCPY_FROM_USER select GENERIC_STRNLEN_USER select GENERIC_TIME_VSYSCALL diff --git a/arch/nds32/Makefile b/arch/nds32/Makefile index 513bb2e9baf9..031c676821ff 100644 --- a/arch/nds32/Makefile +++ b/arch/nds32/Makefile @@ -34,10 +34,12 @@ ifdef CONFIG_CPU_LITTLE_ENDIAN KBUILD_CFLAGS += $(call cc-option, -EL) KBUILD_AFLAGS += $(call cc-option, -EL) LDFLAGS += $(call cc-option, -EL) +CHECKFLAGS += -D__NDS32_EL__ else KBUILD_CFLAGS += $(call cc-option, -EB) KBUILD_AFLAGS += $(call cc-option, -EB) LDFLAGS += $(call cc-option, -EB) +CHECKFLAGS += -D__NDS32_EB__ endif boot := arch/nds32/boot diff --git a/arch/nds32/include/asm/cacheflush.h b/arch/nds32/include/asm/cacheflush.h index 10b48f0d8e85..8b26198d51bb 100644 --- a/arch/nds32/include/asm/cacheflush.h +++ b/arch/nds32/include/asm/cacheflush.h @@ -8,6 +8,8 @@ #define PG_dcache_dirty PG_arch_1 +void flush_icache_range(unsigned long start, unsigned long end); +void flush_icache_page(struct vm_area_struct *vma, struct page *page); #ifdef CONFIG_CPU_CACHE_ALIASING void flush_cache_mm(struct mm_struct *mm); void flush_cache_dup_mm(struct mm_struct *mm); @@ -34,13 +36,16 @@ void flush_anon_page(struct vm_area_struct *vma, void flush_kernel_dcache_page(struct page *page); void flush_kernel_vmap_range(void *addr, int size); void invalidate_kernel_vmap_range(void *addr, int size); -void flush_icache_range(unsigned long start, unsigned long end); -void flush_icache_page(struct vm_area_struct *vma, struct page *page); #define 
flush_dcache_mmap_lock(mapping) xa_lock_irq(&(mapping)->i_pages) #define flush_dcache_mmap_unlock(mapping) xa_unlock_irq(&(mapping)->i_pages) #else #include +#undef flush_icache_range +#undef flush_icache_page +#undef flush_icache_user_range +void flush_icache_user_range(struct vm_area_struct *vma, struct page *page, + unsigned long addr, int len); #endif #endif /* __NDS32_CACHEFLUSH_H__ */ diff --git a/arch/nds32/include/asm/futex.h b/arch/nds32/include/asm/futex.h index eab5e84bd991..cb6cb91cfdf8 100644 --- a/arch/nds32/include/asm/futex.h +++ b/arch/nds32/include/asm/futex.h @@ -16,7 +16,7 @@ " .popsection\n" \ " .pushsection .fixup,\"ax\"\n" \ "4: move %0, " err_reg "\n" \ - " j 3b\n" \ + " b 3b\n" \ " .popsection" #define __futex_atomic_op(insn, ret, oldval, tmp, uaddr, oparg) \ diff --git a/arch/nds32/kernel/setup.c b/arch/nds32/kernel/setup.c index 2f5b2ccebe47..63a1a5ef5219 100644 --- a/arch/nds32/kernel/setup.c +++ b/arch/nds32/kernel/setup.c @@ -278,7 +278,8 @@ static void __init setup_memory(void) void __init setup_arch(char **cmdline_p) { - early_init_devtree( __dtb_start); + early_init_devtree(__atags_pointer ? \ + phys_to_virt(__atags_pointer) : __dtb_start); setup_cpuinfo(); diff --git a/arch/nds32/mm/cacheflush.c b/arch/nds32/mm/cacheflush.c index ce8fd34497bf..254703653b6f 100644 --- a/arch/nds32/mm/cacheflush.c +++ b/arch/nds32/mm/cacheflush.c @@ -13,7 +13,39 @@ extern struct cache_info L1_cache_info[2]; -#ifndef CONFIG_CPU_CACHE_ALIASING +void flush_icache_range(unsigned long start, unsigned long end) +{ + unsigned long line_size, flags; + line_size = L1_cache_info[DCACHE].line_size; + start = start & ~(line_size - 1); + end = (end + line_size - 1) & ~(line_size - 1); + local_irq_save(flags); + cpu_cache_wbinval_range(start, end, 1); + local_irq_restore(flags); +} +EXPORT_SYMBOL(flush_icache_range); + +void flush_icache_page(struct vm_area_struct *vma, struct page *page) +{ + unsigned long flags; + unsigned long kaddr; + local_irq_save(flags); + kaddr = (unsigned long)kmap_atomic(page); + cpu_cache_wbinval_page(kaddr, vma->vm_flags & VM_EXEC); + kunmap_atomic((void *)kaddr); + local_irq_restore(flags); +} +EXPORT_SYMBOL(flush_icache_page); + +void flush_icache_user_range(struct vm_area_struct *vma, struct page *page, + unsigned long addr, int len) +{ + unsigned long kaddr; + kaddr = (unsigned long)kmap_atomic(page) + (addr & ~PAGE_MASK); + flush_icache_range(kaddr, kaddr + len); + kunmap_atomic((void *)kaddr); +} + void update_mmu_cache(struct vm_area_struct *vma, unsigned long addr, pte_t * pte) { @@ -35,19 +67,15 @@ void update_mmu_cache(struct vm_area_struct *vma, unsigned long addr, if ((test_and_clear_bit(PG_dcache_dirty, &page->flags)) || (vma->vm_flags & VM_EXEC)) { - - if (!PageHighMem(page)) { - cpu_cache_wbinval_page((unsigned long) - page_address(page), - vma->vm_flags & VM_EXEC); - } else { - unsigned long kaddr = (unsigned long)kmap_atomic(page); - cpu_cache_wbinval_page(kaddr, vma->vm_flags & VM_EXEC); - kunmap_atomic((void *)kaddr); - } + unsigned long kaddr; + local_irq_save(flags); + kaddr = (unsigned long)kmap_atomic(page); + cpu_cache_wbinval_page(kaddr, vma->vm_flags & VM_EXEC); + kunmap_atomic((void *)kaddr); + local_irq_restore(flags); } } -#else +#ifdef CONFIG_CPU_CACHE_ALIASING extern pte_t va_present(struct mm_struct *mm, unsigned long addr); static inline unsigned long aliasing(unsigned long addr, unsigned long page) @@ -317,52 +345,4 @@ void invalidate_kernel_vmap_range(void *addr, int size) local_irq_restore(flags); } 
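The flush_icache_range() added above first rounds the span out to whole cache lines before walking it; the alignment arithmetic is easy to check in isolation (the line size is picked arbitrarily for the sketch, where the kernel reads it from L1_cache_info):

    #include <stdio.h>

    #define LINE_SIZE 32UL  /* must be a power of two */

    int main(void)
    {
            unsigned long start = 0x1005, end = 0x1073;

            /* Round start down and end up to cache-line boundaries. */
            unsigned long astart = start & ~(LINE_SIZE - 1);
            unsigned long aend = (end + LINE_SIZE - 1) & ~(LINE_SIZE - 1);
            unsigned long p;

            for (p = astart; p < aend; p += LINE_SIZE)
                    printf("wbinval line at 0x%lx\n", p);
            return 0;
    }

Every byte of the original [start, end) range falls inside one of the flushed lines, which is what the hardware operation requires.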
EXPORT_SYMBOL(invalidate_kernel_vmap_range); - -void flush_icache_range(unsigned long start, unsigned long end) -{ - unsigned long line_size, flags; - line_size = L1_cache_info[DCACHE].line_size; - start = start & ~(line_size - 1); - end = (end + line_size - 1) & ~(line_size - 1); - local_irq_save(flags); - cpu_cache_wbinval_range(start, end, 1); - local_irq_restore(flags); -} -EXPORT_SYMBOL(flush_icache_range); - -void flush_icache_page(struct vm_area_struct *vma, struct page *page) -{ - unsigned long flags; - local_irq_save(flags); - cpu_cache_wbinval_page((unsigned long)page_address(page), - vma->vm_flags & VM_EXEC); - local_irq_restore(flags); -} - -void update_mmu_cache(struct vm_area_struct *vma, unsigned long addr, - pte_t * pte) -{ - struct page *page; - unsigned long flags; - unsigned long pfn = pte_pfn(*pte); - - if (!pfn_valid(pfn)) - return; - - if (vma->vm_mm == current->active_mm) { - local_irq_save(flags); - __nds32__mtsr_dsb(addr, NDS32_SR_TLB_VPN); - __nds32__tlbop_rwr(*pte); - __nds32__isb(); - local_irq_restore(flags); - } - - page = pfn_to_page(pfn); - if (test_and_clear_bit(PG_dcache_dirty, &page->flags) || - (vma->vm_flags & VM_EXEC)) { - local_irq_save(flags); - cpu_dcache_wbinval_page((unsigned long)page_address(page)); - local_irq_restore(flags); - } -} #endif diff --git a/arch/openrisc/Kconfig b/arch/openrisc/Kconfig index 9ecad05bfc73..dfb6a79ba7ff 100644 --- a/arch/openrisc/Kconfig +++ b/arch/openrisc/Kconfig @@ -27,7 +27,6 @@ config OPENRISC select GENERIC_STRNLEN_USER select GENERIC_SMP_IDLE_THREAD select MODULES_USE_ELF_RELA - select MULTI_IRQ_HANDLER select HAVE_DEBUG_STACKOVERFLOW select OR1K_PIC select CPU_NO_EFFICIENT_FFS if !OPENRISC_HAVE_INST_FF1 @@ -36,6 +35,7 @@ config OPENRISC select ARCH_USE_QUEUED_RWLOCKS select OMPIC if SMP select ARCH_WANT_FRAME_POINTERS + select GENERIC_IRQ_MULTI_HANDLER config CPU_BIG_ENDIAN def_bool y @@ -69,9 +69,6 @@ config STACKTRACE_SUPPORT config LOCKDEP_SUPPORT def_bool y -config MULTI_IRQ_HANDLER - def_bool y - source "init/Kconfig" source "kernel/Kconfig.freezer" diff --git a/arch/openrisc/include/asm/atomic.h b/arch/openrisc/include/asm/atomic.h index 146e1660f00e..b589fac39b92 100644 --- a/arch/openrisc/include/asm/atomic.h +++ b/arch/openrisc/include/asm/atomic.h @@ -100,7 +100,7 @@ ATOMIC_OP(xor) * * This is often used through atomic_inc_not_zero() */ -static inline int __atomic_add_unless(atomic_t *v, int a, int u) +static inline int atomic_fetch_add_unless(atomic_t *v, int a, int u) { int old, tmp; @@ -119,7 +119,7 @@ static inline int __atomic_add_unless(atomic_t *v, int a, int u) return old; } -#define __atomic_add_unless __atomic_add_unless +#define atomic_fetch_add_unless atomic_fetch_add_unless #include diff --git a/arch/openrisc/include/asm/cmpxchg.h b/arch/openrisc/include/asm/cmpxchg.h index d29f7db53906..f9cd43a39d72 100644 --- a/arch/openrisc/include/asm/cmpxchg.h +++ b/arch/openrisc/include/asm/cmpxchg.h @@ -16,8 +16,9 @@ #ifndef __ASM_OPENRISC_CMPXCHG_H #define __ASM_OPENRISC_CMPXCHG_H +#include +#include #include -#include #define __HAVE_ARCH_CMPXCHG 1 diff --git a/arch/openrisc/include/asm/irq.h b/arch/openrisc/include/asm/irq.h index d9eee0a2b7b4..eb612b1865d2 100644 --- a/arch/openrisc/include/asm/irq.h +++ b/arch/openrisc/include/asm/irq.h @@ -24,6 +24,4 @@ #define NO_IRQ (-1) -extern void set_handle_irq(void (*handle_irq)(struct pt_regs *)); - #endif /* __ASM_OPENRISC_IRQ_H__ */ diff --git a/arch/openrisc/include/asm/pgalloc.h b/arch/openrisc/include/asm/pgalloc.h index 
3e1a46615120..8999b9226512 100644 --- a/arch/openrisc/include/asm/pgalloc.h +++ b/arch/openrisc/include/asm/pgalloc.h @@ -98,8 +98,12 @@ static inline void pte_free(struct mm_struct *mm, struct page *pte) __free_page(pte); } +#define __pte_free_tlb(tlb, pte, addr) \ +do { \ + pgtable_page_dtor(pte); \ + tlb_remove_page((tlb), (pte)); \ +} while (0) -#define __pte_free_tlb(tlb, pte, addr) tlb_remove_page((tlb), (pte)) #define pmd_pgtable(pmd) pmd_page(pmd) #define check_pgt_cache() do { } while (0) diff --git a/arch/openrisc/kernel/entry.S b/arch/openrisc/kernel/entry.S index 690d55272ba6..0c826ad6e994 100644 --- a/arch/openrisc/kernel/entry.S +++ b/arch/openrisc/kernel/entry.S @@ -277,12 +277,6 @@ EXCEPTION_ENTRY(_data_page_fault_handler) l.addi r3,r1,0 // pt_regs /* r4 set be EXCEPTION_HANDLE */ // effective address of fault - /* - * __PHX__: TODO - * - * all this can be written much simpler. look at - * DTLB miss handler in the CONFIG_GUARD_PROTECTED_CORE part - */ #ifdef CONFIG_OPENRISC_NO_SPR_SR_DSX l.lwz r6,PT_PC(r3) // address of an offending insn l.lwz r6,0(r6) // instruction that caused pf @@ -314,7 +308,7 @@ EXCEPTION_ENTRY(_data_page_fault_handler) #else - l.lwz r6,PT_SR(r3) // SR + l.mfspr r6,r0,SPR_SR // SR l.andi r6,r6,SPR_SR_DSX // check for delay slot exception l.sfne r6,r0 // exception happened in delay slot l.bnf 7f diff --git a/arch/openrisc/kernel/head.S b/arch/openrisc/kernel/head.S index fb02b2a1d6f2..9fc6b60140f0 100644 --- a/arch/openrisc/kernel/head.S +++ b/arch/openrisc/kernel/head.S @@ -210,8 +210,7 @@ * r4 - EEAR exception EA * r10 - current pointing to current_thread_info struct * r12 - syscall 0, since we didn't come from syscall - * r13 - temp it actually contains new SR, not needed anymore - * r31 - handler address of the handler we'll jump to + * r30 - handler address of the handler we'll jump to * * handler has to save remaining registers to the exception * ksp frame *before* tainting them! 
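The head.S change below keeps exactly one bit of the interrupted SR, the delay-slot exception flag, while forcing everything else to the canonical exception state. As plain C the bit surgery looks like this (both constants are invented for the sketch; the real values live in the OpenRISC SPR definitions):

    #include <assert.h>
    #include <stdint.h>

    #define SR_DSX        (1u << 13)    /* delay-slot flag, position illustrative */
    #define EXCEPTION_SR  0x00000011u   /* invented fixed exception state */

    static uint32_t exception_sr_from(uint32_t old_sr)
    {
            /* Keep only DSX from the old SR, then OR in the canonical state. */
            return (old_sr & SR_DSX) | EXCEPTION_SR;
    }

    int main(void)
    {
            assert(exception_sr_from(0xffffffffu) == (EXCEPTION_SR | SR_DSX));
            assert(exception_sr_from(0) == EXCEPTION_SR);
            return 0;
    }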
@@ -244,6 +243,7 @@ /* r1 is KSP, r30 is __pa(KSP) */ ;\ tophys (r30,r1) ;\ l.sw PT_GPR12(r30),r12 ;\ + /* r4 use for tmp before EA */ ;\ l.mfspr r12,r0,SPR_EPCR_BASE ;\ l.sw PT_PC(r30),r12 ;\ l.mfspr r12,r0,SPR_ESR_BASE ;\ @@ -263,7 +263,10 @@ /* r12 == 1 if we come from syscall */ ;\ CLEAR_GPR(r12) ;\ /* ----- turn on MMU ----- */ ;\ - l.ori r30,r0,(EXCEPTION_SR) ;\ + /* Carry DSX into exception SR */ ;\ + l.mfspr r30,r0,SPR_SR ;\ + l.andi r30,r30,SPR_SR_DSX ;\ + l.ori r30,r30,(EXCEPTION_SR) ;\ l.mtspr r0,r30,SPR_ESR_BASE ;\ /* r30: EA address of handler */ ;\ LOAD_SYMBOL_2_GPR(r30,handler) ;\ diff --git a/arch/openrisc/kernel/irq.c b/arch/openrisc/kernel/irq.c index 35e478a93116..5f9445effaf8 100644 --- a/arch/openrisc/kernel/irq.c +++ b/arch/openrisc/kernel/irq.c @@ -41,13 +41,6 @@ void __init init_IRQ(void) irqchip_init(); } -static void (*handle_arch_irq)(struct pt_regs *); - -void __init set_handle_irq(void (*handle_irq)(struct pt_regs *)) -{ - handle_arch_irq = handle_irq; -} - void __irq_entry do_IRQ(struct pt_regs *regs) { handle_arch_irq(regs); diff --git a/arch/openrisc/kernel/traps.c b/arch/openrisc/kernel/traps.c index fac246e6f37a..d8981cbb852a 100644 --- a/arch/openrisc/kernel/traps.c +++ b/arch/openrisc/kernel/traps.c @@ -300,7 +300,7 @@ static inline int in_delay_slot(struct pt_regs *regs) return 0; } #else - return regs->sr & SPR_SR_DSX; + return mfspr(SPR_SR) & SPR_SR_DSX; #endif } diff --git a/arch/parisc/Kconfig b/arch/parisc/Kconfig index 4d8f64d48597..e21751fb24aa 100644 --- a/arch/parisc/Kconfig +++ b/arch/parisc/Kconfig @@ -11,7 +11,6 @@ config PARISC select ARCH_HAS_ELF_RANDOMIZE select ARCH_HAS_STRICT_KERNEL_RWX select ARCH_HAS_UBSAN_SANITIZE_ALL - select ARCH_WANTS_UBSAN_NO_NULL select ARCH_SUPPORTS_MEMORY_FAILURE select RTC_CLASS select RTC_DRV_GENERIC @@ -46,6 +45,7 @@ config PARISC select HAVE_ARCH_HASH select HAVE_ARCH_SECCOMP_FILTER select HAVE_ARCH_TRACEHOOK + select HAVE_REGS_AND_STACK_ACCESS_API select GENERIC_SCHED_CLOCK select HAVE_UNSTABLE_SCHED_CLOCK if SMP select GENERIC_CLOCKEVENTS @@ -188,6 +188,10 @@ config PA20 config PA11 def_bool y depends on PA7000 || PA7100LC || PA7200 || PA7300LC + select ARCH_HAS_SYNC_DMA_FOR_CPU + select ARCH_HAS_SYNC_DMA_FOR_DEVICE + select DMA_NONCOHERENT_OPS + select DMA_NONCOHERENT_CACHE_SYNC config PREFETCH def_bool y @@ -195,7 +199,7 @@ config PREFETCH config MLONGCALLS bool "Enable the -mlong-calls compiler option for big kernels" - def_bool y if (!MODULES) + default y depends on PA8X00 help If you configure the kernel to include many drivers built-in instead @@ -244,11 +248,11 @@ config PARISC_PAGE_SIZE_4KB config PARISC_PAGE_SIZE_16KB bool "16KB" - depends on PA8X00 + depends on PA8X00 && BROKEN config PARISC_PAGE_SIZE_64KB bool "64KB" - depends on PA8X00 + depends on PA8X00 && BROKEN endchoice @@ -275,7 +279,7 @@ config SMP machines, but will use only one CPU of a multiprocessor machine. On a uniprocessor machine, the kernel will run faster if you say N. - See also and the SMP-HOWTO + See also and the SMP-HOWTO available at . If you don't know what to do here, say N. @@ -347,7 +351,7 @@ config NR_CPUS int "Maximum number of CPUs (2-32)" range 2 32 depends on SMP - default "32" + default "4" endmenu diff --git a/arch/parisc/Makefile b/arch/parisc/Makefile index 714284ea6cc2..5ce030266e7d 100644 --- a/arch/parisc/Makefile +++ b/arch/parisc/Makefile @@ -65,10 +65,6 @@ endif # kernel. 
cflags-y += -mdisable-fpregs -# Without this, "ld -r" results in .text sections that are too big -# (> 0x40000) for branches to reach stubs. -cflags-y += -ffunction-sections - # Use long jumps instead of long branches (needed if your linker fails to # link a too big vmlinux executable). Not enabled for building modules. ifdef CONFIG_MLONGCALLS diff --git a/arch/parisc/include/asm/assembly.h b/arch/parisc/include/asm/assembly.h index 60e6f07b7e32..e9c6385ef0d1 100644 --- a/arch/parisc/include/asm/assembly.h +++ b/arch/parisc/include/asm/assembly.h @@ -36,6 +36,7 @@ #define RP_OFFSET 16 #define FRAME_SIZE 128 #define CALLEE_REG_FRAME_SIZE 144 +#define REG_SZ 8 #define ASM_ULONG_INSN .dword #else /* CONFIG_64BIT */ #define LDREG ldw @@ -50,6 +51,7 @@ #define RP_OFFSET 20 #define FRAME_SIZE 64 #define CALLEE_REG_FRAME_SIZE 128 +#define REG_SZ 4 #define ASM_ULONG_INSN .word #endif diff --git a/arch/parisc/include/asm/atomic.h b/arch/parisc/include/asm/atomic.h index 88bae6676c9b..118953d41763 100644 --- a/arch/parisc/include/asm/atomic.h +++ b/arch/parisc/include/asm/atomic.h @@ -77,30 +77,6 @@ static __inline__ int atomic_read(const atomic_t *v) #define atomic_cmpxchg(v, o, n) (cmpxchg(&((v)->counter), (o), (n))) #define atomic_xchg(v, new) (xchg(&((v)->counter), new)) -/** - * __atomic_add_unless - add unless the number is a given value - * @v: pointer of type atomic_t - * @a: the amount to add to v... - * @u: ...unless v is equal to u. - * - * Atomically adds @a to @v, so long as it was not @u. - * Returns the old value of @v. - */ -static __inline__ int __atomic_add_unless(atomic_t *v, int a, int u) -{ - int c, old; - c = atomic_read(v); - for (;;) { - if (unlikely(c == (u))) - break; - old = atomic_cmpxchg((v), c, c + (a)); - if (likely(old == c)) - break; - c = old; - } - return c; -} - #define ATOMIC_OP(op, c_op) \ static __inline__ void atomic_##op(int i, atomic_t *v) \ { \ @@ -160,28 +136,6 @@ ATOMIC_OPS(xor, ^=) #undef ATOMIC_OP_RETURN #undef ATOMIC_OP -#define atomic_inc(v) (atomic_add( 1,(v))) -#define atomic_dec(v) (atomic_add( -1,(v))) - -#define atomic_inc_return(v) (atomic_add_return( 1,(v))) -#define atomic_dec_return(v) (atomic_add_return( -1,(v))) - -#define atomic_add_negative(a, v) (atomic_add_return((a), (v)) < 0) - -/* - * atomic_inc_and_test - increment and test - * @v: pointer of type atomic_t - * - * Atomically increments @v by 1 - * and returns true if the result is zero, or false for all - * other cases. 
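Every open-coded add-unless loop removed in this area follows the same compare-and-swap shape, which the generic atomic_fetch_add_unless() fallback now supplies. In C11 atomics the pattern looks like this (a sketch of the idea, not the kernel's implementation):

    #include <assert.h>
    #include <stdatomic.h>

    /* Add a to *v unless *v == u; return the old value either way. */
    static int fetch_add_unless(atomic_int *v, int a, int u)
    {
            int c = atomic_load(v);

            while (c != u) {
                    /* On failure, compare_exchange reloads c with the
                     * current value and we retry from there. */
                    if (atomic_compare_exchange_weak(v, &c, c + a))
                            break;
            }
            return c;
    }

    int main(void)
    {
            atomic_int v = 5;

            assert(fetch_add_unless(&v, 1, 0) == 5 && atomic_load(&v) == 6);
            assert(fetch_add_unless(&v, 1, 6) == 6 && atomic_load(&v) == 6);
            return 0;
    }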
- */ -#define atomic_inc_and_test(v) (atomic_inc_return(v) == 0) - -#define atomic_dec_and_test(v) (atomic_dec_return(v) == 0) - -#define atomic_sub_and_test(i,v) (atomic_sub_return((i),(v)) == 0) - #define ATOMIC_INIT(i) { (i) } #ifdef CONFIG_64BIT @@ -264,72 +218,11 @@ atomic64_read(const atomic64_t *v) return READ_ONCE((v)->counter); } -#define atomic64_inc(v) (atomic64_add( 1,(v))) -#define atomic64_dec(v) (atomic64_add( -1,(v))) - -#define atomic64_inc_return(v) (atomic64_add_return( 1,(v))) -#define atomic64_dec_return(v) (atomic64_add_return( -1,(v))) - -#define atomic64_add_negative(a, v) (atomic64_add_return((a), (v)) < 0) - -#define atomic64_inc_and_test(v) (atomic64_inc_return(v) == 0) -#define atomic64_dec_and_test(v) (atomic64_dec_return(v) == 0) -#define atomic64_sub_and_test(i,v) (atomic64_sub_return((i),(v)) == 0) - /* exported interface */ #define atomic64_cmpxchg(v, o, n) \ ((__typeof__((v)->counter))cmpxchg(&((v)->counter), (o), (n))) #define atomic64_xchg(v, new) (xchg(&((v)->counter), new)) -/** - * atomic64_add_unless - add unless the number is a given value - * @v: pointer of type atomic64_t - * @a: the amount to add to v... - * @u: ...unless v is equal to u. - * - * Atomically adds @a to @v, so long as it was not @u. - * Returns the old value of @v. - */ -static __inline__ int atomic64_add_unless(atomic64_t *v, long a, long u) -{ - long c, old; - c = atomic64_read(v); - for (;;) { - if (unlikely(c == (u))) - break; - old = atomic64_cmpxchg((v), c, c + (a)); - if (likely(old == c)) - break; - c = old; - } - return c != (u); -} - -#define atomic64_inc_not_zero(v) atomic64_add_unless((v), 1, 0) - -/* - * atomic64_dec_if_positive - decrement by 1 if old value positive - * @v: pointer of type atomic_t - * - * The function returns the old value of *v minus 1, even if - * the atomic variable, v, was not decremented. - */ -static inline long atomic64_dec_if_positive(atomic64_t *v) -{ - long c, old, dec; - c = atomic64_read(v); - for (;;) { - dec = c - 1; - if (unlikely(dec < 0)) - break; - old = atomic64_cmpxchg((v), c, dec); - if (likely(old == c)) - break; - c = old; - } - return dec; -} - #endif /* !CONFIG_64BIT */ diff --git a/arch/parisc/include/asm/barrier.h b/arch/parisc/include/asm/barrier.h new file mode 100644 index 000000000000..dbaaca84f27f --- /dev/null +++ b/arch/parisc/include/asm/barrier.h @@ -0,0 +1,32 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +#ifndef __ASM_BARRIER_H +#define __ASM_BARRIER_H + +#ifndef __ASSEMBLY__ + +/* The synchronize caches instruction executes as a nop on systems in + which all memory references are performed in order. */ +#define synchronize_caches() __asm__ __volatile__ ("sync" : : : "memory") + +#if defined(CONFIG_SMP) +#define mb() do { synchronize_caches(); } while (0) +#define rmb() mb() +#define wmb() mb() +#define dma_rmb() mb() +#define dma_wmb() mb() +#else +#define mb() barrier() +#define rmb() barrier() +#define wmb() barrier() +#define dma_rmb() barrier() +#define dma_wmb() barrier() +#endif + +#define __smp_mb() mb() +#define __smp_rmb() mb() +#define __smp_wmb() mb() + +#include + +#endif /* !__ASSEMBLY__ */ +#endif /* __ASM_BARRIER_H */ diff --git a/arch/parisc/include/asm/dma-mapping.h b/arch/parisc/include/asm/dma-mapping.h index 01e1fc057c83..44a9f97194aa 100644 --- a/arch/parisc/include/asm/dma-mapping.h +++ b/arch/parisc/include/asm/dma-mapping.h @@ -21,11 +21,6 @@ ** flush/purge and allocate "regular" cacheable pages for everything. 
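The new barrier.h is only useful when both sides of a shared-memory handshake use matching barriers. A portable C11 model of the wmb()/rmb() pairing that the parisc sync instruction implements (userspace sketch, fences standing in for the kernel macros; build with -pthread):

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdio.h>

    static int payload;
    static atomic_bool ready;

    /* Producer: write the data, then publish the flag. The release fence
     * plays the role the patch gives wmb(). */
    static void *producer(void *arg)
    {
            payload = 42;
            atomic_thread_fence(memory_order_release);
            atomic_store_explicit(&ready, true, memory_order_relaxed);
            return NULL;
    }

    int main(void)
    {
            pthread_t t;

            pthread_create(&t, NULL, producer, NULL);

            /* Consumer: see the flag, then read the data, ordered by the
             * acquire fence standing in for rmb(). */
            while (!atomic_load_explicit(&ready, memory_order_relaxed))
                    ;
            atomic_thread_fence(memory_order_acquire);
            printf("payload = %d\n", payload);

            pthread_join(t, NULL);
            return 0;
    }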
*/ -#ifdef CONFIG_PA11 -extern const struct dma_map_ops pcxl_dma_ops; -extern const struct dma_map_ops pcx_dma_ops; -#endif - extern const struct dma_map_ops *hppa_dma_ops; static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus) diff --git a/arch/parisc/include/asm/linkage.h b/arch/parisc/include/asm/linkage.h index 9a69bf6fc4b6..49f6f3d772cc 100644 --- a/arch/parisc/include/asm/linkage.h +++ b/arch/parisc/include/asm/linkage.h @@ -18,9 +18,9 @@ #ifdef __ASSEMBLY__ #define ENTRY(name) \ - .export name !\ - ALIGN !\ -name: + ALIGN !\ +name: ASM_NL\ + .export name #ifdef CONFIG_64BIT #define ENDPROC(name) \ @@ -31,13 +31,18 @@ name: END(name) #endif -#define ENTRY_CFI(name) \ +#define ENTRY_CFI(name, ...) \ ENTRY(name) ASM_NL\ + .proc ASM_NL\ + .callinfo __VA_ARGS__ ASM_NL\ + .entry ASM_NL\ CFI_STARTPROC #define ENDPROC_CFI(name) \ - ENDPROC(name) ASM_NL\ - CFI_ENDPROC + CFI_ENDPROC ASM_NL\ + .exit ASM_NL\ + .procend ASM_NL\ + ENDPROC(name) #endif /* __ASSEMBLY__ */ diff --git a/arch/parisc/include/asm/ptrace.h b/arch/parisc/include/asm/ptrace.h index 46da07670c2b..2a27b275ab09 100644 --- a/arch/parisc/include/asm/ptrace.h +++ b/arch/parisc/include/asm/ptrace.h @@ -25,4 +25,15 @@ static inline unsigned long regs_return_value(struct pt_regs *regs) return regs->gr[20]; } +static inline void instruction_pointer_set(struct pt_regs *regs, + unsigned long val) +{ + regs->iaoq[0] = val; +} + +/* Query offset/name of register from its name/offset */ +extern int regs_query_register_offset(const char *name); +extern const char *regs_query_register_name(unsigned int offset); +#define MAX_REG_OFFSET (offsetof(struct pt_regs, ipsw)) + #endif diff --git a/arch/parisc/include/asm/signal.h b/arch/parisc/include/asm/signal.h index eeb5c8858663..715c96ba2ec8 100644 --- a/arch/parisc/include/asm/signal.h +++ b/arch/parisc/include/asm/signal.h @@ -21,14 +21,6 @@ typedef struct { unsigned long sig[_NSIG_WORDS]; } sigset_t; -#ifndef __KERNEL__ -struct sigaction { - __sighandler_t sa_handler; - unsigned long sa_flags; - sigset_t sa_mask; /* mask last for extensibility */ -}; -#endif - #include #endif /* !__ASSEMBLY */ diff --git a/arch/parisc/include/asm/spinlock.h b/arch/parisc/include/asm/spinlock.h index 6f84b6acc86e..8a63515f03bf 100644 --- a/arch/parisc/include/asm/spinlock.h +++ b/arch/parisc/include/asm/spinlock.h @@ -20,7 +20,6 @@ static inline void arch_spin_lock_flags(arch_spinlock_t *x, { volatile unsigned int *a; - mb(); a = __ldcw_align(x); while (__ldcw(a) == 0) while (*a == 0) @@ -30,17 +29,16 @@ static inline void arch_spin_lock_flags(arch_spinlock_t *x, local_irq_disable(); } else cpu_relax(); - mb(); } #define arch_spin_lock_flags arch_spin_lock_flags static inline void arch_spin_unlock(arch_spinlock_t *x) { volatile unsigned int *a; - mb(); + a = __ldcw_align(x); - *a = 1; mb(); + *a = 1; } static inline int arch_spin_trylock(arch_spinlock_t *x) @@ -48,10 +46,8 @@ static inline int arch_spin_trylock(arch_spinlock_t *x) volatile unsigned int *a; int ret; - mb(); a = __ldcw_align(x); ret = __ldcw(a) != 0; - mb(); return ret; } diff --git a/arch/parisc/include/asm/unwind.h b/arch/parisc/include/asm/unwind.h index c73a3ee20226..f133b7efbebb 100644 --- a/arch/parisc/include/asm/unwind.h +++ b/arch/parisc/include/asm/unwind.h @@ -4,6 +4,9 @@ #include +/* Max number of levels to backtrace */ +#define MAX_UNWIND_ENTRIES 30 + /* From ABI specifications */ struct unwind_table_entry { unsigned int region_start; diff --git a/arch/parisc/include/uapi/asm/errno.h 
b/arch/parisc/include/uapi/asm/errno.h index fc0df353ff0d..87245c584784 100644 --- a/arch/parisc/include/uapi/asm/errno.h +++ b/arch/parisc/include/uapi/asm/errno.h @@ -113,7 +113,6 @@ #define ELOOP 249 /* Too many symbolic links encountered */ #define ENOSYS 251 /* Function not implemented */ -#define ENOTSUP 252 /* Function not implemented (POSIX.4 / HPUX) */ #define ECANCELLED 253 /* aio request was canceled before complete (POSIX.4 / HPUX) */ #define ECANCELED ECANCELLED /* SuSv3 and Solaris wants one 'L' */ diff --git a/arch/parisc/include/uapi/asm/unistd.h b/arch/parisc/include/uapi/asm/unistd.h index 4872e77aa96b..dc77c5a51db7 100644 --- a/arch/parisc/include/uapi/asm/unistd.h +++ b/arch/parisc/include/uapi/asm/unistd.h @@ -364,8 +364,9 @@ #define __NR_preadv2 (__NR_Linux + 347) #define __NR_pwritev2 (__NR_Linux + 348) #define __NR_statx (__NR_Linux + 349) +#define __NR_io_pgetevents (__NR_Linux + 350) -#define __NR_Linux_syscalls (__NR_statx + 1) +#define __NR_Linux_syscalls (__NR_io_pgetevents + 1) #define __IGNORE_select /* newselect */ diff --git a/arch/parisc/kernel/drivers.c b/arch/parisc/kernel/drivers.c index e0e1c9775c32..5eb979d04b90 100644 --- a/arch/parisc/kernel/drivers.c +++ b/arch/parisc/kernel/drivers.c @@ -154,17 +154,14 @@ int register_parisc_driver(struct parisc_driver *driver) { /* FIXME: we need this because apparently the sti * driver can be registered twice */ - if(driver->drv.name) { - printk(KERN_WARNING - "BUG: skipping previously registered driver %s\n", - driver->name); + if (driver->drv.name) { + pr_warn("BUG: skipping previously registered driver %s\n", + driver->name); return 1; } if (!driver->probe) { - printk(KERN_WARNING - "BUG: driver %s has no probe routine\n", - driver->name); + pr_warn("BUG: driver %s has no probe routine\n", driver->name); return 1; } @@ -491,12 +488,9 @@ alloc_pa_dev(unsigned long hpa, struct hardware_path *mod_path) dev = create_parisc_device(mod_path); if (dev->id.hw_type != HPHW_FAULTY) { - printk(KERN_ERR "Two devices have hardware path [%s]. " - "IODC data for second device: " - "%02x%02x%02x%02x%02x%02x\n" - "Rearranging GSC cards sometimes helps\n", - parisc_pathname(dev), iodc_data[0], iodc_data[1], - iodc_data[3], iodc_data[4], iodc_data[5], iodc_data[6]); + pr_err("Two devices have hardware path [%s]. IODC data for second device: %7phN\n" + "Rearranging GSC cards sometimes helps\n", + parisc_pathname(dev), iodc_data); return NULL; } @@ -528,8 +522,7 @@ alloc_pa_dev(unsigned long hpa, struct hardware_path *mod_path) * the keyboard controller */ if ((hpa & 0xfff) == 0 && insert_resource(&iomem_resource, &dev->hpa)) - printk("Unable to claim HPA %lx for device %s\n", - hpa, name); + pr_warn("Unable to claim HPA %lx for device %s\n", hpa, name); return dev; } @@ -875,7 +868,7 @@ static void print_parisc_device(struct parisc_device *dev) static int count; print_pa_hwpath(dev, hw_path); - printk(KERN_INFO "%d. %s at 0x%px [%s] { %d, 0x%x, 0x%.3x, 0x%.5x }", + pr_info("%d. %s at 0x%px [%s] { %d, 0x%x, 0x%.3x, 0x%.5x }", ++count, dev->name, (void*) dev->hpa.start, hw_path, dev->id.hw_type, dev->id.hversion_rev, dev->id.hversion, dev->id.sversion); diff --git a/arch/parisc/kernel/entry.S b/arch/parisc/kernel/entry.S index e95207c0565e..c7508f5717fb 100644 --- a/arch/parisc/kernel/entry.S +++ b/arch/parisc/kernel/entry.S @@ -481,6 +481,8 @@ /* Release pa_tlb_lock lock without reloading lock address. 
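The sync instruction added to the unlock paths below acts as a release barrier: stores made while the lock was held must become visible before the store that marks the lock word free. A C11 model of that ldcw-style lock (1 means free, 0 means taken, as on parisc; a sketch only):

    #include <stdatomic.h>

    static atomic_uint lock_word = 1;   /* ldcw convention: 1 free, 0 taken */
    static int guarded_data;

    static void lock(void)
    {
            /* ldcw atomically loads the word and zeroes it; exchange models
             * that. Acquire ordering keeps the critical section from being
             * hoisted above the lock acquisition. */
            while (atomic_exchange_explicit(&lock_word, 0,
                                            memory_order_acquire) == 0)
                    ;
    }

    static void unlock(void)
    {
            /* The added "sync" corresponds to release ordering here: all
             * stores made under the lock are visible before the word is
             * seen as free by another CPU. */
            atomic_store_explicit(&lock_word, 1, memory_order_release);
    }

    int main(void)
    {
            lock();
            guarded_data++;
            unlock();
            return guarded_data == 1 ? 0 : 1;
    }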
*/ .macro tlb_unlock0 spc,tmp #ifdef CONFIG_SMP + or,COND(=) %r0,\spc,%r0 + sync or,COND(=) %r0,\spc,%r0 stw \spc,0(\tmp) #endif @@ -764,7 +766,6 @@ END(fault_vector_11) #endif /* Fault vector is separately protected and *must* be on its own page */ .align PAGE_SIZE -ENTRY(end_fault_vector) .import handle_interruption,code .import do_cpu_irq_mask,code @@ -776,7 +777,6 @@ ENTRY(end_fault_vector) */ ENTRY_CFI(ret_from_kernel_thread) - /* Call schedule_tail first though */ BL schedule_tail, %r2 nop @@ -815,8 +815,9 @@ ENTRY_CFI(_switch_to) LDREG TASK_THREAD_INFO(%r25), %r25 bv %r0(%r2) mtctl %r25,%cr30 +ENDPROC_CFI(_switch_to) -_switch_to_ret: +ENTRY_CFI(_switch_to_ret) mtctl %r0, %cr0 /* Needed for single stepping */ callee_rest callee_rest_float @@ -824,7 +825,7 @@ _switch_to_ret: LDREG -RP_OFFSET(%r30), %r2 bv %r0(%r2) copy %r26, %r28 -ENDPROC_CFI(_switch_to) +ENDPROC_CFI(_switch_to_ret) /* * Common rfi return path for interruptions, kernel execve, and @@ -885,12 +886,14 @@ ENTRY_CFI(syscall_exit_rfi) STREG %r19,PT_SR5(%r16) STREG %r19,PT_SR6(%r16) STREG %r19,PT_SR7(%r16) +ENDPROC_CFI(syscall_exit_rfi) -intr_return: +ENTRY_CFI(intr_return) /* check for reschedule */ mfctl %cr30,%r1 LDREG TI_FLAGS(%r1),%r19 /* sched.h: TIF_NEED_RESCHED */ bb,<,n %r19,31-TIF_NEED_RESCHED,intr_do_resched /* forward */ +ENDPROC_CFI(intr_return) .import do_notify_resume,code intr_check_sig: @@ -1046,7 +1049,6 @@ intr_extint: b do_cpu_irq_mask ldo R%intr_return(%r2), %r2 /* return to intr_return, not here */ -ENDPROC_CFI(syscall_exit_rfi) /* Generic interruptions (illegal insn, unaligned, page fault, etc) */ @@ -1997,12 +1999,9 @@ ENDPROC_CFI(syscall_exit) .align L1_CACHE_BYTES .globl mcount .type mcount, @function -ENTRY(mcount) +ENTRY_CFI(mcount, caller) _mcount: .export _mcount,data - .proc - .callinfo caller,frame=0 - .entry /* * The 64bit mcount() function pointer needs 4 dwords, of which the * first two are free. We optimize it here and put 2 instructions for @@ -2024,18 +2023,13 @@ ftrace_stub: .dword mcount .dword 0 /* code in head.S puts value of global gp here */ #endif - .exit - .procend -ENDPROC(mcount) +ENDPROC_CFI(mcount) #ifdef CONFIG_FUNCTION_GRAPH_TRACER .align 8 .globl return_to_handler .type return_to_handler, @function -ENTRY_CFI(return_to_handler) - .proc - .callinfo caller,frame=FRAME_SIZE - .entry +ENTRY_CFI(return_to_handler, caller,frame=FRAME_SIZE) .export parisc_return_to_handler,data parisc_return_to_handler: copy %r3,%r1 @@ -2074,8 +2068,6 @@ parisc_return_to_handler: bv %r0(%rp) #endif LDREGM -FRAME_SIZE(%sp),%r3 - .exit - .procend ENDPROC_CFI(return_to_handler) #endif /* CONFIG_FUNCTION_GRAPH_TRACER */ @@ -2085,31 +2077,30 @@ ENDPROC_CFI(return_to_handler) #ifdef CONFIG_IRQSTACKS /* void call_on_stack(unsigned long param1, void *func, unsigned long new_stack) */ -ENTRY_CFI(call_on_stack) +ENTRY_CFI(call_on_stack, FRAME=2*FRAME_SIZE,CALLS,SAVE_RP,SAVE_SP) copy %sp, %r1 /* Regarding the HPPA calling conventions for function pointers, we assume the PIC register is not changed across call. For CONFIG_64BIT, the argument pointer is left to point at the argument region allocated for the call to call_on_stack. */ + + /* Switch to new stack. We allocate two frames. */ + ldo 2*FRAME_SIZE(%arg2), %sp # ifdef CONFIG_64BIT - /* Switch to new stack. We allocate two 128 byte frames. 
*/ - ldo 256(%arg2), %sp /* Save previous stack pointer and return pointer in frame marker */ - STREG %rp, -144(%sp) + STREG %rp, -FRAME_SIZE-RP_OFFSET(%sp) /* Calls always use function descriptor */ LDREG 16(%arg1), %arg1 bve,l (%arg1), %rp - STREG %r1, -136(%sp) - LDREG -144(%sp), %rp + STREG %r1, -FRAME_SIZE-REG_SZ(%sp) + LDREG -FRAME_SIZE-RP_OFFSET(%sp), %rp bve (%rp) - LDREG -136(%sp), %sp + LDREG -FRAME_SIZE-REG_SZ(%sp), %sp # else - /* Switch to new stack. We allocate two 64 byte frames. */ - ldo 128(%arg2), %sp /* Save previous stack pointer and return pointer in frame marker */ - STREG %r1, -68(%sp) - STREG %rp, -84(%sp) + STREG %r1, -FRAME_SIZE-REG_SZ(%sp) + STREG %rp, -FRAME_SIZE-RP_OFFSET(%sp) /* Calls use function descriptor if PLABEL bit is set */ bb,>=,n %arg1, 30, 1f depwi 0,31,2, %arg1 @@ -2117,9 +2108,9 @@ ENTRY_CFI(call_on_stack) 1: be,l 0(%sr4,%arg1), %sr0, %r31 copy %r31, %rp - LDREG -84(%sp), %rp + LDREG -FRAME_SIZE-RP_OFFSET(%sp), %rp bv (%rp) - LDREG -68(%sp), %sp + LDREG -FRAME_SIZE-REG_SZ(%sp), %sp # endif /* CONFIG_64BIT */ ENDPROC_CFI(call_on_stack) #endif /* CONFIG_IRQSTACKS */ diff --git a/arch/parisc/kernel/pacache.S b/arch/parisc/kernel/pacache.S index 22e6374ece44..f33bf2d306d6 100644 --- a/arch/parisc/kernel/pacache.S +++ b/arch/parisc/kernel/pacache.S @@ -44,10 +44,6 @@ .align 16 ENTRY_CFI(flush_tlb_all_local) - .proc - .callinfo NO_CALLS - .entry - /* * The pitlbe and pdtlbe instructions should only be used to * flush the entire tlb. Also, there needs to be no intervening @@ -189,18 +185,11 @@ fdtdone: 2: bv %r0(%r2) nop - - .exit - .procend ENDPROC_CFI(flush_tlb_all_local) .import cache_info,data ENTRY_CFI(flush_instruction_cache_local) - .proc - .callinfo NO_CALLS - .entry - load32 cache_info, %r1 /* Flush Instruction Cache */ @@ -256,18 +245,11 @@ fisync: mtsm %r22 /* restore I-bit */ bv %r0(%r2) nop - .exit - - .procend ENDPROC_CFI(flush_instruction_cache_local) .import cache_info, data ENTRY_CFI(flush_data_cache_local) - .proc - .callinfo NO_CALLS - .entry - load32 cache_info, %r1 /* Flush Data Cache */ @@ -324,9 +306,6 @@ fdsync: mtsm %r22 /* restore I-bit */ bv %r0(%r2) nop - .exit - - .procend ENDPROC_CFI(flush_data_cache_local) /* Macros to serialize TLB purge operations on SMP. */ @@ -353,6 +332,7 @@ ENDPROC_CFI(flush_data_cache_local) .macro tlb_unlock la,flags,tmp #ifdef CONFIG_SMP ldi 1,\tmp + sync stw \tmp,0(\la) mtsm \flags #endif @@ -361,10 +341,6 @@ ENDPROC_CFI(flush_data_cache_local) /* Clear page using kernel mapping. */ ENTRY_CFI(clear_page_asm) - .proc - .callinfo NO_CALLS - .entry - #ifdef CONFIG_64BIT /* Unroll the loop. */ @@ -423,18 +399,11 @@ ENTRY_CFI(clear_page_asm) #endif bv %r0(%r2) nop - .exit - - .procend ENDPROC_CFI(clear_page_asm) /* Copy page using kernel mapping. */ ENTRY_CFI(copy_page_asm) - .proc - .callinfo NO_CALLS - .entry - #ifdef CONFIG_64BIT /* PA8x00 CPUs can consume 2 loads or 1 store per cycle. * Unroll the loop by hand and arrange insn appropriately. @@ -541,9 +510,6 @@ ENTRY_CFI(copy_page_asm) #endif bv %r0(%r2) nop - .exit - - .procend ENDPROC_CFI(copy_page_asm) /* @@ -597,10 +563,6 @@ ENDPROC_CFI(copy_page_asm) */ ENTRY_CFI(copy_user_page_asm) - .proc - .callinfo NO_CALLS - .entry - /* Convert virtual `to' and `from' addresses to physical addresses. Move `from' physical address to non shadowed register. 
*/ ldil L%(__PAGE_OFFSET), %r1 @@ -749,16 +711,9 @@ ENTRY_CFI(copy_user_page_asm) bv %r0(%r2) nop - .exit - - .procend ENDPROC_CFI(copy_user_page_asm) ENTRY_CFI(clear_user_page_asm) - .proc - .callinfo NO_CALLS - .entry - tophys_r1 %r26 ldil L%(TMPALIAS_MAP_START), %r28 @@ -835,16 +790,9 @@ ENTRY_CFI(clear_user_page_asm) bv %r0(%r2) nop - .exit - - .procend ENDPROC_CFI(clear_user_page_asm) ENTRY_CFI(flush_dcache_page_asm) - .proc - .callinfo NO_CALLS - .entry - ldil L%(TMPALIAS_MAP_START), %r28 #ifdef CONFIG_64BIT #if (TMPALIAS_MAP_START >= 0x80000000) @@ -902,16 +850,9 @@ ENTRY_CFI(flush_dcache_page_asm) sync bv %r0(%r2) nop - .exit - - .procend ENDPROC_CFI(flush_dcache_page_asm) ENTRY_CFI(flush_icache_page_asm) - .proc - .callinfo NO_CALLS - .entry - ldil L%(TMPALIAS_MAP_START), %r28 #ifdef CONFIG_64BIT #if (TMPALIAS_MAP_START >= 0x80000000) @@ -976,16 +917,9 @@ ENTRY_CFI(flush_icache_page_asm) sync bv %r0(%r2) nop - .exit - - .procend ENDPROC_CFI(flush_icache_page_asm) ENTRY_CFI(flush_kernel_dcache_page_asm) - .proc - .callinfo NO_CALLS - .entry - ldil L%dcache_stride, %r1 ldw R%dcache_stride(%r1), %r23 @@ -1019,16 +953,9 @@ ENTRY_CFI(flush_kernel_dcache_page_asm) sync bv %r0(%r2) nop - .exit - - .procend ENDPROC_CFI(flush_kernel_dcache_page_asm) ENTRY_CFI(purge_kernel_dcache_page_asm) - .proc - .callinfo NO_CALLS - .entry - ldil L%dcache_stride, %r1 ldw R%dcache_stride(%r1), %r23 @@ -1061,16 +988,9 @@ ENTRY_CFI(purge_kernel_dcache_page_asm) sync bv %r0(%r2) nop - .exit - - .procend ENDPROC_CFI(purge_kernel_dcache_page_asm) ENTRY_CFI(flush_user_dcache_range_asm) - .proc - .callinfo NO_CALLS - .entry - ldil L%dcache_stride, %r1 ldw R%dcache_stride(%r1), %r23 ldo -1(%r23), %r21 @@ -1082,16 +1002,9 @@ ENTRY_CFI(flush_user_dcache_range_asm) sync bv %r0(%r2) nop - .exit - - .procend ENDPROC_CFI(flush_user_dcache_range_asm) ENTRY_CFI(flush_kernel_dcache_range_asm) - .proc - .callinfo NO_CALLS - .entry - ldil L%dcache_stride, %r1 ldw R%dcache_stride(%r1), %r23 ldo -1(%r23), %r21 @@ -1104,16 +1017,9 @@ ENTRY_CFI(flush_kernel_dcache_range_asm) syncdma bv %r0(%r2) nop - .exit - - .procend ENDPROC_CFI(flush_kernel_dcache_range_asm) ENTRY_CFI(purge_kernel_dcache_range_asm) - .proc - .callinfo NO_CALLS - .entry - ldil L%dcache_stride, %r1 ldw R%dcache_stride(%r1), %r23 ldo -1(%r23), %r21 @@ -1126,16 +1032,9 @@ ENTRY_CFI(purge_kernel_dcache_range_asm) syncdma bv %r0(%r2) nop - .exit - - .procend ENDPROC_CFI(purge_kernel_dcache_range_asm) ENTRY_CFI(flush_user_icache_range_asm) - .proc - .callinfo NO_CALLS - .entry - ldil L%icache_stride, %r1 ldw R%icache_stride(%r1), %r23 ldo -1(%r23), %r21 @@ -1147,16 +1046,9 @@ ENTRY_CFI(flush_user_icache_range_asm) sync bv %r0(%r2) nop - .exit - - .procend ENDPROC_CFI(flush_user_icache_range_asm) ENTRY_CFI(flush_kernel_icache_page) - .proc - .callinfo NO_CALLS - .entry - ldil L%icache_stride, %r1 ldw R%icache_stride(%r1), %r23 @@ -1190,16 +1082,9 @@ ENTRY_CFI(flush_kernel_icache_page) sync bv %r0(%r2) nop - .exit - - .procend ENDPROC_CFI(flush_kernel_icache_page) ENTRY_CFI(flush_kernel_icache_range_asm) - .proc - .callinfo NO_CALLS - .entry - ldil L%icache_stride, %r1 ldw R%icache_stride(%r1), %r23 ldo -1(%r23), %r21 @@ -1211,8 +1096,6 @@ ENTRY_CFI(flush_kernel_icache_range_asm) sync bv %r0(%r2) nop - .exit - .procend ENDPROC_CFI(flush_kernel_icache_range_asm) __INIT @@ -1222,10 +1105,6 @@ ENDPROC_CFI(flush_kernel_icache_range_asm) */ .align 256 ENTRY_CFI(disable_sr_hashing_asm) - .proc - .callinfo NO_CALLS - .entry - /* * Switch to real mode */ @@ -1307,9 +1186,6 
@@ srdis_done: 2: bv %r0(%r2) nop - .exit - - .procend ENDPROC_CFI(disable_sr_hashing_asm) .end diff --git a/arch/parisc/kernel/pci-dma.c b/arch/parisc/kernel/pci-dma.c index 6df07ce4f3c2..04c48f1ef3fb 100644 --- a/arch/parisc/kernel/pci-dma.c +++ b/arch/parisc/kernel/pci-dma.c @@ -21,13 +21,12 @@ #include #include #include -#include #include #include #include #include -#include -#include +#include +#include #include #include /* for DMA_CHUNK_SIZE */ @@ -395,7 +394,7 @@ pcxl_dma_init(void) __initcall(pcxl_dma_init); -static void *pa11_dma_alloc(struct device *dev, size_t size, +static void *pcxl_dma_alloc(struct device *dev, size_t size, dma_addr_t *dma_handle, gfp_t flag, unsigned long attrs) { unsigned long vaddr; @@ -422,190 +421,60 @@ static void *pa11_dma_alloc(struct device *dev, size_t size, return (void *)vaddr; } -static void pa11_dma_free(struct device *dev, size_t size, void *vaddr, - dma_addr_t dma_handle, unsigned long attrs) +static void *pcx_dma_alloc(struct device *dev, size_t size, + dma_addr_t *dma_handle, gfp_t flag, unsigned long attrs) { - int order; - - order = get_order(size); - size = 1 << (order + PAGE_SHIFT); - unmap_uncached_pages((unsigned long)vaddr, size); - pcxl_free_range((unsigned long)vaddr, size); - free_pages((unsigned long)__va(dma_handle), order); -} + void *addr; -static dma_addr_t pa11_dma_map_page(struct device *dev, struct page *page, - unsigned long offset, size_t size, - enum dma_data_direction direction, unsigned long attrs) -{ - void *addr = page_address(page) + offset; - BUG_ON(direction == DMA_NONE); + if ((attrs & DMA_ATTR_NON_CONSISTENT) == 0) + return NULL; - if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC)) - flush_kernel_dcache_range((unsigned long) addr, size); + addr = (void *)__get_free_pages(flag, get_order(size)); + if (addr) + *dma_handle = (dma_addr_t)virt_to_phys(addr); - return virt_to_phys(addr); + return addr; } -static void pa11_dma_unmap_page(struct device *dev, dma_addr_t dma_handle, - size_t size, enum dma_data_direction direction, - unsigned long attrs) +void *arch_dma_alloc(struct device *dev, size_t size, + dma_addr_t *dma_handle, gfp_t gfp, unsigned long attrs) { - BUG_ON(direction == DMA_NONE); - - if (attrs & DMA_ATTR_SKIP_CPU_SYNC) - return; - if (direction == DMA_TO_DEVICE) - return; - - /* - * For PCI_DMA_FROMDEVICE this flush is not necessary for the - * simple map/unmap case. However, it IS necessary if - * pci_dma_sync_single_* has been called and the buffer reused.
- */ - - flush_kernel_dcache_range((unsigned long) phys_to_virt(dma_handle), size); + if (boot_cpu_data.cpu_type == pcxl2 || boot_cpu_data.cpu_type == pcxl) + return pcxl_dma_alloc(dev, size, dma_handle, gfp, attrs); + else + return pcx_dma_alloc(dev, size, dma_handle, gfp, attrs); } -static int pa11_dma_map_sg(struct device *dev, struct scatterlist *sglist, - int nents, enum dma_data_direction direction, - unsigned long attrs) +void arch_dma_free(struct device *dev, size_t size, void *vaddr, + dma_addr_t dma_handle, unsigned long attrs) { - int i; - struct scatterlist *sg; - - BUG_ON(direction == DMA_NONE); + int order = get_order(size); - for_each_sg(sglist, sg, nents, i) { - unsigned long vaddr = (unsigned long)sg_virt(sg); + if (boot_cpu_data.cpu_type == pcxl2 || boot_cpu_data.cpu_type == pcxl) { + size = 1 << (order + PAGE_SHIFT); + unmap_uncached_pages((unsigned long)vaddr, size); + pcxl_free_range((unsigned long)vaddr, size); - sg_dma_address(sg) = (dma_addr_t) virt_to_phys(vaddr); - sg_dma_len(sg) = sg->length; - - if (attrs & DMA_ATTR_SKIP_CPU_SYNC) - continue; - - flush_kernel_dcache_range(vaddr, sg->length); + vaddr = __va(dma_handle); } - return nents; -} - -static void pa11_dma_unmap_sg(struct device *dev, struct scatterlist *sglist, - int nents, enum dma_data_direction direction, - unsigned long attrs) -{ - int i; - struct scatterlist *sg; - - BUG_ON(direction == DMA_NONE); - - if (attrs & DMA_ATTR_SKIP_CPU_SYNC) - return; - - if (direction == DMA_TO_DEVICE) - return; - - /* once we do combining we'll need to use phys_to_virt(sg_dma_address(sglist)) */ - - for_each_sg(sglist, sg, nents, i) - flush_kernel_vmap_range(sg_virt(sg), sg->length); -} - -static void pa11_dma_sync_single_for_cpu(struct device *dev, - dma_addr_t dma_handle, size_t size, - enum dma_data_direction direction) -{ - BUG_ON(direction == DMA_NONE); - - flush_kernel_dcache_range((unsigned long) phys_to_virt(dma_handle), - size); -} - -static void pa11_dma_sync_single_for_device(struct device *dev, - dma_addr_t dma_handle, size_t size, - enum dma_data_direction direction) -{ - BUG_ON(direction == DMA_NONE); - - flush_kernel_dcache_range((unsigned long) phys_to_virt(dma_handle), - size); + free_pages((unsigned long)vaddr, get_order(size)); } -static void pa11_dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sglist, int nents, enum dma_data_direction direction) +void arch_sync_dma_for_device(struct device *dev, phys_addr_t paddr, + size_t size, enum dma_data_direction dir) { - int i; - struct scatterlist *sg; - - /* once we do combining we'll need to use phys_to_virt(sg_dma_address(sglist)) */ - - for_each_sg(sglist, sg, nents, i) - flush_kernel_vmap_range(sg_virt(sg), sg->length); + flush_kernel_dcache_range((unsigned long)phys_to_virt(paddr), size); } -static void pa11_dma_sync_sg_for_device(struct device *dev, struct scatterlist *sglist, int nents, enum dma_data_direction direction) +void arch_sync_dma_for_cpu(struct device *dev, phys_addr_t paddr, + size_t size, enum dma_data_direction dir) { - int i; - struct scatterlist *sg; - - /* once we do combining we'll need to use phys_to_virt(sg_dma_address(sglist)) */ - - for_each_sg(sglist, sg, nents, i) - flush_kernel_vmap_range(sg_virt(sg), sg->length); + flush_kernel_dcache_range((unsigned long)phys_to_virt(paddr), size); } -static void pa11_dma_cache_sync(struct device *dev, void *vaddr, size_t size, +void arch_dma_cache_sync(struct device *dev, void *vaddr, size_t size, enum dma_data_direction direction) { flush_kernel_dcache_range((unsigned 
long)vaddr, size); } - -const struct dma_map_ops pcxl_dma_ops = { - .alloc = pa11_dma_alloc, - .free = pa11_dma_free, - .map_page = pa11_dma_map_page, - .unmap_page = pa11_dma_unmap_page, - .map_sg = pa11_dma_map_sg, - .unmap_sg = pa11_dma_unmap_sg, - .sync_single_for_cpu = pa11_dma_sync_single_for_cpu, - .sync_single_for_device = pa11_dma_sync_single_for_device, - .sync_sg_for_cpu = pa11_dma_sync_sg_for_cpu, - .sync_sg_for_device = pa11_dma_sync_sg_for_device, - .cache_sync = pa11_dma_cache_sync, -}; - -static void *pcx_dma_alloc(struct device *dev, size_t size, - dma_addr_t *dma_handle, gfp_t flag, unsigned long attrs) -{ - void *addr; - - if ((attrs & DMA_ATTR_NON_CONSISTENT) == 0) - return NULL; - - addr = (void *)__get_free_pages(flag, get_order(size)); - if (addr) - *dma_handle = (dma_addr_t)virt_to_phys(addr); - - return addr; -} - -static void pcx_dma_free(struct device *dev, size_t size, void *vaddr, - dma_addr_t iova, unsigned long attrs) -{ - free_pages((unsigned long)vaddr, get_order(size)); - return; -} - -const struct dma_map_ops pcx_dma_ops = { - .alloc = pcx_dma_alloc, - .free = pcx_dma_free, - .map_page = pa11_dma_map_page, - .unmap_page = pa11_dma_unmap_page, - .map_sg = pa11_dma_map_sg, - .unmap_sg = pa11_dma_unmap_sg, - .sync_single_for_cpu = pa11_dma_sync_single_for_cpu, - .sync_single_for_device = pa11_dma_sync_single_for_device, - .sync_sg_for_cpu = pa11_dma_sync_sg_for_cpu, - .sync_sg_for_device = pa11_dma_sync_sg_for_device, - .cache_sync = pa11_dma_cache_sync, -}; diff --git a/arch/parisc/kernel/process.c b/arch/parisc/kernel/process.c index b931745815e0..eb39e7e380d7 100644 --- a/arch/parisc/kernel/process.c +++ b/arch/parisc/kernel/process.c @@ -302,7 +302,7 @@ get_wchan(struct task_struct *p) ip = info.ip; if (!in_sched_functions(ip)) return ip; - } while (count++ < 16); + } while (count++ < MAX_UNWIND_ENTRIES); return 0; } diff --git a/arch/parisc/kernel/ptrace.c b/arch/parisc/kernel/ptrace.c index 7aa1d4d0d444..2582df1c529b 100644 --- a/arch/parisc/kernel/ptrace.c +++ b/arch/parisc/kernel/ptrace.c @@ -676,3 +676,103 @@ const struct user_regset_view *task_user_regset_view(struct task_struct *task) #endif return &user_parisc_native_view; } + + +/* HAVE_REGS_AND_STACK_ACCESS_API feature */ + +struct pt_regs_offset { + const char *name; + int offset; +}; + +#define REG_OFFSET_NAME(r) {.name = #r, .offset = offsetof(struct pt_regs, r)} +#define REG_OFFSET_INDEX(r,i) {.name = #r#i, .offset = offsetof(struct pt_regs, r[i])} +#define REG_OFFSET_END {.name = NULL, .offset = 0} + +static const struct pt_regs_offset regoffset_table[] = { + REG_OFFSET_INDEX(gr,0), + REG_OFFSET_INDEX(gr,1), + REG_OFFSET_INDEX(gr,2), + REG_OFFSET_INDEX(gr,3), + REG_OFFSET_INDEX(gr,4), + REG_OFFSET_INDEX(gr,5), + REG_OFFSET_INDEX(gr,6), + REG_OFFSET_INDEX(gr,7), + REG_OFFSET_INDEX(gr,8), + REG_OFFSET_INDEX(gr,9), + REG_OFFSET_INDEX(gr,10), + REG_OFFSET_INDEX(gr,11), + REG_OFFSET_INDEX(gr,12), + REG_OFFSET_INDEX(gr,13), + REG_OFFSET_INDEX(gr,14), + REG_OFFSET_INDEX(gr,15), + REG_OFFSET_INDEX(gr,16), + REG_OFFSET_INDEX(gr,17), + REG_OFFSET_INDEX(gr,18), + REG_OFFSET_INDEX(gr,19), + REG_OFFSET_INDEX(gr,20), + REG_OFFSET_INDEX(gr,21), + REG_OFFSET_INDEX(gr,22), + REG_OFFSET_INDEX(gr,23), + REG_OFFSET_INDEX(gr,24), + REG_OFFSET_INDEX(gr,25), + REG_OFFSET_INDEX(gr,26), + REG_OFFSET_INDEX(gr,27), + REG_OFFSET_INDEX(gr,28), + REG_OFFSET_INDEX(gr,29), + REG_OFFSET_INDEX(gr,30), + REG_OFFSET_INDEX(gr,31), + REG_OFFSET_INDEX(sr,0), + REG_OFFSET_INDEX(sr,1), + REG_OFFSET_INDEX(sr,2), + 
REG_OFFSET_INDEX(sr,3), + REG_OFFSET_INDEX(sr,4), + REG_OFFSET_INDEX(sr,5), + REG_OFFSET_INDEX(sr,6), + REG_OFFSET_INDEX(sr,7), + REG_OFFSET_INDEX(iasq,0), + REG_OFFSET_INDEX(iasq,1), + REG_OFFSET_INDEX(iaoq,0), + REG_OFFSET_INDEX(iaoq,1), + REG_OFFSET_NAME(cr27), + REG_OFFSET_NAME(ksp), + REG_OFFSET_NAME(kpc), + REG_OFFSET_NAME(sar), + REG_OFFSET_NAME(iir), + REG_OFFSET_NAME(isr), + REG_OFFSET_NAME(ior), + REG_OFFSET_NAME(ipsw), + REG_OFFSET_END, +}; + +/** + * regs_query_register_offset() - query register offset from its name + * @name: the name of a register + * + * regs_query_register_offset() returns the offset of a register in struct + * pt_regs from its name. If the name is invalid, this returns -EINVAL; + */ +int regs_query_register_offset(const char *name) +{ + const struct pt_regs_offset *roff; + for (roff = regoffset_table; roff->name != NULL; roff++) + if (!strcmp(roff->name, name)) + return roff->offset; + return -EINVAL; +} + +/** + * regs_query_register_name() - query register name from its offset + * @offset: the offset of a register in struct pt_regs. + * + * regs_query_register_name() returns the name of a register from its + * offset in struct pt_regs. If the @offset is invalid, this returns NULL; + */ +const char *regs_query_register_name(unsigned int offset) +{ + const struct pt_regs_offset *roff; + for (roff = regoffset_table; roff->name != NULL; roff++) + if (roff->offset == offset) + return roff->name; + return NULL; +} diff --git a/arch/parisc/kernel/real2.S b/arch/parisc/kernel/real2.S index cc9963421a19..2b16d8d6598f 100644 --- a/arch/parisc/kernel/real2.S +++ b/arch/parisc/kernel/real2.S @@ -35,12 +35,6 @@ real32_stack: real64_stack: .block 8192 -#ifdef CONFIG_64BIT -# define REG_SZ 8 -#else -# define REG_SZ 4 -#endif - #define N_SAVED_REGS 9 save_cr_space: diff --git a/arch/parisc/kernel/setup.c b/arch/parisc/kernel/setup.c index 8d3a7b80ac42..4e87c35c22b7 100644 --- a/arch/parisc/kernel/setup.c +++ b/arch/parisc/kernel/setup.c @@ -97,14 +97,12 @@ void __init dma_ops_init(void) panic( "PA-RISC Linux currently only supports machines that conform to\n" "the PA-RISC 1.1 or 2.0 architecture specification.\n"); - case pcxs: - case pcxt: - hppa_dma_ops = &pcx_dma_ops; - break; case pcxl2: pa7300lc_init(); case pcxl: /* falls through */ - hppa_dma_ops = &pcxl_dma_ops; + case pcxs: + case pcxt: + hppa_dma_ops = &dma_noncoherent_ops; break; default: break; diff --git a/arch/parisc/kernel/syscall.S b/arch/parisc/kernel/syscall.S index e775f80ae28c..5f7e57fcaeef 100644 --- a/arch/parisc/kernel/syscall.S +++ b/arch/parisc/kernel/syscall.S @@ -629,11 +629,12 @@ cas_action: stw %r1, 4(%sr2,%r20) #endif /* The load and store could fail */ -1: ldw,ma 0(%r26), %r28 +1: ldw 0(%r26), %r28 sub,<> %r28, %r25, %r0 -2: stw,ma %r24, 0(%r26) +2: stw %r24, 0(%r26) /* Free lock */ - stw,ma %r20, 0(%sr2,%r20) + sync + stw %r20, 0(%sr2,%r20) #if ENABLE_LWS_DEBUG /* Clear thread register indicator */ stw %r0, 4(%sr2,%r20) @@ -647,6 +648,7 @@ cas_action: 3: /* Error occurred on load or store */ /* Free lock */ + sync stw %r20, 0(%sr2,%r20) #if ENABLE_LWS_DEBUG stw %r0, 4(%sr2,%r20) @@ -796,30 +798,30 @@ cas2_action: ldo 1(%r0),%r28 /* 8bit CAS */ -13: ldb,ma 0(%r26), %r29 +13: ldb 0(%r26), %r29 sub,= %r29, %r25, %r0 b,n cas2_end -14: stb,ma %r24, 0(%r26) +14: stb %r24, 0(%r26) b cas2_end copy %r0, %r28 nop nop /* 16bit CAS */ -15: ldh,ma 0(%r26), %r29 +15: ldh 0(%r26), %r29 sub,= %r29, %r25, %r0 b,n cas2_end -16: sth,ma %r24, 0(%r26) +16: sth %r24, 0(%r26) b cas2_end copy %r0, %r28 nop nop 
/* 32bit CAS */ -17: ldw,ma 0(%r26), %r29 +17: ldw 0(%r26), %r29 sub,= %r29, %r25, %r0 b,n cas2_end -18: stw,ma %r24, 0(%r26) +18: stw %r24, 0(%r26) b cas2_end copy %r0, %r28 nop @@ -827,10 +829,10 @@ cas2_action: /* 64bit CAS */ #ifdef CONFIG_64BIT -19: ldd,ma 0(%r26), %r29 +19: ldd 0(%r26), %r29 sub,*= %r29, %r25, %r0 b,n cas2_end -20: std,ma %r24, 0(%r26) +20: std %r24, 0(%r26) copy %r0, %r28 #else /* Compare first word */ @@ -848,7 +850,8 @@ cas2_action: cas2_end: /* Free lock */ - stw,ma %r20, 0(%sr2,%r20) + sync + stw %r20, 0(%sr2,%r20) /* Enable interrupts */ ssm PSW_SM_I, %r0 /* Return to userspace, set no error */ @@ -858,6 +861,7 @@ cas2_end: 22: /* Error occurred on load or store */ /* Free lock */ + sync stw %r20, 0(%sr2,%r20) ssm PSW_SM_I, %r0 ldo 1(%r0),%r28 diff --git a/arch/parisc/kernel/syscall_table.S b/arch/parisc/kernel/syscall_table.S index 6308749359e4..fe3f2a49d2b1 100644 --- a/arch/parisc/kernel/syscall_table.S +++ b/arch/parisc/kernel/syscall_table.S @@ -445,6 +445,7 @@ ENTRY_COMP(preadv2) ENTRY_COMP(pwritev2) ENTRY_SAME(statx) + ENTRY_COMP(io_pgetevents) /* 350 */ .ifne (. - 90b) - (__NR_Linux_syscalls * (91b - 90b)) diff --git a/arch/parisc/kernel/traps.c b/arch/parisc/kernel/traps.c index 4309ad31a874..318815212518 100644 --- a/arch/parisc/kernel/traps.c +++ b/arch/parisc/kernel/traps.c @@ -172,7 +172,7 @@ static void do_show_stack(struct unwind_frame_info *info) int i = 1; printk(KERN_CRIT "Backtrace:\n"); - while (i <= 16) { + while (i <= MAX_UNWIND_ENTRIES) { if (unwind_once(info) < 0 || info->ip == 0) break; diff --git a/arch/parisc/kernel/unwind.c b/arch/parisc/kernel/unwind.c index 143f90e2f9f3..5cdf13069dd9 100644 --- a/arch/parisc/kernel/unwind.c +++ b/arch/parisc/kernel/unwind.c @@ -13,7 +13,6 @@ #include #include #include -#include #include #include @@ -25,7 +24,7 @@ /* #define DEBUG 1 */ #ifdef DEBUG -#define dbg(x...) printk(x) +#define dbg(x...) pr_debug(x) #else #define dbg(x...) #endif @@ -117,7 +116,8 @@ unwind_table_init(struct unwind_table *table, const char *name, for (; start <= end; start++) { if (start < end && start->region_end > (start+1)->region_start) { - printk("WARNING: Out of order unwind entry! %p and %p\n", start, start+1); + pr_warn("Out of order unwind entry! %px and %px\n", + start, start+1); } start->region_start += base_addr; @@ -182,7 +182,7 @@ int __init unwind_init(void) start = (long)&__start___unwind[0]; stop = (long)&__stop___unwind[0]; - printk("unwind_init: start = 0x%lx, end = 0x%lx, entries = %lu\n", + dbg("unwind_init: start = 0x%lx, end = 0x%lx, entries = %lu\n", start, stop, (stop - start) / sizeof(struct unwind_table_entry)); @@ -203,25 +203,60 @@ int __init unwind_init(void) return 0; } -#ifdef CONFIG_64BIT -#define get_func_addr(fptr) fptr[2] -#else -#define get_func_addr(fptr) fptr[0] -#endif - static int unwind_special(struct unwind_frame_info *info, unsigned long pc, int frame_size) { - extern void handle_interruption(int, struct pt_regs *); - static unsigned long *hi = (unsigned long *)&handle_interruption; - - if (pc == get_func_addr(hi)) { + /* + * We have to use void * instead of a function pointer, because + * function pointers aren't a pointer to the function on 64-bit. 
+ * Make them const so the compiler knows they live in .text + */ + extern void * const handle_interruption; + extern void * const ret_from_kernel_thread; + extern void * const syscall_exit; + extern void * const intr_return; + extern void * const _switch_to_ret; +#ifdef CONFIG_IRQSTACKS + extern void * const call_on_stack; +#endif /* CONFIG_IRQSTACKS */ + + if (pc == (unsigned long) &handle_interruption) { struct pt_regs *regs = (struct pt_regs *)(info->sp - frame_size - PT_SZ_ALGN); dbg("Unwinding through handle_interruption()\n"); info->prev_sp = regs->gr[30]; info->prev_ip = regs->iaoq[0]; + return 1; + } + + if (pc == (unsigned long) &ret_from_kernel_thread || + pc == (unsigned long) &syscall_exit) { + info->prev_sp = info->prev_ip = 0; + return 1; + } + + if (pc == (unsigned long) &intr_return) { + struct pt_regs *regs; + + dbg("Found intr_return()\n"); + regs = (struct pt_regs *)(info->sp - PT_SZ_ALGN); + info->prev_sp = regs->gr[30]; + info->prev_ip = regs->iaoq[0]; + info->rp = regs->gr[2]; + return 1; + } + + if (pc == (unsigned long) &_switch_to_ret) { + info->prev_sp = info->sp - CALLEE_SAVE_FRAME_SIZE; + info->prev_ip = *(unsigned long *)(info->prev_sp - RP_OFFSET); + return 1; + } +#ifdef CONFIG_IRQSTACKS + if (pc == (unsigned long) &call_on_stack) { + info->prev_sp = *(unsigned long *)(info->sp - FRAME_SIZE - REG_SZ); + info->prev_ip = *(unsigned long *)(info->sp - FRAME_SIZE - RP_OFFSET); return 1; } +#endif return 0; } @@ -238,34 +273,8 @@ static void unwind_frame_regs(struct unwind_frame_info *info) if (e == NULL) { unsigned long sp; - dbg("Cannot find unwind entry for 0x%lx; forced unwinding\n", info->ip); - -#ifdef CONFIG_KALLSYMS - /* Handle some frequent special cases.... */ - { - char symname[KSYM_NAME_LEN]; - char *modname; - - kallsyms_lookup(info->ip, NULL, NULL, &modname, - symname); - - dbg("info->ip = 0x%lx, name = %s\n", info->ip, symname); - - if (strcmp(symname, "_switch_to_ret") == 0) { - info->prev_sp = info->sp - CALLEE_SAVE_FRAME_SIZE; - info->prev_ip = *(unsigned long *)(info->prev_sp - RP_OFFSET); - dbg("_switch_to_ret @ %lx - setting " - "prev_sp=%lx prev_ip=%lx\n", - info->ip, info->prev_sp, - info->prev_ip); - return; - } else if (strcmp(symname, "ret_from_kernel_thread") == 0 || - strcmp(symname, "syscall_exit") == 0) { - info->prev_ip = info->prev_sp = 0; - return; - } - } -#endif + dbg("Cannot find unwind entry for %pS; forced unwinding\n", + (void *) info->ip); /* Since we are doing the unwinding blind, we don't know if we are adjusting the stack correctly or extracting the rp @@ -439,8 +448,8 @@ unsigned long return_address(unsigned int level) /* initialize unwind info */ asm volatile ("copy %%r30, %0" : "=r"(sp)); memset(&r, 0, sizeof(struct pt_regs)); - r.iaoq[0] = (unsigned long) current_text_addr(); - r.gr[2] = (unsigned long) __builtin_return_address(0); + r.iaoq[0] = _THIS_IP_; + r.gr[2] = _RET_IP_; r.gr[30] = sp; unwind_frame_init(&info, current, &r); diff --git a/arch/parisc/lib/lusercopy.S b/arch/parisc/lib/lusercopy.S index d4fe19806d57..b53fb6fedf06 100644 --- a/arch/parisc/lib/lusercopy.S +++ b/arch/parisc/lib/lusercopy.S @@ -64,9 +64,6 @@ */ ENTRY_CFI(lclear_user) - .proc - .callinfo NO_CALLS - .entry comib,=,n 0,%r25,$lclu_done get_sr $lclu_loop: @@ -81,13 +78,9 @@ $lclu_done: ldo 1(%r25),%r25 ASM_EXCEPTIONTABLE_ENTRY(1b,2b) - - .exit ENDPROC_CFI(lclear_user) - .procend - /* * long lstrnlen_user(char *s, long n) * @@ -97,9 +90,6 @@ ENDPROC_CFI(lclear_user) */ ENTRY_CFI(lstrnlen_user) - .proc - .callinfo NO_CALLS - .entry comib,= 
0,%r25,$lslen_nzero copy %r26,%r24 get_sr @@ -111,7 +101,6 @@ $lslen_loop: $lslen_done: bv %r0(%r2) sub %r26,%r24,%r28 - .exit $lslen_nzero: b $lslen_done @@ -125,9 +114,6 @@ $lslen_nzero: ENDPROC_CFI(lstrnlen_user) - .procend - - /* * unsigned long pa_memcpy(void *dstp, const void *srcp, unsigned long len) @@ -186,10 +172,6 @@ ENDPROC_CFI(lstrnlen_user) save_len = r31 ENTRY_CFI(pa_memcpy) - .proc - .callinfo NO_CALLS - .entry - /* Last destination address */ add dst,len,end @@ -439,9 +421,6 @@ ENTRY_CFI(pa_memcpy) b .Lcopy_done 10: stw,ma t1,4(dstspc,dst) ASM_EXCEPTIONTABLE_ENTRY(10b,.Lcopy_done) - - .exit ENDPROC_CFI(pa_memcpy) - .procend .end diff --git a/arch/parisc/mm/init.c b/arch/parisc/mm/init.c index 2607d2d33405..74842d28a7a1 100644 --- a/arch/parisc/mm/init.c +++ b/arch/parisc/mm/init.c @@ -19,7 +19,6 @@ #include #include #include -#include /* for hppa_dma_ops and pcxl_dma_ops */ #include #include #include @@ -616,17 +615,13 @@ void __init mem_init(void) free_all_bootmem(); #ifdef CONFIG_PA11 - if (hppa_dma_ops == &pcxl_dma_ops) { + if (boot_cpu_data.cpu_type == pcxl2 || boot_cpu_data.cpu_type == pcxl) { pcxl_dma_start = (unsigned long)SET_MAP_OFFSET(MAP_START); parisc_vmalloc_start = SET_MAP_OFFSET(pcxl_dma_start + PCXL_DMA_MAP_SIZE); - } else { - pcxl_dma_start = 0; - parisc_vmalloc_start = SET_MAP_OFFSET(MAP_START); - } -#else - parisc_vmalloc_start = SET_MAP_OFFSET(MAP_START); + } else #endif + parisc_vmalloc_start = SET_MAP_OFFSET(MAP_START); mem_init_print_info(NULL); diff --git a/arch/powerpc/Makefile b/arch/powerpc/Makefile index bd06a3ccda31..fb96206de317 100644 --- a/arch/powerpc/Makefile +++ b/arch/powerpc/Makefile @@ -243,7 +243,9 @@ endif cpu-as-$(CONFIG_4xx) += -Wa,-m405 cpu-as-$(CONFIG_ALTIVEC) += $(call as-option,-Wa$(comma)-maltivec) cpu-as-$(CONFIG_E200) += -Wa,-me200 +cpu-as-$(CONFIG_E500) += -Wa,-me500 cpu-as-$(CONFIG_PPC_BOOK3S_64) += -Wa,-mpower4 +cpu-as-$(CONFIG_PPC_E500MC) += $(call as-option,-Wa$(comma)-me500mc) KBUILD_AFLAGS += $(cpu-as-y) KBUILD_CFLAGS += $(cpu-as-y) diff --git a/arch/powerpc/include/asm/asm-prototypes.h b/arch/powerpc/include/asm/asm-prototypes.h index aa9e785c59c2..7841b8a60657 100644 --- a/arch/powerpc/include/asm/asm-prototypes.h +++ b/arch/powerpc/include/asm/asm-prototypes.h @@ -134,7 +134,13 @@ unsigned long prepare_ftrace_return(unsigned long parent, unsigned long ip); void pnv_power9_force_smt4_catch(void); void pnv_power9_force_smt4_release(void); +/* Transactional memory related */ void tm_enable(void); void tm_disable(void); void tm_abort(uint8_t cause); + +struct kvm_vcpu; +void _kvmppc_restore_tm_pr(struct kvm_vcpu *vcpu, u64 guest_msr); +void _kvmppc_save_tm_pr(struct kvm_vcpu *vcpu, u64 guest_msr); + #endif /* _ASM_POWERPC_ASM_PROTOTYPES_H */ diff --git a/arch/powerpc/include/asm/atomic.h b/arch/powerpc/include/asm/atomic.h index 682b3e6a1e21..963abf8bf1c0 100644 --- a/arch/powerpc/include/asm/atomic.h +++ b/arch/powerpc/include/asm/atomic.h @@ -18,18 +18,11 @@ * a "bne-" instruction at the end, so an isync is enough as an acquire barrier * on the platform without lwsync. */ -#define __atomic_op_acquire(op, args...) \ -({ \ - typeof(op##_relaxed(args)) __ret = op##_relaxed(args); \ - __asm__ __volatile__(PPC_ACQUIRE_BARRIER "" : : : "memory"); \ - __ret; \ -}) - -#define __atomic_op_release(op, args...)
\ -({ \ - __asm__ __volatile__(PPC_RELEASE_BARRIER "" : : : "memory"); \ - op##_relaxed(args); \ -}) +#define __atomic_acquire_fence() \ + __asm__ __volatile__(PPC_ACQUIRE_BARRIER "" : : : "memory") + +#define __atomic_release_fence() \ + __asm__ __volatile__(PPC_RELEASE_BARRIER "" : : : "memory") static __inline__ int atomic_read(const atomic_t *v) { @@ -129,8 +122,6 @@ ATOMIC_OPS(xor, xor) #undef ATOMIC_OP_RETURN_RELAXED #undef ATOMIC_OP -#define atomic_add_negative(a, v) (atomic_add_return((a), (v)) < 0) - static __inline__ void atomic_inc(atomic_t *v) { int t; @@ -145,6 +136,7 @@ static __inline__ void atomic_inc(atomic_t *v) : "r" (&v->counter) : "cc", "xer"); } +#define atomic_inc atomic_inc static __inline__ int atomic_inc_return_relaxed(atomic_t *v) { @@ -163,16 +155,6 @@ static __inline__ int atomic_inc_return_relaxed(atomic_t *v) return t; } -/* - * atomic_inc_and_test - increment and test - * @v: pointer of type atomic_t - * - * Atomically increments @v by 1 - * and returns true if the result is zero, or false for all - * other cases. - */ -#define atomic_inc_and_test(v) (atomic_inc_return(v) == 0) - static __inline__ void atomic_dec(atomic_t *v) { int t; @@ -187,6 +169,7 @@ static __inline__ void atomic_dec(atomic_t *v) : "r" (&v->counter) : "cc", "xer"); } +#define atomic_dec atomic_dec static __inline__ int atomic_dec_return_relaxed(atomic_t *v) { @@ -218,7 +201,7 @@ static __inline__ int atomic_dec_return_relaxed(atomic_t *v) #define atomic_xchg_relaxed(v, new) xchg_relaxed(&((v)->counter), (new)) /** - * __atomic_add_unless - add unless the number is a given value + * atomic_fetch_add_unless - add unless the number is a given value * @v: pointer of type atomic_t * @a: the amount to add to v... * @u: ...unless v is equal to u. @@ -226,13 +209,13 @@ static __inline__ int atomic_dec_return_relaxed(atomic_t *v) * Atomically adds @a to @v, so long as it was not @u. * Returns the old value of @v. */ -static __inline__ int __atomic_add_unless(atomic_t *v, int a, int u) +static __inline__ int atomic_fetch_add_unless(atomic_t *v, int a, int u) { int t; __asm__ __volatile__ ( PPC_ATOMIC_ENTRY_BARRIER -"1: lwarx %0,0,%1 # __atomic_add_unless\n\ +"1: lwarx %0,0,%1 # atomic_fetch_add_unless\n\ cmpw 0,%0,%3 \n\ beq 2f \n\ add %0,%2,%0 \n" @@ -248,6 +231,7 @@ static __inline__ int __atomic_add_unless(atomic_t *v, int a, int u) return t; } +#define atomic_fetch_add_unless atomic_fetch_add_unless /** * atomic_inc_not_zero - increment unless the number is zero @@ -280,9 +264,6 @@ static __inline__ int atomic_inc_not_zero(atomic_t *v) } #define atomic_inc_not_zero(v) atomic_inc_not_zero((v)) -#define atomic_sub_and_test(a, v) (atomic_sub_return((a), (v)) == 0) -#define atomic_dec_and_test(v) (atomic_dec_return((v)) == 0) - /* * Atomically test *v and decrement if it is greater than 0. 
* The function returns the old value of *v minus 1, even if @@ -412,8 +393,6 @@ ATOMIC64_OPS(xor, xor) #undef ATOMIC64_OP_RETURN_RELAXED #undef ATOMIC64_OP -#define atomic64_add_negative(a, v) (atomic64_add_return((a), (v)) < 0) - static __inline__ void atomic64_inc(atomic64_t *v) { long t; @@ -427,6 +406,7 @@ static __inline__ void atomic64_inc(atomic64_t *v) : "r" (&v->counter) : "cc", "xer"); } +#define atomic64_inc atomic64_inc static __inline__ long atomic64_inc_return_relaxed(atomic64_t *v) { @@ -444,16 +424,6 @@ static __inline__ long atomic64_inc_return_relaxed(atomic64_t *v) return t; } -/* - * atomic64_inc_and_test - increment and test - * @v: pointer of type atomic64_t - * - * Atomically increments @v by 1 - * and returns true if the result is zero, or false for all - * other cases. - */ -#define atomic64_inc_and_test(v) (atomic64_inc_return(v) == 0) - static __inline__ void atomic64_dec(atomic64_t *v) { long t; @@ -467,6 +437,7 @@ static __inline__ void atomic64_dec(atomic64_t *v) : "r" (&v->counter) : "cc", "xer"); } +#define atomic64_dec atomic64_dec static __inline__ long atomic64_dec_return_relaxed(atomic64_t *v) { @@ -487,9 +458,6 @@ static __inline__ long atomic64_dec_return_relaxed(atomic64_t *v) #define atomic64_inc_return_relaxed atomic64_inc_return_relaxed #define atomic64_dec_return_relaxed atomic64_dec_return_relaxed -#define atomic64_sub_and_test(a, v) (atomic64_sub_return((a), (v)) == 0) -#define atomic64_dec_and_test(v) (atomic64_dec_return((v)) == 0) - /* * Atomically test *v and decrement if it is greater than 0. * The function returns the old value of *v minus 1. @@ -513,6 +481,7 @@ static __inline__ long atomic64_dec_if_positive(atomic64_t *v) return t; } +#define atomic64_dec_if_positive atomic64_dec_if_positive #define atomic64_cmpxchg(v, o, n) (cmpxchg(&((v)->counter), (o), (n))) #define atomic64_cmpxchg_relaxed(v, o, n) \ @@ -524,7 +493,7 @@ static __inline__ long atomic64_dec_if_positive(atomic64_t *v) #define atomic64_xchg_relaxed(v, new) xchg_relaxed(&((v)->counter), (new)) /** - * atomic64_add_unless - add unless the number is a given value + * atomic64_fetch_add_unless - add unless the number is a given value * @v: pointer of type atomic64_t * @a: the amount to add to v... * @u: ...unless v is equal to u. @@ -532,13 +501,13 @@ static __inline__ long atomic64_dec_if_positive(atomic64_t *v) * Atomically adds @a to @v, so long as it was not @u. * Returns the old value of @v. 
*/ -static __inline__ int atomic64_add_unless(atomic64_t *v, long a, long u) +static __inline__ long atomic64_fetch_add_unless(atomic64_t *v, long a, long u) { long t; __asm__ __volatile__ ( PPC_ATOMIC_ENTRY_BARRIER -"1: ldarx %0,0,%1 # __atomic_add_unless\n\ +"1: ldarx %0,0,%1 # atomic64_fetch_add_unless\n\ cmpd 0,%0,%3 \n\ beq 2f \n\ add %0,%2,%0 \n" @@ -551,8 +520,9 @@ static __inline__ int atomic64_add_unless(atomic64_t *v, long a, long u) : "r" (&v->counter), "r" (a), "r" (u) : "cc", "memory"); - return t != u; + return t; } +#define atomic64_fetch_add_unless atomic64_fetch_add_unless /** * atomic64_inc_not_zero - increment unless the number is zero @@ -582,6 +552,7 @@ static __inline__ int atomic64_inc_not_zero(atomic64_t *v) return t1 != 0; } +#define atomic64_inc_not_zero(v) atomic64_inc_not_zero((v)) #endif /* __powerpc64__ */ diff --git a/arch/powerpc/include/asm/book3s/32/pgalloc.h b/arch/powerpc/include/asm/book3s/32/pgalloc.h index 6a6673907e45..82e44b1a00ae 100644 --- a/arch/powerpc/include/asm/book3s/32/pgalloc.h +++ b/arch/powerpc/include/asm/book3s/32/pgalloc.h @@ -108,6 +108,7 @@ static inline void pgtable_free(void *table, unsigned index_size) } #define check_pgt_cache() do { } while (0) +#define get_hugepd_cache_index(x) (x) #ifdef CONFIG_SMP static inline void pgtable_free_tlb(struct mmu_gather *tlb, @@ -137,7 +138,6 @@ static inline void pgtable_free_tlb(struct mmu_gather *tlb, static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t table, unsigned long address) { - pgtable_page_dtor(table); pgtable_free_tlb(tlb, page_address(table), 0); } #endif /* _ASM_POWERPC_BOOK3S_32_PGALLOC_H */ diff --git a/arch/powerpc/include/asm/book3s/64/pgtable-4k.h b/arch/powerpc/include/asm/book3s/64/pgtable-4k.h index af5f2baac80f..a069dfcac9a9 100644 --- a/arch/powerpc/include/asm/book3s/64/pgtable-4k.h +++ b/arch/powerpc/include/asm/book3s/64/pgtable-4k.h @@ -49,6 +49,27 @@ static inline int hugepd_ok(hugepd_t hpd) } #define is_hugepd(hpd) (hugepd_ok(hpd)) +/* + * 16M and 16G huge page directory tables are allocated from slab cache + * + */ +#define H_16M_CACHE_INDEX (PAGE_SHIFT + H_PTE_INDEX_SIZE + H_PMD_INDEX_SIZE - 24) +#define H_16G_CACHE_INDEX \ + (PAGE_SHIFT + H_PTE_INDEX_SIZE + H_PMD_INDEX_SIZE + H_PUD_INDEX_SIZE - 34) + +static inline int get_hugepd_cache_index(int index) +{ + switch (index) { + case H_16M_CACHE_INDEX: + return HTLB_16M_INDEX; + case H_16G_CACHE_INDEX: + return HTLB_16G_INDEX; + default: + BUG(); + } + /* should not reach */ +} + #else /* !CONFIG_HUGETLB_PAGE */ static inline int pmd_huge(pmd_t pmd) { return 0; } static inline int pud_huge(pud_t pud) { return 0; } diff --git a/arch/powerpc/include/asm/book3s/64/pgtable-64k.h b/arch/powerpc/include/asm/book3s/64/pgtable-64k.h index fb4b3ba52339..d7ee249d6890 100644 --- a/arch/powerpc/include/asm/book3s/64/pgtable-64k.h +++ b/arch/powerpc/include/asm/book3s/64/pgtable-64k.h @@ -45,8 +45,17 @@ static inline int hugepd_ok(hugepd_t hpd) { return 0; } + #define is_hugepd(pdep) 0 +/* + * This should never get called + */ +static inline int get_hugepd_cache_index(int index) +{ + BUG(); +} + #else /* !CONFIG_HUGETLB_PAGE */ static inline int pmd_huge(pmd_t pmd) { return 0; } static inline int pud_huge(pud_t pud) { return 0; } diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h index 63cee159022b..42aafba7a308 100644 --- a/arch/powerpc/include/asm/book3s/64/pgtable.h +++ b/arch/powerpc/include/asm/book3s/64/pgtable.h @@ -287,6 +287,11 @@ enum pgtable_index
{ PMD_INDEX, PUD_INDEX, PGD_INDEX, + /* + * Below are used with 4k page size and hugetlb + */ + HTLB_16M_INDEX, + HTLB_16G_INDEX, }; extern unsigned long __vmalloc_start; diff --git a/arch/powerpc/include/asm/hw_breakpoint.h b/arch/powerpc/include/asm/hw_breakpoint.h index 8e7b09703ca4..27d6e3c8fde9 100644 --- a/arch/powerpc/include/asm/hw_breakpoint.h +++ b/arch/powerpc/include/asm/hw_breakpoint.h @@ -52,6 +52,7 @@ struct arch_hw_breakpoint { #include #include +struct perf_event_attr; struct perf_event; struct pmu; struct perf_sample_data; @@ -60,8 +61,10 @@ struct perf_sample_data; extern int hw_breakpoint_slots(int type); extern int arch_bp_generic_fields(int type, int *gen_bp_type); -extern int arch_check_bp_in_kernelspace(struct perf_event *bp); -extern int arch_validate_hwbkpt_settings(struct perf_event *bp); +extern int arch_check_bp_in_kernelspace(struct arch_hw_breakpoint *hw); +extern int hw_breakpoint_arch_parse(struct perf_event *bp, + const struct perf_event_attr *attr, + struct arch_hw_breakpoint *hw); extern int hw_breakpoint_exceptions_notify(struct notifier_block *unused, unsigned long val, void *data); int arch_install_hw_breakpoint(struct perf_event *bp); diff --git a/arch/powerpc/include/asm/kprobes.h b/arch/powerpc/include/asm/kprobes.h index 9f3be5c8a4a3..785c464b6588 100644 --- a/arch/powerpc/include/asm/kprobes.h +++ b/arch/powerpc/include/asm/kprobes.h @@ -88,7 +88,6 @@ struct prev_kprobe { struct kprobe_ctlblk { unsigned long kprobe_status; unsigned long kprobe_saved_msr; - struct pt_regs jprobe_saved_regs; struct prev_kprobe prev_kprobe; }; @@ -103,17 +102,6 @@ extern int kprobe_exceptions_notify(struct notifier_block *self, extern int kprobe_fault_handler(struct pt_regs *regs, int trapnr); extern int kprobe_handler(struct pt_regs *regs); extern int kprobe_post_handler(struct pt_regs *regs); -#ifdef CONFIG_KPROBES_ON_FTRACE -extern int __is_active_jprobe(unsigned long addr); -extern int skip_singlestep(struct kprobe *p, struct pt_regs *regs, - struct kprobe_ctlblk *kcb); -#else -static inline int skip_singlestep(struct kprobe *p, struct pt_regs *regs, - struct kprobe_ctlblk *kcb) -{ - return 0; -} -#endif #else static inline int kprobe_handler(struct pt_regs *regs) { return 0; } static inline int kprobe_post_handler(struct pt_regs *regs) { return 0; } diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h index e7377b73cfec..1f345a0b6ba2 100644 --- a/arch/powerpc/include/asm/kvm_book3s.h +++ b/arch/powerpc/include/asm/kvm_book3s.h @@ -104,6 +104,7 @@ struct kvmppc_vcore { ulong vtb; /* virtual timebase */ ulong conferring_threads; unsigned int halt_poll_ns; + atomic_t online_count; }; struct kvmppc_vcpu_book3s { @@ -209,6 +210,7 @@ extern void kvmppc_book3s_queue_irqprio(struct kvm_vcpu *vcpu, unsigned int vec) extern void kvmppc_book3s_dequeue_irqprio(struct kvm_vcpu *vcpu, unsigned int vec); extern void kvmppc_inject_interrupt(struct kvm_vcpu *vcpu, int vec, u64 flags); +extern void kvmppc_trigger_fac_interrupt(struct kvm_vcpu *vcpu, ulong fac); extern void kvmppc_set_bat(struct kvm_vcpu *vcpu, struct kvmppc_bat *bat, bool upper, u32 val); extern void kvmppc_giveup_ext(struct kvm_vcpu *vcpu, ulong msr); @@ -256,6 +258,21 @@ extern int kvmppc_hcall_impl_pr(unsigned long cmd); extern int kvmppc_hcall_impl_hv_realmode(unsigned long cmd); extern void kvmppc_copy_to_svcpu(struct kvm_vcpu *vcpu); extern void kvmppc_copy_from_svcpu(struct kvm_vcpu *vcpu); + +#ifdef CONFIG_PPC_TRANSACTIONAL_MEM +void kvmppc_save_tm_pr(struct kvm_vcpu 
*vcpu); +void kvmppc_restore_tm_pr(struct kvm_vcpu *vcpu); +void kvmppc_save_tm_sprs(struct kvm_vcpu *vcpu); +void kvmppc_restore_tm_sprs(struct kvm_vcpu *vcpu); +#else +static inline void kvmppc_save_tm_pr(struct kvm_vcpu *vcpu) {} +static inline void kvmppc_restore_tm_pr(struct kvm_vcpu *vcpu) {} +static inline void kvmppc_save_tm_sprs(struct kvm_vcpu *vcpu) {} +static inline void kvmppc_restore_tm_sprs(struct kvm_vcpu *vcpu) {} +#endif + +void kvmppc_giveup_fac(struct kvm_vcpu *vcpu, ulong fac); + extern int kvm_irq_bypass; static inline struct kvmppc_vcpu_book3s *to_book3s(struct kvm_vcpu *vcpu) @@ -274,12 +291,12 @@ static inline struct kvmppc_vcpu_book3s *to_book3s(struct kvm_vcpu *vcpu) static inline void kvmppc_set_gpr(struct kvm_vcpu *vcpu, int num, ulong val) { - vcpu->arch.gpr[num] = val; + vcpu->arch.regs.gpr[num] = val; } static inline ulong kvmppc_get_gpr(struct kvm_vcpu *vcpu, int num) { - return vcpu->arch.gpr[num]; + return vcpu->arch.regs.gpr[num]; } static inline void kvmppc_set_cr(struct kvm_vcpu *vcpu, u32 val) @@ -294,42 +311,42 @@ static inline u32 kvmppc_get_cr(struct kvm_vcpu *vcpu) static inline void kvmppc_set_xer(struct kvm_vcpu *vcpu, ulong val) { - vcpu->arch.xer = val; + vcpu->arch.regs.xer = val; } static inline ulong kvmppc_get_xer(struct kvm_vcpu *vcpu) { - return vcpu->arch.xer; + return vcpu->arch.regs.xer; } static inline void kvmppc_set_ctr(struct kvm_vcpu *vcpu, ulong val) { - vcpu->arch.ctr = val; + vcpu->arch.regs.ctr = val; } static inline ulong kvmppc_get_ctr(struct kvm_vcpu *vcpu) { - return vcpu->arch.ctr; + return vcpu->arch.regs.ctr; } static inline void kvmppc_set_lr(struct kvm_vcpu *vcpu, ulong val) { - vcpu->arch.lr = val; + vcpu->arch.regs.link = val; } static inline ulong kvmppc_get_lr(struct kvm_vcpu *vcpu) { - return vcpu->arch.lr; + return vcpu->arch.regs.link; } static inline void kvmppc_set_pc(struct kvm_vcpu *vcpu, ulong val) { - vcpu->arch.pc = val; + vcpu->arch.regs.nip = val; } static inline ulong kvmppc_get_pc(struct kvm_vcpu *vcpu) { - return vcpu->arch.pc; + return vcpu->arch.regs.nip; } static inline u64 kvmppc_get_msr(struct kvm_vcpu *vcpu); diff --git a/arch/powerpc/include/asm/kvm_book3s_64.h b/arch/powerpc/include/asm/kvm_book3s_64.h index c424e44f4c00..dc435a5af7d6 100644 --- a/arch/powerpc/include/asm/kvm_book3s_64.h +++ b/arch/powerpc/include/asm/kvm_book3s_64.h @@ -483,15 +483,15 @@ static inline u64 sanitize_msr(u64 msr) static inline void copy_from_checkpoint(struct kvm_vcpu *vcpu) { vcpu->arch.cr = vcpu->arch.cr_tm; - vcpu->arch.xer = vcpu->arch.xer_tm; - vcpu->arch.lr = vcpu->arch.lr_tm; - vcpu->arch.ctr = vcpu->arch.ctr_tm; + vcpu->arch.regs.xer = vcpu->arch.xer_tm; + vcpu->arch.regs.link = vcpu->arch.lr_tm; + vcpu->arch.regs.ctr = vcpu->arch.ctr_tm; vcpu->arch.amr = vcpu->arch.amr_tm; vcpu->arch.ppr = vcpu->arch.ppr_tm; vcpu->arch.dscr = vcpu->arch.dscr_tm; vcpu->arch.tar = vcpu->arch.tar_tm; - memcpy(vcpu->arch.gpr, vcpu->arch.gpr_tm, - sizeof(vcpu->arch.gpr)); + memcpy(vcpu->arch.regs.gpr, vcpu->arch.gpr_tm, + sizeof(vcpu->arch.regs.gpr)); vcpu->arch.fp = vcpu->arch.fp_tm; vcpu->arch.vr = vcpu->arch.vr_tm; vcpu->arch.vrsave = vcpu->arch.vrsave_tm; @@ -500,15 +500,15 @@ static inline void copy_from_checkpoint(struct kvm_vcpu *vcpu) static inline void copy_to_checkpoint(struct kvm_vcpu *vcpu) { vcpu->arch.cr_tm = vcpu->arch.cr; - vcpu->arch.xer_tm = vcpu->arch.xer; - vcpu->arch.lr_tm = vcpu->arch.lr; - vcpu->arch.ctr_tm = vcpu->arch.ctr; + vcpu->arch.xer_tm = vcpu->arch.regs.xer; + vcpu->arch.lr_tm = 
vcpu->arch.regs.link; + vcpu->arch.ctr_tm = vcpu->arch.regs.ctr; vcpu->arch.amr_tm = vcpu->arch.amr; vcpu->arch.ppr_tm = vcpu->arch.ppr; vcpu->arch.dscr_tm = vcpu->arch.dscr; vcpu->arch.tar_tm = vcpu->arch.tar; - memcpy(vcpu->arch.gpr_tm, vcpu->arch.gpr, - sizeof(vcpu->arch.gpr)); + memcpy(vcpu->arch.gpr_tm, vcpu->arch.regs.gpr, + sizeof(vcpu->arch.regs.gpr)); vcpu->arch.fp_tm = vcpu->arch.fp; vcpu->arch.vr_tm = vcpu->arch.vr; vcpu->arch.vrsave_tm = vcpu->arch.vrsave; diff --git a/arch/powerpc/include/asm/kvm_booke.h b/arch/powerpc/include/asm/kvm_booke.h index bc6e29e4dfd4..d513e3ed1c65 100644 --- a/arch/powerpc/include/asm/kvm_booke.h +++ b/arch/powerpc/include/asm/kvm_booke.h @@ -36,12 +36,12 @@ static inline void kvmppc_set_gpr(struct kvm_vcpu *vcpu, int num, ulong val) { - vcpu->arch.gpr[num] = val; + vcpu->arch.regs.gpr[num] = val; } static inline ulong kvmppc_get_gpr(struct kvm_vcpu *vcpu, int num) { - return vcpu->arch.gpr[num]; + return vcpu->arch.regs.gpr[num]; } static inline void kvmppc_set_cr(struct kvm_vcpu *vcpu, u32 val) @@ -56,12 +56,12 @@ static inline u32 kvmppc_get_cr(struct kvm_vcpu *vcpu) static inline void kvmppc_set_xer(struct kvm_vcpu *vcpu, ulong val) { - vcpu->arch.xer = val; + vcpu->arch.regs.xer = val; } static inline ulong kvmppc_get_xer(struct kvm_vcpu *vcpu) { - return vcpu->arch.xer; + return vcpu->arch.regs.xer; } static inline bool kvmppc_need_byteswap(struct kvm_vcpu *vcpu) @@ -72,32 +72,32 @@ static inline bool kvmppc_need_byteswap(struct kvm_vcpu *vcpu) static inline void kvmppc_set_ctr(struct kvm_vcpu *vcpu, ulong val) { - vcpu->arch.ctr = val; + vcpu->arch.regs.ctr = val; } static inline ulong kvmppc_get_ctr(struct kvm_vcpu *vcpu) { - return vcpu->arch.ctr; + return vcpu->arch.regs.ctr; } static inline void kvmppc_set_lr(struct kvm_vcpu *vcpu, ulong val) { - vcpu->arch.lr = val; + vcpu->arch.regs.link = val; } static inline ulong kvmppc_get_lr(struct kvm_vcpu *vcpu) { - return vcpu->arch.lr; + return vcpu->arch.regs.link; } static inline void kvmppc_set_pc(struct kvm_vcpu *vcpu, ulong val) { - vcpu->arch.pc = val; + vcpu->arch.regs.nip = val; } static inline ulong kvmppc_get_pc(struct kvm_vcpu *vcpu) { - return vcpu->arch.pc; + return vcpu->arch.regs.nip; } static inline ulong kvmppc_get_fault_dar(struct kvm_vcpu *vcpu) diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h index 17498e9a26e4..fa4efa7e88f7 100644 --- a/arch/powerpc/include/asm/kvm_host.h +++ b/arch/powerpc/include/asm/kvm_host.h @@ -269,7 +269,6 @@ struct kvm_arch { unsigned long host_lpcr; unsigned long sdr1; unsigned long host_sdr1; - int tlbie_lock; unsigned long lpcr; unsigned long vrma_slb_v; int mmu_ready; @@ -454,6 +453,12 @@ struct mmio_hpte_cache { #define KVMPPC_VSX_COPY_WORD 1 #define KVMPPC_VSX_COPY_DWORD 2 #define KVMPPC_VSX_COPY_DWORD_LOAD_DUMP 3 +#define KVMPPC_VSX_COPY_WORD_LOAD_DUMP 4 + +#define KVMPPC_VMX_COPY_BYTE 8 +#define KVMPPC_VMX_COPY_HWORD 9 +#define KVMPPC_VMX_COPY_WORD 10 +#define KVMPPC_VMX_COPY_DWORD 11 struct openpic; @@ -486,7 +491,7 @@ struct kvm_vcpu_arch { struct kvmppc_book3s_shadow_vcpu *shadow_vcpu; #endif - ulong gpr[32]; + struct pt_regs regs; struct thread_fp_state fp; @@ -521,14 +526,10 @@ struct kvm_vcpu_arch { u32 qpr[32]; #endif - ulong pc; - ulong ctr; - ulong lr; #ifdef CONFIG_PPC_BOOK3S ulong tar; #endif - ulong xer; u32 cr; #ifdef CONFIG_PPC_BOOK3S @@ -626,7 +627,6 @@ struct kvm_vcpu_arch { struct thread_vr_state vr_tm; u32 vrsave_tm; /* also USPRG0 */ - #endif #ifdef CONFIG_KVM_EXIT_TIMING @@ -681,16 
+681,17 @@ struct kvm_vcpu_arch { * Number of simulations for vsx. * If we use 2*8bytes to simulate 1*16bytes, * then the number should be 2 and - * mmio_vsx_copy_type=KVMPPC_VSX_COPY_DWORD. + * mmio_copy_type=KVMPPC_VSX_COPY_DWORD. * If we use 4*4bytes to simulate 1*16bytes, * the number should be 4 and * mmio_vsx_copy_type=KVMPPC_VSX_COPY_WORD. */ u8 mmio_vsx_copy_nums; u8 mmio_vsx_offset; - u8 mmio_vsx_copy_type; u8 mmio_vsx_tx_sx_enabled; u8 mmio_vmx_copy_nums; + u8 mmio_vmx_offset; + u8 mmio_copy_type; u8 osi_needed; u8 osi_enabled; u8 papr_enabled; @@ -772,6 +773,8 @@ struct kvm_vcpu_arch { u64 busy_preempt; u32 emul_inst; + + u32 online; #endif #ifdef CONFIG_KVM_BOOK3S_HV_EXIT_TIMING diff --git a/arch/powerpc/include/asm/kvm_ppc.h b/arch/powerpc/include/asm/kvm_ppc.h index abe7032cdb54..e991821dd7fa 100644 --- a/arch/powerpc/include/asm/kvm_ppc.h +++ b/arch/powerpc/include/asm/kvm_ppc.h @@ -52,7 +52,7 @@ enum emulation_result { EMULATE_EXIT_USER, /* emulation requires exit to user-space */ }; -enum instruction_type { +enum instruction_fetch_type { INST_GENERIC, INST_SC, /* system call */ }; @@ -81,10 +81,10 @@ extern int kvmppc_handle_loads(struct kvm_run *run, struct kvm_vcpu *vcpu, extern int kvmppc_handle_vsx_load(struct kvm_run *run, struct kvm_vcpu *vcpu, unsigned int rt, unsigned int bytes, int is_default_endian, int mmio_sign_extend); -extern int kvmppc_handle_load128_by2x64(struct kvm_run *run, - struct kvm_vcpu *vcpu, unsigned int rt, int is_default_endian); -extern int kvmppc_handle_store128_by2x64(struct kvm_run *run, - struct kvm_vcpu *vcpu, unsigned int rs, int is_default_endian); +extern int kvmppc_handle_vmx_load(struct kvm_run *run, struct kvm_vcpu *vcpu, + unsigned int rt, unsigned int bytes, int is_default_endian); +extern int kvmppc_handle_vmx_store(struct kvm_run *run, struct kvm_vcpu *vcpu, + unsigned int rs, unsigned int bytes, int is_default_endian); extern int kvmppc_handle_store(struct kvm_run *run, struct kvm_vcpu *vcpu, u64 val, unsigned int bytes, int is_default_endian); @@ -93,7 +93,7 @@ extern int kvmppc_handle_vsx_store(struct kvm_run *run, struct kvm_vcpu *vcpu, int is_default_endian); extern int kvmppc_load_last_inst(struct kvm_vcpu *vcpu, - enum instruction_type type, u32 *inst); + enum instruction_fetch_type type, u32 *inst); extern int kvmppc_ld(struct kvm_vcpu *vcpu, ulong *eaddr, int size, void *ptr, bool data); @@ -265,6 +265,8 @@ union kvmppc_one_reg { vector128 vval; u64 vsxval[2]; u32 vsx32val[4]; + u16 vsx16val[8]; + u8 vsx8val[16]; struct { u64 addr; u64 length; @@ -324,13 +326,14 @@ struct kvmppc_ops { int (*get_rmmu_info)(struct kvm *kvm, struct kvm_ppc_rmmu_info *info); int (*set_smt_mode)(struct kvm *kvm, unsigned long mode, unsigned long flags); + void (*giveup_ext)(struct kvm_vcpu *vcpu, ulong msr); }; extern struct kvmppc_ops *kvmppc_hv_ops; extern struct kvmppc_ops *kvmppc_pr_ops; static inline int kvmppc_get_last_inst(struct kvm_vcpu *vcpu, - enum instruction_type type, u32 *inst) + enum instruction_fetch_type type, u32 *inst) { int ret = EMULATE_DONE; u32 fetched_inst; diff --git a/arch/powerpc/include/asm/mmu_context.h b/arch/powerpc/include/asm/mmu_context.h index 896efa559996..b2f89b621b15 100644 --- a/arch/powerpc/include/asm/mmu_context.h +++ b/arch/powerpc/include/asm/mmu_context.h @@ -35,9 +35,9 @@ extern struct mm_iommu_table_group_mem_t *mm_iommu_lookup_rm( extern struct mm_iommu_table_group_mem_t *mm_iommu_find(struct mm_struct *mm, unsigned long ua, unsigned long entries); extern long mm_iommu_ua_to_hpa(struct 
mm_iommu_table_group_mem_t *mem, - unsigned long ua, unsigned long *hpa); + unsigned long ua, unsigned int pageshift, unsigned long *hpa); extern long mm_iommu_ua_to_hpa_rm(struct mm_iommu_table_group_mem_t *mem, - unsigned long ua, unsigned long *hpa); + unsigned long ua, unsigned int pageshift, unsigned long *hpa); extern long mm_iommu_mapped_inc(struct mm_iommu_table_group_mem_t *mem); extern void mm_iommu_mapped_dec(struct mm_iommu_table_group_mem_t *mem); #endif @@ -143,24 +143,33 @@ static inline void mm_context_remove_copro(struct mm_struct *mm) { int c; - c = atomic_dec_if_positive(&mm->context.copros); - - /* Detect imbalance between add and remove */ - WARN_ON(c < 0); - /* - * Need to broadcast a global flush of the full mm before - * decrementing active_cpus count, as the next TLBI may be - * local and the nMMU and/or PSL need to be cleaned up. - * Should be rare enough so that it's acceptable. + * When removing the last copro, we need to broadcast a global + * flush of the full mm, as the next TLBI may be local and the + * nMMU and/or PSL need to be cleaned up. + * + * Both the 'copros' and 'active_cpus' counts are looked at in + * flush_all_mm() to determine the scope (local/global) of the + * TLBIs, so we need to flush first before decrementing + * 'copros'. If this API is used by several callers for the + * same context, it can lead to over-flushing. It's hopefully + * not common enough to be a problem. * * Skip on hash, as we don't know how to do the proper flush * for the time being. Invalidations will remain global if - * used on hash. + * used on hash. Note that we can't drop 'copros' either, as + * it could make some invalidations local with no flush + * in-between. */ - if (c == 0 && radix_enabled()) { + if (radix_enabled()) { flush_all_mm(mm); - dec_mm_active_cpus(mm); + + c = atomic_dec_if_positive(&mm->context.copros); + /* Detect imbalance between add and remove */ + WARN_ON(c < 0); + + if (c == 0) + dec_mm_active_cpus(mm); } } #else diff --git a/arch/powerpc/include/asm/nmi.h b/arch/powerpc/include/asm/nmi.h index 0f571e0ebca1..bd9ba8defd72 100644 --- a/arch/powerpc/include/asm/nmi.h +++ b/arch/powerpc/include/asm/nmi.h @@ -8,7 +8,7 @@ extern void arch_touch_nmi_watchdog(void); static inline void arch_touch_nmi_watchdog(void) {} #endif -#if defined(CONFIG_PPC_BOOK3S_64) && defined(CONFIG_STACKTRACE) +#if defined(CONFIG_NMI_IPI) && defined(CONFIG_STACKTRACE) extern void arch_trigger_cpumask_backtrace(const cpumask_t *mask, bool exclude_self); #define arch_trigger_cpumask_backtrace arch_trigger_cpumask_backtrace diff --git a/arch/powerpc/include/asm/nohash/32/pgalloc.h b/arch/powerpc/include/asm/nohash/32/pgalloc.h index 1707781d2f20..8825953c225b 100644 --- a/arch/powerpc/include/asm/nohash/32/pgalloc.h +++ b/arch/powerpc/include/asm/nohash/32/pgalloc.h @@ -109,6 +109,7 @@ static inline void pgtable_free(void *table, unsigned index_size) } #define check_pgt_cache() do { } while (0) +#define get_hugepd_cache_index(x) (x) #ifdef CONFIG_SMP static inline void pgtable_free_tlb(struct mmu_gather *tlb, @@ -139,7 +140,6 @@ static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t table, unsigned long address) { tlb_flush_pgtable(tlb, address); - pgtable_page_dtor(table); pgtable_free_tlb(tlb, page_address(table), 0); } #endif /* _ASM_POWERPC_PGALLOC_32_H */ diff --git a/arch/powerpc/include/asm/nohash/64/pgalloc.h b/arch/powerpc/include/asm/nohash/64/pgalloc.h index 0e693f322cb2..e2d62d033708 100644 --- a/arch/powerpc/include/asm/nohash/64/pgalloc.h +++ 
b/arch/powerpc/include/asm/nohash/64/pgalloc.h @@ -141,6 +141,7 @@ static inline void pgtable_free(void *table, int shift) } } +#define get_hugepd_cache_index(x) (x) #ifdef CONFIG_SMP static inline void pgtable_free_tlb(struct mmu_gather *tlb, void *table, int shift) { diff --git a/arch/powerpc/include/asm/reg.h b/arch/powerpc/include/asm/reg.h index 75c5b2cd9d66..562568414cf4 100644 --- a/arch/powerpc/include/asm/reg.h +++ b/arch/powerpc/include/asm/reg.h @@ -385,6 +385,7 @@ #define SPRN_PSSCR 0x357 /* Processor Stop Status and Control Register (ISA 3.0) */ #define SPRN_PSSCR_PR 0x337 /* PSSCR ISA 3.0, privileged mode access */ #define SPRN_PMCR 0x374 /* Power Management Control Register */ +#define SPRN_RWMR 0x375 /* Region-Weighting Mode Register */ /* HFSCR and FSCR bit numbers are the same */ #define FSCR_SCV_LG 12 /* Enable System Call Vectored */ diff --git a/arch/powerpc/include/asm/systbl.h b/arch/powerpc/include/asm/systbl.h index cfcf6a874cfa..01b5171ea189 100644 --- a/arch/powerpc/include/asm/systbl.h +++ b/arch/powerpc/include/asm/systbl.h @@ -393,3 +393,4 @@ SYSCALL(pkey_alloc) SYSCALL(pkey_free) SYSCALL(pkey_mprotect) SYSCALL(rseq) +COMPAT_SYS(io_pgetevents) diff --git a/arch/powerpc/include/asm/unistd.h b/arch/powerpc/include/asm/unistd.h index 1e9708632dce..c19379f0a32e 100644 --- a/arch/powerpc/include/asm/unistd.h +++ b/arch/powerpc/include/asm/unistd.h @@ -12,7 +12,7 @@ #include -#define NR_syscalls 388 +#define NR_syscalls 389 #define __NR__exit __NR_exit diff --git a/arch/powerpc/include/uapi/asm/kvm.h b/arch/powerpc/include/uapi/asm/kvm.h index 833ed9a16adf..1b32b56a03d3 100644 --- a/arch/powerpc/include/uapi/asm/kvm.h +++ b/arch/powerpc/include/uapi/asm/kvm.h @@ -633,6 +633,7 @@ struct kvm_ppc_cpu_char { #define KVM_REG_PPC_PSSCR (KVM_REG_PPC | KVM_REG_SIZE_U64 | 0xbd) #define KVM_REG_PPC_DEC_EXPIRY (KVM_REG_PPC | KVM_REG_SIZE_U64 | 0xbe) +#define KVM_REG_PPC_ONLINE (KVM_REG_PPC | KVM_REG_SIZE_U32 | 0xbf) /* Transactional Memory checkpointed state: * This is all GPRs, all VSX regs and a subset of SPRs diff --git a/arch/powerpc/include/uapi/asm/unistd.h b/arch/powerpc/include/uapi/asm/unistd.h index ac5ba55066dd..985534d0b448 100644 --- a/arch/powerpc/include/uapi/asm/unistd.h +++ b/arch/powerpc/include/uapi/asm/unistd.h @@ -399,5 +399,6 @@ #define __NR_pkey_free 385 #define __NR_pkey_mprotect 386 #define __NR_rseq 387 +#define __NR_io_pgetevents 388 #endif /* _UAPI_ASM_POWERPC_UNISTD_H_ */ diff --git a/arch/powerpc/kernel/asm-offsets.c b/arch/powerpc/kernel/asm-offsets.c index 9fc9e0977009..0a0544335950 100644 --- a/arch/powerpc/kernel/asm-offsets.c +++ b/arch/powerpc/kernel/asm-offsets.c @@ -426,20 +426,20 @@ int main(void) OFFSET(VCPU_HOST_STACK, kvm_vcpu, arch.host_stack); OFFSET(VCPU_HOST_PID, kvm_vcpu, arch.host_pid); OFFSET(VCPU_GUEST_PID, kvm_vcpu, arch.pid); - OFFSET(VCPU_GPRS, kvm_vcpu, arch.gpr); + OFFSET(VCPU_GPRS, kvm_vcpu, arch.regs.gpr); OFFSET(VCPU_VRSAVE, kvm_vcpu, arch.vrsave); OFFSET(VCPU_FPRS, kvm_vcpu, arch.fp.fpr); #ifdef CONFIG_ALTIVEC OFFSET(VCPU_VRS, kvm_vcpu, arch.vr.vr); #endif - OFFSET(VCPU_XER, kvm_vcpu, arch.xer); - OFFSET(VCPU_CTR, kvm_vcpu, arch.ctr); - OFFSET(VCPU_LR, kvm_vcpu, arch.lr); + OFFSET(VCPU_XER, kvm_vcpu, arch.regs.xer); + OFFSET(VCPU_CTR, kvm_vcpu, arch.regs.ctr); + OFFSET(VCPU_LR, kvm_vcpu, arch.regs.link); #ifdef CONFIG_PPC_BOOK3S OFFSET(VCPU_TAR, kvm_vcpu, arch.tar); #endif OFFSET(VCPU_CR, kvm_vcpu, arch.cr); - OFFSET(VCPU_PC, kvm_vcpu, arch.pc); + OFFSET(VCPU_PC, kvm_vcpu, arch.regs.nip); #ifdef 
CONFIG_KVM_BOOK3S_HV_POSSIBLE OFFSET(VCPU_MSR, kvm_vcpu, arch.shregs.msr); OFFSET(VCPU_SRR0, kvm_vcpu, arch.shregs.srr0); @@ -696,10 +696,10 @@ int main(void) #else /* CONFIG_PPC_BOOK3S */ OFFSET(VCPU_CR, kvm_vcpu, arch.cr); - OFFSET(VCPU_XER, kvm_vcpu, arch.xer); - OFFSET(VCPU_LR, kvm_vcpu, arch.lr); - OFFSET(VCPU_CTR, kvm_vcpu, arch.ctr); - OFFSET(VCPU_PC, kvm_vcpu, arch.pc); + OFFSET(VCPU_XER, kvm_vcpu, arch.regs.xer); + OFFSET(VCPU_LR, kvm_vcpu, arch.regs.link); + OFFSET(VCPU_CTR, kvm_vcpu, arch.regs.ctr); + OFFSET(VCPU_PC, kvm_vcpu, arch.regs.nip); OFFSET(VCPU_SPRG9, kvm_vcpu, arch.sprg9); OFFSET(VCPU_LAST_INST, kvm_vcpu, arch.last_inst); OFFSET(VCPU_FAULT_DEAR, kvm_vcpu, arch.fault_dear); diff --git a/arch/powerpc/kernel/dt_cpu_ftrs.c b/arch/powerpc/kernel/dt_cpu_ftrs.c index 4be1c0de9406..96dd3d871986 100644 --- a/arch/powerpc/kernel/dt_cpu_ftrs.c +++ b/arch/powerpc/kernel/dt_cpu_ftrs.c @@ -711,7 +711,8 @@ static __init void cpufeatures_cpu_quirks(void) cur_cpu_spec->cpu_features |= CPU_FTR_P9_TM_HV_ASSIST; cur_cpu_spec->cpu_features |= CPU_FTR_P9_TM_XER_SO_BUG; cur_cpu_spec->cpu_features |= CPU_FTR_POWER9_DD2_1; - } else /* DD2.1 and up have DD2_1 */ + } else if ((version & 0xffff0000) == 0x004e0000) + /* DD2.1 and up have DD2_1 */ cur_cpu_spec->cpu_features |= CPU_FTR_POWER9_DD2_1; if ((version & 0xffff0000) == 0x004e0000) { diff --git a/arch/powerpc/kernel/hw_breakpoint.c b/arch/powerpc/kernel/hw_breakpoint.c index 80547dad37da..fec8a6773119 100644 --- a/arch/powerpc/kernel/hw_breakpoint.c +++ b/arch/powerpc/kernel/hw_breakpoint.c @@ -119,11 +119,9 @@ void arch_unregister_hw_breakpoint(struct perf_event *bp) /* * Check for virtual address in kernel space. */ -int arch_check_bp_in_kernelspace(struct perf_event *bp) +int arch_check_bp_in_kernelspace(struct arch_hw_breakpoint *hw) { - struct arch_hw_breakpoint *info = counter_arch_bp(bp); - - return is_kernel_addr(info->address); + return is_kernel_addr(hw->address); } int arch_bp_generic_fields(int type, int *gen_bp_type) @@ -141,30 +139,31 @@ int arch_bp_generic_fields(int type, int *gen_bp_type) /* * Validate the arch-specific HW Breakpoint register settings */ -int arch_validate_hwbkpt_settings(struct perf_event *bp) +int hw_breakpoint_arch_parse(struct perf_event *bp, + const struct perf_event_attr *attr, + struct arch_hw_breakpoint *hw) { int ret = -EINVAL, length_max; - struct arch_hw_breakpoint *info = counter_arch_bp(bp); if (!bp) return ret; - info->type = HW_BRK_TYPE_TRANSLATE; - if (bp->attr.bp_type & HW_BREAKPOINT_R) - info->type |= HW_BRK_TYPE_READ; - if (bp->attr.bp_type & HW_BREAKPOINT_W) - info->type |= HW_BRK_TYPE_WRITE; - if (info->type == HW_BRK_TYPE_TRANSLATE) + hw->type = HW_BRK_TYPE_TRANSLATE; + if (attr->bp_type & HW_BREAKPOINT_R) + hw->type |= HW_BRK_TYPE_READ; + if (attr->bp_type & HW_BREAKPOINT_W) + hw->type |= HW_BRK_TYPE_WRITE; + if (hw->type == HW_BRK_TYPE_TRANSLATE) + /* must set at least read or write */ return ret; - if (!(bp->attr.exclude_user)) - info->type |= HW_BRK_TYPE_USER; - if (!(bp->attr.exclude_kernel)) - info->type |= HW_BRK_TYPE_KERNEL; - if (!(bp->attr.exclude_hv)) - info->type |= HW_BRK_TYPE_HYP; - info->address = bp->attr.bp_addr; - info->len = bp->attr.bp_len; + if (!attr->exclude_user) + hw->type |= HW_BRK_TYPE_USER; + if (!attr->exclude_kernel) + hw->type |= HW_BRK_TYPE_KERNEL; + if (!attr->exclude_hv) + hw->type |= HW_BRK_TYPE_HYP; + hw->address = attr->bp_addr; + hw->len = attr->bp_len; /* * Since breakpoint length can be a maximum of HW_BREAKPOINT_LEN(8) @@ -178,12 +177,12 @@ int
arch_validate_hwbkpt_settings(struct perf_event *bp) if (cpu_has_feature(CPU_FTR_DAWR)) { length_max = 512 ; /* 64 doublewords */ /* DAWR region can't cross 512 boundary */ - if ((bp->attr.bp_addr >> 9) != - ((bp->attr.bp_addr + bp->attr.bp_len - 1) >> 9)) + if ((attr->bp_addr >> 9) != + ((attr->bp_addr + attr->bp_len - 1) >> 9)) return -EINVAL; } - if (info->len > - (length_max - (info->address & HW_BREAKPOINT_ALIGN))) + if (hw->len > + (length_max - (hw->address & HW_BREAKPOINT_ALIGN))) return -EINVAL; return 0; } diff --git a/arch/powerpc/kernel/idle_book3s.S b/arch/powerpc/kernel/idle_book3s.S index e734f6e45abc..689306118b48 100644 --- a/arch/powerpc/kernel/idle_book3s.S +++ b/arch/powerpc/kernel/idle_book3s.S @@ -144,7 +144,9 @@ power9_restore_additional_sprs: mtspr SPRN_MMCR1, r4 ld r3, STOP_MMCR2(r13) + ld r4, PACA_SPRG_VDSO(r13) mtspr SPRN_MMCR2, r3 + mtspr SPRN_SPRG3, r4 blr /* diff --git a/arch/powerpc/kernel/kprobes-ftrace.c b/arch/powerpc/kernel/kprobes-ftrace.c index 7a1f99f1b47f..e4a49c051325 100644 --- a/arch/powerpc/kernel/kprobes-ftrace.c +++ b/arch/powerpc/kernel/kprobes-ftrace.c @@ -25,50 +25,6 @@ #include #include -/* - * This is called from ftrace code after invoking registered handlers to - * disambiguate regs->nip changes done by jprobes and livepatch. We check if - * there is an active jprobe at the provided address (mcount location). - */ -int __is_active_jprobe(unsigned long addr) -{ - if (!preemptible()) { - struct kprobe *p = raw_cpu_read(current_kprobe); - return (p && (unsigned long)p->addr == addr) ? 1 : 0; - } - - return 0; -} - -static nokprobe_inline -int __skip_singlestep(struct kprobe *p, struct pt_regs *regs, - struct kprobe_ctlblk *kcb, unsigned long orig_nip) -{ - /* - * Emulate singlestep (and also recover regs->nip) - * as if there is a nop - */ - regs->nip = (unsigned long)p->addr + MCOUNT_INSN_SIZE; - if (unlikely(p->post_handler)) { - kcb->kprobe_status = KPROBE_HIT_SSDONE; - p->post_handler(p, regs, 0); - } - __this_cpu_write(current_kprobe, NULL); - if (orig_nip) - regs->nip = orig_nip; - return 1; -} - -int skip_singlestep(struct kprobe *p, struct pt_regs *regs, - struct kprobe_ctlblk *kcb) -{ - if (kprobe_ftrace(p)) - return __skip_singlestep(p, regs, kcb, 0); - else - return 0; -} -NOKPROBE_SYMBOL(skip_singlestep); - /* Ftrace callback handler for kprobes */ void kprobe_ftrace_handler(unsigned long nip, unsigned long parent_nip, struct ftrace_ops *ops, struct pt_regs *regs) @@ -76,18 +32,14 @@ void kprobe_ftrace_handler(unsigned long nip, unsigned long parent_nip, struct kprobe *p; struct kprobe_ctlblk *kcb; - preempt_disable(); - p = get_kprobe((kprobe_opcode_t *)nip); if (unlikely(!p) || kprobe_disabled(p)) - goto end; + return; kcb = get_kprobe_ctlblk(); if (kprobe_running()) { kprobes_inc_nmissed_count(p); } else { - unsigned long orig_nip = regs->nip; - /* * On powerpc, NIP is *before* this instruction for the * pre handler @@ -96,19 +48,23 @@ void kprobe_ftrace_handler(unsigned long nip, unsigned long parent_nip, __this_cpu_write(current_kprobe, p); kcb->kprobe_status = KPROBE_HIT_ACTIVE; - if (!p->pre_handler || !p->pre_handler(p, regs)) - __skip_singlestep(p, regs, kcb, orig_nip); - else { + if (!p->pre_handler || !p->pre_handler(p, regs)) { /* - * If pre_handler returns !0, it sets regs->nip and - * resets current kprobe. In this case, we should not - * re-enable preemption. 
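/*
 * A standalone form of the DAWR range check in the parse hook above: a
 * watched range must not cross a 512-byte boundary, so the first and
 * last byte have to land in the same addr >> 9 block. Sketch only; the
 * kernel additionally caps the length against the region start offset.
 */
#include <stdbool.h>
#include <stdint.h>

static bool dawr_range_ok(uint64_t addr, uint64_t len)
{
        if (len == 0)
                return false;
        return (addr >> 9) == ((addr + len - 1) >> 9);
}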
+ * Emulate singlestep (and also recover regs->nip) + * as if there is a nop */ - return; + regs->nip += MCOUNT_INSN_SIZE; + if (unlikely(p->post_handler)) { + kcb->kprobe_status = KPROBE_HIT_SSDONE; + p->post_handler(p, regs, 0); + } } + /* + * If pre_handler returns !0, it changes regs->nip. We have to + * skip emulating post_handler. + */ + __this_cpu_write(current_kprobe, NULL); } -end: - preempt_enable_no_resched(); } NOKPROBE_SYMBOL(kprobe_ftrace_handler); diff --git a/arch/powerpc/kernel/kprobes.c b/arch/powerpc/kernel/kprobes.c index e4c5bf33970b..5c60bb0f927f 100644 --- a/arch/powerpc/kernel/kprobes.c +++ b/arch/powerpc/kernel/kprobes.c @@ -317,25 +317,17 @@ int kprobe_handler(struct pt_regs *regs) } prepare_singlestep(p, regs); return 1; - } else { - if (*addr != BREAKPOINT_INSTRUCTION) { - /* If trap variant, then it belongs not to us */ - kprobe_opcode_t cur_insn = *addr; - if (is_trap(cur_insn)) - goto no_kprobe; - /* The breakpoint instruction was removed by - * another cpu right after we hit, no further - * handling of this interrupt is appropriate - */ - ret = 1; + } else if (*addr != BREAKPOINT_INSTRUCTION) { + /* If trap variant, then it belongs not to us */ + kprobe_opcode_t cur_insn = *addr; + + if (is_trap(cur_insn)) goto no_kprobe; - } - p = __this_cpu_read(current_kprobe); - if (p->break_handler && p->break_handler(p, regs)) { - if (!skip_singlestep(p, regs, kcb)) - goto ss_probe; - ret = 1; - } + /* The breakpoint instruction was removed by + * another cpu right after we hit, no further + * handling of this interrupt is appropriate + */ + ret = 1; } goto no_kprobe; } @@ -350,7 +342,7 @@ int kprobe_handler(struct pt_regs *regs) */ kprobe_opcode_t cur_insn = *addr; if (is_trap(cur_insn)) - goto no_kprobe; + goto no_kprobe; /* * The breakpoint instruction was removed right * after we hit it. Another cpu has removed @@ -366,11 +358,13 @@ int kprobe_handler(struct pt_regs *regs) kcb->kprobe_status = KPROBE_HIT_ACTIVE; set_current_kprobe(p, regs, kcb); - if (p->pre_handler && p->pre_handler(p, regs)) - /* handler has already set things up, so skip ss setup */ + if (p->pre_handler && p->pre_handler(p, regs)) { + /* handler changed execution path, so skip ss setup */ + reset_current_kprobe(); + preempt_enable_no_resched(); return 1; + } -ss_probe: if (p->ainsn.boostable >= 0) { ret = try_to_emulate(p, regs); @@ -611,60 +605,6 @@ unsigned long arch_deref_entry_point(void *entry) } NOKPROBE_SYMBOL(arch_deref_entry_point); -int setjmp_pre_handler(struct kprobe *p, struct pt_regs *regs) -{ - struct jprobe *jp = container_of(p, struct jprobe, kp); - struct kprobe_ctlblk *kcb = get_kprobe_ctlblk(); - - memcpy(&kcb->jprobe_saved_regs, regs, sizeof(struct pt_regs)); - - /* setup return addr to the jprobe handler routine */ - regs->nip = arch_deref_entry_point(jp->entry); -#ifdef PPC64_ELF_ABI_v2 - regs->gpr[12] = (unsigned long)jp->entry; -#elif defined(PPC64_ELF_ABI_v1) - regs->gpr[2] = (unsigned long)(((func_descr_t *)jp->entry)->toc); -#endif - - /* - * jprobes use jprobe_return() which skips the normal return - * path of the function, and this messes up the accounting of the - * function graph tracer. - * - * Pause function graph tracing while performing the jprobe function. 
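/*
 * The ftrace-kprobe handler above "single-steps" the probed mcount
 * site by pretending a nop executed: nip is advanced by one
 * instruction and the post handler runs. Minimal model with an
 * invented regs struct; MCOUNT_INSN_SIZE is 4 on powerpc.
 */
#include <stdint.h>

#define MCOUNT_INSN_SIZE 4

struct regs_model { uint64_t nip; };

static void emulate_nop_step(struct regs_model *regs)
{
        regs->nip += MCOUNT_INSN_SIZE; /* skip the probed instruction */
}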
- */ - pause_graph_tracing(); - - return 1; -} -NOKPROBE_SYMBOL(setjmp_pre_handler); - -void __used jprobe_return(void) -{ - asm volatile("jprobe_return_trap:\n" - "trap\n" - ::: "memory"); -} -NOKPROBE_SYMBOL(jprobe_return); - -int longjmp_break_handler(struct kprobe *p, struct pt_regs *regs) -{ - struct kprobe_ctlblk *kcb = get_kprobe_ctlblk(); - - if (regs->nip != ppc_kallsyms_lookup_name("jprobe_return_trap")) { - pr_debug("longjmp_break_handler NIP (0x%lx) does not match jprobe_return_trap (0x%lx)\n", - regs->nip, ppc_kallsyms_lookup_name("jprobe_return_trap")); - return 0; - } - - memcpy(regs, &kcb->jprobe_saved_regs, sizeof(struct pt_regs)); - /* It's OK to start function graph tracing again */ - unpause_graph_tracing(); - preempt_enable_no_resched(); - return 1; -} -NOKPROBE_SYMBOL(longjmp_break_handler); - static struct kprobe trampoline_p = { .addr = (kprobe_opcode_t *) &kretprobe_trampoline, .pre_handler = trampoline_probe_handler diff --git a/arch/powerpc/kernel/pci-common.c b/arch/powerpc/kernel/pci-common.c index fe9733ffffaa..471aac313b89 100644 --- a/arch/powerpc/kernel/pci-common.c +++ b/arch/powerpc/kernel/pci-common.c @@ -42,6 +42,8 @@ #include #include +#include "../../../drivers/pci/pci.h" + /* hose_spinlock protects accesses to the the phb_bitmap. */ static DEFINE_SPINLOCK(hose_spinlock); LIST_HEAD(hose_list); @@ -1014,7 +1016,7 @@ void pcibios_setup_bus_devices(struct pci_bus *bus) /* Cardbus can call us to add new devices to a bus, so ignore * those who are already fully discovered */ - if (dev->is_added) + if (pci_dev_is_added(dev)) continue; pcibios_setup_device(dev); diff --git a/arch/powerpc/kernel/pci_32.c b/arch/powerpc/kernel/pci_32.c index 4f861055a852..d63b488d34d7 100644 --- a/arch/powerpc/kernel/pci_32.c +++ b/arch/powerpc/kernel/pci_32.c @@ -285,9 +285,6 @@ pci_bus_to_hose(int bus) * Note that the returned IO or memory base is a physical address */ -#pragma GCC diagnostic push -#pragma GCC diagnostic ignored "-Wpragmas" -#pragma GCC diagnostic ignored "-Wattribute-alias" SYSCALL_DEFINE3(pciconfig_iobase, long, which, unsigned long, bus, unsigned long, devfn) { @@ -313,4 +310,3 @@ SYSCALL_DEFINE3(pciconfig_iobase, long, which, return result; } -#pragma GCC diagnostic pop diff --git a/arch/powerpc/kernel/pci_64.c b/arch/powerpc/kernel/pci_64.c index 812171c09f42..dff28f903512 100644 --- a/arch/powerpc/kernel/pci_64.c +++ b/arch/powerpc/kernel/pci_64.c @@ -203,9 +203,6 @@ void pcibios_setup_phb_io_space(struct pci_controller *hose) #define IOBASE_ISA_IO 3 #define IOBASE_ISA_MEM 4 -#pragma GCC diagnostic push -#pragma GCC diagnostic ignored "-Wpragmas" -#pragma GCC diagnostic ignored "-Wattribute-alias" SYSCALL_DEFINE3(pciconfig_iobase, long, which, unsigned long, in_bus, unsigned long, in_devfn) { @@ -259,7 +256,6 @@ SYSCALL_DEFINE3(pciconfig_iobase, long, which, unsigned long, in_bus, return -EOPNOTSUPP; } -#pragma GCC diagnostic pop #ifdef CONFIG_NUMA int pcibus_to_node(struct pci_bus *bus) diff --git a/arch/powerpc/kernel/rtas.c b/arch/powerpc/kernel/rtas.c index 7fb9f83dcde8..8afd146bc9c7 100644 --- a/arch/powerpc/kernel/rtas.c +++ b/arch/powerpc/kernel/rtas.c @@ -1051,9 +1051,6 @@ struct pseries_errorlog *get_pseries_errorlog(struct rtas_error_log *log, } /* We assume to be passed big endian arguments */ -#pragma GCC diagnostic push -#pragma GCC diagnostic ignored "-Wpragmas" -#pragma GCC diagnostic ignored "-Wattribute-alias" SYSCALL_DEFINE1(rtas, struct rtas_args __user *, uargs) { struct rtas_args args; @@ -1140,7 +1137,6 @@ SYSCALL_DEFINE1(rtas, 
struct rtas_args __user *, uargs) return 0; } -#pragma GCC diagnostic pop /* * Call early during boot, before mem init, to retrieve the RTAS diff --git a/arch/powerpc/kernel/setup-common.c b/arch/powerpc/kernel/setup-common.c index 62b1a40d8957..40b44bb53a4e 100644 --- a/arch/powerpc/kernel/setup-common.c +++ b/arch/powerpc/kernel/setup-common.c @@ -700,12 +700,19 @@ EXPORT_SYMBOL(check_legacy_ioport); static int ppc_panic_event(struct notifier_block *this, unsigned long event, void *ptr) { + /* + * panic does a local_irq_disable, but we really + * want interrupts to be hard disabled. + */ + hard_irq_disable(); + /* * If firmware-assisted dump has been registered then trigger * firmware-assisted dump and let firmware handle everything else. */ crash_fadump(NULL, ptr); - ppc_md.panic(ptr); /* May not return */ + if (ppc_md.panic) + ppc_md.panic(ptr); /* May not return */ return NOTIFY_DONE; } @@ -716,7 +723,8 @@ static struct notifier_block ppc_panic_block = { void __init setup_panic(void) { - if (!ppc_md.panic) + /* PPC64 always does a hard irq disable in its panic handler */ + if (!IS_ENABLED(CONFIG_PPC64) && !ppc_md.panic) return; atomic_notifier_chain_register(&panic_notifier_list, &ppc_panic_block); } diff --git a/arch/powerpc/kernel/setup_64.c b/arch/powerpc/kernel/setup_64.c index 7a7ce8ad455e..225bc5f91049 100644 --- a/arch/powerpc/kernel/setup_64.c +++ b/arch/powerpc/kernel/setup_64.c @@ -387,6 +387,14 @@ void early_setup_secondary(void) #endif /* CONFIG_SMP */ +void panic_smp_self_stop(void) +{ + hard_irq_disable(); + spin_begin(); + while (1) + spin_cpu_relax(); +} + #if defined(CONFIG_SMP) || defined(CONFIG_KEXEC_CORE) static bool use_spinloop(void) { diff --git a/arch/powerpc/kernel/signal.c b/arch/powerpc/kernel/signal.c index 17fe4339ba59..b3e8db376ecd 100644 --- a/arch/powerpc/kernel/signal.c +++ b/arch/powerpc/kernel/signal.c @@ -134,7 +134,7 @@ static void do_signal(struct task_struct *tsk) /* Re-enable the breakpoints for the signal stack */ thread_change_pc(tsk, tsk->thread.regs); - rseq_signal_deliver(tsk->thread.regs); + rseq_signal_deliver(&ksig, tsk->thread.regs); if (is32) { if (ksig.ka.sa.sa_flags & SA_SIGINFO) @@ -170,7 +170,7 @@ void do_notify_resume(struct pt_regs *regs, unsigned long thread_info_flags) if (thread_info_flags & _TIF_NOTIFY_RESUME) { clear_thread_flag(TIF_NOTIFY_RESUME); tracehook_notify_resume(regs); - rseq_handle_notify_resume(regs); + rseq_handle_notify_resume(NULL, regs); } user_enter(); diff --git a/arch/powerpc/kernel/signal_32.c b/arch/powerpc/kernel/signal_32.c index 5eedbb282d42..e6474a45cef5 100644 --- a/arch/powerpc/kernel/signal_32.c +++ b/arch/powerpc/kernel/signal_32.c @@ -1038,9 +1038,6 @@ static int do_setcontext_tm(struct ucontext __user *ucp, } #endif -#pragma GCC diagnostic push -#pragma GCC diagnostic ignored "-Wpragmas" -#pragma GCC diagnostic ignored "-Wattribute-alias" #ifdef CONFIG_PPC64 COMPAT_SYSCALL_DEFINE3(swapcontext, struct ucontext __user *, old_ctx, struct ucontext __user *, new_ctx, int, ctx_size) @@ -1134,7 +1131,6 @@ SYSCALL_DEFINE3(swapcontext, struct ucontext __user *, old_ctx, set_thread_flag(TIF_RESTOREALL); return 0; } -#pragma GCC diagnostic pop #ifdef CONFIG_PPC64 COMPAT_SYSCALL_DEFINE0(rt_sigreturn) @@ -1231,9 +1227,6 @@ SYSCALL_DEFINE0(rt_sigreturn) return 0; } -#pragma GCC diagnostic push -#pragma GCC diagnostic ignored "-Wpragmas" -#pragma GCC diagnostic ignored "-Wattribute-alias" #ifdef CONFIG_PPC32 SYSCALL_DEFINE3(debug_setcontext, struct ucontext __user *, ctx, int, ndbg, struct sig_dbg_op __user 
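/*
 * Shape of the panic path fixed above: hard-disable interrupts first
 * (panic() itself only soft-disables on powerpc), then run
 * firmware-assisted dump, and only call the platform hook if one is
 * registered. Everything below is a stand-in for the real helpers.
 */
#include <stdio.h>

typedef void (*panic_hook_t)(void *);

static panic_hook_t platform_panic; /* may be NULL, like ppc_md.panic */

static void hard_irq_disable(void) { puts("hard IRQ disable"); }
static void crash_fadump(void *regs, void *ptr) { (void)regs; (void)ptr; }

static int ppc_panic_event(void *ptr)
{
        hard_irq_disable();
        crash_fadump(NULL, ptr);
        if (platform_panic)
                platform_panic(ptr); /* may not return */
        return 0;
}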
*, dbg) @@ -1337,7 +1330,6 @@ SYSCALL_DEFINE3(debug_setcontext, struct ucontext __user *, ctx, return 0; } #endif -#pragma GCC diagnostic pop /* * OK, we're invoking a handler diff --git a/arch/powerpc/kernel/signal_64.c b/arch/powerpc/kernel/signal_64.c index d42b60020389..83d51bf586c7 100644 --- a/arch/powerpc/kernel/signal_64.c +++ b/arch/powerpc/kernel/signal_64.c @@ -625,9 +625,6 @@ static long setup_trampoline(unsigned int syscall, unsigned int __user *tramp) /* * Handle {get,set,swap}_context operations */ -#pragma GCC diagnostic push -#pragma GCC diagnostic ignored "-Wpragmas" -#pragma GCC diagnostic ignored "-Wattribute-alias" SYSCALL_DEFINE3(swapcontext, struct ucontext __user *, old_ctx, struct ucontext __user *, new_ctx, long, ctx_size) { @@ -693,7 +690,6 @@ SYSCALL_DEFINE3(swapcontext, struct ucontext __user *, old_ctx, set_thread_flag(TIF_RESTOREALL); return 0; } -#pragma GCC diagnostic pop /* diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c index 5eadfffabe35..4794d6b4f4d2 100644 --- a/arch/powerpc/kernel/smp.c +++ b/arch/powerpc/kernel/smp.c @@ -600,9 +600,6 @@ static void nmi_stop_this_cpu(struct pt_regs *regs) nmi_ipi_busy_count--; nmi_ipi_unlock(); - /* Remove this CPU */ - set_cpu_online(smp_processor_id(), false); - spin_begin(); while (1) spin_cpu_relax(); @@ -617,9 +614,6 @@ void smp_send_stop(void) static void stop_this_cpu(void *dummy) { - /* Remove this CPU */ - set_cpu_online(smp_processor_id(), false); - hard_irq_disable(); spin_begin(); while (1) diff --git a/arch/powerpc/kernel/stacktrace.c b/arch/powerpc/kernel/stacktrace.c index 07e97f289c52..e2c50b55138f 100644 --- a/arch/powerpc/kernel/stacktrace.c +++ b/arch/powerpc/kernel/stacktrace.c @@ -196,7 +196,7 @@ save_stack_trace_tsk_reliable(struct task_struct *tsk, EXPORT_SYMBOL_GPL(save_stack_trace_tsk_reliable); #endif /* CONFIG_HAVE_RELIABLE_STACKTRACE */ -#ifdef CONFIG_PPC_BOOK3S_64 +#if defined(CONFIG_PPC_BOOK3S_64) && defined(CONFIG_NMI_IPI) static void handle_backtrace_ipi(struct pt_regs *regs) { nmi_cpu_backtrace(regs); @@ -242,4 +242,4 @@ void arch_trigger_cpumask_backtrace(const cpumask_t *mask, bool exclude_self) { nmi_trigger_cpumask_backtrace(mask, exclude_self, raise_backtrace_ipi); } -#endif /* CONFIG_PPC64 */ +#endif /* defined(CONFIG_PPC_BOOK3S_64) && defined(CONFIG_NMI_IPI) */ diff --git a/arch/powerpc/kernel/syscalls.c b/arch/powerpc/kernel/syscalls.c index 083fa06962fd..466216506eb2 100644 --- a/arch/powerpc/kernel/syscalls.c +++ b/arch/powerpc/kernel/syscalls.c @@ -62,9 +62,6 @@ out: return ret; } -#pragma GCC diagnostic push -#pragma GCC diagnostic ignored "-Wpragmas" -#pragma GCC diagnostic ignored "-Wattribute-alias" SYSCALL_DEFINE6(mmap2, unsigned long, addr, size_t, len, unsigned long, prot, unsigned long, flags, unsigned long, fd, unsigned long, pgoff) @@ -78,7 +75,6 @@ SYSCALL_DEFINE6(mmap, unsigned long, addr, size_t, len, { return do_mmap2(addr, len, prot, flags, fd, offset, PAGE_SHIFT); } -#pragma GCC diagnostic pop #ifdef CONFIG_PPC32 /* diff --git a/arch/powerpc/kernel/trace/ftrace_64_mprofile.S b/arch/powerpc/kernel/trace/ftrace_64_mprofile.S index 9a5b5a513604..32476a6e4e9c 100644 --- a/arch/powerpc/kernel/trace/ftrace_64_mprofile.S +++ b/arch/powerpc/kernel/trace/ftrace_64_mprofile.S @@ -104,39 +104,13 @@ ftrace_regs_call: bl ftrace_stub nop - /* Load the possibly modified NIP */ - ld r15, _NIP(r1) - + /* Load ctr with the possibly modified NIP */ + ld r3, _NIP(r1) + mtctr r3 #ifdef CONFIG_LIVEPATCH - cmpd r14, r15 /* has NIP been altered? 
*/ + cmpd r14, r3 /* has NIP been altered? */ #endif -#if defined(CONFIG_LIVEPATCH) && defined(CONFIG_KPROBES_ON_FTRACE) - /* NIP has not been altered, skip over further checks */ - beq 1f - - /* Check if there is an active jprobe on us */ - subi r3, r14, 4 - bl __is_active_jprobe - nop - - /* - * If r3 == 1, then this is a kprobe/jprobe. - * else, this is livepatched function. - * - * The conditional branch for livepatch_handler below will use the - * result of this comparison. For kprobe/jprobe, we just need to branch to - * the new NIP, not call livepatch_handler. The branch below is bne, so we - * want CR0[EQ] to be true if this is a kprobe/jprobe. Which means we want - * CR0[EQ] = (r3 == 1). - */ - cmpdi r3, 1 -1: -#endif - - /* Load CTR with the possibly modified NIP */ - mtctr r15 - /* Restore gprs */ REST_GPR(0,r1) REST_10GPRS(2,r1) @@ -154,10 +128,7 @@ ftrace_regs_call: addi r1, r1, SWITCH_FRAME_SIZE #ifdef CONFIG_LIVEPATCH - /* - * Based on the cmpd or cmpdi above, if the NIP was altered and we're - * not on a kprobe/jprobe, then handle livepatch. - */ + /* Based on the cmpd above, if the NIP was altered handle livepatch */ bne- livepatch_handler #endif diff --git a/arch/powerpc/kvm/Makefile b/arch/powerpc/kvm/Makefile index 4b19da8c87ae..f872c04bb5b1 100644 --- a/arch/powerpc/kvm/Makefile +++ b/arch/powerpc/kvm/Makefile @@ -63,6 +63,9 @@ kvm-pr-y := \ book3s_64_mmu.o \ book3s_32_mmu.o +kvm-book3s_64-builtin-objs-$(CONFIG_KVM_BOOK3S_64_HANDLER) += \ + tm.o + ifdef CONFIG_KVM_BOOK3S_PR_POSSIBLE kvm-book3s_64-builtin-objs-$(CONFIG_KVM_BOOK3S_64_HANDLER) += \ book3s_rmhandlers.o diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c index 97d4a112648f..edaf4720d156 100644 --- a/arch/powerpc/kvm/book3s.c +++ b/arch/powerpc/kvm/book3s.c @@ -134,7 +134,7 @@ void kvmppc_inject_interrupt(struct kvm_vcpu *vcpu, int vec, u64 flags) { kvmppc_unfixup_split_real(vcpu); kvmppc_set_srr0(vcpu, kvmppc_get_pc(vcpu)); - kvmppc_set_srr1(vcpu, kvmppc_get_msr(vcpu) | flags); + kvmppc_set_srr1(vcpu, (kvmppc_get_msr(vcpu) & ~0x783f0000ul) | flags); kvmppc_set_pc(vcpu, kvmppc_interrupt_offset(vcpu) + vec); vcpu->arch.mmu.reset_msr(vcpu); } @@ -256,18 +256,15 @@ void kvmppc_core_queue_data_storage(struct kvm_vcpu *vcpu, ulong dar, { kvmppc_set_dar(vcpu, dar); kvmppc_set_dsisr(vcpu, flags); - kvmppc_book3s_queue_irqprio(vcpu, BOOK3S_INTERRUPT_DATA_STORAGE); + kvmppc_inject_interrupt(vcpu, BOOK3S_INTERRUPT_DATA_STORAGE, 0); } -EXPORT_SYMBOL_GPL(kvmppc_core_queue_data_storage); /* used by kvm_hv */ +EXPORT_SYMBOL_GPL(kvmppc_core_queue_data_storage); void kvmppc_core_queue_inst_storage(struct kvm_vcpu *vcpu, ulong flags) { - u64 msr = kvmppc_get_msr(vcpu); - msr &= ~(SRR1_ISI_NOPT | SRR1_ISI_N_OR_G | SRR1_ISI_PROT); - msr |= flags & (SRR1_ISI_NOPT | SRR1_ISI_N_OR_G | SRR1_ISI_PROT); - kvmppc_set_msr_fast(vcpu, msr); - kvmppc_book3s_queue_irqprio(vcpu, BOOK3S_INTERRUPT_INST_STORAGE); + kvmppc_inject_interrupt(vcpu, BOOK3S_INTERRUPT_INST_STORAGE, flags); } +EXPORT_SYMBOL_GPL(kvmppc_core_queue_inst_storage); static int kvmppc_book3s_irqprio_deliver(struct kvm_vcpu *vcpu, unsigned int priority) @@ -450,8 +447,8 @@ int kvmppc_xlate(struct kvm_vcpu *vcpu, ulong eaddr, enum xlate_instdata xlid, return r; } -int kvmppc_load_last_inst(struct kvm_vcpu *vcpu, enum instruction_type type, - u32 *inst) +int kvmppc_load_last_inst(struct kvm_vcpu *vcpu, + enum instruction_fetch_type type, u32 *inst) { ulong pc = kvmppc_get_pc(vcpu); int r; @@ -509,8 +506,6 @@ int kvm_arch_vcpu_ioctl_get_regs(struct kvm_vcpu 
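/*
 * kvmppc_inject_interrupt() above now clears the interrupt-specific
 * SRR1 bits (mask 0x783f0000) before or-ing in the new flags, rather
 * than inheriting stale ones from the guest MSR. Standalone form:
 */
#include <stdint.h>

static uint64_t make_srr1(uint64_t guest_msr, uint64_t flags)
{
        return (guest_msr & ~0x783f0000ULL) | flags;
}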
*vcpu, struct kvm_regs *regs) { int i; - vcpu_load(vcpu); - regs->pc = kvmppc_get_pc(vcpu); regs->cr = kvmppc_get_cr(vcpu); regs->ctr = kvmppc_get_ctr(vcpu); @@ -532,7 +527,6 @@ int kvm_arch_vcpu_ioctl_get_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs) for (i = 0; i < ARRAY_SIZE(regs->gpr); i++) regs->gpr[i] = kvmppc_get_gpr(vcpu, i); - vcpu_put(vcpu); return 0; } @@ -540,8 +534,6 @@ int kvm_arch_vcpu_ioctl_set_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs) { int i; - vcpu_load(vcpu); - kvmppc_set_pc(vcpu, regs->pc); kvmppc_set_cr(vcpu, regs->cr); kvmppc_set_ctr(vcpu, regs->ctr); @@ -562,7 +554,6 @@ int kvm_arch_vcpu_ioctl_set_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs) for (i = 0; i < ARRAY_SIZE(regs->gpr); i++) kvmppc_set_gpr(vcpu, i, regs->gpr[i]); - vcpu_put(vcpu); return 0; } diff --git a/arch/powerpc/kvm/book3s.h b/arch/powerpc/kvm/book3s.h index 4ad5e287b8bc..14ef03501d21 100644 --- a/arch/powerpc/kvm/book3s.h +++ b/arch/powerpc/kvm/book3s.h @@ -31,4 +31,10 @@ extern int kvmppc_core_emulate_mfspr_pr(struct kvm_vcpu *vcpu, extern int kvmppc_book3s_init_pr(void); extern void kvmppc_book3s_exit_pr(void); +#ifdef CONFIG_PPC_TRANSACTIONAL_MEM +extern void kvmppc_emulate_tabort(struct kvm_vcpu *vcpu, int ra_val); +#else +static inline void kvmppc_emulate_tabort(struct kvm_vcpu *vcpu, int ra_val) {} +#endif + #endif diff --git a/arch/powerpc/kvm/book3s_32_mmu.c b/arch/powerpc/kvm/book3s_32_mmu.c index 1992676c7a94..45c8ea4a0487 100644 --- a/arch/powerpc/kvm/book3s_32_mmu.c +++ b/arch/powerpc/kvm/book3s_32_mmu.c @@ -52,7 +52,7 @@ static inline bool check_debug_ip(struct kvm_vcpu *vcpu) { #ifdef DEBUG_MMU_PTE_IP - return vcpu->arch.pc == DEBUG_MMU_PTE_IP; + return vcpu->arch.regs.nip == DEBUG_MMU_PTE_IP; #else return true; #endif diff --git a/arch/powerpc/kvm/book3s_64_mmu.c b/arch/powerpc/kvm/book3s_64_mmu.c index a93d719edc90..cf9d686e8162 100644 --- a/arch/powerpc/kvm/book3s_64_mmu.c +++ b/arch/powerpc/kvm/book3s_64_mmu.c @@ -38,7 +38,16 @@ static void kvmppc_mmu_book3s_64_reset_msr(struct kvm_vcpu *vcpu) { - kvmppc_set_msr(vcpu, vcpu->arch.intr_msr); + unsigned long msr = vcpu->arch.intr_msr; + unsigned long cur_msr = kvmppc_get_msr(vcpu); + + /* If transactional, change to suspend mode on IRQ delivery */ + if (MSR_TM_TRANSACTIONAL(cur_msr)) + msr |= MSR_TS_S; + else + msr |= cur_msr & MSR_TS_MASK; + + kvmppc_set_msr(vcpu, msr); } static struct kvmppc_slb *kvmppc_mmu_book3s_64_find_slbe( diff --git a/arch/powerpc/kvm/book3s_64_mmu_hv.c b/arch/powerpc/kvm/book3s_64_mmu_hv.c index 1b3fcafc685e..7f3a8cf5d66f 100644 --- a/arch/powerpc/kvm/book3s_64_mmu_hv.c +++ b/arch/powerpc/kvm/book3s_64_mmu_hv.c @@ -272,6 +272,9 @@ int kvmppc_mmu_hv_init(void) if (!cpu_has_feature(CPU_FTR_HVMODE)) return -EINVAL; + if (!mmu_has_feature(MMU_FTR_LOCKLESS_TLBIE)) + return -EINVAL; + /* POWER7 has 10-bit LPIDs (12-bit in POWER8) */ host_lpid = mfspr(SPRN_LPID); rsvd_lpid = LPID_RSVD; diff --git a/arch/powerpc/kvm/book3s_64_mmu_radix.c b/arch/powerpc/kvm/book3s_64_mmu_radix.c index 481da8f93fa4..176f911ee983 100644 --- a/arch/powerpc/kvm/book3s_64_mmu_radix.c +++ b/arch/powerpc/kvm/book3s_64_mmu_radix.c @@ -139,44 +139,24 @@ int kvmppc_mmu_radix_xlate(struct kvm_vcpu *vcpu, gva_t eaddr, return 0; } -#ifdef CONFIG_PPC_64K_PAGES -#define MMU_BASE_PSIZE MMU_PAGE_64K -#else -#define MMU_BASE_PSIZE MMU_PAGE_4K -#endif - static void kvmppc_radix_tlbie_page(struct kvm *kvm, unsigned long addr, unsigned int pshift) { - int psize = MMU_BASE_PSIZE; - - if (pshift >= PUD_SHIFT) - psize = MMU_PAGE_1G; - 
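/*
 * Sketch of the reset_msr() change above: when an interrupt is
 * delivered to a guest in transactional state, the new MSR carries
 * TS = suspended, while an existing suspended/none state is preserved.
 * The TS encodings (01 suspended, 10 transactional) follow the ISA,
 * but the bit position here is illustrative.
 */
#include <stdint.h>

#define MSR_TS_SHIFT 33
#define MSR_TS_MASK  (3ULL << MSR_TS_SHIFT)
#define MSR_TS_S     (1ULL << MSR_TS_SHIFT) /* suspended */
#define MSR_TS_T     (2ULL << MSR_TS_SHIFT) /* transactional */

static uint64_t reset_msr(uint64_t intr_msr, uint64_t cur_msr)
{
        uint64_t msr = intr_msr;

        if ((cur_msr & MSR_TS_MASK) == MSR_TS_T)
                msr |= MSR_TS_S;              /* transactional -> suspended */
        else
                msr |= cur_msr & MSR_TS_MASK; /* keep suspended/none */
        return msr;
}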
else if (pshift >= PMD_SHIFT) - psize = MMU_PAGE_2M; - addr &= ~0xfffUL; - addr |= mmu_psize_defs[psize].ap << 5; - asm volatile("ptesync": : :"memory"); - asm volatile(PPC_TLBIE_5(%0, %1, 0, 0, 1) - : : "r" (addr), "r" (kvm->arch.lpid) : "memory"); - if (cpu_has_feature(CPU_FTR_P9_TLBIE_BUG)) - asm volatile(PPC_TLBIE_5(%0, %1, 0, 0, 1) - : : "r" (addr), "r" (kvm->arch.lpid) : "memory"); - asm volatile("eieio ; tlbsync ; ptesync": : :"memory"); + unsigned long psize = PAGE_SIZE; + + if (pshift) + psize = 1UL << pshift; + + addr &= ~(psize - 1); + radix__flush_tlb_lpid_page(kvm->arch.lpid, addr, psize); } -static void kvmppc_radix_flush_pwc(struct kvm *kvm, unsigned long addr) +static void kvmppc_radix_flush_pwc(struct kvm *kvm) { - unsigned long rb = 0x2 << PPC_BITLSHIFT(53); /* IS = 2 */ - - asm volatile("ptesync": : :"memory"); - /* RIC=1 PRS=0 R=1 IS=2 */ - asm volatile(PPC_TLBIE_5(%0, %1, 1, 0, 1) - : : "r" (rb), "r" (kvm->arch.lpid) : "memory"); - asm volatile("eieio ; tlbsync ; ptesync": : :"memory"); + radix__flush_pwc_lpid(kvm->arch.lpid); } -unsigned long kvmppc_radix_update_pte(struct kvm *kvm, pte_t *ptep, +static unsigned long kvmppc_radix_update_pte(struct kvm *kvm, pte_t *ptep, unsigned long clr, unsigned long set, unsigned long addr, unsigned int shift) { @@ -228,6 +208,167 @@ static void kvmppc_pmd_free(pmd_t *pmdp) kmem_cache_free(kvm_pmd_cache, pmdp); } +static void kvmppc_unmap_pte(struct kvm *kvm, pte_t *pte, + unsigned long gpa, unsigned int shift) + +{ + unsigned long page_size = 1ul << shift; + unsigned long old; + + old = kvmppc_radix_update_pte(kvm, pte, ~0UL, 0, gpa, shift); + kvmppc_radix_tlbie_page(kvm, gpa, shift); + if (old & _PAGE_DIRTY) { + unsigned long gfn = gpa >> PAGE_SHIFT; + struct kvm_memory_slot *memslot; + + memslot = gfn_to_memslot(kvm, gfn); + if (memslot && memslot->dirty_bitmap) + kvmppc_update_dirty_map(memslot, gfn, page_size); + } +} + +/* + * kvmppc_free_p?d are used to free existing page tables, and recursively + * descend and clear and free children. + * Callers are responsible for flushing the PWC. + * + * When page tables are being unmapped/freed as part of page fault path + * (full == false), ptes are not expected. There is code to unmap them + * and emit a warning if encountered, but there may already be data + * corruption due to the unexpected mappings. 
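/*
 * The rewritten kvmppc_radix_tlbie_page() above derives the flush size
 * straight from the page shift and aligns the address down before
 * handing off to radix__flush_tlb_lpid_page(). The arithmetic,
 * extracted; 4096 stands in for PAGE_SIZE.
 */
#include <stdint.h>

static uint64_t tlbie_base(uint64_t addr, unsigned int pshift)
{
        uint64_t psize = pshift ? (1ULL << pshift) : 4096;

        return addr & ~(psize - 1);
}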
+ */ +static void kvmppc_unmap_free_pte(struct kvm *kvm, pte_t *pte, bool full) +{ + if (full) { + memset(pte, 0, sizeof(long) << PTE_INDEX_SIZE); + } else { + pte_t *p = pte; + unsigned long it; + + for (it = 0; it < PTRS_PER_PTE; ++it, ++p) { + if (pte_val(*p) == 0) + continue; + WARN_ON_ONCE(1); + kvmppc_unmap_pte(kvm, p, + pte_pfn(*p) << PAGE_SHIFT, + PAGE_SHIFT); + } + } + + kvmppc_pte_free(pte); +} + +static void kvmppc_unmap_free_pmd(struct kvm *kvm, pmd_t *pmd, bool full) +{ + unsigned long im; + pmd_t *p = pmd; + + for (im = 0; im < PTRS_PER_PMD; ++im, ++p) { + if (!pmd_present(*p)) + continue; + if (pmd_is_leaf(*p)) { + if (full) { + pmd_clear(p); + } else { + WARN_ON_ONCE(1); + kvmppc_unmap_pte(kvm, (pte_t *)p, + pte_pfn(*(pte_t *)p) << PAGE_SHIFT, + PMD_SHIFT); + } + } else { + pte_t *pte; + + pte = pte_offset_map(p, 0); + kvmppc_unmap_free_pte(kvm, pte, full); + pmd_clear(p); + } + } + kvmppc_pmd_free(pmd); +} + +static void kvmppc_unmap_free_pud(struct kvm *kvm, pud_t *pud) +{ + unsigned long iu; + pud_t *p = pud; + + for (iu = 0; iu < PTRS_PER_PUD; ++iu, ++p) { + if (!pud_present(*p)) + continue; + if (pud_huge(*p)) { + pud_clear(p); + } else { + pmd_t *pmd; + + pmd = pmd_offset(p, 0); + kvmppc_unmap_free_pmd(kvm, pmd, true); + pud_clear(p); + } + } + pud_free(kvm->mm, pud); +} + +void kvmppc_free_radix(struct kvm *kvm) +{ + unsigned long ig; + pgd_t *pgd; + + if (!kvm->arch.pgtable) + return; + pgd = kvm->arch.pgtable; + for (ig = 0; ig < PTRS_PER_PGD; ++ig, ++pgd) { + pud_t *pud; + + if (!pgd_present(*pgd)) + continue; + pud = pud_offset(pgd, 0); + kvmppc_unmap_free_pud(kvm, pud); + pgd_clear(pgd); + } + pgd_free(kvm->mm, kvm->arch.pgtable); + kvm->arch.pgtable = NULL; +} + +static void kvmppc_unmap_free_pmd_entry_table(struct kvm *kvm, pmd_t *pmd, + unsigned long gpa) +{ + pte_t *pte = pte_offset_kernel(pmd, 0); + + /* + * Clearing the pmd entry then flushing the PWC ensures that the pte + * page no longer be cached by the MMU, so can be freed without + * flushing the PWC again. + */ + pmd_clear(pmd); + kvmppc_radix_flush_pwc(kvm); + + kvmppc_unmap_free_pte(kvm, pte, false); +} + +static void kvmppc_unmap_free_pud_entry_table(struct kvm *kvm, pud_t *pud, + unsigned long gpa) +{ + pmd_t *pmd = pmd_offset(pud, 0); + + /* + * Clearing the pud entry then flushing the PWC ensures that the pmd + * page and any children pte pages will no longer be cached by the MMU, + * so can be freed without flushing the PWC again. + */ + pud_clear(pud); + kvmppc_radix_flush_pwc(kvm); + + kvmppc_unmap_free_pmd(kvm, pmd, false); +} + +/* + * There are a number of bits which may differ between different faults to + * the same partition scope entry. RC bits, in the course of cleaning and + * aging. And the write bit can change, either the access could have been + * upgraded, or a read fault could happen concurrently with a write fault + * that sets those bits first. 
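/*
 * The kvmppc_unmap_free_* helpers above all follow one pattern: walk a
 * level, clear or recurse into each present entry, then free the table
 * itself. A two-level toy tree (pointers instead of real PTEs) showing
 * that shape; table sizes and types are invented.
 */
#include <stdlib.h>

#define ENTRIES 512

struct pte_table { unsigned long pte[ENTRIES]; };
struct pmd_table { struct pte_table *pmd[ENTRIES]; };

static void unmap_free_pte(struct pte_table *pte)
{
        free(pte); /* leaf level: nothing below to descend into */
}

static void unmap_free_pmd(struct pmd_table *pmd)
{
        for (int i = 0; i < ENTRIES; i++) {
                if (!pmd->pmd[i])
                        continue;            /* not present */
                unmap_free_pte(pmd->pmd[i]); /* free the child first */
                pmd->pmd[i] = NULL;          /* then clear the entry */
        }
        free(pmd);
}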
+ */ +#define PTE_BITS_MUST_MATCH (~(_PAGE_WRITE | _PAGE_DIRTY | _PAGE_ACCESSED)) + static int kvmppc_create_pte(struct kvm *kvm, pte_t pte, unsigned long gpa, unsigned int level, unsigned long mmu_seq) { @@ -235,7 +376,6 @@ static int kvmppc_create_pte(struct kvm *kvm, pte_t pte, unsigned long gpa, pud_t *pud, *new_pud = NULL; pmd_t *pmd, *new_pmd = NULL; pte_t *ptep, *new_ptep = NULL; - unsigned long old; int ret; /* Traverse the guest's 2nd-level tree, allocate new levels needed */ @@ -273,42 +413,39 @@ static int kvmppc_create_pte(struct kvm *kvm, pte_t pte, unsigned long gpa, if (pud_huge(*pud)) { unsigned long hgpa = gpa & PUD_MASK; + /* Check if we raced and someone else has set the same thing */ + if (level == 2) { + if (pud_raw(*pud) == pte_raw(pte)) { + ret = 0; + goto out_unlock; + } + /* Valid 1GB page here already, add our extra bits */ + WARN_ON_ONCE((pud_val(*pud) ^ pte_val(pte)) & + PTE_BITS_MUST_MATCH); + kvmppc_radix_update_pte(kvm, (pte_t *)pud, + 0, pte_val(pte), hgpa, PUD_SHIFT); + ret = 0; + goto out_unlock; + } /* * If we raced with another CPU which has just put * a 1GB pte in after we saw a pmd page, try again. */ - if (level <= 1 && !new_pmd) { + if (!new_pmd) { ret = -EAGAIN; goto out_unlock; } - /* Check if we raced and someone else has set the same thing */ - if (level == 2 && pud_raw(*pud) == pte_raw(pte)) { - ret = 0; - goto out_unlock; - } /* Valid 1GB page here already, remove it */ - old = kvmppc_radix_update_pte(kvm, (pte_t *)pud, - ~0UL, 0, hgpa, PUD_SHIFT); - kvmppc_radix_tlbie_page(kvm, hgpa, PUD_SHIFT); - if (old & _PAGE_DIRTY) { - unsigned long gfn = hgpa >> PAGE_SHIFT; - struct kvm_memory_slot *memslot; - memslot = gfn_to_memslot(kvm, gfn); - if (memslot && memslot->dirty_bitmap) - kvmppc_update_dirty_map(memslot, - gfn, PUD_SIZE); - } + kvmppc_unmap_pte(kvm, (pte_t *)pud, hgpa, PUD_SHIFT); } if (level == 2) { if (!pud_none(*pud)) { /* * There's a page table page here, but we wanted to * install a large page, so remove and free the page - * table page. new_pmd will be NULL since level == 2. + * table page. */ - new_pmd = pmd_offset(pud, 0); - pud_clear(pud); - kvmppc_radix_flush_pwc(kvm, gpa); + kvmppc_unmap_free_pud_entry_table(kvm, pud, gpa); } kvmppc_radix_set_pte_at(kvm, gpa, (pte_t *)pud, pte); ret = 0; @@ -324,42 +461,40 @@ static int kvmppc_create_pte(struct kvm *kvm, pte_t pte, unsigned long gpa, if (pmd_is_leaf(*pmd)) { unsigned long lgpa = gpa & PMD_MASK; + /* Check if we raced and someone else has set the same thing */ + if (level == 1) { + if (pmd_raw(*pmd) == pte_raw(pte)) { + ret = 0; + goto out_unlock; + } + /* Valid 2MB page here already, add our extra bits */ + WARN_ON_ONCE((pmd_val(*pmd) ^ pte_val(pte)) & + PTE_BITS_MUST_MATCH); + kvmppc_radix_update_pte(kvm, pmdp_ptep(pmd), + 0, pte_val(pte), lgpa, PMD_SHIFT); + ret = 0; + goto out_unlock; + } + /* * If we raced with another CPU which has just put * a 2MB pte in after we saw a pte page, try again. 
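/*
 * Standalone form of the PTE_BITS_MUST_MATCH assertion above: two
 * faults racing on the same page may legitimately disagree only in the
 * write/dirty/accessed bits; any other difference indicates a bug.
 * Bit values here are invented placeholders.
 */
#include <stdbool.h>
#include <stdint.h>

#define _PAGE_WRITE    0x1ULL
#define _PAGE_DIRTY    0x2ULL
#define _PAGE_ACCESSED 0x4ULL
#define PTE_BITS_MUST_MATCH (~(_PAGE_WRITE | _PAGE_DIRTY | _PAGE_ACCESSED))

static bool racing_fault_ok(uint64_t installed, uint64_t wanted)
{
        return ((installed ^ wanted) & PTE_BITS_MUST_MATCH) == 0;
}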
*/ - if (level == 0 && !new_ptep) { + if (!new_ptep) { ret = -EAGAIN; goto out_unlock; } - /* Check if we raced and someone else has set the same thing */ - if (level == 1 && pmd_raw(*pmd) == pte_raw(pte)) { - ret = 0; - goto out_unlock; - } /* Valid 2MB page here already, remove it */ - old = kvmppc_radix_update_pte(kvm, pmdp_ptep(pmd), - ~0UL, 0, lgpa, PMD_SHIFT); - kvmppc_radix_tlbie_page(kvm, lgpa, PMD_SHIFT); - if (old & _PAGE_DIRTY) { - unsigned long gfn = lgpa >> PAGE_SHIFT; - struct kvm_memory_slot *memslot; - memslot = gfn_to_memslot(kvm, gfn); - if (memslot && memslot->dirty_bitmap) - kvmppc_update_dirty_map(memslot, - gfn, PMD_SIZE); - } + kvmppc_unmap_pte(kvm, pmdp_ptep(pmd), lgpa, PMD_SHIFT); } if (level == 1) { if (!pmd_none(*pmd)) { /* * There's a page table page here, but we wanted to * install a large page, so remove and free the page - * table page. new_ptep will be NULL since level == 1. + * table page. */ - new_ptep = pte_offset_kernel(pmd, 0); - pmd_clear(pmd); - kvmppc_radix_flush_pwc(kvm, gpa); + kvmppc_unmap_free_pmd_entry_table(kvm, pmd, gpa); } kvmppc_radix_set_pte_at(kvm, gpa, pmdp_ptep(pmd), pte); ret = 0; @@ -378,12 +513,12 @@ static int kvmppc_create_pte(struct kvm *kvm, pte_t pte, unsigned long gpa, ret = 0; goto out_unlock; } - /* PTE was previously valid, so invalidate it */ - old = kvmppc_radix_update_pte(kvm, ptep, _PAGE_PRESENT, - 0, gpa, 0); - kvmppc_radix_tlbie_page(kvm, gpa, 0); - if (old & _PAGE_DIRTY) - mark_page_dirty(kvm, gpa >> PAGE_SHIFT); + /* Valid page here already, add our extra bits */ + WARN_ON_ONCE((pte_val(*ptep) ^ pte_val(pte)) & + PTE_BITS_MUST_MATCH); + kvmppc_radix_update_pte(kvm, ptep, 0, pte_val(pte), gpa, 0); + ret = 0; + goto out_unlock; } kvmppc_radix_set_pte_at(kvm, gpa, ptep, pte); ret = 0; @@ -565,9 +700,13 @@ int kvmppc_book3s_radix_page_fault(struct kvm_run *run, struct kvm_vcpu *vcpu, unsigned long mask = (1ul << shift) - PAGE_SIZE; pte = __pte(pte_val(pte) | (hva & mask)); } - if (!(writing || upgrade_write)) - pte = __pte(pte_val(pte) & ~ _PAGE_WRITE); - pte = __pte(pte_val(pte) | _PAGE_EXEC); + pte = __pte(pte_val(pte) | _PAGE_EXEC | _PAGE_ACCESSED); + if (writing || upgrade_write) { + if (pte_val(pte) & _PAGE_WRITE) + pte = __pte(pte_val(pte) | _PAGE_DIRTY); + } else { + pte = __pte(pte_val(pte) & ~(_PAGE_WRITE | _PAGE_DIRTY)); + } } /* Allocate space in the tree and write the PTE */ @@ -734,51 +873,6 @@ int kvmppc_init_vm_radix(struct kvm *kvm) return 0; } -void kvmppc_free_radix(struct kvm *kvm) -{ - unsigned long ig, iu, im; - pte_t *pte; - pmd_t *pmd; - pud_t *pud; - pgd_t *pgd; - - if (!kvm->arch.pgtable) - return; - pgd = kvm->arch.pgtable; - for (ig = 0; ig < PTRS_PER_PGD; ++ig, ++pgd) { - if (!pgd_present(*pgd)) - continue; - pud = pud_offset(pgd, 0); - for (iu = 0; iu < PTRS_PER_PUD; ++iu, ++pud) { - if (!pud_present(*pud)) - continue; - if (pud_huge(*pud)) { - pud_clear(pud); - continue; - } - pmd = pmd_offset(pud, 0); - for (im = 0; im < PTRS_PER_PMD; ++im, ++pmd) { - if (pmd_is_leaf(*pmd)) { - pmd_clear(pmd); - continue; - } - if (!pmd_present(*pmd)) - continue; - pte = pte_offset_map(pmd, 0); - memset(pte, 0, sizeof(long) << PTE_INDEX_SIZE); - kvmppc_pte_free(pte); - pmd_clear(pmd); - } - kvmppc_pmd_free(pmd_offset(pud, 0)); - pud_clear(pud); - } - pud_free(kvm->mm, pud_offset(pgd, 0)); - pgd_clear(pgd); - } - pgd_free(kvm->mm, kvm->arch.pgtable); - kvm->arch.pgtable = NULL; -} - static void pte_ctor(void *addr) { memset(addr, 0, RADIX_PTE_TABLE_SIZE); diff --git a/arch/powerpc/kvm/book3s_64_vio.c 
b/arch/powerpc/kvm/book3s_64_vio.c index 4dffa611376d..8c456fa691a5 100644 --- a/arch/powerpc/kvm/book3s_64_vio.c +++ b/arch/powerpc/kvm/book3s_64_vio.c @@ -176,14 +176,12 @@ extern long kvm_spapr_tce_attach_iommu_group(struct kvm *kvm, int tablefd, if (!tbltmp) continue; - /* - * Make sure hardware table parameters are exactly the same; - * this is used in the TCE handlers where boundary checks - * use only the first attached table. - */ - if ((tbltmp->it_page_shift == stt->page_shift) && - (tbltmp->it_offset == stt->offset) && - (tbltmp->it_size == stt->size)) { + /* Make sure hardware table parameters are compatible */ + if ((tbltmp->it_page_shift <= stt->page_shift) && + (tbltmp->it_offset << tbltmp->it_page_shift == + stt->offset << stt->page_shift) && + (tbltmp->it_size << tbltmp->it_page_shift == + stt->size << stt->page_shift)) { /* * Reference the table to avoid races with * add/remove DMA windows. @@ -237,7 +235,7 @@ static void release_spapr_tce_table(struct rcu_head *head) kfree(stt); } -static int kvm_spapr_tce_fault(struct vm_fault *vmf) +static vm_fault_t kvm_spapr_tce_fault(struct vm_fault *vmf) { struct kvmppc_spapr_tce_table *stt = vmf->vma->vm_file->private_data; struct page *page; @@ -302,7 +300,8 @@ long kvm_vm_ioctl_create_spapr_tce(struct kvm *kvm, int ret = -ENOMEM; int i; - if (!args->size) + if (!args->size || args->page_shift < 12 || args->page_shift > 34 || + (args->offset + args->size > (ULLONG_MAX >> args->page_shift))) return -EINVAL; size = _ALIGN_UP(args->size, PAGE_SIZE >> 3); @@ -396,7 +395,7 @@ static long kvmppc_tce_iommu_mapped_dec(struct kvm *kvm, return H_SUCCESS; } -static long kvmppc_tce_iommu_unmap(struct kvm *kvm, +static long kvmppc_tce_iommu_do_unmap(struct kvm *kvm, struct iommu_table *tbl, unsigned long entry) { enum dma_data_direction dir = DMA_NONE; @@ -416,7 +415,24 @@ static long kvmppc_tce_iommu_unmap(struct kvm *kvm, return ret; } -long kvmppc_tce_iommu_map(struct kvm *kvm, struct iommu_table *tbl, +static long kvmppc_tce_iommu_unmap(struct kvm *kvm, + struct kvmppc_spapr_tce_table *stt, struct iommu_table *tbl, + unsigned long entry) +{ + unsigned long i, ret = H_SUCCESS; + unsigned long subpages = 1ULL << (stt->page_shift - tbl->it_page_shift); + unsigned long io_entry = entry * subpages; + + for (i = 0; i < subpages; ++i) { + ret = kvmppc_tce_iommu_do_unmap(kvm, tbl, io_entry + i); + if (ret != H_SUCCESS) + break; + } + + return ret; +} + +long kvmppc_tce_iommu_do_map(struct kvm *kvm, struct iommu_table *tbl, unsigned long entry, unsigned long ua, enum dma_data_direction dir) { @@ -433,7 +449,7 @@ long kvmppc_tce_iommu_map(struct kvm *kvm, struct iommu_table *tbl, /* This only handles v2 IOMMU type, v1 is handled via ioctl() */ return H_TOO_HARD; - if (WARN_ON_ONCE(mm_iommu_ua_to_hpa(mem, ua, &hpa))) + if (WARN_ON_ONCE(mm_iommu_ua_to_hpa(mem, ua, tbl->it_page_shift, &hpa))) return H_HARDWARE; if (mm_iommu_mapped_inc(mem)) @@ -453,6 +469,27 @@ long kvmppc_tce_iommu_map(struct kvm *kvm, struct iommu_table *tbl, return 0; } +static long kvmppc_tce_iommu_map(struct kvm *kvm, + struct kvmppc_spapr_tce_table *stt, struct iommu_table *tbl, + unsigned long entry, unsigned long ua, + enum dma_data_direction dir) +{ + unsigned long i, pgoff, ret = H_SUCCESS; + unsigned long subpages = 1ULL << (stt->page_shift - tbl->it_page_shift); + unsigned long io_entry = entry * subpages; + + for (i = 0, pgoff = 0; i < subpages; + ++i, pgoff += IOMMU_PAGE_SIZE(tbl)) { + + ret = kvmppc_tce_iommu_do_map(kvm, tbl, + io_entry + i, ua + pgoff, dir); + if (ret 
!= H_SUCCESS) + break; + } + + return ret; +} + long kvmppc_h_put_tce(struct kvm_vcpu *vcpu, unsigned long liobn, unsigned long ioba, unsigned long tce) { @@ -491,10 +528,10 @@ long kvmppc_h_put_tce(struct kvm_vcpu *vcpu, unsigned long liobn, list_for_each_entry_lockless(stit, &stt->iommu_tables, next) { if (dir == DMA_NONE) - ret = kvmppc_tce_iommu_unmap(vcpu->kvm, + ret = kvmppc_tce_iommu_unmap(vcpu->kvm, stt, stit->tbl, entry); else - ret = kvmppc_tce_iommu_map(vcpu->kvm, stit->tbl, + ret = kvmppc_tce_iommu_map(vcpu->kvm, stt, stit->tbl, entry, ua, dir); if (ret == H_SUCCESS) @@ -570,7 +607,7 @@ long kvmppc_h_put_tce_indirect(struct kvm_vcpu *vcpu, return H_PARAMETER; list_for_each_entry_lockless(stit, &stt->iommu_tables, next) { - ret = kvmppc_tce_iommu_map(vcpu->kvm, + ret = kvmppc_tce_iommu_map(vcpu->kvm, stt, stit->tbl, entry + i, ua, iommu_tce_direction(tce)); @@ -615,10 +652,10 @@ long kvmppc_h_stuff_tce(struct kvm_vcpu *vcpu, return H_PARAMETER; list_for_each_entry_lockless(stit, &stt->iommu_tables, next) { - unsigned long entry = ioba >> stit->tbl->it_page_shift; + unsigned long entry = ioba >> stt->page_shift; for (i = 0; i < npages; ++i) { - ret = kvmppc_tce_iommu_unmap(vcpu->kvm, + ret = kvmppc_tce_iommu_unmap(vcpu->kvm, stt, stit->tbl, entry + i); if (ret == H_SUCCESS) diff --git a/arch/powerpc/kvm/book3s_64_vio_hv.c b/arch/powerpc/kvm/book3s_64_vio_hv.c index 6651f736a0b1..5b298f5a1a14 100644 --- a/arch/powerpc/kvm/book3s_64_vio_hv.c +++ b/arch/powerpc/kvm/book3s_64_vio_hv.c @@ -221,7 +221,7 @@ static long kvmppc_rm_tce_iommu_mapped_dec(struct kvm *kvm, return H_SUCCESS; } -static long kvmppc_rm_tce_iommu_unmap(struct kvm *kvm, +static long kvmppc_rm_tce_iommu_do_unmap(struct kvm *kvm, struct iommu_table *tbl, unsigned long entry) { enum dma_data_direction dir = DMA_NONE; @@ -245,7 +245,24 @@ static long kvmppc_rm_tce_iommu_unmap(struct kvm *kvm, return ret; } -static long kvmppc_rm_tce_iommu_map(struct kvm *kvm, struct iommu_table *tbl, +static long kvmppc_rm_tce_iommu_unmap(struct kvm *kvm, + struct kvmppc_spapr_tce_table *stt, struct iommu_table *tbl, + unsigned long entry) +{ + unsigned long i, ret = H_SUCCESS; + unsigned long subpages = 1ULL << (stt->page_shift - tbl->it_page_shift); + unsigned long io_entry = entry * subpages; + + for (i = 0; i < subpages; ++i) { + ret = kvmppc_rm_tce_iommu_do_unmap(kvm, tbl, io_entry + i); + if (ret != H_SUCCESS) + break; + } + + return ret; +} + +static long kvmppc_rm_tce_iommu_do_map(struct kvm *kvm, struct iommu_table *tbl, unsigned long entry, unsigned long ua, enum dma_data_direction dir) { @@ -262,7 +279,8 @@ static long kvmppc_rm_tce_iommu_map(struct kvm *kvm, struct iommu_table *tbl, if (!mem) return H_TOO_HARD; - if (WARN_ON_ONCE_RM(mm_iommu_ua_to_hpa_rm(mem, ua, &hpa))) + if (WARN_ON_ONCE_RM(mm_iommu_ua_to_hpa_rm(mem, ua, tbl->it_page_shift, + &hpa))) return H_HARDWARE; pua = (void *) vmalloc_to_phys(pua); @@ -290,6 +308,27 @@ static long kvmppc_rm_tce_iommu_map(struct kvm *kvm, struct iommu_table *tbl, return 0; } +static long kvmppc_rm_tce_iommu_map(struct kvm *kvm, + struct kvmppc_spapr_tce_table *stt, struct iommu_table *tbl, + unsigned long entry, unsigned long ua, + enum dma_data_direction dir) +{ + unsigned long i, pgoff, ret = H_SUCCESS; + unsigned long subpages = 1ULL << (stt->page_shift - tbl->it_page_shift); + unsigned long io_entry = entry * subpages; + + for (i = 0, pgoff = 0; i < subpages; + ++i, pgoff += IOMMU_PAGE_SIZE(tbl)) { + + ret = kvmppc_rm_tce_iommu_do_map(kvm, tbl, + io_entry + i, ua + pgoff, dir); + 
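/*
 * The map/unmap loops above handle a guest TCE page size larger than
 * the host IOMMU page size: one guest entry fans out to
 * 2^(guest_shift - host_shift) host entries. The index arithmetic,
 * extracted; assumes guest_shift >= host_shift, as the attach-time
 * compatibility check enforces.
 */
#include <stdint.h>
#include <stdio.h>

static void show_fanout(unsigned int guest_shift, unsigned int host_shift,
                        uint64_t entry)
{
        uint64_t subpages = 1ULL << (guest_shift - host_shift);
        uint64_t io_entry = entry * subpages;

        printf("guest entry %llu -> host entries [%llu, %llu]\n",
               (unsigned long long)entry,
               (unsigned long long)io_entry,
               (unsigned long long)(io_entry + subpages - 1));
}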
if (ret != H_SUCCESS) + break; + } + + return ret; +} + long kvmppc_rm_h_put_tce(struct kvm_vcpu *vcpu, unsigned long liobn, unsigned long ioba, unsigned long tce) { @@ -327,10 +366,10 @@ long kvmppc_rm_h_put_tce(struct kvm_vcpu *vcpu, unsigned long liobn, list_for_each_entry_lockless(stit, &stt->iommu_tables, next) { if (dir == DMA_NONE) - ret = kvmppc_rm_tce_iommu_unmap(vcpu->kvm, + ret = kvmppc_rm_tce_iommu_unmap(vcpu->kvm, stt, stit->tbl, entry); else - ret = kvmppc_rm_tce_iommu_map(vcpu->kvm, + ret = kvmppc_rm_tce_iommu_map(vcpu->kvm, stt, stit->tbl, entry, ua, dir); if (ret == H_SUCCESS) @@ -431,7 +470,8 @@ long kvmppc_rm_h_put_tce_indirect(struct kvm_vcpu *vcpu, mem = mm_iommu_lookup_rm(vcpu->kvm->mm, ua, IOMMU_PAGE_SIZE_4K); if (mem) - prereg = mm_iommu_ua_to_hpa_rm(mem, ua, &tces) == 0; + prereg = mm_iommu_ua_to_hpa_rm(mem, ua, + IOMMU_PAGE_SHIFT_4K, &tces) == 0; } if (!prereg) { @@ -477,7 +517,7 @@ long kvmppc_rm_h_put_tce_indirect(struct kvm_vcpu *vcpu, return H_PARAMETER; list_for_each_entry_lockless(stit, &stt->iommu_tables, next) { - ret = kvmppc_rm_tce_iommu_map(vcpu->kvm, + ret = kvmppc_rm_tce_iommu_map(vcpu->kvm, stt, stit->tbl, entry + i, ua, iommu_tce_direction(tce)); @@ -526,10 +566,10 @@ long kvmppc_rm_h_stuff_tce(struct kvm_vcpu *vcpu, return H_PARAMETER; list_for_each_entry_lockless(stit, &stt->iommu_tables, next) { - unsigned long entry = ioba >> stit->tbl->it_page_shift; + unsigned long entry = ioba >> stt->page_shift; for (i = 0; i < npages; ++i) { - ret = kvmppc_rm_tce_iommu_unmap(vcpu->kvm, + ret = kvmppc_rm_tce_iommu_unmap(vcpu->kvm, stt, stit->tbl, entry + i); if (ret == H_SUCCESS) @@ -571,7 +611,7 @@ long kvmppc_h_get_tce(struct kvm_vcpu *vcpu, unsigned long liobn, page = stt->pages[idx / TCES_PER_PAGE]; tbl = (u64 *)page_address(page); - vcpu->arch.gpr[4] = tbl[idx % TCES_PER_PAGE]; + vcpu->arch.regs.gpr[4] = tbl[idx % TCES_PER_PAGE]; return H_SUCCESS; } diff --git a/arch/powerpc/kvm/book3s_emulate.c b/arch/powerpc/kvm/book3s_emulate.c index 68d68983948e..36b11c5a0dbb 100644 --- a/arch/powerpc/kvm/book3s_emulate.c +++ b/arch/powerpc/kvm/book3s_emulate.c @@ -23,7 +23,9 @@ #include #include #include +#include #include "book3s.h" +#include #define OP_19_XOP_RFID 18 #define OP_19_XOP_RFI 50 @@ -47,6 +49,12 @@ #define OP_31_XOP_EIOIO 854 #define OP_31_XOP_SLBMFEE 915 +#define OP_31_XOP_TBEGIN 654 +#define OP_31_XOP_TABORT 910 + +#define OP_31_XOP_TRECLAIM 942 +#define OP_31_XOP_TRCHKPT 1006 + /* DCBZ is actually 1014, but we patch it to 1010 so we get a trap */ #define OP_31_XOP_DCBZ 1010 @@ -87,6 +95,157 @@ static bool spr_allowed(struct kvm_vcpu *vcpu, enum priv_level level) return true; } +#ifdef CONFIG_PPC_TRANSACTIONAL_MEM +static inline void kvmppc_copyto_vcpu_tm(struct kvm_vcpu *vcpu) +{ + memcpy(&vcpu->arch.gpr_tm[0], &vcpu->arch.regs.gpr[0], + sizeof(vcpu->arch.gpr_tm)); + memcpy(&vcpu->arch.fp_tm, &vcpu->arch.fp, + sizeof(struct thread_fp_state)); + memcpy(&vcpu->arch.vr_tm, &vcpu->arch.vr, + sizeof(struct thread_vr_state)); + vcpu->arch.ppr_tm = vcpu->arch.ppr; + vcpu->arch.dscr_tm = vcpu->arch.dscr; + vcpu->arch.amr_tm = vcpu->arch.amr; + vcpu->arch.ctr_tm = vcpu->arch.regs.ctr; + vcpu->arch.tar_tm = vcpu->arch.tar; + vcpu->arch.lr_tm = vcpu->arch.regs.link; + vcpu->arch.cr_tm = vcpu->arch.cr; + vcpu->arch.xer_tm = vcpu->arch.regs.xer; + vcpu->arch.vrsave_tm = vcpu->arch.vrsave; +} + +static inline void kvmppc_copyfrom_vcpu_tm(struct kvm_vcpu *vcpu) +{ + memcpy(&vcpu->arch.regs.gpr[0], &vcpu->arch.gpr_tm[0], + sizeof(vcpu->arch.regs.gpr)); + 
memcpy(&vcpu->arch.fp, &vcpu->arch.fp_tm, + sizeof(struct thread_fp_state)); + memcpy(&vcpu->arch.vr, &vcpu->arch.vr_tm, + sizeof(struct thread_vr_state)); + vcpu->arch.ppr = vcpu->arch.ppr_tm; + vcpu->arch.dscr = vcpu->arch.dscr_tm; + vcpu->arch.amr = vcpu->arch.amr_tm; + vcpu->arch.regs.ctr = vcpu->arch.ctr_tm; + vcpu->arch.tar = vcpu->arch.tar_tm; + vcpu->arch.regs.link = vcpu->arch.lr_tm; + vcpu->arch.cr = vcpu->arch.cr_tm; + vcpu->arch.regs.xer = vcpu->arch.xer_tm; + vcpu->arch.vrsave = vcpu->arch.vrsave_tm; +} + +static void kvmppc_emulate_treclaim(struct kvm_vcpu *vcpu, int ra_val) +{ + unsigned long guest_msr = kvmppc_get_msr(vcpu); + int fc_val = ra_val ? ra_val : 1; + uint64_t texasr; + + /* CR0 = 0 | MSR[TS] | 0 */ + vcpu->arch.cr = (vcpu->arch.cr & ~(CR0_MASK << CR0_SHIFT)) | + (((guest_msr & MSR_TS_MASK) >> (MSR_TS_S_LG - 1)) + << CR0_SHIFT); + + preempt_disable(); + tm_enable(); + texasr = mfspr(SPRN_TEXASR); + kvmppc_save_tm_pr(vcpu); + kvmppc_copyfrom_vcpu_tm(vcpu); + + /* failure recording depends on Failure Summary bit */ + if (!(texasr & TEXASR_FS)) { + texasr &= ~TEXASR_FC; + texasr |= ((u64)fc_val << TEXASR_FC_LG) | TEXASR_FS; + + texasr &= ~(TEXASR_PR | TEXASR_HV); + if (kvmppc_get_msr(vcpu) & MSR_PR) + texasr |= TEXASR_PR; + + if (kvmppc_get_msr(vcpu) & MSR_HV) + texasr |= TEXASR_HV; + + vcpu->arch.texasr = texasr; + vcpu->arch.tfiar = kvmppc_get_pc(vcpu); + mtspr(SPRN_TEXASR, texasr); + mtspr(SPRN_TFIAR, vcpu->arch.tfiar); + } + tm_disable(); + /* + * treclaim need quit to non-transactional state. + */ + guest_msr &= ~(MSR_TS_MASK); + kvmppc_set_msr(vcpu, guest_msr); + preempt_enable(); + + if (vcpu->arch.shadow_fscr & FSCR_TAR) + mtspr(SPRN_TAR, vcpu->arch.tar); +} + +static void kvmppc_emulate_trchkpt(struct kvm_vcpu *vcpu) +{ + unsigned long guest_msr = kvmppc_get_msr(vcpu); + + preempt_disable(); + /* + * need flush FP/VEC/VSX to vcpu save area before + * copy. + */ + kvmppc_giveup_ext(vcpu, MSR_VSX); + kvmppc_giveup_fac(vcpu, FSCR_TAR_LG); + kvmppc_copyto_vcpu_tm(vcpu); + kvmppc_save_tm_sprs(vcpu); + + /* + * as a result of trecheckpoint. set TS to suspended. + */ + guest_msr &= ~(MSR_TS_MASK); + guest_msr |= MSR_TS_S; + kvmppc_set_msr(vcpu, guest_msr); + kvmppc_restore_tm_pr(vcpu); + preempt_enable(); +} + +/* emulate tabort. at guest privilege state */ +void kvmppc_emulate_tabort(struct kvm_vcpu *vcpu, int ra_val) +{ + /* currently we only emulate tabort. but no emulation of other + * tabort variants since there is no kernel usage of them at + * present. + */ + unsigned long guest_msr = kvmppc_get_msr(vcpu); + uint64_t org_texasr; + + preempt_disable(); + tm_enable(); + org_texasr = mfspr(SPRN_TEXASR); + tm_abort(ra_val); + + /* CR0 = 0 | MSR[TS] | 0 */ + vcpu->arch.cr = (vcpu->arch.cr & ~(CR0_MASK << CR0_SHIFT)) | + (((guest_msr & MSR_TS_MASK) >> (MSR_TS_S_LG - 1)) + << CR0_SHIFT); + + vcpu->arch.texasr = mfspr(SPRN_TEXASR); + /* failure recording depends on Failure Summary bit, + * and tabort will be treated as nops in non-transactional + * state. 
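/*
 * The "CR0 = 0 | MSR[TS] | 0" computation used by the treclaim/tabort
 * emulation above, in isolation: the two TS bits of the guest MSR land
 * in the middle of CR field 0. Shift and mask values are simplified
 * stand-ins for the kernel's CR0_* and MSR_TS_* macros.
 */
#include <stdint.h>

#define CR0_SHIFT    28
#define CR0_MASK     0xFULL
#define MSR_TS_SHIFT 33
#define MSR_TS_MASK  (3ULL << MSR_TS_SHIFT)

static uint32_t fold_ts_into_cr0(uint32_t cr, uint64_t guest_msr)
{
        uint64_t ts = (guest_msr & MSR_TS_MASK) >> (MSR_TS_SHIFT - 1);

        return (cr & ~(CR0_MASK << CR0_SHIFT)) | (uint32_t)(ts << CR0_SHIFT);
}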
+ */ + if (!(org_texasr & TEXASR_FS) && + MSR_TM_ACTIVE(guest_msr)) { + vcpu->arch.texasr &= ~(TEXASR_PR | TEXASR_HV); + if (guest_msr & MSR_PR) + vcpu->arch.texasr |= TEXASR_PR; + + if (guest_msr & MSR_HV) + vcpu->arch.texasr |= TEXASR_HV; + + vcpu->arch.tfiar = kvmppc_get_pc(vcpu); + } + tm_disable(); + preempt_enable(); +} + +#endif + int kvmppc_core_emulate_op_pr(struct kvm_run *run, struct kvm_vcpu *vcpu, unsigned int inst, int *advance) { @@ -117,11 +276,28 @@ int kvmppc_core_emulate_op_pr(struct kvm_run *run, struct kvm_vcpu *vcpu, case 19: switch (get_xop(inst)) { case OP_19_XOP_RFID: - case OP_19_XOP_RFI: + case OP_19_XOP_RFI: { + unsigned long srr1 = kvmppc_get_srr1(vcpu); +#ifdef CONFIG_PPC_TRANSACTIONAL_MEM + unsigned long cur_msr = kvmppc_get_msr(vcpu); + + /* + * Add rules to fit the ISA specification regarding TM + * state transition in TM disable/Suspended state, + * where the target TM state is TM inactive(00); the + * change should be suppressed. + */ + if (((cur_msr & MSR_TM) == 0) && + ((srr1 & MSR_TM) == 0) && + MSR_TM_SUSPENDED(cur_msr) && + !MSR_TM_ACTIVE(srr1)) + srr1 |= MSR_TS_S; +#endif kvmppc_set_pc(vcpu, kvmppc_get_srr0(vcpu)); - kvmppc_set_msr(vcpu, kvmppc_get_srr1(vcpu)); + kvmppc_set_msr(vcpu, srr1); *advance = 0; break; + } default: emulated = EMULATE_FAIL; @@ -304,6 +480,140 @@ int kvmppc_core_emulate_op_pr(struct kvm_run *run, struct kvm_vcpu *vcpu, break; } +#ifdef CONFIG_PPC_TRANSACTIONAL_MEM + case OP_31_XOP_TBEGIN: + { + if (!cpu_has_feature(CPU_FTR_TM)) + break; + + if (!(kvmppc_get_msr(vcpu) & MSR_TM)) { + kvmppc_trigger_fac_interrupt(vcpu, FSCR_TM_LG); + emulated = EMULATE_AGAIN; + break; + } + + if (!(kvmppc_get_msr(vcpu) & MSR_PR)) { + preempt_disable(); + vcpu->arch.cr = (CR0_TBEGIN_FAILURE | + (vcpu->arch.cr & ~(CR0_MASK << CR0_SHIFT))); + + vcpu->arch.texasr = (TEXASR_FS | TEXASR_EXACT | + (((u64)(TM_CAUSE_EMULATE | TM_CAUSE_PERSISTENT)) + << TEXASR_FC_LG)); + + if ((inst >> 21) & 0x1) + vcpu->arch.texasr |= TEXASR_ROT; + + if (kvmppc_get_msr(vcpu) & MSR_HV) + vcpu->arch.texasr |= TEXASR_HV; + + vcpu->arch.tfhar = kvmppc_get_pc(vcpu) + 4; + vcpu->arch.tfiar = kvmppc_get_pc(vcpu); + + kvmppc_restore_tm_sprs(vcpu); + preempt_enable(); + } else + emulated = EMULATE_FAIL; + break; + } + case OP_31_XOP_TABORT: + { + ulong guest_msr = kvmppc_get_msr(vcpu); + unsigned long ra_val = 0; + + if (!cpu_has_feature(CPU_FTR_TM)) + break; + + if (!(kvmppc_get_msr(vcpu) & MSR_TM)) { + kvmppc_trigger_fac_interrupt(vcpu, FSCR_TM_LG); + emulated = EMULATE_AGAIN; + break; + } + + /* only emulate for a privileged guest, since a problem-state + * guest can run with TM enabled and we don't expect to + * trap here in that case.
+ */ + WARN_ON(guest_msr & MSR_PR); + + if (ra) + ra_val = kvmppc_get_gpr(vcpu, ra); + + kvmppc_emulate_tabort(vcpu, ra_val); + break; + } + case OP_31_XOP_TRECLAIM: + { + ulong guest_msr = kvmppc_get_msr(vcpu); + unsigned long ra_val = 0; + + if (!cpu_has_feature(CPU_FTR_TM)) + break; + + if (!(kvmppc_get_msr(vcpu) & MSR_TM)) { + kvmppc_trigger_fac_interrupt(vcpu, FSCR_TM_LG); + emulated = EMULATE_AGAIN; + break; + } + + /* generate interrupts based on priorities */ + if (guest_msr & MSR_PR) { + /* Privileged Instruction type Program Interrupt */ + kvmppc_core_queue_program(vcpu, SRR1_PROGPRIV); + emulated = EMULATE_AGAIN; + break; + } + + if (!MSR_TM_ACTIVE(guest_msr)) { + /* TM bad thing interrupt */ + kvmppc_core_queue_program(vcpu, SRR1_PROGTM); + emulated = EMULATE_AGAIN; + break; + } + + if (ra) + ra_val = kvmppc_get_gpr(vcpu, ra); + kvmppc_emulate_treclaim(vcpu, ra_val); + break; + } + case OP_31_XOP_TRCHKPT: + { + ulong guest_msr = kvmppc_get_msr(vcpu); + unsigned long texasr; + + if (!cpu_has_feature(CPU_FTR_TM)) + break; + + if (!(kvmppc_get_msr(vcpu) & MSR_TM)) { + kvmppc_trigger_fac_interrupt(vcpu, FSCR_TM_LG); + emulated = EMULATE_AGAIN; + break; + } + + /* generate interrupt based on priorities */ + if (guest_msr & MSR_PR) { + /* Privileged Instruction type Program Intr */ + kvmppc_core_queue_program(vcpu, SRR1_PROGPRIV); + emulated = EMULATE_AGAIN; + break; + } + + tm_enable(); + texasr = mfspr(SPRN_TEXASR); + tm_disable(); + + if (MSR_TM_ACTIVE(guest_msr) || + !(texasr & (TEXASR_FS))) { + /* TM bad thing interrupt */ + kvmppc_core_queue_program(vcpu, SRR1_PROGTM); + emulated = EMULATE_AGAIN; + break; + } + + kvmppc_emulate_trchkpt(vcpu); + break; + } +#endif default: emulated = EMULATE_FAIL; } @@ -465,13 +775,38 @@ int kvmppc_core_emulate_mtspr_pr(struct kvm_vcpu *vcpu, int sprn, ulong spr_val) break; #ifdef CONFIG_PPC_TRANSACTIONAL_MEM case SPRN_TFHAR: - vcpu->arch.tfhar = spr_val; - break; case SPRN_TEXASR: - vcpu->arch.texasr = spr_val; - break; case SPRN_TFIAR: - vcpu->arch.tfiar = spr_val; + if (!cpu_has_feature(CPU_FTR_TM)) + break; + + if (!(kvmppc_get_msr(vcpu) & MSR_TM)) { + kvmppc_trigger_fac_interrupt(vcpu, FSCR_TM_LG); + emulated = EMULATE_AGAIN; + break; + } + + if (MSR_TM_ACTIVE(kvmppc_get_msr(vcpu)) && + !((MSR_TM_SUSPENDED(kvmppc_get_msr(vcpu))) && + (sprn == SPRN_TFHAR))) { + /* it is illegal to mtspr() TM regs in + * other than non-transactional state, with + * the exception of TFHAR in suspend state. 
+ */ + kvmppc_core_queue_program(vcpu, SRR1_PROGTM); + emulated = EMULATE_AGAIN; + break; + } + + tm_enable(); + if (sprn == SPRN_TFHAR) + mtspr(SPRN_TFHAR, spr_val); + else if (sprn == SPRN_TEXASR) + mtspr(SPRN_TEXASR, spr_val); + else + mtspr(SPRN_TFIAR, spr_val); + tm_disable(); + break; #endif #endif @@ -618,13 +953,25 @@ int kvmppc_core_emulate_mfspr_pr(struct kvm_vcpu *vcpu, int sprn, ulong *spr_val break; #ifdef CONFIG_PPC_TRANSACTIONAL_MEM case SPRN_TFHAR: - *spr_val = vcpu->arch.tfhar; - break; case SPRN_TEXASR: - *spr_val = vcpu->arch.texasr; - break; case SPRN_TFIAR: - *spr_val = vcpu->arch.tfiar; + if (!cpu_has_feature(CPU_FTR_TM)) + break; + + if (!(kvmppc_get_msr(vcpu) & MSR_TM)) { + kvmppc_trigger_fac_interrupt(vcpu, FSCR_TM_LG); + emulated = EMULATE_AGAIN; + break; + } + + tm_enable(); + if (sprn == SPRN_TFHAR) + *spr_val = mfspr(SPRN_TFHAR); + else if (sprn == SPRN_TEXASR) + *spr_val = mfspr(SPRN_TEXASR); + else if (sprn == SPRN_TFIAR) + *spr_val = mfspr(SPRN_TFIAR); + tm_disable(); break; #endif #endif diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index 8858ab8b6ca4..ee4a8854985e 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -123,6 +123,32 @@ static bool no_mixing_hpt_and_radix; static void kvmppc_end_cede(struct kvm_vcpu *vcpu); static int kvmppc_hv_setup_htab_rma(struct kvm_vcpu *vcpu); +/* + * RWMR values for POWER8. These control the rate at which PURR + * and SPURR count and should be set according to the number of + * online threads in the vcore being run. + */ +#define RWMR_RPA_P8_1THREAD 0x164520C62609AECA +#define RWMR_RPA_P8_2THREAD 0x7FFF2908450D8DA9 +#define RWMR_RPA_P8_3THREAD 0x164520C62609AECA +#define RWMR_RPA_P8_4THREAD 0x199A421245058DA9 +#define RWMR_RPA_P8_5THREAD 0x164520C62609AECA +#define RWMR_RPA_P8_6THREAD 0x164520C62609AECA +#define RWMR_RPA_P8_7THREAD 0x164520C62609AECA +#define RWMR_RPA_P8_8THREAD 0x164520C62609AECA + +static unsigned long p8_rwmr_values[MAX_SMT_THREADS + 1] = { + RWMR_RPA_P8_1THREAD, + RWMR_RPA_P8_1THREAD, + RWMR_RPA_P8_2THREAD, + RWMR_RPA_P8_3THREAD, + RWMR_RPA_P8_4THREAD, + RWMR_RPA_P8_5THREAD, + RWMR_RPA_P8_6THREAD, + RWMR_RPA_P8_7THREAD, + RWMR_RPA_P8_8THREAD, +}; + static inline struct kvm_vcpu *next_runnable_thread(struct kvmppc_vcore *vc, int *ip) { @@ -190,7 +216,7 @@ static void kvmppc_fast_vcpu_kick_hv(struct kvm_vcpu *vcpu) wqp = kvm_arch_vcpu_wq(vcpu); if (swq_has_sleeper(wqp)) { - swake_up(wqp); + swake_up_one(wqp); ++vcpu->stat.halt_wakeup; } @@ -371,13 +397,13 @@ static void kvmppc_dump_regs(struct kvm_vcpu *vcpu) pr_err("vcpu %p (%d):\n", vcpu, vcpu->vcpu_id); pr_err("pc = %.16lx msr = %.16llx trap = %x\n", - vcpu->arch.pc, vcpu->arch.shregs.msr, vcpu->arch.trap); + vcpu->arch.regs.nip, vcpu->arch.shregs.msr, vcpu->arch.trap); for (r = 0; r < 16; ++r) pr_err("r%2d = %.16lx r%d = %.16lx\n", r, kvmppc_get_gpr(vcpu, r), r+16, kvmppc_get_gpr(vcpu, r+16)); pr_err("ctr = %.16lx lr = %.16lx\n", - vcpu->arch.ctr, vcpu->arch.lr); + vcpu->arch.regs.ctr, vcpu->arch.regs.link); pr_err("srr0 = %.16llx srr1 = %.16llx\n", vcpu->arch.shregs.srr0, vcpu->arch.shregs.srr1); pr_err("sprg0 = %.16llx sprg1 = %.16llx\n", @@ -385,7 +411,7 @@ static void kvmppc_dump_regs(struct kvm_vcpu *vcpu) pr_err("sprg2 = %.16llx sprg3 = %.16llx\n", vcpu->arch.shregs.sprg2, vcpu->arch.shregs.sprg3); pr_err("cr = %.8x xer = %.16lx dsisr = %.8x\n", - vcpu->arch.cr, vcpu->arch.xer, vcpu->arch.shregs.dsisr); + vcpu->arch.cr, vcpu->arch.regs.xer, vcpu->arch.shregs.dsisr); pr_err("dar = 
%.16llx\n", vcpu->arch.shregs.dar); pr_err("fault dar = %.16lx dsisr = %.8x\n", vcpu->arch.fault_dar, vcpu->arch.fault_dsisr); @@ -1526,6 +1552,9 @@ static int kvmppc_get_one_reg_hv(struct kvm_vcpu *vcpu, u64 id, *val = get_reg_val(id, vcpu->arch.dec_expires + vcpu->arch.vcore->tb_offset); break; + case KVM_REG_PPC_ONLINE: + *val = get_reg_val(id, vcpu->arch.online); + break; default: r = -EINVAL; break; @@ -1757,6 +1786,14 @@ static int kvmppc_set_one_reg_hv(struct kvm_vcpu *vcpu, u64 id, vcpu->arch.dec_expires = set_reg_val(id, *val) - vcpu->arch.vcore->tb_offset; break; + case KVM_REG_PPC_ONLINE: + i = set_reg_val(id, *val); + if (i && !vcpu->arch.online) + atomic_inc(&vcpu->arch.vcore->online_count); + else if (!i && vcpu->arch.online) + atomic_dec(&vcpu->arch.vcore->online_count); + vcpu->arch.online = i; + break; default: r = -EINVAL; break; @@ -2850,6 +2887,25 @@ static noinline void kvmppc_run_core(struct kvmppc_vcore *vc) } } + /* + * On POWER8, set RWMR register. + * Since it only affects PURR and SPURR, it doesn't affect + * the host, so we don't save/restore the host value. + */ + if (is_power8) { + unsigned long rwmr_val = RWMR_RPA_P8_8THREAD; + int n_online = atomic_read(&vc->online_count); + + /* + * Use the 8-thread value if we're doing split-core + * or if the vcore's online count looks bogus. + */ + if (split == 1 && threads_per_subcore == MAX_SMT_THREADS && + n_online >= 1 && n_online <= MAX_SMT_THREADS) + rwmr_val = p8_rwmr_values[n_online]; + mtspr(SPRN_RWMR, rwmr_val); + } + /* Start all the threads */ active = 0; for (sub = 0; sub < core_info.n_subcores; ++sub) { @@ -2902,6 +2958,32 @@ static noinline void kvmppc_run_core(struct kvmppc_vcore *vc) for (sub = 0; sub < core_info.n_subcores; ++sub) spin_unlock(&core_info.vc[sub]->lock); + if (kvm_is_radix(vc->kvm)) { + int tmp = pcpu; + + /* + * Do we need to flush the process scoped TLB for the LPAR? + * + * On POWER9, individual threads can come in here, but the + * TLB is shared between the 4 threads in a core, hence + * invalidating on one thread invalidates for all. + * Thus we make all 4 threads use the same bit here. + * + * Hash must be flushed in realmode in order to use tlbiel. + */ + mtspr(SPRN_LPID, vc->kvm->arch.lpid); + isync(); + + if (cpu_has_feature(CPU_FTR_ARCH_300)) + tmp &= ~0x3UL; + + if (cpumask_test_cpu(tmp, &vc->kvm->arch.need_tlb_flush)) { + radix__local_flush_tlb_lpid_guest(vc->kvm->arch.lpid); + /* Clear the bit after the TLB flush */ + cpumask_clear_cpu(tmp, &vc->kvm->arch.need_tlb_flush); + } + } + /* * Interrupts will be enabled once we get into the guest, * so tell lockdep that we're about to enable interrupts. @@ -3106,7 +3188,7 @@ static void kvmppc_vcore_blocked(struct kvmppc_vcore *vc) } } - prepare_to_swait(&vc->wq, &wait, TASK_INTERRUPTIBLE); + prepare_to_swait_exclusive(&vc->wq, &wait, TASK_INTERRUPTIBLE); if (kvmppc_vcore_check_block(vc)) { finish_swait(&vc->wq, &wait); @@ -3229,7 +3311,7 @@ static int kvmppc_run_vcpu(struct kvm_run *kvm_run, struct kvm_vcpu *vcpu) kvmppc_start_thread(vcpu, vc); trace_kvm_guest_enter(vcpu); } else if (vc->vcore_state == VCORE_SLEEPING) { - swake_up(&vc->wq); + swake_up_one(&vc->wq); } } @@ -3356,6 +3438,15 @@ static int kvmppc_vcpu_run_hv(struct kvm_run *run, struct kvm_vcpu *vcpu) } #endif + /* + * Force online to 1 for the sake of old userspace which doesn't + * set it. 
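+ *
+ * A sketch of why this matters, assuming the POWER8 RWMR logic
+ * added above: the vcore's online_count indexes p8_rwmr_values[]
+ * when the core is run,
+ *
+ *	int n = atomic_read(&vc->online_count);
+ *	if (n >= 1 && n <= MAX_SMT_THREADS)
+ *		rwmr_val = p8_rwmr_values[n];
+ *
+ * so a vcpu that was never counted would skew the PURR/SPURR rates.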
+ */ + if (!vcpu->arch.online) { + atomic_inc(&vcpu->arch.vcore->online_count); + vcpu->arch.online = 1; + } + kvmppc_core_prepare_to_enter(vcpu); /* No need to go into the guest when all we'll do is come back out */ diff --git a/arch/powerpc/kvm/book3s_hv_builtin.c b/arch/powerpc/kvm/book3s_hv_builtin.c index de18299f92b7..d4a3f4da409b 100644 --- a/arch/powerpc/kvm/book3s_hv_builtin.c +++ b/arch/powerpc/kvm/book3s_hv_builtin.c @@ -18,6 +18,7 @@ #include #include +#include #include #include #include @@ -211,9 +212,9 @@ long kvmppc_h_random(struct kvm_vcpu *vcpu) /* Only need to do the expensive mfmsr() on radix */ if (kvm_is_radix(vcpu->kvm) && (mfmsr() & MSR_IR)) - r = powernv_get_random_long(&vcpu->arch.gpr[4]); + r = powernv_get_random_long(&vcpu->arch.regs.gpr[4]); else - r = powernv_get_random_real_mode(&vcpu->arch.gpr[4]); + r = powernv_get_random_real_mode(&vcpu->arch.regs.gpr[4]); if (r) return H_SUCCESS; @@ -562,7 +563,7 @@ unsigned long kvmppc_rm_h_xirr_x(struct kvm_vcpu *vcpu) { if (!kvmppc_xics_enabled(vcpu)) return H_TOO_HARD; - vcpu->arch.gpr[5] = get_tb(); + vcpu->arch.regs.gpr[5] = get_tb(); if (xive_enabled()) { if (is_rm()) return xive_rm_h_xirr(vcpu); @@ -633,7 +634,19 @@ int kvmppc_rm_h_eoi(struct kvm_vcpu *vcpu, unsigned long xirr) void kvmppc_bad_interrupt(struct pt_regs *regs) { - die("Bad interrupt in KVM entry/exit code", regs, SIGABRT); + /* + * 100 could happen at any time, 200 can happen due to invalid real + * address access for example (or any time due to a hardware problem). + */ + if (TRAP(regs) == 0x100) { + get_paca()->in_nmi++; + system_reset_exception(regs); + get_paca()->in_nmi--; + } else if (TRAP(regs) == 0x200) { + machine_check_exception(regs); + } else { + die("Bad interrupt in KVM entry/exit code", regs, SIGABRT); + } panic("Bad KVM trap"); } diff --git a/arch/powerpc/kvm/book3s_hv_interrupts.S b/arch/powerpc/kvm/book3s_hv_interrupts.S index 0e8493033288..82f2ff9410b6 100644 --- a/arch/powerpc/kvm/book3s_hv_interrupts.S +++ b/arch/powerpc/kvm/book3s_hv_interrupts.S @@ -137,7 +137,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_ARCH_300) /* * We return here in virtual mode after the guest exits * with something that we can't handle in real mode. - * Interrupts are enabled again at this point. + * Interrupts are still hard-disabled. */ /* diff --git a/arch/powerpc/kvm/book3s_hv_rm_mmu.c b/arch/powerpc/kvm/book3s_hv_rm_mmu.c index 78e6a392330f..1f22d9e977d4 100644 --- a/arch/powerpc/kvm/book3s_hv_rm_mmu.c +++ b/arch/powerpc/kvm/book3s_hv_rm_mmu.c @@ -418,7 +418,8 @@ long kvmppc_h_enter(struct kvm_vcpu *vcpu, unsigned long flags, long pte_index, unsigned long pteh, unsigned long ptel) { return kvmppc_do_h_enter(vcpu->kvm, flags, pte_index, pteh, ptel, - vcpu->arch.pgdir, true, &vcpu->arch.gpr[4]); + vcpu->arch.pgdir, true, + &vcpu->arch.regs.gpr[4]); } #ifdef __BIG_ENDIAN__ @@ -434,24 +435,6 @@ static inline int is_mmio_hpte(unsigned long v, unsigned long r) (HPTE_R_KEY_HI | HPTE_R_KEY_LO)); } -static inline int try_lock_tlbie(unsigned int *lock) -{ - unsigned int tmp, old; - unsigned int token = LOCK_TOKEN; - - asm volatile("1:lwarx %1,0,%2\n" - " cmpwi cr0,%1,0\n" - " bne 2f\n" - " stwcx. 
%3,0,%2\n" - " bne- 1b\n" - " isync\n" - "2:" - : "=&r" (tmp), "=&r" (old) - : "r" (lock), "r" (token) - : "cc", "memory"); - return old == 0; -} - static void do_tlbies(struct kvm *kvm, unsigned long *rbvalues, long npages, int global, bool need_sync) { @@ -463,8 +446,6 @@ static void do_tlbies(struct kvm *kvm, unsigned long *rbvalues, * the RS field, this is backwards-compatible with P7 and P8. */ if (global) { - while (!try_lock_tlbie(&kvm->arch.tlbie_lock)) - cpu_relax(); if (need_sync) asm volatile("ptesync" : : : "memory"); for (i = 0; i < npages; ++i) { @@ -483,7 +464,6 @@ static void do_tlbies(struct kvm *kvm, unsigned long *rbvalues, } asm volatile("eieio; tlbsync; ptesync" : : : "memory"); - kvm->arch.tlbie_lock = 0; } else { if (need_sync) asm volatile("ptesync" : : : "memory"); @@ -561,13 +541,13 @@ long kvmppc_h_remove(struct kvm_vcpu *vcpu, unsigned long flags, unsigned long pte_index, unsigned long avpn) { return kvmppc_do_h_remove(vcpu->kvm, flags, pte_index, avpn, - &vcpu->arch.gpr[4]); + &vcpu->arch.regs.gpr[4]); } long kvmppc_h_bulk_remove(struct kvm_vcpu *vcpu) { struct kvm *kvm = vcpu->kvm; - unsigned long *args = &vcpu->arch.gpr[4]; + unsigned long *args = &vcpu->arch.regs.gpr[4]; __be64 *hp, *hptes[4]; unsigned long tlbrb[4]; long int i, j, k, n, found, indexes[4]; @@ -787,8 +767,8 @@ long kvmppc_h_read(struct kvm_vcpu *vcpu, unsigned long flags, r = rev[i].guest_rpte | (r & (HPTE_R_R | HPTE_R_C)); r &= ~HPTE_GR_RESERVED; } - vcpu->arch.gpr[4 + i * 2] = v; - vcpu->arch.gpr[5 + i * 2] = r; + vcpu->arch.regs.gpr[4 + i * 2] = v; + vcpu->arch.regs.gpr[5 + i * 2] = r; } return H_SUCCESS; } @@ -834,7 +814,7 @@ long kvmppc_h_clear_ref(struct kvm_vcpu *vcpu, unsigned long flags, } } } - vcpu->arch.gpr[4] = gr; + vcpu->arch.regs.gpr[4] = gr; ret = H_SUCCESS; out: unlock_hpte(hpte, v & ~HPTE_V_HVLOCK); @@ -881,7 +861,7 @@ long kvmppc_h_clear_mod(struct kvm_vcpu *vcpu, unsigned long flags, kvmppc_set_dirty_from_hpte(kvm, v, gr); } } - vcpu->arch.gpr[4] = gr; + vcpu->arch.regs.gpr[4] = gr; ret = H_SUCCESS; out: unlock_hpte(hpte, v & ~HPTE_V_HVLOCK); diff --git a/arch/powerpc/kvm/book3s_hv_rm_xics.c b/arch/powerpc/kvm/book3s_hv_rm_xics.c index 2a862618f072..758d1d23215e 100644 --- a/arch/powerpc/kvm/book3s_hv_rm_xics.c +++ b/arch/powerpc/kvm/book3s_hv_rm_xics.c @@ -517,7 +517,7 @@ unsigned long xics_rm_h_xirr(struct kvm_vcpu *vcpu) } while (!icp_rm_try_update(icp, old_state, new_state)); /* Return the result in GPR4 */ - vcpu->arch.gpr[4] = xirr; + vcpu->arch.regs.gpr[4] = xirr; return check_too_hard(xics, icp); } diff --git a/arch/powerpc/kvm/book3s_hv_rmhandlers.S b/arch/powerpc/kvm/book3s_hv_rmhandlers.S index b97d261d3b89..153988d878e8 100644 --- a/arch/powerpc/kvm/book3s_hv_rmhandlers.S +++ b/arch/powerpc/kvm/book3s_hv_rmhandlers.S @@ -39,8 +39,6 @@ BEGIN_FTR_SECTION; \ extsw reg, reg; \ END_FTR_SECTION_IFCLR(CPU_FTR_ARCH_300) -#define VCPU_GPRS_TM(reg) (((reg) * ULONG_SIZE) + VCPU_GPR_TM) - /* Values in HSTATE_NAPPING(r13) */ #define NAPPING_CEDE 1 #define NAPPING_NOVCPU 2 @@ -639,6 +637,10 @@ kvmppc_hv_entry: /* Primary thread switches to guest partition. */ cmpwi r6,0 bne 10f + + /* Radix has already switched LPID and flushed core TLB */ + bne cr7, 22f + lwz r7,KVM_LPID(r9) BEGIN_FTR_SECTION ld r6,KVM_SDR1(r9) @@ -650,7 +652,7 @@ END_FTR_SECTION_IFCLR(CPU_FTR_ARCH_300) mtspr SPRN_LPID,r7 isync - /* See if we need to flush the TLB */ + /* See if we need to flush the TLB. 
Hash has to be done in RM */ lhz r6,PACAPACAINDEX(r13) /* test_bit(cpu, need_tlb_flush) */ BEGIN_FTR_SECTION /* @@ -677,15 +679,10 @@ END_FTR_SECTION_IFSET(CPU_FTR_ARCH_300) li r7,0x800 /* IS field = 0b10 */ ptesync li r0,0 /* RS for P9 version of tlbiel */ - bne cr7, 29f 28: tlbiel r7 /* On P9, rs=0, RIC=0, PRS=0, R=0 */ addi r7,r7,0x1000 bdnz 28b - b 30f -29: PPC_TLBIEL(7,0,2,1,1) /* for radix, RIC=2, PRS=1, R=1 */ - addi r7,r7,0x1000 - bdnz 29b -30: ptesync + ptesync 23: ldarx r7,0,r6 /* clear the bit after TLB flushed */ andc r7,r7,r8 stdcx. r7,0,r6 @@ -799,7 +796,10 @@ END_FTR_SECTION(CPU_FTR_TM | CPU_FTR_P9_TM_HV_ASSIST, 0) /* * NOTE THAT THIS TRASHES ALL NON-VOLATILE REGISTERS INCLUDING CR */ - bl kvmppc_restore_tm + mr r3, r4 + ld r4, VCPU_MSR(r3) + bl kvmppc_restore_tm_hv + ld r4, HSTATE_KVM_VCPU(r13) 91: #endif @@ -1783,7 +1783,10 @@ END_FTR_SECTION(CPU_FTR_TM | CPU_FTR_P9_TM_HV_ASSIST, 0) /* * NOTE THAT THIS TRASHES ALL NON-VOLATILE REGISTERS INCLUDING CR */ - bl kvmppc_save_tm + mr r3, r9 + ld r4, VCPU_MSR(r3) + bl kvmppc_save_tm_hv + ld r9, HSTATE_KVM_VCPU(r13) 91: #endif @@ -2686,8 +2689,9 @@ END_FTR_SECTION(CPU_FTR_TM | CPU_FTR_P9_TM_HV_ASSIST, 0) /* * NOTE THAT THIS TRASHES ALL NON-VOLATILE REGISTERS INCLUDING CR */ - ld r9, HSTATE_KVM_VCPU(r13) - bl kvmppc_save_tm + ld r3, HSTATE_KVM_VCPU(r13) + ld r4, VCPU_MSR(r3) + bl kvmppc_save_tm_hv 91: #endif @@ -2805,7 +2809,10 @@ END_FTR_SECTION(CPU_FTR_TM | CPU_FTR_P9_TM_HV_ASSIST, 0) /* * NOTE THAT THIS TRASHES ALL NON-VOLATILE REGISTERS INCLUDING CR */ - bl kvmppc_restore_tm + mr r3, r4 + ld r4, VCPU_MSR(r3) + bl kvmppc_restore_tm_hv + ld r4, HSTATE_KVM_VCPU(r13) 91: #endif @@ -3126,11 +3133,22 @@ END_FTR_SECTION_IFSET(CPU_FTR_ALTIVEC) #ifdef CONFIG_PPC_TRANSACTIONAL_MEM /* * Save transactional state and TM-related registers. - * Called with r9 pointing to the vcpu struct. + * Called with r3 pointing to the vcpu struct and r4 containing + * the guest MSR value. * This can modify all checkpointed registers, but - * restores r1, r2 and r9 (vcpu pointer) before exit. + * restores r1 and r2 before exit. */ -kvmppc_save_tm: +kvmppc_save_tm_hv: + /* See if we need to handle fake suspend mode */ +BEGIN_FTR_SECTION + b __kvmppc_save_tm +END_FTR_SECTION_IFCLR(CPU_FTR_P9_TM_HV_ASSIST) + + lbz r0, HSTATE_FAKE_SUSPEND(r13) /* Were we fake suspended? */ + cmpwi r0, 0 + beq __kvmppc_save_tm + + /* The following code handles the fake_suspend = 1 case */ mflr r0 std r0, PPC_LR_STKOFF(r1) stdu r1, -PPC_MIN_STKFRM(r1) @@ -3141,59 +3159,37 @@ kvmppc_save_tm: rldimi r8, r0, MSR_TM_LG, 63-MSR_TM_LG mtmsrd r8 - ld r5, VCPU_MSR(r9) - rldicl. r5, r5, 64 - MSR_TS_S_LG, 62 - beq 1f /* TM not active in guest. */ - - std r1, HSTATE_HOST_R1(r13) - li r3, TM_CAUSE_KVM_RESCHED - -BEGIN_FTR_SECTION - lbz r0, HSTATE_FAKE_SUSPEND(r13) /* Were we fake suspended? */ - cmpwi r0, 0 - beq 3f rldicl. r8, r8, 64 - MSR_TS_S_LG, 62 /* Did we actually hrfid? */ beq 4f -BEGIN_FTR_SECTION_NESTED(96) +BEGIN_FTR_SECTION bl pnv_power9_force_smt4_catch -END_FTR_SECTION_NESTED(CPU_FTR_P9_TM_XER_SO_BUG, CPU_FTR_P9_TM_XER_SO_BUG, 96) +END_FTR_SECTION_IFSET(CPU_FTR_P9_TM_XER_SO_BUG) nop - b 6f -3: - /* Emulation of the treclaim instruction needs TEXASR before treclaim */ - mfspr r6, SPRN_TEXASR - std r6, VCPU_ORIG_TEXASR(r9) -6: -END_FTR_SECTION_IFSET(CPU_FTR_P9_TM_HV_ASSIST) - /* Clear the MSR RI since r1, r13 are all going to be foobar. */ + std r1, HSTATE_HOST_R1(r13) + + /* Clear the MSR RI since r1, r13 may be foobar. 
*/ li r5, 0 mtmsrd r5, 1 - /* All GPRs are volatile at this point. */ + /* We have to treclaim here because that's the only way to do S->N */ + li r3, TM_CAUSE_KVM_RESCHED TRECLAIM(R3) - /* Temporarily store r13 and r9 so we have some regs to play with */ - SET_SCRATCH0(r13) - GET_PACA(r13) - std r9, PACATMSCRATCH(r13) - - /* If doing TM emulation on POWER9 DD2.2, check for fake suspend mode */ -BEGIN_FTR_SECTION - lbz r9, HSTATE_FAKE_SUSPEND(r13) - cmpwi r9, 0 - beq 2f /* * We were in fake suspend, so we are not going to save the * register state as the guest checkpointed state (since * we already have it), therefore we can now use any volatile GPR. */ - /* Reload stack pointer and TOC. */ + /* Reload PACA pointer, stack pointer and TOC. */ + GET_PACA(r13) ld r1, HSTATE_HOST_R1(r13) ld r2, PACATOC(r13) + /* Set MSR RI now we have r1 and r13 back. */ li r5, MSR_RI mtmsrd r5, 1 + HMT_MEDIUM ld r6, HSTATE_DSCR(r13) mtspr SPRN_DSCR, r6 @@ -3208,85 +3204,9 @@ END_FTR_SECTION_NESTED(CPU_FTR_P9_TM_XER_SO_BUG, CPU_FTR_P9_TM_XER_SO_BUG, 96) li r0, PSSCR_FAKE_SUSPEND andc r3, r3, r0 mtspr SPRN_PSSCR, r3 - ld r9, HSTATE_KVM_VCPU(r13) - /* Don't save TEXASR, use value from last exit in real suspend state */ - b 11f -2: -END_FTR_SECTION_IFSET(CPU_FTR_P9_TM_HV_ASSIST) + /* Don't save TEXASR, use value from last exit in real suspend state */ ld r9, HSTATE_KVM_VCPU(r13) - - /* Get a few more GPRs free. */ - std r29, VCPU_GPRS_TM(29)(r9) - std r30, VCPU_GPRS_TM(30)(r9) - std r31, VCPU_GPRS_TM(31)(r9) - - /* Save away PPR and DSCR soon so don't run with user values. */ - mfspr r31, SPRN_PPR - HMT_MEDIUM - mfspr r30, SPRN_DSCR - ld r29, HSTATE_DSCR(r13) - mtspr SPRN_DSCR, r29 - - /* Save all but r9, r13 & r29-r31 */ - reg = 0 - .rept 29 - .if (reg != 9) && (reg != 13) - std reg, VCPU_GPRS_TM(reg)(r9) - .endif - reg = reg + 1 - .endr - /* ... now save r13 */ - GET_SCRATCH0(r4) - std r4, VCPU_GPRS_TM(13)(r9) - /* ... and save r9 */ - ld r4, PACATMSCRATCH(r13) - std r4, VCPU_GPRS_TM(9)(r9) - - /* Reload stack pointer and TOC. */ - ld r1, HSTATE_HOST_R1(r13) - ld r2, PACATOC(r13) - - /* Set MSR RI now we have r1 and r13 back. */ - li r5, MSR_RI - mtmsrd r5, 1 - - /* Save away checkpinted SPRs. */ - std r31, VCPU_PPR_TM(r9) - std r30, VCPU_DSCR_TM(r9) - mflr r5 - mfcr r6 - mfctr r7 - mfspr r8, SPRN_AMR - mfspr r10, SPRN_TAR - mfxer r11 - std r5, VCPU_LR_TM(r9) - stw r6, VCPU_CR_TM(r9) - std r7, VCPU_CTR_TM(r9) - std r8, VCPU_AMR_TM(r9) - std r10, VCPU_TAR_TM(r9) - std r11, VCPU_XER_TM(r9) - - /* Restore r12 as trap number. */ - lwz r12, VCPU_TRAP(r9) - - /* Save FP/VSX. */ - addi r3, r9, VCPU_FPRS_TM - bl store_fp_state - addi r3, r9, VCPU_VRS_TM - bl store_vr_state - mfspr r6, SPRN_VRSAVE - stw r6, VCPU_VRSAVE_TM(r9) -1: - /* - * We need to save these SPRs after the treclaim so that the software - * error code is recorded correctly in the TEXASR. Also the user may - * change these outside of a transaction, so they must always be - * context switched. - */ - mfspr r7, SPRN_TEXASR - std r7, VCPU_TEXASR(r9) -11: mfspr r5, SPRN_TFHAR mfspr r6, SPRN_TFIAR std r5, VCPU_TFHAR(r9) @@ -3299,149 +3219,63 @@ END_FTR_SECTION_IFSET(CPU_FTR_P9_TM_HV_ASSIST) /* * Restore transactional state and TM-related registers. - * Called with r4 pointing to the vcpu struct. + * Called with r3 pointing to the vcpu struct + * and r4 containing the guest MSR value. * This potentially modifies all checkpointed registers. - * It restores r1, r2, r4 from the PACA. + * It restores r1 and r2 from the PACA. 
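+ *
+ * Call shape, as used by the entry/exit paths patched above:
+ *
+ *	mr	r3, r4			# r3 = vcpu
+ *	ld	r4, VCPU_MSR(r3)	# r4 = guest MSR
+ *	bl	kvmppc_restore_tm_hv
+ *	ld	r4, HSTATE_KVM_VCPU(r13)	# r4 was trashed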
*/ -kvmppc_restore_tm: +kvmppc_restore_tm_hv: + /* + * If we are doing TM emulation for the guest on a POWER9 DD2, + * then we don't actually do a trechkpt -- we either set up + * fake-suspend mode, or emulate a TM rollback. + */ +BEGIN_FTR_SECTION + b __kvmppc_restore_tm +END_FTR_SECTION_IFCLR(CPU_FTR_P9_TM_HV_ASSIST) mflr r0 std r0, PPC_LR_STKOFF(r1) - /* Turn on TM/FP/VSX/VMX so we can restore them. */ + li r0, 0 + stb r0, HSTATE_FAKE_SUSPEND(r13) + + /* Turn on TM so we can restore TM SPRs */ mfmsr r5 - li r6, MSR_TM >> 32 - sldi r6, r6, 32 - or r5, r5, r6 - ori r5, r5, MSR_FP - oris r5, r5, (MSR_VEC | MSR_VSX)@h + li r0, 1 + rldimi r5, r0, MSR_TM_LG, 63-MSR_TM_LG mtmsrd r5 /* * The user may change these outside of a transaction, so they must * always be context switched. */ - ld r5, VCPU_TFHAR(r4) - ld r6, VCPU_TFIAR(r4) - ld r7, VCPU_TEXASR(r4) + ld r5, VCPU_TFHAR(r3) + ld r6, VCPU_TFIAR(r3) + ld r7, VCPU_TEXASR(r3) mtspr SPRN_TFHAR, r5 mtspr SPRN_TFIAR, r6 mtspr SPRN_TEXASR, r7 - li r0, 0 - stb r0, HSTATE_FAKE_SUSPEND(r13) - ld r5, VCPU_MSR(r4) - rldicl. r5, r5, 64 - MSR_TS_S_LG, 62 + rldicl. r5, r4, 64 - MSR_TS_S_LG, 62 beqlr /* TM not active in guest */ - std r1, HSTATE_HOST_R1(r13) - /* Make sure the failure summary is set, otherwise we'll program check - * when we trechkpt. It's possible that this might have been not set - * on a kvmppc_set_one_reg() call but we shouldn't let this crash the - * host. - */ + /* Make sure the failure summary is set */ oris r7, r7, (TEXASR_FS)@h mtspr SPRN_TEXASR, r7 - /* - * If we are doing TM emulation for the guest on a POWER9 DD2, - * then we don't actually do a trechkpt -- we either set up - * fake-suspend mode, or emulate a TM rollback. - */ -BEGIN_FTR_SECTION - b .Ldo_tm_fake_load -END_FTR_SECTION_IFSET(CPU_FTR_P9_TM_HV_ASSIST) - - /* - * We need to load up the checkpointed state for the guest. - * We need to do this early as it will blow away any GPRs, VSRs and - * some SPRs. - */ - - mr r31, r4 - addi r3, r31, VCPU_FPRS_TM - bl load_fp_state - addi r3, r31, VCPU_VRS_TM - bl load_vr_state - mr r4, r31 - lwz r7, VCPU_VRSAVE_TM(r4) - mtspr SPRN_VRSAVE, r7 - - ld r5, VCPU_LR_TM(r4) - lwz r6, VCPU_CR_TM(r4) - ld r7, VCPU_CTR_TM(r4) - ld r8, VCPU_AMR_TM(r4) - ld r9, VCPU_TAR_TM(r4) - ld r10, VCPU_XER_TM(r4) - mtlr r5 - mtcr r6 - mtctr r7 - mtspr SPRN_AMR, r8 - mtspr SPRN_TAR, r9 - mtxer r10 - - /* - * Load up PPR and DSCR values but don't put them in the actual SPRs - * till the last moment to avoid running with userspace PPR and DSCR for - * too long. - */ - ld r29, VCPU_DSCR_TM(r4) - ld r30, VCPU_PPR_TM(r4) - - std r2, PACATMSCRATCH(r13) /* Save TOC */ - - /* Clear the MSR RI since r1, r13 are all going to be foobar. */ - li r5, 0 - mtmsrd r5, 1 - - /* Load GPRs r0-r28 */ - reg = 0 - .rept 29 - ld reg, VCPU_GPRS_TM(reg)(r31) - reg = reg + 1 - .endr - - mtspr SPRN_DSCR, r29 - mtspr SPRN_PPR, r30 - - /* Load final GPRs */ - ld 29, VCPU_GPRS_TM(29)(r31) - ld 30, VCPU_GPRS_TM(30)(r31) - ld 31, VCPU_GPRS_TM(31)(r31) - - /* TM checkpointed state is now setup. All GPRs are now volatile. */ - TRECHKPT - - /* Now let's get back the state we need. */ - HMT_MEDIUM - GET_PACA(r13) - ld r29, HSTATE_DSCR(r13) - mtspr SPRN_DSCR, r29 - ld r4, HSTATE_KVM_VCPU(r13) - ld r1, HSTATE_HOST_R1(r13) - ld r2, PACATMSCRATCH(r13) - - /* Set the MSR RI since we have our registers back. 
*/ - li r5, MSR_RI - mtmsrd r5, 1 -9: - ld r0, PPC_LR_STKOFF(r1) - mtlr r0 - blr - -.Ldo_tm_fake_load: cmpwi r5, 1 /* check for suspended state */ bgt 10f stb r5, HSTATE_FAKE_SUSPEND(r13) - b 9b /* and return */ + b 9f /* and return */ 10: stdu r1, -PPC_MIN_STKFRM(r1) /* guest is in transactional state, so simulate rollback */ - mr r3, r4 bl kvmhv_emulate_tm_rollback nop - ld r4, HSTATE_KVM_VCPU(r13) /* our vcpu pointer has been trashed */ addi r1, r1, PPC_MIN_STKFRM - b 9b -#endif +9: ld r0, PPC_LR_STKOFF(r1) + mtlr r0 + blr +#endif /* CONFIG_PPC_TRANSACTIONAL_MEM */ /* * We come here if we get any exception or interrupt while we are @@ -3572,6 +3406,8 @@ END_MMU_FTR_SECTION_IFSET(MMU_FTR_TYPE_RADIX) bcl 20, 31, .+4 5: mflr r3 addi r3, r3, 9f - 5b + li r4, -1 + rldimi r3, r4, 62, 0 /* ensure 0xc000000000000000 bits are set */ ld r4, PACAKMSR(r13) mtspr SPRN_SRR0, r3 mtspr SPRN_SRR1, r4 diff --git a/arch/powerpc/kvm/book3s_hv_tm.c b/arch/powerpc/kvm/book3s_hv_tm.c index bf710ad3a6d7..008285058f9b 100644 --- a/arch/powerpc/kvm/book3s_hv_tm.c +++ b/arch/powerpc/kvm/book3s_hv_tm.c @@ -19,7 +19,7 @@ static void emulate_tx_failure(struct kvm_vcpu *vcpu, u64 failure_cause) u64 texasr, tfiar; u64 msr = vcpu->arch.shregs.msr; - tfiar = vcpu->arch.pc & ~0x3ull; + tfiar = vcpu->arch.regs.nip & ~0x3ull; texasr = (failure_cause << 56) | TEXASR_ABORT | TEXASR_FS | TEXASR_EXACT; if (MSR_TM_SUSPENDED(vcpu->arch.shregs.msr)) texasr |= TEXASR_SUSP; @@ -57,8 +57,8 @@ int kvmhv_p9_tm_emulation(struct kvm_vcpu *vcpu) (newmsr & MSR_TM))); newmsr = sanitize_msr(newmsr); vcpu->arch.shregs.msr = newmsr; - vcpu->arch.cfar = vcpu->arch.pc - 4; - vcpu->arch.pc = vcpu->arch.shregs.srr0; + vcpu->arch.cfar = vcpu->arch.regs.nip - 4; + vcpu->arch.regs.nip = vcpu->arch.shregs.srr0; return RESUME_GUEST; case PPC_INST_RFEBB: @@ -90,8 +90,8 @@ int kvmhv_p9_tm_emulation(struct kvm_vcpu *vcpu) vcpu->arch.bescr = bescr; msr = (msr & ~MSR_TS_MASK) | MSR_TS_T; vcpu->arch.shregs.msr = msr; - vcpu->arch.cfar = vcpu->arch.pc - 4; - vcpu->arch.pc = vcpu->arch.ebbrr; + vcpu->arch.cfar = vcpu->arch.regs.nip - 4; + vcpu->arch.regs.nip = vcpu->arch.ebbrr; return RESUME_GUEST; case PPC_INST_MTMSRD: diff --git a/arch/powerpc/kvm/book3s_hv_tm_builtin.c b/arch/powerpc/kvm/book3s_hv_tm_builtin.c index d98ccfd2b88c..b2c7c6fca4f9 100644 --- a/arch/powerpc/kvm/book3s_hv_tm_builtin.c +++ b/arch/powerpc/kvm/book3s_hv_tm_builtin.c @@ -35,8 +35,8 @@ int kvmhv_p9_tm_emulation_early(struct kvm_vcpu *vcpu) return 0; newmsr = sanitize_msr(newmsr); vcpu->arch.shregs.msr = newmsr; - vcpu->arch.cfar = vcpu->arch.pc - 4; - vcpu->arch.pc = vcpu->arch.shregs.srr0; + vcpu->arch.cfar = vcpu->arch.regs.nip - 4; + vcpu->arch.regs.nip = vcpu->arch.shregs.srr0; return 1; case PPC_INST_RFEBB: @@ -58,8 +58,8 @@ int kvmhv_p9_tm_emulation_early(struct kvm_vcpu *vcpu) mtspr(SPRN_BESCR, bescr); msr = (msr & ~MSR_TS_MASK) | MSR_TS_T; vcpu->arch.shregs.msr = msr; - vcpu->arch.cfar = vcpu->arch.pc - 4; - vcpu->arch.pc = mfspr(SPRN_EBBRR); + vcpu->arch.cfar = vcpu->arch.regs.nip - 4; + vcpu->arch.regs.nip = mfspr(SPRN_EBBRR); return 1; case PPC_INST_MTMSRD: @@ -103,7 +103,7 @@ int kvmhv_p9_tm_emulation_early(struct kvm_vcpu *vcpu) void kvmhv_emulate_tm_rollback(struct kvm_vcpu *vcpu) { vcpu->arch.shregs.msr &= ~MSR_TS_MASK; /* go to N state */ - vcpu->arch.pc = vcpu->arch.tfhar; + vcpu->arch.regs.nip = vcpu->arch.tfhar; copy_from_checkpoint(vcpu); vcpu->arch.cr = (vcpu->arch.cr & 0x0fffffff) | 0xa0000000; } diff --git a/arch/powerpc/kvm/book3s_pr.c 
b/arch/powerpc/kvm/book3s_pr.c index d3f304d06adf..c3b8006f0eac 100644 --- a/arch/powerpc/kvm/book3s_pr.c +++ b/arch/powerpc/kvm/book3s_pr.c @@ -42,6 +42,8 @@ #include #include #include +#include +#include #include "book3s.h" @@ -53,7 +55,9 @@ static int kvmppc_handle_ext(struct kvm_vcpu *vcpu, unsigned int exit_nr, ulong msr); -static void kvmppc_giveup_fac(struct kvm_vcpu *vcpu, ulong fac); +#ifdef CONFIG_PPC_BOOK3S_64 +static int kvmppc_handle_fac(struct kvm_vcpu *vcpu, ulong fac); +#endif /* Some compatibility defines */ #ifdef CONFIG_PPC_BOOK3S_32 @@ -114,6 +118,8 @@ static void kvmppc_core_vcpu_load_pr(struct kvm_vcpu *vcpu, int cpu) if (kvmppc_is_split_real(vcpu)) kvmppc_fixup_split_real(vcpu); + + kvmppc_restore_tm_pr(vcpu); } static void kvmppc_core_vcpu_put_pr(struct kvm_vcpu *vcpu) @@ -133,6 +139,7 @@ static void kvmppc_core_vcpu_put_pr(struct kvm_vcpu *vcpu) kvmppc_giveup_ext(vcpu, MSR_FP | MSR_VEC | MSR_VSX); kvmppc_giveup_fac(vcpu, FSCR_TAR_LG); + kvmppc_save_tm_pr(vcpu); /* Enable AIL if supported */ if (cpu_has_feature(CPU_FTR_HVMODE) && @@ -147,25 +154,25 @@ void kvmppc_copy_to_svcpu(struct kvm_vcpu *vcpu) { struct kvmppc_book3s_shadow_vcpu *svcpu = svcpu_get(vcpu); - svcpu->gpr[0] = vcpu->arch.gpr[0]; - svcpu->gpr[1] = vcpu->arch.gpr[1]; - svcpu->gpr[2] = vcpu->arch.gpr[2]; - svcpu->gpr[3] = vcpu->arch.gpr[3]; - svcpu->gpr[4] = vcpu->arch.gpr[4]; - svcpu->gpr[5] = vcpu->arch.gpr[5]; - svcpu->gpr[6] = vcpu->arch.gpr[6]; - svcpu->gpr[7] = vcpu->arch.gpr[7]; - svcpu->gpr[8] = vcpu->arch.gpr[8]; - svcpu->gpr[9] = vcpu->arch.gpr[9]; - svcpu->gpr[10] = vcpu->arch.gpr[10]; - svcpu->gpr[11] = vcpu->arch.gpr[11]; - svcpu->gpr[12] = vcpu->arch.gpr[12]; - svcpu->gpr[13] = vcpu->arch.gpr[13]; + svcpu->gpr[0] = vcpu->arch.regs.gpr[0]; + svcpu->gpr[1] = vcpu->arch.regs.gpr[1]; + svcpu->gpr[2] = vcpu->arch.regs.gpr[2]; + svcpu->gpr[3] = vcpu->arch.regs.gpr[3]; + svcpu->gpr[4] = vcpu->arch.regs.gpr[4]; + svcpu->gpr[5] = vcpu->arch.regs.gpr[5]; + svcpu->gpr[6] = vcpu->arch.regs.gpr[6]; + svcpu->gpr[7] = vcpu->arch.regs.gpr[7]; + svcpu->gpr[8] = vcpu->arch.regs.gpr[8]; + svcpu->gpr[9] = vcpu->arch.regs.gpr[9]; + svcpu->gpr[10] = vcpu->arch.regs.gpr[10]; + svcpu->gpr[11] = vcpu->arch.regs.gpr[11]; + svcpu->gpr[12] = vcpu->arch.regs.gpr[12]; + svcpu->gpr[13] = vcpu->arch.regs.gpr[13]; svcpu->cr = vcpu->arch.cr; - svcpu->xer = vcpu->arch.xer; - svcpu->ctr = vcpu->arch.ctr; - svcpu->lr = vcpu->arch.lr; - svcpu->pc = vcpu->arch.pc; + svcpu->xer = vcpu->arch.regs.xer; + svcpu->ctr = vcpu->arch.regs.ctr; + svcpu->lr = vcpu->arch.regs.link; + svcpu->pc = vcpu->arch.regs.nip; #ifdef CONFIG_PPC_BOOK3S_64 svcpu->shadow_fscr = vcpu->arch.shadow_fscr; #endif @@ -182,10 +189,45 @@ void kvmppc_copy_to_svcpu(struct kvm_vcpu *vcpu) svcpu_put(svcpu); } +static void kvmppc_recalc_shadow_msr(struct kvm_vcpu *vcpu) +{ + ulong guest_msr = kvmppc_get_msr(vcpu); + ulong smsr = guest_msr; + + /* Guest MSR values */ +#ifdef CONFIG_PPC_TRANSACTIONAL_MEM + smsr &= MSR_FE0 | MSR_FE1 | MSR_SF | MSR_SE | MSR_BE | MSR_LE | + MSR_TM | MSR_TS_MASK; +#else + smsr &= MSR_FE0 | MSR_FE1 | MSR_SF | MSR_SE | MSR_BE | MSR_LE; +#endif + /* Process MSR values */ + smsr |= MSR_ME | MSR_RI | MSR_IR | MSR_DR | MSR_PR | MSR_EE; + /* External providers the guest reserved */ + smsr |= (guest_msr & vcpu->arch.guest_owned_ext); + /* 64-bit Process MSR values */ +#ifdef CONFIG_PPC_BOOK3S_64 + smsr |= MSR_ISF | MSR_HV; +#endif +#ifdef CONFIG_PPC_TRANSACTIONAL_MEM + /* + * in guest privileged state, we want to fail all TM transactions. 
+ * So disable the MSR TM bit so that all tbegin. instructions will + * trap into the host. + */ + if (!(guest_msr & MSR_PR)) + smsr &= ~MSR_TM; +#endif + vcpu->arch.shadow_msr = smsr; +} + /* Copy data touched by real-mode code from shadow vcpu back to vcpu */ void kvmppc_copy_from_svcpu(struct kvm_vcpu *vcpu) { struct kvmppc_book3s_shadow_vcpu *svcpu = svcpu_get(vcpu); +#ifdef CONFIG_PPC_TRANSACTIONAL_MEM + ulong old_msr; +#endif /* * Maybe we were already preempted and synced the svcpu from @@ -194,25 +236,25 @@ void kvmppc_copy_from_svcpu(struct kvm_vcpu *vcpu) if (!svcpu->in_use) goto out; - vcpu->arch.gpr[0] = svcpu->gpr[0]; - vcpu->arch.gpr[1] = svcpu->gpr[1]; - vcpu->arch.gpr[2] = svcpu->gpr[2]; - vcpu->arch.gpr[3] = svcpu->gpr[3]; - vcpu->arch.gpr[4] = svcpu->gpr[4]; - vcpu->arch.gpr[5] = svcpu->gpr[5]; - vcpu->arch.gpr[6] = svcpu->gpr[6]; - vcpu->arch.gpr[7] = svcpu->gpr[7]; - vcpu->arch.gpr[8] = svcpu->gpr[8]; - vcpu->arch.gpr[9] = svcpu->gpr[9]; - vcpu->arch.gpr[10] = svcpu->gpr[10]; - vcpu->arch.gpr[11] = svcpu->gpr[11]; - vcpu->arch.gpr[12] = svcpu->gpr[12]; - vcpu->arch.gpr[13] = svcpu->gpr[13]; + vcpu->arch.regs.gpr[0] = svcpu->gpr[0]; + vcpu->arch.regs.gpr[1] = svcpu->gpr[1]; + vcpu->arch.regs.gpr[2] = svcpu->gpr[2]; + vcpu->arch.regs.gpr[3] = svcpu->gpr[3]; + vcpu->arch.regs.gpr[4] = svcpu->gpr[4]; + vcpu->arch.regs.gpr[5] = svcpu->gpr[5]; + vcpu->arch.regs.gpr[6] = svcpu->gpr[6]; + vcpu->arch.regs.gpr[7] = svcpu->gpr[7]; + vcpu->arch.regs.gpr[8] = svcpu->gpr[8]; + vcpu->arch.regs.gpr[9] = svcpu->gpr[9]; + vcpu->arch.regs.gpr[10] = svcpu->gpr[10]; + vcpu->arch.regs.gpr[11] = svcpu->gpr[11]; + vcpu->arch.regs.gpr[12] = svcpu->gpr[12]; + vcpu->arch.regs.gpr[13] = svcpu->gpr[13]; vcpu->arch.cr = svcpu->cr; - vcpu->arch.xer = svcpu->xer; - vcpu->arch.ctr = svcpu->ctr; - vcpu->arch.lr = svcpu->lr; - vcpu->arch.pc = svcpu->pc; + vcpu->arch.regs.xer = svcpu->xer; + vcpu->arch.regs.ctr = svcpu->ctr; + vcpu->arch.regs.link = svcpu->lr; + vcpu->arch.regs.nip = svcpu->pc; vcpu->arch.shadow_srr1 = svcpu->shadow_srr1; vcpu->arch.fault_dar = svcpu->fault_dar; vcpu->arch.fault_dsisr = svcpu->fault_dsisr; @@ -228,12 +270,116 @@ void kvmppc_copy_from_svcpu(struct kvm_vcpu *vcpu) to_book3s(vcpu)->vtb += get_vtb() - vcpu->arch.entry_vtb; if (cpu_has_feature(CPU_FTR_ARCH_207S)) vcpu->arch.ic += mfspr(SPRN_IC) - vcpu->arch.entry_ic; + +#ifdef CONFIG_PPC_TRANSACTIONAL_MEM + /* + * Unlike other MSR bits, the MSR[TS] bits can be changed by the guest + * without notifying the host: they are modified by unprivileged + * instructions like "tbegin"/"tend"/"tresume"/"tsuspend" in a PR KVM + * guest. + * + * It is necessary to sync here to calculate a correct shadow_msr. + * + * A privileged guest's tbegin always fails at present, so we + * only need to take care of problem-state guests.
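+ *
+ * For example, a problem-state guest running
+ *
+ *	tbegin.		(TS: 00 -> 10)
+ *	...
+ *	tend.		(TS: 10 -> 00)
+ *
+ * changes MSR[TS] without any privileged trap, so the TS bits
+ * latched in shadow_srr1 at exit time are the only authoritative
+ * copy.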
+ */ + old_msr = kvmppc_get_msr(vcpu); + if (unlikely((old_msr & MSR_PR) && + (vcpu->arch.shadow_srr1 & (MSR_TS_MASK)) != + (old_msr & (MSR_TS_MASK)))) { + old_msr &= ~(MSR_TS_MASK); + old_msr |= (vcpu->arch.shadow_srr1 & (MSR_TS_MASK)); + kvmppc_set_msr_fast(vcpu, old_msr); + kvmppc_recalc_shadow_msr(vcpu); + } +#endif + svcpu->in_use = false; out: svcpu_put(svcpu); } +#ifdef CONFIG_PPC_TRANSACTIONAL_MEM +void kvmppc_save_tm_sprs(struct kvm_vcpu *vcpu) +{ + tm_enable(); + vcpu->arch.tfhar = mfspr(SPRN_TFHAR); + vcpu->arch.texasr = mfspr(SPRN_TEXASR); + vcpu->arch.tfiar = mfspr(SPRN_TFIAR); + tm_disable(); +} + +void kvmppc_restore_tm_sprs(struct kvm_vcpu *vcpu) +{ + tm_enable(); + mtspr(SPRN_TFHAR, vcpu->arch.tfhar); + mtspr(SPRN_TEXASR, vcpu->arch.texasr); + mtspr(SPRN_TFIAR, vcpu->arch.tfiar); + tm_disable(); +} + +/* Load up the math bits that are enabled per kvmppc_get_msr() but not yet + * enabled in hardware. + */ +static void kvmppc_handle_lost_math_exts(struct kvm_vcpu *vcpu) +{ + ulong exit_nr; + ulong ext_diff = (kvmppc_get_msr(vcpu) & ~vcpu->arch.guest_owned_ext) & + (MSR_FP | MSR_VEC | MSR_VSX); + + if (!ext_diff) + return; + + if (ext_diff == MSR_FP) + exit_nr = BOOK3S_INTERRUPT_FP_UNAVAIL; + else if (ext_diff == MSR_VEC) + exit_nr = BOOK3S_INTERRUPT_ALTIVEC; + else + exit_nr = BOOK3S_INTERRUPT_VSX; + + kvmppc_handle_ext(vcpu, exit_nr, ext_diff); +} + +void kvmppc_save_tm_pr(struct kvm_vcpu *vcpu) +{ + if (!(MSR_TM_ACTIVE(kvmppc_get_msr(vcpu)))) { + kvmppc_save_tm_sprs(vcpu); + return; + } + + kvmppc_giveup_fac(vcpu, FSCR_TAR_LG); + kvmppc_giveup_ext(vcpu, MSR_VSX); + + preempt_disable(); + _kvmppc_save_tm_pr(vcpu, mfmsr()); + preempt_enable(); +} + +void kvmppc_restore_tm_pr(struct kvm_vcpu *vcpu) +{ + if (!MSR_TM_ACTIVE(kvmppc_get_msr(vcpu))) { + kvmppc_restore_tm_sprs(vcpu); + if (kvmppc_get_msr(vcpu) & MSR_TM) { + kvmppc_handle_lost_math_exts(vcpu); + if (vcpu->arch.fscr & FSCR_TAR) + kvmppc_handle_fac(vcpu, FSCR_TAR_LG); + } + return; + } + + preempt_disable(); + _kvmppc_restore_tm_pr(vcpu, kvmppc_get_msr(vcpu)); + preempt_enable(); + + if (kvmppc_get_msr(vcpu) & MSR_TM) { + kvmppc_handle_lost_math_exts(vcpu); + if (vcpu->arch.fscr & FSCR_TAR) + kvmppc_handle_fac(vcpu, FSCR_TAR_LG); + } +} +#endif + static int kvmppc_core_check_requests_pr(struct kvm_vcpu *vcpu) { int r = 1; /* Indicate we want to get back into the guest */ @@ -306,32 +452,29 @@ static void kvm_set_spte_hva_pr(struct kvm *kvm, unsigned long hva, pte_t pte) /*****************************************/ -static void kvmppc_recalc_shadow_msr(struct kvm_vcpu *vcpu) -{ - ulong guest_msr = kvmppc_get_msr(vcpu); - ulong smsr = guest_msr; - - /* Guest MSR values */ - smsr &= MSR_FE0 | MSR_FE1 | MSR_SF | MSR_SE | MSR_BE | MSR_LE; - /* Process MSR values */ - smsr |= MSR_ME | MSR_RI | MSR_IR | MSR_DR | MSR_PR | MSR_EE; - /* External providers the guest reserved */ - smsr |= (guest_msr & vcpu->arch.guest_owned_ext); - /* 64-bit Process MSR values */ -#ifdef CONFIG_PPC_BOOK3S_64 - smsr |= MSR_ISF | MSR_HV; -#endif - vcpu->arch.shadow_msr = smsr; -} - static void kvmppc_set_msr_pr(struct kvm_vcpu *vcpu, u64 msr) { - ulong old_msr = kvmppc_get_msr(vcpu); + ulong old_msr; + + /* For PAPR guest, make sure MSR reflects guest mode */ + if (vcpu->arch.papr_enabled) + msr = (msr & ~MSR_HV) | MSR_ME; #ifdef EXIT_DEBUG printk(KERN_INFO "KVM: Set MSR to 0x%llx\n", msr); #endif +#ifdef CONFIG_PPC_TRANSACTIONAL_MEM + /* We should never let the guest MSR reach TS=10 && PR=0, + * since we always fail transactions in guest privileged + * state.
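+ *
+ * (For reference, the MSR[TS] encoding: 00 = non-transactional,
+ * 01 = suspended, 10 = transactional; MSR_TM_TRANSACTIONAL()
+ * tests for the 10 case.)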
+ */ + if (!(msr & MSR_PR) && MSR_TM_TRANSACTIONAL(msr)) + kvmppc_emulate_tabort(vcpu, + TM_CAUSE_KVM_FAC_UNAV | TM_CAUSE_PERSISTENT); +#endif + + old_msr = kvmppc_get_msr(vcpu); msr &= to_book3s(vcpu)->msr_mask; kvmppc_set_msr_fast(vcpu, msr); kvmppc_recalc_shadow_msr(vcpu); @@ -387,6 +530,11 @@ static void kvmppc_set_msr_pr(struct kvm_vcpu *vcpu, u64 msr) /* Preload FPU if it's enabled */ if (kvmppc_get_msr(vcpu) & MSR_FP) kvmppc_handle_ext(vcpu, BOOK3S_INTERRUPT_FP_UNAVAIL, MSR_FP); + +#ifdef CONFIG_PPC_TRANSACTIONAL_MEM + if (kvmppc_get_msr(vcpu) & MSR_TM) + kvmppc_handle_lost_math_exts(vcpu); +#endif } void kvmppc_set_pvr_pr(struct kvm_vcpu *vcpu, u32 pvr) @@ -584,24 +732,20 @@ int kvmppc_handle_pagefault(struct kvm_run *run, struct kvm_vcpu *vcpu, pte.may_execute = !data; } - if (page_found == -ENOENT) { - /* Page not found in guest PTE entries */ - u64 ssrr1 = vcpu->arch.shadow_srr1; - u64 msr = kvmppc_get_msr(vcpu); - kvmppc_set_dar(vcpu, kvmppc_get_fault_dar(vcpu)); - kvmppc_set_dsisr(vcpu, vcpu->arch.fault_dsisr); - kvmppc_set_msr_fast(vcpu, msr | (ssrr1 & 0xf8000000ULL)); - kvmppc_book3s_queue_irqprio(vcpu, vec); - } else if (page_found == -EPERM) { - /* Storage protection */ - u32 dsisr = vcpu->arch.fault_dsisr; - u64 ssrr1 = vcpu->arch.shadow_srr1; - u64 msr = kvmppc_get_msr(vcpu); - kvmppc_set_dar(vcpu, kvmppc_get_fault_dar(vcpu)); - dsisr = (dsisr & ~DSISR_NOHPTE) | DSISR_PROTFAULT; - kvmppc_set_dsisr(vcpu, dsisr); - kvmppc_set_msr_fast(vcpu, msr | (ssrr1 & 0xf8000000ULL)); - kvmppc_book3s_queue_irqprio(vcpu, vec); + if (page_found == -ENOENT || page_found == -EPERM) { + /* Page not found in guest PTE entries, or protection fault */ + u64 flags; + + if (page_found == -EPERM) + flags = DSISR_PROTFAULT; + else + flags = DSISR_NOHPTE; + if (data) { + flags |= vcpu->arch.fault_dsisr & DSISR_ISSTORE; + kvmppc_core_queue_data_storage(vcpu, eaddr, flags); + } else { + kvmppc_core_queue_inst_storage(vcpu, flags); + } } else if (page_found == -EINVAL) { /* Page not found in guest SLB */ kvmppc_set_dar(vcpu, kvmppc_get_fault_dar(vcpu)); @@ -683,7 +827,7 @@ void kvmppc_giveup_ext(struct kvm_vcpu *vcpu, ulong msr) } /* Give up facility (TAR / EBB / DSCR) */ -static void kvmppc_giveup_fac(struct kvm_vcpu *vcpu, ulong fac) +void kvmppc_giveup_fac(struct kvm_vcpu *vcpu, ulong fac) { #ifdef CONFIG_PPC_BOOK3S_64 if (!(vcpu->arch.shadow_fscr & (1ULL << fac))) { @@ -802,7 +946,7 @@ static void kvmppc_handle_lost_ext(struct kvm_vcpu *vcpu) #ifdef CONFIG_PPC_BOOK3S_64 -static void kvmppc_trigger_fac_interrupt(struct kvm_vcpu *vcpu, ulong fac) +void kvmppc_trigger_fac_interrupt(struct kvm_vcpu *vcpu, ulong fac) { /* Inject the Interrupt Cause field and trigger a guest interrupt */ vcpu->arch.fscr &= ~(0xffULL << 56); @@ -864,6 +1008,18 @@ static int kvmppc_handle_fac(struct kvm_vcpu *vcpu, ulong fac) break; } +#ifdef CONFIG_PPC_TRANSACTIONAL_MEM + /* Since we disabled MSR_TM in privileged state, an mfspr of a TM SPR + * can trigger a TM facility-unavailable interrupt. In this case, the + * emulation is handled by kvmppc_emulate_fac(), which finally invokes + * kvmppc_emulate_mfspr(). But note that the mfspr's RT can be an NV + * register, so we need to restore those NV regs to reflect + * the update.
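+ *
+ * (The _NV return variants make the exit path reload the
+ * non-volatile GPRs (r14-r31) into the guest, so an mfspr whose
+ * RT is a non-volatile register actually takes effect.)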
+ */ + if ((fac == FSCR_TM_LG) && !(kvmppc_get_msr(vcpu) & MSR_PR)) + return RESUME_GUEST_NV; +#endif + return RESUME_GUEST; } @@ -872,7 +1028,12 @@ void kvmppc_set_fscr(struct kvm_vcpu *vcpu, u64 fscr) if ((vcpu->arch.fscr & FSCR_TAR) && !(fscr & FSCR_TAR)) { /* TAR got dropped, drop it in shadow too */ kvmppc_giveup_fac(vcpu, FSCR_TAR_LG); + } else if (!(vcpu->arch.fscr & FSCR_TAR) && (fscr & FSCR_TAR)) { + vcpu->arch.fscr = fscr; + kvmppc_handle_fac(vcpu, FSCR_TAR_LG); + return; } + vcpu->arch.fscr = fscr; } #endif @@ -1017,10 +1178,8 @@ int kvmppc_handle_exit_pr(struct kvm_run *run, struct kvm_vcpu *vcpu, kvmppc_mmu_pte_flush(vcpu, kvmppc_get_pc(vcpu), ~0xFFFUL); r = RESUME_GUEST; } else { - u64 msr = kvmppc_get_msr(vcpu); - msr |= shadow_srr1 & 0x58000000; - kvmppc_set_msr_fast(vcpu, msr); - kvmppc_book3s_queue_irqprio(vcpu, exit_nr); + kvmppc_core_queue_inst_storage(vcpu, + shadow_srr1 & 0x58000000); r = RESUME_GUEST; } break; @@ -1059,9 +1218,7 @@ int kvmppc_handle_exit_pr(struct kvm_run *run, struct kvm_vcpu *vcpu, r = kvmppc_handle_pagefault(run, vcpu, dar, exit_nr); srcu_read_unlock(&vcpu->kvm->srcu, idx); } else { - kvmppc_set_dar(vcpu, dar); - kvmppc_set_dsisr(vcpu, fault_dsisr); - kvmppc_book3s_queue_irqprio(vcpu, exit_nr); + kvmppc_core_queue_data_storage(vcpu, dar, fault_dsisr); r = RESUME_GUEST; } break; @@ -1092,10 +1249,13 @@ int kvmppc_handle_exit_pr(struct kvm_run *run, struct kvm_vcpu *vcpu, case BOOK3S_INTERRUPT_EXTERNAL: case BOOK3S_INTERRUPT_EXTERNAL_LEVEL: case BOOK3S_INTERRUPT_EXTERNAL_HV: + case BOOK3S_INTERRUPT_H_VIRT: vcpu->stat.ext_intr_exits++; r = RESUME_GUEST; break; + case BOOK3S_INTERRUPT_HMI: case BOOK3S_INTERRUPT_PERFMON: + case BOOK3S_INTERRUPT_SYSTEM_RESET: r = RESUME_GUEST; break; case BOOK3S_INTERRUPT_PROGRAM: @@ -1225,8 +1385,7 @@ int kvmppc_handle_exit_pr(struct kvm_run *run, struct kvm_vcpu *vcpu, } #ifdef CONFIG_PPC_BOOK3S_64 case BOOK3S_INTERRUPT_FAC_UNAVAIL: - kvmppc_handle_fac(vcpu, vcpu->arch.shadow_fscr >> 56); - r = RESUME_GUEST; + r = kvmppc_handle_fac(vcpu, vcpu->arch.shadow_fscr >> 56); break; #endif case BOOK3S_INTERRUPT_MACHINE_CHECK: @@ -1379,6 +1538,73 @@ static int kvmppc_get_one_reg_pr(struct kvm_vcpu *vcpu, u64 id, else *val = get_reg_val(id, 0); break; +#ifdef CONFIG_PPC_TRANSACTIONAL_MEM + case KVM_REG_PPC_TFHAR: + *val = get_reg_val(id, vcpu->arch.tfhar); + break; + case KVM_REG_PPC_TFIAR: + *val = get_reg_val(id, vcpu->arch.tfiar); + break; + case KVM_REG_PPC_TEXASR: + *val = get_reg_val(id, vcpu->arch.texasr); + break; + case KVM_REG_PPC_TM_GPR0 ... KVM_REG_PPC_TM_GPR31: + *val = get_reg_val(id, + vcpu->arch.gpr_tm[id-KVM_REG_PPC_TM_GPR0]); + break; + case KVM_REG_PPC_TM_VSR0 ... 
KVM_REG_PPC_TM_VSR63: + { + int i, j; + + i = id - KVM_REG_PPC_TM_VSR0; + if (i < 32) + for (j = 0; j < TS_FPRWIDTH; j++) + val->vsxval[j] = vcpu->arch.fp_tm.fpr[i][j]; + else { + if (cpu_has_feature(CPU_FTR_ALTIVEC)) + val->vval = vcpu->arch.vr_tm.vr[i-32]; + else + r = -ENXIO; + } + break; + } + case KVM_REG_PPC_TM_CR: + *val = get_reg_val(id, vcpu->arch.cr_tm); + break; + case KVM_REG_PPC_TM_XER: + *val = get_reg_val(id, vcpu->arch.xer_tm); + break; + case KVM_REG_PPC_TM_LR: + *val = get_reg_val(id, vcpu->arch.lr_tm); + break; + case KVM_REG_PPC_TM_CTR: + *val = get_reg_val(id, vcpu->arch.ctr_tm); + break; + case KVM_REG_PPC_TM_FPSCR: + *val = get_reg_val(id, vcpu->arch.fp_tm.fpscr); + break; + case KVM_REG_PPC_TM_AMR: + *val = get_reg_val(id, vcpu->arch.amr_tm); + break; + case KVM_REG_PPC_TM_PPR: + *val = get_reg_val(id, vcpu->arch.ppr_tm); + break; + case KVM_REG_PPC_TM_VRSAVE: + *val = get_reg_val(id, vcpu->arch.vrsave_tm); + break; + case KVM_REG_PPC_TM_VSCR: + if (cpu_has_feature(CPU_FTR_ALTIVEC)) + *val = get_reg_val(id, vcpu->arch.vr_tm.vscr.u[3]); + else + r = -ENXIO; + break; + case KVM_REG_PPC_TM_DSCR: + *val = get_reg_val(id, vcpu->arch.dscr_tm); + break; + case KVM_REG_PPC_TM_TAR: + *val = get_reg_val(id, vcpu->arch.tar_tm); + break; +#endif default: r = -EINVAL; break; @@ -1412,6 +1638,72 @@ static int kvmppc_set_one_reg_pr(struct kvm_vcpu *vcpu, u64 id, case KVM_REG_PPC_LPCR_64: kvmppc_set_lpcr_pr(vcpu, set_reg_val(id, *val)); break; +#ifdef CONFIG_PPC_TRANSACTIONAL_MEM + case KVM_REG_PPC_TFHAR: + vcpu->arch.tfhar = set_reg_val(id, *val); + break; + case KVM_REG_PPC_TFIAR: + vcpu->arch.tfiar = set_reg_val(id, *val); + break; + case KVM_REG_PPC_TEXASR: + vcpu->arch.texasr = set_reg_val(id, *val); + break; + case KVM_REG_PPC_TM_GPR0 ... KVM_REG_PPC_TM_GPR31: + vcpu->arch.gpr_tm[id - KVM_REG_PPC_TM_GPR0] = + set_reg_val(id, *val); + break; + case KVM_REG_PPC_TM_VSR0 ... KVM_REG_PPC_TM_VSR63: + { + int i, j; + + i = id - KVM_REG_PPC_TM_VSR0; + if (i < 32) + for (j = 0; j < TS_FPRWIDTH; j++) + vcpu->arch.fp_tm.fpr[i][j] = val->vsxval[j]; + else + if (cpu_has_feature(CPU_FTR_ALTIVEC)) + vcpu->arch.vr_tm.vr[i-32] = val->vval; + else + r = -ENXIO; + break; + } + case KVM_REG_PPC_TM_CR: + vcpu->arch.cr_tm = set_reg_val(id, *val); + break; + case KVM_REG_PPC_TM_XER: + vcpu->arch.xer_tm = set_reg_val(id, *val); + break; + case KVM_REG_PPC_TM_LR: + vcpu->arch.lr_tm = set_reg_val(id, *val); + break; + case KVM_REG_PPC_TM_CTR: + vcpu->arch.ctr_tm = set_reg_val(id, *val); + break; + case KVM_REG_PPC_TM_FPSCR: + vcpu->arch.fp_tm.fpscr = set_reg_val(id, *val); + break; + case KVM_REG_PPC_TM_AMR: + vcpu->arch.amr_tm = set_reg_val(id, *val); + break; + case KVM_REG_PPC_TM_PPR: + vcpu->arch.ppr_tm = set_reg_val(id, *val); + break; + case KVM_REG_PPC_TM_VRSAVE: + vcpu->arch.vrsave_tm = set_reg_val(id, *val); + break; + case KVM_REG_PPC_TM_VSCR: + if (cpu_has_feature(CPU_FTR_ALTIVEC)) + vcpu->arch.vr.vscr.u[3] = set_reg_val(id, *val); + else + r = -ENXIO; + break; + case KVM_REG_PPC_TM_DSCR: + vcpu->arch.dscr_tm = set_reg_val(id, *val); + break; + case KVM_REG_PPC_TM_TAR: + vcpu->arch.tar_tm = set_reg_val(id, *val); + break; +#endif default: r = -EINVAL; break; @@ -1687,6 +1979,17 @@ static int kvm_vm_ioctl_get_smmu_info_pr(struct kvm *kvm, return 0; } + +static int kvm_configure_mmu_pr(struct kvm *kvm, struct kvm_ppc_mmuv3_cfg *cfg) +{ + if (!cpu_has_feature(CPU_FTR_ARCH_300)) + return -ENODEV; + /* Require flags and process table base and size to all be zero. 
*/ + if (cfg->flags || cfg->process_table) + return -EINVAL; + return 0; +} + #else static int kvm_vm_ioctl_get_smmu_info_pr(struct kvm *kvm, struct kvm_ppc_smmu_info *info) @@ -1735,9 +2038,12 @@ static void kvmppc_core_destroy_vm_pr(struct kvm *kvm) static int kvmppc_core_check_processor_compat_pr(void) { /* - * Disable KVM for Power9 untill the required bits merged. + * PR KVM can work on POWER9 inside a guest partition + * running in HPT mode. It can't work if we are using + * radix translation (because radix provides no way for + * a process to have unique translations in quadrant 3). */ - if (cpu_has_feature(CPU_FTR_ARCH_300)) + if (cpu_has_feature(CPU_FTR_ARCH_300) && radix_enabled()) return -EIO; return 0; } @@ -1781,7 +2087,9 @@ static struct kvmppc_ops kvm_ops_pr = { .arch_vm_ioctl = kvm_arch_vm_ioctl_pr, #ifdef CONFIG_PPC_BOOK3S_64 .hcall_implemented = kvmppc_hcall_impl_pr, + .configure_mmu = kvm_configure_mmu_pr, #endif + .giveup_ext = kvmppc_giveup_ext, }; diff --git a/arch/powerpc/kvm/book3s_segment.S b/arch/powerpc/kvm/book3s_segment.S index 93a180ceefad..98ccc7ec5d48 100644 --- a/arch/powerpc/kvm/book3s_segment.S +++ b/arch/powerpc/kvm/book3s_segment.S @@ -383,6 +383,19 @@ END_FTR_SECTION_IFSET(CPU_FTR_ARCH_207S) */ PPC_LL r6, HSTATE_HOST_MSR(r13) +#ifdef CONFIG_PPC_TRANSACTIONAL_MEM + /* + * We don't want to change MSR[TS] bits via rfi here. + * The actual TM handling logic will be in host with + * recovered DR/IR bits after HSTATE_VMHANDLER. + * And MSR_TM can be enabled in HOST_MSR so rfid may + * not suppress this change and can lead to exception. + * Manually set MSR to prevent TS state change here. + */ + mfmsr r7 + rldicl r7, r7, 64 - MSR_TS_S_LG, 62 + rldimi r6, r7, MSR_TS_S_LG, 63 - MSR_TS_T_LG +#endif PPC_LL r8, HSTATE_VMHANDLER(r13) #ifdef CONFIG_PPC64 diff --git a/arch/powerpc/kvm/book3s_xive_template.c b/arch/powerpc/kvm/book3s_xive_template.c index 99c3620b40d9..6e41ba7ec8f4 100644 --- a/arch/powerpc/kvm/book3s_xive_template.c +++ b/arch/powerpc/kvm/book3s_xive_template.c @@ -334,7 +334,7 @@ X_STATIC unsigned long GLUE(X_PFX,h_xirr)(struct kvm_vcpu *vcpu) */ /* Return interrupt and old CPPR in GPR4 */ - vcpu->arch.gpr[4] = hirq | (old_cppr << 24); + vcpu->arch.regs.gpr[4] = hirq | (old_cppr << 24); return H_SUCCESS; } @@ -369,7 +369,7 @@ X_STATIC unsigned long GLUE(X_PFX,h_ipoll)(struct kvm_vcpu *vcpu, unsigned long hirq = GLUE(X_PFX,scan_interrupts)(xc, pending, scan_poll); /* Return interrupt and old CPPR in GPR4 */ - vcpu->arch.gpr[4] = hirq | (xc->cppr << 24); + vcpu->arch.regs.gpr[4] = hirq | (xc->cppr << 24); return H_SUCCESS; } diff --git a/arch/powerpc/kvm/booke.c b/arch/powerpc/kvm/booke.c index 876d4f294fdd..a9ca016da670 100644 --- a/arch/powerpc/kvm/booke.c +++ b/arch/powerpc/kvm/booke.c @@ -77,8 +77,10 @@ void kvmppc_dump_vcpu(struct kvm_vcpu *vcpu) { int i; - printk("pc: %08lx msr: %08llx\n", vcpu->arch.pc, vcpu->arch.shared->msr); - printk("lr: %08lx ctr: %08lx\n", vcpu->arch.lr, vcpu->arch.ctr); + printk("pc: %08lx msr: %08llx\n", vcpu->arch.regs.nip, + vcpu->arch.shared->msr); + printk("lr: %08lx ctr: %08lx\n", vcpu->arch.regs.link, + vcpu->arch.regs.ctr); printk("srr0: %08llx srr1: %08llx\n", vcpu->arch.shared->srr0, vcpu->arch.shared->srr1); @@ -491,24 +493,25 @@ static int kvmppc_booke_irqprio_deliver(struct kvm_vcpu *vcpu, if (allowed) { switch (int_class) { case INT_CLASS_NONCRIT: - set_guest_srr(vcpu, vcpu->arch.pc, + set_guest_srr(vcpu, vcpu->arch.regs.nip, vcpu->arch.shared->msr); break; case INT_CLASS_CRIT: - set_guest_csrr(vcpu, 
vcpu->arch.pc, + set_guest_csrr(vcpu, vcpu->arch.regs.nip, vcpu->arch.shared->msr); break; case INT_CLASS_DBG: - set_guest_dsrr(vcpu, vcpu->arch.pc, + set_guest_dsrr(vcpu, vcpu->arch.regs.nip, vcpu->arch.shared->msr); break; case INT_CLASS_MC: - set_guest_mcsrr(vcpu, vcpu->arch.pc, + set_guest_mcsrr(vcpu, vcpu->arch.regs.nip, vcpu->arch.shared->msr); break; } - vcpu->arch.pc = vcpu->arch.ivpr | vcpu->arch.ivor[priority]; + vcpu->arch.regs.nip = vcpu->arch.ivpr | + vcpu->arch.ivor[priority]; if (update_esr == true) kvmppc_set_esr(vcpu, vcpu->arch.queued_esr); if (update_dear == true) @@ -826,7 +829,7 @@ static int emulation_exit(struct kvm_run *run, struct kvm_vcpu *vcpu) case EMULATE_FAIL: printk(KERN_CRIT "%s: emulation at %lx failed (%08x)\n", - __func__, vcpu->arch.pc, vcpu->arch.last_inst); + __func__, vcpu->arch.regs.nip, vcpu->arch.last_inst); /* For debugging, encode the failing instruction and * report it to userspace. */ run->hw.hardware_exit_reason = ~0ULL << 32; @@ -875,7 +878,7 @@ static int kvmppc_handle_debug(struct kvm_run *run, struct kvm_vcpu *vcpu) */ vcpu->arch.dbsr = 0; run->debug.arch.status = 0; - run->debug.arch.address = vcpu->arch.pc; + run->debug.arch.address = vcpu->arch.regs.nip; if (dbsr & (DBSR_IAC1 | DBSR_IAC2 | DBSR_IAC3 | DBSR_IAC4)) { run->debug.arch.status |= KVMPPC_DEBUG_BREAKPOINT; @@ -971,7 +974,7 @@ static int kvmppc_resume_inst_load(struct kvm_run *run, struct kvm_vcpu *vcpu, case EMULATE_FAIL: pr_debug("%s: load instruction from guest address %lx failed\n", - __func__, vcpu->arch.pc); + __func__, vcpu->arch.regs.nip); /* For debugging, encode the failing instruction and * report it to userspace. */ run->hw.hardware_exit_reason = ~0ULL << 32; @@ -1169,7 +1172,7 @@ int kvmppc_handle_exit(struct kvm_run *run, struct kvm_vcpu *vcpu, case BOOKE_INTERRUPT_SPE_FP_DATA: case BOOKE_INTERRUPT_SPE_FP_ROUND: printk(KERN_CRIT "%s: unexpected SPE interrupt %u at %08lx\n", - __func__, exit_nr, vcpu->arch.pc); + __func__, exit_nr, vcpu->arch.regs.nip); run->hw.hardware_exit_reason = exit_nr; r = RESUME_HOST; break; @@ -1299,7 +1302,7 @@ int kvmppc_handle_exit(struct kvm_run *run, struct kvm_vcpu *vcpu, } case BOOKE_INTERRUPT_ITLB_MISS: { - unsigned long eaddr = vcpu->arch.pc; + unsigned long eaddr = vcpu->arch.regs.nip; gpa_t gpaddr; gfn_t gfn; int gtlb_index; @@ -1391,7 +1394,7 @@ int kvm_arch_vcpu_setup(struct kvm_vcpu *vcpu) int i; int r; - vcpu->arch.pc = 0; + vcpu->arch.regs.nip = 0; vcpu->arch.shared->pir = vcpu->vcpu_id; kvmppc_set_gpr(vcpu, 1, (16<<20) - 8); /* -8 for the callee-save LR slot */ kvmppc_set_msr(vcpu, 0); @@ -1440,10 +1443,10 @@ int kvm_arch_vcpu_ioctl_get_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs) vcpu_load(vcpu); - regs->pc = vcpu->arch.pc; + regs->pc = vcpu->arch.regs.nip; regs->cr = kvmppc_get_cr(vcpu); - regs->ctr = vcpu->arch.ctr; - regs->lr = vcpu->arch.lr; + regs->ctr = vcpu->arch.regs.ctr; + regs->lr = vcpu->arch.regs.link; regs->xer = kvmppc_get_xer(vcpu); regs->msr = vcpu->arch.shared->msr; regs->srr0 = kvmppc_get_srr0(vcpu); @@ -1471,10 +1474,10 @@ int kvm_arch_vcpu_ioctl_set_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs) vcpu_load(vcpu); - vcpu->arch.pc = regs->pc; + vcpu->arch.regs.nip = regs->pc; kvmppc_set_cr(vcpu, regs->cr); - vcpu->arch.ctr = regs->ctr; - vcpu->arch.lr = regs->lr; + vcpu->arch.regs.ctr = regs->ctr; + vcpu->arch.regs.link = regs->lr; kvmppc_set_xer(vcpu, regs->xer); kvmppc_set_msr(vcpu, regs->msr); kvmppc_set_srr0(vcpu, regs->srr0); diff --git a/arch/powerpc/kvm/booke_emulate.c 
b/arch/powerpc/kvm/booke_emulate.c index a82f64502de1..d23e582f0fee 100644 --- a/arch/powerpc/kvm/booke_emulate.c +++ b/arch/powerpc/kvm/booke_emulate.c @@ -34,19 +34,19 @@ static void kvmppc_emul_rfi(struct kvm_vcpu *vcpu) { - vcpu->arch.pc = vcpu->arch.shared->srr0; + vcpu->arch.regs.nip = vcpu->arch.shared->srr0; kvmppc_set_msr(vcpu, vcpu->arch.shared->srr1); } static void kvmppc_emul_rfdi(struct kvm_vcpu *vcpu) { - vcpu->arch.pc = vcpu->arch.dsrr0; + vcpu->arch.regs.nip = vcpu->arch.dsrr0; kvmppc_set_msr(vcpu, vcpu->arch.dsrr1); } static void kvmppc_emul_rfci(struct kvm_vcpu *vcpu) { - vcpu->arch.pc = vcpu->arch.csrr0; + vcpu->arch.regs.nip = vcpu->arch.csrr0; kvmppc_set_msr(vcpu, vcpu->arch.csrr1); } diff --git a/arch/powerpc/kvm/e500_emulate.c b/arch/powerpc/kvm/e500_emulate.c index 990db69a1d0b..3f8189eb56ed 100644 --- a/arch/powerpc/kvm/e500_emulate.c +++ b/arch/powerpc/kvm/e500_emulate.c @@ -53,7 +53,7 @@ static int dbell2prio(ulong param) static int kvmppc_e500_emul_msgclr(struct kvm_vcpu *vcpu, int rb) { - ulong param = vcpu->arch.gpr[rb]; + ulong param = vcpu->arch.regs.gpr[rb]; int prio = dbell2prio(param); if (prio < 0) @@ -65,7 +65,7 @@ static int kvmppc_e500_emul_msgclr(struct kvm_vcpu *vcpu, int rb) static int kvmppc_e500_emul_msgsnd(struct kvm_vcpu *vcpu, int rb) { - ulong param = vcpu->arch.gpr[rb]; + ulong param = vcpu->arch.regs.gpr[rb]; int prio = dbell2prio(rb); int pir = param & PPC_DBELL_PIR_MASK; int i; @@ -94,7 +94,7 @@ static int kvmppc_e500_emul_ehpriv(struct kvm_run *run, struct kvm_vcpu *vcpu, switch (get_oc(inst)) { case EHPRIV_OC_DEBUG: run->exit_reason = KVM_EXIT_DEBUG; - run->debug.arch.address = vcpu->arch.pc; + run->debug.arch.address = vcpu->arch.regs.nip; run->debug.arch.status = 0; kvmppc_account_exit(vcpu, DEBUG_EXITS); emulated = EMULATE_EXIT_USER; diff --git a/arch/powerpc/kvm/e500_mmu.c b/arch/powerpc/kvm/e500_mmu.c index ddbf8f0284c0..24296f4cadc6 100644 --- a/arch/powerpc/kvm/e500_mmu.c +++ b/arch/powerpc/kvm/e500_mmu.c @@ -513,7 +513,7 @@ void kvmppc_mmu_itlb_miss(struct kvm_vcpu *vcpu) { unsigned int as = !!(vcpu->arch.shared->msr & MSR_IS); - kvmppc_e500_deliver_tlb_miss(vcpu, vcpu->arch.pc, as); + kvmppc_e500_deliver_tlb_miss(vcpu, vcpu->arch.regs.nip, as); } void kvmppc_mmu_dtlb_miss(struct kvm_vcpu *vcpu) diff --git a/arch/powerpc/kvm/e500_mmu_host.c b/arch/powerpc/kvm/e500_mmu_host.c index c878b4ffb86f..8f2985e46f6f 100644 --- a/arch/powerpc/kvm/e500_mmu_host.c +++ b/arch/powerpc/kvm/e500_mmu_host.c @@ -625,8 +625,8 @@ void kvmppc_mmu_map(struct kvm_vcpu *vcpu, u64 eaddr, gpa_t gpaddr, } #ifdef CONFIG_KVM_BOOKE_HV -int kvmppc_load_last_inst(struct kvm_vcpu *vcpu, enum instruction_type type, - u32 *instr) +int kvmppc_load_last_inst(struct kvm_vcpu *vcpu, + enum instruction_fetch_type type, u32 *instr) { gva_t geaddr; hpa_t addr; @@ -715,8 +715,8 @@ int kvmppc_load_last_inst(struct kvm_vcpu *vcpu, enum instruction_type type, return EMULATE_DONE; } #else -int kvmppc_load_last_inst(struct kvm_vcpu *vcpu, enum instruction_type type, - u32 *instr) +int kvmppc_load_last_inst(struct kvm_vcpu *vcpu, + enum instruction_fetch_type type, u32 *instr) { return EMULATE_AGAIN; } diff --git a/arch/powerpc/kvm/emulate_loadstore.c b/arch/powerpc/kvm/emulate_loadstore.c index a382e15135e6..afde788be141 100644 --- a/arch/powerpc/kvm/emulate_loadstore.c +++ b/arch/powerpc/kvm/emulate_loadstore.c @@ -31,6 +31,7 @@ #include #include #include +#include #include "timing.h" #include "trace.h" @@ -84,8 +85,9 @@ int kvmppc_emulate_loadstore(struct kvm_vcpu *vcpu) 
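For orientation: the hunk below replaces the hand-rolled opcode switch with the common instruction analyser. A minimal sketch of the new dispatch, assuming the names this patch introduces and eliding error paths:

	struct instruction_op op;

	if (analyse_instr(&op, &vcpu->arch.regs, inst) == 0) {
		int type = op.type & INSTR_TYPE_MASK;
		int size = GETSIZE(op.type);

		switch (type) {
		case LOAD:
			/* BYTEREV marks byte-reversed (lwbrx-style) accesses */
			emulated = kvmppc_handle_load(run, vcpu, op.reg, size,
						      !(op.type & BYTEREV));
			break;
		case STORE:
			/* op.val already holds the (possibly reversed) source */
			emulated = kvmppc_handle_store(run, vcpu, op.val,
						       size, 1);
			break;
		}
		/* Update-form instructions also write the EA back into RA */
		if ((op.type & UPDATE) && (emulated != EMULATE_FAIL))
			kvmppc_set_gpr(vcpu, op.update_reg, op.ea);
	}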
struct kvm_run *run = vcpu->run; u32 inst; int ra, rs, rt; - enum emulation_result emulated; + enum emulation_result emulated = EMULATE_FAIL; int advance = 1; + struct instruction_op op; /* this default type might be overwritten by subcategories */ kvmppc_set_exit_type(vcpu, EMULATED_INST_EXITS); @@ -107,580 +109,276 @@ int kvmppc_emulate_loadstore(struct kvm_vcpu *vcpu) vcpu->arch.mmio_vsx_tx_sx_enabled = get_tx_or_sx(inst); vcpu->arch.mmio_vsx_copy_nums = 0; vcpu->arch.mmio_vsx_offset = 0; - vcpu->arch.mmio_vsx_copy_type = KVMPPC_VSX_COPY_NONE; + vcpu->arch.mmio_copy_type = KVMPPC_VSX_COPY_NONE; vcpu->arch.mmio_sp64_extend = 0; vcpu->arch.mmio_sign_extend = 0; vcpu->arch.mmio_vmx_copy_nums = 0; + vcpu->arch.mmio_vmx_offset = 0; + vcpu->arch.mmio_host_swabbed = 0; - switch (get_op(inst)) { - case 31: - switch (get_xop(inst)) { - case OP_31_XOP_LWZX: - emulated = kvmppc_handle_load(run, vcpu, rt, 4, 1); - break; - - case OP_31_XOP_LWZUX: - emulated = kvmppc_handle_load(run, vcpu, rt, 4, 1); - kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed); - break; + emulated = EMULATE_FAIL; + vcpu->arch.regs.msr = vcpu->arch.shared->msr; + vcpu->arch.regs.ccr = vcpu->arch.cr; + if (analyse_instr(&op, &vcpu->arch.regs, inst) == 0) { + int type = op.type & INSTR_TYPE_MASK; + int size = GETSIZE(op.type); - case OP_31_XOP_LBZX: - emulated = kvmppc_handle_load(run, vcpu, rt, 1, 1); - break; - - case OP_31_XOP_LBZUX: - emulated = kvmppc_handle_load(run, vcpu, rt, 1, 1); - kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed); - break; + switch (type) { + case LOAD: { + int instr_byte_swap = op.type & BYTEREV; - case OP_31_XOP_STDX: - emulated = kvmppc_handle_store(run, vcpu, - kvmppc_get_gpr(vcpu, rs), 8, 1); - break; + if (op.type & SIGNEXT) + emulated = kvmppc_handle_loads(run, vcpu, + op.reg, size, !instr_byte_swap); + else + emulated = kvmppc_handle_load(run, vcpu, + op.reg, size, !instr_byte_swap); - case OP_31_XOP_STDUX: - emulated = kvmppc_handle_store(run, vcpu, - kvmppc_get_gpr(vcpu, rs), 8, 1); - kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed); - break; + if ((op.type & UPDATE) && (emulated != EMULATE_FAIL)) + kvmppc_set_gpr(vcpu, op.update_reg, op.ea); - case OP_31_XOP_STWX: - emulated = kvmppc_handle_store(run, vcpu, - kvmppc_get_gpr(vcpu, rs), 4, 1); - break; - - case OP_31_XOP_STWUX: - emulated = kvmppc_handle_store(run, vcpu, - kvmppc_get_gpr(vcpu, rs), 4, 1); - kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed); - break; - - case OP_31_XOP_STBX: - emulated = kvmppc_handle_store(run, vcpu, - kvmppc_get_gpr(vcpu, rs), 1, 1); - break; - - case OP_31_XOP_STBUX: - emulated = kvmppc_handle_store(run, vcpu, - kvmppc_get_gpr(vcpu, rs), 1, 1); - kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed); - break; - - case OP_31_XOP_LHAX: - emulated = kvmppc_handle_loads(run, vcpu, rt, 2, 1); - break; - - case OP_31_XOP_LHAUX: - emulated = kvmppc_handle_loads(run, vcpu, rt, 2, 1); - kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed); - break; - - case OP_31_XOP_LHZX: - emulated = kvmppc_handle_load(run, vcpu, rt, 2, 1); - break; - - case OP_31_XOP_LHZUX: - emulated = kvmppc_handle_load(run, vcpu, rt, 2, 1); - kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed); - break; - - case OP_31_XOP_STHX: - emulated = kvmppc_handle_store(run, vcpu, - kvmppc_get_gpr(vcpu, rs), 2, 1); - break; - - case OP_31_XOP_STHUX: - emulated = kvmppc_handle_store(run, vcpu, - kvmppc_get_gpr(vcpu, rs), 2, 1); - kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed); - break; - - case OP_31_XOP_DCBST: - case OP_31_XOP_DCBF: - case 
OP_31_XOP_DCBI: - /* Do nothing. The guest is performing dcbi because - * hardware DMA is not snooped by the dcache, but - * emulated DMA either goes through the dcache as - * normal writes, or the host kernel has handled dcache - * coherence. */ - break; - - case OP_31_XOP_LWBRX: - emulated = kvmppc_handle_load(run, vcpu, rt, 4, 0); - break; - - case OP_31_XOP_STWBRX: - emulated = kvmppc_handle_store(run, vcpu, - kvmppc_get_gpr(vcpu, rs), 4, 0); break; - - case OP_31_XOP_LHBRX: - emulated = kvmppc_handle_load(run, vcpu, rt, 2, 0); - break; - - case OP_31_XOP_STHBRX: - emulated = kvmppc_handle_store(run, vcpu, - kvmppc_get_gpr(vcpu, rs), 2, 0); - break; - - case OP_31_XOP_LDBRX: - emulated = kvmppc_handle_load(run, vcpu, rt, 8, 0); - break; - - case OP_31_XOP_STDBRX: - emulated = kvmppc_handle_store(run, vcpu, - kvmppc_get_gpr(vcpu, rs), 8, 0); - break; - - case OP_31_XOP_LDX: - emulated = kvmppc_handle_load(run, vcpu, rt, 8, 1); - break; - - case OP_31_XOP_LDUX: - emulated = kvmppc_handle_load(run, vcpu, rt, 8, 1); - kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed); - break; - - case OP_31_XOP_LWAX: - emulated = kvmppc_handle_loads(run, vcpu, rt, 4, 1); - break; - - case OP_31_XOP_LWAUX: - emulated = kvmppc_handle_loads(run, vcpu, rt, 4, 1); - kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed); - break; - + } #ifdef CONFIG_PPC_FPU - case OP_31_XOP_LFSX: - if (kvmppc_check_fp_disabled(vcpu)) - return EMULATE_DONE; - vcpu->arch.mmio_sp64_extend = 1; - emulated = kvmppc_handle_load(run, vcpu, - KVM_MMIO_REG_FPR|rt, 4, 1); - break; - - case OP_31_XOP_LFSUX: + case LOAD_FP: if (kvmppc_check_fp_disabled(vcpu)) return EMULATE_DONE; - vcpu->arch.mmio_sp64_extend = 1; - emulated = kvmppc_handle_load(run, vcpu, - KVM_MMIO_REG_FPR|rt, 4, 1); - kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed); - break; - case OP_31_XOP_LFDX: - if (kvmppc_check_fp_disabled(vcpu)) - return EMULATE_DONE; - emulated = kvmppc_handle_load(run, vcpu, - KVM_MMIO_REG_FPR|rt, 8, 1); - break; + if (op.type & FPCONV) + vcpu->arch.mmio_sp64_extend = 1; - case OP_31_XOP_LFDUX: - if (kvmppc_check_fp_disabled(vcpu)) - return EMULATE_DONE; - emulated = kvmppc_handle_load(run, vcpu, - KVM_MMIO_REG_FPR|rt, 8, 1); - kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed); - break; - - case OP_31_XOP_LFIWAX: - if (kvmppc_check_fp_disabled(vcpu)) - return EMULATE_DONE; - emulated = kvmppc_handle_loads(run, vcpu, - KVM_MMIO_REG_FPR|rt, 4, 1); - break; + if (op.type & SIGNEXT) + emulated = kvmppc_handle_loads(run, vcpu, + KVM_MMIO_REG_FPR|op.reg, size, 1); + else + emulated = kvmppc_handle_load(run, vcpu, + KVM_MMIO_REG_FPR|op.reg, size, 1); - case OP_31_XOP_LFIWZX: - if (kvmppc_check_fp_disabled(vcpu)) - return EMULATE_DONE; - emulated = kvmppc_handle_load(run, vcpu, - KVM_MMIO_REG_FPR|rt, 4, 1); - break; + if ((op.type & UPDATE) && (emulated != EMULATE_FAIL)) + kvmppc_set_gpr(vcpu, op.update_reg, op.ea); - case OP_31_XOP_STFSX: - if (kvmppc_check_fp_disabled(vcpu)) - return EMULATE_DONE; - vcpu->arch.mmio_sp64_extend = 1; - emulated = kvmppc_handle_store(run, vcpu, - VCPU_FPR(vcpu, rs), 4, 1); break; - - case OP_31_XOP_STFSUX: - if (kvmppc_check_fp_disabled(vcpu)) - return EMULATE_DONE; - vcpu->arch.mmio_sp64_extend = 1; - emulated = kvmppc_handle_store(run, vcpu, - VCPU_FPR(vcpu, rs), 4, 1); - kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed); - break; - - case OP_31_XOP_STFDX: - if (kvmppc_check_fp_disabled(vcpu)) - return EMULATE_DONE; - emulated = kvmppc_handle_store(run, vcpu, - VCPU_FPR(vcpu, rs), 8, 1); - break; - - case 
OP_31_XOP_STFDUX: - if (kvmppc_check_fp_disabled(vcpu)) +#endif +#ifdef CONFIG_ALTIVEC + case LOAD_VMX: + if (kvmppc_check_altivec_disabled(vcpu)) return EMULATE_DONE; - emulated = kvmppc_handle_store(run, vcpu, - VCPU_FPR(vcpu, rs), 8, 1); - kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed); - break; - case OP_31_XOP_STFIWX: - if (kvmppc_check_fp_disabled(vcpu)) - return EMULATE_DONE; - emulated = kvmppc_handle_store(run, vcpu, - VCPU_FPR(vcpu, rs), 4, 1); + /* Hardware enforces alignment of VMX accesses */ + vcpu->arch.vaddr_accessed &= ~((unsigned long)size - 1); + vcpu->arch.paddr_accessed &= ~((unsigned long)size - 1); + + if (size == 16) { /* lvx */ + vcpu->arch.mmio_copy_type = + KVMPPC_VMX_COPY_DWORD; + } else if (size == 4) { /* lvewx */ + vcpu->arch.mmio_copy_type = + KVMPPC_VMX_COPY_WORD; + } else if (size == 2) { /* lvehx */ + vcpu->arch.mmio_copy_type = + KVMPPC_VMX_COPY_HWORD; + } else if (size == 1) { /* lvebx */ + vcpu->arch.mmio_copy_type = + KVMPPC_VMX_COPY_BYTE; + } else + break; + + vcpu->arch.mmio_vmx_offset = + (vcpu->arch.vaddr_accessed & 0xf)/size; + + if (size == 16) { + vcpu->arch.mmio_vmx_copy_nums = 2; + emulated = kvmppc_handle_vmx_load(run, + vcpu, KVM_MMIO_REG_VMX|op.reg, + 8, 1); + } else { + vcpu->arch.mmio_vmx_copy_nums = 1; + emulated = kvmppc_handle_vmx_load(run, vcpu, + KVM_MMIO_REG_VMX|op.reg, + size, 1); + } break; #endif - #ifdef CONFIG_VSX - case OP_31_XOP_LXSDX: - if (kvmppc_check_vsx_disabled(vcpu)) - return EMULATE_DONE; - vcpu->arch.mmio_vsx_copy_nums = 1; - vcpu->arch.mmio_vsx_copy_type = KVMPPC_VSX_COPY_DWORD; - emulated = kvmppc_handle_vsx_load(run, vcpu, - KVM_MMIO_REG_VSX|rt, 8, 1, 0); - break; - - case OP_31_XOP_LXSSPX: - if (kvmppc_check_vsx_disabled(vcpu)) - return EMULATE_DONE; - vcpu->arch.mmio_vsx_copy_nums = 1; - vcpu->arch.mmio_vsx_copy_type = KVMPPC_VSX_COPY_DWORD; - vcpu->arch.mmio_sp64_extend = 1; - emulated = kvmppc_handle_vsx_load(run, vcpu, - KVM_MMIO_REG_VSX|rt, 4, 1, 0); - break; + case LOAD_VSX: { + int io_size_each; + + if (op.vsx_flags & VSX_CHECK_VEC) { + if (kvmppc_check_altivec_disabled(vcpu)) + return EMULATE_DONE; + } else { + if (kvmppc_check_vsx_disabled(vcpu)) + return EMULATE_DONE; + } + + if (op.vsx_flags & VSX_FPCONV) + vcpu->arch.mmio_sp64_extend = 1; + + if (op.element_size == 8) { + if (op.vsx_flags & VSX_SPLAT) + vcpu->arch.mmio_copy_type = + KVMPPC_VSX_COPY_DWORD_LOAD_DUMP; + else + vcpu->arch.mmio_copy_type = + KVMPPC_VSX_COPY_DWORD; + } else if (op.element_size == 4) { + if (op.vsx_flags & VSX_SPLAT) + vcpu->arch.mmio_copy_type = + KVMPPC_VSX_COPY_WORD_LOAD_DUMP; + else + vcpu->arch.mmio_copy_type = + KVMPPC_VSX_COPY_WORD; + } else + break; + + if (size < op.element_size) { + /* precision convert case: lxsspx, etc */ + vcpu->arch.mmio_vsx_copy_nums = 1; + io_size_each = size; + } else { /* lxvw4x, lxvd2x, etc */ + vcpu->arch.mmio_vsx_copy_nums = + size/op.element_size; + io_size_each = op.element_size; + } - case OP_31_XOP_LXSIWAX: - if (kvmppc_check_vsx_disabled(vcpu)) - return EMULATE_DONE; - vcpu->arch.mmio_vsx_copy_nums = 1; - vcpu->arch.mmio_vsx_copy_type = KVMPPC_VSX_COPY_DWORD; emulated = kvmppc_handle_vsx_load(run, vcpu, - KVM_MMIO_REG_VSX|rt, 4, 1, 1); - break; - - case OP_31_XOP_LXSIWZX: - if (kvmppc_check_vsx_disabled(vcpu)) - return EMULATE_DONE; - vcpu->arch.mmio_vsx_copy_nums = 1; - vcpu->arch.mmio_vsx_copy_type = KVMPPC_VSX_COPY_DWORD; - emulated = kvmppc_handle_vsx_load(run, vcpu, - KVM_MMIO_REG_VSX|rt, 4, 1, 0); + KVM_MMIO_REG_VSX | (op.reg & 0x1f), + io_size_each, 1, op.type & 
SIGNEXT); break; + } +#endif + case STORE: + /* if byte reversal is needed, op.val has already been reversed + * by analyse_instr(). + */ + emulated = kvmppc_handle_store(run, vcpu, op.val, + size, 1); - case OP_31_XOP_LXVD2X: - /* - * In this case, the official load/store process is like this: - * Step1, exit from vm by page fault isr, then kvm save vsr. - * Please see guest_exit_cont->store_fp_state->SAVE_32VSRS - * as reference. - * - * Step2, copy data between memory and VCPU - * Notice: for LXVD2X/STXVD2X/LXVW4X/STXVW4X, we use - * 2copies*8bytes or 4copies*4bytes - * to simulate one copy of 16bytes. - * Also there is an endian issue here, we should notice the - * layout of memory. - * Please see MARCO of LXVD2X_ROT/STXVD2X_ROT as more reference. - * If host is little-endian, kvm will call XXSWAPD for - * LXVD2X_ROT/STXVD2X_ROT. - * So, if host is little-endian, - * the postion of memeory should be swapped. - * - * Step3, return to guest, kvm reset register. - * Please see kvmppc_hv_entry->load_fp_state->REST_32VSRS - * as reference. - */ - if (kvmppc_check_vsx_disabled(vcpu)) - return EMULATE_DONE; - vcpu->arch.mmio_vsx_copy_nums = 2; - vcpu->arch.mmio_vsx_copy_type = KVMPPC_VSX_COPY_DWORD; - emulated = kvmppc_handle_vsx_load(run, vcpu, - KVM_MMIO_REG_VSX|rt, 8, 1, 0); - break; + if ((op.type & UPDATE) && (emulated != EMULATE_FAIL)) + kvmppc_set_gpr(vcpu, op.update_reg, op.ea); - case OP_31_XOP_LXVW4X: - if (kvmppc_check_vsx_disabled(vcpu)) - return EMULATE_DONE; - vcpu->arch.mmio_vsx_copy_nums = 4; - vcpu->arch.mmio_vsx_copy_type = KVMPPC_VSX_COPY_WORD; - emulated = kvmppc_handle_vsx_load(run, vcpu, - KVM_MMIO_REG_VSX|rt, 4, 1, 0); break; - - case OP_31_XOP_LXVDSX: - if (kvmppc_check_vsx_disabled(vcpu)) +#ifdef CONFIG_PPC_FPU + case STORE_FP: + if (kvmppc_check_fp_disabled(vcpu)) return EMULATE_DONE; - vcpu->arch.mmio_vsx_copy_nums = 1; - vcpu->arch.mmio_vsx_copy_type = - KVMPPC_VSX_COPY_DWORD_LOAD_DUMP; - emulated = kvmppc_handle_vsx_load(run, vcpu, - KVM_MMIO_REG_VSX|rt, 8, 1, 0); - break; - case OP_31_XOP_STXSDX: - if (kvmppc_check_vsx_disabled(vcpu)) - return EMULATE_DONE; - vcpu->arch.mmio_vsx_copy_nums = 1; - vcpu->arch.mmio_vsx_copy_type = KVMPPC_VSX_COPY_DWORD; - emulated = kvmppc_handle_vsx_store(run, vcpu, - rs, 8, 1); - break; + /* The FP registers need to be flushed so that + * kvmppc_handle_store() can read actual FP vals + * from vcpu->arch.
+ */ + if (vcpu->kvm->arch.kvm_ops->giveup_ext) + vcpu->kvm->arch.kvm_ops->giveup_ext(vcpu, + MSR_FP); - case OP_31_XOP_STXSSPX: - if (kvmppc_check_vsx_disabled(vcpu)) - return EMULATE_DONE; - vcpu->arch.mmio_vsx_copy_nums = 1; - vcpu->arch.mmio_vsx_copy_type = KVMPPC_VSX_COPY_DWORD; - vcpu->arch.mmio_sp64_extend = 1; - emulated = kvmppc_handle_vsx_store(run, vcpu, - rs, 4, 1); - break; + if (op.type & FPCONV) + vcpu->arch.mmio_sp64_extend = 1; - case OP_31_XOP_STXSIWX: - if (kvmppc_check_vsx_disabled(vcpu)) - return EMULATE_DONE; - vcpu->arch.mmio_vsx_offset = 1; - vcpu->arch.mmio_vsx_copy_nums = 1; - vcpu->arch.mmio_vsx_copy_type = KVMPPC_VSX_COPY_WORD; - emulated = kvmppc_handle_vsx_store(run, vcpu, - rs, 4, 1); - break; + emulated = kvmppc_handle_store(run, vcpu, + VCPU_FPR(vcpu, op.reg), size, 1); - case OP_31_XOP_STXVD2X: - if (kvmppc_check_vsx_disabled(vcpu)) - return EMULATE_DONE; - vcpu->arch.mmio_vsx_copy_nums = 2; - vcpu->arch.mmio_vsx_copy_type = KVMPPC_VSX_COPY_DWORD; - emulated = kvmppc_handle_vsx_store(run, vcpu, - rs, 8, 1); - break; + if ((op.type & UPDATE) && (emulated != EMULATE_FAIL)) + kvmppc_set_gpr(vcpu, op.update_reg, op.ea); - case OP_31_XOP_STXVW4X: - if (kvmppc_check_vsx_disabled(vcpu)) - return EMULATE_DONE; - vcpu->arch.mmio_vsx_copy_nums = 4; - vcpu->arch.mmio_vsx_copy_type = KVMPPC_VSX_COPY_WORD; - emulated = kvmppc_handle_vsx_store(run, vcpu, - rs, 4, 1); break; -#endif /* CONFIG_VSX */ - +#endif #ifdef CONFIG_ALTIVEC - case OP_31_XOP_LVX: + case STORE_VMX: if (kvmppc_check_altivec_disabled(vcpu)) return EMULATE_DONE; - vcpu->arch.vaddr_accessed &= ~0xFULL; - vcpu->arch.paddr_accessed &= ~0xFULL; - vcpu->arch.mmio_vmx_copy_nums = 2; - emulated = kvmppc_handle_load128_by2x64(run, vcpu, - KVM_MMIO_REG_VMX|rt, 1); - break; - case OP_31_XOP_STVX: - if (kvmppc_check_altivec_disabled(vcpu)) - return EMULATE_DONE; - vcpu->arch.vaddr_accessed &= ~0xFULL; - vcpu->arch.paddr_accessed &= ~0xFULL; - vcpu->arch.mmio_vmx_copy_nums = 2; - emulated = kvmppc_handle_store128_by2x64(run, vcpu, - rs, 1); - break; -#endif /* CONFIG_ALTIVEC */ + /* Hardware enforces alignment of VMX accesses. 
*/ + vcpu->arch.vaddr_accessed &= ~((unsigned long)size - 1); + vcpu->arch.paddr_accessed &= ~((unsigned long)size - 1); + + if (vcpu->kvm->arch.kvm_ops->giveup_ext) + vcpu->kvm->arch.kvm_ops->giveup_ext(vcpu, + MSR_VEC); + if (size == 16) { /* stvx */ + vcpu->arch.mmio_copy_type = + KVMPPC_VMX_COPY_DWORD; + } else if (size == 4) { /* stvewx */ + vcpu->arch.mmio_copy_type = + KVMPPC_VMX_COPY_WORD; + } else if (size == 2) { /* stvehx */ + vcpu->arch.mmio_copy_type = + KVMPPC_VMX_COPY_HWORD; + } else if (size == 1) { /* stvebx */ + vcpu->arch.mmio_copy_type = + KVMPPC_VMX_COPY_BYTE; + } else + break; + + vcpu->arch.mmio_vmx_offset = + (vcpu->arch.vaddr_accessed & 0xf)/size; + + if (size == 16) { + vcpu->arch.mmio_vmx_copy_nums = 2; + emulated = kvmppc_handle_vmx_store(run, + vcpu, op.reg, 8, 1); + } else { + vcpu->arch.mmio_vmx_copy_nums = 1; + emulated = kvmppc_handle_vmx_store(run, + vcpu, op.reg, size, 1); + } - default: - emulated = EMULATE_FAIL; break; - } - break; - - case OP_LWZ: - emulated = kvmppc_handle_load(run, vcpu, rt, 4, 1); - break; - -#ifdef CONFIG_PPC_FPU - case OP_STFS: - if (kvmppc_check_fp_disabled(vcpu)) - return EMULATE_DONE; - vcpu->arch.mmio_sp64_extend = 1; - emulated = kvmppc_handle_store(run, vcpu, - VCPU_FPR(vcpu, rs), - 4, 1); - break; - - case OP_STFSU: - if (kvmppc_check_fp_disabled(vcpu)) - return EMULATE_DONE; - vcpu->arch.mmio_sp64_extend = 1; - emulated = kvmppc_handle_store(run, vcpu, - VCPU_FPR(vcpu, rs), - 4, 1); - kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed); - break; - - case OP_STFD: - if (kvmppc_check_fp_disabled(vcpu)) - return EMULATE_DONE; - emulated = kvmppc_handle_store(run, vcpu, - VCPU_FPR(vcpu, rs), - 8, 1); - break; - - case OP_STFDU: - if (kvmppc_check_fp_disabled(vcpu)) - return EMULATE_DONE; - emulated = kvmppc_handle_store(run, vcpu, - VCPU_FPR(vcpu, rs), - 8, 1); - kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed); - break; #endif +#ifdef CONFIG_VSX + case STORE_VSX: { + int io_size_each; + + if (op.vsx_flags & VSX_CHECK_VEC) { + if (kvmppc_check_altivec_disabled(vcpu)) + return EMULATE_DONE; + } else { + if (kvmppc_check_vsx_disabled(vcpu)) + return EMULATE_DONE; + } + + if (vcpu->kvm->arch.kvm_ops->giveup_ext) + vcpu->kvm->arch.kvm_ops->giveup_ext(vcpu, + MSR_VSX); + + if (op.vsx_flags & VSX_FPCONV) + vcpu->arch.mmio_sp64_extend = 1; + + if (op.element_size == 8) + vcpu->arch.mmio_copy_type = + KVMPPC_VSX_COPY_DWORD; + else if (op.element_size == 4) + vcpu->arch.mmio_copy_type = + KVMPPC_VSX_COPY_WORD; + else + break; + + if (size < op.element_size) { + /* precise conversion case, like stxsspx */ + vcpu->arch.mmio_vsx_copy_nums = 1; + io_size_each = size; + } else { /* stxvw4x, stxvd2x, etc */ + vcpu->arch.mmio_vsx_copy_nums = + size/op.element_size; + io_size_each = op.element_size; + } - case OP_LD: - rt = get_rt(inst); - switch (inst & 3) { - case 0: /* ld */ - emulated = kvmppc_handle_load(run, vcpu, rt, 8, 1); - break; - case 1: /* ldu */ - emulated = kvmppc_handle_load(run, vcpu, rt, 8, 1); - kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed); - break; - case 2: /* lwa */ - emulated = kvmppc_handle_loads(run, vcpu, rt, 4, 1); + emulated = kvmppc_handle_vsx_store(run, vcpu, + op.reg & 0x1f, io_size_each, 1); break; - default: - emulated = EMULATE_FAIL; } - break; - - case OP_LWZU: - emulated = kvmppc_handle_load(run, vcpu, rt, 4, 1); - kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed); - break; - - case OP_LBZ: - emulated = kvmppc_handle_load(run, vcpu, rt, 1, 1); - break; - - case OP_LBZU: - emulated = 
kvmppc_handle_load(run, vcpu, rt, 1, 1); - kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed); - break; - - case OP_STW: - emulated = kvmppc_handle_store(run, vcpu, - kvmppc_get_gpr(vcpu, rs), - 4, 1); - break; - - case OP_STD: - rs = get_rs(inst); - switch (inst & 3) { - case 0: /* std */ - emulated = kvmppc_handle_store(run, vcpu, - kvmppc_get_gpr(vcpu, rs), 8, 1); - break; - case 1: /* stdu */ - emulated = kvmppc_handle_store(run, vcpu, - kvmppc_get_gpr(vcpu, rs), 8, 1); - kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed); +#endif + case CACHEOP: + /* Do nothing. The guest is performing dcbi because + * hardware DMA is not snooped by the dcache, but + * emulated DMA either goes through the dcache as + * normal writes, or the host kernel has handled dcache + * coherence. + */ + emulated = EMULATE_DONE; break; default: - emulated = EMULATE_FAIL; + break; } - break; - - case OP_STWU: - emulated = kvmppc_handle_store(run, vcpu, - kvmppc_get_gpr(vcpu, rs), 4, 1); - kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed); - break; - - case OP_STB: - emulated = kvmppc_handle_store(run, vcpu, - kvmppc_get_gpr(vcpu, rs), 1, 1); - break; - - case OP_STBU: - emulated = kvmppc_handle_store(run, vcpu, - kvmppc_get_gpr(vcpu, rs), 1, 1); - kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed); - break; - - case OP_LHZ: - emulated = kvmppc_handle_load(run, vcpu, rt, 2, 1); - break; - - case OP_LHZU: - emulated = kvmppc_handle_load(run, vcpu, rt, 2, 1); - kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed); - break; - - case OP_LHA: - emulated = kvmppc_handle_loads(run, vcpu, rt, 2, 1); - break; - - case OP_LHAU: - emulated = kvmppc_handle_loads(run, vcpu, rt, 2, 1); - kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed); - break; - - case OP_STH: - emulated = kvmppc_handle_store(run, vcpu, - kvmppc_get_gpr(vcpu, rs), 2, 1); - break; - - case OP_STHU: - emulated = kvmppc_handle_store(run, vcpu, - kvmppc_get_gpr(vcpu, rs), 2, 1); - kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed); - break; - -#ifdef CONFIG_PPC_FPU - case OP_LFS: - if (kvmppc_check_fp_disabled(vcpu)) - return EMULATE_DONE; - vcpu->arch.mmio_sp64_extend = 1; - emulated = kvmppc_handle_load(run, vcpu, - KVM_MMIO_REG_FPR|rt, 4, 1); - break; - - case OP_LFSU: - if (kvmppc_check_fp_disabled(vcpu)) - return EMULATE_DONE; - vcpu->arch.mmio_sp64_extend = 1; - emulated = kvmppc_handle_load(run, vcpu, - KVM_MMIO_REG_FPR|rt, 4, 1); - kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed); - break; - - case OP_LFD: - if (kvmppc_check_fp_disabled(vcpu)) - return EMULATE_DONE; - emulated = kvmppc_handle_load(run, vcpu, - KVM_MMIO_REG_FPR|rt, 8, 1); - break; - - case OP_LFDU: - if (kvmppc_check_fp_disabled(vcpu)) - return EMULATE_DONE; - emulated = kvmppc_handle_load(run, vcpu, - KVM_MMIO_REG_FPR|rt, 8, 1); - kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed); - break; -#endif - - default: - emulated = EMULATE_FAIL; - break; } if (emulated == EMULATE_FAIL) { diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c index 3764d000872e..0e8c20c5eaac 100644 --- a/arch/powerpc/kvm/powerpc.c +++ b/arch/powerpc/kvm/powerpc.c @@ -648,9 +648,8 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext) #endif #ifdef CONFIG_PPC_TRANSACTIONAL_MEM case KVM_CAP_PPC_HTM: - r = hv_enabled && - (!!(cur_cpu_spec->cpu_user_features2 & PPC_FEATURE2_HTM) || - cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)); + r = !!(cur_cpu_spec->cpu_user_features2 & PPC_FEATURE2_HTM) || + (hv_enabled && cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)); break; #endif default: @@ -907,6 
+906,26 @@ static inline void kvmppc_set_vsr_dword_dump(struct kvm_vcpu *vcpu, } } +static inline void kvmppc_set_vsr_word_dump(struct kvm_vcpu *vcpu, + u32 gpr) +{ + union kvmppc_one_reg val; + int index = vcpu->arch.io_gpr & KVM_MMIO_REG_MASK; + + if (vcpu->arch.mmio_vsx_tx_sx_enabled) { + val.vsx32val[0] = gpr; + val.vsx32val[1] = gpr; + val.vsx32val[2] = gpr; + val.vsx32val[3] = gpr; + VCPU_VSX_VR(vcpu, index) = val.vval; + } else { + val.vsx32val[0] = gpr; + val.vsx32val[1] = gpr; + VCPU_VSX_FPR(vcpu, index, 0) = val.vsxval[0]; + VCPU_VSX_FPR(vcpu, index, 1) = val.vsxval[0]; + } +} + static inline void kvmppc_set_vsr_word(struct kvm_vcpu *vcpu, u32 gpr32) { @@ -933,30 +952,110 @@ static inline void kvmppc_set_vsr_word(struct kvm_vcpu *vcpu, #endif /* CONFIG_VSX */ #ifdef CONFIG_ALTIVEC +static inline int kvmppc_get_vmx_offset_generic(struct kvm_vcpu *vcpu, + int index, int element_size) +{ + int offset; + int elts = sizeof(vector128)/element_size; + + if ((index < 0) || (index >= elts)) + return -1; + + if (kvmppc_need_byteswap(vcpu)) + offset = elts - index - 1; + else + offset = index; + + return offset; +} + +static inline int kvmppc_get_vmx_dword_offset(struct kvm_vcpu *vcpu, + int index) +{ + return kvmppc_get_vmx_offset_generic(vcpu, index, 8); +} + +static inline int kvmppc_get_vmx_word_offset(struct kvm_vcpu *vcpu, + int index) +{ + return kvmppc_get_vmx_offset_generic(vcpu, index, 4); +} + +static inline int kvmppc_get_vmx_hword_offset(struct kvm_vcpu *vcpu, + int index) +{ + return kvmppc_get_vmx_offset_generic(vcpu, index, 2); +} + +static inline int kvmppc_get_vmx_byte_offset(struct kvm_vcpu *vcpu, + int index) +{ + return kvmppc_get_vmx_offset_generic(vcpu, index, 1); +} + + static inline void kvmppc_set_vmx_dword(struct kvm_vcpu *vcpu, - u64 gpr) + u64 gpr) { + union kvmppc_one_reg val; + int offset = kvmppc_get_vmx_dword_offset(vcpu, + vcpu->arch.mmio_vmx_offset); int index = vcpu->arch.io_gpr & KVM_MMIO_REG_MASK; - u32 hi, lo; - u32 di; -#ifdef __BIG_ENDIAN - hi = gpr >> 32; - lo = gpr & 0xffffffff; -#else - lo = gpr >> 32; - hi = gpr & 0xffffffff; -#endif + if (offset == -1) + return; + + val.vval = VCPU_VSX_VR(vcpu, index); + val.vsxval[offset] = gpr; + VCPU_VSX_VR(vcpu, index) = val.vval; +} + +static inline void kvmppc_set_vmx_word(struct kvm_vcpu *vcpu, + u32 gpr32) +{ + union kvmppc_one_reg val; + int offset = kvmppc_get_vmx_word_offset(vcpu, + vcpu->arch.mmio_vmx_offset); + int index = vcpu->arch.io_gpr & KVM_MMIO_REG_MASK; - di = 2 - vcpu->arch.mmio_vmx_copy_nums; /* doubleword index */ - if (di > 1) + if (offset == -1) return; - if (vcpu->arch.mmio_host_swabbed) - di = 1 - di; + val.vval = VCPU_VSX_VR(vcpu, index); + val.vsx32val[offset] = gpr32; + VCPU_VSX_VR(vcpu, index) = val.vval; +} + +static inline void kvmppc_set_vmx_hword(struct kvm_vcpu *vcpu, + u16 gpr16) +{ + union kvmppc_one_reg val; + int offset = kvmppc_get_vmx_hword_offset(vcpu, + vcpu->arch.mmio_vmx_offset); + int index = vcpu->arch.io_gpr & KVM_MMIO_REG_MASK; + + if (offset == -1) + return; + + val.vval = VCPU_VSX_VR(vcpu, index); + val.vsx16val[offset] = gpr16; + VCPU_VSX_VR(vcpu, index) = val.vval; +} + +static inline void kvmppc_set_vmx_byte(struct kvm_vcpu *vcpu, + u8 gpr8) +{ + union kvmppc_one_reg val; + int offset = kvmppc_get_vmx_byte_offset(vcpu, + vcpu->arch.mmio_vmx_offset); + int index = vcpu->arch.io_gpr & KVM_MMIO_REG_MASK; + + if (offset == -1) + return; - VCPU_VSX_VR(vcpu, index).u[di * 2] = hi; - VCPU_VSX_VR(vcpu, index).u[di * 2 + 1] = lo; + val.vval = VCPU_VSX_VR(vcpu, 
index); + val.vsx8val[offset] = gpr8; + VCPU_VSX_VR(vcpu, index) = val.vval; } #endif /* CONFIG_ALTIVEC */ @@ -1041,6 +1140,9 @@ static void kvmppc_complete_mmio_load(struct kvm_vcpu *vcpu, kvmppc_set_gpr(vcpu, vcpu->arch.io_gpr, gpr); break; case KVM_MMIO_REG_FPR: + if (vcpu->kvm->arch.kvm_ops->giveup_ext) + vcpu->kvm->arch.kvm_ops->giveup_ext(vcpu, MSR_FP); + VCPU_FPR(vcpu, vcpu->arch.io_gpr & KVM_MMIO_REG_MASK) = gpr; break; #ifdef CONFIG_PPC_BOOK3S @@ -1054,18 +1156,36 @@ static void kvmppc_complete_mmio_load(struct kvm_vcpu *vcpu, #endif #ifdef CONFIG_VSX case KVM_MMIO_REG_VSX: - if (vcpu->arch.mmio_vsx_copy_type == KVMPPC_VSX_COPY_DWORD) + if (vcpu->kvm->arch.kvm_ops->giveup_ext) + vcpu->kvm->arch.kvm_ops->giveup_ext(vcpu, MSR_VSX); + + if (vcpu->arch.mmio_copy_type == KVMPPC_VSX_COPY_DWORD) kvmppc_set_vsr_dword(vcpu, gpr); - else if (vcpu->arch.mmio_vsx_copy_type == KVMPPC_VSX_COPY_WORD) + else if (vcpu->arch.mmio_copy_type == KVMPPC_VSX_COPY_WORD) kvmppc_set_vsr_word(vcpu, gpr); - else if (vcpu->arch.mmio_vsx_copy_type == + else if (vcpu->arch.mmio_copy_type == KVMPPC_VSX_COPY_DWORD_LOAD_DUMP) kvmppc_set_vsr_dword_dump(vcpu, gpr); + else if (vcpu->arch.mmio_copy_type == + KVMPPC_VSX_COPY_WORD_LOAD_DUMP) + kvmppc_set_vsr_word_dump(vcpu, gpr); break; #endif #ifdef CONFIG_ALTIVEC case KVM_MMIO_REG_VMX: - kvmppc_set_vmx_dword(vcpu, gpr); + if (vcpu->kvm->arch.kvm_ops->giveup_ext) + vcpu->kvm->arch.kvm_ops->giveup_ext(vcpu, MSR_VEC); + + if (vcpu->arch.mmio_copy_type == KVMPPC_VMX_COPY_DWORD) + kvmppc_set_vmx_dword(vcpu, gpr); + else if (vcpu->arch.mmio_copy_type == KVMPPC_VMX_COPY_WORD) + kvmppc_set_vmx_word(vcpu, gpr); + else if (vcpu->arch.mmio_copy_type == + KVMPPC_VMX_COPY_HWORD) + kvmppc_set_vmx_hword(vcpu, gpr); + else if (vcpu->arch.mmio_copy_type == + KVMPPC_VMX_COPY_BYTE) + kvmppc_set_vmx_byte(vcpu, gpr); break; #endif default: @@ -1228,7 +1348,7 @@ static inline int kvmppc_get_vsr_data(struct kvm_vcpu *vcpu, int rs, u64 *val) u32 dword_offset, word_offset; union kvmppc_one_reg reg; int vsx_offset = 0; - int copy_type = vcpu->arch.mmio_vsx_copy_type; + int copy_type = vcpu->arch.mmio_copy_type; int result = 0; switch (copy_type) { @@ -1344,14 +1464,16 @@ static int kvmppc_emulate_mmio_vsx_loadstore(struct kvm_vcpu *vcpu, #endif /* CONFIG_VSX */ #ifdef CONFIG_ALTIVEC -/* handle quadword load access in two halves */ -int kvmppc_handle_load128_by2x64(struct kvm_run *run, struct kvm_vcpu *vcpu, - unsigned int rt, int is_default_endian) +int kvmppc_handle_vmx_load(struct kvm_run *run, struct kvm_vcpu *vcpu, + unsigned int rt, unsigned int bytes, int is_default_endian) { enum emulation_result emulated = EMULATE_DONE; + if (vcpu->arch.mmio_vsx_copy_nums > 2) + return EMULATE_FAIL; + while (vcpu->arch.mmio_vmx_copy_nums) { - emulated = __kvmppc_handle_load(run, vcpu, rt, 8, + emulated = __kvmppc_handle_load(run, vcpu, rt, bytes, is_default_endian, 0); if (emulated != EMULATE_DONE) @@ -1359,55 +1481,127 @@ int kvmppc_handle_load128_by2x64(struct kvm_run *run, struct kvm_vcpu *vcpu, vcpu->arch.paddr_accessed += run->mmio.len; vcpu->arch.mmio_vmx_copy_nums--; + vcpu->arch.mmio_vmx_offset++; } return emulated; } -static inline int kvmppc_get_vmx_data(struct kvm_vcpu *vcpu, int rs, u64 *val) +int kvmppc_get_vmx_dword(struct kvm_vcpu *vcpu, int index, u64 *val) { - vector128 vrs = VCPU_VSX_VR(vcpu, rs); - u32 di; - u64 w0, w1; + union kvmppc_one_reg reg; + int vmx_offset = 0; + int result = 0; - di = 2 - vcpu->arch.mmio_vmx_copy_nums; /* doubleword index */ - if (di > 1) + vmx_offset = + 
kvmppc_get_vmx_dword_offset(vcpu, vcpu->arch.mmio_vmx_offset); + + if (vmx_offset == -1) return -1; - if (vcpu->arch.mmio_host_swabbed) - di = 1 - di; + reg.vval = VCPU_VSX_VR(vcpu, index); + *val = reg.vsxval[vmx_offset]; - w0 = vrs.u[di * 2]; - w1 = vrs.u[di * 2 + 1]; + return result; +} -#ifdef __BIG_ENDIAN - *val = (w0 << 32) | w1; -#else - *val = (w1 << 32) | w0; -#endif - return 0; +int kvmppc_get_vmx_word(struct kvm_vcpu *vcpu, int index, u64 *val) +{ + union kvmppc_one_reg reg; + int vmx_offset = 0; + int result = 0; + + vmx_offset = + kvmppc_get_vmx_word_offset(vcpu, vcpu->arch.mmio_vmx_offset); + + if (vmx_offset == -1) + return -1; + + reg.vval = VCPU_VSX_VR(vcpu, index); + *val = reg.vsx32val[vmx_offset]; + + return result; +} + +int kvmppc_get_vmx_hword(struct kvm_vcpu *vcpu, int index, u64 *val) +{ + union kvmppc_one_reg reg; + int vmx_offset = 0; + int result = 0; + + vmx_offset = + kvmppc_get_vmx_hword_offset(vcpu, vcpu->arch.mmio_vmx_offset); + + if (vmx_offset == -1) + return -1; + + reg.vval = VCPU_VSX_VR(vcpu, index); + *val = reg.vsx16val[vmx_offset]; + + return result; } -/* handle quadword store in two halves */ -int kvmppc_handle_store128_by2x64(struct kvm_run *run, struct kvm_vcpu *vcpu, - unsigned int rs, int is_default_endian) +int kvmppc_get_vmx_byte(struct kvm_vcpu *vcpu, int index, u64 *val) +{ + union kvmppc_one_reg reg; + int vmx_offset = 0; + int result = 0; + + vmx_offset = + kvmppc_get_vmx_byte_offset(vcpu, vcpu->arch.mmio_vmx_offset); + + if (vmx_offset == -1) + return -1; + + reg.vval = VCPU_VSX_VR(vcpu, index); + *val = reg.vsx8val[vmx_offset]; + + return result; +} + +int kvmppc_handle_vmx_store(struct kvm_run *run, struct kvm_vcpu *vcpu, + unsigned int rs, unsigned int bytes, int is_default_endian) { u64 val = 0; + unsigned int index = rs & KVM_MMIO_REG_MASK; enum emulation_result emulated = EMULATE_DONE; + if (vcpu->arch.mmio_vsx_copy_nums > 2) + return EMULATE_FAIL; + vcpu->arch.io_gpr = rs; while (vcpu->arch.mmio_vmx_copy_nums) { - if (kvmppc_get_vmx_data(vcpu, rs, &val) == -1) + switch (vcpu->arch.mmio_copy_type) { + case KVMPPC_VMX_COPY_DWORD: + if (kvmppc_get_vmx_dword(vcpu, index, &val) == -1) + return EMULATE_FAIL; + + break; + case KVMPPC_VMX_COPY_WORD: + if (kvmppc_get_vmx_word(vcpu, index, &val) == -1) + return EMULATE_FAIL; + break; + case KVMPPC_VMX_COPY_HWORD: + if (kvmppc_get_vmx_hword(vcpu, index, &val) == -1) + return EMULATE_FAIL; + break; + case KVMPPC_VMX_COPY_BYTE: + if (kvmppc_get_vmx_byte(vcpu, index, &val) == -1) + return EMULATE_FAIL; + break; + default: return EMULATE_FAIL; + } - emulated = kvmppc_handle_store(run, vcpu, val, 8, + emulated = kvmppc_handle_store(run, vcpu, val, bytes, is_default_endian); if (emulated != EMULATE_DONE) break; vcpu->arch.paddr_accessed += run->mmio.len; vcpu->arch.mmio_vmx_copy_nums--; + vcpu->arch.mmio_vmx_offset++; } return emulated; @@ -1422,11 +1616,11 @@ static int kvmppc_emulate_mmio_vmx_loadstore(struct kvm_vcpu *vcpu, vcpu->arch.paddr_accessed += run->mmio.len; if (!vcpu->mmio_is_write) { - emulated = kvmppc_handle_load128_by2x64(run, vcpu, - vcpu->arch.io_gpr, 1); + emulated = kvmppc_handle_vmx_load(run, vcpu, + vcpu->arch.io_gpr, run->mmio.len, 1); } else { - emulated = kvmppc_handle_store128_by2x64(run, vcpu, - vcpu->arch.io_gpr, 1); + emulated = kvmppc_handle_vmx_store(run, vcpu, + vcpu->arch.io_gpr, run->mmio.len, 1); } switch (emulated) { @@ -1570,8 +1764,10 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run) } #endif #ifdef CONFIG_ALTIVEC - if 
(vcpu->arch.mmio_vmx_copy_nums > 0) + if (vcpu->arch.mmio_vmx_copy_nums > 0) { vcpu->arch.mmio_vmx_copy_nums--; + vcpu->arch.mmio_vmx_offset++; + } if (vcpu->arch.mmio_vmx_copy_nums > 0) { r = kvmppc_emulate_mmio_vmx_loadstore(vcpu, run); @@ -1784,16 +1980,16 @@ long kvm_arch_vcpu_ioctl(struct file *filp, void __user *argp = (void __user *)arg; long r; - vcpu_load(vcpu); - switch (ioctl) { case KVM_ENABLE_CAP: { struct kvm_enable_cap cap; r = -EFAULT; + vcpu_load(vcpu); if (copy_from_user(&cap, argp, sizeof(cap))) goto out; r = kvm_vcpu_ioctl_enable_cap(vcpu, &cap); + vcpu_put(vcpu); break; } @@ -1815,9 +2011,11 @@ long kvm_arch_vcpu_ioctl(struct file *filp, case KVM_DIRTY_TLB: { struct kvm_dirty_tlb dirty; r = -EFAULT; + vcpu_load(vcpu); if (copy_from_user(&dirty, argp, sizeof(dirty))) goto out; r = kvm_vcpu_ioctl_dirty_tlb(vcpu, &dirty); + vcpu_put(vcpu); break; } #endif @@ -1826,7 +2024,6 @@ long kvm_arch_vcpu_ioctl(struct file *filp, } out: - vcpu_put(vcpu); return r; } diff --git a/arch/powerpc/kvm/tm.S b/arch/powerpc/kvm/tm.S new file mode 100644 index 000000000000..90e330f21356 --- /dev/null +++ b/arch/powerpc/kvm/tm.S @@ -0,0 +1,384 @@ +/* + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License, version 2, as + * published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * Derived from book3s_hv_rmhandlers.S, which is: + * + * Copyright 2011 Paul Mackerras, IBM Corp. + * + */ + +#include +#include +#include +#include +#include +#include + +#ifdef CONFIG_PPC_TRANSACTIONAL_MEM +#define VCPU_GPRS_TM(reg) (((reg) * ULONG_SIZE) + VCPU_GPR_TM) + +/* + * Save transactional state and TM-related registers. + * Called with: + * - r3 pointing to the vcpu struct + * - r4 points to the MSR with current TS bits: + * (For HV KVM, it is VCPU_MSR ; For PR KVM, it is host MSR). + * This can modify all checkpointed registers, but + * restores r1, r2 before exit. + */ +_GLOBAL(__kvmppc_save_tm) + mflr r0 + std r0, PPC_LR_STKOFF(r1) + + /* Turn on TM. */ + mfmsr r8 + li r0, 1 + rldimi r8, r0, MSR_TM_LG, 63-MSR_TM_LG + ori r8, r8, MSR_FP + oris r8, r8, (MSR_VEC | MSR_VSX)@h + mtmsrd r8 + + rldicl. r4, r4, 64 - MSR_TS_S_LG, 62 + beq 1f /* TM not active in guest. */ + + std r1, HSTATE_SCRATCH2(r13) + std r3, HSTATE_SCRATCH1(r13) + +#ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE +BEGIN_FTR_SECTION + /* Emulation of the treclaim instruction needs TEXASR before treclaim */ + mfspr r6, SPRN_TEXASR + std r6, VCPU_ORIG_TEXASR(r3) +END_FTR_SECTION_IFSET(CPU_FTR_P9_TM_HV_ASSIST) +#endif + + /* Clear the MSR RI since r1, r13 are all going to be foobar. */ + li r5, 0 + mtmsrd r5, 1 + + li r3, TM_CAUSE_KVM_RESCHED + + /* All GPRs are volatile at this point. */ + TRECLAIM(R3) + + /* Temporarily store r13 and r9 so we have some regs to play with */ + SET_SCRATCH0(r13) + GET_PACA(r13) + std r9, PACATMSCRATCH(r13) + ld r9, HSTATE_SCRATCH1(r13) + + /* Get a few more GPRs free. */ + std r29, VCPU_GPRS_TM(29)(r9) + std r30, VCPU_GPRS_TM(30)(r9) + std r31, VCPU_GPRS_TM(31)(r9) + + /* Save away PPR and DSCR soon so don't run with user values. 
*/ + mfspr r31, SPRN_PPR + HMT_MEDIUM + mfspr r30, SPRN_DSCR +#ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE + ld r29, HSTATE_DSCR(r13) + mtspr SPRN_DSCR, r29 +#endif + + /* Save all but r9, r13 & r29-r31 */ + reg = 0 + .rept 29 + .if (reg != 9) && (reg != 13) + std reg, VCPU_GPRS_TM(reg)(r9) + .endif + reg = reg + 1 + .endr + /* ... now save r13 */ + GET_SCRATCH0(r4) + std r4, VCPU_GPRS_TM(13)(r9) + /* ... and save r9 */ + ld r4, PACATMSCRATCH(r13) + std r4, VCPU_GPRS_TM(9)(r9) + + /* Reload stack pointer and TOC. */ + ld r1, HSTATE_SCRATCH2(r13) + ld r2, PACATOC(r13) + + /* Set MSR RI now we have r1 and r13 back. */ + li r5, MSR_RI + mtmsrd r5, 1 + + /* Save away checkpointed SPRs. */ + std r31, VCPU_PPR_TM(r9) + std r30, VCPU_DSCR_TM(r9) + mflr r5 + mfcr r6 + mfctr r7 + mfspr r8, SPRN_AMR + mfspr r10, SPRN_TAR + mfxer r11 + std r5, VCPU_LR_TM(r9) + stw r6, VCPU_CR_TM(r9) + std r7, VCPU_CTR_TM(r9) + std r8, VCPU_AMR_TM(r9) + std r10, VCPU_TAR_TM(r9) + std r11, VCPU_XER_TM(r9) + + /* Restore r12 as trap number. */ + lwz r12, VCPU_TRAP(r9) + + /* Save FP/VSX. */ + addi r3, r9, VCPU_FPRS_TM + bl store_fp_state + addi r3, r9, VCPU_VRS_TM + bl store_vr_state + mfspr r6, SPRN_VRSAVE + stw r6, VCPU_VRSAVE_TM(r9) +1: + /* + * We need to save these SPRs after the treclaim so that the software + * error code is recorded correctly in the TEXASR. Also the user may + * change these outside of a transaction, so they must always be + * context switched. + */ + mfspr r7, SPRN_TEXASR + std r7, VCPU_TEXASR(r9) +11: + mfspr r5, SPRN_TFHAR + mfspr r6, SPRN_TFIAR + std r5, VCPU_TFHAR(r9) + std r6, VCPU_TFIAR(r9) + + ld r0, PPC_LR_STKOFF(r1) + mtlr r0 + blr + +/* + * _kvmppc_save_tm_pr() is a wrapper around __kvmppc_save_tm(), so that it can + * be invoked from C function by PR KVM only. + */ +_GLOBAL(_kvmppc_save_tm_pr) + mflr r5 + std r5, PPC_LR_STKOFF(r1) + stdu r1, -SWITCH_FRAME_SIZE(r1) + SAVE_NVGPRS(r1) + + /* save MSR since TM/math bits might be impacted + * by __kvmppc_save_tm(). + */ + mfmsr r5 + SAVE_GPR(5, r1) + + /* also save DSCR/CR/TAR so that they can be recovered later */ + mfspr r6, SPRN_DSCR + SAVE_GPR(6, r1) + + mfcr r7 + stw r7, _CCR(r1) + + mfspr r8, SPRN_TAR + SAVE_GPR(8, r1) + + bl __kvmppc_save_tm + + REST_GPR(8, r1) + mtspr SPRN_TAR, r8 + + ld r7, _CCR(r1) + mtcr r7 + + REST_GPR(6, r1) + mtspr SPRN_DSCR, r6 + + /* need to preserve current MSR's MSR_TS bits */ + REST_GPR(5, r1) + mfmsr r6 + rldicl r6, r6, 64 - MSR_TS_S_LG, 62 + rldimi r5, r6, MSR_TS_S_LG, 63 - MSR_TS_T_LG + mtmsrd r5 + + REST_NVGPRS(r1) + addi r1, r1, SWITCH_FRAME_SIZE + ld r5, PPC_LR_STKOFF(r1) + mtlr r5 + blr + +EXPORT_SYMBOL_GPL(_kvmppc_save_tm_pr); + +/* + * Restore transactional state and TM-related registers. + * Called with: + * - r3 pointing to the vcpu struct. + * - r4 is the guest MSR with desired TS bits: + * For HV KVM, it is VCPU_MSR + * For PR KVM, it is provided by the caller + * This potentially modifies all checkpointed registers. + * It restores r1, r2 from the PACA. + */ +_GLOBAL(__kvmppc_restore_tm) + mflr r0 + std r0, PPC_LR_STKOFF(r1) + + /* Turn on TM/FP/VSX/VMX so we can restore them. */ + mfmsr r5 + li r6, MSR_TM >> 32 + sldi r6, r6, 32 + or r5, r5, r6 + ori r5, r5, MSR_FP + oris r5, r5, (MSR_VEC | MSR_VSX)@h + mtmsrd r5 + + /* + * The user may change these outside of a transaction, so they must + * always be context switched. + */ + ld r5, VCPU_TFHAR(r3) + ld r6, VCPU_TFIAR(r3) + ld r7, VCPU_TEXASR(r3) + mtspr SPRN_TFHAR, r5 + mtspr SPRN_TFIAR, r6 + mtspr SPRN_TEXASR, r7 + + mr r5, r4 + rldicl.
r5, r5, 64 - MSR_TS_S_LG, 62 + beqlr /* TM not active in guest */ + std r1, HSTATE_SCRATCH2(r13) + + /* Make sure the failure summary is set, otherwise we'll program check + * when we trechkpt. It's possible that this might not have been set + * on a kvmppc_set_one_reg() call but we shouldn't let this crash the + * host. + */ + oris r7, r7, (TEXASR_FS)@h + mtspr SPRN_TEXASR, r7 + + /* + * We need to load up the checkpointed state for the guest. + * We need to do this early as it will blow away any GPRs, VSRs and + * some SPRs. + */ + + mr r31, r3 + addi r3, r31, VCPU_FPRS_TM + bl load_fp_state + addi r3, r31, VCPU_VRS_TM + bl load_vr_state + mr r3, r31 + lwz r7, VCPU_VRSAVE_TM(r3) + mtspr SPRN_VRSAVE, r7 + + ld r5, VCPU_LR_TM(r3) + lwz r6, VCPU_CR_TM(r3) + ld r7, VCPU_CTR_TM(r3) + ld r8, VCPU_AMR_TM(r3) + ld r9, VCPU_TAR_TM(r3) + ld r10, VCPU_XER_TM(r3) + mtlr r5 + mtcr r6 + mtctr r7 + mtspr SPRN_AMR, r8 + mtspr SPRN_TAR, r9 + mtxer r10 + + /* + * Load up PPR and DSCR values but don't put them in the actual SPRs + * till the last moment to avoid running with userspace PPR and DSCR for + * too long. + */ + ld r29, VCPU_DSCR_TM(r3) + ld r30, VCPU_PPR_TM(r3) + + std r2, PACATMSCRATCH(r13) /* Save TOC */ + + /* Clear the MSR RI since r1, r13 are all going to be foobar. */ + li r5, 0 + mtmsrd r5, 1 + + /* Load GPRs r0-r28 */ + reg = 0 + .rept 29 + ld reg, VCPU_GPRS_TM(reg)(r31) + reg = reg + 1 + .endr + + mtspr SPRN_DSCR, r29 + mtspr SPRN_PPR, r30 + + /* Load final GPRs */ + ld 29, VCPU_GPRS_TM(29)(r31) + ld 30, VCPU_GPRS_TM(30)(r31) + ld 31, VCPU_GPRS_TM(31)(r31) + + /* TM checkpointed state is now set up. All GPRs are now volatile. */ + TRECHKPT + + /* Now let's get back the state we need. */ + HMT_MEDIUM + GET_PACA(r13) +#ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE + ld r29, HSTATE_DSCR(r13) + mtspr SPRN_DSCR, r29 +#endif + ld r1, HSTATE_SCRATCH2(r13) + ld r2, PACATMSCRATCH(r13) + + /* Set the MSR RI since we have our registers back. */ + li r5, MSR_RI + mtmsrd r5, 1 + ld r0, PPC_LR_STKOFF(r1) + mtlr r0 + blr + +/* + * _kvmppc_restore_tm_pr() is a wrapper around __kvmppc_restore_tm(), so that it + * can be invoked from C function by PR KVM only.
+ */ +_GLOBAL(_kvmppc_restore_tm_pr) + mflr r5 + std r5, PPC_LR_STKOFF(r1) + stdu r1, -SWITCH_FRAME_SIZE(r1) + SAVE_NVGPRS(r1) + + /* save MSR to avoid TM/math bit changes */ + mfmsr r5 + SAVE_GPR(5, r1) + + /* also save DSCR/CR/TAR so that they can be recovered later */ + mfspr r6, SPRN_DSCR + SAVE_GPR(6, r1) + + mfcr r7 + stw r7, _CCR(r1) + + mfspr r8, SPRN_TAR + SAVE_GPR(8, r1) + + bl __kvmppc_restore_tm + + REST_GPR(8, r1) + mtspr SPRN_TAR, r8 + + ld r7, _CCR(r1) + mtcr r7 + + REST_GPR(6, r1) + mtspr SPRN_DSCR, r6 + + /* need to preserve current MSR's MSR_TS bits */ + REST_GPR(5, r1) + mfmsr r6 + rldicl r6, r6, 64 - MSR_TS_S_LG, 62 + rldimi r5, r6, MSR_TS_S_LG, 63 - MSR_TS_T_LG + mtmsrd r5 + + REST_NVGPRS(r1) + addi r1, r1, SWITCH_FRAME_SIZE + ld r5, PPC_LR_STKOFF(r1) + mtlr r5 + blr + +EXPORT_SYMBOL_GPL(_kvmppc_restore_tm_pr); +#endif /* CONFIG_PPC_TRANSACTIONAL_MEM */ diff --git a/arch/powerpc/mm/hugetlbpage.c b/arch/powerpc/mm/hugetlbpage.c index 7c5f479c5c00..8a9a49c13865 100644 --- a/arch/powerpc/mm/hugetlbpage.c +++ b/arch/powerpc/mm/hugetlbpage.c @@ -337,7 +337,8 @@ static void free_hugepd_range(struct mmu_gather *tlb, hugepd_t *hpdp, int pdshif if (shift >= pdshift) hugepd_free(tlb, hugepte); else - pgtable_free_tlb(tlb, hugepte, pdshift - shift); + pgtable_free_tlb(tlb, hugepte, + get_hugepd_cache_index(pdshift - shift)); } static void hugetlb_free_pmd_range(struct mmu_gather *tlb, pud_t *pud, diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c index 8cecda4bd66a..5c8530d0c611 100644 --- a/arch/powerpc/mm/mem.c +++ b/arch/powerpc/mm/mem.c @@ -215,7 +215,7 @@ void __init mem_topology_setup(void) /* Place all memblock_regions in the same node and merge contiguous * memblock_regions */ - memblock_set_node(0, (phys_addr_t)ULLONG_MAX, &memblock.memory, 0); + memblock_set_node(0, PHYS_ADDR_MAX, &memblock.memory, 0); } void __init initmem_init(void) diff --git a/arch/powerpc/mm/mmu_context_iommu.c b/arch/powerpc/mm/mmu_context_iommu.c index abb43646927a..a4ca57612558 100644 --- a/arch/powerpc/mm/mmu_context_iommu.c +++ b/arch/powerpc/mm/mmu_context_iommu.c @@ -19,6 +19,7 @@ #include #include #include +#include static DEFINE_MUTEX(mem_list_mutex); @@ -27,6 +28,7 @@ struct mm_iommu_table_group_mem_t { struct rcu_head rcu; unsigned long used; atomic64_t mapped; + unsigned int pageshift; u64 ua; /* userspace address */ u64 entries; /* number of entries in hpas[] */ u64 *hpas; /* vmalloc'ed */ @@ -125,6 +127,8 @@ long mm_iommu_get(struct mm_struct *mm, unsigned long ua, unsigned long entries, { struct mm_iommu_table_group_mem_t *mem; long i, j, ret = 0, locked_entries = 0; + unsigned int pageshift; + unsigned long flags; struct page *page = NULL; mutex_lock(&mem_list_mutex); @@ -159,6 +163,12 @@ long mm_iommu_get(struct mm_struct *mm, unsigned long ua, unsigned long entries, goto unlock_exit; } + /* + * As a starting point for a maximum page size calculation + * we use @ua and @entries natural alignment to allow IOMMU pages + * smaller than huge pages but still bigger than PAGE_SIZE.
+ */ + mem->pageshift = __ffs(ua | (entries << PAGE_SHIFT)); mem->hpas = vzalloc(array_size(entries, sizeof(mem->hpas[0]))); if (!mem->hpas) { kfree(mem); @@ -199,6 +209,23 @@ long mm_iommu_get(struct mm_struct *mm, unsigned long ua, unsigned long entries, } } populate: + pageshift = PAGE_SHIFT; + if (PageCompound(page)) { + pte_t *pte; + struct page *head = compound_head(page); + unsigned int compshift = compound_order(head); + + local_irq_save(flags); /* disables as well */ + pte = find_linux_pte(mm->pgd, ua, NULL, &pageshift); + local_irq_restore(flags); + + /* Double check it is still the same pinned page */ + if (pte && pte_page(*pte) == head && + pageshift == compshift) + pageshift = max_t(unsigned int, pageshift, + PAGE_SHIFT); + } + mem->pageshift = min(mem->pageshift, pageshift); mem->hpas[i] = page_to_pfn(page) << PAGE_SHIFT; } @@ -349,7 +376,7 @@ struct mm_iommu_table_group_mem_t *mm_iommu_find(struct mm_struct *mm, EXPORT_SYMBOL_GPL(mm_iommu_find); long mm_iommu_ua_to_hpa(struct mm_iommu_table_group_mem_t *mem, - unsigned long ua, unsigned long *hpa) + unsigned long ua, unsigned int pageshift, unsigned long *hpa) { const long entry = (ua - mem->ua) >> PAGE_SHIFT; u64 *va = &mem->hpas[entry]; @@ -357,6 +384,9 @@ long mm_iommu_ua_to_hpa(struct mm_iommu_table_group_mem_t *mem, if (entry >= mem->entries) return -EFAULT; + if (pageshift > mem->pageshift) + return -EFAULT; + *hpa = *va | (ua & ~PAGE_MASK); return 0; @@ -364,7 +394,7 @@ long mm_iommu_ua_to_hpa(struct mm_iommu_table_group_mem_t *mem, EXPORT_SYMBOL_GPL(mm_iommu_ua_to_hpa); long mm_iommu_ua_to_hpa_rm(struct mm_iommu_table_group_mem_t *mem, - unsigned long ua, unsigned long *hpa) + unsigned long ua, unsigned int pageshift, unsigned long *hpa) { const long entry = (ua - mem->ua) >> PAGE_SHIFT; void *va = &mem->hpas[entry]; @@ -373,6 +403,9 @@ long mm_iommu_ua_to_hpa_rm(struct mm_iommu_table_group_mem_t *mem, if (entry >= mem->entries) return -EFAULT; + if (pageshift > mem->pageshift) + return -EFAULT; + pa = (void *) vmalloc_to_phys(va); if (!pa) return -EFAULT; diff --git a/arch/powerpc/mm/pgtable-book3s64.c b/arch/powerpc/mm/pgtable-book3s64.c index c1f4ca45c93a..4afbfbb64bfd 100644 --- a/arch/powerpc/mm/pgtable-book3s64.c +++ b/arch/powerpc/mm/pgtable-book3s64.c @@ -409,6 +409,18 @@ static inline void pgtable_free(void *table, int index) case PUD_INDEX: kmem_cache_free(PGT_CACHE(PUD_CACHE_INDEX), table); break; +#if defined(CONFIG_PPC_4K_PAGES) && defined(CONFIG_HUGETLB_PAGE) + /* 16M hugepd directory at pud level */ + case HTLB_16M_INDEX: + BUILD_BUG_ON(H_16M_CACHE_INDEX <= 0); + kmem_cache_free(PGT_CACHE(H_16M_CACHE_INDEX), table); + break; + /* 16G hugepd directory at the pgd level */ + case HTLB_16G_INDEX: + BUILD_BUG_ON(H_16G_CACHE_INDEX <= 0); + kmem_cache_free(PGT_CACHE(H_16G_CACHE_INDEX), table); + break; +#endif /* We don't free pgd table via RCU callback */ default: BUG(); diff --git a/arch/powerpc/mm/subpage-prot.c b/arch/powerpc/mm/subpage-prot.c index 75cb646a79c3..9d16ee251fc0 100644 --- a/arch/powerpc/mm/subpage-prot.c +++ b/arch/powerpc/mm/subpage-prot.c @@ -186,9 +186,6 @@ static void subpage_mark_vma_nohuge(struct mm_struct *mm, unsigned long addr, * in a 2-bit field won't allow writes to a page that is otherwise * write-protected. 
*/ -#pragma GCC diagnostic push -#pragma GCC diagnostic ignored "-Wpragmas" -#pragma GCC diagnostic ignored "-Wattribute-alias" SYSCALL_DEFINE3(subpage_prot, unsigned long, addr, unsigned long, len, u32 __user *, map) { @@ -272,4 +269,3 @@ SYSCALL_DEFINE3(subpage_prot, unsigned long, addr, up_write(&mm->mmap_sem); return err; } -#pragma GCC diagnostic pop diff --git a/arch/powerpc/mm/tlb-radix.c b/arch/powerpc/mm/tlb-radix.c index 67a6e86d3e7e..1135b43a597c 100644 --- a/arch/powerpc/mm/tlb-radix.c +++ b/arch/powerpc/mm/tlb-radix.c @@ -689,22 +689,17 @@ EXPORT_SYMBOL(radix__flush_tlb_kernel_range); static unsigned long tlb_single_page_flush_ceiling __read_mostly = 33; static unsigned long tlb_local_single_page_flush_ceiling __read_mostly = POWER9_TLB_SETS_RADIX * 2; -void radix__flush_tlb_range(struct vm_area_struct *vma, unsigned long start, - unsigned long end) +static inline void __radix__flush_tlb_range(struct mm_struct *mm, + unsigned long start, unsigned long end, + bool flush_all_sizes) { - struct mm_struct *mm = vma->vm_mm; unsigned long pid; unsigned int page_shift = mmu_psize_defs[mmu_virtual_psize].shift; unsigned long page_size = 1UL << page_shift; unsigned long nr_pages = (end - start) >> page_shift; bool local, full; -#ifdef CONFIG_HUGETLB_PAGE - if (is_vm_hugetlb_page(vma)) - return radix__flush_hugetlb_tlb_range(vma, start, end); -#endif - pid = mm->context.id; if (unlikely(pid == MMU_NO_CONTEXT)) return; @@ -738,37 +733,64 @@ is_local: _tlbie_pid(pid, RIC_FLUSH_TLB); } } else { - bool hflush = false; + bool hflush = flush_all_sizes; + bool gflush = flush_all_sizes; unsigned long hstart, hend; + unsigned long gstart, gend; -#ifdef CONFIG_TRANSPARENT_HUGEPAGE - hstart = (start + HPAGE_PMD_SIZE - 1) >> HPAGE_PMD_SHIFT; - hend = end >> HPAGE_PMD_SHIFT; - if (hstart < hend) { - hstart <<= HPAGE_PMD_SHIFT; - hend <<= HPAGE_PMD_SHIFT; + if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE)) hflush = true; + + if (hflush) { + hstart = (start + PMD_SIZE - 1) & PMD_MASK; + hend = end & PMD_MASK; + if (hstart == hend) + hflush = false; + } + + if (gflush) { + gstart = (start + PUD_SIZE - 1) & PUD_MASK; + gend = end & PUD_MASK; + if (gstart == gend) + gflush = false; } -#endif asm volatile("ptesync": : :"memory"); if (local) { __tlbiel_va_range(start, end, pid, page_size, mmu_virtual_psize); if (hflush) __tlbiel_va_range(hstart, hend, pid, - HPAGE_PMD_SIZE, MMU_PAGE_2M); + PMD_SIZE, MMU_PAGE_2M); + if (gflush) + __tlbiel_va_range(gstart, gend, pid, + PUD_SIZE, MMU_PAGE_1G); asm volatile("ptesync": : :"memory"); } else { __tlbie_va_range(start, end, pid, page_size, mmu_virtual_psize); if (hflush) __tlbie_va_range(hstart, hend, pid, - HPAGE_PMD_SIZE, MMU_PAGE_2M); + PMD_SIZE, MMU_PAGE_2M); + if (gflush) + __tlbie_va_range(gstart, gend, pid, + PUD_SIZE, MMU_PAGE_1G); fixup_tlbie(); asm volatile("eieio; tlbsync; ptesync": : :"memory"); } } preempt_enable(); } + +void radix__flush_tlb_range(struct vm_area_struct *vma, unsigned long start, + unsigned long end) + +{ +#ifdef CONFIG_HUGETLB_PAGE + if (is_vm_hugetlb_page(vma)) + return radix__flush_hugetlb_tlb_range(vma, start, end); +#endif + + __radix__flush_tlb_range(vma->vm_mm, start, end, false); +} EXPORT_SYMBOL(radix__flush_tlb_range); static int radix_get_mmu_psize(int page_size) @@ -837,6 +859,8 @@ void radix__tlb_flush(struct mmu_gather *tlb) int psize = 0; struct mm_struct *mm = tlb->mm; int page_size = tlb->page_size; + unsigned long start = tlb->start; + unsigned long end = tlb->end; /* * if page size is not something we understand, do a full 
mm flush @@ -847,15 +871,45 @@ void radix__tlb_flush(struct mmu_gather *tlb) */ if (tlb->fullmm) { __flush_all_mm(mm, true); +#if defined(CONFIG_TRANSPARENT_HUGEPAGE) || defined(CONFIG_HUGETLB_PAGE) + } else if (mm_tlb_flush_nested(mm)) { + /* + * If there is a concurrent invalidation that is clearing ptes, + * then it's possible this invalidation will miss one of those + * cleared ptes and miss flushing the TLB. If this invalidate + * returns before the other one flushes TLBs, that can result + * in it returning while there are still valid TLBs inside the + * range to be invalidated. + * + * See mm/memory.c:tlb_finish_mmu() for more details. + * + * The solution to this is to ensure the entire range is always + * flushed here. The problem for powerpc is that the flushes + * are page size specific, so this "forced flush" would not + * do the right thing if there is a mix of page sizes in + * the range to be invalidated. So use __flush_tlb_range + * which invalidates all possible page sizes in the range. + * + * A PWC flush is probably not required because the core code + * shouldn't free page tables in this path, but accounting + * for the possibility makes us a bit more robust. + * + * need_flush_all is an uncommon case because page table + * teardown should be done with exclusive locks held (but + * after locks are dropped another invalidate could come + * in), so it could be optimized further if necessary. + */ + if (!tlb->need_flush_all) + __radix__flush_tlb_range(mm, start, end, true); + else + radix__flush_all_mm(mm); +#endif } else if ( (psize = radix_get_mmu_psize(page_size)) == -1) { if (!tlb->need_flush_all) radix__flush_tlb_mm(mm); else radix__flush_all_mm(mm); } else { - unsigned long start = tlb->start; - unsigned long end = tlb->end; - if (!tlb->need_flush_all) radix__flush_tlb_range_psize(mm, start, end, psize); else @@ -1043,6 +1097,8 @@ extern void radix_kvm_prefetch_workaround(struct mm_struct *mm) for (; sib <= cpu_last_thread_sibling(cpu) && !flush; sib++) { if (sib == cpu) continue; + if (!cpu_possible(sib)) + continue; if (paca_ptrs[sib]->kvm_hstate.kvm_vcpu) flush = true; } diff --git a/arch/powerpc/net/bpf_jit_comp64.c b/arch/powerpc/net/bpf_jit_comp64.c index 380cbf9a40d9..c0a9bcd28356 100644 --- a/arch/powerpc/net/bpf_jit_comp64.c +++ b/arch/powerpc/net/bpf_jit_comp64.c @@ -286,6 +286,7 @@ static int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, u64 imm64; u8 *func; u32 true_cond; + u32 tmp_idx; /* * addrs[] maps a BPF bytecode address into a real offset from @@ -637,11 +638,7 @@ emit_clear: case BPF_STX | BPF_XADD | BPF_W: /* Get EA into TMP_REG_1 */ PPC_ADDI(b2p[TMP_REG_1], dst_reg, off); - /* error if EA is not word-aligned */ - PPC_ANDI(b2p[TMP_REG_2], b2p[TMP_REG_1], 0x03); - PPC_BCC_SHORT(COND_EQ, (ctx->idx * 4) + 12); - PPC_LI(b2p[BPF_REG_0], 0); - PPC_JMP(exit_addr); + tmp_idx = ctx->idx * 4; /* load value from memory into TMP_REG_2 */ PPC_BPF_LWARX(b2p[TMP_REG_2], 0, b2p[TMP_REG_1], 0); /* add value from src_reg into this */ @@ -649,32 +646,16 @@ emit_clear: /* store result back */ PPC_BPF_STWCX(b2p[TMP_REG_2], 0, b2p[TMP_REG_1]); /* we're done if this succeeded */ - PPC_BCC_SHORT(COND_EQ, (ctx->idx * 4) + (7*4)); - /* otherwise, let's try once more */ - PPC_BPF_LWARX(b2p[TMP_REG_2], 0, b2p[TMP_REG_1], 0); - PPC_ADD(b2p[TMP_REG_2], b2p[TMP_REG_2], src_reg); - PPC_BPF_STWCX(b2p[TMP_REG_2], 0, b2p[TMP_REG_1]); - /* exit if the store was not successful */ - PPC_LI(b2p[BPF_REG_0], 0); - PPC_BCC(COND_NE, exit_addr); + PPC_BCC_SHORT(COND_NE, tmp_idx); break;
/* *(u64 *)(dst + off) += src */ case BPF_STX | BPF_XADD | BPF_DW: PPC_ADDI(b2p[TMP_REG_1], dst_reg, off); - /* error if EA is not doubleword-aligned */ - PPC_ANDI(b2p[TMP_REG_2], b2p[TMP_REG_1], 0x07); - PPC_BCC_SHORT(COND_EQ, (ctx->idx * 4) + (3*4)); - PPC_LI(b2p[BPF_REG_0], 0); - PPC_JMP(exit_addr); - PPC_BPF_LDARX(b2p[TMP_REG_2], 0, b2p[TMP_REG_1], 0); - PPC_ADD(b2p[TMP_REG_2], b2p[TMP_REG_2], src_reg); - PPC_BPF_STDCX(b2p[TMP_REG_2], 0, b2p[TMP_REG_1]); - PPC_BCC_SHORT(COND_EQ, (ctx->idx * 4) + (7*4)); + tmp_idx = ctx->idx * 4; PPC_BPF_LDARX(b2p[TMP_REG_2], 0, b2p[TMP_REG_1], 0); PPC_ADD(b2p[TMP_REG_2], b2p[TMP_REG_2], src_reg); PPC_BPF_STDCX(b2p[TMP_REG_2], 0, b2p[TMP_REG_1]); - PPC_LI(b2p[BPF_REG_0], 0); - PPC_BCC(COND_NE, exit_addr); + PPC_BCC_SHORT(COND_NE, tmp_idx); break; /* diff --git a/arch/powerpc/perf/core-book3s.c b/arch/powerpc/perf/core-book3s.c index 3f66fcf8ad99..19d8ab49d1bd 100644 --- a/arch/powerpc/perf/core-book3s.c +++ b/arch/powerpc/perf/core-book3s.c @@ -1469,7 +1469,7 @@ static int collect_events(struct perf_event *group, int max_count, } /* - * Add a event to the PMU. + * Add an event to the PMU. * If all events are not already frozen, then we disable and * re-enable the PMU in order to get hw_perf_enable to do the * actual work of reconfiguring the PMU. @@ -1548,7 +1548,7 @@ nocheck: } /* - * Remove a event from the PMU. + * Remove an event from the PMU. */ static void power_pmu_del(struct perf_event *event, int ef_flags) { @@ -1742,7 +1742,7 @@ static int power_pmu_commit_txn(struct pmu *pmu) /* * Return 1 if we might be able to put event on a limited PMC, * or 0 if not. - * A event can only go on a limited PMC if it counts something + * An event can only go on a limited PMC if it counts something * that a limited PMC can count, doesn't require interrupts, and * doesn't exclude any processor mode. */ diff --git a/arch/powerpc/platforms/powermac/time.c b/arch/powerpc/platforms/powermac/time.c index 7c968e46736f..12e6e4d30602 100644 --- a/arch/powerpc/platforms/powermac/time.c +++ b/arch/powerpc/platforms/powermac/time.c @@ -42,7 +42,11 @@ #define DBG(x...) #endif -/* Apparently the RTC stores seconds since 1 Jan 1904 */ +/* + * Offset between Unix time (1970-based) and Mac time (1904-based). Cuda and PMU + * times wrap in 2040. If we need to handle later times, the read_time functions + * need to be changed to interpret wrapped times as post-2040. 
+ */ #define RTC_OFFSET 2082844800 /* @@ -97,8 +101,11 @@ static time64_t cuda_get_time(void) if (req.reply_len != 7) printk(KERN_ERR "cuda_get_time: got %d byte reply\n", req.reply_len); - now = (req.reply[3] << 24) + (req.reply[4] << 16) - + (req.reply[5] << 8) + req.reply[6]; + now = (u32)((req.reply[3] << 24) + (req.reply[4] << 16) + + (req.reply[5] << 8) + req.reply[6]); + /* it's either after year 2040, or the RTC has gone backwards */ + WARN_ON(now < RTC_OFFSET); + return now - RTC_OFFSET; } @@ -106,10 +113,10 @@ static time64_t cuda_get_time(void) static int cuda_set_rtc_time(struct rtc_time *tm) { - time64_t nowtime; + u32 nowtime; struct adb_request req; - nowtime = rtc_tm_to_time64(tm) + RTC_OFFSET; + nowtime = lower_32_bits(rtc_tm_to_time64(tm) + RTC_OFFSET); if (cuda_request(&req, NULL, 6, CUDA_PACKET, CUDA_SET_TIME, nowtime >> 24, nowtime >> 16, nowtime >> 8, nowtime) < 0) @@ -140,8 +147,12 @@ static time64_t pmu_get_time(void) if (req.reply_len != 4) printk(KERN_ERR "pmu_get_time: got %d byte reply from PMU\n", req.reply_len); - now = (req.reply[0] << 24) + (req.reply[1] << 16) - + (req.reply[2] << 8) + req.reply[3]; + now = (u32)((req.reply[0] << 24) + (req.reply[1] << 16) + + (req.reply[2] << 8) + req.reply[3]); + + /* it's either after year 2040, or the RTC has gone backwards */ + WARN_ON(now < RTC_OFFSET); + return now - RTC_OFFSET; } @@ -149,10 +160,10 @@ static time64_t pmu_get_time(void) static int pmu_set_rtc_time(struct rtc_time *tm) { - time64_t nowtime; + u32 nowtime; struct adb_request req; - nowtime = rtc_tm_to_time64(tm) + RTC_OFFSET; + nowtime = lower_32_bits(rtc_tm_to_time64(tm) + RTC_OFFSET); if (pmu_request(&req, NULL, 5, PMU_SET_RTC, nowtime >> 24, nowtime >> 16, nowtime >> 8, nowtime) < 0) return -ENXIO; diff --git a/arch/powerpc/platforms/powernv/pci-ioda.c b/arch/powerpc/platforms/powernv/pci-ioda.c index 5bd0eb6681bc..70b2e1e0f23c 100644 --- a/arch/powerpc/platforms/powernv/pci-ioda.c +++ b/arch/powerpc/platforms/powernv/pci-ioda.c @@ -46,6 +46,7 @@ #include "powernv.h" #include "pci.h" +#include "../../../../drivers/pci/pci.h" #define PNV_IODA1_M64_NUM 16 /* Number of M64 BARs */ #define PNV_IODA1_M64_SEGS 8 /* Segments per M64 BAR */ @@ -3138,7 +3139,7 @@ static void pnv_pci_ioda_fixup_iov_resources(struct pci_dev *pdev) struct pci_dn *pdn; int mul, total_vfs; - if (!pdev->is_physfn || pdev->is_added) + if (!pdev->is_physfn || pci_dev_is_added(pdev)) return; pdn = pci_get_pdn(pdev); diff --git a/arch/powerpc/platforms/pseries/setup.c b/arch/powerpc/platforms/pseries/setup.c index 139f0af6c3d9..8a4868a3964b 100644 --- a/arch/powerpc/platforms/pseries/setup.c +++ b/arch/powerpc/platforms/pseries/setup.c @@ -71,6 +71,7 @@ #include #include "pseries.h" +#include "../../../../drivers/pci/pci.h" int CMO_PrPSP = -1; int CMO_SecPSP = -1; @@ -664,7 +665,7 @@ static void pseries_pci_fixup_iov_resources(struct pci_dev *pdev) const int *indexes; struct device_node *dn = pci_device_to_OF_node(pdev); - if (!pdev->is_physfn || pdev->is_added) + if (!pdev->is_physfn || pci_dev_is_added(pdev)) return; /*Firmware must support open sriov otherwise dont configure*/ indexes = of_get_property(dn, "ibm,open-sriov-vf-bar-info", NULL); diff --git a/arch/powerpc/xmon/xmon.c b/arch/powerpc/xmon/xmon.c index 47166ad2a669..196978733e64 100644 --- a/arch/powerpc/xmon/xmon.c +++ b/arch/powerpc/xmon/xmon.c @@ -2734,7 +2734,7 @@ generic_inst_dump(unsigned long adr, long count, int praddr, { int nr, dotted; unsigned long first_adr; - unsigned long inst, last_inst = 0; + unsigned int 
inst, last_inst = 0; unsigned char val[4]; dotted = 0; @@ -2758,7 +2758,7 @@ generic_inst_dump(unsigned long adr, long count, int praddr, dotted = 0; last_inst = inst; if (praddr) - printf(REG" %.8lx", adr, inst); + printf(REG" %.8x", adr, inst); printf("\t"); dump_func(inst, adr); printf("\n"); diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig index 42e581a268e1..4764fdeb4f1f 100644 --- a/arch/riscv/Kconfig +++ b/arch/riscv/Kconfig @@ -32,6 +32,7 @@ config RISCV select HAVE_MEMBLOCK_NODE_MAP select HAVE_DMA_CONTIGUOUS select HAVE_GENERIC_DMA_COHERENT + select HAVE_PERF_EVENTS select IRQ_DOMAIN select NO_BOOTMEM select RISCV_ISA_A if SMP @@ -106,6 +107,7 @@ config ARCH_RV32I select GENERIC_LIB_ASHLDI3 select GENERIC_LIB_ASHRDI3 select GENERIC_LIB_LSHRDI3 + select GENERIC_LIB_UCMPDI2 config ARCH_RV64I bool "RV64I" @@ -193,6 +195,19 @@ config RISCV_ISA_C config RISCV_ISA_A def_bool y +menu "supported PMU type" + depends on PERF_EVENTS + +config RISCV_BASE_PMU + bool "Base Performance Monitoring Unit" + def_bool y + help + A base PMU that serves as a reference implementation, with limited + perf functionality. It can run on any RISC-V machine, so it serves + as the fallback; this option can also be disabled to reduce kernel size. + +endmenu + endmenu menu "Kernel type" diff --git a/arch/riscv/Makefile b/arch/riscv/Makefile index 76e958a5414a..6d4a5f6c3f4f 100644 --- a/arch/riscv/Makefile +++ b/arch/riscv/Makefile @@ -71,6 +71,9 @@ KBUILD_CFLAGS_MODULE += $(call cc-option,-mno-relax) # architectures. It's faster to have GCC emit only aligned accesses. KBUILD_CFLAGS += $(call cc-option,-mstrict-align) +# arch specific predefines for sparse +CHECKFLAGS += -D__riscv -D__riscv_xlen=$(BITS) + head-y := arch/riscv/kernel/head.o core-y += arch/riscv/kernel/ arch/riscv/mm/ diff --git a/arch/riscv/configs/defconfig b/arch/riscv/configs/defconfig index bca0eee733b0..07326466871b 100644 --- a/arch/riscv/configs/defconfig +++ b/arch/riscv/configs/defconfig @@ -44,6 +44,7 @@ CONFIG_INPUT_MOUSEDEV=y CONFIG_SERIAL_8250=y CONFIG_SERIAL_8250_CONSOLE=y CONFIG_SERIAL_OF_PLATFORM=y +CONFIG_HVC_RISCV_SBI=y # CONFIG_PTP_1588_CLOCK is not set CONFIG_DRM=y CONFIG_DRM_RADEON=y diff --git a/arch/riscv/include/asm/Kbuild b/arch/riscv/include/asm/Kbuild index 4286a5f83876..576ffdca06ba 100644 --- a/arch/riscv/include/asm/Kbuild +++ b/arch/riscv/include/asm/Kbuild @@ -25,6 +25,7 @@ generic-y += kdebug.h generic-y += kmap_types.h generic-y += kvm_para.h generic-y += local.h +generic-y += local64.h generic-y += mm-arch-hooks.h generic-y += mman.h generic-y += module.h diff --git a/arch/riscv/include/asm/atomic.h b/arch/riscv/include/asm/atomic.h index 855115ace98c..c452359c9cb8 100644 --- a/arch/riscv/include/asm/atomic.h +++ b/arch/riscv/include/asm/atomic.h @@ -25,18 +25,11 @@ #define ATOMIC_INIT(i) { (i) } -#define __atomic_op_acquire(op, args...) \ -({ \ - typeof(op##_relaxed(args)) __ret = op##_relaxed(args); \ - __asm__ __volatile__(RISCV_ACQUIRE_BARRIER "" ::: "memory"); \ - __ret; \ -}) - -#define __atomic_op_release(op, args...)
\ -({ \ - __asm__ __volatile__(RISCV_RELEASE_BARRIER "" ::: "memory"); \ - op##_relaxed(args); \ -}) +#define __atomic_acquire_fence() \ + __asm__ __volatile__(RISCV_ACQUIRE_BARRIER "" ::: "memory") + +#define __atomic_release_fence() \ + __asm__ __volatile__(RISCV_RELEASE_BARRIER "" ::: "memory"); static __always_inline int atomic_read(const atomic_t *v) { @@ -209,130 +202,8 @@ ATOMIC_OPS(xor, xor, i) #undef ATOMIC_FETCH_OP #undef ATOMIC_OP_RETURN -/* - * The extra atomic operations that are constructed from one of the core - * AMO-based operations above (aside from sub, which is easier to fit above). - * These are required to perform a full barrier, but they're OK this way - * because atomic_*_return is also required to perform a full barrier. - * - */ -#define ATOMIC_OP(op, func_op, comp_op, I, c_type, prefix) \ -static __always_inline \ -bool atomic##prefix##_##op(c_type i, atomic##prefix##_t *v) \ -{ \ - return atomic##prefix##_##func_op##_return(i, v) comp_op I; \ -} - -#ifdef CONFIG_GENERIC_ATOMIC64 -#define ATOMIC_OPS(op, func_op, comp_op, I) \ - ATOMIC_OP(op, func_op, comp_op, I, int, ) -#else -#define ATOMIC_OPS(op, func_op, comp_op, I) \ - ATOMIC_OP(op, func_op, comp_op, I, int, ) \ - ATOMIC_OP(op, func_op, comp_op, I, long, 64) -#endif - -ATOMIC_OPS(add_and_test, add, ==, 0) -ATOMIC_OPS(sub_and_test, sub, ==, 0) -ATOMIC_OPS(add_negative, add, <, 0) - -#undef ATOMIC_OP -#undef ATOMIC_OPS - -#define ATOMIC_OP(op, func_op, I, c_type, prefix) \ -static __always_inline \ -void atomic##prefix##_##op(atomic##prefix##_t *v) \ -{ \ - atomic##prefix##_##func_op(I, v); \ -} - -#define ATOMIC_FETCH_OP(op, func_op, I, c_type, prefix) \ -static __always_inline \ -c_type atomic##prefix##_fetch_##op##_relaxed(atomic##prefix##_t *v) \ -{ \ - return atomic##prefix##_fetch_##func_op##_relaxed(I, v); \ -} \ -static __always_inline \ -c_type atomic##prefix##_fetch_##op(atomic##prefix##_t *v) \ -{ \ - return atomic##prefix##_fetch_##func_op(I, v); \ -} - -#define ATOMIC_OP_RETURN(op, asm_op, c_op, I, c_type, prefix) \ -static __always_inline \ -c_type atomic##prefix##_##op##_return_relaxed(atomic##prefix##_t *v) \ -{ \ - return atomic##prefix##_fetch_##op##_relaxed(v) c_op I; \ -} \ -static __always_inline \ -c_type atomic##prefix##_##op##_return(atomic##prefix##_t *v) \ -{ \ - return atomic##prefix##_fetch_##op(v) c_op I; \ -} - -#ifdef CONFIG_GENERIC_ATOMIC64 -#define ATOMIC_OPS(op, asm_op, c_op, I) \ - ATOMIC_OP( op, asm_op, I, int, ) \ - ATOMIC_FETCH_OP( op, asm_op, I, int, ) \ - ATOMIC_OP_RETURN(op, asm_op, c_op, I, int, ) -#else -#define ATOMIC_OPS(op, asm_op, c_op, I) \ - ATOMIC_OP( op, asm_op, I, int, ) \ - ATOMIC_FETCH_OP( op, asm_op, I, int, ) \ - ATOMIC_OP_RETURN(op, asm_op, c_op, I, int, ) \ - ATOMIC_OP( op, asm_op, I, long, 64) \ - ATOMIC_FETCH_OP( op, asm_op, I, long, 64) \ - ATOMIC_OP_RETURN(op, asm_op, c_op, I, long, 64) -#endif - -ATOMIC_OPS(inc, add, +, 1) -ATOMIC_OPS(dec, add, +, -1) - -#define atomic_inc_return_relaxed atomic_inc_return_relaxed -#define atomic_dec_return_relaxed atomic_dec_return_relaxed -#define atomic_inc_return atomic_inc_return -#define atomic_dec_return atomic_dec_return - -#define atomic_fetch_inc_relaxed atomic_fetch_inc_relaxed -#define atomic_fetch_dec_relaxed atomic_fetch_dec_relaxed -#define atomic_fetch_inc atomic_fetch_inc -#define atomic_fetch_dec atomic_fetch_dec - -#ifndef CONFIG_GENERIC_ATOMIC64 -#define atomic64_inc_return_relaxed atomic64_inc_return_relaxed -#define atomic64_dec_return_relaxed atomic64_dec_return_relaxed -#define 
atomic64_inc_return atomic64_inc_return -#define atomic64_dec_return atomic64_dec_return - -#define atomic64_fetch_inc_relaxed atomic64_fetch_inc_relaxed -#define atomic64_fetch_dec_relaxed atomic64_fetch_dec_relaxed -#define atomic64_fetch_inc atomic64_fetch_inc -#define atomic64_fetch_dec atomic64_fetch_dec -#endif - -#undef ATOMIC_OPS -#undef ATOMIC_OP -#undef ATOMIC_FETCH_OP -#undef ATOMIC_OP_RETURN - -#define ATOMIC_OP(op, func_op, comp_op, I, prefix) \ -static __always_inline \ -bool atomic##prefix##_##op(atomic##prefix##_t *v) \ -{ \ - return atomic##prefix##_##func_op##_return(v) comp_op I; \ -} - -ATOMIC_OP(inc_and_test, inc, ==, 0, ) -ATOMIC_OP(dec_and_test, dec, ==, 0, ) -#ifndef CONFIG_GENERIC_ATOMIC64 -ATOMIC_OP(inc_and_test, inc, ==, 0, 64) -ATOMIC_OP(dec_and_test, dec, ==, 0, 64) -#endif - -#undef ATOMIC_OP - /* This is required to provide a full barrier on success. */ -static __always_inline int __atomic_add_unless(atomic_t *v, int a, int u) +static __always_inline int atomic_fetch_add_unless(atomic_t *v, int a, int u) { int prev, rc; @@ -349,9 +220,10 @@ static __always_inline int __atomic_add_unless(atomic_t *v, int a, int u) : "memory"); return prev; } +#define atomic_fetch_add_unless atomic_fetch_add_unless #ifndef CONFIG_GENERIC_ATOMIC64 -static __always_inline long __atomic64_add_unless(atomic64_t *v, long a, long u) +static __always_inline long atomic64_fetch_add_unless(atomic64_t *v, long a, long u) { long prev, rc; @@ -368,27 +240,7 @@ static __always_inline long __atomic64_add_unless(atomic64_t *v, long a, long u) : "memory"); return prev; } - -static __always_inline int atomic64_add_unless(atomic64_t *v, long a, long u) -{ - return __atomic64_add_unless(v, a, u) != u; -} -#endif - -/* - * The extra atomic operations that are constructed from one of the core - * LR/SC-based operations above. - */ -static __always_inline int atomic_inc_not_zero(atomic_t *v) -{ - return __atomic_add_unless(v, 1, 0); -} - -#ifndef CONFIG_GENERIC_ATOMIC64 -static __always_inline long atomic64_inc_not_zero(atomic64_t *v) -{ - return atomic64_add_unless(v, 1, 0); -} +#define atomic64_fetch_add_unless atomic64_fetch_add_unless #endif /* diff --git a/arch/riscv/include/asm/cacheflush.h b/arch/riscv/include/asm/cacheflush.h index efd89a88d2d0..8f13074413a7 100644 --- a/arch/riscv/include/asm/cacheflush.h +++ b/arch/riscv/include/asm/cacheflush.h @@ -47,7 +47,7 @@ static inline void flush_dcache_page(struct page *page) #else /* CONFIG_SMP */ -#define flush_icache_all() sbi_remote_fence_i(0) +#define flush_icache_all() sbi_remote_fence_i(NULL) void flush_icache_mm(struct mm_struct *mm, bool local); #endif /* CONFIG_SMP */ diff --git a/arch/riscv/include/asm/perf_event.h b/arch/riscv/include/asm/perf_event.h new file mode 100644 index 000000000000..0e638a0c3feb --- /dev/null +++ b/arch/riscv/include/asm/perf_event.h @@ -0,0 +1,84 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * Copyright (C) 2018 SiFive + * Copyright (C) 2018 Andes Technology Corporation + * + */ + +#ifndef _ASM_RISCV_PERF_EVENT_H +#define _ASM_RISCV_PERF_EVENT_H + +#include +#include + +#define RISCV_BASE_COUNTERS 2 + +/* + * The RISCV_MAX_COUNTERS parameter should be specified. + */ + +#ifdef CONFIG_RISCV_BASE_PMU +#define RISCV_MAX_COUNTERS 2 +#endif + +#ifndef RISCV_MAX_COUNTERS +#error "Please provide a valid RISCV_MAX_COUNTERS for the PMU." +#endif + +/* + * These are the indexes of bits in counteren register *minus* 1, + * except for cycle. 
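[Aside, not part of the patch: the mapping just described — the counteren bit minus one, except for cycle, because counteren[1] is the time CSR rather than a perf counter — could be written as a helper like this hypothetical one:]

	/* Map a RISCV_PMU_* index back to its counteren bit: bit 0 is CY,
	 * bit 1 is TM (time, not a perf counter), bit 2 is IR, and
	 * bits 3..31 are hpmcounter3..31.
	 */
	static inline int riscv_pmu_counteren_bit(int idx)
	{
		return idx == RISCV_PMU_CYCLE ? 0 : idx + 1;
	}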
It would be cleaner if these could be mapped directly + * to the counteren bit definitions, but there is a *time* register at + * counteren[1]. Per-cpu structure space is a scarce resource here. + * + * According to the spec, an implementation can support counters up to + * mhpmcounter31, but many high-end processors have at most 6 general + * PMCs, so the definitions stop at MHPMCOUNTER8 here. + */ +#define RISCV_PMU_CYCLE 0 +#define RISCV_PMU_INSTRET 1 +#define RISCV_PMU_MHPMCOUNTER3 2 +#define RISCV_PMU_MHPMCOUNTER4 3 +#define RISCV_PMU_MHPMCOUNTER5 4 +#define RISCV_PMU_MHPMCOUNTER6 5 +#define RISCV_PMU_MHPMCOUNTER7 6 +#define RISCV_PMU_MHPMCOUNTER8 7 + +#define RISCV_OP_UNSUPP (-EOPNOTSUPP) + +struct cpu_hw_events { + /* # currently enabled events */ + int n_events; + /* currently enabled events */ + struct perf_event *events[RISCV_MAX_COUNTERS]; + /* vendor-defined PMU data */ + void *platform; +}; + +struct riscv_pmu { + struct pmu *pmu; + + /* generic hw/cache events table */ + const int *hw_events; + const int (*cache_events)[PERF_COUNT_HW_CACHE_MAX] + [PERF_COUNT_HW_CACHE_OP_MAX] + [PERF_COUNT_HW_CACHE_RESULT_MAX]; + /* method used to map hw/cache events */ + int (*map_hw_event)(u64 config); + int (*map_cache_event)(u64 config); + + /* max generic hw events in map */ + int max_events; + /* total number of counters, 2 (base) + x (general) */ + int num_counters; + /* the width of the counter */ + int counter_width; + + /* vendor-defined PMU features */ + void *platform; + + irqreturn_t (*handle_irq)(int irq_num, void *dev); + int irq; +}; + +#endif /* _ASM_RISCV_PERF_EVENT_H */ diff --git a/arch/riscv/include/asm/tlbflush.h b/arch/riscv/include/asm/tlbflush.h index 7b209aec355d..85c2d8bae957 100644 --- a/arch/riscv/include/asm/tlbflush.h +++ b/arch/riscv/include/asm/tlbflush.h @@ -49,7 +49,7 @@ static inline void flush_tlb_range(struct vm_area_struct *vma, #include -#define flush_tlb_all() sbi_remote_sfence_vma(0, 0, -1) +#define flush_tlb_all() sbi_remote_sfence_vma(NULL, 0, -1) #define flush_tlb_page(vma, addr) flush_tlb_range(vma, addr, 0) #define flush_tlb_range(vma, start, end) \ sbi_remote_sfence_vma(mm_cpumask((vma)->vm_mm)->bits, \ diff --git a/arch/riscv/include/asm/uaccess.h b/arch/riscv/include/asm/uaccess.h index 14b0b22fb578..473cfc84e412 100644 --- a/arch/riscv/include/asm/uaccess.h +++ b/arch/riscv/include/asm/uaccess.h @@ -392,19 +392,21 @@ do { \ }) -extern unsigned long __must_check __copy_user(void __user *to, +extern unsigned long __must_check __asm_copy_to_user(void __user *to, + const void *from, unsigned long n); +extern unsigned long __must_check __asm_copy_from_user(void *to, const void __user *from, unsigned long n); static inline unsigned long raw_copy_from_user(void *to, const void __user *from, unsigned long n) { - return __copy_user(to, from, n); + return __asm_copy_to_user(to, from, n); } static inline unsigned long raw_copy_to_user(void __user *to, const void *from, unsigned long n) { - return __copy_user(to, from, n); + return __asm_copy_from_user(to, from, n); } extern long strncpy_from_user(char *dest, const char __user *src, long count); diff --git a/arch/riscv/include/uapi/asm/elf.h b/arch/riscv/include/uapi/asm/elf.h index 5cae4c30cd8e..1e0dfc36aab9 100644 --- a/arch/riscv/include/uapi/asm/elf.h +++ b/arch/riscv/include/uapi/asm/elf.h @@ -21,8 +21,13 @@ typedef struct user_regs_struct elf_gregset_t; typedef union __riscv_fp_state elf_fpregset_t; -#define ELF_RISCV_R_SYM(r_info) ((r_info) >> 32) -#define ELF_RISCV_R_TYPE(r_info) ((r_info) & 0xffffffff) +#if __riscv_xlen ==
64 +#define ELF_RISCV_R_SYM(r_info) ELF64_R_SYM(r_info) +#define ELF_RISCV_R_TYPE(r_info) ELF64_R_TYPE(r_info) +#else +#define ELF_RISCV_R_SYM(r_info) ELF32_R_SYM(r_info) +#define ELF_RISCV_R_TYPE(r_info) ELF32_R_TYPE(r_info) +#endif /* * RISC-V relocation types diff --git a/arch/riscv/kernel/Makefile b/arch/riscv/kernel/Makefile index 8586dd96c2f0..e1274fc03af4 100644 --- a/arch/riscv/kernel/Makefile +++ b/arch/riscv/kernel/Makefile @@ -39,4 +39,6 @@ obj-$(CONFIG_MODULE_SECTIONS) += module-sections.o obj-$(CONFIG_FUNCTION_TRACER) += mcount.o ftrace.o obj-$(CONFIG_DYNAMIC_FTRACE) += mcount-dyn.o +obj-$(CONFIG_PERF_EVENTS) += perf_event.o + clean: diff --git a/arch/riscv/kernel/irq.c b/arch/riscv/kernel/irq.c index b74cbfbce2d0..7bcdaed15703 100644 --- a/arch/riscv/kernel/irq.c +++ b/arch/riscv/kernel/irq.c @@ -16,10 +16,6 @@ #include #include -#ifdef CONFIG_RISCV_INTC -#include -#endif - void __init init_IRQ(void) { irqchip_init(); diff --git a/arch/riscv/kernel/mcount.S b/arch/riscv/kernel/mcount.S index ce9bdc57a2a1..5721624886a1 100644 --- a/arch/riscv/kernel/mcount.S +++ b/arch/riscv/kernel/mcount.S @@ -126,5 +126,5 @@ do_trace: RESTORE_ABI_STATE ret ENDPROC(_mcount) -EXPORT_SYMBOL(_mcount) #endif +EXPORT_SYMBOL(_mcount) diff --git a/arch/riscv/kernel/module.c b/arch/riscv/kernel/module.c index 5dddba301d0a..3303ed2cd419 100644 --- a/arch/riscv/kernel/module.c +++ b/arch/riscv/kernel/module.c @@ -17,6 +17,17 @@ #include #include +static int apply_r_riscv_32_rela(struct module *me, u32 *location, Elf_Addr v) +{ + if (v != (u32)v) { + pr_err("%s: value %016llx out of range for 32-bit field\n", + me->name, v); + return -EINVAL; + } + *location = v; + return 0; +} + static int apply_r_riscv_64_rela(struct module *me, u32 *location, Elf_Addr v) { *(u64 *)location = v; @@ -26,7 +37,7 @@ static int apply_r_riscv_64_rela(struct module *me, u32 *location, Elf_Addr v) static int apply_r_riscv_branch_rela(struct module *me, u32 *location, Elf_Addr v) { - s64 offset = (void *)v - (void *)location; + ptrdiff_t offset = (void *)v - (void *)location; u32 imm12 = (offset & 0x1000) << (31 - 12); u32 imm11 = (offset & 0x800) >> (11 - 7); u32 imm10_5 = (offset & 0x7e0) << (30 - 10); @@ -39,7 +50,7 @@ static int apply_r_riscv_branch_rela(struct module *me, u32 *location, static int apply_r_riscv_jal_rela(struct module *me, u32 *location, Elf_Addr v) { - s64 offset = (void *)v - (void *)location; + ptrdiff_t offset = (void *)v - (void *)location; u32 imm20 = (offset & 0x100000) << (31 - 20); u32 imm19_12 = (offset & 0xff000); u32 imm11 = (offset & 0x800) << (20 - 11); @@ -52,7 +63,7 @@ static int apply_r_riscv_jal_rela(struct module *me, u32 *location, static int apply_r_riscv_rcv_branch_rela(struct module *me, u32 *location, Elf_Addr v) { - s64 offset = (void *)v - (void *)location; + ptrdiff_t offset = (void *)v - (void *)location; u16 imm8 = (offset & 0x100) << (12 - 8); u16 imm7_6 = (offset & 0xc0) >> (6 - 5); u16 imm5 = (offset & 0x20) >> (5 - 2); @@ -67,7 +78,7 @@ static int apply_r_riscv_rcv_branch_rela(struct module *me, u32 *location, static int apply_r_riscv_rvc_jump_rela(struct module *me, u32 *location, Elf_Addr v) { - s64 offset = (void *)v - (void *)location; + ptrdiff_t offset = (void *)v - (void *)location; u16 imm11 = (offset & 0x800) << (12 - 11); u16 imm10 = (offset & 0x400) >> (10 - 8); u16 imm9_8 = (offset & 0x300) << (12 - 11); @@ -85,7 +96,7 @@ static int apply_r_riscv_rvc_jump_rela(struct module *me, u32 *location, static int apply_r_riscv_pcrel_hi20_rela(struct module *me, u32 
*location, Elf_Addr v) { - s64 offset = (void *)v - (void *)location; + ptrdiff_t offset = (void *)v - (void *)location; s32 hi20; if (offset != (s32)offset) { @@ -167,7 +178,7 @@ static int apply_r_riscv_lo12_s_rela(struct module *me, u32 *location, static int apply_r_riscv_got_hi20_rela(struct module *me, u32 *location, Elf_Addr v) { - s64 offset = (void *)v - (void *)location; + ptrdiff_t offset = (void *)v - (void *)location; s32 hi20; /* Always emit the got entry */ @@ -189,7 +200,7 @@ static int apply_r_riscv_got_hi20_rela(struct module *me, u32 *location, static int apply_r_riscv_call_plt_rela(struct module *me, u32 *location, Elf_Addr v) { - s64 offset = (void *)v - (void *)location; + ptrdiff_t offset = (void *)v - (void *)location; s32 fill_v = offset; u32 hi20, lo12; @@ -216,7 +227,7 @@ static int apply_r_riscv_call_plt_rela(struct module *me, u32 *location, static int apply_r_riscv_call_rela(struct module *me, u32 *location, Elf_Addr v) { - s64 offset = (void *)v - (void *)location; + ptrdiff_t offset = (void *)v - (void *)location; s32 fill_v = offset; u32 hi20, lo12; @@ -252,19 +263,20 @@ static int apply_r_riscv_align_rela(struct module *me, u32 *location, static int apply_r_riscv_add32_rela(struct module *me, u32 *location, Elf_Addr v) { - *(u32 *)location += (*(u32 *)v); + *(u32 *)location += (u32)v; return 0; } static int apply_r_riscv_sub32_rela(struct module *me, u32 *location, Elf_Addr v) { - *(u32 *)location -= (*(u32 *)v); + *(u32 *)location -= (u32)v; return 0; } static int (*reloc_handlers_rela[]) (struct module *me, u32 *location, Elf_Addr v) = { + [R_RISCV_32] = apply_r_riscv_32_rela, [R_RISCV_64] = apply_r_riscv_64_rela, [R_RISCV_BRANCH] = apply_r_riscv_branch_rela, [R_RISCV_JAL] = apply_r_riscv_jal_rela, @@ -335,7 +347,7 @@ int apply_relocate_add(Elf_Shdr *sechdrs, const char *strtab, unsigned int j; for (j = 0; j < sechdrs[relsec].sh_size / sizeof(*rel); j++) { - u64 hi20_loc = + unsigned long hi20_loc = sechdrs[sechdrs[relsec].sh_info].sh_addr + rel[j].r_offset; u32 hi20_type = ELF_RISCV_R_TYPE(rel[j].r_info); @@ -348,12 +360,12 @@ int apply_relocate_add(Elf_Shdr *sechdrs, const char *strtab, Elf_Sym *hi20_sym = (Elf_Sym *)sechdrs[symindex].sh_addr + ELF_RISCV_R_SYM(rel[j].r_info); - u64 hi20_sym_val = + unsigned long hi20_sym_val = hi20_sym->st_value + rel[j].r_addend; /* Calculate lo12 */ - u64 offset = hi20_sym_val - hi20_loc; + size_t offset = hi20_sym_val - hi20_loc; if (IS_ENABLED(CONFIG_MODULE_SECTIONS) && hi20_type == R_RISCV_GOT_HI20) { offset = module_emit_got_entry( diff --git a/arch/riscv/kernel/perf_event.c b/arch/riscv/kernel/perf_event.c new file mode 100644 index 000000000000..b0e10c4e9f77 --- /dev/null +++ b/arch/riscv/kernel/perf_event.c @@ -0,0 +1,485 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * Copyright (C) 2008 Thomas Gleixner + * Copyright (C) 2008-2009 Red Hat, Inc., Ingo Molnar + * Copyright (C) 2009 Jaswinder Singh Rajput + * Copyright (C) 2009 Advanced Micro Devices, Inc., Robert Richter + * Copyright (C) 2008-2009 Red Hat, Inc., Peter Zijlstra + * Copyright (C) 2009 Intel Corporation, + * Copyright (C) 2009 Google, Inc., Stephane Eranian + * Copyright 2014 Tilera Corporation. All Rights Reserved. + * Copyright (C) 2018 Andes Technology Corporation + * + * Perf_events support for RISC-V platforms. + * + * Since the spec. (as of now, Priv-Spec 1.10) does not provide enough + * functionality for perf event to fully work, this file provides + * the very basic framework only. 
+ * + * For platform portings, please check Documentations/riscv/pmu.txt. + * + * The Copyright line includes x86 and tile ones. + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +static const struct riscv_pmu *riscv_pmu __read_mostly; +static DEFINE_PER_CPU(struct cpu_hw_events, cpu_hw_events); + +/* + * Hardware & cache maps and their methods + */ + +static const int riscv_hw_event_map[] = { + [PERF_COUNT_HW_CPU_CYCLES] = RISCV_PMU_CYCLE, + [PERF_COUNT_HW_INSTRUCTIONS] = RISCV_PMU_INSTRET, + [PERF_COUNT_HW_CACHE_REFERENCES] = RISCV_OP_UNSUPP, + [PERF_COUNT_HW_CACHE_MISSES] = RISCV_OP_UNSUPP, + [PERF_COUNT_HW_BRANCH_INSTRUCTIONS] = RISCV_OP_UNSUPP, + [PERF_COUNT_HW_BRANCH_MISSES] = RISCV_OP_UNSUPP, + [PERF_COUNT_HW_BUS_CYCLES] = RISCV_OP_UNSUPP, +}; + +#define C(x) PERF_COUNT_HW_CACHE_##x +static const int riscv_cache_event_map[PERF_COUNT_HW_CACHE_MAX] +[PERF_COUNT_HW_CACHE_OP_MAX] +[PERF_COUNT_HW_CACHE_RESULT_MAX] = { + [C(L1D)] = { + [C(OP_READ)] = { + [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP, + [C(RESULT_MISS)] = RISCV_OP_UNSUPP, + }, + [C(OP_WRITE)] = { + [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP, + [C(RESULT_MISS)] = RISCV_OP_UNSUPP, + }, + [C(OP_PREFETCH)] = { + [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP, + [C(RESULT_MISS)] = RISCV_OP_UNSUPP, + }, + }, + [C(L1I)] = { + [C(OP_READ)] = { + [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP, + [C(RESULT_MISS)] = RISCV_OP_UNSUPP, + }, + [C(OP_WRITE)] = { + [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP, + [C(RESULT_MISS)] = RISCV_OP_UNSUPP, + }, + [C(OP_PREFETCH)] = { + [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP, + [C(RESULT_MISS)] = RISCV_OP_UNSUPP, + }, + }, + [C(LL)] = { + [C(OP_READ)] = { + [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP, + [C(RESULT_MISS)] = RISCV_OP_UNSUPP, + }, + [C(OP_WRITE)] = { + [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP, + [C(RESULT_MISS)] = RISCV_OP_UNSUPP, + }, + [C(OP_PREFETCH)] = { + [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP, + [C(RESULT_MISS)] = RISCV_OP_UNSUPP, + }, + }, + [C(DTLB)] = { + [C(OP_READ)] = { + [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP, + [C(RESULT_MISS)] = RISCV_OP_UNSUPP, + }, + [C(OP_WRITE)] = { + [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP, + [C(RESULT_MISS)] = RISCV_OP_UNSUPP, + }, + [C(OP_PREFETCH)] = { + [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP, + [C(RESULT_MISS)] = RISCV_OP_UNSUPP, + }, + }, + [C(ITLB)] = { + [C(OP_READ)] = { + [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP, + [C(RESULT_MISS)] = RISCV_OP_UNSUPP, + }, + [C(OP_WRITE)] = { + [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP, + [C(RESULT_MISS)] = RISCV_OP_UNSUPP, + }, + [C(OP_PREFETCH)] = { + [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP, + [C(RESULT_MISS)] = RISCV_OP_UNSUPP, + }, + }, + [C(BPU)] = { + [C(OP_READ)] = { + [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP, + [C(RESULT_MISS)] = RISCV_OP_UNSUPP, + }, + [C(OP_WRITE)] = { + [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP, + [C(RESULT_MISS)] = RISCV_OP_UNSUPP, + }, + [C(OP_PREFETCH)] = { + [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP, + [C(RESULT_MISS)] = RISCV_OP_UNSUPP, + }, + }, +}; + +static int riscv_map_hw_event(u64 config) +{ + if (config >= riscv_pmu->max_events) + return -EINVAL; + + return riscv_pmu->hw_events[config]; +} + +int riscv_map_cache_decode(u64 config, unsigned int *type, + unsigned int *op, unsigned int *result) +{ + return -ENOENT; +} + +static int riscv_map_cache_event(u64 config) +{ + unsigned int type, op, result; + int err = -ENOENT; + int code; + + err = riscv_map_cache_decode(config, &type, &op, &result); + if (!riscv_pmu->cache_events || err) + return err; + + if (type >= PERF_COUNT_HW_CACHE_MAX || + op >= 
PERF_COUNT_HW_CACHE_OP_MAX || + result >= PERF_COUNT_HW_CACHE_RESULT_MAX) + return -EINVAL; + + code = (*riscv_pmu->cache_events)[type][op][result]; + if (code == RISCV_OP_UNSUPP) + return -EINVAL; + + return code; +} + +/* + * Low-level functions: reading/writing counters + */ + +static inline u64 read_counter(int idx) +{ + u64 val = 0; + + switch (idx) { + case RISCV_PMU_CYCLE: + val = csr_read(cycle); + break; + case RISCV_PMU_INSTRET: + val = csr_read(instret); + break; + default: + WARN_ON_ONCE(idx < 0 || idx > RISCV_MAX_COUNTERS); + return -EINVAL; + } + + return val; +} + +static inline void write_counter(int idx, u64 value) +{ + /* currently not supported */ + WARN_ON_ONCE(1); +} + +/* + * pmu->read: read and update the counter + * + * Other architectures' implementations often have an xxx_perf_event_update + * routine, which can return counter values when called in the IRQ, but + * return void when being called by the pmu->read method. + */ +static void riscv_pmu_read(struct perf_event *event) +{ + struct hw_perf_event *hwc = &event->hw; + u64 prev_raw_count, new_raw_count; + u64 oldval; + int idx = hwc->idx; + u64 delta; + + do { + prev_raw_count = local64_read(&hwc->prev_count); + new_raw_count = read_counter(idx); + + oldval = local64_cmpxchg(&hwc->prev_count, prev_raw_count, + new_raw_count); + } while (oldval != prev_raw_count); + + /* + * delta is the value by which to update the counter we maintain in the kernel. + */ + delta = (new_raw_count - prev_raw_count) & + ((1ULL << riscv_pmu->counter_width) - 1); + local64_add(delta, &event->count); + /* + * Something like local64_sub(delta, &hwc->period_left) here is + * needed if there is an interrupt for perf. + */ +} + +/* + * State transition functions: + * + * stop()/start() & add()/del() + */ + +/* + * pmu->stop: stop the counter + */ +static void riscv_pmu_stop(struct perf_event *event, int flags) +{ + struct hw_perf_event *hwc = &event->hw; + + WARN_ON_ONCE(hwc->state & PERF_HES_STOPPED); + hwc->state |= PERF_HES_STOPPED; + + if ((flags & PERF_EF_UPDATE) && !(hwc->state & PERF_HES_UPTODATE)) { + riscv_pmu->pmu->read(event); + hwc->state |= PERF_HES_UPTODATE; + } +} + +/* + * pmu->start: start the event. + */ +static void riscv_pmu_start(struct perf_event *event, int flags) +{ + struct hw_perf_event *hwc = &event->hw; + + if (WARN_ON_ONCE(!(event->hw.state & PERF_HES_STOPPED))) + return; + + if (flags & PERF_EF_RELOAD) { + WARN_ON_ONCE(!(event->hw.state & PERF_HES_UPTODATE)); + + /* + * Set the counter to the period for the next interrupt here, + * if you have any. + */ + } + + hwc->state = 0; + perf_event_update_userpage(event); + + /* + * Since we cannot write to counters, this serves as an initialization + * to the delta-mechanism in pmu->read(); otherwise, the delta would be + * wrong when pmu->read is called for the first time. + */ + local64_set(&hwc->prev_count, read_counter(hwc->idx)); +} + +/* + * pmu->add: add the event to the PMU. + */ +static int riscv_pmu_add(struct perf_event *event, int flags) +{ + struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events); + struct hw_perf_event *hwc = &event->hw; + + if (cpuc->n_events == riscv_pmu->num_counters) + return -ENOSPC; + + /* + * We don't have general counters, so there is no binding-event-to-counter + * process here. + * + * Indexing using hwc->config generally does not work, since config may + * contain extra information, but here the only info we have in + * hwc->config is the event index.
+ */ + hwc->idx = hwc->config; + cpuc->events[hwc->idx] = event; + cpuc->n_events++; + + hwc->state = PERF_HES_UPTODATE | PERF_HES_STOPPED; + + if (flags & PERF_EF_START) + riscv_pmu->pmu->start(event, PERF_EF_RELOAD); + + return 0; +} + +/* + * pmu->del: delete the event from the PMU. + */ +static void riscv_pmu_del(struct perf_event *event, int flags) +{ + struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events); + struct hw_perf_event *hwc = &event->hw; + + cpuc->events[hwc->idx] = NULL; + cpuc->n_events--; + riscv_pmu->pmu->stop(event, PERF_EF_UPDATE); + perf_event_update_userpage(event); +} + +/* + * Interrupt: a skeleton for reference. + */ + +static DEFINE_MUTEX(pmc_reserve_mutex); + +irqreturn_t riscv_base_pmu_handle_irq(int irq_num, void *dev) +{ + return IRQ_NONE; +} + +static int reserve_pmc_hardware(void) +{ + int err = 0; + + mutex_lock(&pmc_reserve_mutex); + if (riscv_pmu->irq >= 0 && riscv_pmu->handle_irq) { + err = request_irq(riscv_pmu->irq, riscv_pmu->handle_irq, + IRQF_PERCPU, "riscv-base-perf", NULL); + } + mutex_unlock(&pmc_reserve_mutex); + + return err; +} + +void release_pmc_hardware(void) +{ + mutex_lock(&pmc_reserve_mutex); + if (riscv_pmu->irq >= 0) + free_irq(riscv_pmu->irq, NULL); + mutex_unlock(&pmc_reserve_mutex); +} + +/* + * Event Initialization/Finalization + */ + +static atomic_t riscv_active_events = ATOMIC_INIT(0); + +static void riscv_event_destroy(struct perf_event *event) +{ + if (atomic_dec_return(&riscv_active_events) == 0) + release_pmc_hardware(); +} + +static int riscv_event_init(struct perf_event *event) +{ + struct perf_event_attr *attr = &event->attr; + struct hw_perf_event *hwc = &event->hw; + int err; + int code; + + if (atomic_inc_return(&riscv_active_events) == 1) { + err = reserve_pmc_hardware(); + + if (err) { + pr_warn("PMC hardware not available\n"); + atomic_dec(&riscv_active_events); + return -EBUSY; + } + } + + switch (event->attr.type) { + case PERF_TYPE_HARDWARE: + code = riscv_pmu->map_hw_event(attr->config); + break; + case PERF_TYPE_HW_CACHE: + code = riscv_pmu->map_cache_event(attr->config); + break; + case PERF_TYPE_RAW: + return -EOPNOTSUPP; + default: + return -ENOENT; + } + + event->destroy = riscv_event_destroy; + if (code < 0) { + event->destroy(event); + return code; + } + + /* + * idx is set to -1 because the index of a general event should not be + * decided until binding to some counter in pmu->add(). + * + * But since we don't have such support, later in pmu->add(), we just + * use hwc->config as the index instead. + */ + hwc->config = code; + hwc->idx = -1; + + return 0; +} + +/* + * Initialization + */ + +static struct pmu min_pmu = { + .name = "riscv-base", + .event_init = riscv_event_init, + .add = riscv_pmu_add, + .del = riscv_pmu_del, + .start = riscv_pmu_start, + .stop = riscv_pmu_stop, + .read = riscv_pmu_read, +}; + +static const struct riscv_pmu riscv_base_pmu = { + .pmu = &min_pmu, + .max_events = ARRAY_SIZE(riscv_hw_event_map), + .map_hw_event = riscv_map_hw_event, + .hw_events = riscv_hw_event_map, + .map_cache_event = riscv_map_cache_event, + .cache_events = &riscv_cache_event_map, + .counter_width = 63, + .num_counters = RISCV_BASE_COUNTERS + 0, + .handle_irq = &riscv_base_pmu_handle_irq, + + /* This means this PMU has no IRQ.
*/ + .irq = -1, +}; + +static const struct of_device_id riscv_pmu_of_ids[] = { + {.compatible = "riscv,base-pmu", .data = &riscv_base_pmu}, + { /* sentinel value */ } +}; + +int __init init_hw_perf_events(void) +{ + struct device_node *node = of_find_node_by_type(NULL, "pmu"); + const struct of_device_id *of_id; + + riscv_pmu = &riscv_base_pmu; + + if (node) { + of_id = of_match_node(riscv_pmu_of_ids, node); + + if (of_id) + riscv_pmu = of_id->data; + } + + perf_pmu_register(riscv_pmu->pmu, "cpu", PERF_TYPE_RAW); + return 0; +} +arch_initcall(init_hw_perf_events); diff --git a/arch/riscv/kernel/ptrace.c b/arch/riscv/kernel/ptrace.c index ba3e80712797..9f82a7e34c64 100644 --- a/arch/riscv/kernel/ptrace.c +++ b/arch/riscv/kernel/ptrace.c @@ -50,7 +50,7 @@ static int riscv_gpr_set(struct task_struct *target, struct pt_regs *regs; regs = task_pt_regs(target); - ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf, ®s, 0, -1); + ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf, regs, 0, -1); return ret; } diff --git a/arch/riscv/kernel/riscv_ksyms.c b/arch/riscv/kernel/riscv_ksyms.c index 551734248748..f247d6d2137c 100644 --- a/arch/riscv/kernel/riscv_ksyms.c +++ b/arch/riscv/kernel/riscv_ksyms.c @@ -13,6 +13,7 @@ * Assembly functions that may be used (directly or indirectly) by modules */ EXPORT_SYMBOL(__clear_user); -EXPORT_SYMBOL(__copy_user); +EXPORT_SYMBOL(__asm_copy_to_user); +EXPORT_SYMBOL(__asm_copy_from_user); EXPORT_SYMBOL(memset); EXPORT_SYMBOL(memcpy); diff --git a/arch/riscv/kernel/setup.c b/arch/riscv/kernel/setup.c index ee44a48faf79..f0d2070866d4 100644 --- a/arch/riscv/kernel/setup.c +++ b/arch/riscv/kernel/setup.c @@ -220,8 +220,3 @@ void __init setup_arch(char **cmdline_p) riscv_fill_hwcap(); } -static int __init riscv_device_init(void) -{ - return of_platform_populate(NULL, of_default_bus_match_table, NULL, NULL); -} -subsys_initcall_sync(riscv_device_init); diff --git a/arch/riscv/kernel/traps.c b/arch/riscv/kernel/traps.c index b99d9dd21fd0..81a1952015a6 100644 --- a/arch/riscv/kernel/traps.c +++ b/arch/riscv/kernel/traps.c @@ -148,7 +148,7 @@ int is_valid_bugaddr(unsigned long pc) if (pc < PAGE_OFFSET) return 0; - if (probe_kernel_address((bug_insn_t __user *)pc, insn)) + if (probe_kernel_address((bug_insn_t *)pc, insn)) return 0; return (insn == __BUG_INSN); } diff --git a/arch/riscv/lib/uaccess.S b/arch/riscv/lib/uaccess.S index 58fb2877c865..399e6f0c2d98 100644 --- a/arch/riscv/lib/uaccess.S +++ b/arch/riscv/lib/uaccess.S @@ -13,7 +13,8 @@ _epc: .previous .endm -ENTRY(__copy_user) +ENTRY(__asm_copy_to_user) +ENTRY(__asm_copy_from_user) /* Enable access to user memory */ li t6, SR_SUM @@ -63,7 +64,8 @@ ENTRY(__copy_user) addi a0, a0, 1 bltu a1, a3, 5b j 3b -ENDPROC(__copy_user) +ENDPROC(__asm_copy_to_user) +ENDPROC(__asm_copy_from_user) ENTRY(__clear_user) @@ -84,7 +86,7 @@ ENTRY(__clear_user) bgeu t0, t1, 2f bltu a0, t0, 4f 1: - fixup REG_S, zero, (a0), 10f + fixup REG_S, zero, (a0), 11f addi a0, a0, SZREG bltu a0, t1, 1b 2: @@ -96,12 +98,12 @@ ENTRY(__clear_user) li a0, 0 ret 4: /* Edge case: unalignment */ - fixup sb, zero, (a0), 10f + fixup sb, zero, (a0), 11f addi a0, a0, 1 bltu a0, t0, 4b j 1b 5: /* Edge case: remainder */ - fixup sb, zero, (a0), 10f + fixup sb, zero, (a0), 11f addi a0, a0, 1 bltu a0, a3, 5b j 3b @@ -109,9 +111,14 @@ ENDPROC(__clear_user) .section .fixup,"ax" .balign 4 + /* Fixup code for __copy_user(10) and __clear_user(11) */ 10: /* Disable access to user memory */ csrs sstatus, t6 - sub a0, a3, a0 + mv a0, a2 + ret +11: + csrs sstatus, t6 + 
mv a0, a1 ret .previous diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c index c77df8142be2..58a522f9bcc3 100644 --- a/arch/riscv/mm/init.c +++ b/arch/riscv/mm/init.c @@ -28,7 +28,9 @@ static void __init zone_sizes_init(void) { unsigned long max_zone_pfns[MAX_NR_ZONES] = { 0, }; +#ifdef CONFIG_ZONE_DMA32 max_zone_pfns[ZONE_DMA32] = PFN_DOWN(min(4UL * SZ_1G, max_low_pfn)); +#endif max_zone_pfns[ZONE_NORMAL] = max_low_pfn; free_area_init_nodes(max_zone_pfns); diff --git a/arch/s390/Kconfig b/arch/s390/Kconfig index baed39772c84..515240576930 100644 --- a/arch/s390/Kconfig +++ b/arch/s390/Kconfig @@ -106,7 +106,6 @@ config S390 select ARCH_USE_BUILTIN_BSWAP select ARCH_USE_CMPXCHG_LOCKREF select ARCH_WANTS_DYNAMIC_TASK_STRUCT - select ARCH_WANTS_UBSAN_NO_NULL select ARCH_WANT_IPC_PARSE_VERSION select BUILDTIME_EXTABLE_SORT select CLONE_BACKWARDS2 @@ -140,12 +139,13 @@ config S390 select HAVE_FUNCTION_GRAPH_TRACER select HAVE_FUNCTION_TRACER select HAVE_FUTEX_CMPXCHG if FUTEX - select HAVE_GCC_PLUGINS + select HAVE_GCC_PLUGINS if BROKEN select HAVE_KERNEL_BZIP2 select HAVE_KERNEL_GZIP select HAVE_KERNEL_LZ4 select HAVE_KERNEL_LZMA select HAVE_KERNEL_LZO + select HAVE_KERNEL_UNCOMPRESSED select HAVE_KERNEL_XZ select HAVE_KPROBES select HAVE_KRETPROBES @@ -160,6 +160,7 @@ config S390 select HAVE_OPROFILE select HAVE_PERF_EVENTS select HAVE_REGS_AND_STACK_ACCESS_API + select HAVE_RSEQ select HAVE_SYSCALL_TRACEPOINTS select HAVE_VIRT_CPU_ACCOUNTING select MODULES_USE_ELF_RELA diff --git a/arch/s390/Makefile b/arch/s390/Makefile index 68a690442be0..eee6703093c3 100644 --- a/arch/s390/Makefile +++ b/arch/s390/Makefile @@ -14,8 +14,18 @@ LD_BFD := elf64-s390 LDFLAGS := -m elf64_s390 KBUILD_AFLAGS_MODULE += -fPIC KBUILD_CFLAGS_MODULE += -fPIC -KBUILD_CFLAGS += -m64 KBUILD_AFLAGS += -m64 +KBUILD_CFLAGS += -m64 +aflags_dwarf := -Wa,-gdwarf-2 +KBUILD_AFLAGS_DECOMPRESSOR := -m64 -D__ASSEMBLY__ +KBUILD_AFLAGS_DECOMPRESSOR += $(if $(CONFIG_DEBUG_INFO),$(aflags_dwarf)) +KBUILD_CFLAGS_DECOMPRESSOR := -m64 -O2 +KBUILD_CFLAGS_DECOMPRESSOR += -DDISABLE_BRANCH_PROFILING -D__NO_FORTIFY +KBUILD_CFLAGS_DECOMPRESSOR += -fno-delete-null-pointer-checks -msoft-float +KBUILD_CFLAGS_DECOMPRESSOR += -fno-asynchronous-unwind-tables +KBUILD_CFLAGS_DECOMPRESSOR += $(call cc-option,-ffreestanding) +KBUILD_CFLAGS_DECOMPRESSOR += $(if $(CONFIG_DEBUG_INFO),-g) +KBUILD_CFLAGS_DECOMPRESSOR += $(if $(CONFIG_DEBUG_INFO_DWARF4), $(call cc-option, -gdwarf-4,)) UTS_MACHINE := s390x STACK_SIZE := 16384 CHECKFLAGS += -D__s390__ -D__s390x__ @@ -52,18 +62,14 @@ cflags-y += -Wa,-I$(srctree)/arch/$(ARCH)/include # cflags-$(CONFIG_FRAME_POINTER) += -fno-optimize-sibling-calls -# old style option for packed stacks -ifeq ($(call cc-option-yn,-mkernel-backchain),y) -cflags-$(CONFIG_PACK_STACK) += -mkernel-backchain -D__PACK_STACK -aflags-$(CONFIG_PACK_STACK) += -D__PACK_STACK -endif - -# new style option for packed stacks ifeq ($(call cc-option-yn,-mpacked-stack),y) cflags-$(CONFIG_PACK_STACK) += -mpacked-stack -D__PACK_STACK aflags-$(CONFIG_PACK_STACK) += -D__PACK_STACK endif +KBUILD_AFLAGS_DECOMPRESSOR += $(aflags-y) +KBUILD_CFLAGS_DECOMPRESSOR += $(cflags-y) + ifeq ($(call cc-option-yn,-mstack-size=8192 -mstack-guard=128),y) cflags-$(CONFIG_CHECK_STACK) += -mstack-size=$(STACK_SIZE) ifneq ($(call cc-option-yn,-mstack-size=8192),y) @@ -71,8 +77,11 @@ cflags-$(CONFIG_CHECK_STACK) += -mstack-guard=$(CONFIG_STACK_GUARD) endif endif -ifeq ($(call cc-option-yn,-mwarn-dynamicstack),y) -cflags-$(CONFIG_WARN_DYNAMIC_STACK) += 
-mwarn-dynamicstack +ifdef CONFIG_WARN_DYNAMIC_STACK + ifeq ($(call cc-option-yn,-mwarn-dynamicstack),y) + KBUILD_CFLAGS += -mwarn-dynamicstack + KBUILD_CFLAGS_DECOMPRESSOR += -mwarn-dynamicstack + endif endif ifdef CONFIG_EXPOLINE @@ -82,6 +91,7 @@ ifdef CONFIG_EXPOLINE CC_FLAGS_EXPOLINE += -mindirect-branch-table export CC_FLAGS_EXPOLINE cflags-y += $(CC_FLAGS_EXPOLINE) -DCC_USING_EXPOLINE + aflags-y += -DCC_USING_EXPOLINE endif endif @@ -102,11 +112,12 @@ KBUILD_CFLAGS += -mbackchain -msoft-float $(cflags-y) KBUILD_CFLAGS += -pipe -fno-strength-reduce -Wno-sign-compare KBUILD_CFLAGS += -fno-asynchronous-unwind-tables $(cfi) KBUILD_AFLAGS += $(aflags-y) $(cfi) +export KBUILD_AFLAGS_DECOMPRESSOR +export KBUILD_CFLAGS_DECOMPRESSOR OBJCOPYFLAGS := -O binary -head-y := arch/s390/kernel/head.o -head-y += arch/s390/kernel/head64.o +head-y := arch/s390/kernel/head64.o # See arch/s390/Kbuild for content of core part of the kernel core-y += arch/s390/ @@ -121,7 +132,7 @@ boot := arch/s390/boot syscalls := arch/s390/kernel/syscalls tools := arch/s390/tools -all: image bzImage +all: bzImage #KBUILD_IMAGE is necessary for packaging targets like rpm-pkg, deb-pkg... KBUILD_IMAGE := $(boot)/bzImage @@ -129,7 +140,7 @@ KBUILD_IMAGE := $(boot)/bzImage install: vmlinux $(Q)$(MAKE) $(build)=$(boot) $@ -image bzImage: vmlinux +bzImage: vmlinux $(Q)$(MAKE) $(build)=$(boot) $(boot)/$@ zfcpdump: @@ -152,8 +163,7 @@ archprepare: # Don't use tabs in echo arguments define archhelp - echo '* image - Kernel image for IPL ($(boot)/image)' - echo '* bzImage - Compressed kernel image for IPL ($(boot)/bzImage)' + echo '* bzImage - Kernel image for IPL ($(boot)/bzImage)' echo ' install - Install kernel using' echo ' (your) ~/bin/$(INSTALLKERNEL) or' echo ' (distribution) /sbin/$(INSTALLKERNEL) or' diff --git a/arch/s390/appldata/appldata_base.c b/arch/s390/appldata/appldata_base.c index ee6a9c387c87..9bf8489df6e6 100644 --- a/arch/s390/appldata/appldata_base.c +++ b/arch/s390/appldata/appldata_base.c @@ -206,35 +206,28 @@ static int appldata_timer_handler(struct ctl_table *ctl, int write, void __user *buffer, size_t *lenp, loff_t *ppos) { - unsigned int len; - char buf[2]; + int timer_active = appldata_timer_active; + int zero = 0; + int one = 1; + int rc; + struct ctl_table ctl_entry = { + .procname = ctl->procname, + .data = &timer_active, + .maxlen = sizeof(int), + .extra1 = &zero, + .extra2 = &one, + }; + + rc = proc_douintvec_minmax(&ctl_entry, write, buffer, lenp, ppos); + if (rc < 0 || !write) + return rc; - if (!*lenp || *ppos) { - *lenp = 0; - return 0; - } - if (!write) { - strncpy(buf, appldata_timer_active ? "1\n" : "0\n", - ARRAY_SIZE(buf)); - len = strnlen(buf, ARRAY_SIZE(buf)); - if (len > *lenp) - len = *lenp; - if (copy_to_user(buffer, buf, len)) - return -EFAULT; - goto out; - } - len = *lenp; - if (copy_from_user(buf, buffer, len > sizeof(buf) ? 
sizeof(buf) : len)) - return -EFAULT; spin_lock(&appldata_timer_lock); - if (buf[0] == '1') + if (timer_active) __appldata_vtimer_setup(APPLDATA_ADD_TIMER); - else if (buf[0] == '0') + else __appldata_vtimer_setup(APPLDATA_DEL_TIMER); spin_unlock(&appldata_timer_lock); -out: - *lenp = len; - *ppos += len; return 0; } @@ -248,37 +241,24 @@ static int appldata_interval_handler(struct ctl_table *ctl, int write, void __user *buffer, size_t *lenp, loff_t *ppos) { - unsigned int len; - int interval; - char buf[16]; + int interval = appldata_interval; + int one = 1; + int rc; + struct ctl_table ctl_entry = { + .procname = ctl->procname, + .data = &interval, + .maxlen = sizeof(int), + .extra1 = &one, + }; - if (!*lenp || *ppos) { - *lenp = 0; - return 0; - } - if (!write) { - len = sprintf(buf, "%i\n", appldata_interval); - if (len > *lenp) - len = *lenp; - if (copy_to_user(buffer, buf, len)) - return -EFAULT; - goto out; - } - len = *lenp; - if (copy_from_user(buf, buffer, len > sizeof(buf) ? sizeof(buf) : len)) - return -EFAULT; - interval = 0; - sscanf(buf, "%i", &interval); - if (interval <= 0) - return -EINVAL; + rc = proc_dointvec_minmax(&ctl_entry, write, buffer, lenp, ppos); + if (rc < 0 || !write) + return rc; spin_lock(&appldata_timer_lock); appldata_interval = interval; __appldata_vtimer_setup(APPLDATA_MOD_TIMER); spin_unlock(&appldata_timer_lock); -out: - *lenp = len; - *ppos += len; return 0; } @@ -293,10 +273,17 @@ appldata_generic_handler(struct ctl_table *ctl, int write, void __user *buffer, size_t *lenp, loff_t *ppos) { struct appldata_ops *ops = NULL, *tmp_ops; - unsigned int len; - int rc, found; - char buf[2]; struct list_head *lh; + int rc, found; + int active; + int zero = 0; + int one = 1; + struct ctl_table ctl_entry = { + .data = &active, + .maxlen = sizeof(int), + .extra1 = &zero, + .extra2 = &one, + }; found = 0; mutex_lock(&appldata_ops_mutex); @@ -317,31 +304,15 @@ appldata_generic_handler(struct ctl_table *ctl, int write, } mutex_unlock(&appldata_ops_mutex); - if (!*lenp || *ppos) { - *lenp = 0; + active = ops->active; + rc = proc_douintvec_minmax(&ctl_entry, write, buffer, lenp, ppos); + if (rc < 0 || !write) { module_put(ops->owner); - return 0; - } - if (!write) { - strncpy(buf, ops->active ? "1\n" : "0\n", ARRAY_SIZE(buf)); - len = strnlen(buf, ARRAY_SIZE(buf)); - if (len > *lenp) - len = *lenp; - if (copy_to_user(buffer, buf, len)) { - module_put(ops->owner); - return -EFAULT; - } - goto out; - } - len = *lenp; - if (copy_from_user(buf, buffer, - len > sizeof(buf) ? 
sizeof(buf) : len)) { - module_put(ops->owner); - return -EFAULT; + return rc; } mutex_lock(&appldata_ops_mutex); - if ((buf[0] == '1') && (ops->active == 0)) { + if (active && (ops->active == 0)) { // protect work queue callback if (!try_module_get(ops->owner)) { mutex_unlock(&appldata_ops_mutex); @@ -359,7 +330,7 @@ appldata_generic_handler(struct ctl_table *ctl, int write, module_put(ops->owner); } else ops->active = 1; - } else if ((buf[0] == '0') && (ops->active == 1)) { + } else if (!active && (ops->active == 1)) { ops->active = 0; rc = appldata_diag(ops->record_nr, APPLDATA_STOP_REC, (unsigned long) ops->data, ops->size, @@ -370,9 +341,6 @@ appldata_generic_handler(struct ctl_table *ctl, int write, module_put(ops->owner); } mutex_unlock(&appldata_ops_mutex); -out: - *lenp = len; - *ppos += len; module_put(ops->owner); return 0; } diff --git a/arch/s390/boot/Makefile b/arch/s390/boot/Makefile index d1fa37fcce83..9e6668ee93de 100644 --- a/arch/s390/boot/Makefile +++ b/arch/s390/boot/Makefile @@ -3,19 +3,52 @@ # Makefile for the linux s390-specific parts of the memory manager. # -targets := image -targets += bzImage -subdir- := compressed +KCOV_INSTRUMENT := n +GCOV_PROFILE := n +UBSAN_SANITIZE := n -$(obj)/image: vmlinux FORCE - $(call if_changed,objcopy) +KBUILD_AFLAGS := $(KBUILD_AFLAGS_DECOMPRESSOR) +KBUILD_CFLAGS := $(KBUILD_CFLAGS_DECOMPRESSOR) + +# +# Use -march=z900 for als.c to be able to print an error +# message if the kernel is started on a machine which is too old +# +ifneq ($(CC_FLAGS_MARCH),-march=z900) +AFLAGS_REMOVE_head.o += $(CC_FLAGS_MARCH) +AFLAGS_head.o += -march=z900 +AFLAGS_REMOVE_mem.o += $(CC_FLAGS_MARCH) +AFLAGS_mem.o += -march=z900 +CFLAGS_REMOVE_als.o += $(CC_FLAGS_MARCH) +CFLAGS_als.o += -march=z900 +CFLAGS_REMOVE_sclp_early_core.o += $(CC_FLAGS_MARCH) +CFLAGS_sclp_early_core.o += -march=z900 +endif + +CFLAGS_sclp_early_core.o += -I$(srctree)/drivers/s390/char + +obj-y := head.o als.o ebcdic.o sclp_early_core.o mem.o +targets := bzImage startup.a $(obj-y) +subdir- := compressed + +OBJECTS := $(addprefix $(obj)/,$(obj-y)) $(obj)/bzImage: $(obj)/compressed/vmlinux FORCE $(call if_changed,objcopy) -$(obj)/compressed/vmlinux: FORCE +$(obj)/compressed/vmlinux: $(obj)/startup.a FORCE $(Q)$(MAKE) $(build)=$(obj)/compressed $@ +quiet_cmd_ar = AR $@ + cmd_ar = rm -f $@; $(AR) rcsTP$(KBUILD_ARFLAGS) $@ $(filter $(OBJECTS), $^) + +$(obj)/startup.a: $(OBJECTS) FORCE + $(call if_changed,ar) + install: $(CONFIGURE) $(obj)/bzImage sh -x $(srctree)/$(obj)/install.sh $(KERNELRELEASE) $(obj)/bzImage \ System.map "$(INSTALL_PATH)" + +chkbss := $(OBJECTS) +chkbss-target := $(obj)/startup.a +include $(srctree)/arch/s390/scripts/Makefile.chkbss diff --git a/arch/s390/kernel/als.c b/arch/s390/boot/als.c similarity index 83% rename from arch/s390/kernel/als.c rename to arch/s390/boot/als.c index d1892bf36cab..d592e0d90d9f 100644 --- a/arch/s390/kernel/als.c +++ b/arch/s390/boot/als.c @@ -3,12 +3,10 @@ * Copyright IBM Corp. 2016 */ #include -#include #include #include #include #include -#include "entry.h" /* * The code within this file will be called very early. It may _not_ @@ -18,9 +16,9 @@ * For temporary objects the stack (16k) should be used. 
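[Aside, not part of the patch: the appldata handlers converted above all move to the same idiom — point a temporary ctl_table at a local int and let proc_douintvec_minmax() do the parsing and the 0/1 clamping. The same pattern in isolation, with hypothetical names:]

	static int my_flag; /* state mirrored by the sysctl (hypothetical) */

	static int my_flag_handler(struct ctl_table *ctl, int write,
				   void __user *buffer, size_t *lenp, loff_t *ppos)
	{
		int active = my_flag;
		int zero = 0, one = 1;
		struct ctl_table tmp = {
			.procname = ctl->procname,
			.data = &active,
			.maxlen = sizeof(int),
			.extra1 = &zero, /* clamp writes to 0..1 */
			.extra2 = &one,
		};
		int rc;

		rc = proc_douintvec_minmax(&tmp, write, buffer, lenp, ppos);
		if (rc < 0 || !write)
			return rc; /* reads are fully handled by the helper */
		my_flag = active; /* writes: commit the validated value */
		return 0;
	}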
*/ -static unsigned long als[] __initdata = { FACILITIES_ALS }; +static unsigned long als[] = { FACILITIES_ALS }; -static void __init u16_to_hex(char *str, u16 val) +static void u16_to_hex(char *str, u16 val) { int i, num; @@ -33,9 +31,9 @@ static void __init u16_to_hex(char *str, u16 val) *str = '\0'; } -static void __init print_machine_type(void) +static void print_machine_type(void) { - static char mach_str[80] __initdata = "Detected machine-type number: "; + static char mach_str[80] = "Detected machine-type number: "; char type_str[5]; struct cpuid id; @@ -46,7 +44,7 @@ static void __init print_machine_type(void) sclp_early_printk(mach_str); } -static void __init u16_to_decimal(char *str, u16 val) +static void u16_to_decimal(char *str, u16 val) { int div = 1; @@ -60,9 +58,9 @@ static void __init u16_to_decimal(char *str, u16 val) *str = '\0'; } -static void __init print_missing_facilities(void) +static void print_missing_facilities(void) { - static char als_str[80] __initdata = "Missing facilities: "; + static char als_str[80] = "Missing facilities: "; unsigned long val; char val_str[6]; int i, j, first; @@ -95,7 +93,7 @@ static void __init print_missing_facilities(void) sclp_early_printk("See Principles of Operations for facility bits\n"); } -static void __init facility_mismatch(void) +static void facility_mismatch(void) { sclp_early_printk("The Linux kernel requires more recent processor hardware\n"); print_machine_type(); @@ -103,7 +101,7 @@ static void __init facility_mismatch(void) disabled_wait(0x8badcccc); } -void __init verify_facilities(void) +void verify_facilities(void) { int i; diff --git a/arch/s390/boot/compressed/.gitignore b/arch/s390/boot/compressed/.gitignore index 2088cc140629..45aeb4f08752 100644 --- a/arch/s390/boot/compressed/.gitignore +++ b/arch/s390/boot/compressed/.gitignore @@ -1,4 +1,5 @@ sizes.h vmlinux vmlinux.lds +vmlinux.scr.lds vmlinux.bin.full diff --git a/arch/s390/boot/compressed/Makefile b/arch/s390/boot/compressed/Makefile index 5766f7b9b271..04609478d18b 100644 --- a/arch/s390/boot/compressed/Makefile +++ b/arch/s390/boot/compressed/Makefile @@ -6,39 +6,29 @@ # KCOV_INSTRUMENT := n +GCOV_PROFILE := n +UBSAN_SANITIZE := n +obj-y := $(if $(CONFIG_KERNEL_UNCOMPRESSED),,head.o misc.o) piggy.o targets := vmlinux.lds vmlinux vmlinux.bin vmlinux.bin.gz vmlinux.bin.bz2 targets += vmlinux.bin.xz vmlinux.bin.lzma vmlinux.bin.lzo vmlinux.bin.lz4 -targets += misc.o piggy.o sizes.h head.o - -KBUILD_CFLAGS := -m64 -D__KERNEL__ -O2 -KBUILD_CFLAGS += -DDISABLE_BRANCH_PROFILING -D__NO_FORTIFY -KBUILD_CFLAGS += $(cflags-y) -fno-delete-null-pointer-checks -msoft-float -KBUILD_CFLAGS += -fno-asynchronous-unwind-tables -KBUILD_CFLAGS += $(call cc-option,-mpacked-stack) -KBUILD_CFLAGS += $(call cc-option,-ffreestanding) +targets += vmlinux.scr.lds $(obj-y) $(if $(CONFIG_KERNEL_UNCOMPRESSED),,sizes.h) -GCOV_PROFILE := n -UBSAN_SANITIZE := n +KBUILD_AFLAGS := $(KBUILD_AFLAGS_DECOMPRESSOR) +KBUILD_CFLAGS := $(KBUILD_CFLAGS_DECOMPRESSOR) -OBJECTS := $(addprefix $(objtree)/arch/s390/kernel/, head.o ebcdic.o als.o) -OBJECTS += $(objtree)/drivers/s390/char/sclp_early_core.o -OBJECTS += $(obj)/head.o $(obj)/misc.o $(obj)/piggy.o +OBJECTS := $(addprefix $(obj)/,$(obj-y)) LDFLAGS_vmlinux := --oformat $(LD_BFD) -e startup -T -$(obj)/vmlinux: $(obj)/vmlinux.lds $(OBJECTS) +$(obj)/vmlinux: $(obj)/vmlinux.lds $(objtree)/arch/s390/boot/startup.a $(OBJECTS) $(call if_changed,ld) -TRIM_HEAD_SIZE := 0x11000 - -sed-sizes := -e 's/^\([0-9a-fA-F]*\) . 
\(__bss_start\|_end\)$$/\#define SZ\2 (0x\1 - $(TRIM_HEAD_SIZE))/p' +# extract required uncompressed vmlinux symbols and adjust them to reflect offsets inside vmlinux.bin +sed-sizes := -e 's/^\([0-9a-fA-F]*\) . \(__bss_start\|_end\)$$/\#define SZ\2 (0x\1 - 0x100000)/p' quiet_cmd_sizes = GEN $@ cmd_sizes = $(NM) $< | sed -n $(sed-sizes) > $@ -quiet_cmd_trim_head = TRIM $@ - cmd_trim_head = tail -c +$$(($(TRIM_HEAD_SIZE) + 1)) $< > $@ - $(obj)/sizes.h: vmlinux $(call if_changed,sizes) @@ -48,21 +38,18 @@ $(obj)/head.o: $(obj)/sizes.h CFLAGS_misc.o += -I$(objtree)/$(obj) $(obj)/misc.o: $(obj)/sizes.h -OBJCOPYFLAGS_vmlinux.bin.full := -R .comment -S -$(obj)/vmlinux.bin.full: vmlinux +OBJCOPYFLAGS_vmlinux.bin := -R .comment -S +$(obj)/vmlinux.bin: vmlinux $(call if_changed,objcopy) -$(obj)/vmlinux.bin: $(obj)/vmlinux.bin.full - $(call if_changed,trim_head) - vmlinux.bin.all-y := $(obj)/vmlinux.bin -suffix-$(CONFIG_KERNEL_GZIP) := gz -suffix-$(CONFIG_KERNEL_BZIP2) := bz2 -suffix-$(CONFIG_KERNEL_LZ4) := lz4 -suffix-$(CONFIG_KERNEL_LZMA) := lzma -suffix-$(CONFIG_KERNEL_LZO) := lzo -suffix-$(CONFIG_KERNEL_XZ) := xz +suffix-$(CONFIG_KERNEL_GZIP) := .gz +suffix-$(CONFIG_KERNEL_BZIP2) := .bz2 +suffix-$(CONFIG_KERNEL_LZ4) := .lz4 +suffix-$(CONFIG_KERNEL_LZMA) := .lzma +suffix-$(CONFIG_KERNEL_LZO) := .lzo +suffix-$(CONFIG_KERNEL_XZ) := .xz $(obj)/vmlinux.bin.gz: $(vmlinux.bin.all-y) $(call if_changed,gzip) @@ -78,5 +65,9 @@ $(obj)/vmlinux.bin.xz: $(vmlinux.bin.all-y) $(call if_changed,xzkern) LDFLAGS_piggy.o := -r --format binary --oformat $(LD_BFD) -T -$(obj)/piggy.o: $(obj)/vmlinux.scr $(obj)/vmlinux.bin.$(suffix-y) +$(obj)/piggy.o: $(obj)/vmlinux.scr.lds $(obj)/vmlinux.bin$(suffix-y) $(call if_changed,ld) + +chkbss := $(filter-out $(obj)/misc.o $(obj)/piggy.o,$(OBJECTS)) +chkbss-target := $(obj)/vmlinux.bin +include $(srctree)/arch/s390/scripts/Makefile.chkbss diff --git a/arch/s390/boot/compressed/head.S b/arch/s390/boot/compressed/head.S index 9f94eca0f467..df8dbbc17bcc 100644 --- a/arch/s390/boot/compressed/head.S +++ b/arch/s390/boot/compressed/head.S @@ -15,7 +15,7 @@ #include "sizes.h" __HEAD -ENTRY(startup_continue) +ENTRY(startup_decompressor) basr %r13,0 # get base .LPG1: # setup stack @@ -23,7 +23,7 @@ ENTRY(startup_continue) aghi %r15,-160 brasl %r14,decompress_kernel # Set up registers for memory mover. We move the decompressed image to - # 0x11000, where startup_continue of the decompressed image is supposed + # 0x100000, where startup_continue of the decompressed image is supposed # to be. lgr %r4,%r2 lg %r2,.Loffset-.LPG1(%r13) @@ -33,7 +33,7 @@ ENTRY(startup_continue) la %r1,0x200 mvc 0(mover_end-mover,%r1),mover-.LPG1(%r13) # When the memory mover is done we pass control to - # arch/s390/kernel/head64.S:startup_continue which lives at 0x11000 in + # arch/s390/kernel/head64.S:startup_continue which lives at 0x100000 in # the decompressed image. 
lgr %r6,%r2 br %r1 @@ -47,6 +47,6 @@ mover_end: .Lstack: .quad 0x8000 + (1<<(PAGE_SHIFT+THREAD_SIZE_ORDER)) .Loffset: - .quad 0x11000 + .quad 0x100000 .Lmvsize: .quad SZ__bss_start diff --git a/arch/s390/boot/compressed/misc.c b/arch/s390/boot/compressed/misc.c index 511b2cc9b91a..f66ad73c205b 100644 --- a/arch/s390/boot/compressed/misc.c +++ b/arch/s390/boot/compressed/misc.c @@ -71,43 +71,6 @@ static int puts(const char *s) return 0; } -void *memset(void *s, int c, size_t n) -{ - char *xs; - - xs = s; - while (n--) - *xs++ = c; - return s; -} - -void *memcpy(void *dest, const void *src, size_t n) -{ - const char *s = src; - char *d = dest; - - while (n--) - *d++ = *s++; - return dest; -} - -void *memmove(void *dest, const void *src, size_t n) -{ - const char *s = src; - char *d = dest; - - if (d <= s) { - while (n--) - *d++ = *s++; - } else { - d += n; - s += n; - while (n--) - *--d = *--s; - } - return dest; -} - static void error(char *x) { unsigned long long psw = 0x000a0000deadbeefULL; diff --git a/arch/s390/boot/compressed/vmlinux.lds.S b/arch/s390/boot/compressed/vmlinux.lds.S index d43c2db12d30..b16ac8b3c439 100644 --- a/arch/s390/boot/compressed/vmlinux.lds.S +++ b/arch/s390/boot/compressed/vmlinux.lds.S @@ -23,13 +23,10 @@ SECTIONS *(.text.*) _etext = . ; } - .rodata.compressed : { - *(.rodata.compressed) - } .rodata : { _rodata = . ; *(.rodata) /* read-only data */ - *(.rodata.*) + *(EXCLUDE_FILE (*piggy.o) .rodata.compressed) _erodata = . ; } .data : { @@ -38,6 +35,15 @@ SECTIONS *(.data.*) _edata = . ; } + startup_continue = 0x100000; +#ifdef CONFIG_KERNEL_UNCOMPRESSED + . = 0x100000; +#else + . = ALIGN(8); +#endif + .rodata.compressed : { + *(.rodata.compressed) + } . = ALIGN(256); .bss : { _bss = . ; @@ -54,5 +60,6 @@ SECTIONS *(.eh_frame) *(__ex_table) *(*__ksymtab*) + *(___kcrctab*) } } diff --git a/arch/s390/boot/compressed/vmlinux.scr b/arch/s390/boot/compressed/vmlinux.scr.lds.S similarity index 70% rename from arch/s390/boot/compressed/vmlinux.scr rename to arch/s390/boot/compressed/vmlinux.scr.lds.S index 42a242597f34..ff01d18c9222 100644 --- a/arch/s390/boot/compressed/vmlinux.scr +++ b/arch/s390/boot/compressed/vmlinux.scr.lds.S @@ -2,10 +2,14 @@ SECTIONS { .rodata.compressed : { +#ifndef CONFIG_KERNEL_UNCOMPRESSED input_len = .; LONG(input_data_end - input_data) input_data = .; +#endif *(.data) +#ifndef CONFIG_KERNEL_UNCOMPRESSED output_len = . - 4; input_data_end = .; +#endif } } diff --git a/arch/s390/boot/ebcdic.c b/arch/s390/boot/ebcdic.c new file mode 100644 index 000000000000..7391e7d36086 --- /dev/null +++ b/arch/s390/boot/ebcdic.c @@ -0,0 +1,2 @@ +// SPDX-License-Identifier: GPL-2.0 +#include "../kernel/ebcdic.c" diff --git a/arch/s390/kernel/head.S b/arch/s390/boot/head.S similarity index 98% rename from arch/s390/kernel/head.S rename to arch/s390/boot/head.S index 5c42f16a54c4..f721913b73f1 100644 --- a/arch/s390/kernel/head.S +++ b/arch/s390/boot/head.S @@ -272,14 +272,14 @@ iplstart: .org 0x10000 ENTRY(startup) j .Lep_startup_normal - .org 0x10008 + .org EP_OFFSET # # This is a list of s390 kernel entry points. At address 0x1000f the number of # valid entry points is stored. # # IMPORTANT: Do not change this table, it is s390 kernel ABI! 
# - .ascii "S390EP" + .ascii EP_STRING .byte 0x00,0x01 # # kdump startup-code at 0x10010, running in 64 bit absolute addressing mode @@ -310,10 +310,11 @@ ENTRY(startup_kdump) l %r15,.Lstack-.LPG0(%r13) ahi %r15,-STACK_FRAME_OVERHEAD brasl %r14,verify_facilities -# For uncompressed images, continue in -# arch/s390/kernel/head64.S. For compressed images, continue in -# arch/s390/boot/compressed/head.S. +#ifdef CONFIG_KERNEL_UNCOMPRESSED jg startup_continue +#else + jg startup_decompressor +#endif .Lstack: .long 0x8000 + (1<<(PAGE_SHIFT+THREAD_SIZE_ORDER)) diff --git a/arch/s390/kernel/head_kdump.S b/arch/s390/boot/head_kdump.S similarity index 100% rename from arch/s390/kernel/head_kdump.S rename to arch/s390/boot/head_kdump.S diff --git a/arch/s390/boot/mem.S b/arch/s390/boot/mem.S new file mode 100644 index 000000000000..b33463633f03 --- /dev/null +++ b/arch/s390/boot/mem.S @@ -0,0 +1,2 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +#include "../lib/mem.S" diff --git a/arch/s390/boot/sclp_early_core.c b/arch/s390/boot/sclp_early_core.c new file mode 100644 index 000000000000..5a19fd7020b5 --- /dev/null +++ b/arch/s390/boot/sclp_early_core.c @@ -0,0 +1,2 @@ +// SPDX-License-Identifier: GPL-2.0 +#include "../../../drivers/s390/char/sclp_early_core.c" diff --git a/arch/s390/hypfs/hypfs_diag.c b/arch/s390/hypfs/hypfs_diag.c index a2945b289a29..3452e18bb1ca 100644 --- a/arch/s390/hypfs/hypfs_diag.c +++ b/arch/s390/hypfs/hypfs_diag.c @@ -497,7 +497,7 @@ static int hypfs_create_cpu_files(struct dentry *cpus_dir, void *cpu_info) } diag224_idx2name(cpu_info__ctidx(diag204_info_type, cpu_info), buffer); rc = hypfs_create_str(cpu_dir, "type", buffer); - return PTR_RET(rc); + return PTR_ERR_OR_ZERO(rc); } static void *hypfs_create_lpar_files(struct dentry *systems_dir, void *part_hdr) @@ -544,7 +544,7 @@ static int hypfs_create_phys_cpu_files(struct dentry *cpus_dir, void *cpu_info) return PTR_ERR(rc); diag224_idx2name(phys_cpu__ctidx(diag204_info_type, cpu_info), buffer); rc = hypfs_create_str(cpu_dir, "type", buffer); - return PTR_RET(rc); + return PTR_ERR_OR_ZERO(rc); } static void *hypfs_create_phys_files(struct dentry *parent_dir, void *phys_hdr) diff --git a/arch/s390/hypfs/inode.c b/arch/s390/hypfs/inode.c index 06b513d192b9..c681329fdeec 100644 --- a/arch/s390/hypfs/inode.c +++ b/arch/s390/hypfs/inode.c @@ -36,7 +36,7 @@ struct hypfs_sb_info { kuid_t uid; /* uid used for files and dirs */ kgid_t gid; /* gid used for files and dirs */ struct dentry *update_file; /* file to trigger update */ - time_t last_update; /* last update time in secs since 1970 */ + time64_t last_update; /* last update, CLOCK_MONOTONIC time */ struct mutex lock; /* lock to protect update process */ }; @@ -52,7 +52,7 @@ static void hypfs_update_update(struct super_block *sb) struct hypfs_sb_info *sb_info = sb->s_fs_info; struct inode *inode = d_inode(sb_info->update_file); - sb_info->last_update = get_seconds(); + sb_info->last_update = ktime_get_seconds(); inode->i_atime = inode->i_mtime = inode->i_ctime = current_time(inode); } @@ -179,7 +179,7 @@ static ssize_t hypfs_write_iter(struct kiocb *iocb, struct iov_iter *from) * to restart data collection in this case. 
*/ mutex_lock(&fs_info->lock); - if (fs_info->last_update == get_seconds()) { + if (fs_info->last_update == ktime_get_seconds()) { rc = -EBUSY; goto out; } diff --git a/arch/s390/include/asm/ap.h b/arch/s390/include/asm/ap.h index c1bedb4c8de0..046e044a48d0 100644 --- a/arch/s390/include/asm/ap.h +++ b/arch/s390/include/asm/ap.h @@ -46,6 +46,50 @@ struct ap_queue_status { unsigned int _pad2 : 16; }; +/** + * ap_intructions_available() - Test if AP instructions are available. + * + * Returns 0 if the AP instructions are installed. + */ +static inline int ap_instructions_available(void) +{ + register unsigned long reg0 asm ("0") = AP_MKQID(0, 0); + register unsigned long reg1 asm ("1") = -ENODEV; + register unsigned long reg2 asm ("2"); + + asm volatile( + " .long 0xb2af0000\n" /* PQAP(TAPQ) */ + "0: la %0,0\n" + "1:\n" + EX_TABLE(0b, 1b) + : "+d" (reg1), "=d" (reg2) + : "d" (reg0) + : "cc"); + return reg1; +} + +/** + * ap_tapq(): Test adjunct processor queue. + * @qid: The AP queue number + * @info: Pointer to queue descriptor + * + * Returns AP queue status structure. + */ +static inline struct ap_queue_status ap_tapq(ap_qid_t qid, unsigned long *info) +{ + register unsigned long reg0 asm ("0") = qid; + register struct ap_queue_status reg1 asm ("1"); + register unsigned long reg2 asm ("2"); + + asm volatile(".long 0xb2af0000" /* PQAP(TAPQ) */ + : "=d" (reg1), "=d" (reg2) + : "d" (reg0) + : "cc"); + if (info) + *info = reg2; + return reg1; +} + /** * ap_test_queue(): Test adjunct processor queue. * @qid: The AP queue number @@ -54,10 +98,57 @@ struct ap_queue_status { * * Returns AP queue status structure. */ -struct ap_queue_status ap_test_queue(ap_qid_t qid, - int tbit, - unsigned long *info); +static inline struct ap_queue_status ap_test_queue(ap_qid_t qid, + int tbit, + unsigned long *info) +{ + if (tbit) + qid |= 1UL << 23; /* set T bit*/ + return ap_tapq(qid, info); +} +/** + * ap_pqap_rapq(): Reset adjunct processor queue. + * @qid: The AP queue number + * + * Returns AP queue status structure. + */ +static inline struct ap_queue_status ap_rapq(ap_qid_t qid) +{ + register unsigned long reg0 asm ("0") = qid | (1UL << 24); + register struct ap_queue_status reg1 asm ("1"); + + asm volatile( + ".long 0xb2af0000" /* PQAP(RAPQ) */ + : "=d" (reg1) + : "d" (reg0) + : "cc"); + return reg1; +} + +/** + * ap_pqap_zapq(): Reset and zeroize adjunct processor queue. + * @qid: The AP queue number + * + * Returns AP queue status structure. + */ +static inline struct ap_queue_status ap_zapq(ap_qid_t qid) +{ + register unsigned long reg0 asm ("0") = qid | (2UL << 24); + register struct ap_queue_status reg1 asm ("1"); + + asm volatile( + ".long 0xb2af0000" /* PQAP(ZAPQ) */ + : "=d" (reg1) + : "d" (reg0) + : "cc"); + return reg1; +} + +/** + * struct ap_config_info - convenience struct for AP crypto + * config info as returned by the ap_qci() function. + */ struct ap_config_info { unsigned int apsc : 1; /* S bit */ unsigned int apxa : 1; /* N bit */ @@ -74,50 +165,189 @@ struct ap_config_info { unsigned char _reserved4[16]; } __aligned(8); -/* - * ap_query_configuration(): Fetch cryptographic config info +/** + * ap_qci(): Get AP configuration data * - * Returns the ap configuration info fetched via PQAP(QCI). - * On success 0 is returned, on failure a negative errno - * is returned, e.g. if the PQAP(QCI) instruction is not - * available, the return value will be -EOPNOTSUPP. + * Returns 0 on success, or -EOPNOTSUPP. 
*/ -int ap_query_configuration(struct ap_config_info *info); +static inline int ap_qci(struct ap_config_info *config) +{ + register unsigned long reg0 asm ("0") = 4UL << 24; + register unsigned long reg1 asm ("1") = -EOPNOTSUPP; + register struct ap_config_info *reg2 asm ("2") = config; + + asm volatile( + ".long 0xb2af0000\n" /* PQAP(QCI) */ + "0: la %0,0\n" + "1:\n" + EX_TABLE(0b, 1b) + : "+d" (reg1) + : "d" (reg0), "d" (reg2) + : "cc", "memory"); + + return reg1; +} /* * struct ap_qirq_ctrl - convenient struct for easy invocation - * of the ap_queue_irq_ctrl() function. This struct is passed - * as GR1 parameter to the PQAP(AQIC) instruction. For details - * please see the AR documentation. + * of the ap_aqic() function. This struct is passed as GR1 + * parameter to the PQAP(AQIC) instruction. For details please + * see the AR documentation. */ struct ap_qirq_ctrl { unsigned int _res1 : 8; - unsigned int zone : 8; /* zone info */ - unsigned int ir : 1; /* ir flag: enable (1) or disable (0) irq */ + unsigned int zone : 8; /* zone info */ + unsigned int ir : 1; /* ir flag: enable (1) or disable (0) irq */ unsigned int _res2 : 4; - unsigned int gisc : 3; /* guest isc field */ + unsigned int gisc : 3; /* guest isc field */ unsigned int _res3 : 6; - unsigned int gf : 2; /* gisa format */ + unsigned int gf : 2; /* gisa format */ unsigned int _res4 : 1; - unsigned int gisa : 27; /* gisa origin */ + unsigned int gisa : 27; /* gisa origin */ unsigned int _res5 : 1; - unsigned int isc : 3; /* irq sub class */ + unsigned int isc : 3; /* irq sub class */ }; /** - * ap_queue_irq_ctrl(): Control interruption on a AP queue. + * ap_aqic(): Control interruption for a specific AP. * @qid: The AP queue number - * @qirqctrl: struct ap_qirq_ctrl, see above + * @qirqctrl: struct ap_qirq_ctrl (64 bit value) * @ind: The notification indicator byte * * Returns AP queue status. + */ +static inline struct ap_queue_status ap_aqic(ap_qid_t qid, + struct ap_qirq_ctrl qirqctrl, + void *ind) +{ + register unsigned long reg0 asm ("0") = qid | (3UL << 24); + register struct ap_qirq_ctrl reg1_in asm ("1") = qirqctrl; + register struct ap_queue_status reg1_out asm ("1"); + register void *reg2 asm ("2") = ind; + + asm volatile( + ".long 0xb2af0000" /* PQAP(AQIC) */ + : "=d" (reg1_out) + : "d" (reg0), "d" (reg1_in), "d" (reg2) + : "cc"); + return reg1_out; +} + +/* + * union ap_qact_ap_info - used together with the + * ap_aqic() function to provide a convenient way + * to handle the ap info needed by the qact function. + */ +union ap_qact_ap_info { + unsigned long val; + struct { + unsigned int : 3; + unsigned int mode : 3; + unsigned int : 26; + unsigned int cat : 8; + unsigned int : 8; + unsigned char ver[2]; + }; +}; + +/** + * ap_qact(): Query AP combatibility type. + * @qid: The AP queue number + * @apinfo: On input the info about the AP queue. On output the + * alternate AP queue info provided by the qact function + * in GR2 is stored in. * - * Control interruption on the given AP queue. - * Just a simple wrapper function for the low level PQAP(AQIC) - * instruction available for other kernel modules. + * Returns AP queue status. Check response_code field for failures. 
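
The PQAP wrappers in this header all load GR0 with the AP queue id plus a sub-function code shifted 24 bits up before issuing the .long 0xb2af0000 opcode. The enum below merely tabulates the codes already visible in the wrappers above and below (0 for TAPQ, 1UL << 24 for RAPQ, and so on); it is an illustration, not part of the patch:

	/* PQAP sub-function codes, as encoded into GR0 by the wrappers */
	enum pqap_fc {
		PQAP_TAPQ = 0,	/* test AP queue */
		PQAP_RAPQ = 1,	/* reset AP queue */
		PQAP_ZAPQ = 2,	/* reset and zeroize AP queue */
		PQAP_AQIC = 3,	/* AP queue interruption control */
		PQAP_QCI  = 4,	/* query AP configuration information */
		PQAP_QACT = 5,	/* query AP compatibility type */
	};

	static inline unsigned long pqap_gr0(unsigned long qid, enum pqap_fc fc)
	{
		return qid | ((unsigned long)fc << 24);
	}
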
*/ -struct ap_queue_status ap_queue_irq_ctrl(ap_qid_t qid, - struct ap_qirq_ctrl qirqctrl, - void *ind); +static inline struct ap_queue_status ap_qact(ap_qid_t qid, int ifbit, + union ap_qact_ap_info *apinfo) +{ + register unsigned long reg0 asm ("0") = qid | (5UL << 24) + | ((ifbit & 0x01) << 22); + register unsigned long reg1_in asm ("1") = apinfo->val; + register struct ap_queue_status reg1_out asm ("1"); + register unsigned long reg2 asm ("2"); + + asm volatile( + ".long 0xb2af0000" /* PQAP(QACT) */ + : "+d" (reg1_in), "=d" (reg1_out), "=d" (reg2) + : "d" (reg0) + : "cc"); + apinfo->val = reg2; + return reg1_out; +} + +/** + * ap_nqap(): Send message to adjunct processor queue. + * @qid: The AP queue number + * @psmid: The program supplied message identifier + * @msg: The message text + * @length: The message length + * + * Returns AP queue status structure. + * Condition code 1 on NQAP can't happen because the L bit is 1. + * Condition code 2 on NQAP also means the send is incomplete, + * because a segment boundary was reached. The NQAP is repeated. + */ +static inline struct ap_queue_status ap_nqap(ap_qid_t qid, + unsigned long long psmid, + void *msg, size_t length) +{ + register unsigned long reg0 asm ("0") = qid | 0x40000000UL; + register struct ap_queue_status reg1 asm ("1"); + register unsigned long reg2 asm ("2") = (unsigned long) msg; + register unsigned long reg3 asm ("3") = (unsigned long) length; + register unsigned long reg4 asm ("4") = (unsigned int) (psmid >> 32); + register unsigned long reg5 asm ("5") = psmid & 0xffffffff; + + asm volatile ( + "0: .long 0xb2ad0042\n" /* NQAP */ + " brc 2,0b" + : "+d" (reg0), "=d" (reg1), "+d" (reg2), "+d" (reg3) + : "d" (reg4), "d" (reg5) + : "cc", "memory"); + return reg1; +} + +/** + * ap_dqap(): Receive message from adjunct processor queue. + * @qid: The AP queue number + * @psmid: Pointer to program supplied message identifier + * @msg: The message text + * @length: The message length + * + * Returns AP queue status structure. + * Condition code 1 on DQAP means the receive has taken place + * but only partially. The response is incomplete, hence the + * DQAP is repeated. + * Condition code 2 on DQAP also means the receive is incomplete, + * this time because a segment boundary was reached. Again, the + * DQAP is repeated. + * Note that gpr2 is used by the DQAP instruction to keep track of + * any 'residual' length, in case the instruction gets interrupted. + * Hence it gets zeroed before the instruction. 
+ */ +static inline struct ap_queue_status ap_dqap(ap_qid_t qid, + unsigned long long *psmid, + void *msg, size_t length) +{ + register unsigned long reg0 asm("0") = qid | 0x80000000UL; + register struct ap_queue_status reg1 asm ("1"); + register unsigned long reg2 asm("2") = 0UL; + register unsigned long reg4 asm("4") = (unsigned long) msg; + register unsigned long reg5 asm("5") = (unsigned long) length; + register unsigned long reg6 asm("6") = 0UL; + register unsigned long reg7 asm("7") = 0UL; + + + asm volatile( + "0: .long 0xb2ae0064\n" /* DQAP */ + " brc 6,0b\n" + : "+d" (reg0), "=d" (reg1), "+d" (reg2), + "+d" (reg4), "+d" (reg5), "+d" (reg6), "+d" (reg7) + : : "cc", "memory"); + *psmid = (((unsigned long long) reg6) << 32) + reg7; + return reg1; +} #endif /* _ASM_S390_AP_H_ */ diff --git a/arch/s390/include/asm/atomic.h b/arch/s390/include/asm/atomic.h index 4b55532f15c4..fd20ab5d4cf7 100644 --- a/arch/s390/include/asm/atomic.h +++ b/arch/s390/include/asm/atomic.h @@ -55,17 +55,9 @@ static inline void atomic_add(int i, atomic_t *v) __atomic_add(i, &v->counter); } -#define atomic_add_negative(_i, _v) (atomic_add_return(_i, _v) < 0) -#define atomic_inc(_v) atomic_add(1, _v) -#define atomic_inc_return(_v) atomic_add_return(1, _v) -#define atomic_inc_and_test(_v) (atomic_add_return(1, _v) == 0) #define atomic_sub(_i, _v) atomic_add(-(int)(_i), _v) #define atomic_sub_return(_i, _v) atomic_add_return(-(int)(_i), _v) #define atomic_fetch_sub(_i, _v) atomic_fetch_add(-(int)(_i), _v) -#define atomic_sub_and_test(_i, _v) (atomic_sub_return(_i, _v) == 0) -#define atomic_dec(_v) atomic_sub(1, _v) -#define atomic_dec_return(_v) atomic_sub_return(1, _v) -#define atomic_dec_and_test(_v) (atomic_sub_return(1, _v) == 0) #define ATOMIC_OPS(op) \ static inline void atomic_##op(int i, atomic_t *v) \ @@ -90,21 +82,6 @@ static inline int atomic_cmpxchg(atomic_t *v, int old, int new) return __atomic_cmpxchg(&v->counter, old, new); } -static inline int __atomic_add_unless(atomic_t *v, int a, int u) -{ - int c, old; - c = atomic_read(v); - for (;;) { - if (unlikely(c == u)) - break; - old = atomic_cmpxchg(v, c, c + a); - if (likely(old == c)) - break; - c = old; - } - return c; -} - #define ATOMIC64_INIT(i) { (i) } static inline long atomic64_read(const atomic64_t *v) @@ -168,50 +145,8 @@ ATOMIC64_OPS(xor) #undef ATOMIC64_OPS -static inline int atomic64_add_unless(atomic64_t *v, long i, long u) -{ - long c, old; - - c = atomic64_read(v); - for (;;) { - if (unlikely(c == u)) - break; - old = atomic64_cmpxchg(v, c, c + i); - if (likely(old == c)) - break; - c = old; - } - return c != u; -} - -static inline long atomic64_dec_if_positive(atomic64_t *v) -{ - long c, old, dec; - - c = atomic64_read(v); - for (;;) { - dec = c - 1; - if (unlikely(dec < 0)) - break; - old = atomic64_cmpxchg((v), c, dec); - if (likely(old == c)) - break; - c = old; - } - return dec; -} - -#define atomic64_add_negative(_i, _v) (atomic64_add_return(_i, _v) < 0) -#define atomic64_inc(_v) atomic64_add(1, _v) -#define atomic64_inc_return(_v) atomic64_add_return(1, _v) -#define atomic64_inc_and_test(_v) (atomic64_add_return(1, _v) == 0) #define atomic64_sub_return(_i, _v) atomic64_add_return(-(long)(_i), _v) #define atomic64_fetch_sub(_i, _v) atomic64_fetch_add(-(long)(_i), _v) #define atomic64_sub(_i, _v) atomic64_add(-(long)(_i), _v) -#define atomic64_sub_and_test(_i, _v) (atomic64_sub_return(_i, _v) == 0) -#define atomic64_dec(_v) atomic64_sub(1, _v) -#define atomic64_dec_return(_v) atomic64_sub_return(1, _v) -#define 
atomic64_dec_and_test(_v) (atomic64_sub_return(1, _v) == 0) -#define atomic64_inc_not_zero(v) atomic64_add_unless((v), 1, 0) #endif /* __ARCH_S390_ATOMIC__ */ diff --git a/arch/s390/include/asm/cpu_mf.h b/arch/s390/include/asm/cpu_mf.h index de023a9a88ca..bf2cbff926ef 100644 --- a/arch/s390/include/asm/cpu_mf.h +++ b/arch/s390/include/asm/cpu_mf.h @@ -2,7 +2,7 @@ /* * CPU-measurement facilities * - * Copyright IBM Corp. 2012 + * Copyright IBM Corp. 2012, 2018 * Author(s): Hendrik Brueckner * Jan Glauber */ @@ -139,8 +139,14 @@ struct hws_trailer_entry { unsigned char timestamp[16]; /* 16 - 31 timestamp */ unsigned long long reserved1; /* 32 -Reserved */ unsigned long long reserved2; /* */ - unsigned long long progusage1; /* 48 - reserved for programming use */ - unsigned long long progusage2; /* */ + union { /* 48 - reserved for programming use */ + struct { + unsigned int clock_base:1; /* in progusage2 */ + unsigned long long progusage1:63; + unsigned long long progusage2; + }; + unsigned long long progusage[2]; + }; } __packed; /* Load program parameter */ diff --git a/arch/s390/include/asm/css_chars.h b/arch/s390/include/asm/css_chars.h index 0563fd3e8458..480bb02ccacd 100644 --- a/arch/s390/include/asm/css_chars.h +++ b/arch/s390/include/asm/css_chars.h @@ -6,36 +6,38 @@ struct css_general_char { u64 : 12; - u32 dynio : 1; /* bit 12 */ - u32 : 4; - u32 eadm : 1; /* bit 17 */ - u32 : 23; - u32 aif : 1; /* bit 41 */ - u32 : 3; - u32 mcss : 1; /* bit 45 */ - u32 fcs : 1; /* bit 46 */ - u32 : 1; - u32 ext_mb : 1; /* bit 48 */ - u32 : 7; - u32 aif_tdd : 1; /* bit 56 */ - u32 : 1; - u32 qebsm : 1; /* bit 58 */ - u32 : 2; - u32 aiv : 1; /* bit 61 */ - u32 : 5; - u32 aif_osa : 1; /* bit 67 */ - u32 : 12; - u32 eadm_rf : 1; /* bit 80 */ - u32 : 1; - u32 cib : 1; /* bit 82 */ - u32 : 5; - u32 fcx : 1; /* bit 88 */ - u32 : 19; - u32 alt_ssi : 1; /* bit 108 */ - u32 : 1; - u32 narf : 1; /* bit 110 */ - u32 : 12; - u32 util_str : 1;/* bit 123 */ + u64 dynio : 1; /* bit 12 */ + u64 : 4; + u64 eadm : 1; /* bit 17 */ + u64 : 23; + u64 aif : 1; /* bit 41 */ + u64 : 3; + u64 mcss : 1; /* bit 45 */ + u64 fcs : 1; /* bit 46 */ + u64 : 1; + u64 ext_mb : 1; /* bit 48 */ + u64 : 7; + u64 aif_tdd : 1; /* bit 56 */ + u64 : 1; + u64 qebsm : 1; /* bit 58 */ + u64 : 2; + u64 aiv : 1; /* bit 61 */ + u64 : 2; + + u64 : 3; + u64 aif_osa : 1; /* bit 67 */ + u64 : 12; + u64 eadm_rf : 1; /* bit 80 */ + u64 : 1; + u64 cib : 1; /* bit 82 */ + u64 : 5; + u64 fcx : 1; /* bit 88 */ + u64 : 19; + u64 alt_ssi : 1; /* bit 108 */ + u64 : 1; + u64 narf : 1; /* bit 110 */ + u64 : 12; + u64 util_str : 1;/* bit 123 */ } __packed; extern struct css_general_char css_general_characteristics; diff --git a/arch/s390/include/asm/gmap.h b/arch/s390/include/asm/gmap.h index e07cce88dfb0..fcbd638fb9f4 100644 --- a/arch/s390/include/asm/gmap.h +++ b/arch/s390/include/asm/gmap.h @@ -9,6 +9,14 @@ #ifndef _ASM_S390_GMAP_H #define _ASM_S390_GMAP_H +/* Generic bits for GMAP notification on DAT table entry changes. 
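
The css_general_char change above is a bit-position fix rather than a behavioral one: with u32 members, the compiler allocates bit-fields in 32-bit units and does not let a field straddle a unit, so later fields can drift away from the architecturally documented bit numbers; u64 members (plus the explicit split at the 64-bit boundary) keep them in place. A self-contained illustration with hypothetical widths, not the real layout (absolute positions are ABI-dependent; the point is that the two structs differ):

	#include <stdint.h>
	#include <stdio.h>

	/* 'b' (30 bits) does not fit in the 20 bits left in the first
	 * 32-bit unit, so it starts a new unit and 'c' moves with it. */
	struct unit32 { uint32_t a : 12; uint32_t b : 30; uint32_t c : 1; };
	struct unit64 { uint64_t a : 12; uint64_t b : 30; uint64_t c : 1; };

	int main(void)
	{
		union { struct unit32 s; uint64_t raw; } v32 = { 0 };
		union { struct unit64 s; uint64_t raw; } v64 = { 0 };

		v32.s.c = 1;
		v64.s.c = 1;
		/* the single set bit lands at different absolute positions */
		printf("%#llx %#llx\n", (unsigned long long)v32.raw,
		       (unsigned long long)v64.raw);
		return 0;
	}
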
*/ +#define GMAP_NOTIFY_SHADOW 0x2 +#define GMAP_NOTIFY_MPROT 0x1 + +/* Status bits only for huge segment entries */ +#define _SEGMENT_ENTRY_GMAP_IN 0x8000 /* invalidation notify bit */ +#define _SEGMENT_ENTRY_GMAP_UC 0x4000 /* dirty (migration) */ + /** * struct gmap_struct - guest address space * @list: list head for the mm->context gmap list @@ -132,4 +140,6 @@ void gmap_pte_notify(struct mm_struct *, unsigned long addr, pte_t *, int gmap_mprotect_notify(struct gmap *, unsigned long start, unsigned long len, int prot); +void gmap_sync_dirty_log_pmd(struct gmap *gmap, unsigned long dirty_bitmap[4], + unsigned long gaddr, unsigned long vmaddr); #endif /* _ASM_S390_GMAP_H */ diff --git a/arch/s390/include/asm/hugetlb.h b/arch/s390/include/asm/hugetlb.h index 9c5fc50204dd..2d1afa58a4b6 100644 --- a/arch/s390/include/asm/hugetlb.h +++ b/arch/s390/include/asm/hugetlb.h @@ -37,7 +37,10 @@ static inline int prepare_hugepage_range(struct file *file, return 0; } -#define arch_clear_hugepage_flags(page) do { } while (0) +static inline void arch_clear_hugepage_flags(struct page *page) +{ + clear_bit(PG_arch_1, &page->flags); +} static inline void huge_pte_clear(struct mm_struct *mm, unsigned long addr, pte_t *ptep, unsigned long sz) diff --git a/arch/s390/include/asm/kprobes.h b/arch/s390/include/asm/kprobes.h index 13de80cf741c..b106aa29bf55 100644 --- a/arch/s390/include/asm/kprobes.h +++ b/arch/s390/include/asm/kprobes.h @@ -68,8 +68,6 @@ struct kprobe_ctlblk { unsigned long kprobe_saved_imask; unsigned long kprobe_saved_ctl[3]; struct prev_kprobe prev_kprobe; - struct pt_regs jprobe_saved_regs; - kprobe_opcode_t jprobes_stack[MAX_STACK_SIZE]; }; void arch_remove_kprobe(struct kprobe *p); diff --git a/arch/s390/include/asm/lowcore.h b/arch/s390/include/asm/lowcore.h index 5bc888841eaf..406d940173ab 100644 --- a/arch/s390/include/asm/lowcore.h +++ b/arch/s390/include/asm/lowcore.h @@ -185,7 +185,7 @@ struct lowcore { /* Transaction abort diagnostic block */ __u8 pgm_tdb[256]; /* 0x1800 */ __u8 pad_0x1900[0x2000-0x1900]; /* 0x1900 */ -} __packed; +} __packed __aligned(8192); #define S390_lowcore (*((struct lowcore *) 0)) diff --git a/arch/s390/include/asm/mmu.h b/arch/s390/include/asm/mmu.h index f5ff9dbad8ac..f31a15044c24 100644 --- a/arch/s390/include/asm/mmu.h +++ b/arch/s390/include/asm/mmu.h @@ -24,6 +24,8 @@ typedef struct { unsigned int uses_skeys:1; /* The mmu context uses CMM. */ unsigned int uses_cmm:1; + /* The gmaps associated with this context are allowed to use huge pages. 
*/ + unsigned int allow_gmap_hpage_1m:1; } mm_context_t; #define INIT_MM_CONTEXT(name) \ diff --git a/arch/s390/include/asm/mmu_context.h b/arch/s390/include/asm/mmu_context.h index d16bc79c30bb..0717ee76885d 100644 --- a/arch/s390/include/asm/mmu_context.h +++ b/arch/s390/include/asm/mmu_context.h @@ -32,6 +32,7 @@ static inline int init_new_context(struct task_struct *tsk, mm->context.has_pgste = 0; mm->context.uses_skeys = 0; mm->context.uses_cmm = 0; + mm->context.allow_gmap_hpage_1m = 0; #endif switch (mm->context.asce_limit) { case _REGION2_SIZE: diff --git a/arch/s390/include/asm/nospec-insn.h b/arch/s390/include/asm/nospec-insn.h index a01f81186e86..123dac3717b3 100644 --- a/arch/s390/include/asm/nospec-insn.h +++ b/arch/s390/include/asm/nospec-insn.h @@ -8,7 +8,7 @@ #ifdef __ASSEMBLY__ -#ifdef CONFIG_EXPOLINE +#ifdef CC_USING_EXPOLINE _LC_BR_R1 = __LC_BR_R1 @@ -189,7 +189,7 @@ _LC_BR_R1 = __LC_BR_R1 .macro BASR_EX rsave,rtarget,ruse=%r1 basr \rsave,\rtarget .endm -#endif +#endif /* CC_USING_EXPOLINE */ #endif /* __ASSEMBLY__ */ diff --git a/arch/s390/include/asm/pci.h b/arch/s390/include/asm/pci.h index 94f8db468c9b..10fe982f2b4b 100644 --- a/arch/s390/include/asm/pci.h +++ b/arch/s390/include/asm/pci.h @@ -51,6 +51,10 @@ struct zpci_fmb_fmt2 { u64 max_work_units; }; +struct zpci_fmb_fmt3 { + u64 tx_bytes; +}; + struct zpci_fmb { u32 format : 8; u32 fmt_ind : 24; @@ -66,6 +70,7 @@ struct zpci_fmb { struct zpci_fmb_fmt0 fmt0; struct zpci_fmb_fmt1 fmt1; struct zpci_fmb_fmt2 fmt2; + struct zpci_fmb_fmt3 fmt3; }; } __packed __aligned(128); diff --git a/arch/s390/include/asm/pgtable.h b/arch/s390/include/asm/pgtable.h index 5ab636089c60..0e7cb0dc9c33 100644 --- a/arch/s390/include/asm/pgtable.h +++ b/arch/s390/include/asm/pgtable.h @@ -268,8 +268,10 @@ static inline int is_module_addr(void *addr) #define _REGION_ENTRY_BITS_LARGE 0xffffffff8000fe2fUL /* Bits in the segment table entry */ -#define _SEGMENT_ENTRY_BITS 0xfffffffffffffe33UL -#define _SEGMENT_ENTRY_BITS_LARGE 0xfffffffffff0ff33UL +#define _SEGMENT_ENTRY_BITS 0xfffffffffffffe33UL +#define _SEGMENT_ENTRY_BITS_LARGE 0xfffffffffff0ff33UL +#define _SEGMENT_ENTRY_HARDWARE_BITS 0xfffffffffffffe30UL +#define _SEGMENT_ENTRY_HARDWARE_BITS_LARGE 0xfffffffffff00730UL #define _SEGMENT_ENTRY_ORIGIN_LARGE ~0xfffffUL /* large page address */ #define _SEGMENT_ENTRY_ORIGIN ~0x7ffUL/* page table origin */ #define _SEGMENT_ENTRY_PROTECT 0x200 /* segment protection bit */ @@ -1101,7 +1103,8 @@ int ptep_shadow_pte(struct mm_struct *mm, unsigned long saddr, pte_t *sptep, pte_t *tptep, pte_t pte); void ptep_unshadow_pte(struct mm_struct *mm, unsigned long saddr, pte_t *ptep); -bool test_and_clear_guest_dirty(struct mm_struct *mm, unsigned long address); +bool ptep_test_and_clear_uc(struct mm_struct *mm, unsigned long address, + pte_t *ptep); int set_guest_storage_key(struct mm_struct *mm, unsigned long addr, unsigned char key, bool nq); int cond_set_guest_storage_key(struct mm_struct *mm, unsigned long addr, @@ -1116,6 +1119,10 @@ int set_pgste_bits(struct mm_struct *mm, unsigned long addr, int get_pgste(struct mm_struct *mm, unsigned long hva, unsigned long *pgstep); int pgste_perform_essa(struct mm_struct *mm, unsigned long hva, int orc, unsigned long *oldpte, unsigned long *oldpgste); +void gmap_pmdp_csp(struct mm_struct *mm, unsigned long vmaddr); +void gmap_pmdp_invalidate(struct mm_struct *mm, unsigned long vmaddr); +void gmap_pmdp_idte_local(struct mm_struct *mm, unsigned long vmaddr); +void gmap_pmdp_idte_global(struct mm_struct *mm, 
unsigned long vmaddr); /* * Certain architectures need to do special things when PTEs diff --git a/arch/s390/include/asm/purgatory.h b/arch/s390/include/asm/purgatory.h index 6090670df51f..e297bcfc476f 100644 --- a/arch/s390/include/asm/purgatory.h +++ b/arch/s390/include/asm/purgatory.h @@ -13,11 +13,5 @@ int verify_sha256_digest(void); -extern u64 kernel_entry; -extern u64 kernel_type; - -extern u64 crash_start; -extern u64 crash_size; - #endif /* __ASSEMBLY__ */ #endif /* _S390_PURGATORY_H_ */ diff --git a/arch/s390/include/asm/qdio.h b/arch/s390/include/asm/qdio.h index de11ecc99c7c..9c9970a5dfb1 100644 --- a/arch/s390/include/asm/qdio.h +++ b/arch/s390/include/asm/qdio.h @@ -262,7 +262,6 @@ struct qdio_outbuf_state { void *user; }; -#define QDIO_OUTBUF_STATE_FLAG_NONE 0x00 #define QDIO_OUTBUF_STATE_FLAG_PENDING 0x01 #define CHSC_AC1_INITIATE_INPUTQ 0x80 diff --git a/arch/s390/include/asm/sections.h b/arch/s390/include/asm/sections.h index 54f81f8ed662..724faede8ac5 100644 --- a/arch/s390/include/asm/sections.h +++ b/arch/s390/include/asm/sections.h @@ -4,6 +4,4 @@ #include -extern char _ehead[]; - #endif diff --git a/arch/s390/include/asm/setup.h b/arch/s390/include/asm/setup.h index 9c30ebe046f3..1d66016f4170 100644 --- a/arch/s390/include/asm/setup.h +++ b/arch/s390/include/asm/setup.h @@ -9,8 +9,10 @@ #include #include - +#define EP_OFFSET 0x10008 +#define EP_STRING "S390EP" #define PARMAREA 0x10400 +#define PARMAREA_END 0x11000 /* * Machine features detected in early.c diff --git a/arch/s390/include/uapi/asm/chsc.h b/arch/s390/include/uapi/asm/chsc.h index dc329aa03f76..83a574e95b3a 100644 --- a/arch/s390/include/uapi/asm/chsc.h +++ b/arch/s390/include/uapi/asm/chsc.h @@ -23,29 +23,29 @@ struct chsc_async_header { __u32 key : 4; __u32 : 28; struct subchannel_id sid; -} __attribute__ ((packed)); +}; struct chsc_async_area { struct chsc_async_header header; __u8 data[CHSC_SIZE - sizeof(struct chsc_async_header)]; -} __attribute__ ((packed)); +}; struct chsc_header { __u16 length; __u16 code; -} __attribute__ ((packed)); +}; struct chsc_sync_area { struct chsc_header header; __u8 data[CHSC_SIZE - sizeof(struct chsc_header)]; -} __attribute__ ((packed)); +}; struct chsc_response_struct { __u16 length; __u16 code; __u32 parms; __u8 data[CHSC_SIZE - 2 * sizeof(__u16) - sizeof(__u32)]; -} __attribute__ ((packed)); +}; struct chsc_chp_cd { struct chp_id chpid; diff --git a/arch/s390/kernel/Makefile b/arch/s390/kernel/Makefile index 2fed39b26b42..dbfd1730e631 100644 --- a/arch/s390/kernel/Makefile +++ b/arch/s390/kernel/Makefile @@ -9,38 +9,20 @@ ifdef CONFIG_FUNCTION_TRACER CFLAGS_REMOVE_ftrace.o = $(CC_FLAGS_FTRACE) # Do not trace early setup code -CFLAGS_REMOVE_als.o = $(CC_FLAGS_FTRACE) CFLAGS_REMOVE_early.o = $(CC_FLAGS_FTRACE) CFLAGS_REMOVE_early_nobss.o = $(CC_FLAGS_FTRACE) endif -GCOV_PROFILE_als.o := n GCOV_PROFILE_early.o := n GCOV_PROFILE_early_nobss.o := n -KCOV_INSTRUMENT_als.o := n KCOV_INSTRUMENT_early.o := n KCOV_INSTRUMENT_early_nobss.o := n -UBSAN_SANITIZE_als.o := n UBSAN_SANITIZE_early.o := n UBSAN_SANITIZE_early_nobss.o := n -# -# Use -march=z900 for als.c to be able to print an error -# message if the kernel is started on a machine which is too old -# -ifneq ($(CC_FLAGS_MARCH),-march=z900) -CFLAGS_REMOVE_als.o += $(CC_FLAGS_MARCH) -CFLAGS_REMOVE_als.o += $(CC_FLAGS_EXPOLINE) -CFLAGS_als.o += -march=z900 -AFLAGS_REMOVE_head.o += $(CC_FLAGS_MARCH) -AFLAGS_head.o += -march=z900 -endif - -CFLAGS_als.o += -D__NO_FORTIFY - # # Passing null pointers is ok for smp code, since 
we access the lowcore here. # @@ -61,13 +43,13 @@ CFLAGS_ptrace.o += -DUTS_MACHINE='"$(UTS_MACHINE)"' obj-y := traps.o time.o process.o base.o early.o setup.o idle.o vtime.o obj-y += processor.o sys_s390.o ptrace.o signal.o cpcmd.o ebcdic.o nmi.o -obj-y += debug.o irq.o ipl.o dis.o diag.o vdso.o als.o early_nobss.o +obj-y += debug.o irq.o ipl.o dis.o diag.o vdso.o early_nobss.o obj-y += sysinfo.o jump_label.o lgr.o os_info.o machine_kexec.o pgm_check.o obj-y += runtime_instr.o cache.o fpu.o dumpstack.o guarded_storage.o sthyi.o obj-y += entry.o reipl.o relocate_kernel.o kdebugfs.o alternative.o obj-y += nospec-branch.o -extra-y += head.o head64.o vmlinux.lds +extra-y += head64.o vmlinux.lds obj-$(CONFIG_SYSFS) += nospec-sysfs.o CFLAGS_REMOVE_nospec-branch.o += $(CC_FLAGS_EXPOLINE) @@ -99,5 +81,5 @@ obj-$(CONFIG_TRACEPOINTS) += trace.o obj-y += vdso64/ obj-$(CONFIG_COMPAT) += vdso32/ -chkbss := head.o head64.o als.o early_nobss.o +chkbss := head64.o early_nobss.o include $(srctree)/arch/s390/scripts/Makefile.chkbss diff --git a/arch/s390/kernel/compat_wrapper.c b/arch/s390/kernel/compat_wrapper.c index 607c5e9fba3d..2ce28bf0c5ec 100644 --- a/arch/s390/kernel/compat_wrapper.c +++ b/arch/s390/kernel/compat_wrapper.c @@ -183,3 +183,4 @@ COMPAT_SYSCALL_WRAP2(s390_guarded_storage, int, command, struct gs_cb *, gs_cb); COMPAT_SYSCALL_WRAP5(statx, int, dfd, const char __user *, path, unsigned, flags, unsigned, mask, struct statx __user *, buffer); COMPAT_SYSCALL_WRAP4(s390_sthyi, unsigned long, code, void __user *, info, u64 __user *, rc, unsigned long, flags); COMPAT_SYSCALL_WRAP5(kexec_file_load, int, kernel_fd, int, initrd_fd, unsigned long, cmdline_len, const char __user *, cmdline_ptr, unsigned long, flags) +COMPAT_SYSCALL_WRAP4(rseq, struct rseq __user *, rseq, u32, rseq_len, int, flags, u32, sig) diff --git a/arch/s390/kernel/crash_dump.c b/arch/s390/kernel/crash_dump.c index 9f5ea9d87069..c3620bafc374 100644 --- a/arch/s390/kernel/crash_dump.c +++ b/arch/s390/kernel/crash_dump.c @@ -306,6 +306,15 @@ static void *kzalloc_panic(int len) return rc; } +static const char *nt_name(Elf64_Word type) +{ + const char *name = "LINUX"; + + if (type == NT_PRPSINFO || type == NT_PRSTATUS || type == NT_PRFPREG) + name = KEXEC_CORE_NOTE_NAME; + return name; +} + /* * Initialize ELF note */ @@ -332,11 +341,26 @@ static void *nt_init_name(void *buf, Elf64_Word type, void *desc, int d_len, static inline void *nt_init(void *buf, Elf64_Word type, void *desc, int d_len) { - const char *note_name = "LINUX"; + return nt_init_name(buf, type, desc, d_len, nt_name(type)); +} + +/* + * Calculate the size of ELF note + */ +static size_t nt_size_name(int d_len, const char *name) +{ + size_t size; - if (type == NT_PRPSINFO || type == NT_PRSTATUS || type == NT_PRFPREG) - note_name = KEXEC_CORE_NOTE_NAME; - return nt_init_name(buf, type, desc, d_len, note_name); + size = sizeof(Elf64_Nhdr); + size += roundup(strlen(name) + 1, 4); + size += roundup(d_len, 4); + + return size; +} + +static inline size_t nt_size(Elf64_Word type, int d_len) +{ + return nt_size_name(d_len, nt_name(type)); } /* @@ -374,6 +398,29 @@ static void *fill_cpu_elf_notes(void *ptr, int cpu, struct save_area *sa) return ptr; } +/* + * Calculate size of ELF notes per cpu + */ +static size_t get_cpu_elf_notes_size(void) +{ + struct save_area *sa = NULL; + size_t size; + + size = nt_size(NT_PRSTATUS, sizeof(struct elf_prstatus)); + size += nt_size(NT_PRFPREG, sizeof(elf_fpregset_t)); + size += nt_size(NT_S390_TIMER, sizeof(sa->timer)); + size += 
nt_size(NT_S390_TODCMP, sizeof(sa->todcmp)); size += nt_size(NT_S390_TODPREG, sizeof(sa->todpreg)); size += nt_size(NT_S390_CTRS, sizeof(sa->ctrs)); size += nt_size(NT_S390_PREFIX, sizeof(sa->prefix)); if (MACHINE_HAS_VX) { size += nt_size(NT_S390_VXRS_HIGH, sizeof(sa->vxrs_high)); size += nt_size(NT_S390_VXRS_LOW, sizeof(sa->vxrs_low)); } + + return size; +} + /* * Initialize prpsinfo note (new kernel) */ @@ -429,6 +476,30 @@ static void *nt_vmcoreinfo(void *ptr) return nt_init_name(ptr, 0, vmcoreinfo, size, "VMCOREINFO"); } +static size_t nt_vmcoreinfo_size(void) +{ + const char *name = "VMCOREINFO"; + char nt_name[11]; + Elf64_Nhdr note; + void *addr; + + if (copy_oldmem_kernel(&addr, &S390_lowcore.vmcore_info, sizeof(addr))) + return 0; + + if (copy_oldmem_kernel(&note, addr, sizeof(note))) + return 0; + + memset(nt_name, 0, sizeof(nt_name)); + if (copy_oldmem_kernel(nt_name, addr + sizeof(note), + sizeof(nt_name) - 1)) + return 0; + + if (strcmp(nt_name, name) != 0) + return 0; + + return nt_size_name(note.n_descsz, name); +} + /* * Initialize final note (needed for /proc/vmcore code) */ @@ -539,6 +610,27 @@ static void *notes_init(Elf64_Phdr *phdr, void *ptr, u64 notes_offset) return ptr; } +static size_t get_elfcorehdr_size(int mem_chunk_cnt) +{ + size_t size; + + size = sizeof(Elf64_Ehdr); + /* PT_NOTES */ + size += sizeof(Elf64_Phdr); + /* nt_prpsinfo */ + size += nt_size(NT_PRPSINFO, sizeof(struct elf_prpsinfo)); + /* regsets */ + size += get_cpu_cnt() * get_cpu_elf_notes_size(); + /* nt_vmcoreinfo */ + size += nt_vmcoreinfo_size(); + /* nt_final */ + size += sizeof(Elf64_Nhdr); + /* PT_LOADS */ + size += mem_chunk_cnt * sizeof(Elf64_Phdr); + + return size; +} + /* * Create ELF core header (new kernel) */ @@ -566,8 +658,8 @@ int elfcorehdr_alloc(unsigned long long *addr, unsigned long long *size) mem_chunk_cnt = get_mem_chunk_cnt(); - alloc_size = 0x1000 + get_cpu_cnt() * 0x4a0 + - mem_chunk_cnt * sizeof(Elf64_Phdr); + alloc_size = get_elfcorehdr_size(mem_chunk_cnt); + hdr = kzalloc_panic(alloc_size); /* Init elf header */ ptr = ehdr_init(hdr, mem_chunk_cnt); diff --git a/arch/s390/kernel/early.c b/arch/s390/kernel/early.c index 827699eb48fa..5b28b434f8a1 100644 --- a/arch/s390/kernel/early.c +++ b/arch/s390/kernel/early.c @@ -331,8 +331,20 @@ static void __init setup_boot_command_line(void) append_to_cmdline(append_ipl_scpdata); } +static void __init check_image_bootable(void) +{ + if (!memcmp(EP_STRING, (void *)EP_OFFSET, strlen(EP_STRING))) + return; + + sclp_early_printk("Linux kernel boot failure: An attempt to boot a vmlinux ELF image failed.\n"); + sclp_early_printk("This image does not contain all parts necessary for starting up. 
Use\n"); + sclp_early_printk("bzImage or arch/s390/boot/compressed/vmlinux instead.\n"); + disabled_wait(0xbadb007); +} + void __init startup_init(void) { + check_image_bootable(); time_early_init(); init_kernel_storage_key(); lockdep_off(); diff --git a/arch/s390/kernel/entry.S b/arch/s390/kernel/entry.S index f03402efab4b..150130c897c3 100644 --- a/arch/s390/kernel/entry.S +++ b/arch/s390/kernel/entry.S @@ -357,6 +357,10 @@ ENTRY(system_call) stg %r2,__PT_R2(%r11) # store return value .Lsysc_return: +#ifdef CONFIG_DEBUG_RSEQ + lgr %r2,%r11 + brasl %r14,rseq_syscall +#endif LOCKDEP_SYS_EXIT .Lsysc_tif: TSTMSK __PT_FLAGS(%r11),_PIF_WORK @@ -1265,7 +1269,7 @@ cleanup_critical: jl 0f clg %r9,BASED(.Lcleanup_table+104) # .Lload_fpu_regs_end jl .Lcleanup_load_fpu_regs -0: BR_EX %r14 +0: BR_EX %r14,%r11 .align 8 .Lcleanup_table: @@ -1301,7 +1305,7 @@ cleanup_critical: ni __SIE_PROG0C+3(%r9),0xfe # no longer in SIE lctlg %c1,%c1,__LC_USER_ASCE # load primary asce larl %r9,sie_exit # skip forward to sie_exit - BR_EX %r14 + BR_EX %r14,%r11 #endif .Lcleanup_system_call: diff --git a/arch/s390/kernel/entry.h b/arch/s390/kernel/entry.h index 961abfac2c5f..472fa2f1a4a5 100644 --- a/arch/s390/kernel/entry.h +++ b/arch/s390/kernel/entry.h @@ -83,7 +83,6 @@ long sys_s390_sthyi(unsigned long function_code, void __user *buffer, u64 __user DECLARE_PER_CPU(u64, mt_cycles[8]); -void verify_facilities(void); void gs_load_bc_cb(struct pt_regs *regs); void set_fs_fixup(void); diff --git a/arch/s390/kernel/head64.S b/arch/s390/kernel/head64.S index 791cb9000e86..6d14ad42ba88 100644 --- a/arch/s390/kernel/head64.S +++ b/arch/s390/kernel/head64.S @@ -48,11 +48,23 @@ ENTRY(startup_continue) # Early machine initialization and detection functions. # brasl %r14,startup_init - lpswe .Lentry-.LPG1(13) # jump to _stext in primary-space, - # virtual and never return ... + +# check control registers + stctg %c0,%c15,0(%r15) + oi 6(%r15),0x60 # enable sigp emergency & external call + oi 4(%r15),0x10 # switch on low address proctection + lctlg %c0,%c15,0(%r15) + + lam 0,15,.Laregs-.LPG1(%r13) # load acrs needed by uaccess + brasl %r14,start_kernel # go to C code +# +# We returned from start_kernel ?!? PANIK +# + basr %r13,0 + lpswe .Ldw-.(%r13) # load disabled wait psw + .align 16 .LPG1: -.Lentry:.quad 0x0000000180000000,_stext .Lctl: .quad 0x04040000 # cr0: AFP registers & secondary space .quad 0 # cr1: primary space segment table .quad .Lduct # cr2: dispatchable unit control table @@ -85,30 +97,5 @@ ENTRY(startup_continue) .endr .Llinkage_stack: .long 0,0,0x89000000,0,0,0,0x8a000000,0 - -ENTRY(_ehead) - - .org 0x100000 - 0x11000 # head.o ends at 0x11000 -# -# startup-code, running in absolute addressing mode -# -ENTRY(_stext) - basr %r13,0 # get base -.LPG3: -# check control registers - stctg %c0,%c15,0(%r15) - oi 6(%r15),0x60 # enable sigp emergency & external call - oi 4(%r15),0x10 # switch on low address proctection - lctlg %c0,%c15,0(%r15) - - lam 0,15,.Laregs-.LPG3(%r13) # load acrs needed by uaccess - brasl %r14,start_kernel # go to C code -# -# We returned from start_kernel ?!? 
PANIK -# - basr %r13,0 - lpswe .Ldw-.(%r13) # load disabled wait psw - - .align 8 .Ldw: .quad 0x0002000180000000,0x0000000000000000 .Laregs:.long 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 diff --git a/arch/s390/kernel/kprobes.c b/arch/s390/kernel/kprobes.c index 60f60afa645c..7c0a095e9c5f 100644 --- a/arch/s390/kernel/kprobes.c +++ b/arch/s390/kernel/kprobes.c @@ -321,38 +321,20 @@ static int kprobe_handler(struct pt_regs *regs) * If we have no pre-handler or it returned 0, we * continue with single stepping. If we have a * pre-handler and it returned non-zero, it prepped - * for calling the break_handler below on re-entry - * for jprobe processing, so get out doing nothing - * more here. + * for changing execution path, so get out doing + * nothing more here. */ push_kprobe(kcb, p); kcb->kprobe_status = KPROBE_HIT_ACTIVE; - if (p->pre_handler && p->pre_handler(p, regs)) + if (p->pre_handler && p->pre_handler(p, regs)) { + pop_kprobe(kcb); + preempt_enable_no_resched(); return 1; + } kcb->kprobe_status = KPROBE_HIT_SS; } enable_singlestep(kcb, regs, (unsigned long) p->ainsn.insn); return 1; - } else if (kprobe_running()) { - p = __this_cpu_read(current_kprobe); - if (p->break_handler && p->break_handler(p, regs)) { - /* - * Continuation after the jprobe completed and - * caused the jprobe_return trap. The jprobe - * break_handler "returns" to the original - * function that still has the kprobe breakpoint - * installed. We continue with single stepping. - */ - kcb->kprobe_status = KPROBE_HIT_SS; - enable_singlestep(kcb, regs, - (unsigned long) p->ainsn.insn); - return 1; - } /* else: - * No kprobe at this address and the current kprobe - * has no break handler (no jprobe!). The kernel just - * exploded, let the standard trap handler pick up the - * pieces. - */ } /* else: * No kprobe at this address and no active kprobe. The trap has * not been caused by a kprobe breakpoint. The race of breakpoint @@ -452,9 +434,7 @@ static int trampoline_probe_handler(struct kprobe *p, struct pt_regs *regs) regs->psw.addr = orig_ret_address; - pop_kprobe(get_kprobe_ctlblk()); kretprobe_hash_unlock(current, &flags); - preempt_enable_no_resched(); hlist_for_each_entry_safe(ri, tmp, &empty_rp, hlist) { hlist_del(&ri->hlist); @@ -661,60 +641,6 @@ int kprobe_exceptions_notify(struct notifier_block *self, } NOKPROBE_SYMBOL(kprobe_exceptions_notify); -int setjmp_pre_handler(struct kprobe *p, struct pt_regs *regs) -{ - struct jprobe *jp = container_of(p, struct jprobe, kp); - struct kprobe_ctlblk *kcb = get_kprobe_ctlblk(); - unsigned long stack; - - memcpy(&kcb->jprobe_saved_regs, regs, sizeof(struct pt_regs)); - - /* setup return addr to the jprobe handler routine */ - regs->psw.addr = (unsigned long) jp->entry; - regs->psw.mask &= ~(PSW_MASK_IO | PSW_MASK_EXT); - - /* r15 is the stack pointer */ - stack = (unsigned long) regs->gprs[15]; - - memcpy(kcb->jprobes_stack, (void *) stack, MIN_STACK_SIZE(stack)); - - /* - * jprobes use jprobe_return() which skips the normal return - * path of the function, and this messes up the accounting of the - * function graph tracer to get messed up. - * - * Pause function graph tracing while performing the jprobe function. 
- */ - pause_graph_tracing(); - return 1; -} -NOKPROBE_SYMBOL(setjmp_pre_handler); - -void jprobe_return(void) -{ - asm volatile(".word 0x0002"); -} -NOKPROBE_SYMBOL(jprobe_return); - -int longjmp_break_handler(struct kprobe *p, struct pt_regs *regs) -{ - struct kprobe_ctlblk *kcb = get_kprobe_ctlblk(); - unsigned long stack; - - /* It's OK to start function graph tracing again */ - unpause_graph_tracing(); - - stack = (unsigned long) kcb->jprobe_saved_regs.gprs[15]; - - /* Put the regs back */ - memcpy(regs, &kcb->jprobe_saved_regs, sizeof(struct pt_regs)); - /* put the stack back */ - memcpy((void *) stack, kcb->jprobes_stack, MIN_STACK_SIZE(stack)); - preempt_enable_no_resched(); - return 1; -} -NOKPROBE_SYMBOL(longjmp_break_handler); - static struct kprobe trampoline = { .addr = (kprobe_opcode_t *) &kretprobe_trampoline, .pre_handler = trampoline_probe_handler diff --git a/arch/s390/kernel/nospec-branch.c b/arch/s390/kernel/nospec-branch.c index 18ae7b9c71d6..bdddaae96559 100644 --- a/arch/s390/kernel/nospec-branch.c +++ b/arch/s390/kernel/nospec-branch.c @@ -35,6 +35,8 @@ early_param("nospec", nospec_setup_early); static int __init nospec_report(void) { + if (test_facility(156)) + pr_info("Spectre V2 mitigation: etokens\n"); if (IS_ENABLED(CC_USING_EXPOLINE) && !nospec_disable) pr_info("Spectre V2 mitigation: execute trampolines\n"); if (__test_facility(82, S390_lowcore.alt_stfle_fac_list)) @@ -56,7 +58,15 @@ early_param("nospectre_v2", nospectre_v2_setup_early); void __init nospec_auto_detect(void) { - if (IS_ENABLED(CC_USING_EXPOLINE)) { + if (test_facility(156)) { + /* + * The machine supports etokens. + * Disable expolines and disable nobp. + */ + if (IS_ENABLED(CC_USING_EXPOLINE)) + nospec_disable = 1; + __clear_facility(82, S390_lowcore.alt_stfle_fac_list); + } else if (IS_ENABLED(CC_USING_EXPOLINE)) { /* * The kernel has been compiled with expolines. * Keep expolines enabled and disable nobp. diff --git a/arch/s390/kernel/nospec-sysfs.c b/arch/s390/kernel/nospec-sysfs.c index 8affad5f18cb..e30e580ae362 100644 --- a/arch/s390/kernel/nospec-sysfs.c +++ b/arch/s390/kernel/nospec-sysfs.c @@ -13,6 +13,8 @@ ssize_t cpu_show_spectre_v1(struct device *dev, ssize_t cpu_show_spectre_v2(struct device *dev, struct device_attribute *attr, char *buf) { + if (test_facility(156)) + return sprintf(buf, "Mitigation: etokens\n"); if (IS_ENABLED(CC_USING_EXPOLINE) && !nospec_disable) return sprintf(buf, "Mitigation: execute trampolines\n"); if (__test_facility(82, S390_lowcore.alt_stfle_fac_list)) diff --git a/arch/s390/kernel/perf_cpum_sf.c b/arch/s390/kernel/perf_cpum_sf.c index 0292d68e7dde..cb198d4a6dca 100644 --- a/arch/s390/kernel/perf_cpum_sf.c +++ b/arch/s390/kernel/perf_cpum_sf.c @@ -2,7 +2,7 @@ /* * Performance event support for the System z CPU-measurement Sampling Facility * - * Copyright IBM Corp. 2013 + * Copyright IBM Corp. 
2013, 2018 * Author(s): Hendrik Brueckner */ #define KMSG_COMPONENT "cpum_sf" @@ -1587,6 +1587,17 @@ static void aux_buffer_free(void *data) "%lu SDBTs\n", num_sdbt); } +static void aux_sdb_init(unsigned long sdb) +{ + struct hws_trailer_entry *te; + + te = (struct hws_trailer_entry *)trailer_entry_ptr(sdb); + + /* Save clock base */ + te->clock_base = 1; + memcpy(&te->progusage2, &tod_clock_base[1], 8); +} + /* * aux_buffer_setup() - Setup AUX buffer for diagnostic mode sampling * @cpu: On which to allocate, -1 means current @@ -1666,6 +1677,7 @@ static void *aux_buffer_setup(int cpu, void **pages, int nr_pages, /* Tail is the entry in a SDBT */ *tail = (unsigned long)pages[i]; aux->sdb_index[i] = (unsigned long)pages[i]; + aux_sdb_init((unsigned long)pages[i]); } sfb->num_sdb = nr_pages; diff --git a/arch/s390/kernel/perf_regs.c b/arch/s390/kernel/perf_regs.c index 54e2d634b849..4352a504f235 100644 --- a/arch/s390/kernel/perf_regs.c +++ b/arch/s390/kernel/perf_regs.c @@ -12,9 +12,6 @@ u64 perf_reg_value(struct pt_regs *regs, int idx) { freg_t fp; - if (WARN_ON_ONCE((u32)idx >= PERF_REG_S390_MAX)) - return 0; - if (idx >= PERF_REG_S390_R0 && idx <= PERF_REG_S390_R15) return regs->gprs[idx]; @@ -33,7 +30,8 @@ u64 perf_reg_value(struct pt_regs *regs, int idx) if (idx == PERF_REG_S390_PC) return regs->psw.addr; - return regs->gprs[idx]; + WARN_ON_ONCE((u32)idx >= PERF_REG_S390_MAX); + return 0; } #define REG_RESERVED (~((1UL << PERF_REG_S390_MAX) - 1)) diff --git a/arch/s390/kernel/setup.c b/arch/s390/kernel/setup.c index d82a9ec64ea9..c637c12f9e37 100644 --- a/arch/s390/kernel/setup.c +++ b/arch/s390/kernel/setup.c @@ -674,12 +674,12 @@ static void __init reserve_kernel(void) #ifdef CONFIG_DMA_API_DEBUG /* * DMA_API_DEBUG code stumbles over addresses from the - * range [_ehead, _stext]. Mark the memory as reserved + * range [PARMAREA_END, _stext]. Mark the memory as reserved * so it is not used for CONFIG_DMA_API_DEBUG=y. 
*/ memblock_reserve(0, PFN_PHYS(start_pfn)); #else - memblock_reserve(0, (unsigned long)_ehead); + memblock_reserve(0, PARMAREA_END); memblock_reserve((unsigned long)_stext, PFN_PHYS(start_pfn) - (unsigned long)_stext); #endif diff --git a/arch/s390/kernel/signal.c b/arch/s390/kernel/signal.c index 2d2960ab3e10..22f08245aa5d 100644 --- a/arch/s390/kernel/signal.c +++ b/arch/s390/kernel/signal.c @@ -498,7 +498,7 @@ void do_signal(struct pt_regs *regs) } /* No longer in a system call */ clear_pt_regs_flag(regs, PIF_SYSCALL); - + rseq_signal_deliver(&ksig, regs); if (is_compat_task()) handle_signal32(&ksig, oldset, regs); else @@ -537,4 +537,5 @@ void do_notify_resume(struct pt_regs *regs) { clear_thread_flag(TIF_NOTIFY_RESUME); tracehook_notify_resume(regs); + rseq_handle_notify_resume(NULL, regs); } diff --git a/arch/s390/kernel/syscalls/syscall.tbl b/arch/s390/kernel/syscalls/syscall.tbl index 8b210ead7956..022fc099b628 100644 --- a/arch/s390/kernel/syscalls/syscall.tbl +++ b/arch/s390/kernel/syscalls/syscall.tbl @@ -389,3 +389,5 @@ 379 common statx sys_statx compat_sys_statx 380 common s390_sthyi sys_s390_sthyi compat_sys_s390_sthyi 381 common kexec_file_load sys_kexec_file_load compat_sys_kexec_file_load +382 common io_pgetevents sys_io_pgetevents compat_sys_io_pgetevents +383 common rseq sys_rseq compat_sys_rseq diff --git a/arch/s390/kernel/sysinfo.c b/arch/s390/kernel/sysinfo.c index 54f5496913fa..12f80d1f0415 100644 --- a/arch/s390/kernel/sysinfo.c +++ b/arch/s390/kernel/sysinfo.c @@ -59,6 +59,8 @@ int stsi(void *sysinfo, int fc, int sel1, int sel2) } EXPORT_SYMBOL(stsi); +#ifdef CONFIG_PROC_FS + static bool convert_ext_name(unsigned char encoding, char *name, size_t len) { switch (encoding) { @@ -301,6 +303,8 @@ static int __init sysinfo_create_proc(void) } device_initcall(sysinfo_create_proc); +#endif /* CONFIG_PROC_FS */ + /* * Service levels interface. 
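
The syscall table above wires up rseq as number 383, and the signal.c hunk adds the two mandatory hooks (rseq_signal_deliver() when a signal frame is built, rseq_handle_notify_resume() on return to user space). For reference, a hypothetical user-space registration against this ABI, assuming the struct rseq layout from the 4.18 uapi headers; the signature value is chosen by the program and must be repeated on unregistration:

	#define _GNU_SOURCE
	#include <linux/rseq.h>
	#include <sys/syscall.h>
	#include <unistd.h>

	#define RSEQ_SIG 0x53053053	/* arbitrary program-chosen value */

	static __thread struct rseq rs __attribute__((aligned(32)));

	static int rseq_register(void)
	{
		/* 383 is the s390 syscall number added above */
		return syscall(383, &rs, sizeof(rs), 0, RSEQ_SIG);
	}
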
*/ diff --git a/arch/s390/kernel/time.c b/arch/s390/kernel/time.c index cf561160ea88..e8766beee5ad 100644 --- a/arch/s390/kernel/time.c +++ b/arch/s390/kernel/time.c @@ -221,17 +221,22 @@ void read_persistent_clock64(struct timespec64 *ts) ext_to_timespec64(clk, ts); } -void read_boot_clock64(struct timespec64 *ts) +void __init read_persistent_wall_and_boot_offset(struct timespec64 *wall_time, + struct timespec64 *boot_offset) { unsigned char clk[STORE_CLOCK_EXT_SIZE]; + struct timespec64 boot_time; __u64 delta; delta = initial_leap_seconds + TOD_UNIX_EPOCH; - memcpy(clk, tod_clock_base, 16); - *(__u64 *) &clk[1] -= delta; - if (*(__u64 *) &clk[1] > delta) + memcpy(clk, tod_clock_base, STORE_CLOCK_EXT_SIZE); + *(__u64 *)&clk[1] -= delta; + if (*(__u64 *)&clk[1] > delta) clk[0]--; - ext_to_timespec64(clk, ts); + ext_to_timespec64(clk, &boot_time); + + read_persistent_clock64(wall_time); + *boot_offset = timespec64_sub(*wall_time, boot_time); } static u64 read_tod_clock(struct clocksource *cs) diff --git a/arch/s390/kernel/topology.c b/arch/s390/kernel/topology.c index 4b6e0397f66d..e8184a15578a 100644 --- a/arch/s390/kernel/topology.c +++ b/arch/s390/kernel/topology.c @@ -579,41 +579,33 @@ early_param("topology", topology_setup); static int topology_ctl_handler(struct ctl_table *ctl, int write, void __user *buffer, size_t *lenp, loff_t *ppos) { - unsigned int len; + int enabled = topology_is_enabled(); int new_mode; - char buf[2]; + int zero = 0; + int one = 1; + int rc; + struct ctl_table ctl_entry = { + .procname = ctl->procname, + .data = &enabled, + .maxlen = sizeof(int), + .extra1 = &zero, + .extra2 = &one, + }; + + rc = proc_douintvec_minmax(&ctl_entry, write, buffer, lenp, ppos); + if (rc < 0 || !write) + return rc; - if (!*lenp || *ppos) { - *lenp = 0; - return 0; - } - if (!write) { - strncpy(buf, topology_is_enabled() ? "1\n" : "0\n", - ARRAY_SIZE(buf)); - len = strnlen(buf, ARRAY_SIZE(buf)); - if (len > *lenp) - len = *lenp; - if (copy_to_user(buffer, buf, len)) - return -EFAULT; - goto out; - } - len = *lenp; - if (copy_from_user(buf, buffer, len > sizeof(buf) ? sizeof(buf) : len)) - return -EFAULT; - if (buf[0] != '0' && buf[0] != '1') - return -EINVAL; mutex_lock(&smp_cpu_state_mutex); - new_mode = topology_get_mode(buf[0] == '1'); + new_mode = topology_get_mode(enabled); if (topology_mode != new_mode) { topology_mode = new_mode; topology_schedule_update(); } mutex_unlock(&smp_cpu_state_mutex); topology_flush_work(); -out: - *lenp = len; - *ppos += len; - return 0; + + return rc; } static struct ctl_table topology_ctl_table[] = { diff --git a/arch/s390/kernel/vdso.c b/arch/s390/kernel/vdso.c index 09abae40f917..3031cc6dd0ab 100644 --- a/arch/s390/kernel/vdso.c +++ b/arch/s390/kernel/vdso.c @@ -47,7 +47,7 @@ static struct page **vdso64_pagelist; */ unsigned int __read_mostly vdso_enabled = 1; -static int vdso_fault(const struct vm_special_mapping *sm, +static vm_fault_t vdso_fault(const struct vm_special_mapping *sm, struct vm_area_struct *vma, struct vm_fault *vmf) { struct page **vdso_pagelist; diff --git a/arch/s390/kernel/vmlinux.lds.S b/arch/s390/kernel/vmlinux.lds.S index f0414f52817b..b43f8d33a369 100644 --- a/arch/s390/kernel/vmlinux.lds.S +++ b/arch/s390/kernel/vmlinux.lds.S @@ -19,7 +19,7 @@ OUTPUT_FORMAT("elf64-s390", "elf64-s390", "elf64-s390") OUTPUT_ARCH(s390:64-bit) -ENTRY(startup) +ENTRY(startup_continue) jiffies = jiffies_64; PHDRS { @@ -30,16 +30,12 @@ PHDRS { SECTIONS { - . = 0x00000000; + . 
= 0x100000; + _stext = .; /* Start of text section */ .text : { /* Text and read-only data */ + _text = .; HEAD_TEXT - /* - * E.g. perf doesn't like symbols starting at address zero, - * therefore skip the initial PSW and channel program located - * at address zero and let _text start at 0x200. - */ - _text = 0x200; TEXT_TEXT SCHED_TEXT CPUIDLE_TEXT @@ -47,6 +43,7 @@ SECTIONS KPROBES_TEXT IRQENTRY_TEXT SOFTIRQENTRY_TEXT + *(.text.*_indirect_*) *(.fixup) *(.gnu.warning) } :text = 0x0700 diff --git a/arch/s390/kvm/interrupt.c b/arch/s390/kvm/interrupt.c index daa09f89ca2d..fcb55b02990e 100644 --- a/arch/s390/kvm/interrupt.c +++ b/arch/s390/kvm/interrupt.c @@ -1145,7 +1145,7 @@ void kvm_s390_vcpu_wakeup(struct kvm_vcpu *vcpu) * yield-candidate. */ vcpu->preempted = true; - swake_up(&vcpu->wq); + swake_up_one(&vcpu->wq); vcpu->stat.halt_wakeup++; } /* diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c index 3b7a5151b6a5..f9d90337e64a 100644 --- a/arch/s390/kvm/kvm-s390.c +++ b/arch/s390/kvm/kvm-s390.c @@ -172,6 +172,10 @@ static int nested; module_param(nested, int, S_IRUGO); MODULE_PARM_DESC(nested, "Nested virtualization support"); +/* allow 1m huge page guest backing, if !nested */ +static int hpage; +module_param(hpage, int, 0444); +MODULE_PARM_DESC(hpage, "1m huge page backing support"); /* * For now we handle at most 16 double words as this is what the s390 base @@ -475,6 +479,11 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext) case KVM_CAP_S390_AIS_MIGRATION: r = 1; break; + case KVM_CAP_S390_HPAGE_1M: + r = 0; + if (hpage) + r = 1; + break; case KVM_CAP_S390_MEM_OP: r = MEM_OP_MAX_SIZE; break; @@ -511,19 +520,30 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext) } static void kvm_s390_sync_dirty_log(struct kvm *kvm, - struct kvm_memory_slot *memslot) + struct kvm_memory_slot *memslot) { + int i; gfn_t cur_gfn, last_gfn; - unsigned long address; + unsigned long gaddr, vmaddr; struct gmap *gmap = kvm->arch.gmap; + DECLARE_BITMAP(bitmap, _PAGE_ENTRIES); - /* Loop over all guest pages */ + /* Loop over all guest segments */ + cur_gfn = memslot->base_gfn; last_gfn = memslot->base_gfn + memslot->npages; - for (cur_gfn = memslot->base_gfn; cur_gfn <= last_gfn; cur_gfn++) { - address = gfn_to_hva_memslot(memslot, cur_gfn); + for (; cur_gfn <= last_gfn; cur_gfn += _PAGE_ENTRIES) { + gaddr = gfn_to_gpa(cur_gfn); + vmaddr = gfn_to_hva_memslot(memslot, cur_gfn); + if (kvm_is_error_hva(vmaddr)) + continue; + + bitmap_zero(bitmap, _PAGE_ENTRIES); + gmap_sync_dirty_log_pmd(gmap, bitmap, gaddr, vmaddr); + for (i = 0; i < _PAGE_ENTRIES; i++) { + if (test_bit(i, bitmap)) + mark_page_dirty(kvm, cur_gfn + i); + } - if (test_and_clear_guest_dirty(gmap->mm, address)) - mark_page_dirty(kvm, cur_gfn); if (fatal_signal_pending(current)) return; cond_resched(); @@ -667,6 +687,27 @@ static int kvm_vm_ioctl_enable_cap(struct kvm *kvm, struct kvm_enable_cap *cap) VM_EVENT(kvm, 3, "ENABLE: CAP_S390_GS %s", r ? "(not available)" : "(success)"); break; + case KVM_CAP_S390_HPAGE_1M: + mutex_lock(&kvm->lock); + if (kvm->created_vcpus) + r = -EBUSY; + else if (!hpage || kvm->arch.use_cmma) + r = -EINVAL; + else { + r = 0; + kvm->mm->context.allow_gmap_hpage_1m = 1; + /* + * We might have to create fake 4k page + * tables. To avoid that the hardware works on + * stale PGSTEs, we emulate these instructions. + */ + kvm->arch.use_skf = 0; + kvm->arch.use_pfmfi = 0; + } + mutex_unlock(&kvm->lock); + VM_EVENT(kvm, 3, "ENABLE: CAP_S390_HPAGE %s", + r ? 
"(not available)" : "(success)"); + break; case KVM_CAP_S390_USER_STSI: VM_EVENT(kvm, 3, "%s", "ENABLE: CAP_S390_USER_STSI"); kvm->arch.user_stsi = 1; @@ -714,10 +755,13 @@ static int kvm_s390_set_mem_control(struct kvm *kvm, struct kvm_device_attr *att if (!sclp.has_cmma) break; - ret = -EBUSY; VM_EVENT(kvm, 3, "%s", "ENABLE: CMMA support"); mutex_lock(&kvm->lock); - if (!kvm->created_vcpus) { + if (kvm->created_vcpus) + ret = -EBUSY; + else if (kvm->mm->context.allow_gmap_hpage_1m) + ret = -EINVAL; + else { kvm->arch.use_cmma = 1; /* Not compatible with cmma. */ kvm->arch.use_pfmfi = 0; @@ -1540,6 +1584,7 @@ static long kvm_s390_set_skeys(struct kvm *kvm, struct kvm_s390_skeys *args) uint8_t *keys; uint64_t hva; int srcu_idx, i, r = 0; + bool unlocked; if (args->flags != 0) return -EINVAL; @@ -1564,9 +1609,11 @@ static long kvm_s390_set_skeys(struct kvm *kvm, struct kvm_s390_skeys *args) if (r) goto out; + i = 0; down_read(¤t->mm->mmap_sem); srcu_idx = srcu_read_lock(&kvm->srcu); - for (i = 0; i < args->count; i++) { + while (i < args->count) { + unlocked = false; hva = gfn_to_hva(kvm, args->start_gfn + i); if (kvm_is_error_hva(hva)) { r = -EFAULT; @@ -1580,8 +1627,14 @@ static long kvm_s390_set_skeys(struct kvm *kvm, struct kvm_s390_skeys *args) } r = set_guest_storage_key(current->mm, hva, keys[i], 0); - if (r) - break; + if (r) { + r = fixup_user_fault(current, current->mm, hva, + FAULT_FLAG_WRITE, &unlocked); + if (r) + break; + } + if (!r) + i++; } srcu_read_unlock(&kvm->srcu, srcu_idx); up_read(¤t->mm->mmap_sem); @@ -4082,6 +4135,11 @@ static int __init kvm_s390_init(void) return -ENODEV; } + if (nested && hpage) { + pr_info("nested (vSIE) and hpage (huge page backing) can currently not be activated concurrently"); + return -EINVAL; + } + for (i = 0; i < 16; i++) kvm_s390_fac_base[i] |= S390_lowcore.stfle_fac_list[i] & nonhyp_mask(i); diff --git a/arch/s390/kvm/priv.c b/arch/s390/kvm/priv.c index eb0eb60c7be6..cfc5a62329f6 100644 --- a/arch/s390/kvm/priv.c +++ b/arch/s390/kvm/priv.c @@ -246,9 +246,10 @@ static int try_handle_skey(struct kvm_vcpu *vcpu) static int handle_iske(struct kvm_vcpu *vcpu) { - unsigned long addr; + unsigned long gaddr, vmaddr; unsigned char key; int reg1, reg2; + bool unlocked; int rc; vcpu->stat.instruction_iske++; @@ -262,18 +263,28 @@ static int handle_iske(struct kvm_vcpu *vcpu) kvm_s390_get_regs_rre(vcpu, ®1, ®2); - addr = vcpu->run->s.regs.gprs[reg2] & PAGE_MASK; - addr = kvm_s390_logical_to_effective(vcpu, addr); - addr = kvm_s390_real_to_abs(vcpu, addr); - addr = gfn_to_hva(vcpu->kvm, gpa_to_gfn(addr)); - if (kvm_is_error_hva(addr)) + gaddr = vcpu->run->s.regs.gprs[reg2] & PAGE_MASK; + gaddr = kvm_s390_logical_to_effective(vcpu, gaddr); + gaddr = kvm_s390_real_to_abs(vcpu, gaddr); + vmaddr = gfn_to_hva(vcpu->kvm, gpa_to_gfn(gaddr)); + if (kvm_is_error_hva(vmaddr)) return kvm_s390_inject_program_int(vcpu, PGM_ADDRESSING); - +retry: + unlocked = false; down_read(¤t->mm->mmap_sem); - rc = get_guest_storage_key(current->mm, addr, &key); - up_read(¤t->mm->mmap_sem); + rc = get_guest_storage_key(current->mm, vmaddr, &key); + + if (rc) { + rc = fixup_user_fault(current, current->mm, vmaddr, + FAULT_FLAG_WRITE, &unlocked); + if (!rc) { + up_read(¤t->mm->mmap_sem); + goto retry; + } + } if (rc) return kvm_s390_inject_program_int(vcpu, PGM_ADDRESSING); + up_read(¤t->mm->mmap_sem); vcpu->run->s.regs.gprs[reg1] &= ~0xff; vcpu->run->s.regs.gprs[reg1] |= key; return 0; @@ -281,8 +292,9 @@ static int handle_iske(struct kvm_vcpu *vcpu) static int handle_rrbe(struct 
@@ -281,8 +292,9 @@ static int handle_iske(struct kvm_vcpu *vcpu)
 
 static int handle_rrbe(struct kvm_vcpu *vcpu)
 {
-	unsigned long addr;
+	unsigned long vmaddr, gaddr;
 	int reg1, reg2;
+	bool unlocked;
 	int rc;
 
 	vcpu->stat.instruction_rrbe++;
@@ -296,19 +308,27 @@ static int handle_rrbe(struct kvm_vcpu *vcpu)
 
 	kvm_s390_get_regs_rre(vcpu, &reg1, &reg2);
 
-	addr = vcpu->run->s.regs.gprs[reg2] & PAGE_MASK;
-	addr = kvm_s390_logical_to_effective(vcpu, addr);
-	addr = kvm_s390_real_to_abs(vcpu, addr);
-	addr = gfn_to_hva(vcpu->kvm, gpa_to_gfn(addr));
-	if (kvm_is_error_hva(addr))
+	gaddr = vcpu->run->s.regs.gprs[reg2] & PAGE_MASK;
+	gaddr = kvm_s390_logical_to_effective(vcpu, gaddr);
+	gaddr = kvm_s390_real_to_abs(vcpu, gaddr);
+	vmaddr = gfn_to_hva(vcpu->kvm, gpa_to_gfn(gaddr));
+	if (kvm_is_error_hva(vmaddr))
 		return kvm_s390_inject_program_int(vcpu, PGM_ADDRESSING);
-
+retry:
+	unlocked = false;
 	down_read(&current->mm->mmap_sem);
-	rc = reset_guest_reference_bit(current->mm, addr);
-	up_read(&current->mm->mmap_sem);
+	rc = reset_guest_reference_bit(current->mm, vmaddr);
+	if (rc < 0) {
+		rc = fixup_user_fault(current, current->mm, vmaddr,
+				      FAULT_FLAG_WRITE, &unlocked);
+		if (!rc) {
+			up_read(&current->mm->mmap_sem);
+			goto retry;
+		}
+	}
 	if (rc < 0)
 		return kvm_s390_inject_program_int(vcpu, PGM_ADDRESSING);
-
+	up_read(&current->mm->mmap_sem);
 	kvm_s390_set_psw_cc(vcpu, rc);
 	return 0;
 }
@@ -323,6 +343,7 @@ static int handle_sske(struct kvm_vcpu *vcpu)
 	unsigned long start, end;
 	unsigned char key, oldkey;
 	int reg1, reg2;
+	bool unlocked;
 	int rc;
 
 	vcpu->stat.instruction_sske++;
@@ -355,19 +376,28 @@ static int handle_sske(struct kvm_vcpu *vcpu)
 	}
 
 	while (start != end) {
-		unsigned long addr = gfn_to_hva(vcpu->kvm, gpa_to_gfn(start));
+		unsigned long vmaddr = gfn_to_hva(vcpu->kvm, gpa_to_gfn(start));
+		unlocked = false;
 
-		if (kvm_is_error_hva(addr))
+		if (kvm_is_error_hva(vmaddr))
 			return kvm_s390_inject_program_int(vcpu, PGM_ADDRESSING);
 
 		down_read(&current->mm->mmap_sem);
-		rc = cond_set_guest_storage_key(current->mm, addr, key, &oldkey,
+		rc = cond_set_guest_storage_key(current->mm, vmaddr, key, &oldkey,
 						m3 & SSKE_NQ, m3 & SSKE_MR,
 						m3 & SSKE_MC);
-		up_read(&current->mm->mmap_sem);
-		if (rc < 0)
+
+		if (rc < 0) {
+			rc = fixup_user_fault(current, current->mm, vmaddr,
+					      FAULT_FLAG_WRITE, &unlocked);
+			rc = !rc ? -EAGAIN : rc;
+		}
+		if (rc == -EFAULT)
 			return kvm_s390_inject_program_int(vcpu, PGM_ADDRESSING);
-		start += PAGE_SIZE;
+
+		up_read(&current->mm->mmap_sem);
+		if (rc >= 0)
+			start += PAGE_SIZE;
 	}
 
 	if (m3 & (SSKE_MC | SSKE_MR)) {
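
handle_sske() above (and handle_pfmf() in the next hunk) share one loop shape: the per-page key operation is attempted under mmap_sem, a successful fault fixup turns the failure into -EAGAIN, and the cursor advances only for rc >= 0, so a fixed-up page is simply retried on the next pass. Roughly, as a sketch of that control flow only; set_key_once() is a hypothetical per-page step, not a function in this patch:

	/* Sketch only: resumable per-page loop that advances on success. */
	static int walk_range(unsigned long start, unsigned long end)
	{
		int rc;

		while (start != end) {
			rc = set_key_once(start);	/* hypothetical */
			if (rc == -EAGAIN)
				continue;		/* fixed up; redo this page */
			if (rc < 0)
				return rc;		/* hard error */
			start += PAGE_SIZE;		/* advance only on success */
		}
		return 0;
	}
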
@@ -948,15 +978,16 @@ static int handle_pfmf(struct kvm_vcpu *vcpu)
 	}
 
 	while (start != end) {
-		unsigned long useraddr;
+		unsigned long vmaddr;
+		bool unlocked = false;
 
 		/* Translate guest address to host address */
-		useraddr = gfn_to_hva(vcpu->kvm, gpa_to_gfn(start));
-		if (kvm_is_error_hva(useraddr))
+		vmaddr = gfn_to_hva(vcpu->kvm, gpa_to_gfn(start));
+		if (kvm_is_error_hva(vmaddr))
 			return kvm_s390_inject_program_int(vcpu, PGM_ADDRESSING);
 
 		if (vcpu->run->s.regs.gprs[reg1] & PFMF_CF) {
-			if (clear_user((void __user *)useraddr, PAGE_SIZE))
+			if (clear_user((void __user *)vmaddr, PAGE_SIZE))
 				return kvm_s390_inject_program_int(vcpu, PGM_ADDRESSING);
 		}
@@ -966,14 +997,20 @@ static int handle_pfmf(struct kvm_vcpu *vcpu)
 			if (rc)
 				return rc;
 			down_read(&current->mm->mmap_sem);
-			rc = cond_set_guest_storage_key(current->mm, useraddr,
+			rc = cond_set_guest_storage_key(current->mm, vmaddr,
 							key, NULL, nq, mr, mc);
-			up_read(&current->mm->mmap_sem);
-			if (rc < 0)
+			if (rc < 0) {
+				rc = fixup_user_fault(current, current->mm, vmaddr,
+						      FAULT_FLAG_WRITE, &unlocked);
+				rc = !rc ? -EAGAIN : rc;
+			}
+			if (rc == -EFAULT)
 				return kvm_s390_inject_program_int(vcpu, PGM_ADDRESSING);
-		}
-		start += PAGE_SIZE;
+			up_read(&current->mm->mmap_sem);
+			if (rc >= 0)
+				start += PAGE_SIZE;
+		}
 	}
 	if (vcpu->run->s.regs.gprs[reg1] & PFMF_FSC) {
 		if (psw_bits(vcpu->arch.sie_block->gpsw).eaba == PSW_BITS_AMODE_64BIT) {
diff --git a/arch/s390/lib/mem.S b/arch/s390/lib/mem.S
index 2311f15be9cf..40c4d59c926e 100644
--- a/arch/s390/lib/mem.S
+++ b/arch/s390/lib/mem.S
@@ -17,7 +17,7 @@
 ENTRY(memmove)
 	ltgr	%r4,%r4
 	lgr	%r1,%r2
-	bzr	%r14
+	jz	.Lmemmove_exit
 	aghi	%r4,-1
 	clgr	%r2,%r3
 	jnh	.Lmemmove_forward
@@ -36,6 +36,7 @@ ENTRY(memmove)
 .Lmemmove_forward_remainder:
 	larl	%r5,.Lmemmove_mvc
 	ex	%r4,0(%r5)
+.Lmemmove_exit:
 	BR_EX	%r14
 .Lmemmove_reverse:
 	ic	%r0,0(%r4,%r3)
@@ -65,7 +66,7 @@ EXPORT_SYMBOL(memmove)
  */
 ENTRY(memset)
 	ltgr	%r4,%r4
-	bzr	%r14
+	jz	.Lmemset_exit
 	ltgr	%r3,%r3
 	jnz	.Lmemset_fill
 	aghi	%r4,-1
@@ -80,6 +81,7 @@ ENTRY(memset)
 .Lmemset_clear_remainder:
 	larl	%r3,.Lmemset_xc
 	ex	%r4,0(%r3)
+.Lmemset_exit:
 	BR_EX	%r14
 .Lmemset_fill:
 	cghi	%r4,1
@@ -115,7 +117,7 @@ EXPORT_SYMBOL(memset)
  */
 ENTRY(memcpy)
 	ltgr	%r4,%r4
-	bzr	%r14
+	jz	.Lmemcpy_exit
 	aghi	%r4,-1
 	srlg	%r5,%r4,8
 	ltgr	%r5,%r5
@@ -124,6 +126,7 @@ ENTRY(memcpy)
 .Lmemcpy_remainder:
 	larl	%r5,.Lmemcpy_mvc
 	ex	%r4,0(%r5)
+.Lmemcpy_exit:
 	BR_EX	%r14
 .Lmemcpy_loop:
 	mvc	0(256,%r1),0(%r3)
@@ -145,9 +148,9 @@ EXPORT_SYMBOL(memcpy)
 .macro __MEMSET bits,bytes,insn
 ENTRY(__memset\bits)
 	ltgr	%r4,%r4
-	bzr	%r14
+	jz	.L__memset_exit\bits
 	cghi	%r4,\bytes
-	je	.L__memset_exit\bits
+	je	.L__memset_store\bits
 	aghi	%r4,-(\bytes+1)
 	srlg	%r5,%r4,8
 	ltgr	%r5,%r5
@@ -163,8 +166,9 @@ ENTRY(__memset\bits)
 	larl	%r5,.L__memset_mvc\bits
 	ex	%r4,0(%r5)
 	BR_EX	%r14
-.L__memset_exit\bits:
+.L__memset_store\bits:
 	\insn	%r3,0(%r2)
+.L__memset_exit\bits:
 	BR_EX	%r14
 .L__memset_mvc\bits:
 	mvc	\bytes(1,%r1),0(%r1)
diff --git a/arch/s390/mm/cmm.c b/arch/s390/mm/cmm.c
index 6cf024eb2085..510a18299196 100644
--- a/arch/s390/mm/cmm.c
+++ b/arch/s390/mm/cmm.c
@@ -191,12 +191,7 @@ static void cmm_set_timer(void)
 			del_timer(&cmm_timer);
 		return;
 	}
-	if (timer_pending(&cmm_timer)) {
-		if (mod_timer(&cmm_timer, jiffies + cmm_timeout_seconds*HZ))
-			return;
-	}
-	cmm_timer.expires = jiffies + cmm_timeout_seconds*HZ;
-	add_timer(&cmm_timer);
+	mod_timer(&cmm_timer, jiffies + cmm_timeout_seconds * HZ);
 }
 
 static void cmm_timer_fn(struct timer_list *unused)
@@ -251,45 +246,42 @@ static int cmm_skip_blanks(char *cp, char **endp)
 	return str != cp;
 }
 
-static struct ctl_table cmm_table[];
-
 static int cmm_pages_handler(struct ctl_table *ctl, int write,
 			     void __user *buffer, size_t *lenp, loff_t *ppos)
 {
-	char buf[16], *p;
-	unsigned int len;
-	long nr;
+	long nr = cmm_get_pages();
+	struct ctl_table ctl_entry = {
+		.procname	= ctl->procname,
+		.data		= &nr,
+		.maxlen		= sizeof(long),
+	};
+	int rc;
 
-	if (!*lenp || (*ppos && !write)) {
-		*lenp = 0;
-		return 0;
-	}
+	rc = proc_doulongvec_minmax(&ctl_entry, write, buffer, lenp, ppos);
+	if (rc < 0 || !write)
+		return rc;
 
-	if (write) {
-		len = *lenp;
-		if (copy_from_user(buf, buffer, len > sizeof(buf) ?
sizeof(buf) : len)) - return -EFAULT; - buf[sizeof(buf) - 1] = '\0'; - cmm_skip_blanks(buf, &p); - nr = simple_strtoul(p, &p, 0); - if (ctl == &cmm_table[0]) - cmm_set_pages(nr); - else - cmm_add_timed_pages(nr); - } else { - if (ctl == &cmm_table[0]) - nr = cmm_get_pages(); - else - nr = cmm_get_timed_pages(); - len = sprintf(buf, "%ld\n", nr); - if (len > *lenp) - len = *lenp; - if (copy_to_user(buffer, buf, len)) - return -EFAULT; - } - *lenp = len; - *ppos += len; + cmm_set_pages(nr); + return 0; +} + +static int cmm_timed_pages_handler(struct ctl_table *ctl, int write, + void __user *buffer, size_t *lenp, + loff_t *ppos) +{ + long nr = cmm_get_timed_pages(); + struct ctl_table ctl_entry = { + .procname = ctl->procname, + .data = &nr, + .maxlen = sizeof(long), + }; + int rc; + + rc = proc_doulongvec_minmax(&ctl_entry, write, buffer, lenp, ppos); + if (rc < 0 || !write) + return rc; + + cmm_add_timed_pages(nr); return 0; } @@ -338,7 +330,7 @@ static struct ctl_table cmm_table[] = { { .procname = "cmm_timed_pages", .mode = 0644, - .proc_handler = cmm_pages_handler, + .proc_handler = cmm_timed_pages_handler, }, { .procname = "cmm_timeout", diff --git a/arch/s390/mm/extmem.c b/arch/s390/mm/extmem.c index 6ad15d3fab81..84111a43ea29 100644 --- a/arch/s390/mm/extmem.c +++ b/arch/s390/mm/extmem.c @@ -80,7 +80,7 @@ struct qin64 { struct dcss_segment { struct list_head list; char dcss_name[8]; - char res_name[15]; + char res_name[16]; unsigned long start_addr; unsigned long end; atomic_t ref_count; @@ -433,7 +433,7 @@ __segment_load (char *name, int do_nonshared, unsigned long *addr, unsigned long memcpy(&seg->res_name, seg->dcss_name, 8); EBCASC(seg->res_name, 8); seg->res_name[8] = '\0'; - strncat(seg->res_name, " (DCSS)", 7); + strlcat(seg->res_name, " (DCSS)", sizeof(seg->res_name)); seg->res->name = seg->res_name; rc = seg->vm_segtype; if (rc == SEG_TYPE_SC || diff --git a/arch/s390/mm/fault.c b/arch/s390/mm/fault.c index e074480d3598..4cc3f06b0ab3 100644 --- a/arch/s390/mm/fault.c +++ b/arch/s390/mm/fault.c @@ -502,6 +502,8 @@ retry: /* No reason to continue if interrupted by SIGKILL. */ if ((fault & VM_FAULT_RETRY) && fatal_signal_pending(current)) { fault = VM_FAULT_SIGNAL; + if (flags & FAULT_FLAG_RETRY_NOWAIT) + goto out_up; goto out; } if (unlikely(fault & VM_FAULT_ERROR)) diff --git a/arch/s390/mm/gmap.c b/arch/s390/mm/gmap.c index bc56ec8abcf7..bb44990c8212 100644 --- a/arch/s390/mm/gmap.c +++ b/arch/s390/mm/gmap.c @@ -2,8 +2,10 @@ /* * KVM guest address space mapping code * - * Copyright IBM Corp. 2007, 2016 + * Copyright IBM Corp. 2007, 2016, 2018 * Author(s): Martin Schwidefsky + * David Hildenbrand + * Janosch Frank */ #include @@ -521,6 +523,9 @@ void gmap_unlink(struct mm_struct *mm, unsigned long *table, rcu_read_unlock(); } +static void gmap_pmdp_xchg(struct gmap *gmap, pmd_t *old, pmd_t new, + unsigned long gaddr); + /** * gmap_link - set up shadow page tables to connect a host to a guest address * @gmap: pointer to guest mapping meta data structure @@ -541,6 +546,7 @@ int __gmap_link(struct gmap *gmap, unsigned long gaddr, unsigned long vmaddr) p4d_t *p4d; pud_t *pud; pmd_t *pmd; + u64 unprot; int rc; BUG_ON(gmap_is_shadow(gmap)); @@ -584,8 +590,8 @@ int __gmap_link(struct gmap *gmap, unsigned long gaddr, unsigned long vmaddr) return -EFAULT; pmd = pmd_offset(pud, vmaddr); VM_BUG_ON(pmd_none(*pmd)); - /* large pmds cannot yet be handled */ - if (pmd_large(*pmd)) + /* Are we allowed to use huge pages? 
*/ + if (pmd_large(*pmd) && !gmap->mm->context.allow_gmap_hpage_1m) return -EFAULT; /* Link gmap segment table entry location to page table. */ rc = radix_tree_preload(GFP_KERNEL); @@ -596,10 +602,22 @@ int __gmap_link(struct gmap *gmap, unsigned long gaddr, unsigned long vmaddr) if (*table == _SEGMENT_ENTRY_EMPTY) { rc = radix_tree_insert(&gmap->host_to_guest, vmaddr >> PMD_SHIFT, table); - if (!rc) - *table = pmd_val(*pmd); - } else - rc = 0; + if (!rc) { + if (pmd_large(*pmd)) { + *table = (pmd_val(*pmd) & + _SEGMENT_ENTRY_HARDWARE_BITS_LARGE) + | _SEGMENT_ENTRY_GMAP_UC; + } else + *table = pmd_val(*pmd) & + _SEGMENT_ENTRY_HARDWARE_BITS; + } + } else if (*table & _SEGMENT_ENTRY_PROTECT && + !(pmd_val(*pmd) & _SEGMENT_ENTRY_PROTECT)) { + unprot = (u64)*table; + unprot &= ~_SEGMENT_ENTRY_PROTECT; + unprot |= _SEGMENT_ENTRY_GMAP_UC; + gmap_pmdp_xchg(gmap, (pmd_t *)table, __pmd(unprot), gaddr); + } spin_unlock(&gmap->guest_table_lock); spin_unlock(ptl); radix_tree_preload_end(); @@ -690,6 +708,12 @@ void gmap_discard(struct gmap *gmap, unsigned long from, unsigned long to) vmaddr |= gaddr & ~PMD_MASK; /* Find vma in the parent mm */ vma = find_vma(gmap->mm, vmaddr); + /* + * We do not discard pages that are backed by + * hugetlbfs, so we don't have to refault them. + */ + if (vma && is_vm_hugetlb_page(vma)) + continue; size = min(to - gaddr, PMD_SIZE - (gaddr & ~PMD_MASK)); zap_page_range(vma, vmaddr, size); } @@ -864,7 +888,128 @@ static int gmap_pte_op_fixup(struct gmap *gmap, unsigned long gaddr, */ static void gmap_pte_op_end(spinlock_t *ptl) { - spin_unlock(ptl); + if (ptl) + spin_unlock(ptl); +} + +/** + * gmap_pmd_op_walk - walk the gmap tables, get the guest table lock + * and return the pmd pointer + * @gmap: pointer to guest mapping meta data structure + * @gaddr: virtual address in the guest address space + * + * Returns a pointer to the pmd for a guest address, or NULL + */ +static inline pmd_t *gmap_pmd_op_walk(struct gmap *gmap, unsigned long gaddr) +{ + pmd_t *pmdp; + + BUG_ON(gmap_is_shadow(gmap)); + spin_lock(&gmap->guest_table_lock); + pmdp = (pmd_t *) gmap_table_walk(gmap, gaddr, 1); + + if (!pmdp || pmd_none(*pmdp)) { + spin_unlock(&gmap->guest_table_lock); + return NULL; + } + + /* 4k page table entries are locked via the pte (pte_alloc_map_lock). */ + if (!pmd_large(*pmdp)) + spin_unlock(&gmap->guest_table_lock); + return pmdp; +} + +/** + * gmap_pmd_op_end - release the guest_table_lock if needed + * @gmap: pointer to the guest mapping meta data structure + * @pmdp: pointer to the pmd + */ +static inline void gmap_pmd_op_end(struct gmap *gmap, pmd_t *pmdp) +{ + if (pmd_large(*pmdp)) + spin_unlock(&gmap->guest_table_lock); +} + +/* + * gmap_protect_pmd - remove access rights to memory and set pmd notification bits + * @pmdp: pointer to the pmd to be protected + * @prot: indicates access rights: PROT_NONE, PROT_READ or PROT_WRITE + * @bits: notification bits to set + * + * Returns: + * 0 if successfully protected + * -EAGAIN if a fixup is needed + * -EINVAL if unsupported notifier bits have been specified + * + * Expected to be called with sg->mm->mmap_sem in read and + * guest_table_lock held. 
+ */ +static int gmap_protect_pmd(struct gmap *gmap, unsigned long gaddr, + pmd_t *pmdp, int prot, unsigned long bits) +{ + int pmd_i = pmd_val(*pmdp) & _SEGMENT_ENTRY_INVALID; + int pmd_p = pmd_val(*pmdp) & _SEGMENT_ENTRY_PROTECT; + pmd_t new = *pmdp; + + /* Fixup needed */ + if ((pmd_i && (prot != PROT_NONE)) || (pmd_p && (prot == PROT_WRITE))) + return -EAGAIN; + + if (prot == PROT_NONE && !pmd_i) { + pmd_val(new) |= _SEGMENT_ENTRY_INVALID; + gmap_pmdp_xchg(gmap, pmdp, new, gaddr); + } + + if (prot == PROT_READ && !pmd_p) { + pmd_val(new) &= ~_SEGMENT_ENTRY_INVALID; + pmd_val(new) |= _SEGMENT_ENTRY_PROTECT; + gmap_pmdp_xchg(gmap, pmdp, new, gaddr); + } + + if (bits & GMAP_NOTIFY_MPROT) + pmd_val(*pmdp) |= _SEGMENT_ENTRY_GMAP_IN; + + /* Shadow GMAP protection needs split PMDs */ + if (bits & GMAP_NOTIFY_SHADOW) + return -EINVAL; + + return 0; +} + +/* + * gmap_protect_pte - remove access rights to memory and set pgste bits + * @gmap: pointer to guest mapping meta data structure + * @gaddr: virtual address in the guest address space + * @pmdp: pointer to the pmd associated with the pte + * @prot: indicates access rights: PROT_NONE, PROT_READ or PROT_WRITE + * @bits: notification bits to set + * + * Returns 0 if successfully protected, -ENOMEM if out of memory and + * -EAGAIN if a fixup is needed. + * + * Expected to be called with sg->mm->mmap_sem in read + */ +static int gmap_protect_pte(struct gmap *gmap, unsigned long gaddr, + pmd_t *pmdp, int prot, unsigned long bits) +{ + int rc; + pte_t *ptep; + spinlock_t *ptl = NULL; + unsigned long pbits = 0; + + if (pmd_val(*pmdp) & _SEGMENT_ENTRY_INVALID) + return -EAGAIN; + + ptep = pte_alloc_map_lock(gmap->mm, pmdp, gaddr, &ptl); + if (!ptep) + return -ENOMEM; + + pbits |= (bits & GMAP_NOTIFY_MPROT) ? PGSTE_IN_BIT : 0; + pbits |= (bits & GMAP_NOTIFY_SHADOW) ? PGSTE_VSIE_BIT : 0; + /* Protect and unlock. */ + rc = ptep_force_prot(gmap->mm, gaddr, ptep, prot, pbits); + gmap_pte_op_end(ptl); + return rc; } /* @@ -883,30 +1028,45 @@ static void gmap_pte_op_end(spinlock_t *ptl) static int gmap_protect_range(struct gmap *gmap, unsigned long gaddr, unsigned long len, int prot, unsigned long bits) { - unsigned long vmaddr; - spinlock_t *ptl; - pte_t *ptep; + unsigned long vmaddr, dist; + pmd_t *pmdp; int rc; BUG_ON(gmap_is_shadow(gmap)); while (len) { rc = -EAGAIN; - ptep = gmap_pte_op_walk(gmap, gaddr, &ptl); - if (ptep) { - rc = ptep_force_prot(gmap->mm, gaddr, ptep, prot, bits); - gmap_pte_op_end(ptl); + pmdp = gmap_pmd_op_walk(gmap, gaddr); + if (pmdp) { + if (!pmd_large(*pmdp)) { + rc = gmap_protect_pte(gmap, gaddr, pmdp, prot, + bits); + if (!rc) { + len -= PAGE_SIZE; + gaddr += PAGE_SIZE; + } + } else { + rc = gmap_protect_pmd(gmap, gaddr, pmdp, prot, + bits); + if (!rc) { + dist = HPAGE_SIZE - (gaddr & ~HPAGE_MASK); + len = len < dist ? 
0 : len - dist; + gaddr = (gaddr & HPAGE_MASK) + HPAGE_SIZE; + } + } + gmap_pmd_op_end(gmap, pmdp); } if (rc) { + if (rc == -EINVAL) + return rc; + + /* -EAGAIN, fixup of userspace mm and gmap */ vmaddr = __gmap_translate(gmap, gaddr); if (IS_ERR_VALUE(vmaddr)) return vmaddr; rc = gmap_pte_op_fixup(gmap, gaddr, vmaddr, prot); if (rc) return rc; - continue; } - gaddr += PAGE_SIZE; - len -= PAGE_SIZE; } return 0; } @@ -935,7 +1095,7 @@ int gmap_mprotect_notify(struct gmap *gmap, unsigned long gaddr, if (!MACHINE_HAS_ESOP && prot == PROT_READ) return -EINVAL; down_read(&gmap->mm->mmap_sem); - rc = gmap_protect_range(gmap, gaddr, len, prot, PGSTE_IN_BIT); + rc = gmap_protect_range(gmap, gaddr, len, prot, GMAP_NOTIFY_MPROT); up_read(&gmap->mm->mmap_sem); return rc; } @@ -1474,6 +1634,7 @@ struct gmap *gmap_shadow(struct gmap *parent, unsigned long asce, unsigned long limit; int rc; + BUG_ON(parent->mm->context.allow_gmap_hpage_1m); BUG_ON(gmap_is_shadow(parent)); spin_lock(&parent->shadow_lock); sg = gmap_find_shadow(parent, asce, edat_level); @@ -1526,7 +1687,7 @@ struct gmap *gmap_shadow(struct gmap *parent, unsigned long asce, down_read(&parent->mm->mmap_sem); rc = gmap_protect_range(parent, asce & _ASCE_ORIGIN, ((asce & _ASCE_TABLE_LENGTH) + 1) * PAGE_SIZE, - PROT_READ, PGSTE_VSIE_BIT); + PROT_READ, GMAP_NOTIFY_SHADOW); up_read(&parent->mm->mmap_sem); spin_lock(&parent->shadow_lock); new->initialized = true; @@ -2092,6 +2253,225 @@ void ptep_notify(struct mm_struct *mm, unsigned long vmaddr, } EXPORT_SYMBOL_GPL(ptep_notify); +static void pmdp_notify_gmap(struct gmap *gmap, pmd_t *pmdp, + unsigned long gaddr) +{ + pmd_val(*pmdp) &= ~_SEGMENT_ENTRY_GMAP_IN; + gmap_call_notifier(gmap, gaddr, gaddr + HPAGE_SIZE - 1); +} + +/** + * gmap_pmdp_xchg - exchange a gmap pmd with another + * @gmap: pointer to the guest address space structure + * @pmdp: pointer to the pmd entry + * @new: replacement entry + * @gaddr: the affected guest address + * + * This function is assumed to be called with the guest_table_lock + * held. 
+ */ +static void gmap_pmdp_xchg(struct gmap *gmap, pmd_t *pmdp, pmd_t new, + unsigned long gaddr) +{ + gaddr &= HPAGE_MASK; + pmdp_notify_gmap(gmap, pmdp, gaddr); + pmd_val(new) &= ~_SEGMENT_ENTRY_GMAP_IN; + if (MACHINE_HAS_TLB_GUEST) + __pmdp_idte(gaddr, (pmd_t *)pmdp, IDTE_GUEST_ASCE, gmap->asce, + IDTE_GLOBAL); + else if (MACHINE_HAS_IDTE) + __pmdp_idte(gaddr, (pmd_t *)pmdp, 0, 0, IDTE_GLOBAL); + else + __pmdp_csp(pmdp); + *pmdp = new; +} + +static void gmap_pmdp_clear(struct mm_struct *mm, unsigned long vmaddr, + int purge) +{ + pmd_t *pmdp; + struct gmap *gmap; + unsigned long gaddr; + + rcu_read_lock(); + list_for_each_entry_rcu(gmap, &mm->context.gmap_list, list) { + spin_lock(&gmap->guest_table_lock); + pmdp = (pmd_t *)radix_tree_delete(&gmap->host_to_guest, + vmaddr >> PMD_SHIFT); + if (pmdp) { + gaddr = __gmap_segment_gaddr((unsigned long *)pmdp); + pmdp_notify_gmap(gmap, pmdp, gaddr); + WARN_ON(pmd_val(*pmdp) & ~(_SEGMENT_ENTRY_HARDWARE_BITS_LARGE | + _SEGMENT_ENTRY_GMAP_UC)); + if (purge) + __pmdp_csp(pmdp); + pmd_val(*pmdp) = _SEGMENT_ENTRY_EMPTY; + } + spin_unlock(&gmap->guest_table_lock); + } + rcu_read_unlock(); +} + +/** + * gmap_pmdp_invalidate - invalidate all affected guest pmd entries without + * flushing + * @mm: pointer to the process mm_struct + * @vmaddr: virtual address in the process address space + */ +void gmap_pmdp_invalidate(struct mm_struct *mm, unsigned long vmaddr) +{ + gmap_pmdp_clear(mm, vmaddr, 0); +} +EXPORT_SYMBOL_GPL(gmap_pmdp_invalidate); + +/** + * gmap_pmdp_csp - csp all affected guest pmd entries + * @mm: pointer to the process mm_struct + * @vmaddr: virtual address in the process address space + */ +void gmap_pmdp_csp(struct mm_struct *mm, unsigned long vmaddr) +{ + gmap_pmdp_clear(mm, vmaddr, 1); +} +EXPORT_SYMBOL_GPL(gmap_pmdp_csp); + +/** + * gmap_pmdp_idte_local - invalidate and clear a guest pmd entry + * @mm: pointer to the process mm_struct + * @vmaddr: virtual address in the process address space + */ +void gmap_pmdp_idte_local(struct mm_struct *mm, unsigned long vmaddr) +{ + unsigned long *entry, gaddr; + struct gmap *gmap; + pmd_t *pmdp; + + rcu_read_lock(); + list_for_each_entry_rcu(gmap, &mm->context.gmap_list, list) { + spin_lock(&gmap->guest_table_lock); + entry = radix_tree_delete(&gmap->host_to_guest, + vmaddr >> PMD_SHIFT); + if (entry) { + pmdp = (pmd_t *)entry; + gaddr = __gmap_segment_gaddr(entry); + pmdp_notify_gmap(gmap, pmdp, gaddr); + WARN_ON(*entry & ~(_SEGMENT_ENTRY_HARDWARE_BITS_LARGE | + _SEGMENT_ENTRY_GMAP_UC)); + if (MACHINE_HAS_TLB_GUEST) + __pmdp_idte(gaddr, pmdp, IDTE_GUEST_ASCE, + gmap->asce, IDTE_LOCAL); + else if (MACHINE_HAS_IDTE) + __pmdp_idte(gaddr, pmdp, 0, 0, IDTE_LOCAL); + *entry = _SEGMENT_ENTRY_EMPTY; + } + spin_unlock(&gmap->guest_table_lock); + } + rcu_read_unlock(); +} +EXPORT_SYMBOL_GPL(gmap_pmdp_idte_local); + +/** + * gmap_pmdp_idte_global - invalidate and clear a guest pmd entry + * @mm: pointer to the process mm_struct + * @vmaddr: virtual address in the process address space + */ +void gmap_pmdp_idte_global(struct mm_struct *mm, unsigned long vmaddr) +{ + unsigned long *entry, gaddr; + struct gmap *gmap; + pmd_t *pmdp; + + rcu_read_lock(); + list_for_each_entry_rcu(gmap, &mm->context.gmap_list, list) { + spin_lock(&gmap->guest_table_lock); + entry = radix_tree_delete(&gmap->host_to_guest, + vmaddr >> PMD_SHIFT); + if (entry) { + pmdp = (pmd_t *)entry; + gaddr = __gmap_segment_gaddr(entry); + pmdp_notify_gmap(gmap, pmdp, gaddr); + WARN_ON(*entry & ~(_SEGMENT_ENTRY_HARDWARE_BITS_LARGE | + 
_SEGMENT_ENTRY_GMAP_UC)); + if (MACHINE_HAS_TLB_GUEST) + __pmdp_idte(gaddr, pmdp, IDTE_GUEST_ASCE, + gmap->asce, IDTE_GLOBAL); + else if (MACHINE_HAS_IDTE) + __pmdp_idte(gaddr, pmdp, 0, 0, IDTE_GLOBAL); + else + __pmdp_csp(pmdp); + *entry = _SEGMENT_ENTRY_EMPTY; + } + spin_unlock(&gmap->guest_table_lock); + } + rcu_read_unlock(); +} +EXPORT_SYMBOL_GPL(gmap_pmdp_idte_global); + +/** + * gmap_test_and_clear_dirty_pmd - test and reset segment dirty status + * @gmap: pointer to guest address space + * @pmdp: pointer to the pmd to be tested + * @gaddr: virtual address in the guest address space + * + * This function is assumed to be called with the guest_table_lock + * held. + */ +bool gmap_test_and_clear_dirty_pmd(struct gmap *gmap, pmd_t *pmdp, + unsigned long gaddr) +{ + if (pmd_val(*pmdp) & _SEGMENT_ENTRY_INVALID) + return false; + + /* Already protected memory, which did not change is clean */ + if (pmd_val(*pmdp) & _SEGMENT_ENTRY_PROTECT && + !(pmd_val(*pmdp) & _SEGMENT_ENTRY_GMAP_UC)) + return false; + + /* Clear UC indication and reset protection */ + pmd_val(*pmdp) &= ~_SEGMENT_ENTRY_GMAP_UC; + gmap_protect_pmd(gmap, gaddr, pmdp, PROT_READ, 0); + return true; +} + +/** + * gmap_sync_dirty_log_pmd - set bitmap based on dirty status of segment + * @gmap: pointer to guest address space + * @bitmap: dirty bitmap for this pmd + * @gaddr: virtual address in the guest address space + * @vmaddr: virtual address in the host address space + * + * This function is assumed to be called with the guest_table_lock + * held. + */ +void gmap_sync_dirty_log_pmd(struct gmap *gmap, unsigned long bitmap[4], + unsigned long gaddr, unsigned long vmaddr) +{ + int i; + pmd_t *pmdp; + pte_t *ptep; + spinlock_t *ptl; + + pmdp = gmap_pmd_op_walk(gmap, gaddr); + if (!pmdp) + return; + + if (pmd_large(*pmdp)) { + if (gmap_test_and_clear_dirty_pmd(gmap, pmdp, gaddr)) + bitmap_fill(bitmap, _PAGE_ENTRIES); + } else { + for (i = 0; i < _PAGE_ENTRIES; i++, vmaddr += PAGE_SIZE) { + ptep = pte_alloc_map_lock(gmap->mm, pmdp, vmaddr, &ptl); + if (!ptep) + continue; + if (ptep_test_and_clear_uc(gmap->mm, vmaddr, ptep)) + set_bit(i, bitmap); + spin_unlock(ptl); + } + } + gmap_pmd_op_end(gmap, pmdp); +} +EXPORT_SYMBOL_GPL(gmap_sync_dirty_log_pmd); + static inline void thp_split_mm(struct mm_struct *mm) { #ifdef CONFIG_TRANSPARENT_HUGEPAGE @@ -2168,17 +2548,45 @@ EXPORT_SYMBOL_GPL(s390_enable_sie); * Enable storage key handling from now on and initialize the storage * keys with the default key. */ -static int __s390_enable_skey(pte_t *pte, unsigned long addr, - unsigned long next, struct mm_walk *walk) +static int __s390_enable_skey_pte(pte_t *pte, unsigned long addr, + unsigned long next, struct mm_walk *walk) { /* Clear storage key */ ptep_zap_key(walk->mm, addr, pte); return 0; } +static int __s390_enable_skey_hugetlb(pte_t *pte, unsigned long addr, + unsigned long hmask, unsigned long next, + struct mm_walk *walk) +{ + pmd_t *pmd = (pmd_t *)pte; + unsigned long start, end; + struct page *page = pmd_page(*pmd); + + /* + * The write check makes sure we do not set a key on shared + * memory. This is needed as the walker does not differentiate + * between actual guest memory and the process executable or + * shared libraries. 
+ */ + if (pmd_val(*pmd) & _SEGMENT_ENTRY_INVALID || + !(pmd_val(*pmd) & _SEGMENT_ENTRY_WRITE)) + return 0; + + start = pmd_val(*pmd) & HPAGE_MASK; + end = start + HPAGE_SIZE - 1; + __storage_key_init_range(start, end); + set_bit(PG_arch_1, &page->flags); + return 0; +} + int s390_enable_skey(void) { - struct mm_walk walk = { .pte_entry = __s390_enable_skey }; + struct mm_walk walk = { + .hugetlb_entry = __s390_enable_skey_hugetlb, + .pte_entry = __s390_enable_skey_pte, + }; struct mm_struct *mm = current->mm; struct vm_area_struct *vma; int rc = 0; diff --git a/arch/s390/mm/hugetlbpage.c b/arch/s390/mm/hugetlbpage.c index e804090f4470..b0246c705a19 100644 --- a/arch/s390/mm/hugetlbpage.c +++ b/arch/s390/mm/hugetlbpage.c @@ -123,6 +123,29 @@ static inline pte_t __rste_to_pte(unsigned long rste) return pte; } +static void clear_huge_pte_skeys(struct mm_struct *mm, unsigned long rste) +{ + struct page *page; + unsigned long size, paddr; + + if (!mm_uses_skeys(mm) || + rste & _SEGMENT_ENTRY_INVALID) + return; + + if ((rste & _REGION_ENTRY_TYPE_MASK) == _REGION_ENTRY_TYPE_R3) { + page = pud_page(__pud(rste)); + size = PUD_SIZE; + paddr = rste & PUD_MASK; + } else { + page = pmd_page(__pmd(rste)); + size = PMD_SIZE; + paddr = rste & PMD_MASK; + } + + if (!test_and_set_bit(PG_arch_1, &page->flags)) + __storage_key_init_range(paddr, paddr + size - 1); +} + void set_huge_pte_at(struct mm_struct *mm, unsigned long addr, pte_t *ptep, pte_t pte) { @@ -137,6 +160,7 @@ void set_huge_pte_at(struct mm_struct *mm, unsigned long addr, rste |= _REGION_ENTRY_TYPE_R3 | _REGION3_ENTRY_LARGE; else rste |= _SEGMENT_ENTRY_LARGE; + clear_huge_pte_skeys(mm, rste); pte_val(*ptep) = rste; } diff --git a/arch/s390/mm/page-states.c b/arch/s390/mm/page-states.c index 382153ff17e3..dc3cede7f2ec 100644 --- a/arch/s390/mm/page-states.c +++ b/arch/s390/mm/page-states.c @@ -271,7 +271,7 @@ void arch_set_page_states(int make_stable) list_for_each(l, &zone->free_area[order].free_list[t]) { page = list_entry(l, struct page, lru); if (make_stable) - set_page_stable_dat(page, 0); + set_page_stable_dat(page, order); else set_page_unused(page, order); } diff --git a/arch/s390/mm/pageattr.c b/arch/s390/mm/pageattr.c index c44171588d08..f8c6faab41f4 100644 --- a/arch/s390/mm/pageattr.c +++ b/arch/s390/mm/pageattr.c @@ -14,7 +14,7 @@ static inline unsigned long sske_frame(unsigned long addr, unsigned char skey) { - asm volatile(".insn rrf,0xb22b0000,%[skey],%[addr],9,0" + asm volatile(".insn rrf,0xb22b0000,%[skey],%[addr],1,0" : [addr] "+a" (addr) : [skey] "d" (skey)); return addr; } @@ -23,8 +23,6 @@ void __storage_key_init_range(unsigned long start, unsigned long end) { unsigned long boundary, size; - if (!PAGE_DEFAULT_KEY) - return; while (start < end) { if (MACHINE_HAS_EDAT1) { /* set storage keys for a 1MB frame */ @@ -37,7 +35,7 @@ void __storage_key_init_range(unsigned long start, unsigned long end) continue; } } - page_set_storage_key(start, PAGE_DEFAULT_KEY, 0); + page_set_storage_key(start, PAGE_DEFAULT_KEY, 1); start += PAGE_SIZE; } } diff --git a/arch/s390/mm/pgalloc.c b/arch/s390/mm/pgalloc.c index 84bd6329a88d..76d89ee8b428 100644 --- a/arch/s390/mm/pgalloc.c +++ b/arch/s390/mm/pgalloc.c @@ -28,7 +28,7 @@ static struct ctl_table page_table_sysctl[] = { .data = &page_table_allocate_pgste, .maxlen = sizeof(int), .mode = S_IRUGO | S_IWUSR, - .proc_handler = proc_dointvec, + .proc_handler = proc_dointvec_minmax, .extra1 = &page_table_allocate_pgste_min, .extra2 = &page_table_allocate_pgste_max, }, @@ -252,6 +252,8 @@ void 
page_table_free(struct mm_struct *mm, unsigned long *table) spin_unlock_bh(&mm->context.lock); if (mask != 0) return; + } else { + atomic_xor_bits(&page->_refcount, 3U << 24); } pgtable_page_dtor(page); @@ -304,6 +306,8 @@ static void __tlb_remove_table(void *_table) break; /* fallthrough */ case 3: /* 4K page table with pgstes */ + if (mask & 3) + atomic_xor_bits(&page->_refcount, 3 << 24); pgtable_page_dtor(page); __free_page(page); break; diff --git a/arch/s390/mm/pgtable.c b/arch/s390/mm/pgtable.c index 301e466e4263..f2cc7da473e4 100644 --- a/arch/s390/mm/pgtable.c +++ b/arch/s390/mm/pgtable.c @@ -347,18 +347,27 @@ static inline void pmdp_idte_local(struct mm_struct *mm, mm->context.asce, IDTE_LOCAL); else __pmdp_idte(addr, pmdp, 0, 0, IDTE_LOCAL); + if (mm_has_pgste(mm) && mm->context.allow_gmap_hpage_1m) + gmap_pmdp_idte_local(mm, addr); } static inline void pmdp_idte_global(struct mm_struct *mm, unsigned long addr, pmd_t *pmdp) { - if (MACHINE_HAS_TLB_GUEST) + if (MACHINE_HAS_TLB_GUEST) { __pmdp_idte(addr, pmdp, IDTE_NODAT | IDTE_GUEST_ASCE, mm->context.asce, IDTE_GLOBAL); - else if (MACHINE_HAS_IDTE) + if (mm_has_pgste(mm) && mm->context.allow_gmap_hpage_1m) + gmap_pmdp_idte_global(mm, addr); + } else if (MACHINE_HAS_IDTE) { __pmdp_idte(addr, pmdp, 0, 0, IDTE_GLOBAL); - else + if (mm_has_pgste(mm) && mm->context.allow_gmap_hpage_1m) + gmap_pmdp_idte_global(mm, addr); + } else { __pmdp_csp(pmdp); + if (mm_has_pgste(mm) && mm->context.allow_gmap_hpage_1m) + gmap_pmdp_csp(mm, addr); + } } static inline pmd_t pmdp_flush_direct(struct mm_struct *mm, @@ -392,6 +401,8 @@ static inline pmd_t pmdp_flush_lazy(struct mm_struct *mm, cpumask_of(smp_processor_id()))) { pmd_val(*pmdp) |= _SEGMENT_ENTRY_INVALID; mm->context.flush_mm = 1; + if (mm_has_pgste(mm)) + gmap_pmdp_invalidate(mm, addr); } else { pmdp_idte_global(mm, addr, pmdp); } @@ -399,6 +410,24 @@ static inline pmd_t pmdp_flush_lazy(struct mm_struct *mm, return old; } +static pmd_t *pmd_alloc_map(struct mm_struct *mm, unsigned long addr) +{ + pgd_t *pgd; + p4d_t *p4d; + pud_t *pud; + pmd_t *pmd; + + pgd = pgd_offset(mm, addr); + p4d = p4d_alloc(mm, pgd, addr); + if (!p4d) + return NULL; + pud = pud_alloc(mm, p4d, addr); + if (!pud) + return NULL; + pmd = pmd_alloc(mm, pud, addr); + return pmd; +} + pmd_t pmdp_xchg_direct(struct mm_struct *mm, unsigned long addr, pmd_t *pmdp, pmd_t new) { @@ -693,40 +722,14 @@ void ptep_zap_key(struct mm_struct *mm, unsigned long addr, pte_t *ptep) /* * Test and reset if a guest page is dirty */ -bool test_and_clear_guest_dirty(struct mm_struct *mm, unsigned long addr) +bool ptep_test_and_clear_uc(struct mm_struct *mm, unsigned long addr, + pte_t *ptep) { - spinlock_t *ptl; - pgd_t *pgd; - p4d_t *p4d; - pud_t *pud; - pmd_t *pmd; pgste_t pgste; - pte_t *ptep; pte_t pte; bool dirty; int nodat; - pgd = pgd_offset(mm, addr); - p4d = p4d_alloc(mm, pgd, addr); - if (!p4d) - return false; - pud = pud_alloc(mm, p4d, addr); - if (!pud) - return false; - pmd = pmd_alloc(mm, pud, addr); - if (!pmd) - return false; - /* We can't run guests backed by huge pages, but userspace can - * still set them up and then try to migrate them without any - * migration support. 
- */ - if (pmd_large(*pmd)) - return true; - - ptep = pte_alloc_map_lock(mm, pmd, addr, &ptl); - if (unlikely(!ptep)) - return false; - pgste = pgste_get_lock(ptep); dirty = !!(pgste_val(pgste) & PGSTE_UC_BIT); pgste_val(pgste) &= ~PGSTE_UC_BIT; @@ -742,21 +745,43 @@ bool test_and_clear_guest_dirty(struct mm_struct *mm, unsigned long addr) *ptep = pte; } pgste_set_unlock(ptep, pgste); - - spin_unlock(ptl); return dirty; } -EXPORT_SYMBOL_GPL(test_and_clear_guest_dirty); +EXPORT_SYMBOL_GPL(ptep_test_and_clear_uc); int set_guest_storage_key(struct mm_struct *mm, unsigned long addr, unsigned char key, bool nq) { - unsigned long keyul; + unsigned long keyul, paddr; spinlock_t *ptl; pgste_t old, new; + pmd_t *pmdp; pte_t *ptep; - ptep = get_locked_pte(mm, addr, &ptl); + pmdp = pmd_alloc_map(mm, addr); + if (unlikely(!pmdp)) + return -EFAULT; + + ptl = pmd_lock(mm, pmdp); + if (!pmd_present(*pmdp)) { + spin_unlock(ptl); + return -EFAULT; + } + + if (pmd_large(*pmdp)) { + paddr = pmd_val(*pmdp) & HPAGE_MASK; + paddr |= addr & ~HPAGE_MASK; + /* + * Huge pmds need quiescing operations, they are + * always mapped. + */ + page_set_storage_key(paddr, key, 1); + spin_unlock(ptl); + return 0; + } + spin_unlock(ptl); + + ptep = pte_alloc_map_lock(mm, pmdp, addr, &ptl); if (unlikely(!ptep)) return -EFAULT; @@ -767,14 +792,14 @@ int set_guest_storage_key(struct mm_struct *mm, unsigned long addr, pgste_val(new) |= (keyul & (_PAGE_CHANGED | _PAGE_REFERENCED)) << 48; pgste_val(new) |= (keyul & (_PAGE_ACC_BITS | _PAGE_FP_BIT)) << 56; if (!(pte_val(*ptep) & _PAGE_INVALID)) { - unsigned long address, bits, skey; + unsigned long bits, skey; - address = pte_val(*ptep) & PAGE_MASK; - skey = (unsigned long) page_get_storage_key(address); + paddr = pte_val(*ptep) & PAGE_MASK; + skey = (unsigned long) page_get_storage_key(paddr); bits = skey & (_PAGE_CHANGED | _PAGE_REFERENCED); skey = key & (_PAGE_ACC_BITS | _PAGE_FP_BIT); /* Set storage key ACC and FP */ - page_set_storage_key(address, skey, !nq); + page_set_storage_key(paddr, skey, !nq); /* Merge host changed & referenced into pgste */ pgste_val(new) |= bits << 52; } @@ -830,11 +855,32 @@ EXPORT_SYMBOL(cond_set_guest_storage_key); int reset_guest_reference_bit(struct mm_struct *mm, unsigned long addr) { spinlock_t *ptl; + unsigned long paddr; pgste_t old, new; + pmd_t *pmdp; pte_t *ptep; int cc = 0; - ptep = get_locked_pte(mm, addr, &ptl); + pmdp = pmd_alloc_map(mm, addr); + if (unlikely(!pmdp)) + return -EFAULT; + + ptl = pmd_lock(mm, pmdp); + if (!pmd_present(*pmdp)) { + spin_unlock(ptl); + return -EFAULT; + } + + if (pmd_large(*pmdp)) { + paddr = pmd_val(*pmdp) & HPAGE_MASK; + paddr |= addr & ~HPAGE_MASK; + cc = page_reset_referenced(paddr); + spin_unlock(ptl); + return cc; + } + spin_unlock(ptl); + + ptep = pte_alloc_map_lock(mm, pmdp, addr, &ptl); if (unlikely(!ptep)) return -EFAULT; @@ -843,7 +889,8 @@ int reset_guest_reference_bit(struct mm_struct *mm, unsigned long addr) pgste_val(new) &= ~PGSTE_GR_BIT; if (!(pte_val(*ptep) & _PAGE_INVALID)) { - cc = page_reset_referenced(pte_val(*ptep) & PAGE_MASK); + paddr = pte_val(*ptep) & PAGE_MASK; + cc = page_reset_referenced(paddr); /* Merge real referenced bit into host-set */ pgste_val(new) |= ((unsigned long) cc << 53) & PGSTE_HR_BIT; } @@ -862,18 +909,42 @@ EXPORT_SYMBOL(reset_guest_reference_bit); int get_guest_storage_key(struct mm_struct *mm, unsigned long addr, unsigned char *key) { + unsigned long paddr; spinlock_t *ptl; pgste_t pgste; + pmd_t *pmdp; pte_t *ptep; - ptep = get_locked_pte(mm, addr, &ptl); + 
pmdp = pmd_alloc_map(mm, addr); + if (unlikely(!pmdp)) + return -EFAULT; + + ptl = pmd_lock(mm, pmdp); + if (!pmd_present(*pmdp)) { + /* Not yet mapped memory has a zero key */ + spin_unlock(ptl); + *key = 0; + return 0; + } + + if (pmd_large(*pmdp)) { + paddr = pmd_val(*pmdp) & HPAGE_MASK; + paddr |= addr & ~HPAGE_MASK; + *key = page_get_storage_key(paddr); + spin_unlock(ptl); + return 0; + } + spin_unlock(ptl); + + ptep = pte_alloc_map_lock(mm, pmdp, addr, &ptl); if (unlikely(!ptep)) return -EFAULT; pgste = pgste_get_lock(ptep); *key = (pgste_val(pgste) & (PGSTE_ACC_BITS | PGSTE_FP_BIT)) >> 56; + paddr = pte_val(*ptep) & PAGE_MASK; if (!(pte_val(*ptep) & _PAGE_INVALID)) - *key = page_get_storage_key(pte_val(*ptep) & PAGE_MASK); + *key = page_get_storage_key(paddr); /* Reflect guest's logical view, not physical */ *key |= (pgste_val(pgste) & (PGSTE_GR_BIT | PGSTE_GC_BIT)) >> 48; pgste_set_unlock(ptep, pgste); diff --git a/arch/s390/net/bpf_jit_comp.c b/arch/s390/net/bpf_jit_comp.c index d2db8acb1a55..d7052cbe984f 100644 --- a/arch/s390/net/bpf_jit_comp.c +++ b/arch/s390/net/bpf_jit_comp.c @@ -485,8 +485,6 @@ static void bpf_jit_epilogue(struct bpf_jit *jit, u32 stack_depth) /* br %r1 */ _EMIT2(0x07f1); } else { - /* larl %r1,.+14 */ - EMIT6_PCREL_RILB(0xc0000000, REG_1, jit->prg + 14); /* ex 0,S390_lowcore.br_r1_tampoline */ EMIT4_DISP(0x44000000, REG_0, REG_0, offsetof(struct lowcore, br_r1_trampoline)); @@ -1286,6 +1284,7 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *fp) goto free_addrs; } if (bpf_jit_prog(&jit, fp)) { + bpf_jit_binary_free(header); fp = orig_fp; goto free_addrs; } diff --git a/arch/s390/numa/numa.c b/arch/s390/numa/numa.c index 06a80434cfe6..5bd374491f94 100644 --- a/arch/s390/numa/numa.c +++ b/arch/s390/numa/numa.c @@ -134,26 +134,14 @@ void __init numa_setup(void) { pr_info("NUMA mode: %s\n", mode->name); nodes_clear(node_possible_map); + /* Initially attach all possible CPUs to node 0. */ + cpumask_copy(&node_to_cpumask_map[0], cpu_possible_mask); if (mode->setup) mode->setup(); numa_setup_memory(); memblock_dump_all(); } -/* - * numa_init_early() - Initialization initcall - * - * This runs when only one CPU is online and before the first - * topology update is called for by the scheduler. - */ -static int __init numa_init_early(void) -{ - /* Attach all possible CPUs to node 0 for now. 
*/ - cpumask_copy(&node_to_cpumask_map[0], cpu_possible_mask); - return 0; -} -early_initcall(numa_init_early); - /* * numa_init_late() - Initialization initcall * diff --git a/arch/s390/pci/pci_debug.c b/arch/s390/pci/pci_debug.c index b482e95b6249..57f7cdac70a3 100644 --- a/arch/s390/pci/pci_debug.c +++ b/arch/s390/pci/pci_debug.c @@ -48,6 +48,10 @@ static char *pci_fmt2_names[] = { "Maximum work units", }; +static char *pci_fmt3_names[] = { + "Transmitted bytes", +}; + static char *pci_sw_names[] = { "Allocated pages", "Mapped pages", @@ -112,6 +116,10 @@ static int pci_perf_show(struct seq_file *m, void *v) pci_fmb_show(m, pci_fmt2_names, ARRAY_SIZE(pci_fmt2_names), &zdev->fmb->fmt2.consumed_work_units); break; + case 3: + pci_fmb_show(m, pci_fmt3_names, ARRAY_SIZE(pci_fmt3_names), + &zdev->fmb->fmt3.tx_bytes); + break; default: seq_puts(m, "Unknown format\n"); } diff --git a/arch/s390/purgatory/Makefile b/arch/s390/purgatory/Makefile index 1ace023cbdce..8d61218a71aa 100644 --- a/arch/s390/purgatory/Makefile +++ b/arch/s390/purgatory/Makefile @@ -7,13 +7,13 @@ purgatory-y := head.o purgatory.o string.o sha256.o mem.o targets += $(purgatory-y) purgatory.ro kexec-purgatory.c PURGATORY_OBJS = $(addprefix $(obj)/,$(purgatory-y)) -$(obj)/sha256.o: $(srctree)/lib/sha256.c +$(obj)/sha256.o: $(srctree)/lib/sha256.c FORCE $(call if_changed_rule,cc_o_c) -$(obj)/mem.o: $(srctree)/arch/s390/lib/mem.S +$(obj)/mem.o: $(srctree)/arch/s390/lib/mem.S FORCE $(call if_changed_rule,as_o_S) -$(obj)/string.o: $(srctree)/arch/s390/lib/string.c +$(obj)/string.o: $(srctree)/arch/s390/lib/string.c FORCE $(call if_changed_rule,cc_o_c) LDFLAGS_purgatory.ro := -e purgatory_start -r --no-undefined -nostdlib @@ -21,8 +21,9 @@ LDFLAGS_purgatory.ro += -z nodefaultlib KBUILD_CFLAGS := -fno-strict-aliasing -Wall -Wstrict-prototypes KBUILD_CFLAGS += -Wno-pointer-sign -Wno-sign-compare KBUILD_CFLAGS += -fno-zero-initialized-in-bss -fno-builtin -ffreestanding -KBUILD_CFLAGS += -c -MD -Os -m64 -msoft-float +KBUILD_CFLAGS += -c -MD -Os -m64 -msoft-float -fno-common KBUILD_CFLAGS += $(call cc-option,-fno-PIE) +KBUILD_AFLAGS := $(filter-out -DCC_USING_EXPOLINE,$(KBUILD_AFLAGS)) $(obj)/purgatory.ro: $(PURGATORY_OBJS) FORCE $(call if_changed,ld) diff --git a/arch/s390/purgatory/head.S b/arch/s390/purgatory/head.S index 660c96a05a9b..2e3707b12edd 100644 --- a/arch/s390/purgatory/head.S +++ b/arch/s390/purgatory/head.S @@ -243,33 +243,26 @@ gprregs: .quad 0 .endr -purgatory_sha256_digest: - .global purgatory_sha256_digest - .rept 32 /* SHA256_DIGEST_SIZE */ - .byte 0 - .endr - -purgatory_sha_regions: - .global purgatory_sha_regions - .rept 16 * __KEXEC_SHA_REGION_SIZE /* KEXEC_SEGMENTS_MAX */ - .byte 0 - .endr - -kernel_entry: - .global kernel_entry - .quad 0 - -kernel_type: - .global kernel_type - .quad 0 - -crash_start: - .global crash_start - .quad 0 +/* Macro to define a global variable with name and size (in bytes) to be + * shared with C code. + * + * Add the .size and .type attribute to satisfy checks on the Elf_Sym during + * purgatory load. 
+ */ +.macro GLOBAL_VARIABLE name,size +\name: + .global \name + .size \name,\size + .type \name,object + .skip \size,0 +.endm -crash_size: - .global crash_size - .quad 0 +GLOBAL_VARIABLE purgatory_sha256_digest,32 +GLOBAL_VARIABLE purgatory_sha_regions,16*__KEXEC_SHA_REGION_SIZE +GLOBAL_VARIABLE kernel_entry,8 +GLOBAL_VARIABLE kernel_type,8 +GLOBAL_VARIABLE crash_start,8 +GLOBAL_VARIABLE crash_size,8 .align PAGE_SIZE stack: diff --git a/arch/s390/purgatory/purgatory.c b/arch/s390/purgatory/purgatory.c index 4e2beb3c29b7..3528e6da4e87 100644 --- a/arch/s390/purgatory/purgatory.c +++ b/arch/s390/purgatory/purgatory.c @@ -12,15 +12,6 @@ #include #include -struct kexec_sha_region purgatory_sha_regions[KEXEC_SEGMENT_MAX]; -u8 purgatory_sha256_digest[SHA256_DIGEST_SIZE]; - -u64 kernel_entry; -u64 kernel_type; - -u64 crash_start; -u64 crash_size; - int verify_sha256_digest(void) { struct kexec_sha_region *ptr, *end; diff --git a/arch/s390/scripts/Makefile.chkbss b/arch/s390/scripts/Makefile.chkbss index d92f2d94a5d9..9bba2c14e0ca 100644 --- a/arch/s390/scripts/Makefile.chkbss +++ b/arch/s390/scripts/Makefile.chkbss @@ -2,13 +2,22 @@ quiet_cmd_chkbss = CHKBSS $< define cmd_chkbss + rm -f $@; \ if ! $(OBJDUMP) -j .bss -w -h $< | awk 'END { if ($$3) exit 1 }'; then \ echo "error: $< .bss section is not empty" >&2; exit 1; \ fi; \ touch $@; endef -$(obj)/built-in.a: $(patsubst %, $(obj)/%.chkbss, $(chkbss)) +chkbss-target ?= $(obj)/built-in.a +ifneq (,$(findstring /,$(chkbss))) +chkbss-files := $(patsubst %, %.chkbss, $(chkbss)) +else +chkbss-files := $(patsubst %, $(obj)/%.chkbss, $(chkbss)) +endif + +$(chkbss-target): $(chkbss-files) +targets += $(notdir $(chkbss-files)) %.o.chkbss: %.o $(call cmd,chkbss) diff --git a/arch/s390/tools/gen_opcode_table.c b/arch/s390/tools/gen_opcode_table.c index 259aa0680d1a..a1bc02b29c81 100644 --- a/arch/s390/tools/gen_opcode_table.c +++ b/arch/s390/tools/gen_opcode_table.c @@ -257,7 +257,7 @@ static void add_to_group(struct gen_opcode *desc, struct insn *insn, int offset) if (!desc->group) exit(EXIT_FAILURE); group = &desc->group[desc->nr_groups - 1]; - strncpy(group->opcode, insn->opcode, 2); + memcpy(group->opcode, insn->opcode, 2); group->type = insn->type; group->offset = offset; group->count = 1; @@ -283,7 +283,7 @@ static void print_opcode_table(struct gen_opcode *desc) continue; add_to_group(desc, insn, offset); if (strncmp(opcode, insn->opcode, 2)) { - strncpy(opcode, insn->opcode, 2); + memcpy(opcode, insn->opcode, 2); printf("\t/* %.2s */ \\\n", opcode); } print_opcode(insn, offset); diff --git a/arch/sh/Kconfig b/arch/sh/Kconfig index 4d61a085982b..dd4f3d3e644f 100644 --- a/arch/sh/Kconfig +++ b/arch/sh/Kconfig @@ -77,7 +77,7 @@ config SUPERH32 select PERF_EVENTS select ARCH_HIBERNATION_POSSIBLE if MMU select SPARSE_IRQ - select HAVE_CC_STACKPROTECTOR + select HAVE_STACKPROTECTOR config SUPERH64 def_bool "$(ARCH)" = "sh64" @@ -687,7 +687,7 @@ config SMP People using multiprocessor machines who say Y here should also say Y to "Enhanced Real Time Clock Support", below. - See also and the SMP-HOWTO + See also and the SMP-HOWTO available at . If you don't know what to do here, say N. 
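
Looking back at the purgatory changes above: the GLOBAL_VARIABLE macro moves the storage for these six symbols into head.S with explicit .size and .type object attributes, which the kexec purgatory loader checks when it resolves and patches the Elf_Sym entries at load time (via kexec_purgatory_get_set_symbol()). The C side now only declares them. A sketch of the matching declarations, assumed for illustration since the header that actually carries them is not part of this diff:

	/* Assumed C-side declarations for the symbols defined in head.S. */
	extern struct kexec_sha_region purgatory_sha_regions[KEXEC_SEGMENT_MAX];
	extern u8 purgatory_sha256_digest[SHA256_DIGEST_SIZE];
	extern u64 kernel_entry;
	extern u64 kernel_type;
	extern u64 crash_start;
	extern u64 crash_size;
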
diff --git a/arch/sh/include/asm/atomic.h b/arch/sh/include/asm/atomic.h index 0fd0099f43cc..f37b95a80232 100644 --- a/arch/sh/include/asm/atomic.h +++ b/arch/sh/include/asm/atomic.h @@ -32,44 +32,9 @@ #include #endif -#define atomic_add_negative(a, v) (atomic_add_return((a), (v)) < 0) -#define atomic_dec_return(v) atomic_sub_return(1, (v)) -#define atomic_inc_return(v) atomic_add_return(1, (v)) -#define atomic_inc_and_test(v) (atomic_inc_return(v) == 0) -#define atomic_sub_and_test(i,v) (atomic_sub_return((i), (v)) == 0) -#define atomic_dec_and_test(v) (atomic_sub_return(1, (v)) == 0) - -#define atomic_inc(v) atomic_add(1, (v)) -#define atomic_dec(v) atomic_sub(1, (v)) - #define atomic_xchg(v, new) (xchg(&((v)->counter), new)) #define atomic_cmpxchg(v, o, n) (cmpxchg(&((v)->counter), (o), (n))) -/** - * __atomic_add_unless - add unless the number is a given value - * @v: pointer of type atomic_t - * @a: the amount to add to v... - * @u: ...unless v is equal to u. - * - * Atomically adds @a to @v, so long as it was not @u. - * Returns the old value of @v. - */ -static inline int __atomic_add_unless(atomic_t *v, int a, int u) -{ - int c, old; - c = atomic_read(v); - for (;;) { - if (unlikely(c == (u))) - break; - old = atomic_cmpxchg((v), c, c + (a)); - if (likely(old == c)) - break; - c = old; - } - - return c; -} - #endif /* CONFIG_CPU_J2 */ #endif /* __ASM_SH_ATOMIC_H */ diff --git a/arch/sh/include/asm/cmpxchg-xchg.h b/arch/sh/include/asm/cmpxchg-xchg.h index 1e881f5db659..593a9704782b 100644 --- a/arch/sh/include/asm/cmpxchg-xchg.h +++ b/arch/sh/include/asm/cmpxchg-xchg.h @@ -8,7 +8,8 @@ * This work is licensed under the terms of the GNU GPL, version 2. See the * file "COPYING" in the main directory of this archive for more details. */ -#include +#include +#include #include /* diff --git a/arch/sh/include/asm/hw_breakpoint.h b/arch/sh/include/asm/hw_breakpoint.h index 7431c172c0cb..199d17b765f2 100644 --- a/arch/sh/include/asm/hw_breakpoint.h +++ b/arch/sh/include/asm/hw_breakpoint.h @@ -10,7 +10,6 @@ #include struct arch_hw_breakpoint { - char *name; /* Contains name of the symbol to set bkpt */ unsigned long address; u16 len; u16 type; @@ -41,6 +40,7 @@ struct sh_ubc { struct clk *clk; /* optional interface clock / MSTP bit */ }; +struct perf_event_attr; struct perf_event; struct task_struct; struct pmu; @@ -54,8 +54,10 @@ static inline int hw_breakpoint_slots(int type) } /* arch/sh/kernel/hw_breakpoint.c */ -extern int arch_check_bp_in_kernelspace(struct perf_event *bp); -extern int arch_validate_hwbkpt_settings(struct perf_event *bp); +extern int arch_check_bp_in_kernelspace(struct arch_hw_breakpoint *hw); +extern int hw_breakpoint_arch_parse(struct perf_event *bp, + const struct perf_event_attr *attr, + struct arch_hw_breakpoint *hw); extern int hw_breakpoint_exceptions_notify(struct notifier_block *unused, unsigned long val, void *data); diff --git a/arch/sh/include/asm/kprobes.h b/arch/sh/include/asm/kprobes.h index 85d8bcaa8493..6171682f7798 100644 --- a/arch/sh/include/asm/kprobes.h +++ b/arch/sh/include/asm/kprobes.h @@ -27,7 +27,6 @@ struct kprobe; void arch_remove_kprobe(struct kprobe *); void kretprobe_trampoline(void); -void jprobe_return_end(void); /* Architecture specific copy of original instruction*/ struct arch_specific_insn { @@ -43,9 +42,6 @@ struct prev_kprobe { /* per-cpu kprobe control block */ struct kprobe_ctlblk { unsigned long kprobe_status; - unsigned long jprobe_saved_r15; - struct pt_regs jprobe_saved_regs; - kprobe_opcode_t jprobes_stack[MAX_STACK_SIZE]; 
struct prev_kprobe prev_kprobe; }; diff --git a/arch/sh/kernel/hw_breakpoint.c b/arch/sh/kernel/hw_breakpoint.c index 8648ed05ccf0..d9ff3b42da7c 100644 --- a/arch/sh/kernel/hw_breakpoint.c +++ b/arch/sh/kernel/hw_breakpoint.c @@ -124,14 +124,13 @@ static int get_hbp_len(u16 hbp_len) /* * Check for virtual address in kernel space. */ -int arch_check_bp_in_kernelspace(struct perf_event *bp) +int arch_check_bp_in_kernelspace(struct arch_hw_breakpoint *hw) { unsigned int len; unsigned long va; - struct arch_hw_breakpoint *info = counter_arch_bp(bp); - va = info->address; - len = get_hbp_len(info->len); + va = hw->address; + len = get_hbp_len(hw->len); return (va >= TASK_SIZE) && ((va + len - 1) >= TASK_SIZE); } @@ -174,40 +173,40 @@ int arch_bp_generic_fields(int sh_len, int sh_type, return 0; } -static int arch_build_bp_info(struct perf_event *bp) +static int arch_build_bp_info(struct perf_event *bp, + const struct perf_event_attr *attr, + struct arch_hw_breakpoint *hw) { - struct arch_hw_breakpoint *info = counter_arch_bp(bp); - - info->address = bp->attr.bp_addr; + hw->address = attr->bp_addr; /* Len */ - switch (bp->attr.bp_len) { + switch (attr->bp_len) { case HW_BREAKPOINT_LEN_1: - info->len = SH_BREAKPOINT_LEN_1; + hw->len = SH_BREAKPOINT_LEN_1; break; case HW_BREAKPOINT_LEN_2: - info->len = SH_BREAKPOINT_LEN_2; + hw->len = SH_BREAKPOINT_LEN_2; break; case HW_BREAKPOINT_LEN_4: - info->len = SH_BREAKPOINT_LEN_4; + hw->len = SH_BREAKPOINT_LEN_4; break; case HW_BREAKPOINT_LEN_8: - info->len = SH_BREAKPOINT_LEN_8; + hw->len = SH_BREAKPOINT_LEN_8; break; default: return -EINVAL; } /* Type */ - switch (bp->attr.bp_type) { + switch (attr->bp_type) { case HW_BREAKPOINT_R: - info->type = SH_BREAKPOINT_READ; + hw->type = SH_BREAKPOINT_READ; break; case HW_BREAKPOINT_W: - info->type = SH_BREAKPOINT_WRITE; + hw->type = SH_BREAKPOINT_WRITE; break; case HW_BREAKPOINT_W | HW_BREAKPOINT_R: - info->type = SH_BREAKPOINT_RW; + hw->type = SH_BREAKPOINT_RW; break; default: return -EINVAL; @@ -219,19 +218,20 @@ static int arch_build_bp_info(struct perf_event *bp) /* * Validate the arch-specific HW Breakpoint register settings */ -int arch_validate_hwbkpt_settings(struct perf_event *bp) +int hw_breakpoint_arch_parse(struct perf_event *bp, + const struct perf_event_attr *attr, + struct arch_hw_breakpoint *hw) { - struct arch_hw_breakpoint *info = counter_arch_bp(bp); unsigned int align; int ret; - ret = arch_build_bp_info(bp); + ret = arch_build_bp_info(bp, attr, hw); if (ret) return ret; ret = -EINVAL; - switch (info->len) { + switch (hw->len) { case SH_BREAKPOINT_LEN_1: align = 0; break; @@ -248,18 +248,11 @@ int arch_validate_hwbkpt_settings(struct perf_event *bp) return ret; } - /* - * For kernel-addresses, either the address or symbol name can be - * specified. - */ - if (info->name) - info->address = (unsigned long)kallsyms_lookup_name(info->name); - /* * Check that the low-order bits of the address are appropriate * for the alignment implied by len. 
*/ - if (info->address & align) + if (hw->address & align) return -EINVAL; return 0; @@ -346,7 +339,7 @@ static int __kprobes hw_breakpoint_handler(struct die_args *args) perf_bp_event(bp, args->regs); /* Deliver the signal to userspace */ - if (!arch_check_bp_in_kernelspace(bp)) { + if (!arch_check_bp_in_kernelspace(&bp->hw.info)) { force_sig_fault(SIGTRAP, TRAP_HWBKPT, (void __user *)NULL, current); } diff --git a/arch/sh/kernel/kprobes.c b/arch/sh/kernel/kprobes.c index 52a5e11247d1..241e903dd3ee 100644 --- a/arch/sh/kernel/kprobes.c +++ b/arch/sh/kernel/kprobes.c @@ -248,11 +248,6 @@ static int __kprobes kprobe_handler(struct pt_regs *regs) prepare_singlestep(p, regs); kcb->kprobe_status = KPROBE_REENTER; return 1; - } else { - p = __this_cpu_read(current_kprobe); - if (p->break_handler && p->break_handler(p, regs)) { - goto ss_probe; - } } goto no_kprobe; } @@ -277,11 +272,13 @@ static int __kprobes kprobe_handler(struct pt_regs *regs) set_current_kprobe(p, regs, kcb); kcb->kprobe_status = KPROBE_HIT_ACTIVE; - if (p->pre_handler && p->pre_handler(p, regs)) + if (p->pre_handler && p->pre_handler(p, regs)) { /* handler has already set things up, so skip ss setup */ + reset_current_kprobe(); + preempt_enable_no_resched(); return 1; + } -ss_probe: prepare_singlestep(p, regs); kcb->kprobe_status = KPROBE_HIT_SS; return 1; @@ -358,8 +355,6 @@ int __kprobes trampoline_probe_handler(struct kprobe *p, struct pt_regs *regs) regs->pc = orig_ret_address; kretprobe_hash_unlock(current, &flags); - preempt_enable_no_resched(); - hlist_for_each_entry_safe(ri, tmp, &empty_rp, hlist) { hlist_del(&ri->hlist); kfree(ri); @@ -508,14 +503,8 @@ int __kprobes kprobe_exceptions_notify(struct notifier_block *self, if (post_kprobe_handler(args->regs)) ret = NOTIFY_STOP; } else { - if (kprobe_handler(args->regs)) { + if (kprobe_handler(args->regs)) ret = NOTIFY_STOP; - } else { - p = __this_cpu_read(current_kprobe); - if (p->break_handler && - p->break_handler(p, args->regs)) - ret = NOTIFY_STOP; - } } } } @@ -523,57 +512,6 @@ int __kprobes kprobe_exceptions_notify(struct notifier_block *self, return ret; } -int __kprobes setjmp_pre_handler(struct kprobe *p, struct pt_regs *regs) -{ - struct jprobe *jp = container_of(p, struct jprobe, kp); - unsigned long addr; - struct kprobe_ctlblk *kcb = get_kprobe_ctlblk(); - - kcb->jprobe_saved_regs = *regs; - kcb->jprobe_saved_r15 = regs->regs[15]; - addr = kcb->jprobe_saved_r15; - - /* - * TBD: As Linus pointed out, gcc assumes that the callee - * owns the argument space and could overwrite it, e.g. - * tailcall optimization. So, to be absolutely safe - * we also save and restore enough stack bytes to cover - * the argument area. 
- */ - memcpy(kcb->jprobes_stack, (kprobe_opcode_t *) addr, - MIN_STACK_SIZE(addr)); - - regs->pc = (unsigned long)(jp->entry); - - return 1; -} - -void __kprobes jprobe_return(void) -{ - asm volatile ("trapa #0x3a\n\t" "jprobe_return_end:\n\t" "nop\n\t"); -} - -int __kprobes longjmp_break_handler(struct kprobe *p, struct pt_regs *regs) -{ - struct kprobe_ctlblk *kcb = get_kprobe_ctlblk(); - unsigned long stack_addr = kcb->jprobe_saved_r15; - u8 *addr = (u8 *)regs->pc; - - if ((addr >= (u8 *)jprobe_return) && - (addr <= (u8 *)jprobe_return_end)) { - *regs = kcb->jprobe_saved_regs; - - memcpy((kprobe_opcode_t *)stack_addr, kcb->jprobes_stack, - MIN_STACK_SIZE(stack_addr)); - - kcb->kprobe_status = KPROBE_HIT_SS; - preempt_enable_no_resched(); - return 1; - } - - return 0; -} - static struct kprobe trampoline_p = { .addr = (kprobe_opcode_t *)&kretprobe_trampoline, .pre_handler = trampoline_probe_handler diff --git a/arch/sparc/Kconfig b/arch/sparc/Kconfig index 9a2b8877f174..0f535debf802 100644 --- a/arch/sparc/Kconfig +++ b/arch/sparc/Kconfig @@ -178,7 +178,7 @@ config SMP Y to "Enhanced Real Time Clock Support", below. The "Advanced Power Management" code will be disabled if you say Y here. - See also and the SMP-HOWTO + See also and the SMP-HOWTO available at . If you don't know what to do here, say N. diff --git a/arch/sparc/include/asm/Kbuild b/arch/sparc/include/asm/Kbuild index ac67828da201..410b263ef5c8 100644 --- a/arch/sparc/include/asm/Kbuild +++ b/arch/sparc/include/asm/Kbuild @@ -13,6 +13,7 @@ generic-y += local64.h generic-y += mcs_spinlock.h generic-y += mm-arch-hooks.h generic-y += module.h +generic-y += msi.h generic-y += preempt.h generic-y += rwsem.h generic-y += serial.h diff --git a/arch/sparc/include/asm/atomic_32.h b/arch/sparc/include/asm/atomic_32.h index d13ce517f4b9..94c930f0bc62 100644 --- a/arch/sparc/include/asm/atomic_32.h +++ b/arch/sparc/include/asm/atomic_32.h @@ -27,17 +27,17 @@ int atomic_fetch_or(int, atomic_t *); int atomic_fetch_xor(int, atomic_t *); int atomic_cmpxchg(atomic_t *, int, int); int atomic_xchg(atomic_t *, int); -int __atomic_add_unless(atomic_t *, int, int); +int atomic_fetch_add_unless(atomic_t *, int, int); void atomic_set(atomic_t *, int); +#define atomic_fetch_add_unless atomic_fetch_add_unless + #define atomic_set_release(v, i) atomic_set((v), (i)) #define atomic_read(v) READ_ONCE((v)->counter) #define atomic_add(i, v) ((void)atomic_add_return( (int)(i), (v))) #define atomic_sub(i, v) ((void)atomic_add_return(-(int)(i), (v))) -#define atomic_inc(v) ((void)atomic_add_return( 1, (v))) -#define atomic_dec(v) ((void)atomic_add_return( -1, (v))) #define atomic_and(i, v) ((void)atomic_fetch_and((i), (v))) #define atomic_or(i, v) ((void)atomic_fetch_or((i), (v))) @@ -46,22 +46,4 @@ void atomic_set(atomic_t *, int); #define atomic_sub_return(i, v) (atomic_add_return(-(int)(i), (v))) #define atomic_fetch_sub(i, v) (atomic_fetch_add (-(int)(i), (v))) -#define atomic_inc_return(v) (atomic_add_return( 1, (v))) -#define atomic_dec_return(v) (atomic_add_return( -1, (v))) - -#define atomic_add_negative(a, v) (atomic_add_return((a), (v)) < 0) - -/* - * atomic_inc_and_test - increment and test - * @v: pointer of type atomic_t - * - * Atomically increments @v by 1 - * and returns true if the result is zero, or false for all - * other cases. 
- */ -#define atomic_inc_and_test(v) (atomic_inc_return(v) == 0) - -#define atomic_dec_and_test(v) (atomic_dec_return(v) == 0) -#define atomic_sub_and_test(i, v) (atomic_sub_return(i, v) == 0) - #endif /* !(__ARCH_SPARC_ATOMIC__) */ diff --git a/arch/sparc/include/asm/atomic_64.h b/arch/sparc/include/asm/atomic_64.h index 28db058d471b..6963482c81d8 100644 --- a/arch/sparc/include/asm/atomic_64.h +++ b/arch/sparc/include/asm/atomic_64.h @@ -50,38 +50,6 @@ ATOMIC_OPS(xor) #undef ATOMIC_OP_RETURN #undef ATOMIC_OP -#define atomic_dec_return(v) atomic_sub_return(1, v) -#define atomic64_dec_return(v) atomic64_sub_return(1, v) - -#define atomic_inc_return(v) atomic_add_return(1, v) -#define atomic64_inc_return(v) atomic64_add_return(1, v) - -/* - * atomic_inc_and_test - increment and test - * @v: pointer of type atomic_t - * - * Atomically increments @v by 1 - * and returns true if the result is zero, or false for all - * other cases. - */ -#define atomic_inc_and_test(v) (atomic_inc_return(v) == 0) -#define atomic64_inc_and_test(v) (atomic64_inc_return(v) == 0) - -#define atomic_sub_and_test(i, v) (atomic_sub_return(i, v) == 0) -#define atomic64_sub_and_test(i, v) (atomic64_sub_return(i, v) == 0) - -#define atomic_dec_and_test(v) (atomic_sub_return(1, v) == 0) -#define atomic64_dec_and_test(v) (atomic64_sub_return(1, v) == 0) - -#define atomic_inc(v) atomic_add(1, v) -#define atomic64_inc(v) atomic64_add(1, v) - -#define atomic_dec(v) atomic_sub(1, v) -#define atomic64_dec(v) atomic64_sub(1, v) - -#define atomic_add_negative(i, v) (atomic_add_return(i, v) < 0) -#define atomic64_add_negative(i, v) (atomic64_add_return(i, v) < 0) - #define atomic_cmpxchg(v, o, n) (cmpxchg(&((v)->counter), (o), (n))) static inline int atomic_xchg(atomic_t *v, int new) @@ -89,42 +57,11 @@ static inline int atomic_xchg(atomic_t *v, int new) return xchg(&v->counter, new); } -static inline int __atomic_add_unless(atomic_t *v, int a, int u) -{ - int c, old; - c = atomic_read(v); - for (;;) { - if (unlikely(c == (u))) - break; - old = atomic_cmpxchg((v), c, c + (a)); - if (likely(old == c)) - break; - c = old; - } - return c; -} - #define atomic64_cmpxchg(v, o, n) \ ((__typeof__((v)->counter))cmpxchg(&((v)->counter), (o), (n))) #define atomic64_xchg(v, new) (xchg(&((v)->counter), new)) -static inline long atomic64_add_unless(atomic64_t *v, long a, long u) -{ - long c, old; - c = atomic64_read(v); - for (;;) { - if (unlikely(c == (u))) - break; - old = atomic64_cmpxchg((v), c, c + (a)); - if (likely(old == c)) - break; - c = old; - } - return c != (u); -} - -#define atomic64_inc_not_zero(v) atomic64_add_unless((v), 1, 0) - long atomic64_dec_if_positive(atomic64_t *v); +#define atomic64_dec_if_positive atomic64_dec_if_positive #endif /* !(__ARCH_SPARC64_ATOMIC__) */ diff --git a/arch/sparc/include/asm/kprobes.h b/arch/sparc/include/asm/kprobes.h index 3704490b4488..bfcaa6326c20 100644 --- a/arch/sparc/include/asm/kprobes.h +++ b/arch/sparc/include/asm/kprobes.h @@ -44,7 +44,6 @@ struct kprobe_ctlblk { unsigned long kprobe_status; unsigned long kprobe_orig_tnpc; unsigned long kprobe_orig_tstate_pil; - struct pt_regs jprobe_saved_regs; struct prev_kprobe prev_kprobe; }; diff --git a/arch/sparc/include/asm/msi.h b/arch/sparc/include/asm/msi.h deleted file mode 100644 index 3c17c1074431..000000000000 --- a/arch/sparc/include/asm/msi.h +++ /dev/null @@ -1,32 +0,0 @@ -/* SPDX-License-Identifier: GPL-2.0 */ -/* - * msi.h: Defines specific to the MBus - Sbus - Interface. - * - * Copyright (C) 1996 David S. 
Miller (davem@caip.rutgers.edu) - * Copyright (C) 1996 Eddie C. Dost (ecd@skynet.be) - */ - -#ifndef _SPARC_MSI_H -#define _SPARC_MSI_H - -/* - * Locations of MSI Registers. - */ -#define MSI_MBUS_ARBEN 0xe0001008 /* MBus Arbiter Enable register */ - -/* - * Useful bits in the MSI Registers. - */ -#define MSI_ASYNC_MODE 0x80000000 /* Operate the MSI asynchronously */ - - -static inline void msi_set_sync(void) -{ - __asm__ __volatile__ ("lda [%0] %1, %%g3\n\t" - "andn %%g3, %2, %%g3\n\t" - "sta %%g3, [%0] %1\n\t" : : - "r" (MSI_MBUS_ARBEN), - "i" (ASI_M_CTL), "r" (MSI_ASYNC_MODE) : "g3"); -} - -#endif /* !(_SPARC_MSI_H) */ diff --git a/arch/sparc/kernel/kprobes.c b/arch/sparc/kernel/kprobes.c index ab4ba4347941..dfbca2470536 100644 --- a/arch/sparc/kernel/kprobes.c +++ b/arch/sparc/kernel/kprobes.c @@ -147,18 +147,12 @@ static int __kprobes kprobe_handler(struct pt_regs *regs) kcb->kprobe_status = KPROBE_REENTER; prepare_singlestep(p, regs, kcb); return 1; - } else { - if (*(u32 *)addr != BREAKPOINT_INSTRUCTION) { + } else if (*(u32 *)addr != BREAKPOINT_INSTRUCTION) { /* The breakpoint instruction was removed by * another cpu right after we hit, no further * handling of this interrupt is appropriate */ - ret = 1; - goto no_kprobe; - } - p = __this_cpu_read(current_kprobe); - if (p->break_handler && p->break_handler(p, regs)) - goto ss_probe; + ret = 1; } goto no_kprobe; } @@ -181,10 +175,12 @@ static int __kprobes kprobe_handler(struct pt_regs *regs) set_current_kprobe(p, regs, kcb); kcb->kprobe_status = KPROBE_HIT_ACTIVE; - if (p->pre_handler && p->pre_handler(p, regs)) + if (p->pre_handler && p->pre_handler(p, regs)) { + reset_current_kprobe(); + preempt_enable_no_resched(); return 1; + } -ss_probe: prepare_singlestep(p, regs, kcb); kcb->kprobe_status = KPROBE_HIT_SS; return 1; @@ -441,53 +437,6 @@ out: exception_exit(prev_state); } -/* Jprobes support. */ -int __kprobes setjmp_pre_handler(struct kprobe *p, struct pt_regs *regs) -{ - struct jprobe *jp = container_of(p, struct jprobe, kp); - struct kprobe_ctlblk *kcb = get_kprobe_ctlblk(); - - memcpy(&(kcb->jprobe_saved_regs), regs, sizeof(*regs)); - - regs->tpc = (unsigned long) jp->entry; - regs->tnpc = ((unsigned long) jp->entry) + 0x4UL; - regs->tstate |= TSTATE_PIL; - - return 1; -} - -void __kprobes jprobe_return(void) -{ - struct kprobe_ctlblk *kcb = get_kprobe_ctlblk(); - register unsigned long orig_fp asm("g1"); - - orig_fp = kcb->jprobe_saved_regs.u_regs[UREG_FP]; - __asm__ __volatile__("\n" -"1: cmp %%sp, %0\n\t" - "blu,a,pt %%xcc, 1b\n\t" - " restore\n\t" - ".globl jprobe_return_trap_instruction\n" -"jprobe_return_trap_instruction:\n\t" - "ta 0x70" - : /* no outputs */ - : "r" (orig_fp)); -} - -extern void jprobe_return_trap_instruction(void); - -int __kprobes longjmp_break_handler(struct kprobe *p, struct pt_regs *regs) -{ - u32 *addr = (u32 *) regs->tpc; - struct kprobe_ctlblk *kcb = get_kprobe_ctlblk(); - - if (addr == (u32 *) jprobe_return_trap_instruction) { - memcpy(regs, &(kcb->jprobe_saved_regs), sizeof(*regs)); - preempt_enable_no_resched(); - return 1; - } - return 0; -} - /* The value stored in the return address register is actually 2 * instructions before where the callee will return to. 
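 * (the call instruction saves its own address in the return-address register;
 * a normal return jumps to that address + 8, skipping the call and its delay slot)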
 * Sequences usually look something like this
@@ -562,9 +511,7 @@ static int __kprobes trampoline_probe_handler(struct kprobe *p,
 	regs->tpc = orig_ret_address;
 	regs->tnpc = orig_ret_address + 4;
 
-	reset_current_kprobe();
 	kretprobe_hash_unlock(current, &flags);
-	preempt_enable_no_resched();
 
 	hlist_for_each_entry_safe(ri, tmp, &empty_rp, hlist) {
 		hlist_del(&ri->hlist);
diff --git a/arch/sparc/kernel/time_64.c b/arch/sparc/kernel/time_64.c
index 2ef8cfa9677e..f0eba72aa1ad 100644
--- a/arch/sparc/kernel/time_64.c
+++ b/arch/sparc/kernel/time_64.c
@@ -814,7 +814,7 @@ static void __init get_tick_patch(void)
 	}
 }
 
-static void init_tick_ops(struct sparc64_tick_ops *ops)
+static void __init init_tick_ops(struct sparc64_tick_ops *ops)
 {
 	unsigned long freq, quotient, tick;
 
diff --git a/arch/sparc/lib/atomic32.c b/arch/sparc/lib/atomic32.c
index 465a901a0ada..281fa634bb1a 100644
--- a/arch/sparc/lib/atomic32.c
+++ b/arch/sparc/lib/atomic32.c
@@ -95,7 +95,7 @@ int atomic_cmpxchg(atomic_t *v, int old, int new)
 }
 EXPORT_SYMBOL(atomic_cmpxchg);
 
-int __atomic_add_unless(atomic_t *v, int a, int u)
+int atomic_fetch_add_unless(atomic_t *v, int a, int u)
 {
 	int ret;
 	unsigned long flags;
@@ -107,7 +107,7 @@ int __atomic_add_unless(atomic_t *v, int a, int u)
 	spin_unlock_irqrestore(ATOMIC_HASH(v), flags);
 	return ret;
 }
-EXPORT_SYMBOL(__atomic_add_unless);
+EXPORT_SYMBOL(atomic_fetch_add_unless);
 
 /* Atomic operations are already serializing */
 void atomic_set(atomic_t *v, int i)
diff --git a/arch/sparc/mm/init_64.c b/arch/sparc/mm/init_64.c
index 8aeb1aabe76e..f396048a0d68 100644
--- a/arch/sparc/mm/init_64.c
+++ b/arch/sparc/mm/init_64.c
@@ -1620,7 +1620,7 @@ static void __init bootmem_init_nonnuma(void)
 		(top_of_ram - total_ram) >> 20);
 
 	init_node_masks_nonnuma();
-	memblock_set_node(0, (phys_addr_t)ULLONG_MAX, &memblock.memory, 0);
+	memblock_set_node(0, PHYS_ADDR_MAX, &memblock.memory, 0);
 	allocate_node_data(0);
 	node_set_online(0);
 }
diff --git a/arch/sparc/mm/srmmu.c b/arch/sparc/mm/srmmu.c
index 1d70c3f6d986..be9cb0065179 100644
--- a/arch/sparc/mm/srmmu.c
+++ b/arch/sparc/mm/srmmu.c
@@ -37,7 +37,6 @@
 #include
 #include
 #include
-#include <asm/msi.h>
 #include
 #include
 
@@ -116,6 +115,25 @@ static inline void srmmu_ctxd_set(ctxd_t *ctxp, pgd_t *pgdp)
 	set_pte((pte_t *)ctxp, pte);
 }
 
+/*
+ * Locations of MSI Registers.
+ */
+#define MSI_MBUS_ARBEN 0xe0001008	/* MBus Arbiter Enable register */
+
+/*
+ * Useful bits in the MSI Registers.
+ */
+#define MSI_ASYNC_MODE  0x80000000	/* Operate the MSI asynchronously */
+
+static void msi_set_sync(void)
+{
+	__asm__ __volatile__ ("lda [%0] %1, %%g3\n\t"
+			      "andn %%g3, %2, %%g3\n\t"
+			      "sta %%g3, [%0] %1\n\t" : :
+			      "r" (MSI_MBUS_ARBEN),
+			      "i" (ASI_M_CTL), "r" (MSI_ASYNC_MODE) : "g3");
+}
+
 void pmd_set(pmd_t *pmdp, pte_t *ptep)
 {
 	unsigned long ptp;	/* Physical address, shifted right by 4 */
diff --git a/arch/um/Kconfig.um b/arch/um/Kconfig.um
index 3e7f228b22e1..20da5a8ca949 100644
--- a/arch/um/Kconfig.um
+++ b/arch/um/Kconfig.um
@@ -80,7 +80,7 @@ config MAGIC_SYSRQ
 	  On UML, this is accomplished by sending a "sysrq" command with
 	  mconsole, followed by the letter for the requested command.
 
-	  The keys are documented in . Don't say Y
+	  The keys are documented in . Don't say Y
 	  unless you really know what this hack does.
config KERNEL_STACK_ORDER diff --git a/arch/um/drivers/vector_kern.c b/arch/um/drivers/vector_kern.c index 627075e6d875..50ee3bb5a63a 100644 --- a/arch/um/drivers/vector_kern.c +++ b/arch/um/drivers/vector_kern.c @@ -188,7 +188,7 @@ static int get_transport_options(struct arglist *def) if (strncmp(transport, TRANS_TAP, TRANS_TAP_LEN) == 0) return (vec_rx | VECTOR_BPF); if (strncmp(transport, TRANS_RAW, TRANS_RAW_LEN) == 0) - return (vec_rx | vec_tx); + return (vec_rx | vec_tx | VECTOR_QDISC_BYPASS); return (vec_rx | vec_tx); } @@ -504,15 +504,19 @@ static struct vector_queue *create_queue( result = kmalloc(sizeof(struct vector_queue), GFP_KERNEL); if (result == NULL) - goto out_fail; + return NULL; result->max_depth = max_size; result->dev = vp->dev; result->mmsg_vector = kmalloc( (sizeof(struct mmsghdr) * max_size), GFP_KERNEL); + if (result->mmsg_vector == NULL) + goto out_mmsg_fail; result->skbuff_vector = kmalloc( (sizeof(void *) * max_size), GFP_KERNEL); - if (result->mmsg_vector == NULL || result->skbuff_vector == NULL) - goto out_fail; + if (result->skbuff_vector == NULL) + goto out_skb_fail; + + /* further failures can be handled safely by destroy_queue*/ mmsg_vector = result->mmsg_vector; for (i = 0; i < max_size; i++) { @@ -563,6 +567,11 @@ static struct vector_queue *create_queue( result->head = 0; result->tail = 0; return result; +out_skb_fail: + kfree(result->mmsg_vector); +out_mmsg_fail: + kfree(result); + return NULL; out_fail: destroy_queue(result); return NULL; @@ -1232,9 +1241,8 @@ static int vector_net_open(struct net_device *dev) if ((vp->options & VECTOR_QDISC_BYPASS) != 0) { if (!uml_raw_enable_qdisc_bypass(vp->fds->rx_fd)) - vp->options = vp->options | VECTOR_BPF; + vp->options |= VECTOR_BPF; } - if ((vp->options & VECTOR_BPF) != 0) vp->bpf = uml_vector_default_bpf(vp->fds->rx_fd, dev->dev_addr); diff --git a/arch/um/include/asm/common.lds.S b/arch/um/include/asm/common.lds.S index b30d73ca29d0..7adb4e6b658a 100644 --- a/arch/um/include/asm/common.lds.S +++ b/arch/um/include/asm/common.lds.S @@ -53,12 +53,6 @@ CON_INITCALL } - .uml.initcall.init : { - __uml_initcall_start = .; - *(.uml.initcall.init) - __uml_initcall_end = .; - } - SECURITY_INIT .exitcall : { diff --git a/arch/um/include/shared/init.h b/arch/um/include/shared/init.h index b3f5865a92c9..c66de434a983 100644 --- a/arch/um/include/shared/init.h +++ b/arch/um/include/shared/init.h @@ -64,14 +64,10 @@ struct uml_param { int (*setup_func)(char *, int *); }; -extern initcall_t __uml_initcall_start, __uml_initcall_end; extern initcall_t __uml_postsetup_start, __uml_postsetup_end; extern const char *__uml_help_start, *__uml_help_end; #endif -#define __uml_initcall(fn) \ - static initcall_t __uml_initcall_##fn __uml_init_call = fn - #define __uml_exitcall(fn) \ static exitcall_t __uml_exitcall_##fn __uml_exit_call = fn @@ -108,7 +104,6 @@ extern struct uml_param __uml_setup_start, __uml_setup_end; */ #define __uml_init_setup __used __section(.uml.setup.init) #define __uml_setup_help __used __section(.uml.help.init) -#define __uml_init_call __used __section(.uml.initcall.init) #define __uml_postsetup_call __used __section(.uml.postsetup.init) #define __uml_exit_call __used __section(.uml.exitcall.exit) diff --git a/arch/um/os-Linux/main.c b/arch/um/os-Linux/main.c index 5f970ece5ac3..f1fee2b91239 100644 --- a/arch/um/os-Linux/main.c +++ b/arch/um/os-Linux/main.c @@ -40,17 +40,6 @@ static void set_stklim(void) } } -static __init void do_uml_initcalls(void) -{ - initcall_t *call; - - call = &__uml_initcall_start; 
- while (call < &__uml_initcall_end) { - (*call)(); - call++; - } -} - static void last_ditch_exit(int sig) { uml_cleanup(); @@ -151,7 +140,6 @@ int __init main(int argc, char **argv, char **envp) scan_elf_aux(envp); #endif - do_uml_initcalls(); change_sig(SIGPIPE, 0); ret = linux_main(argc, argv); diff --git a/arch/unicore32/include/asm/cacheflush.h b/arch/unicore32/include/asm/cacheflush.h index 1d9132b66039..1c8b9f13a9e1 100644 --- a/arch/unicore32/include/asm/cacheflush.h +++ b/arch/unicore32/include/asm/cacheflush.h @@ -33,7 +33,7 @@ * Start addresses are inclusive and end addresses are exclusive; * start addresses should be rounded down, end addresses up. * - * See Documentation/cachetlb.txt for more information. + * See Documentation/core-api/cachetlb.rst for more information. * Please note that the implementation of these, and the required * effects are cache-type (VIVT/VIPT/PIPT) specific. * diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig index 455a670ab239..6d4774f203d0 100644 --- a/arch/x86/Kconfig +++ b/arch/x86/Kconfig @@ -63,7 +63,7 @@ config X86 select ARCH_HAS_PTE_SPECIAL select ARCH_HAS_REFCOUNT select ARCH_HAS_UACCESS_FLUSHCACHE if X86_64 - select ARCH_HAS_UACCESS_MCSAFE if X86_64 + select ARCH_HAS_UACCESS_MCSAFE if X86_64 && X86_MCE select ARCH_HAS_SET_MEMORY select ARCH_HAS_SG_CHAIN select ARCH_HAS_STRICT_KERNEL_RWX @@ -130,7 +130,6 @@ config X86 select HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD if X86_64 select HAVE_ARCH_VMAP_STACK if X86_64 select HAVE_ARCH_WITHIN_STACK_FRAMES - select HAVE_CC_STACKPROTECTOR if CC_HAS_SANE_STACKPROTECTOR select HAVE_CMPXCHG_DOUBLE select HAVE_CMPXCHG_LOCAL select HAVE_CONTEXT_TRACKING if X86_64 @@ -181,7 +180,8 @@ config X86 select HAVE_PERF_USER_STACK_DUMP select HAVE_RCU_TABLE_FREE select HAVE_REGS_AND_STACK_ACCESS_API - select HAVE_RELIABLE_STACKTRACE if X86_64 && UNWINDER_FRAME_POINTER && STACK_VALIDATION + select HAVE_RELIABLE_STACKTRACE if X86_64 && (UNWINDER_FRAME_POINTER || UNWINDER_ORC) && STACK_VALIDATION + select HAVE_STACKPROTECTOR if CC_HAS_SANE_STACKPROTECTOR select HAVE_STACK_VALIDATION if X86_64 select HAVE_RSEQ select HAVE_SYSCALL_TRACEPOINTS @@ -327,7 +327,7 @@ config X86_64_SMP config X86_32_LAZY_GS def_bool y - depends on X86_32 && CC_STACKPROTECTOR_NONE + depends on X86_32 && !STACKPROTECTOR config ARCH_SUPPORTS_UPROBES def_bool y diff --git a/arch/x86/Makefile b/arch/x86/Makefile index f0a6ea22429d..7e3c07d6ad42 100644 --- a/arch/x86/Makefile +++ b/arch/x86/Makefile @@ -80,11 +80,6 @@ ifeq ($(CONFIG_X86_32),y) # alignment instructions. KBUILD_CFLAGS += $(call cc-option,$(cc_stack_align4)) - # Disable unit-at-a-time mode on pre-gcc-4.0 compilers, it makes gcc use - # a lot more stack due to the lack of sharing of stacklots: - KBUILD_CFLAGS += $(call cc-ifversion, -lt, 0400, \ - $(call cc-option,-fno-unit-at-a-time)) - # CPU-specific tuning. Anything which can be shared with UML should go here. 
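# (Makefile_32.cpu fills cflags-y with the -march/-mtune options for the
# selected processor family.)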
include arch/x86/Makefile_32.cpu KBUILD_CFLAGS += $(cflags-y) @@ -258,11 +253,6 @@ archscripts: scripts_basic archheaders: $(Q)$(MAKE) $(build)=arch/x86/entry/syscalls all -archprepare: -ifeq ($(CONFIG_KEXEC_FILE),y) - $(Q)$(MAKE) $(build)=arch/x86/purgatory arch/x86/purgatory/kexec-purgatory.c -endif - ### # Kernel objects @@ -327,7 +317,6 @@ archclean: $(Q)rm -rf $(objtree)/arch/x86_64 $(Q)$(MAKE) $(clean)=$(boot) $(Q)$(MAKE) $(clean)=arch/x86/tools - $(Q)$(MAKE) $(clean)=arch/x86/purgatory define archhelp echo '* bzImage - Compressed kernel image (arch/x86/boot/bzImage)' diff --git a/arch/x86/boot/bitops.h b/arch/x86/boot/bitops.h index 0d41d68131cc..2e1382486e91 100644 --- a/arch/x86/boot/bitops.h +++ b/arch/x86/boot/bitops.h @@ -17,6 +17,7 @@ #define _LINUX_BITOPS_H /* Inhibit inclusion of */ #include +#include static inline bool constant_test_bit(int nr, const void *addr) { @@ -28,7 +29,7 @@ static inline bool variable_test_bit(int nr, const void *addr) bool v; const u32 *p = (const u32 *)addr; - asm("btl %2,%1; setc %0" : "=qm" (v) : "m" (*p), "Ir" (nr)); + asm("btl %2,%1" CC_SET(c) : CC_OUT(c) (v) : "m" (*p), "Ir" (nr)); return v; } diff --git a/arch/x86/boot/compressed/Makefile b/arch/x86/boot/compressed/Makefile index fa42f895fdde..169c2feda14a 100644 --- a/arch/x86/boot/compressed/Makefile +++ b/arch/x86/boot/compressed/Makefile @@ -106,9 +106,13 @@ define cmd_check_data_rel done endef +# We need to run two commands under "if_changed", so merge them into a +# single invocation. +quiet_cmd_check-and-link-vmlinux = LD $@ + cmd_check-and-link-vmlinux = $(cmd_check_data_rel); $(cmd_ld) + $(obj)/vmlinux: $(vmlinux-objs-y) FORCE - $(call if_changed,check_data_rel) - $(call if_changed,ld) + $(call if_changed,check-and-link-vmlinux) OBJCOPYFLAGS_vmlinux.bin := -R .comment -S $(obj)/vmlinux.bin: vmlinux FORCE diff --git a/arch/x86/boot/compressed/eboot.c b/arch/x86/boot/compressed/eboot.c index a8a8642d2b0b..1458b1700fc7 100644 --- a/arch/x86/boot/compressed/eboot.c +++ b/arch/x86/boot/compressed/eboot.c @@ -34,74 +34,13 @@ static void setup_boot_services##bits(struct efi_config *c) \ \ table = (typeof(table))sys_table; \ \ - c->runtime_services = table->runtime; \ - c->boot_services = table->boottime; \ - c->text_output = table->con_out; \ + c->runtime_services = table->runtime; \ + c->boot_services = table->boottime; \ + c->text_output = table->con_out; \ } BOOT_SERVICES(32); BOOT_SERVICES(64); -static inline efi_status_t __open_volume32(void *__image, void **__fh) -{ - efi_file_io_interface_t *io; - efi_loaded_image_32_t *image = __image; - efi_file_handle_32_t *fh; - efi_guid_t fs_proto = EFI_FILE_SYSTEM_GUID; - efi_status_t status; - void *handle = (void *)(unsigned long)image->device_handle; - unsigned long func; - - status = efi_call_early(handle_protocol, handle, - &fs_proto, (void **)&io); - if (status != EFI_SUCCESS) { - efi_printk(sys_table, "Failed to handle fs_proto\n"); - return status; - } - - func = (unsigned long)io->open_volume; - status = efi_early->call(func, io, &fh); - if (status != EFI_SUCCESS) - efi_printk(sys_table, "Failed to open volume\n"); - - *__fh = fh; - return status; -} - -static inline efi_status_t __open_volume64(void *__image, void **__fh) -{ - efi_file_io_interface_t *io; - efi_loaded_image_64_t *image = __image; - efi_file_handle_64_t *fh; - efi_guid_t fs_proto = EFI_FILE_SYSTEM_GUID; - efi_status_t status; - void *handle = (void *)(unsigned long)image->device_handle; - unsigned long func; - - status = efi_call_early(handle_protocol, handle, - 
&fs_proto, (void **)&io); - if (status != EFI_SUCCESS) { - efi_printk(sys_table, "Failed to handle fs_proto\n"); - return status; - } - - func = (unsigned long)io->open_volume; - status = efi_early->call(func, io, &fh); - if (status != EFI_SUCCESS) - efi_printk(sys_table, "Failed to open volume\n"); - - *__fh = fh; - return status; -} - -efi_status_t -efi_open_volume(efi_system_table_t *sys_table, void *__image, void **__fh) -{ - if (efi_early->is64) - return __open_volume64(__image, __fh); - - return __open_volume32(__image, __fh); -} - void efi_char16_printk(efi_system_table_t *table, efi_char16_t *str) { efi_call_proto(efi_simple_text_output_protocol, output_string, @@ -109,23 +48,17 @@ void efi_char16_printk(efi_system_table_t *table, efi_char16_t *str) } static efi_status_t -__setup_efi_pci(efi_pci_io_protocol_t *pci, struct pci_setup_rom **__rom) +preserve_pci_rom_image(efi_pci_io_protocol_t *pci, struct pci_setup_rom **__rom) { struct pci_setup_rom *rom = NULL; efi_status_t status; unsigned long size; - uint64_t attributes, romsize; + uint64_t romsize; void *romimage; - status = efi_call_proto(efi_pci_io_protocol, attributes, pci, - EfiPciIoAttributeOperationGet, 0, 0, - &attributes); - if (status != EFI_SUCCESS) - return status; - /* - * Some firmware images contain EFI function pointers at the place where the - * romimage and romsize fields are supposed to be. Typically the EFI + * Some firmware images contain EFI function pointers at the place where + * the romimage and romsize fields are supposed to be. Typically the EFI * code is mapped at high addresses, translating to an unrealistically * large romsize. The UEFI spec limits the size of option ROMs to 16 * MiB so we reject any ROMs over 16 MiB in size to catch this. @@ -140,16 +73,16 @@ __setup_efi_pci(efi_pci_io_protocol_t *pci, struct pci_setup_rom **__rom) status = efi_call_early(allocate_pool, EFI_LOADER_DATA, size, &rom); if (status != EFI_SUCCESS) { - efi_printk(sys_table, "Failed to alloc mem for rom\n"); + efi_printk(sys_table, "Failed to allocate memory for 'rom'\n"); return status; } memset(rom, 0, sizeof(*rom)); - rom->data.type = SETUP_PCI; - rom->data.len = size - sizeof(struct setup_data); - rom->data.next = 0; - rom->pcilen = pci->romsize; + rom->data.type = SETUP_PCI; + rom->data.len = size - sizeof(struct setup_data); + rom->data.next = 0; + rom->pcilen = pci->romsize; *__rom = rom; status = efi_call_proto(efi_pci_io_protocol, pci.read, pci, @@ -185,96 +118,6 @@ free_struct: return status; } -static void -setup_efi_pci32(struct boot_params *params, void **pci_handle, - unsigned long size) -{ - efi_pci_io_protocol_t *pci = NULL; - efi_guid_t pci_proto = EFI_PCI_IO_PROTOCOL_GUID; - u32 *handles = (u32 *)(unsigned long)pci_handle; - efi_status_t status; - unsigned long nr_pci; - struct setup_data *data; - int i; - - data = (struct setup_data *)(unsigned long)params->hdr.setup_data; - - while (data && data->next) - data = (struct setup_data *)(unsigned long)data->next; - - nr_pci = size / sizeof(u32); - for (i = 0; i < nr_pci; i++) { - struct pci_setup_rom *rom = NULL; - u32 h = handles[i]; - - status = efi_call_early(handle_protocol, h, - &pci_proto, (void **)&pci); - - if (status != EFI_SUCCESS) - continue; - - if (!pci) - continue; - - status = __setup_efi_pci(pci, &rom); - if (status != EFI_SUCCESS) - continue; - - if (data) - data->next = (unsigned long)rom; - else - params->hdr.setup_data = (unsigned long)rom; - - data = (struct setup_data *)rom; - - } -} - -static void -setup_efi_pci64(struct boot_params 
*params, void **pci_handle, - unsigned long size) -{ - efi_pci_io_protocol_t *pci = NULL; - efi_guid_t pci_proto = EFI_PCI_IO_PROTOCOL_GUID; - u64 *handles = (u64 *)(unsigned long)pci_handle; - efi_status_t status; - unsigned long nr_pci; - struct setup_data *data; - int i; - - data = (struct setup_data *)(unsigned long)params->hdr.setup_data; - - while (data && data->next) - data = (struct setup_data *)(unsigned long)data->next; - - nr_pci = size / sizeof(u64); - for (i = 0; i < nr_pci; i++) { - struct pci_setup_rom *rom = NULL; - u64 h = handles[i]; - - status = efi_call_early(handle_protocol, h, - &pci_proto, (void **)&pci); - - if (status != EFI_SUCCESS) - continue; - - if (!pci) - continue; - - status = __setup_efi_pci(pci, &rom); - if (status != EFI_SUCCESS) - continue; - - if (data) - data->next = (unsigned long)rom; - else - params->hdr.setup_data = (unsigned long)rom; - - data = (struct setup_data *)rom; - - } -} - /* * There's no way to return an informative status from this function, * because any analysis (and printing of error messages) needs to be @@ -290,6 +133,9 @@ static void setup_efi_pci(struct boot_params *params) void **pci_handle = NULL; efi_guid_t pci_proto = EFI_PCI_IO_PROTOCOL_GUID; unsigned long size = 0; + unsigned long nr_pci; + struct setup_data *data; + int i; status = efi_call_early(locate_handle, EFI_LOCATE_BY_PROTOCOL, @@ -301,7 +147,7 @@ static void setup_efi_pci(struct boot_params *params) size, (void **)&pci_handle); if (status != EFI_SUCCESS) { - efi_printk(sys_table, "Failed to alloc mem for pci_handle\n"); + efi_printk(sys_table, "Failed to allocate memory for 'pci_handle'\n"); return; } @@ -313,10 +159,34 @@ static void setup_efi_pci(struct boot_params *params) if (status != EFI_SUCCESS) goto free_handle; - if (efi_early->is64) - setup_efi_pci64(params, pci_handle, size); - else - setup_efi_pci32(params, pci_handle, size); + data = (struct setup_data *)(unsigned long)params->hdr.setup_data; + + while (data && data->next) + data = (struct setup_data *)(unsigned long)data->next; + + nr_pci = size / (efi_is_64bit() ? sizeof(u64) : sizeof(u32)); + for (i = 0; i < nr_pci; i++) { + efi_pci_io_protocol_t *pci = NULL; + struct pci_setup_rom *rom; + + status = efi_call_early(handle_protocol, + efi_is_64bit() ? 
((u64 *)pci_handle)[i] + : ((u32 *)pci_handle)[i], + &pci_proto, (void **)&pci); + if (status != EFI_SUCCESS || !pci) + continue; + + status = preserve_pci_rom_image(pci, &rom); + if (status != EFI_SUCCESS) + continue; + + if (data) + data->next = (unsigned long)rom; + else + params->hdr.setup_data = (unsigned long)rom; + + data = (struct setup_data *)rom; + } free_handle: efi_call_early(free_pool, pci_handle); @@ -347,8 +217,7 @@ static void retrieve_apple_device_properties(struct boot_params *boot_params) status = efi_call_early(allocate_pool, EFI_LOADER_DATA, size + sizeof(struct setup_data), &new); if (status != EFI_SUCCESS) { - efi_printk(sys_table, - "Failed to alloc mem for properties\n"); + efi_printk(sys_table, "Failed to allocate memory for 'properties'\n"); return; } @@ -364,9 +233,9 @@ static void retrieve_apple_device_properties(struct boot_params *boot_params) new->next = 0; data = (struct setup_data *)(unsigned long)boot_params->hdr.setup_data; - if (!data) + if (!data) { boot_params->hdr.setup_data = (unsigned long)new; - else { + } else { while (data->next) data = (struct setup_data *)(unsigned long)data->next; data->next = (unsigned long)new; @@ -386,81 +255,55 @@ static void setup_quirks(struct boot_params *boot_params) } } +/* + * See if we have Universal Graphics Adapter (UGA) protocol + */ static efi_status_t -setup_uga32(void **uga_handle, unsigned long size, u32 *width, u32 *height) +setup_uga(struct screen_info *si, efi_guid_t *uga_proto, unsigned long size) { - struct efi_uga_draw_protocol *uga = NULL, *first_uga; - efi_guid_t uga_proto = EFI_UGA_PROTOCOL_GUID; + efi_status_t status; + u32 width, height; + void **uga_handle = NULL; + efi_uga_draw_protocol_t *uga = NULL, *first_uga; unsigned long nr_ugas; - u32 *handles = (u32 *)uga_handle; - efi_status_t status = EFI_INVALID_PARAMETER; int i; - first_uga = NULL; - nr_ugas = size / sizeof(u32); - for (i = 0; i < nr_ugas; i++) { - efi_guid_t pciio_proto = EFI_PCI_IO_PROTOCOL_GUID; - u32 w, h, depth, refresh; - void *pciio; - u32 handle = handles[i]; - - status = efi_call_early(handle_protocol, handle, - &uga_proto, (void **)&uga); - if (status != EFI_SUCCESS) - continue; - - efi_call_early(handle_protocol, handle, &pciio_proto, &pciio); - - status = efi_early->call((unsigned long)uga->get_mode, uga, - &w, &h, &depth, &refresh); - if (status == EFI_SUCCESS && (!first_uga || pciio)) { - *width = w; - *height = h; - - /* - * Once we've found a UGA supporting PCIIO, - * don't bother looking any further. - */ - if (pciio) - break; - - first_uga = uga; - } - } + status = efi_call_early(allocate_pool, EFI_LOADER_DATA, + size, (void **)&uga_handle); + if (status != EFI_SUCCESS) + return status; - return status; -} + status = efi_call_early(locate_handle, + EFI_LOCATE_BY_PROTOCOL, + uga_proto, NULL, &size, uga_handle); + if (status != EFI_SUCCESS) + goto free_handle; -static efi_status_t -setup_uga64(void **uga_handle, unsigned long size, u32 *width, u32 *height) -{ - struct efi_uga_draw_protocol *uga = NULL, *first_uga; - efi_guid_t uga_proto = EFI_UGA_PROTOCOL_GUID; - unsigned long nr_ugas; - u64 *handles = (u64 *)uga_handle; - efi_status_t status = EFI_INVALID_PARAMETER; - int i; + height = 0; + width = 0; first_uga = NULL; - nr_ugas = size / sizeof(u64); + nr_ugas = size / (efi_is_64bit() ? sizeof(u64) : sizeof(u32)); for (i = 0; i < nr_ugas; i++) { efi_guid_t pciio_proto = EFI_PCI_IO_PROTOCOL_GUID; u32 w, h, depth, refresh; void *pciio; - u64 handle = handles[i]; + unsigned long handle = efi_is_64bit() ? 
((u64 *)uga_handle)[i] + : ((u32 *)uga_handle)[i]; status = efi_call_early(handle_protocol, handle, - &uga_proto, (void **)&uga); + uga_proto, (void **)&uga); if (status != EFI_SUCCESS) continue; + pciio = NULL; efi_call_early(handle_protocol, handle, &pciio_proto, &pciio); - status = efi_early->call((unsigned long)uga->get_mode, uga, - &w, &h, &depth, &refresh); + status = efi_call_proto(efi_uga_draw_protocol, get_mode, uga, + &w, &h, &depth, &refresh); if (status == EFI_SUCCESS && (!first_uga || pciio)) { - *width = w; - *height = h; + width = w; + height = h; /* * Once we've found a UGA supporting PCIIO, @@ -473,59 +316,28 @@ setup_uga64(void **uga_handle, unsigned long size, u32 *width, u32 *height) } } - return status; -} - -/* - * See if we have Universal Graphics Adapter (UGA) protocol - */ -static efi_status_t setup_uga(struct screen_info *si, efi_guid_t *uga_proto, - unsigned long size) -{ - efi_status_t status; - u32 width, height; - void **uga_handle = NULL; - - status = efi_call_early(allocate_pool, EFI_LOADER_DATA, - size, (void **)&uga_handle); - if (status != EFI_SUCCESS) - return status; - - status = efi_call_early(locate_handle, - EFI_LOCATE_BY_PROTOCOL, - uga_proto, NULL, &size, uga_handle); - if (status != EFI_SUCCESS) - goto free_handle; - - height = 0; - width = 0; - - if (efi_early->is64) - status = setup_uga64(uga_handle, size, &width, &height); - else - status = setup_uga32(uga_handle, size, &width, &height); - if (!width && !height) goto free_handle; /* EFI framebuffer */ - si->orig_video_isVGA = VIDEO_TYPE_EFI; + si->orig_video_isVGA = VIDEO_TYPE_EFI; - si->lfb_depth = 32; - si->lfb_width = width; - si->lfb_height = height; + si->lfb_depth = 32; + si->lfb_width = width; + si->lfb_height = height; - si->red_size = 8; - si->red_pos = 16; - si->green_size = 8; - si->green_pos = 8; - si->blue_size = 8; - si->blue_pos = 0; - si->rsvd_size = 8; - si->rsvd_pos = 24; + si->red_size = 8; + si->red_pos = 16; + si->green_size = 8; + si->green_pos = 8; + si->blue_size = 8; + si->blue_pos = 0; + si->rsvd_size = 8; + si->rsvd_pos = 24; free_handle: efi_call_early(free_pool, uga_handle); + return status; } @@ -592,7 +404,7 @@ struct boot_params *make_boot_params(struct efi_config *c) if (sys_table->hdr.signature != EFI_SYSTEM_TABLE_SIGNATURE) return NULL; - if (efi_early->is64) + if (efi_is_64bit()) setup_boot_services64(efi_early); else setup_boot_services32(efi_early); @@ -607,7 +419,7 @@ struct boot_params *make_boot_params(struct efi_config *c) status = efi_low_alloc(sys_table, 0x4000, 1, (unsigned long *)&boot_params); if (status != EFI_SUCCESS) { - efi_printk(sys_table, "Failed to alloc lowmem for boot params\n"); + efi_printk(sys_table, "Failed to allocate lowmem for boot params\n"); return NULL; } @@ -623,9 +435,9 @@ struct boot_params *make_boot_params(struct efi_config *c) * Fill out some of the header fields ourselves because the * EFI firmware loader doesn't load the first sector. 
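 * (root_flags, vid_mode and boot_flag all live in that first 512-byte sector.)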
*/ - hdr->root_flags = 1; - hdr->vid_mode = 0xffff; - hdr->boot_flag = 0xAA55; + hdr->root_flags = 1; + hdr->vid_mode = 0xffff; + hdr->boot_flag = 0xAA55; hdr->type_of_loader = 0x21; @@ -633,6 +445,7 @@ struct boot_params *make_boot_params(struct efi_config *c) cmdline_ptr = efi_convert_cmdline(sys_table, image, &options_size); if (!cmdline_ptr) goto fail; + hdr->cmd_line_ptr = (unsigned long)cmdline_ptr; /* Fill in upper bits of command line address, NOP on 32 bit */ boot_params->ext_cmd_line_ptr = (u64)(unsigned long)cmdline_ptr >> 32; @@ -669,10 +482,12 @@ struct boot_params *make_boot_params(struct efi_config *c) boot_params->ext_ramdisk_size = (u64)ramdisk_size >> 32; return boot_params; + fail2: efi_free(sys_table, options_size, hdr->cmd_line_ptr); fail: efi_free(sys_table, 0x4000, (unsigned long)boot_params); + return NULL; } @@ -684,7 +499,7 @@ static void add_e820ext(struct boot_params *params, unsigned long size; e820ext->type = SETUP_E820_EXT; - e820ext->len = nr_entries * sizeof(struct boot_e820_entry); + e820ext->len = nr_entries * sizeof(struct boot_e820_entry); e820ext->next = 0; data = (struct setup_data *)(unsigned long)params->hdr.setup_data; @@ -698,8 +513,8 @@ static void add_e820ext(struct boot_params *params, params->hdr.setup_data = (unsigned long)e820ext; } -static efi_status_t setup_e820(struct boot_params *params, - struct setup_data *e820ext, u32 e820ext_size) +static efi_status_t +setup_e820(struct boot_params *params, struct setup_data *e820ext, u32 e820ext_size) { struct boot_e820_entry *entry = params->e820_table; struct efi_info *efi = ¶ms->efi_info; @@ -820,11 +635,10 @@ static efi_status_t alloc_e820ext(u32 nr_desc, struct setup_data **e820ext, } struct exit_boot_struct { - struct boot_params *boot_params; - struct efi_info *efi; - struct setup_data *e820ext; - __u32 e820ext_size; - bool is64; + struct boot_params *boot_params; + struct efi_info *efi; + struct setup_data *e820ext; + __u32 e820ext_size; }; static efi_status_t exit_boot_func(efi_system_table_t *sys_table_arg, @@ -851,25 +665,25 @@ static efi_status_t exit_boot_func(efi_system_table_t *sys_table_arg, first = false; } - signature = p->is64 ? EFI64_LOADER_SIGNATURE : EFI32_LOADER_SIGNATURE; + signature = efi_is_64bit() ? 
EFI64_LOADER_SIGNATURE + : EFI32_LOADER_SIGNATURE; memcpy(&p->efi->efi_loader_signature, signature, sizeof(__u32)); - p->efi->efi_systab = (unsigned long)sys_table_arg; - p->efi->efi_memdesc_size = *map->desc_size; - p->efi->efi_memdesc_version = *map->desc_ver; - p->efi->efi_memmap = (unsigned long)*map->map; - p->efi->efi_memmap_size = *map->map_size; + p->efi->efi_systab = (unsigned long)sys_table_arg; + p->efi->efi_memdesc_size = *map->desc_size; + p->efi->efi_memdesc_version = *map->desc_ver; + p->efi->efi_memmap = (unsigned long)*map->map; + p->efi->efi_memmap_size = *map->map_size; #ifdef CONFIG_X86_64 - p->efi->efi_systab_hi = (unsigned long)sys_table_arg >> 32; - p->efi->efi_memmap_hi = (unsigned long)*map->map >> 32; + p->efi->efi_systab_hi = (unsigned long)sys_table_arg >> 32; + p->efi->efi_memmap_hi = (unsigned long)*map->map >> 32; #endif return EFI_SUCCESS; } -static efi_status_t exit_boot(struct boot_params *boot_params, - void *handle, bool is64) +static efi_status_t exit_boot(struct boot_params *boot_params, void *handle) { unsigned long map_sz, key, desc_size, buff_size; efi_memory_desc_t *mem_map; @@ -880,17 +694,16 @@ static efi_status_t exit_boot(struct boot_params *boot_params, struct efi_boot_memmap map; struct exit_boot_struct priv; - map.map = &mem_map; - map.map_size = &map_sz; - map.desc_size = &desc_size; - map.desc_ver = &desc_version; - map.key_ptr = &key; - map.buff_size = &buff_size; - priv.boot_params = boot_params; - priv.efi = &boot_params->efi_info; - priv.e820ext = NULL; - priv.e820ext_size = 0; - priv.is64 = is64; + map.map = &mem_map; + map.map_size = &map_sz; + map.desc_size = &desc_size; + map.desc_ver = &desc_version; + map.key_ptr = &key; + map.buff_size = &buff_size; + priv.boot_params = boot_params; + priv.efi = &boot_params->efi_info; + priv.e820ext = NULL; + priv.e820ext_size = 0; /* Might as well exit boot services now */ status = efi_exit_boot_services(sys_table, handle, &map, &priv, @@ -898,10 +711,11 @@ static efi_status_t exit_boot(struct boot_params *boot_params, if (status != EFI_SUCCESS) return status; - e820ext = priv.e820ext; - e820ext_size = priv.e820ext_size; + e820ext = priv.e820ext; + e820ext_size = priv.e820ext_size; + /* Historic? */ - boot_params->alt_mem_k = 32 * 1024; + boot_params->alt_mem_k = 32 * 1024; status = setup_e820(boot_params, e820ext, e820ext_size); if (status != EFI_SUCCESS) @@ -914,8 +728,8 @@ static efi_status_t exit_boot(struct boot_params *boot_params, * On success we return a pointer to a boot_params structure, and NULL * on failure. 
*/ -struct boot_params *efi_main(struct efi_config *c, - struct boot_params *boot_params) +struct boot_params * +efi_main(struct efi_config *c, struct boot_params *boot_params) { struct desc_ptr *gdt = NULL; efi_loaded_image_t *image; @@ -924,13 +738,11 @@ struct boot_params *efi_main(struct efi_config *c, struct desc_struct *desc; void *handle; efi_system_table_t *_table; - bool is64; efi_early = c; _table = (efi_system_table_t *)(unsigned long)efi_early->table; handle = (void *)(unsigned long)efi_early->image_handle; - is64 = efi_early->is64; sys_table = _table; @@ -938,7 +750,7 @@ struct boot_params *efi_main(struct efi_config *c, if (sys_table->hdr.signature != EFI_SYSTEM_TABLE_SIGNATURE) goto fail; - if (is64) + if (efi_is_64bit()) setup_boot_services64(efi_early); else setup_boot_services32(efi_early); @@ -963,7 +775,7 @@ struct boot_params *efi_main(struct efi_config *c, status = efi_call_early(allocate_pool, EFI_LOADER_DATA, sizeof(*gdt), (void **)&gdt); if (status != EFI_SUCCESS) { - efi_printk(sys_table, "Failed to alloc mem for gdt structure\n"); + efi_printk(sys_table, "Failed to allocate memory for 'gdt' structure\n"); goto fail; } @@ -971,7 +783,7 @@ struct boot_params *efi_main(struct efi_config *c, status = efi_low_alloc(sys_table, gdt->size, 8, (unsigned long *)&gdt->address); if (status != EFI_SUCCESS) { - efi_printk(sys_table, "Failed to alloc mem for gdt\n"); + efi_printk(sys_table, "Failed to allocate memory for 'gdt'\n"); goto fail; } @@ -994,7 +806,7 @@ struct boot_params *efi_main(struct efi_config *c, hdr->code32_start = bzimage_addr; } - status = exit_boot(boot_params, handle, is64); + status = exit_boot(boot_params, handle); if (status != EFI_SUCCESS) { efi_printk(sys_table, "exit_boot() failed!\n"); goto fail; @@ -1008,19 +820,20 @@ struct boot_params *efi_main(struct efi_config *c, if (IS_ENABLED(CONFIG_X86_64)) { /* __KERNEL32_CS */ - desc->limit0 = 0xffff; - desc->base0 = 0x0000; - desc->base1 = 0x0000; - desc->type = SEG_TYPE_CODE | SEG_TYPE_EXEC_READ; - desc->s = DESC_TYPE_CODE_DATA; - desc->dpl = 0; - desc->p = 1; - desc->limit1 = 0xf; - desc->avl = 0; - desc->l = 0; - desc->d = SEG_OP_SIZE_32BIT; - desc->g = SEG_GRANULARITY_4KB; - desc->base2 = 0x00; + desc->limit0 = 0xffff; + desc->base0 = 0x0000; + desc->base1 = 0x0000; + desc->type = SEG_TYPE_CODE | SEG_TYPE_EXEC_READ; + desc->s = DESC_TYPE_CODE_DATA; + desc->dpl = 0; + desc->p = 1; + desc->limit1 = 0xf; + desc->avl = 0; + desc->l = 0; + desc->d = SEG_OP_SIZE_32BIT; + desc->g = SEG_GRANULARITY_4KB; + desc->base2 = 0x00; + desc++; } else { /* Second entry is unused on 32-bit */ @@ -1028,15 +841,16 @@ struct boot_params *efi_main(struct efi_config *c, } /* __KERNEL_CS */ - desc->limit0 = 0xffff; - desc->base0 = 0x0000; - desc->base1 = 0x0000; - desc->type = SEG_TYPE_CODE | SEG_TYPE_EXEC_READ; - desc->s = DESC_TYPE_CODE_DATA; - desc->dpl = 0; - desc->p = 1; - desc->limit1 = 0xf; - desc->avl = 0; + desc->limit0 = 0xffff; + desc->base0 = 0x0000; + desc->base1 = 0x0000; + desc->type = SEG_TYPE_CODE | SEG_TYPE_EXEC_READ; + desc->s = DESC_TYPE_CODE_DATA; + desc->dpl = 0; + desc->p = 1; + desc->limit1 = 0xf; + desc->avl = 0; + if (IS_ENABLED(CONFIG_X86_64)) { desc->l = 1; desc->d = 0; @@ -1044,41 +858,41 @@ struct boot_params *efi_main(struct efi_config *c, desc->l = 0; desc->d = SEG_OP_SIZE_32BIT; } - desc->g = SEG_GRANULARITY_4KB; - desc->base2 = 0x00; + desc->g = SEG_GRANULARITY_4KB; + desc->base2 = 0x00; desc++; /* __KERNEL_DS */ - desc->limit0 = 0xffff; - desc->base0 = 0x0000; - desc->base1 = 0x0000; - 
desc->type = SEG_TYPE_DATA | SEG_TYPE_READ_WRITE; - desc->s = DESC_TYPE_CODE_DATA; - desc->dpl = 0; - desc->p = 1; - desc->limit1 = 0xf; - desc->avl = 0; - desc->l = 0; - desc->d = SEG_OP_SIZE_32BIT; - desc->g = SEG_GRANULARITY_4KB; - desc->base2 = 0x00; + desc->limit0 = 0xffff; + desc->base0 = 0x0000; + desc->base1 = 0x0000; + desc->type = SEG_TYPE_DATA | SEG_TYPE_READ_WRITE; + desc->s = DESC_TYPE_CODE_DATA; + desc->dpl = 0; + desc->p = 1; + desc->limit1 = 0xf; + desc->avl = 0; + desc->l = 0; + desc->d = SEG_OP_SIZE_32BIT; + desc->g = SEG_GRANULARITY_4KB; + desc->base2 = 0x00; desc++; if (IS_ENABLED(CONFIG_X86_64)) { /* Task segment value */ - desc->limit0 = 0x0000; - desc->base0 = 0x0000; - desc->base1 = 0x0000; - desc->type = SEG_TYPE_TSS; - desc->s = 0; - desc->dpl = 0; - desc->p = 1; - desc->limit1 = 0x0; - desc->avl = 0; - desc->l = 0; - desc->d = 0; - desc->g = SEG_GRANULARITY_4KB; - desc->base2 = 0x00; + desc->limit0 = 0x0000; + desc->base0 = 0x0000; + desc->base1 = 0x0000; + desc->type = SEG_TYPE_TSS; + desc->s = 0; + desc->dpl = 0; + desc->p = 1; + desc->limit1 = 0x0; + desc->avl = 0; + desc->l = 0; + desc->d = 0; + desc->g = SEG_GRANULARITY_4KB; + desc->base2 = 0x00; desc++; } @@ -1088,5 +902,6 @@ struct boot_params *efi_main(struct efi_config *c, return boot_params; fail: efi_printk(sys_table, "efi_main() failed!\n"); + return NULL; } diff --git a/arch/x86/boot/compressed/eboot.h b/arch/x86/boot/compressed/eboot.h index e799dc5c6448..8297387c4676 100644 --- a/arch/x86/boot/compressed/eboot.h +++ b/arch/x86/boot/compressed/eboot.h @@ -12,22 +12,22 @@ #define DESC_TYPE_CODE_DATA (1 << 0) -struct efi_uga_draw_protocol_32 { +typedef struct { u32 get_mode; u32 set_mode; u32 blt; -}; +} efi_uga_draw_protocol_32_t; -struct efi_uga_draw_protocol_64 { +typedef struct { u64 get_mode; u64 set_mode; u64 blt; -}; +} efi_uga_draw_protocol_64_t; -struct efi_uga_draw_protocol { +typedef struct { void *get_mode; void *set_mode; void *blt; -}; +} efi_uga_draw_protocol_t; #endif /* BOOT_COMPRESSED_EBOOT_H */ diff --git a/arch/x86/boot/compressed/kaslr.c b/arch/x86/boot/compressed/kaslr.c index b87a7582853d..302517929932 100644 --- a/arch/x86/boot/compressed/kaslr.c +++ b/arch/x86/boot/compressed/kaslr.c @@ -102,7 +102,7 @@ static bool memmap_too_large; /* Store memory limit specified by "mem=nn[KMG]" or "memmap=nn[KMG]" */ -unsigned long long mem_limit = ULLONG_MAX; +static unsigned long long mem_limit = ULLONG_MAX; enum mem_avoid_index { @@ -215,7 +215,36 @@ static void mem_avoid_memmap(char *str) memmap_too_large = true; } -static int handle_mem_memmap(void) +/* Store the number of 1GB huge pages which users specified: */ +static unsigned long max_gb_huge_pages; + +static void parse_gb_huge_pages(char *param, char *val) +{ + static bool gbpage_sz; + char *p; + + if (!strcmp(param, "hugepagesz")) { + p = val; + if (memparse(p, &p) != PUD_SIZE) { + gbpage_sz = false; + return; + } + + if (gbpage_sz) + warn("Repeatedly set hugeTLB page size of 1G!\n"); + gbpage_sz = true; + return; + } + + if (!strcmp(param, "hugepages") && gbpage_sz) { + p = val; + max_gb_huge_pages = simple_strtoull(p, &p, 0); + return; + } +} + + +static int handle_mem_options(void) { char *args = (char *)get_cmd_line_ptr(); size_t len = strlen((char *)args); @@ -223,7 +252,8 @@ static int handle_mem_memmap(void) char *param, *val; u64 mem_size; - if (!strstr(args, "memmap=") && !strstr(args, "mem=")) + if (!strstr(args, "memmap=") && !strstr(args, "mem=") && + !strstr(args, "hugepages")) return 0; tmp_cmdline = malloc(len + 
1); @@ -248,6 +278,8 @@ static int handle_mem_memmap(void) if (!strcmp(param, "memmap")) { mem_avoid_memmap(val); + } else if (strstr(param, "hugepages")) { + parse_gb_huge_pages(param, val); } else if (!strcmp(param, "mem")) { char *p = val; @@ -387,7 +419,7 @@ static void mem_avoid_init(unsigned long input, unsigned long input_size, /* We don't need to set a mapping for setup_data. */ /* Mark the memmap regions we need to avoid */ - handle_mem_memmap(); + handle_mem_options(); #ifdef CONFIG_X86_VERBOSE_BOOTUP /* Make sure video RAM can be used. */ @@ -466,6 +498,60 @@ static void store_slot_info(struct mem_vector *region, unsigned long image_size) } } +/* + * Skip as many 1GB huge pages as possible in the passed region + * according to the number which users specified: + */ +static void +process_gb_huge_pages(struct mem_vector *region, unsigned long image_size) +{ + unsigned long addr, size = 0; + struct mem_vector tmp; + int i = 0; + + if (!max_gb_huge_pages) { + store_slot_info(region, image_size); + return; + } + + addr = ALIGN(region->start, PUD_SIZE); + /* Did we raise the address above the passed in memory entry? */ + if (addr < region->start + region->size) + size = region->size - (addr - region->start); + + /* Check how many 1GB huge pages can be filtered out: */ + while (size > PUD_SIZE && max_gb_huge_pages) { + size -= PUD_SIZE; + max_gb_huge_pages--; + i++; + } + + /* No good 1GB huge pages found: */ + if (!i) { + store_slot_info(region, image_size); + return; + } + + /* + * Skip those 'i'*1GB good huge pages, and continue checking and + * processing the remaining head or tail part of the passed region + * if available. + */ + + if (addr >= region->start + image_size) { + tmp.start = region->start; + tmp.size = addr - region->start; + store_slot_info(&tmp, image_size); + } + + size = region->size - (addr - region->start) - i * PUD_SIZE; + if (size >= image_size) { + tmp.start = addr + i * PUD_SIZE; + tmp.size = size; + store_slot_info(&tmp, image_size); + } +} + static unsigned long slots_fetch_random(void) { unsigned long slot; @@ -546,7 +632,7 @@ static void process_mem_region(struct mem_vector *entry, /* If nothing overlaps, store the region and return. */ if (!mem_avoid_overlap(®ion, &overlap)) { - store_slot_info(®ion, image_size); + process_gb_huge_pages(®ion, image_size); return; } @@ -556,7 +642,7 @@ static void process_mem_region(struct mem_vector *entry, beginning.start = region.start; beginning.size = overlap.start - region.start; - store_slot_info(&beginning, image_size); + process_gb_huge_pages(&beginning, image_size); } /* Return if overlap extends to or past end of region. */ diff --git a/arch/x86/boot/compressed/pgtable_64.c b/arch/x86/boot/compressed/pgtable_64.c index 8c5107545251..9e2157371491 100644 --- a/arch/x86/boot/compressed/pgtable_64.c +++ b/arch/x86/boot/compressed/pgtable_64.c @@ -1,3 +1,4 @@ +#include #include #include "pgtable.h" #include "../string.h" @@ -34,10 +35,62 @@ unsigned long *trampoline_32bit __section(.data); extern struct boot_params *boot_params; int cmdline_find_option_bool(const char *option); +static unsigned long find_trampoline_placement(void) +{ + unsigned long bios_start, ebda_start; + unsigned long trampoline_start; + struct boot_e820_entry *entry; + int i; + + /* + * Find a suitable spot for the trampoline. + * This code is based on reserve_bios_regions(). 
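+	 * The BIOS data area stores the EBDA segment at 0x40e and the size of
+	 * usable base memory, in KiB, at 0x413; both are read just below.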
+ */ + + ebda_start = *(unsigned short *)0x40e << 4; + bios_start = *(unsigned short *)0x413 << 10; + + if (bios_start < BIOS_START_MIN || bios_start > BIOS_START_MAX) + bios_start = BIOS_START_MAX; + + if (ebda_start > BIOS_START_MIN && ebda_start < bios_start) + bios_start = ebda_start; + + bios_start = round_down(bios_start, PAGE_SIZE); + + /* Find the first usable memory region under bios_start. */ + for (i = boot_params->e820_entries - 1; i >= 0; i--) { + entry = &boot_params->e820_table[i]; + + /* Skip all entries above bios_start. */ + if (bios_start <= entry->addr) + continue; + + /* Skip non-RAM entries. */ + if (entry->type != E820_TYPE_RAM) + continue; + + /* Adjust bios_start to the end of the entry if needed. */ + if (bios_start > entry->addr + entry->size) + bios_start = entry->addr + entry->size; + + /* Keep bios_start page-aligned. */ + bios_start = round_down(bios_start, PAGE_SIZE); + + /* Skip the entry if it's too small. */ + if (bios_start - TRAMPOLINE_32BIT_SIZE < entry->addr) + continue; + + break; + } + + /* Place the trampoline just below the end of low memory */ + return bios_start - TRAMPOLINE_32BIT_SIZE; +} + struct paging_config paging_prepare(void *rmode) { struct paging_config paging_config = {}; - unsigned long bios_start, ebda_start; /* Initialize boot_params. Required for cmdline_find_option_bool(). */ boot_params = rmode; @@ -61,23 +114,7 @@ struct paging_config paging_prepare(void *rmode) paging_config.l5_required = 1; } - /* - * Find a suitable spot for the trampoline. - * This code is based on reserve_bios_regions(). - */ - - ebda_start = *(unsigned short *)0x40e << 4; - bios_start = *(unsigned short *)0x413 << 10; - - if (bios_start < BIOS_START_MIN || bios_start > BIOS_START_MAX) - bios_start = BIOS_START_MAX; - - if (ebda_start > BIOS_START_MIN && ebda_start < bios_start) - bios_start = ebda_start; - - /* Place the trampoline just below the end of low memory, aligned to 4k */ - paging_config.trampoline_start = bios_start - TRAMPOLINE_32BIT_SIZE; - paging_config.trampoline_start = round_down(paging_config.trampoline_start, PAGE_SIZE); + paging_config.trampoline_start = find_trampoline_placement(); trampoline_32bit = (unsigned long *)paging_config.trampoline_start; diff --git a/arch/x86/boot/string.c b/arch/x86/boot/string.c index 16f49123d747..c4428a176973 100644 --- a/arch/x86/boot/string.c +++ b/arch/x86/boot/string.c @@ -13,6 +13,7 @@ */ #include +#include #include "ctype.h" #include "string.h" @@ -28,8 +29,8 @@ int memcmp(const void *s1, const void *s2, size_t len) { bool diff; - asm("repe; cmpsb; setnz %0" - : "=qm" (diff), "+D" (s1), "+S" (s2), "+c" (len)); + asm("repe; cmpsb" CC_SET(nz) + : CC_OUT(nz) (diff), "+D" (s1), "+S" (s2), "+c" (len)); return diff; } diff --git a/arch/x86/crypto/aegis128-aesni-asm.S b/arch/x86/crypto/aegis128-aesni-asm.S index 9254e0b6cc06..5f7e43d4f64a 100644 --- a/arch/x86/crypto/aegis128-aesni-asm.S +++ b/arch/x86/crypto/aegis128-aesni-asm.S @@ -75,7 +75,7 @@ * %r9 */ __load_partial: - xor %r9, %r9 + xor %r9d, %r9d pxor MSG, MSG mov LEN, %r8 @@ -535,6 +535,7 @@ ENTRY(crypto_aegis128_aesni_enc_tail) movdqu STATE3, 0x40(STATEP) FRAME_END + ret ENDPROC(crypto_aegis128_aesni_enc_tail) .macro decrypt_block a s0 s1 s2 s3 s4 i diff --git a/arch/x86/crypto/aegis128-aesni-glue.c b/arch/x86/crypto/aegis128-aesni-glue.c index 5de7c0d46edf..acd11b3bf639 100644 --- a/arch/x86/crypto/aegis128-aesni-glue.c +++ b/arch/x86/crypto/aegis128-aesni-glue.c @@ -375,16 +375,12 @@ static struct aead_alg crypto_aegis128_aesni_alg[] = { } }; 
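/* Deliberately removed below: x86_match_cpu() succeeds as soon as any single
 * table entry matches, so listing AES and XMM2 as separate entries never
 * enforced both features, and nothing checked that the OS enables the SSE
 * register state (OSXSAVE/XFEATURE_MASK_SSE, tested in the new init path). */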
-static const struct x86_cpu_id aesni_cpu_id[] = { - X86_FEATURE_MATCH(X86_FEATURE_AES), - X86_FEATURE_MATCH(X86_FEATURE_XMM2), - {} -}; -MODULE_DEVICE_TABLE(x86cpu, aesni_cpu_id); - static int __init crypto_aegis128_aesni_module_init(void) { - if (!x86_match_cpu(aesni_cpu_id)) + if (!boot_cpu_has(X86_FEATURE_XMM2) || + !boot_cpu_has(X86_FEATURE_AES) || + !boot_cpu_has(X86_FEATURE_OSXSAVE) || + !cpu_has_xfeatures(XFEATURE_MASK_SSE, NULL)) return -ENODEV; return crypto_register_aeads(crypto_aegis128_aesni_alg, diff --git a/arch/x86/crypto/aegis128l-aesni-asm.S b/arch/x86/crypto/aegis128l-aesni-asm.S index 9263c344f2c7..491dd61c845c 100644 --- a/arch/x86/crypto/aegis128l-aesni-asm.S +++ b/arch/x86/crypto/aegis128l-aesni-asm.S @@ -66,7 +66,7 @@ * %r9 */ __load_partial: - xor %r9, %r9 + xor %r9d, %r9d pxor MSG0, MSG0 pxor MSG1, MSG1 @@ -645,6 +645,7 @@ ENTRY(crypto_aegis128l_aesni_enc_tail) state_store0 FRAME_END + ret ENDPROC(crypto_aegis128l_aesni_enc_tail) /* diff --git a/arch/x86/crypto/aegis128l-aesni-glue.c b/arch/x86/crypto/aegis128l-aesni-glue.c index 876e4866e633..2071c3d1ae07 100644 --- a/arch/x86/crypto/aegis128l-aesni-glue.c +++ b/arch/x86/crypto/aegis128l-aesni-glue.c @@ -375,16 +375,12 @@ static struct aead_alg crypto_aegis128l_aesni_alg[] = { } }; -static const struct x86_cpu_id aesni_cpu_id[] = { - X86_FEATURE_MATCH(X86_FEATURE_AES), - X86_FEATURE_MATCH(X86_FEATURE_XMM2), - {} -}; -MODULE_DEVICE_TABLE(x86cpu, aesni_cpu_id); - static int __init crypto_aegis128l_aesni_module_init(void) { - if (!x86_match_cpu(aesni_cpu_id)) + if (!boot_cpu_has(X86_FEATURE_XMM2) || + !boot_cpu_has(X86_FEATURE_AES) || + !boot_cpu_has(X86_FEATURE_OSXSAVE) || + !cpu_has_xfeatures(XFEATURE_MASK_SSE, NULL)) return -ENODEV; return crypto_register_aeads(crypto_aegis128l_aesni_alg, diff --git a/arch/x86/crypto/aegis256-aesni-asm.S b/arch/x86/crypto/aegis256-aesni-asm.S index 1d977d515bf9..8870c7c5d9a4 100644 --- a/arch/x86/crypto/aegis256-aesni-asm.S +++ b/arch/x86/crypto/aegis256-aesni-asm.S @@ -59,7 +59,7 @@ * %r9 */ __load_partial: - xor %r9, %r9 + xor %r9d, %r9d pxor MSG, MSG mov LEN, %r8 @@ -543,6 +543,7 @@ ENTRY(crypto_aegis256_aesni_enc_tail) state_store0 FRAME_END + ret ENDPROC(crypto_aegis256_aesni_enc_tail) /* diff --git a/arch/x86/crypto/aegis256-aesni-glue.c b/arch/x86/crypto/aegis256-aesni-glue.c index 2b5dd3af8f4d..b5f2a8fd5a71 100644 --- a/arch/x86/crypto/aegis256-aesni-glue.c +++ b/arch/x86/crypto/aegis256-aesni-glue.c @@ -375,16 +375,12 @@ static struct aead_alg crypto_aegis256_aesni_alg[] = { } }; -static const struct x86_cpu_id aesni_cpu_id[] = { - X86_FEATURE_MATCH(X86_FEATURE_AES), - X86_FEATURE_MATCH(X86_FEATURE_XMM2), - {} -}; -MODULE_DEVICE_TABLE(x86cpu, aesni_cpu_id); - static int __init crypto_aegis256_aesni_module_init(void) { - if (!x86_match_cpu(aesni_cpu_id)) + if (!boot_cpu_has(X86_FEATURE_XMM2) || + !boot_cpu_has(X86_FEATURE_AES) || + !boot_cpu_has(X86_FEATURE_OSXSAVE) || + !cpu_has_xfeatures(XFEATURE_MASK_SSE, NULL)) return -ENODEV; return crypto_register_aeads(crypto_aegis256_aesni_alg, diff --git a/arch/x86/crypto/aesni-intel_asm.S b/arch/x86/crypto/aesni-intel_asm.S index e762ef417562..9bd139569b41 100644 --- a/arch/x86/crypto/aesni-intel_asm.S +++ b/arch/x86/crypto/aesni-intel_asm.S @@ -258,7 +258,7 @@ ALL_F: .octa 0xffffffffffffffffffffffffffffffff .macro GCM_INIT Iv SUBKEY AAD AADLEN mov \AADLEN, %r11 mov %r11, AadLen(%arg2) # ctx_data.aad_length = aad_length - xor %r11, %r11 + xor %r11d, %r11d mov %r11, InLen(%arg2) # ctx_data.in_length = 0 mov %r11, PBlockLen(%arg2) 
# ctx_data.partial_block_length = 0 mov %r11, PBlockEncKey(%arg2) # ctx_data.partial_block_enc_key = 0 @@ -286,7 +286,7 @@ ALL_F: .octa 0xffffffffffffffffffffffffffffffff movdqu HashKey(%arg2), %xmm13 add %arg5, InLen(%arg2) - xor %r11, %r11 # initialise the data pointer offset as zero + xor %r11d, %r11d # initialise the data pointer offset as zero PARTIAL_BLOCK %arg3 %arg4 %arg5 %r11 %xmm8 \operation sub %r11, %arg5 # sub partial block data used @@ -702,7 +702,7 @@ _no_extra_mask_1_\@: # GHASH computation for the last <16 Byte block GHASH_MUL \AAD_HASH, %xmm13, %xmm0, %xmm10, %xmm11, %xmm5, %xmm6 - xor %rax,%rax + xor %eax, %eax mov %rax, PBlockLen(%arg2) jmp _dec_done_\@ @@ -737,7 +737,7 @@ _no_extra_mask_2_\@: # GHASH computation for the last <16 Byte block GHASH_MUL \AAD_HASH, %xmm13, %xmm0, %xmm10, %xmm11, %xmm5, %xmm6 - xor %rax,%rax + xor %eax, %eax mov %rax, PBlockLen(%arg2) jmp _encode_done_\@ diff --git a/arch/x86/crypto/aesni-intel_avx-x86_64.S b/arch/x86/crypto/aesni-intel_avx-x86_64.S index faecb1518bf8..1985ea0b551b 100644 --- a/arch/x86/crypto/aesni-intel_avx-x86_64.S +++ b/arch/x86/crypto/aesni-intel_avx-x86_64.S @@ -463,7 +463,7 @@ _get_AAD_rest_final\@: _get_AAD_done\@: # initialize the data pointer offset as zero - xor %r11, %r11 + xor %r11d, %r11d # start AES for num_initial_blocks blocks mov arg5, %rax # rax = *Y0 @@ -1770,7 +1770,7 @@ _get_AAD_rest_final\@: _get_AAD_done\@: # initialize the data pointer offset as zero - xor %r11, %r11 + xor %r11d, %r11d # start AES for num_initial_blocks blocks mov arg5, %rax # rax = *Y0 diff --git a/arch/x86/crypto/morus1280-avx2-asm.S b/arch/x86/crypto/morus1280-avx2-asm.S index 37d422e77931..de182c460f82 100644 --- a/arch/x86/crypto/morus1280-avx2-asm.S +++ b/arch/x86/crypto/morus1280-avx2-asm.S @@ -113,7 +113,7 @@ ENDPROC(__morus1280_update_zero) * %r9 */ __load_partial: - xor %r9, %r9 + xor %r9d, %r9d vpxor MSG, MSG, MSG mov %rcx, %r8 @@ -453,6 +453,7 @@ ENTRY(crypto_morus1280_avx2_enc_tail) vmovdqu STATE4, (4 * 32)(%rdi) FRAME_END + ret ENDPROC(crypto_morus1280_avx2_enc_tail) /* diff --git a/arch/x86/crypto/morus1280-avx2-glue.c b/arch/x86/crypto/morus1280-avx2-glue.c index f111f36d26dc..6634907d6ccd 100644 --- a/arch/x86/crypto/morus1280-avx2-glue.c +++ b/arch/x86/crypto/morus1280-avx2-glue.c @@ -37,15 +37,11 @@ asmlinkage void crypto_morus1280_avx2_final(void *state, void *tag_xor, MORUS1280_DECLARE_ALGS(avx2, "morus1280-avx2", 400); -static const struct x86_cpu_id avx2_cpu_id[] = { - X86_FEATURE_MATCH(X86_FEATURE_AVX2), - {} -}; -MODULE_DEVICE_TABLE(x86cpu, avx2_cpu_id); - static int __init crypto_morus1280_avx2_module_init(void) { - if (!x86_match_cpu(avx2_cpu_id)) + if (!boot_cpu_has(X86_FEATURE_AVX2) || + !boot_cpu_has(X86_FEATURE_OSXSAVE) || + !cpu_has_xfeatures(XFEATURE_MASK_SSE | XFEATURE_MASK_YMM, NULL)) return -ENODEV; return crypto_register_aeads(crypto_morus1280_avx2_algs, diff --git a/arch/x86/crypto/morus1280-sse2-asm.S b/arch/x86/crypto/morus1280-sse2-asm.S index 1fe637c7be9d..da5d2905db60 100644 --- a/arch/x86/crypto/morus1280-sse2-asm.S +++ b/arch/x86/crypto/morus1280-sse2-asm.S @@ -235,7 +235,7 @@ ENDPROC(__morus1280_update_zero) * %r9 */ __load_partial: - xor %r9, %r9 + xor %r9d, %r9d pxor MSG_LO, MSG_LO pxor MSG_HI, MSG_HI @@ -652,6 +652,7 @@ ENTRY(crypto_morus1280_sse2_enc_tail) movdqu STATE4_HI, (9 * 16)(%rdi) FRAME_END + ret ENDPROC(crypto_morus1280_sse2_enc_tail) /* diff --git a/arch/x86/crypto/morus1280-sse2-glue.c b/arch/x86/crypto/morus1280-sse2-glue.c index 839270aa713c..95cf857d2cbb 100644 --- 
a/arch/x86/crypto/morus1280-sse2-glue.c +++ b/arch/x86/crypto/morus1280-sse2-glue.c @@ -37,15 +37,11 @@ asmlinkage void crypto_morus1280_sse2_final(void *state, void *tag_xor, MORUS1280_DECLARE_ALGS(sse2, "morus1280-sse2", 350); -static const struct x86_cpu_id sse2_cpu_id[] = { - X86_FEATURE_MATCH(X86_FEATURE_XMM2), - {} -}; -MODULE_DEVICE_TABLE(x86cpu, sse2_cpu_id); - static int __init crypto_morus1280_sse2_module_init(void) { - if (!x86_match_cpu(sse2_cpu_id)) + if (!boot_cpu_has(X86_FEATURE_XMM2) || + !boot_cpu_has(X86_FEATURE_OSXSAVE) || + !cpu_has_xfeatures(XFEATURE_MASK_SSE, NULL)) return -ENODEV; return crypto_register_aeads(crypto_morus1280_sse2_algs, diff --git a/arch/x86/crypto/morus640-sse2-asm.S b/arch/x86/crypto/morus640-sse2-asm.S index 71c72a0a0862..414db480250e 100644 --- a/arch/x86/crypto/morus640-sse2-asm.S +++ b/arch/x86/crypto/morus640-sse2-asm.S @@ -113,7 +113,7 @@ ENDPROC(__morus640_update_zero) * %r9 */ __load_partial: - xor %r9, %r9 + xor %r9d, %r9d pxor MSG, MSG mov %rcx, %r8 @@ -437,6 +437,7 @@ ENTRY(crypto_morus640_sse2_enc_tail) movdqu STATE4, (4 * 16)(%rdi) FRAME_END + ret ENDPROC(crypto_morus640_sse2_enc_tail) /* diff --git a/arch/x86/crypto/morus640-sse2-glue.c b/arch/x86/crypto/morus640-sse2-glue.c index 26b47e2db8d2..615fb7bc9a32 100644 --- a/arch/x86/crypto/morus640-sse2-glue.c +++ b/arch/x86/crypto/morus640-sse2-glue.c @@ -37,15 +37,11 @@ asmlinkage void crypto_morus640_sse2_final(void *state, void *tag_xor, MORUS640_DECLARE_ALGS(sse2, "morus640-sse2", 400); -static const struct x86_cpu_id sse2_cpu_id[] = { - X86_FEATURE_MATCH(X86_FEATURE_XMM2), - {} -}; -MODULE_DEVICE_TABLE(x86cpu, sse2_cpu_id); - static int __init crypto_morus640_sse2_module_init(void) { - if (!x86_match_cpu(sse2_cpu_id)) + if (!boot_cpu_has(X86_FEATURE_XMM2) || + !boot_cpu_has(X86_FEATURE_OSXSAVE) || + !cpu_has_xfeatures(XFEATURE_MASK_SSE, NULL)) return -ENODEV; return crypto_register_aeads(crypto_morus640_sse2_algs, diff --git a/arch/x86/crypto/sha1_ssse3_asm.S b/arch/x86/crypto/sha1_ssse3_asm.S index 6204bd53528c..613d0bfc3d84 100644 --- a/arch/x86/crypto/sha1_ssse3_asm.S +++ b/arch/x86/crypto/sha1_ssse3_asm.S @@ -96,7 +96,7 @@ # cleanup workspace mov $8, %ecx mov %rsp, %rdi - xor %rax, %rax + xor %eax, %eax rep stosq mov %rbp, %rsp # deallocate workspace diff --git a/arch/x86/entry/common.c b/arch/x86/entry/common.c index 92190879b228..3b2490b81918 100644 --- a/arch/x86/entry/common.c +++ b/arch/x86/entry/common.c @@ -164,7 +164,7 @@ static void exit_to_usermode_loop(struct pt_regs *regs, u32 cached_flags) if (cached_flags & _TIF_NOTIFY_RESUME) { clear_thread_flag(TIF_NOTIFY_RESUME); tracehook_notify_resume(regs); - rseq_handle_notify_resume(regs); + rseq_handle_notify_resume(NULL, regs); } if (cached_flags & _TIF_USER_RETURN_NOTIFY) diff --git a/arch/x86/entry/entry_32.S b/arch/x86/entry/entry_32.S index 2582881d19ce..2767c625a52c 100644 --- a/arch/x86/entry/entry_32.S +++ b/arch/x86/entry/entry_32.S @@ -65,7 +65,7 @@ # define preempt_stop(clobbers) DISABLE_INTERRUPTS(clobbers); TRACE_IRQS_OFF #else # define preempt_stop(clobbers) -# define resume_kernel restore_all +# define resume_kernel restore_all_kernel #endif .macro TRACE_IRQS_IRET @@ -77,6 +77,8 @@ #endif .endm +#define PTI_SWITCH_MASK (1 << PAGE_SHIFT) + /* * User gs save/restore * @@ -154,7 +156,52 @@ #endif /* CONFIG_X86_32_LAZY_GS */ -.macro SAVE_ALL pt_regs_ax=%eax +/* Unconditionally switch to user cr3 */ +.macro SWITCH_TO_USER_CR3 scratch_reg:req + ALTERNATIVE "jmp .Lend_\@", "", X86_FEATURE_PTI + + movl %cr3, 
+	orl	$PTI_SWITCH_MASK, \scratch_reg
+	movl	\scratch_reg, %cr3
+.Lend_\@:
+.endm
+
+.macro BUG_IF_WRONG_CR3 no_user_check=0
+#ifdef CONFIG_DEBUG_ENTRY
+	ALTERNATIVE "jmp .Lend_\@", "", X86_FEATURE_PTI
+	.if \no_user_check == 0
+	/* coming from usermode? */
+	testl	$SEGMENT_RPL_MASK, PT_CS(%esp)
+	jz	.Lend_\@
+	.endif
+	/* On user-cr3? */
+	movl	%cr3, %eax
+	testl	$PTI_SWITCH_MASK, %eax
+	jnz	.Lend_\@
+	/* From userspace with kernel cr3 - BUG */
+	ud2
+.Lend_\@:
+#endif
+.endm
+
+/*
+ * Switch to kernel cr3 if not already loaded and return current cr3 in
+ * \scratch_reg
+ */
+.macro SWITCH_TO_KERNEL_CR3 scratch_reg:req
+	ALTERNATIVE "jmp .Lend_\@", "", X86_FEATURE_PTI
+	movl	%cr3, \scratch_reg
+	/* Test if we are already on kernel CR3 */
+	testl	$PTI_SWITCH_MASK, \scratch_reg
+	jz	.Lend_\@
+	andl	$(~PTI_SWITCH_MASK), \scratch_reg
+	movl	\scratch_reg, %cr3
+	/* Return original CR3 in \scratch_reg */
+	orl	$PTI_SWITCH_MASK, \scratch_reg
+.Lend_\@:
+.endm
+
+.macro SAVE_ALL pt_regs_ax=%eax switch_stacks=0
 	cld
 	PUSH_GS
 	pushl	%fs
@@ -173,6 +220,29 @@
 	movl	$(__KERNEL_PERCPU), %edx
 	movl	%edx, %fs
 	SET_KERNEL_GS %edx
+
+	/* Switch to kernel stack if necessary */
+.if \switch_stacks > 0
+	SWITCH_TO_KERNEL_STACK
+.endif
+
+.endm
+
+.macro SAVE_ALL_NMI cr3_reg:req
+	SAVE_ALL
+
+	BUG_IF_WRONG_CR3
+
+	/*
+	 * Now switch the CR3 when PTI is enabled.
+	 *
+	 * We can enter with either user or kernel cr3, the code will
+	 * store the old cr3 in \cr3_reg and switches to the kernel cr3
+	 * if necessary.
+	 */
+	SWITCH_TO_KERNEL_CR3 scratch_reg=\cr3_reg
+
+.Lend_\@:
+.endm
 /*
@@ -221,6 +291,349 @@
 	POP_GS_EX
 .endm
+.macro RESTORE_ALL_NMI cr3_reg:req pop=0
+	/*
+	 * Now switch the CR3 when PTI is enabled.
+	 *
+	 * We enter with kernel cr3 and switch the cr3 to the value
+	 * stored on \cr3_reg, which is either a user or a kernel cr3.
+	 */
+	ALTERNATIVE "jmp .Lswitched_\@", "", X86_FEATURE_PTI
+
+	testl	$PTI_SWITCH_MASK, \cr3_reg
+	jz	.Lswitched_\@
+
+	/* User cr3 in \cr3_reg - write it to hardware cr3 */
+	movl	\cr3_reg, %cr3
+
+.Lswitched_\@:
+
+	BUG_IF_WRONG_CR3
+
+	RESTORE_REGS pop=\pop
+.endm
+
+.macro CHECK_AND_APPLY_ESPFIX
+#ifdef CONFIG_X86_ESPFIX32
+#define GDT_ESPFIX_SS PER_CPU_VAR(gdt_page) + (GDT_ENTRY_ESPFIX_SS * 8)
+
+	ALTERNATIVE	"jmp .Lend_\@", "", X86_BUG_ESPFIX
+
+	movl	PT_EFLAGS(%esp), %eax		# mix EFLAGS, SS and CS
+	/*
+	 * Warning: PT_OLDSS(%esp) contains the wrong/random values if we
+	 * are returning to the kernel.
+	 * See comments in process.c:copy_thread() for details.
+	 */
+	movb	PT_OLDSS(%esp), %ah
+	movb	PT_CS(%esp), %al
+	andl	$(X86_EFLAGS_VM | (SEGMENT_TI_MASK << 8) | SEGMENT_RPL_MASK), %eax
+	cmpl	$((SEGMENT_LDT << 8) | USER_RPL), %eax
+	jne	.Lend_\@	# returning to user-space with LDT SS
+
+	/*
+	 * Setup and switch to ESPFIX stack
+	 *
+	 * We're returning to userspace with a 16 bit stack. The CPU will not
+	 * restore the high word of ESP for us on executing iret... This is an
+	 * "official" bug of all the x86-compatible CPUs, which we can work
+	 * around to make dosemu and wine happy. We do this by preloading the
+	 * high word of ESP with the high word of the userspace ESP while
+	 * compensating for the offset by changing to the ESPFIX segment with
+	 * a base address that matches for the difference.
+	 */
+	mov	%esp, %edx			/* load kernel esp */
+	mov	PT_OLDESP(%esp), %eax		/* load userspace esp */
+	mov	%dx, %ax			/* eax: new kernel esp */
+	sub	%eax, %edx			/* offset (low word is 0) */
+	shr	$16, %edx
+	mov	%dl, GDT_ESPFIX_SS + 4		/* bits 16..23 */
+	mov	%dh, GDT_ESPFIX_SS + 7		/* bits 24..31 */
+	pushl	$__ESPFIX_SS
+	pushl	%eax				/* new kernel esp */
+	/*
+	 * Disable interrupts, but do not irqtrace this section: we
+	 * will soon execute iret and the tracer was already set to
+	 * the irqstate after the IRET:
+	 */
+	DISABLE_INTERRUPTS(CLBR_ANY)
+	lss	(%esp), %esp			/* switch to espfix segment */
+.Lend_\@:
+#endif /* CONFIG_X86_ESPFIX32 */
+.endm
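The arithmetic in CHECK_AND_APPLY_ESPFIX is easier to follow in C. A minimal sketch of what the instructions after the long comment compute (names are illustrative, not kernel identifiers; the subtraction is modulo 2^32, which is exactly what segment-base arithmetic needs):

	/* Sketch: derive the ESPFIX segment base from kernel and user ESP. */
	static unsigned int espfix_segment_base(unsigned int kesp, unsigned int uesp)
	{
		/* new ESP: high word from user space, low word from the kernel stack */
		unsigned int new_esp = (uesp & 0xffff0000u) | (kesp & 0x0000ffffu);

		/* base + new_esp == kesp; the low word of base is 0 by construction */
		return kesp - new_esp;
	}

Bits 16..31 of the returned base are what the two mov instructions patch into the ESPFIX GDT entry.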
+/*
+ * Called with pt_regs fully populated and kernel segments loaded,
+ * so we can access PER_CPU and use the integer registers.
+ *
+ * We need to be very careful here with the %esp switch, because an NMI
+ * can happen everywhere. If the NMI handler finds itself on the
+ * entry-stack, it will overwrite the task-stack and everything we
+ * copied there. So allocate the stack-frame on the task-stack and
+ * switch to it before we do any copying.
+ */
+
+#define CS_FROM_ENTRY_STACK	(1 << 31)
+#define CS_FROM_USER_CR3	(1 << 30)
+
+.macro SWITCH_TO_KERNEL_STACK
+
+	ALTERNATIVE     "", "jmp .Lend_\@", X86_FEATURE_XENPV
+
+	BUG_IF_WRONG_CR3
+
+	SWITCH_TO_KERNEL_CR3 scratch_reg=%eax
+
+	/*
+	 * %eax now contains the entry cr3 and we carry it forward in
+	 * that register for the time this macro runs
+	 */
+
+	/* Are we on the entry stack? Bail out if not! */
+	movl	PER_CPU_VAR(cpu_entry_area), %ecx
+	addl	$CPU_ENTRY_AREA_entry_stack + SIZEOF_entry_stack, %ecx
+	subl	%esp, %ecx	/* ecx = (end of entry_stack) - esp */
+	cmpl	$SIZEOF_entry_stack, %ecx
+	jae	.Lend_\@
+
+	/* Load stack pointer into %esi and %edi */
+	movl	%esp, %esi
+	movl	%esi, %edi
+
+	/* Move %edi to the top of the entry stack */
+	andl	$(MASK_entry_stack), %edi
+	addl	$(SIZEOF_entry_stack), %edi
+
+	/* Load top of task-stack into %edi */
+	movl	TSS_entry2task_stack(%edi), %edi
+
+	/*
+	 * Clear unused upper bits of the dword containing the word-sized CS
+	 * slot in pt_regs in case hardware didn't clear it for us.
+	 */
+	andl	$(0x0000ffff), PT_CS(%esp)
+
+	/* Special case - entry from kernel mode via entry stack */
+#ifdef CONFIG_VM86
+	movl	PT_EFLAGS(%esp), %ecx		# mix EFLAGS and CS
+	movb	PT_CS(%esp), %cl
+	andl	$(X86_EFLAGS_VM | SEGMENT_RPL_MASK), %ecx
+#else
+	movl	PT_CS(%esp), %ecx
+	andl	$SEGMENT_RPL_MASK, %ecx
+#endif
+	cmpl	$USER_RPL, %ecx
+	jb	.Lentry_from_kernel_\@
+
+	/* Bytes to copy */
+	movl	$PTREGS_SIZE, %ecx
+
+#ifdef CONFIG_VM86
+	testl	$X86_EFLAGS_VM, PT_EFLAGS(%esi)
+	jz	.Lcopy_pt_regs_\@
+
+	/*
+	 * Stack-frame contains 4 additional segment registers when
+	 * coming from VM86 mode
+	 */
+	addl	$(4 * 4), %ecx
+
+#endif
+.Lcopy_pt_regs_\@:
+
+	/* Allocate frame on task-stack */
+	subl	%ecx, %edi
+
+	/* Switch to task-stack */
+	movl	%edi, %esp
+
+	/*
+	 * We are now on the task-stack and can safely copy over the
+	 * stack-frame
+	 */
+	shrl	$2, %ecx
+	cld
+	rep movsl
+
+	jmp .Lend_\@
+
+.Lentry_from_kernel_\@:
+
+	/*
+	 * This handles the case when we enter the kernel from
+	 * kernel-mode and %esp points to the entry-stack. When this
+	 * happens we need to switch to the task-stack to run C code,
+	 * but switch back to the entry-stack again when we approach
+	 * iret and return to the interrupted code-path. This usually
This usually + * happens when we hit an exception while restoring user-space + * segment registers on the way back to user-space or when the + * sysenter handler runs with eflags.tf set. + * + * When we switch to the task-stack here, we can't trust the + * contents of the entry-stack anymore, as the exception handler + * might be scheduled out or moved to another CPU. Therefore we + * copy the complete entry-stack to the task-stack and set a + * marker in the iret-frame (bit 31 of the CS dword) to detect + * what we've done on the iret path. + * + * On the iret path we copy everything back and switch to the + * entry-stack, so that the interrupted kernel code-path + * continues on the same stack it was interrupted with. + * + * Be aware that an NMI can happen anytime in this code. + * + * %esi: Entry-Stack pointer (same as %esp) + * %edi: Top of the task stack + * %eax: CR3 on kernel entry + */ + + /* Calculate number of bytes on the entry stack in %ecx */ + movl %esi, %ecx + + /* %ecx to the top of entry-stack */ + andl $(MASK_entry_stack), %ecx + addl $(SIZEOF_entry_stack), %ecx + + /* Number of bytes on the entry stack to %ecx */ + sub %esi, %ecx + + /* Mark stackframe as coming from entry stack */ + orl $CS_FROM_ENTRY_STACK, PT_CS(%esp) + + /* + * Test the cr3 used to enter the kernel and add a marker + * so that we can switch back to it before iret. + */ + testl $PTI_SWITCH_MASK, %eax + jz .Lcopy_pt_regs_\@ + orl $CS_FROM_USER_CR3, PT_CS(%esp) + + /* + * %esi and %edi are unchanged, %ecx contains the number of + * bytes to copy. The code at .Lcopy_pt_regs_\@ will allocate + * the stack-frame on task-stack and copy everything over + */ + jmp .Lcopy_pt_regs_\@ + +.Lend_\@: +.endm + +/* + * Switch back from the kernel stack to the entry stack. + * + * The %esp register must point to pt_regs on the task stack. It will + * first calculate the size of the stack-frame to copy, depending on + * whether we return to VM86 mode or not. With that it uses 'rep movsl' + * to copy the contents of the stack over to the entry stack. + * + * We must be very careful here, as we can't trust the contents of the + * task-stack once we switched to the entry-stack. When an NMI happens + * while on the entry-stack, the NMI handler will switch back to the top + * of the task stack, overwriting our stack-frame we are about to copy. + * Therefore we switch the stack only after everything is copied over. + */ +.macro SWITCH_TO_ENTRY_STACK + + ALTERNATIVE "", "jmp .Lend_\@", X86_FEATURE_XENPV + + /* Bytes to copy */ + movl $PTREGS_SIZE, %ecx + +#ifdef CONFIG_VM86 + testl $(X86_EFLAGS_VM), PT_EFLAGS(%esp) + jz .Lcopy_pt_regs_\@ + + /* Additional 4 registers to copy when returning to VM86 mode */ + addl $(4 * 4), %ecx + +.Lcopy_pt_regs_\@: +#endif + + /* Initialize source and destination for movsl */ + movl PER_CPU_VAR(cpu_tss_rw + TSS_sp0), %edi + subl %ecx, %edi + movl %esp, %esi + + /* Save future stack pointer in %ebx */ + movl %edi, %ebx + + /* Copy over the stack-frame */ + shrl $2, %ecx + cld + rep movsl + + /* + * Switch to entry-stack - needs to happen after everything is + * copied because the NMI handler will overwrite the task-stack + * when on entry-stack + */ + movl %ebx, %esp + +.Lend_\@: +.endm + +/* + * This macro handles the case when we return to kernel-mode on the iret + * path and have to switch back to the entry stack and/or user-cr3 + * + * See the comments below the .Lentry_from_kernel_\@ label in the + * SWITCH_TO_KERNEL_STACK macro for more details. 
+/*
+ * This macro handles the case when we return to kernel-mode on the iret
+ * path and have to switch back to the entry stack and/or user-cr3
+ *
+ * See the comments below the .Lentry_from_kernel_\@ label in the
+ * SWITCH_TO_KERNEL_STACK macro for more details.
+ */
+.macro PARANOID_EXIT_TO_KERNEL_MODE
+
+	/*
+	 * Test if we entered the kernel with the entry-stack. Most
+	 * likely we did not, because this code only runs on the
+	 * return-to-kernel path.
+	 */
+	testl	$CS_FROM_ENTRY_STACK, PT_CS(%esp)
+	jz	.Lend_\@
+
+	/* Unlikely slow-path */
+
+	/* Clear marker from stack-frame */
+	andl	$(~CS_FROM_ENTRY_STACK), PT_CS(%esp)
+
+	/* Copy the remaining task-stack contents to entry-stack */
+	movl	%esp, %esi
+	movl	PER_CPU_VAR(cpu_tss_rw + TSS_sp0), %edi
+
+	/* Bytes on the task-stack to ecx */
+	movl	PER_CPU_VAR(cpu_tss_rw + TSS_sp1), %ecx
+	subl	%esi, %ecx
+
+	/* Allocate stack-frame on entry-stack */
+	subl	%ecx, %edi
+
+	/*
+	 * Save future stack-pointer, we must not switch until the
+	 * copy is done, otherwise the NMI handler could destroy the
+	 * contents of the task-stack we are about to copy.
+	 */
+	movl	%edi, %ebx
+
+	/* Do the copy */
+	shrl	$2, %ecx
+	cld
+	rep movsl
+
+	/* Safe to switch to entry-stack now */
+	movl	%ebx, %esp
+
+	/*
+	 * We came from entry-stack and need to check if we also need to
+	 * switch back to user cr3.
+	 */
+	testl	$CS_FROM_USER_CR3, PT_CS(%esp)
+	jz	.Lend_\@
+
+	/* Clear marker from stack-frame */
+	andl	$(~CS_FROM_USER_CR3), PT_CS(%esp)
+
+	SWITCH_TO_USER_CR3 scratch_reg=%eax
+
+.Lend_\@:
+.endm
 /*
  * %eax: prev task
  * %edx: next task
@@ -351,9 +764,9 @@ ENTRY(resume_kernel)
 	DISABLE_INTERRUPTS(CLBR_ANY)
 .Lneed_resched:
 	cmpl	$0, PER_CPU_VAR(__preempt_count)
-	jnz	restore_all
+	jnz	restore_all_kernel
 	testl	$X86_EFLAGS_IF, PT_EFLAGS(%esp)	# interrupts off (exception path) ?
-	jz	restore_all
+	jz	restore_all_kernel
 	call	preempt_schedule_irq
 	jmp	.Lneed_resched
 END(resume_kernel)
@@ -412,7 +825,21 @@ ENTRY(xen_sysenter_target)
  *	 0(%ebp) arg6
  */
 ENTRY(entry_SYSENTER_32)
-	movl	TSS_sysenter_sp0(%esp), %esp
+	/*
+	 * On entry-stack with all userspace-regs live - save and
+	 * restore eflags and %eax to use it as scratch-reg for the cr3
+	 * switch.
+	 */
+	pushfl
+	pushl	%eax
+	BUG_IF_WRONG_CR3 no_user_check=1
+	SWITCH_TO_KERNEL_CR3 scratch_reg=%eax
+	popl	%eax
+	popfl
+
+	/* Stack empty again, switch to task stack */
+	movl	TSS_entry2task_stack(%esp), %esp
+
 .Lsysenter_past_esp:
 	pushl	$__USER_DS		/* pt_regs->ss */
 	pushl	%ebp			/* pt_regs->sp (stashed in bp) */
@@ -421,7 +848,7 @@ ENTRY(entry_SYSENTER_32)
 	pushl	$__USER_CS		/* pt_regs->cs */
 	pushl	$0			/* pt_regs->ip = 0 (placeholder) */
 	pushl	%eax			/* pt_regs->orig_ax */
-	SAVE_ALL pt_regs_ax=$-ENOSYS	/* save rest */
+	SAVE_ALL pt_regs_ax=$-ENOSYS	/* save rest, stack already switched */
 
 	/*
 	 * SYSENTER doesn't filter flags, so we need to clear NT, AC
@@ -460,25 +887,49 @@ ENTRY(entry_SYSENTER_32)
 
 	/* Opportunistic SYSEXIT */
 	TRACE_IRQS_ON			/* User mode traces as IRQs on. */
+
+	/*
+	 * Setup entry stack - we keep the pointer in %eax and do the
+	 * switch after almost all user-state is restored.
+	 */
+
+	/* Load entry stack pointer and allocate frame for eflags/eax */
+	movl	PER_CPU_VAR(cpu_tss_rw + TSS_sp0), %eax
+	subl	$(2*4), %eax
+
+	/* Copy eflags and eax to entry stack */
+	movl	PT_EFLAGS(%esp), %edi
+	movl	PT_EAX(%esp), %esi
+	movl	%edi, (%eax)
+	movl	%esi, 4(%eax)
+
+	/* Restore user registers and segments */
 	movl	PT_EIP(%esp), %edx	/* pt_regs->ip */
 	movl	PT_OLDESP(%esp), %ecx	/* pt_regs->sp */
 1:	mov	PT_FS(%esp), %fs
 	PTGS_TO_GS
+
 	popl	%ebx			/* pt_regs->bx */
 	addl	$2*4, %esp		/* skip pt_regs->cx and pt_regs->dx */
 	popl	%esi			/* pt_regs->si */
 	popl	%edi			/* pt_regs->di */
 	popl	%ebp			/* pt_regs->bp */
-	popl	%eax			/* pt_regs->ax */
+
+	/* Switch to entry stack */
+	movl	%eax, %esp
+
+	/* Now ready to switch the cr3 */
+	SWITCH_TO_USER_CR3 scratch_reg=%eax
 
 	/*
 	 * Restore all flags except IF. (We restore IF separately because
 	 * STI gives a one-instruction window in which we won't be interrupted,
 	 * whereas POPF does not.)
 	 */
-	addl	$PT_EFLAGS-PT_DS, %esp	/* point esp at pt_regs->flags */
-	btr	$X86_EFLAGS_IF_BIT, (%esp)
+	btrl	$X86_EFLAGS_IF_BIT, (%esp)
+	BUG_IF_WRONG_CR3 no_user_check=1
 	popfl
+	popl	%eax
 
 	/*
 	 * Return back to the vDSO, which will pop ecx and edx.
@@ -532,7 +983,8 @@ ENDPROC(entry_SYSENTER_32)
 ENTRY(entry_INT80_32)
 	ASM_CLAC
 	pushl	%eax			/* pt_regs->orig_ax */
-	SAVE_ALL pt_regs_ax=$-ENOSYS	/* save rest */
+
+	SAVE_ALL pt_regs_ax=$-ENOSYS switch_stacks=1	/* save rest */
 
 	/*
 	 * User mode is traced as though IRQs are on, and the interrupt gate
@@ -546,24 +998,17 @@ ENTRY(entry_INT80_32)
 
 restore_all:
 	TRACE_IRQS_IRET
+	SWITCH_TO_ENTRY_STACK
 .Lrestore_all_notrace:
-#ifdef CONFIG_X86_ESPFIX32
-	ALTERNATIVE	"jmp .Lrestore_nocheck", "", X86_BUG_ESPFIX
-
-	movl	PT_EFLAGS(%esp), %eax		# mix EFLAGS, SS and CS
-	/*
-	 * Warning: PT_OLDSS(%esp) contains the wrong/random values if we
-	 * are returning to the kernel.
-	 * See comments in process.c:copy_thread() for details.
-	 */
-	movb	PT_OLDSS(%esp), %ah
-	movb	PT_CS(%esp), %al
-	andl	$(X86_EFLAGS_VM | (SEGMENT_TI_MASK << 8) | SEGMENT_RPL_MASK), %eax
-	cmpl	$((SEGMENT_LDT << 8) | USER_RPL), %eax
-	je .Lldt_ss				# returning to user-space with LDT SS
-#endif
+	CHECK_AND_APPLY_ESPFIX
 .Lrestore_nocheck:
-	RESTORE_REGS 4			# skip orig_eax/error_code
+	/* Switch back to user CR3 */
+	SWITCH_TO_USER_CR3 scratch_reg=%eax
+
+	BUG_IF_WRONG_CR3
+
+	/* Restore user state */
+	RESTORE_REGS pop=4		# skip orig_eax/error_code
 .Lirq_return:
 	/*
 	 * ARCH_HAS_MEMBARRIER_SYNC_CORE rely on IRET core serialization
@@ -572,46 +1017,33 @@ restore_all:
 	 */
 	INTERRUPT_RETURN
 
+restore_all_kernel:
+	TRACE_IRQS_IRET
+	PARANOID_EXIT_TO_KERNEL_MODE
+	BUG_IF_WRONG_CR3
+	RESTORE_REGS 4
+	jmp	.Lirq_return
+
 .section .fixup, "ax"
 ENTRY(iret_exc	)
 	pushl	$0			# no error code
 	pushl	$do_iret_error
-	jmp	common_exception
-.previous
-	_ASM_EXTABLE(.Lirq_return, iret_exc)
-#ifdef CONFIG_X86_ESPFIX32
-.Lldt_ss:
-/*
- * Setup and switch to ESPFIX stack
- *
- * We're returning to userspace with a 16 bit stack. The CPU will not
- * restore the high word of ESP for us on executing iret... This is an
- * "official" bug of all the x86-compatible CPUs, which we can work
- * around to make dosemu and wine happy. We do this by preloading the
- * high word of ESP with the high word of the userspace ESP while
- * compensating for the offset by changing to the ESPFIX segment with
- * a base address that matches for the difference.
- */
-#define GDT_ESPFIX_SS PER_CPU_VAR(gdt_page) + (GDT_ENTRY_ESPFIX_SS * 8)
-	mov	%esp, %edx			/* load kernel esp */
-	mov	PT_OLDESP(%esp), %eax		/* load userspace esp */
-	mov	%dx, %ax			/* eax: new kernel esp */
-	sub	%eax, %edx			/* offset (low word is 0) */
-	shr	$16, %edx
-	mov	%dl, GDT_ESPFIX_SS + 4		/* bits 16..23 */
-	mov	%dh, GDT_ESPFIX_SS + 7		/* bits 24..31 */
-	pushl	$__ESPFIX_SS
-	pushl	%eax				/* new kernel esp */
+#ifdef CONFIG_DEBUG_ENTRY
 	/*
-	 * Disable interrupts, but do not irqtrace this section: we
-	 * will soon execute iret and the tracer was already set to
-	 * the irqstate after the IRET:
+	 * The stack-frame here is the one that iret faulted on, so it's a
+	 * return-to-user frame. We are on kernel-cr3 because we come here from
+	 * the fixup code. This confuses the CR3 checker, so switch to user-cr3
+	 * as the checker expects it.
 	 */
-	DISABLE_INTERRUPTS(CLBR_ANY)
-	lss	(%esp), %esp			/* switch to espfix segment */
-	jmp	.Lrestore_nocheck
+	pushl	%eax
+	SWITCH_TO_USER_CR3 scratch_reg=%eax
+	popl	%eax
 #endif
+
+	jmp	common_exception
 .previous
 	_ASM_EXTABLE(.Lirq_return, iret_exc)
 ENDPROC(entry_INT80_32)
 
 .macro FIXUP_ESPFIX_STACK
@@ -671,7 +1103,8 @@ END(irq_entries_start)
 common_interrupt:
 	ASM_CLAC
 	addl	$-0x80, (%esp)			/* Adjust vector into the [-256, -1] range */
-	SAVE_ALL
+
+	SAVE_ALL switch_stacks=1
 	ENCODE_FRAME_POINTER
 	TRACE_IRQS_OFF
 	movl	%esp, %eax
@@ -679,16 +1112,16 @@ common_interrupt:
 	jmp	ret_from_intr
 ENDPROC(common_interrupt)
 
-#define BUILD_INTERRUPT3(name, nr, fn)	\
-ENTRY(name)				\
-	ASM_CLAC;			\
-	pushl	$~(nr);			\
-	SAVE_ALL;			\
-	ENCODE_FRAME_POINTER;		\
-	TRACE_IRQS_OFF			\
-	movl	%esp, %eax;		\
-	call	fn;			\
-	jmp	ret_from_intr;		\
+#define BUILD_INTERRUPT3(name, nr, fn)			\
+ENTRY(name)						\
+	ASM_CLAC;					\
+	pushl	$~(nr);					\
+	SAVE_ALL switch_stacks=1;			\
+	ENCODE_FRAME_POINTER;				\
+	TRACE_IRQS_OFF					\
+	movl	%esp, %eax;				\
+	call	fn;					\
+	jmp	ret_from_intr;				\
 ENDPROC(name)
 
 #define BUILD_INTERRUPT(name, nr)		\
@@ -920,16 +1353,20 @@ common_exception:
 	pushl	%es
 	pushl	%ds
 	pushl	%eax
+	movl	$(__USER_DS), %eax
+	movl	%eax, %ds
+	movl	%eax, %es
+	movl	$(__KERNEL_PERCPU), %eax
+	movl	%eax, %fs
 	pushl	%ebp
 	pushl	%edi
 	pushl	%esi
 	pushl	%edx
 	pushl	%ecx
 	pushl	%ebx
+	SWITCH_TO_KERNEL_STACK
 	ENCODE_FRAME_POINTER
 	cld
-	movl	$(__KERNEL_PERCPU), %ecx
-	movl	%ecx, %fs
 	UNWIND_ESPFIX_STACK
 	GS_TO_REG %ecx
 	movl	PT_GS(%esp), %edi		# get the function address
@@ -937,9 +1374,6 @@ common_exception:
 	movl	$-1, PT_ORIG_EAX(%esp)		# no syscall to restart
 	REG_TO_PTGS %ecx
 	SET_KERNEL_GS %ecx
-	movl	$(__USER_DS), %ecx
-	movl	%ecx, %ds
-	movl	%ecx, %es
 	TRACE_IRQS_OFF
 	movl	%esp, %eax			# pt_regs pointer
 	CALL_NOSPEC %edi
@@ -948,40 +1382,12 @@ END(common_exception)
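The "are we on the entry stack?" test used by SWITCH_TO_KERNEL_STACK above, and by the #DB and NMI paths below, reduces to one unsigned comparison. A C restatement (sketch only; names are illustrative, not kernel identifiers):

	/* Sketch: sp is inside the stack iff (top - sp) < size, unsigned. */
	static int on_entry_stack(unsigned long sp, unsigned long stack_top,
				  unsigned long stack_size)
	{
		/* for sp above the top the subtraction wraps and the test fails */
		return (stack_top - sp) < stack_size;
	}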
ENTRY(debug)
 	/*
-	 * #DB can happen at the first instruction of
-	 * entry_SYSENTER_32 or in Xen's SYSENTER prologue.  If this
-	 * happens, then we will be running on a very small stack.  We
-	 * need to detect this condition and switch to the thread
-	 * stack before calling any C code at all.
-	 *
-	 * If you edit this code, keep in mind that NMIs can happen in here.
+	 * Entry from sysenter is now handled in common_exception
 	 */
 	ASM_CLAC
 	pushl	$-1				# mark this as an int
-	SAVE_ALL
-	ENCODE_FRAME_POINTER
-	xorl	%edx, %edx			# error code 0
-	movl	%esp, %eax			# pt_regs pointer
-
-	/* Are we currently on the SYSENTER stack? */
-	movl	PER_CPU_VAR(cpu_entry_area), %ecx
-	addl	$CPU_ENTRY_AREA_entry_stack + SIZEOF_entry_stack, %ecx
-	subl	%eax, %ecx	/* ecx = (end of entry_stack) - esp */
-	cmpl	$SIZEOF_entry_stack, %ecx
-	jb	.Ldebug_from_sysenter_stack
-
-	TRACE_IRQS_OFF
-	call	do_debug
-	jmp	ret_from_exception
-
-.Ldebug_from_sysenter_stack:
-	/* We're on the SYSENTER stack.  Switch off. */
-	movl	%esp, %ebx
-	movl	PER_CPU_VAR(cpu_current_top_of_stack), %esp
-	TRACE_IRQS_OFF
-	call	do_debug
-	movl	%ebx, %esp
-	jmp	ret_from_exception
+	pushl	$do_debug
+	jmp	common_exception
 END(debug)
 
 /*
@@ -993,6 +1399,7 @@ END(debug)
  */
 ENTRY(nmi)
 	ASM_CLAC
+
 #ifdef CONFIG_X86_ESPFIX32
 	pushl	%eax
 	movl	%ss, %eax
@@ -1002,7 +1409,7 @@ ENTRY(nmi)
 #endif
 
 	pushl	%eax				# pt_regs->orig_ax
-	SAVE_ALL
+	SAVE_ALL_NMI cr3_reg=%edi
 	ENCODE_FRAME_POINTER
 	xorl	%edx, %edx			# zero error code
 	movl	%esp, %eax			# pt_regs pointer
@@ -1016,7 +1423,7 @@ ENTRY(nmi)
 
 	/* Not on SYSENTER stack. */
 	call	do_nmi
-	jmp	.Lrestore_all_notrace
+	jmp	.Lnmi_return
 
 .Lnmi_from_sysenter_stack:
 	/*
@@ -1027,7 +1434,11 @@ ENTRY(nmi)
 	movl	PER_CPU_VAR(cpu_current_top_of_stack), %esp
 	call	do_nmi
 	movl	%ebx, %esp
-	jmp	.Lrestore_all_notrace
+
+.Lnmi_return:
+	CHECK_AND_APPLY_ESPFIX
+	RESTORE_ALL_NMI cr3_reg=%edi pop=4
+	jmp	.Lirq_return
 
 #ifdef CONFIG_X86_ESPFIX32
 .Lnmi_espfix_stack:
@@ -1042,12 +1453,12 @@ ENTRY(nmi)
 	pushl	16(%esp)
 	.endr
 	pushl	%eax
-	SAVE_ALL
+	SAVE_ALL_NMI cr3_reg=%edi
 	ENCODE_FRAME_POINTER
 	FIXUP_ESPFIX_STACK			# %eax == %esp
 	xorl	%edx, %edx			# zero error code
 	call	do_nmi
-	RESTORE_REGS
+	RESTORE_ALL_NMI cr3_reg=%edi
 	lss	12+4(%esp), %esp		# back to espfix stack
 	jmp	.Lirq_return
 #endif
END(nmi)
 
ENTRY(int3)
 	ASM_CLAC
 	pushl	$-1				# mark this as an int
-	SAVE_ALL
+
+	SAVE_ALL switch_stacks=1
 	ENCODE_FRAME_POINTER
 	TRACE_IRQS_OFF
 	xorl	%edx, %edx			# zero error code
diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
index 73a522d53b53..957dfb693ecc 100644
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -92,7 +92,7 @@ END(native_usergs_sysret64)
 .endm
 
 .macro TRACE_IRQS_IRETQ_DEBUG
-	bt	$9, EFLAGS(%rsp)		/* interrupts off? */
+	btl	$9, EFLAGS(%rsp)		/* interrupts off? */
 	jnc	1f
 	TRACE_IRQS_ON_DEBUG
 1:
@@ -408,6 +408,7 @@ ENTRY(ret_from_fork)
 
 1:
 	/* kernel thread */
+	UNWIND_HINT_EMPTY
 	movq	%r12, %rdi
 	CALL_NOSPEC %rbx
 	/*
@@ -701,7 +702,7 @@ retint_kernel:
 #ifdef CONFIG_PREEMPT
 	/* Interrupts are off */
 	/* Check if we need preemption */
-	bt	$9, EFLAGS(%rsp)		/* were interrupts off? */
+	btl	$9, EFLAGS(%rsp)		/* were interrupts off? */
 	jnc	1f
 0:	cmpl	$0, PER_CPU_VAR(__preempt_count)
 	jnz	1f
@@ -981,7 +982,7 @@ ENTRY(\sym)
 
 	call	\do_sym
 
-	jmp	error_exit			/* %ebx: no swapgs flag */
+	jmp	error_exit
 	.endif
 END(\sym)
 .endm
@@ -1222,7 +1223,6 @@ END(paranoid_exit)
 
 /*
  * Save all registers in pt_regs, and switch GS if needed.
- * Return: EBX=0: came from user mode; EBX=1: otherwise
 */
 ENTRY(error_entry)
 	UNWIND_HINT_FUNC
@@ -1269,7 +1269,6 @@ ENTRY(error_entry)
 	 * for these here too.
 	 */
 .Lerror_kernelspace:
-	incl	%ebx
 	leaq	native_irq_return_iret(%rip), %rcx
 	cmpq	%rcx, RIP+8(%rsp)
 	je	.Lerror_bad_iret
@@ -1303,28 +1302,20 @@ ENTRY(error_entry)
 
 	/*
 	 * Pretend that the exception came from user mode: set up pt_regs
-	 * as if we faulted immediately after IRET and clear EBX so that
-	 * error_exit knows that we will be returning to user mode.
+	 * as if we faulted immediately after IRET.
 	 */
 	mov	%rsp, %rdi
 	call	fixup_bad_iret
 	mov	%rax, %rsp
-	decl	%ebx
 	jmp	.Lerror_entry_from_usermode_after_swapgs
 END(error_entry)
 
-
-/*
- * On entry, EBX is a "return to kernel mode" flag:
- *	1: already in kernel mode, don't need SWAPGS
- *	0: user gsbase is loaded, we need SWAPGS and standard preparation for return to usermode
- */
 ENTRY(error_exit)
 	UNWIND_HINT_REGS
 	DISABLE_INTERRUPTS(CLBR_ANY)
 	TRACE_IRQS_OFF
-	testl	%ebx, %ebx
-	jnz	retint_kernel
+	testb	$3, CS(%rsp)
+	jz	retint_kernel
 	jmp	retint_user
 END(error_exit)
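The error_exit hunk above replaces the EBX "came from kernel" flag with a direct look at the CS value saved in pt_regs. A one-line C restatement of that test (a sketch; x86 keeps the current privilege level in the low two bits of CS):

	/* Sketch: user (CPL 3) vs. kernel (CPL 0) origin from the saved CS. */
	static inline int came_from_user_mode(unsigned long saved_cs)
	{
		return (saved_cs & 3) != 0;	/* the asm does testb $3, CS(%rsp) */
	}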
diff --git a/arch/x86/entry/entry_64_compat.S b/arch/x86/entry/entry_64_compat.S
index 9de7f1e1dede..7d0df78db727 100644
--- a/arch/x86/entry/entry_64_compat.S
+++ b/arch/x86/entry/entry_64_compat.S
@@ -84,13 +84,13 @@ ENTRY(entry_SYSENTER_compat)
 	pushq	%rdx			/* pt_regs->dx */
 	pushq	%rcx			/* pt_regs->cx */
 	pushq	$-ENOSYS		/* pt_regs->ax */
-	pushq   %r8			/* pt_regs->r8 */
+	pushq   $0			/* pt_regs->r8  = 0 */
 	xorl	%r8d, %r8d		/* nospec   r8 */
-	pushq   %r9			/* pt_regs->r9 */
+	pushq   $0			/* pt_regs->r9  = 0 */
 	xorl	%r9d, %r9d		/* nospec   r9 */
-	pushq   %r10			/* pt_regs->r10 */
+	pushq   $0			/* pt_regs->r10 = 0 */
 	xorl	%r10d, %r10d		/* nospec   r10 */
-	pushq   %r11			/* pt_regs->r11 */
+	pushq   $0			/* pt_regs->r11 = 0 */
 	xorl	%r11d, %r11d		/* nospec   r11 */
 	pushq	%rbx			/* pt_regs->rbx */
 	xorl	%ebx, %ebx		/* nospec   rbx */
@@ -374,13 +374,13 @@ ENTRY(entry_INT80_compat)
 	pushq	%rcx			/* pt_regs->cx */
 	xorl	%ecx, %ecx		/* nospec   cx */
 	pushq	$-ENOSYS		/* pt_regs->ax */
-	pushq   $0			/* pt_regs->r8  = 0 */
+	pushq   %r8			/* pt_regs->r8 */
 	xorl	%r8d, %r8d		/* nospec   r8 */
-	pushq   $0			/* pt_regs->r9  = 0 */
+	pushq   %r9			/* pt_regs->r9 */
 	xorl	%r9d, %r9d		/* nospec   r9 */
-	pushq   $0			/* pt_regs->r10 = 0 */
+	pushq   %r10			/* pt_regs->r10 */
 	xorl	%r10d, %r10d		/* nospec   r10 */
-	pushq   $0			/* pt_regs->r11 = 0 */
+	pushq   %r11			/* pt_regs->r11 */
 	xorl	%r11d, %r11d		/* nospec   r11 */
 	pushq	%rbx			/* pt_regs->rbx */
 	xorl	%ebx, %ebx		/* nospec   rbx */
diff --git a/arch/x86/entry/vdso/Makefile b/arch/x86/entry/vdso/Makefile
index 261802b1cc50..9f695f517747 100644
--- a/arch/x86/entry/vdso/Makefile
+++ b/arch/x86/entry/vdso/Makefile
@@ -46,10 +46,8 @@ targets += $(vdso_img_sodbg) $(vdso_img-y:%=vdso%.so)
 
 CPPFLAGS_vdso.lds += -P -C
 
-VDSO_LDFLAGS_vdso.lds = -m64 -Wl,-soname=linux-vdso.so.1 \
-			-Wl,--no-undefined \
-			-Wl,-z,max-page-size=4096 -Wl,-z,common-page-size=4096 \
-			$(DISABLE_LTO)
+VDSO_LDFLAGS_vdso.lds = -m elf_x86_64 -soname linux-vdso.so.1 --no-undefined \
+			-z max-page-size=4096 -z common-page-size=4096
 
 $(obj)/vdso64.so.dbg: $(obj)/vdso.lds $(vobjs) FORCE
 	$(call if_changed,vdso)
@@ -58,9 +56,7 @@ HOST_EXTRACFLAGS += -I$(srctree)/tools/include -I$(srctree)/include/uapi -I$(src
 hostprogs-y			+= vdso2c
 
 quiet_cmd_vdso2c = VDSO2C  $@
-define cmd_vdso2c
-	$(obj)/vdso2c $< $(<:%.dbg=%) $@
-endef
+      cmd_vdso2c = $(obj)/vdso2c $< $(<:%.dbg=%) $@
 
 $(obj)/vdso-image-%.c: $(obj)/vdso%.so.dbg $(obj)/vdso%.so $(obj)/vdso2c FORCE
 	$(call if_changed,vdso2c)
@@ -95,10 +91,8 @@ CFLAGS_REMOVE_vvar.o = -pg
 #
 CPPFLAGS_vdsox32.lds = $(CPPFLAGS_vdso.lds)
-VDSO_LDFLAGS_vdsox32.lds = -Wl,-m,elf32_x86_64 \
-			   -Wl,-soname=linux-vdso.so.1 \
-			   -Wl,-z,max-page-size=4096 \
-			   -Wl,-z,common-page-size=4096
+VDSO_LDFLAGS_vdsox32.lds = -m elf32_x86_64 -soname linux-vdso.so.1 \
+			   -z max-page-size=4096 -z common-page-size=4096
 
 # x32-rebranded versions
 vobjx32s-y := $(vobjs-y:.o=-x32.o)
@@ -123,7 +117,7 @@ $(obj)/vdsox32.so.dbg: $(obj)/vdsox32.lds $(vobjx32s) FORCE
 	$(call if_changed,vdso)
 
 CPPFLAGS_vdso32.lds = $(CPPFLAGS_vdso.lds)
-VDSO_LDFLAGS_vdso32.lds = -m32 -Wl,-m,elf_i386 -Wl,-soname=linux-gate.so.1
+VDSO_LDFLAGS_vdso32.lds = -m elf_i386 -soname linux-gate.so.1
 targets += vdso32/vdso32.lds
 targets += vdso32/note.o vdso32/system_call.o vdso32/sigreturn.o
@@ -157,13 +151,13 @@ $(obj)/vdso32.so.dbg: FORCE \
 # The DSO images are built using a special linker script.
 #
 quiet_cmd_vdso = VDSO    $@
-      cmd_vdso = $(CC) -nostdlib -o $@ \
+      cmd_vdso = $(LD) -nostdlib -o $@ \
		       $(VDSO_LDFLAGS) $(VDSO_LDFLAGS_$(filter %.lds,$(^F))) \
-		       -Wl,-T,$(filter %.lds,$^) $(filter %.o,$^) && \
+		       -T $(filter %.lds,$^) $(filter %.o,$^) && \
		 sh $(srctree)/$(src)/checkundef.sh '$(NM)' '$@'
 
-VDSO_LDFLAGS = -fPIC -shared $(call cc-ldoption, -Wl$(comma)--hash-style=both) \
-	$(call cc-ldoption, -Wl$(comma)--build-id) -Wl,-Bsymbolic $(LTO_CFLAGS)
+VDSO_LDFLAGS = -shared $(call ld-option, --hash-style=both) \
+	$(call ld-option, --build-id) -Bsymbolic
 GCOV_PROFILE := n
 
 #
diff --git a/arch/x86/entry/vsyscall/vsyscall_64.c b/arch/x86/entry/vsyscall/vsyscall_64.c
index 7782cdbcd67d..82ed001e8909 100644
--- a/arch/x86/entry/vsyscall/vsyscall_64.c
+++ b/arch/x86/entry/vsyscall/vsyscall_64.c
@@ -201,7 +201,7 @@ bool emulate_vsyscall(struct pt_regs *regs, unsigned long address)
 
 	/*
 	 * Handle seccomp.  regs->ip must be the original value.
-	 * See seccomp_send_sigsys and Documentation/prctl/seccomp_filter.txt.
+	 * See seccomp_send_sigsys and Documentation/userspace-api/seccomp_filter.rst.
 	 *
 	 * We could optimize the seccomp disabled case, but performance
 	 * here doesn't matter.
diff --git a/arch/x86/events/amd/ibs.c b/arch/x86/events/amd/ibs.c
index 4b98101209a1..d50bb4dc0650 100644
--- a/arch/x86/events/amd/ibs.c
+++ b/arch/x86/events/amd/ibs.c
@@ -579,7 +579,7 @@ static int perf_ibs_handle_irq(struct perf_ibs *perf_ibs, struct pt_regs *iregs)
 {
 	struct cpu_perf_ibs *pcpu = this_cpu_ptr(perf_ibs->pcpu);
 	struct perf_event *event = pcpu->event;
-	struct hw_perf_event *hwc = &event->hw;
+	struct hw_perf_event *hwc;
 	struct perf_sample_data data;
 	struct perf_raw_record raw;
 	struct pt_regs regs;
@@ -602,6 +602,10 @@ fail:
 		return 0;
 	}
 
+	if (WARN_ON_ONCE(!event))
+		goto fail;
+
+	hwc = &event->hw;
 	msr = hwc->config_base;
 	buf = ibs_data.regs;
 	rdmsrl(msr, *buf);
diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
index 707b2a96e516..035c37481f57 100644
--- a/arch/x86/events/intel/core.c
+++ b/arch/x86/events/intel/core.c
@@ -2041,15 +2041,15 @@ static void intel_pmu_disable_event(struct perf_event *event)
 	cpuc->intel_ctrl_host_mask &= ~(1ull << hwc->idx);
 	cpuc->intel_cp_status &= ~(1ull << hwc->idx);
 
+	if (unlikely(event->attr.precise_ip))
+		intel_pmu_pebs_disable(event);
+
 	if (unlikely(hwc->config_base == MSR_ARCH_PERFMON_FIXED_CTR_CTRL)) {
 		intel_pmu_disable_fixed(hwc);
 		return;
 	}
 
 	x86_pmu_disable_event(event);
-
-	if (unlikely(event->attr.precise_ip))
-		intel_pmu_pebs_disable(event);
 }
 
 static void intel_pmu_del_event(struct perf_event *event)
@@ -2068,17 +2068,19 @@ static void intel_pmu_read_event(struct perf_event *event)
 		x86_perf_event_update(event);
 }
 
-static void intel_pmu_enable_fixed(struct hw_perf_event *hwc)
+static void intel_pmu_enable_fixed(struct perf_event *event)
 {
+	struct hw_perf_event *hwc = &event->hw;
 	int idx = hwc->idx - INTEL_PMC_IDX_FIXED;
-	u64 ctrl_val, bits, mask;
+	u64 ctrl_val, mask, bits = 0;
 
 	/*
-	 * Enable IRQ generation (0x8),
+	 * Enable IRQ generation (0x8), if not PEBS,
 	 * and enable ring-3 counting (0x2) and ring-0 counting (0x1)
 	 * if requested:
 	 */
-	bits = 0x8ULL;
+	if (!event->attr.precise_ip)
+		bits |= 0x8;
 	if (hwc->config & ARCH_PERFMON_EVENTSEL_USR)
 		bits |= 0x2;
 	if (hwc->config & ARCH_PERFMON_EVENTSEL_OS)
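intel_pmu_enable_fixed() assembles a per-counter nibble for MSR_ARCH_PERFMON_FIXED_CTR_CTRL; the tail of the function, which falls outside the diff context shown here, shifts that nibble into the counter's field. A sketch of the complete read-modify-write under that assumption (helper name is illustrative):

	/* Sketch: each fixed counter owns a 4-bit field in FIXED_CTR_CTRL.    */
	/* bit 0 = ring-0 count, bit 1 = ring-3 count, bit 3 = PMI on overflow */
	static unsigned long long fixed_ctrl_update(unsigned long long ctrl_val,
						    int idx,
						    unsigned long long bits)
	{
		unsigned long long mask = 0xfULL << (idx * 4);

		bits <<= idx * 4;			/* move nibble to counter idx */
		return (ctrl_val & ~mask) | bits;	/* new MSR image to write back */
	}

With the hunk above, PEBS events leave bit 3 clear, so the fixed counter no longer raises a PMI of its own; overflows are delivered through the PEBS buffer instead.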
@@ -2120,14 +2122,14 @@ static void intel_pmu_enable_event(struct perf_event *event)
 	if (unlikely(event_is_checkpointed(event)))
 		cpuc->intel_cp_status |= (1ull << hwc->idx);
 
+	if (unlikely(event->attr.precise_ip))
+		intel_pmu_pebs_enable(event);
+
 	if (unlikely(hwc->config_base == MSR_ARCH_PERFMON_FIXED_CTR_CTRL)) {
-		intel_pmu_enable_fixed(hwc);
+		intel_pmu_enable_fixed(event);
 		return;
 	}
 
-	if (unlikely(event->attr.precise_ip))
-		intel_pmu_pebs_enable(event);
-
 	__x86_pmu_enable_event(hwc, ARCH_PERFMON_EVENTSEL_ENABLE);
 }
 
@@ -2280,7 +2282,10 @@ again:
 	 * counters from the GLOBAL_STATUS mask and we always process PEBS
 	 * events via drain_pebs().
 	 */
-	status &= ~(cpuc->pebs_enabled & PEBS_COUNTER_MASK);
+	if (x86_pmu.flags & PMU_FL_PEBS_ALL)
+		status &= ~cpuc->pebs_enabled;
+	else
+		status &= ~(cpuc->pebs_enabled & PEBS_COUNTER_MASK);
 
 	/*
 	 * PEBS overflow sets bit 62 in the global status register
@@ -2997,6 +3002,9 @@ static int intel_pmu_hw_config(struct perf_event *event)
 		}
 		if (x86_pmu.pebs_aliases)
 			x86_pmu.pebs_aliases(event);
+
+		if (event->attr.sample_type & PERF_SAMPLE_CALLCHAIN)
+			event->attr.sample_type |= __PERF_SAMPLE_CALLCHAIN_EARLY;
 	}
 
 	if (needs_branch_stack(event)) {
@@ -4069,7 +4077,6 @@ __init int intel_pmu_init(void)
 		intel_pmu_lbr_init_skl();
 
 		x86_pmu.event_constraints = intel_slm_event_constraints;
-		x86_pmu.pebs_constraints = intel_glp_pebs_event_constraints;
 		x86_pmu.extra_regs = intel_glm_extra_regs;
 		/*
 		 * It's recommended to use CPU_CLK_UNHALTED.CORE_P + NPEBS
@@ -4079,6 +4086,7 @@ __init int intel_pmu_init(void)
 		x86_pmu.pebs_prec_dist = true;
 		x86_pmu.lbr_pt_coexist = true;
 		x86_pmu.flags |= PMU_FL_HAS_RSP_1;
+		x86_pmu.flags |= PMU_FL_PEBS_ALL;
 		x86_pmu.get_event_constraints = glp_get_event_constraints;
 		x86_pmu.cpu_events = glm_events_attrs;
 		/* Goldmont Plus has 4-wide pipeline */
diff --git a/arch/x86/events/intel/ds.c b/arch/x86/events/intel/ds.c
index 8a10a045b57b..b7b01d762d32 100644
--- a/arch/x86/events/intel/ds.c
+++ b/arch/x86/events/intel/ds.c
@@ -408,9 +408,11 @@ static int alloc_bts_buffer(int cpu)
 	ds->bts_buffer_base = (unsigned long) cea;
 	ds_update_cea(cea, buffer, BTS_BUFFER_SIZE, PAGE_KERNEL);
 	ds->bts_index = ds->bts_buffer_base;
-	max = BTS_RECORD_SIZE * (BTS_BUFFER_SIZE / BTS_RECORD_SIZE);
-	ds->bts_absolute_maximum = ds->bts_buffer_base + max;
-	ds->bts_interrupt_threshold = ds->bts_absolute_maximum - (max / 16);
+	max = BTS_BUFFER_SIZE / BTS_RECORD_SIZE;
+	ds->bts_absolute_maximum = ds->bts_buffer_base +
+					max * BTS_RECORD_SIZE;
+	ds->bts_interrupt_threshold = ds->bts_absolute_maximum -
+					(max / 16) * BTS_RECORD_SIZE;
 	return 0;
 }
 
@@ -711,12 +713,6 @@ struct event_constraint intel_glm_pebs_event_constraints[] = {
 	EVENT_CONSTRAINT_END
 };
 
-struct event_constraint intel_glp_pebs_event_constraints[] = {
-	/* Allow all events as PEBS with no flags */
-	INTEL_ALL_EVENT_CONSTRAINT(0, 0xf),
-	EVENT_CONSTRAINT_END
-};
-
 struct event_constraint intel_nehalem_pebs_event_constraints[] = {
 	INTEL_PLD_CONSTRAINT(0x100b, 0xf),      /* MEM_INST_RETIRED.* */
 	INTEL_FLAGS_EVENT_CONSTRAINT(0x0f, 0xf),    /* MEM_UNCORE_RETIRED.* */
@@ -869,6 +865,13 @@ struct event_constraint *intel_pebs_constraints(struct perf_event *event)
 		}
 	}
 
+	/*
+	 * Extended PEBS support
+	 * Makes the PEBS code search the normal constraints.
+	 */
+	if (x86_pmu.flags & PMU_FL_PEBS_ALL)
+		return NULL;
+
 	return &emptyconstraint;
 }
 
@@ -894,10 +897,16 @@ static inline void pebs_update_threshold(struct cpu_hw_events *cpuc)
 {
 	struct debug_store *ds = cpuc->ds;
 	u64 threshold;
+	int reserved;
+
+	if (x86_pmu.flags & PMU_FL_PEBS_ALL)
+		reserved = x86_pmu.max_pebs_events + x86_pmu.num_counters_fixed;
+	else
+		reserved = x86_pmu.max_pebs_events;
 
 	if (cpuc->n_pebs == cpuc->n_large_pebs) {
 		threshold = ds->pebs_absolute_maximum -
-			x86_pmu.max_pebs_events * x86_pmu.pebs_record_size;
+			reserved * x86_pmu.pebs_record_size;
 	} else {
 		threshold = ds->pebs_buffer_base + x86_pmu.pebs_record_size;
 	}
@@ -961,7 +970,11 @@ void intel_pmu_pebs_enable(struct perf_event *event)
 	 * This must be done in pmu::start(), because PERF_EVENT_IOC_PERIOD.
 	 */
 	if (hwc->flags & PERF_X86_EVENT_AUTO_RELOAD) {
-		ds->pebs_event_reset[hwc->idx] =
+		unsigned int idx = hwc->idx;
+
+		if (idx >= INTEL_PMC_IDX_FIXED)
+			idx = MAX_PEBS_EVENTS + (idx - INTEL_PMC_IDX_FIXED);
+		ds->pebs_event_reset[idx] =
 			(u64)(-hwc->sample_period) & x86_pmu.cntval_mask;
 	} else {
 		ds->pebs_event_reset[hwc->idx] = 0;
@@ -1183,17 +1196,21 @@ static void setup_pebs_sample_data(struct perf_event *event,
 		data->data_src.val = val;
 	}
 
+	/*
+	 * We must however always use iregs for the unwinder to stay sane; the
+	 * record BP,SP,IP can point into thin air when the record is from a
+	 * previous PMI context or an (I)RET happened between the record and
+	 * PMI.
+	 */
+	if (sample_type & PERF_SAMPLE_CALLCHAIN)
+		data->callchain = perf_callchain(event, iregs);
+
 	/*
 	 * We use the interrupt regs as a base because the PEBS record does not
 	 * contain a full regs set, specifically it seems to lack segment
 	 * descriptors, which get used by things like user_mode().
 	 *
 	 * In the simple case fix up only the IP for PERF_SAMPLE_IP.
-	 *
-	 * We must however always use BP,SP from iregs for the unwinder to stay
-	 * sane; the record BP,SP can point into thin air when the record is
-	 * from a previous PMI context or an (I)RET happened between the record
-	 * and PMI.
 	 */
 	*regs = *iregs;
 
@@ -1212,15 +1229,8 @@ static void setup_pebs_sample_data(struct perf_event *event,
 	regs->si = pebs->si;
 	regs->di = pebs->di;
 
-	/*
-	 * Per the above; only set BP,SP if we don't need callchains.
-	 *
-	 * XXX: does this make sense?
-	 */
-	if (!(sample_type & PERF_SAMPLE_CALLCHAIN)) {
-		regs->bp = pebs->bp;
-		regs->sp = pebs->sp;
-	}
+	regs->bp = pebs->bp;
+	regs->sp = pebs->sp;
 
 #ifndef CONFIG_X86_32
 	regs->r8 = pebs->r8;
@@ -1482,9 +1492,10 @@ static void intel_pmu_drain_pebs_nhm(struct pt_regs *iregs)
 	struct debug_store *ds = cpuc->ds;
 	struct perf_event *event;
 	void *base, *at, *top;
-	short counts[MAX_PEBS_EVENTS] = {};
-	short error[MAX_PEBS_EVENTS] = {};
-	int bit, i;
+	short counts[INTEL_PMC_IDX_FIXED + MAX_FIXED_PEBS_EVENTS] = {};
+	short error[INTEL_PMC_IDX_FIXED + MAX_FIXED_PEBS_EVENTS] = {};
+	int bit, i, size;
+	u64 mask;
 
 	if (!x86_pmu.pebs_active)
 		return;
@@ -1494,6 +1505,13 @@ static void intel_pmu_drain_pebs_nhm(struct pt_regs *iregs)
 
 	ds->pebs_index = ds->pebs_buffer_base;
 
+	mask = (1ULL << x86_pmu.max_pebs_events) - 1;
+	size = x86_pmu.max_pebs_events;
+	if (x86_pmu.flags & PMU_FL_PEBS_ALL) {
+		mask |= ((1ULL << x86_pmu.num_counters_fixed) - 1) << INTEL_PMC_IDX_FIXED;
+		size = INTEL_PMC_IDX_FIXED + x86_pmu.num_counters_fixed;
+	}
+
 	if (unlikely(base >= top)) {
 		/*
 		 * The drain_pebs() could be called twice in a short period
		 * for auto-reload event in pmu::read(). There are no
		 * overflows have happened in between.
		 * It needs to call intel_pmu_save_and_restart_reload() to
 		 * update the event->count for this case.
 		 */
 		for_each_set_bit(bit, (unsigned long *)&cpuc->pebs_enabled,
-				 x86_pmu.max_pebs_events) {
+				 size) {
 			event = cpuc->events[bit];
 			if (event->hw.flags & PERF_X86_EVENT_AUTO_RELOAD)
 				intel_pmu_save_and_restart_reload(event, 0);
@@ -1516,12 +1534,12 @@ static void intel_pmu_drain_pebs_nhm(struct pt_regs *iregs)
 		u64 pebs_status;
 
 		pebs_status = p->status & cpuc->pebs_enabled;
-		pebs_status &= (1ULL << x86_pmu.max_pebs_events) - 1;
+		pebs_status &= mask;
 
 		/* PEBS v3 has more accurate status bits */
 		if (x86_pmu.intel_cap.pebs_format >= 3) {
 			for_each_set_bit(bit, (unsigned long *)&pebs_status,
-					 x86_pmu.max_pebs_events)
+					 size)
 				counts[bit]++;
 
 			continue;
@@ -1569,7 +1587,7 @@ static void intel_pmu_drain_pebs_nhm(struct pt_regs *iregs)
 		counts[bit]++;
 	}
 
-	for (bit = 0; bit < x86_pmu.max_pebs_events; bit++) {
+	for (bit = 0; bit < size; bit++) {
 		if ((counts[bit] == 0) && (error[bit] == 0))
 			continue;
 
diff --git a/arch/x86/events/intel/lbr.c b/arch/x86/events/intel/lbr.c
index cf372b90557e..f3e006bed9a7 100644
--- a/arch/x86/events/intel/lbr.c
+++ b/arch/x86/events/intel/lbr.c
@@ -216,6 +216,8 @@ static void intel_pmu_lbr_reset_64(void)
 
 void intel_pmu_lbr_reset(void)
 {
+	struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
+
 	if (!x86_pmu.lbr_nr)
 		return;
 
@@ -223,6 +225,9 @@ void intel_pmu_lbr_reset(void)
 		intel_pmu_lbr_reset_32();
 	else
 		intel_pmu_lbr_reset_64();
+
+	cpuc->last_task_ctx = NULL;
+	cpuc->last_log_id = 0;
 }
 
 /*
@@ -334,6 +339,7 @@ static inline u64 rdlbr_to(unsigned int idx)
 
 static void __intel_pmu_lbr_restore(struct x86_perf_task_context *task_ctx)
 {
+	struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
 	int i;
 	unsigned lbr_idx, mask;
 	u64 tos;
@@ -344,9 +350,21 @@ static void __intel_pmu_lbr_restore(struct x86_perf_task_context *task_ctx)
 		return;
 	}
 
-	mask = x86_pmu.lbr_nr - 1;
 	tos = task_ctx->tos;
-	for (i = 0; i < tos; i++) {
+	/*
+	 * Does not restore the LBR registers, if
+	 * - No one else touched them, and
+	 * - Did not enter C6
+	 */
+	if ((task_ctx == cpuc->last_task_ctx) &&
+	    (task_ctx->log_id == cpuc->last_log_id) &&
+	    rdlbr_from(tos)) {
+		task_ctx->lbr_stack_state = LBR_NONE;
+		return;
+	}
+
+	mask = x86_pmu.lbr_nr - 1;
+	for (i = 0; i < task_ctx->valid_lbrs; i++) {
 		lbr_idx = (tos - i) & mask;
 		wrlbr_from(lbr_idx, task_ctx->lbr_from[i]);
 		wrlbr_to  (lbr_idx, task_ctx->lbr_to[i]);
@@ -354,14 +372,24 @@ static void __intel_pmu_lbr_restore(struct x86_perf_task_context *task_ctx)
 		if (x86_pmu.intel_cap.lbr_format == LBR_FORMAT_INFO)
 			wrmsrl(MSR_LBR_INFO_0 + lbr_idx, task_ctx->lbr_info[i]);
 	}
+
+	for (; i < x86_pmu.lbr_nr; i++) {
+		lbr_idx = (tos - i) & mask;
+		wrlbr_from(lbr_idx, 0);
+		wrlbr_to(lbr_idx, 0);
+		if (x86_pmu.intel_cap.lbr_format == LBR_FORMAT_INFO)
+			wrmsrl(MSR_LBR_INFO_0 + lbr_idx, 0);
+	}
+
+	wrmsrl(x86_pmu.lbr_tos, tos);
 	task_ctx->lbr_stack_state = LBR_NONE;
 }
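The restore path above skips all the MSR writes when it can prove that the LBR stack saved last on this CPU is still intact in the hardware. The test condenses to three conditions; a sketch using the identifiers from the hunk (function is illustrative, not kernel code):

	/* Sketch: is the previously saved LBR stack still live in hardware? */
	static int lbr_stack_intact(struct x86_perf_task_context *task_ctx,
				    struct cpu_hw_events *cpuc, u64 tos)
	{
		return task_ctx == cpuc->last_task_ctx &&	/* same ctx saved last      */
		       task_ctx->log_id == cpuc->last_log_id &&	/* no save happened since   */
		       rdlbr_from(tos);				/* MSRs not wiped, e.g. C6  */
	}

The log_id acts as a generation counter: every save bumps it, so a stale match cannot slip through.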
 
 static void __intel_pmu_lbr_save(struct x86_perf_task_context *task_ctx)
 {
+	struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
 	unsigned lbr_idx, mask;
-	u64 tos;
+	u64 tos, from;
 	int i;
@@ -371,15 +399,22 @@ static void __intel_pmu_lbr_save(struct x86_perf_task_context *task_ctx)
 
 	mask = x86_pmu.lbr_nr - 1;
 	tos = intel_pmu_lbr_tos();
-	for (i = 0; i < tos; i++) {
+	for (i = 0; i < x86_pmu.lbr_nr; i++) {
 		lbr_idx = (tos - i) & mask;
-		task_ctx->lbr_from[i] = rdlbr_from(lbr_idx);
+		from = rdlbr_from(lbr_idx);
+		if (!from)
+			break;
+		task_ctx->lbr_from[i] = from;
 		task_ctx->lbr_to[i]   = rdlbr_to(lbr_idx);
 		if (x86_pmu.intel_cap.lbr_format == LBR_FORMAT_INFO)
 			rdmsrl(MSR_LBR_INFO_0 + lbr_idx, task_ctx->lbr_info[i]);
 	}
+	task_ctx->valid_lbrs = i;
 	task_ctx->tos = tos;
 	task_ctx->lbr_stack_state = LBR_VALID;
+
+	cpuc->last_task_ctx = task_ctx;
+	cpuc->last_log_id = ++task_ctx->log_id;
 }
 
 void intel_pmu_lbr_sched_task(struct perf_event_context *ctx, bool sched_in)
@@ -531,7 +566,7 @@ static void intel_pmu_lbr_read_32(struct cpu_hw_events *cpuc)
 */
 static void intel_pmu_lbr_read_64(struct cpu_hw_events *cpuc)
 {
-	bool need_info = false;
+	bool need_info = false, call_stack = false;
 	unsigned long mask = x86_pmu.lbr_nr - 1;
 	int lbr_format = x86_pmu.intel_cap.lbr_format;
 	u64 tos = intel_pmu_lbr_tos();
@@ -542,7 +577,7 @@ static void intel_pmu_lbr_read_64(struct cpu_hw_events *cpuc)
 	if (cpuc->lbr_sel) {
 		need_info = !(cpuc->lbr_sel->config & LBR_NO_INFO);
 		if (cpuc->lbr_sel->config & LBR_CALL_STACK)
-			num = tos;
+			call_stack = true;
 	}
 
 	for (i = 0; i < num; i++) {
@@ -555,6 +590,13 @@ static void intel_pmu_lbr_read_64(struct cpu_hw_events *cpuc)
 		from = rdlbr_from(lbr_idx);
 		to   = rdlbr_to(lbr_idx);
 
+		/*
+		 * Read LBR call stack entries
+		 * until invalid entry (0s) is detected.
+		 */
+		if (call_stack && !from)
+			break;
+
 		if (lbr_format == LBR_FORMAT_INFO && need_info) {
 			u64 info;
 
diff --git a/arch/x86/events/intel/uncore.h b/arch/x86/events/intel/uncore.h
index c9e1e0bef3c3..e17ab885b1e9 100644
--- a/arch/x86/events/intel/uncore.h
+++ b/arch/x86/events/intel/uncore.h
@@ -28,7 +28,7 @@
 #define UNCORE_PCI_DEV_TYPE(data)	((data >> 8) & 0xff)
 #define UNCORE_PCI_DEV_IDX(data)	(data & 0xff)
 #define UNCORE_EXTRA_PCI_DEV		0xff
-#define UNCORE_EXTRA_PCI_DEV_MAX	3
+#define UNCORE_EXTRA_PCI_DEV_MAX	4
 
 #define UNCORE_EVENT_CONSTRAINT(c, n) EVENT_CONSTRAINT(c, n, 0xff)
 
diff --git a/arch/x86/events/intel/uncore_snbep.c b/arch/x86/events/intel/uncore_snbep.c
index 87dc0263a2e1..51d7c117e3c7 100644
--- a/arch/x86/events/intel/uncore_snbep.c
+++ b/arch/x86/events/intel/uncore_snbep.c
@@ -1029,6 +1029,7 @@ void snbep_uncore_cpu_init(void)
 enum {
 	SNBEP_PCI_QPI_PORT0_FILTER,
 	SNBEP_PCI_QPI_PORT1_FILTER,
+	BDX_PCI_QPI_PORT2_FILTER,
 	HSWEP_PCI_PCU_3,
 };
 
@@ -3286,15 +3287,18 @@ static const struct pci_device_id bdx_uncore_pci_ids[] = {
 	},
 	{ /* QPI Port 0 filter  */
 		PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x6f86),
-		.driver_data = UNCORE_PCI_DEV_DATA(UNCORE_EXTRA_PCI_DEV, 0),
+		.driver_data = UNCORE_PCI_DEV_DATA(UNCORE_EXTRA_PCI_DEV,
+						   SNBEP_PCI_QPI_PORT0_FILTER),
 	},
 	{ /* QPI Port 1 filter  */
 		PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x6f96),
-		.driver_data = UNCORE_PCI_DEV_DATA(UNCORE_EXTRA_PCI_DEV, 1),
+		.driver_data = UNCORE_PCI_DEV_DATA(UNCORE_EXTRA_PCI_DEV,
+						   SNBEP_PCI_QPI_PORT1_FILTER),
 	},
 	{ /* QPI Port 2 filter  */
 		PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x6f46),
-		.driver_data = UNCORE_PCI_DEV_DATA(UNCORE_EXTRA_PCI_DEV, 2),
+		.driver_data = UNCORE_PCI_DEV_DATA(UNCORE_EXTRA_PCI_DEV,
+						   BDX_PCI_QPI_PORT2_FILTER),
 	},
 	{ /* PCU.3 (for Capability registers) */
 		PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x6fc0),
diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h
index 9f3711470ec1..156286335351 100644
--- a/arch/x86/events/perf_event.h
+++ b/arch/x86/events/perf_event.h
@@ -163,6 +163,7 @@ struct intel_excl_cntrs {
 	unsigned	core_id;	/* per-core: core id */
 };
 
+struct x86_perf_task_context;
 #define MAX_LBR_ENTRIES		32
 
 enum {
@@ -214,6 +215,8 @@ struct cpu_hw_events {
 	struct perf_branch_entry	lbr_entries[MAX_LBR_ENTRIES];
 	struct er_account		*lbr_sel;
 	u64				br_sel;
+	struct x86_perf_task_context	*last_task_ctx;
+	int				last_log_id;
 
 	/*
 	 * Intel host/guest exclude bits
@@ -648,8 +651,10 @@ struct x86_perf_task_context {
 	u64 lbr_to[MAX_LBR_ENTRIES];
 	u64 lbr_info[MAX_LBR_ENTRIES];
 	int tos;
+	int valid_lbrs;
 	int lbr_callstack_users;
 	int lbr_stack_state;
+	int log_id;
 };
 
 #define x86_add_quirk(func_)						\
@@ -668,6 +673,7 @@ do {									\
 #define PMU_FL_HAS_RSP_1	0x2 /* has 2 equivalent offcore_rsp regs */
 #define PMU_FL_EXCL_CNTRS	0x4 /* has exclusive counter requirements  */
 #define PMU_FL_EXCL_ENABLED	0x8 /* exclusive counter active */
+#define PMU_FL_PEBS_ALL		0x10 /* all events are valid PEBS events */
 
 #define EVENT_VAR(_id)  event_attr_##_id
 #define EVENT_PTR(_id) &event_attr_##_id.attr.attr
diff --git a/arch/x86/hyperv/hv_apic.c b/arch/x86/hyperv/hv_apic.c
index f68855499391..5b0f613428c2 100644
--- a/arch/x86/hyperv/hv_apic.c
+++ b/arch/x86/hyperv/hv_apic.c
@@ -31,6 +31,8 @@
 #include
 #include
+#include <asm/trace/hyperv.h>
+
 static struct apic orig_apic;
 
 static u64 hv_apic_icr_read(void)
@@ -99,6 +101,9 @@ static bool __send_ipi_mask_ex(const struct cpumask *mask, int vector)
 	int nr_bank = 0;
 	int ret = 1;
 
+	if (!(ms_hyperv.hints & HV_X64_EX_PROCESSOR_MASKS_RECOMMENDED))
+		return false;
+
 	local_irq_save(flags);
 	arg = (struct ipi_arg_ex **)this_cpu_ptr(hyperv_pcpu_input_arg);
 
@@ -114,6 +119,8 @@ static bool __send_ipi_mask_ex(const struct cpumask *mask, int vector)
 		ipi_arg->vp_set.format = HV_GENERIC_SET_SPARSE_4K;
 		nr_bank = cpumask_to_vpset(&(ipi_arg->vp_set), mask);
 	}
+	if (nr_bank < 0)
+		goto ipi_mask_ex_done;
 	if (!nr_bank)
 		ipi_arg->vp_set.format = HV_GENERIC_SET_ALL;
 
@@ -128,10 +135,10 @@ ipi_mask_ex_done:
 static bool __send_ipi_mask(const struct cpumask *mask, int vector)
 {
 	int cur_cpu, vcpu;
-	struct ipi_arg_non_ex **arg;
-	struct ipi_arg_non_ex *ipi_arg;
+	struct ipi_arg_non_ex ipi_arg;
 	int ret = 1;
-	unsigned long flags;
+
+	trace_hyperv_send_ipi_mask(mask, vector);
 
 	if (cpumask_empty(mask))
 		return true;
@@ -142,37 +149,43 @@ static bool __send_ipi_mask(const struct cpumask *mask, int vector)
 	if ((vector < HV_IPI_LOW_VECTOR) || (vector > HV_IPI_HIGH_VECTOR))
 		return false;
 
-	if ((ms_hyperv.hints & HV_X64_EX_PROCESSOR_MASKS_RECOMMENDED))
-		return __send_ipi_mask_ex(mask, vector);
-
-	local_irq_save(flags);
-	arg = (struct ipi_arg_non_ex **)this_cpu_ptr(hyperv_pcpu_input_arg);
-
-	ipi_arg = *arg;
-	if (unlikely(!ipi_arg))
-		goto ipi_mask_done;
-
-	ipi_arg->vector = vector;
-	ipi_arg->reserved = 0;
-	ipi_arg->cpu_mask = 0;
+	/*
+	 * From the supplied CPU set we need to figure out if we can get away
+	 * with the cheaper HVCALL_SEND_IPI hypercall. This is possible when the
+	 * highest VP number in the set is < 64. As VP numbers are usually in
+	 * ascending order and match Linux CPU ids, here is an optimization:
+	 * we check the VP number for the highest bit in the supplied set first
+	 * so we can quickly find out if using HVCALL_SEND_IPI_EX hypercall is
+	 * a must. We will also check all VP numbers when walking the supplied
+	 * CPU set to remain correct in all cases.
+	 */
+	if (hv_cpu_number_to_vp_number(cpumask_last(mask)) >= 64)
+		goto do_ex_hypercall;
+
+	ipi_arg.vector = vector;
+	ipi_arg.cpu_mask = 0;
 
 	for_each_cpu(cur_cpu, mask) {
 		vcpu = hv_cpu_number_to_vp_number(cur_cpu);
+		if (vcpu == VP_INVAL)
+			return false;
+
 		/*
 		 * This particular version of the IPI hypercall can
 		 * only target up to 64 CPUs.
 		 */
 		if (vcpu >= 64)
-			goto ipi_mask_done;
+			goto do_ex_hypercall;
 
-		__set_bit(vcpu, (unsigned long *)&ipi_arg->cpu_mask);
+		__set_bit(vcpu, (unsigned long *)&ipi_arg.cpu_mask);
 	}
 
-	ret = hv_do_hypercall(HVCALL_SEND_IPI, ipi_arg, NULL);
-
-ipi_mask_done:
-	local_irq_restore(flags);
+	ret = hv_do_fast_hypercall16(HVCALL_SEND_IPI, ipi_arg.vector,
+				     ipi_arg.cpu_mask);
 	return ((ret == 0) ? true : false);
+
+do_ex_hypercall:
+	return __send_ipi_mask_ex(mask, vector);
 }
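The strategy comment above boils down to a small predicate deciding which hypercall variant to use. A compact C restatement (a sketch, not kernel code; hv_cpu_number_to_vp_number() and VP_INVAL are the identifiers from the hunk):

	/* Sketch: can every target VP be encoded in the 64-bit fast-path mask? */
	static int ipi_targets_fit_fast_path(const struct cpumask *mask)
	{
		int cpu;

		/* cheap early test: VP numbers usually ascend with CPU ids */
		if (hv_cpu_number_to_vp_number(cpumask_last(mask)) >= 64)
			return 0;

		for_each_cpu(cpu, mask)		/* stays correct for odd layouts */
			if (hv_cpu_number_to_vp_number(cpu) >= 64)
				return 0;

		return 1;
	}

When the predicate holds, the whole target set fits in one u64 and the IPI can go out as a fast hypercall with no per-CPU input page at all, which is why the local_irq_save()/restore() pair disappears from the function.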
 
 static bool __send_ipi_one(int cpu, int vector)
@@ -228,10 +241,7 @@ static void hv_send_ipi_self(int vector)
 void __init hv_apic_init(void)
 {
 	if (ms_hyperv.hints & HV_X64_CLUSTER_IPI_RECOMMENDED) {
-		if ((ms_hyperv.hints & HV_X64_EX_PROCESSOR_MASKS_RECOMMENDED))
-			pr_info("Hyper-V: Using ext hypercalls for IPI\n");
-		else
-			pr_info("Hyper-V: Using IPI hypercalls\n");
+		pr_info("Hyper-V: Using IPI hypercalls\n");
 		/*
 		 * Set the IPI entry points.
 		 */
diff --git a/arch/x86/hyperv/hv_init.c b/arch/x86/hyperv/hv_init.c
index 4c431e1c1eff..1ff420217298 100644
--- a/arch/x86/hyperv/hv_init.c
+++ b/arch/x86/hyperv/hv_init.c
@@ -265,7 +265,7 @@ void __init hyperv_init(void)
 {
 	u64 guest_id, required_msrs;
 	union hv_x64_msr_hypercall_contents hypercall_msr;
-	int cpuhp;
+	int cpuhp, i;
 
 	if (x86_hyper_type != X86_HYPER_MS_HYPERV)
 		return;
@@ -293,6 +293,9 @@ void __init hyperv_init(void)
 	if (!hv_vp_index)
 		return;
 
+	for (i = 0; i < num_possible_cpus(); i++)
+		hv_vp_index[i] = VP_INVAL;
+
 	hv_vp_assist_page = kcalloc(num_possible_cpus(),
 				    sizeof(*hv_vp_assist_page),
 				    GFP_KERNEL);
 	if (!hv_vp_assist_page) {
diff --git a/arch/x86/hyperv/mmu.c b/arch/x86/hyperv/mmu.c
index de27615c51ea..1147e1fed7ff 100644
--- a/arch/x86/hyperv/mmu.c
+++ b/arch/x86/hyperv/mmu.c
@@ -16,6 +16,8 @@
 /* Each gva in gva_list encodes up to 4096 pages to flush */
 #define HV_TLB_FLUSH_UNIT (4096 * PAGE_SIZE)
 
+static u64 hyperv_flush_tlb_others_ex(const struct cpumask *cpus,
+				      const struct flush_tlb_info *info);
 
 /*
 * Fills in gva_list starting from offset. Returns the number of items added.
@@ -93,10 +95,29 @@ static void hyperv_flush_tlb_others(const struct cpumask *cpus,
 	if (cpumask_equal(cpus, cpu_present_mask)) {
 		flush->flags |= HV_FLUSH_ALL_PROCESSORS;
 	} else {
+		/*
+		 * From the supplied CPU set we need to figure out if we can get
+		 * away with cheaper HVCALL_FLUSH_VIRTUAL_ADDRESS_{LIST,SPACE}
+		 * hypercalls. This is possible when the highest VP number in
+		 * the set is < 64. As VP numbers are usually in ascending order
+		 * and match Linux CPU ids, here is an optimization: we check
+		 * the VP number for the highest bit in the supplied set first
+		 * so we can quickly find out if using *_EX hypercalls is a
+		 * must. We will also check all VP numbers when walking the
+		 * supplied CPU set to remain correct in all cases.
+		 */
+		if (hv_cpu_number_to_vp_number(cpumask_last(cpus)) >= 64)
+			goto do_ex_hypercall;
+
 		for_each_cpu(cpu, cpus) {
 			vcpu = hv_cpu_number_to_vp_number(cpu);
-			if (vcpu >= 64)
+			if (vcpu == VP_INVAL) {
+				local_irq_restore(flags);
 				goto do_native;
+			}
+
+			if (vcpu >= 64)
+				goto do_ex_hypercall;
 
 			__set_bit(vcpu, (unsigned long *)
 				  &flush->processor_mask);
@@ -123,7 +144,12 @@ static void hyperv_flush_tlb_others(const struct cpumask *cpus,
 		status = hv_do_rep_hypercall(HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST,
					     gva_n, 0, flush, NULL);
 	}
+	goto check_status;
+
+do_ex_hypercall:
+	status = hyperv_flush_tlb_others_ex(cpus, info);
 
+check_status:
 	local_irq_restore(flags);
 
 	if (!(status & HV_HYPERCALL_RESULT_MASK))
@@ -132,35 +158,22 @@ do_native:
 	native_flush_tlb_others(cpus, info);
 }
 
-static void hyperv_flush_tlb_others_ex(const struct cpumask *cpus,
-				       const struct flush_tlb_info *info)
+static u64 hyperv_flush_tlb_others_ex(const struct cpumask *cpus,
+				      const struct flush_tlb_info *info)
 {
 	int nr_bank = 0, max_gvas, gva_n;
 	struct hv_tlb_flush_ex **flush_pcpu;
 	struct hv_tlb_flush_ex *flush;
-	u64 status = U64_MAX;
-	unsigned long flags;
+	u64 status;
 
-	trace_hyperv_mmu_flush_tlb_others(cpus, info);
-
-	if (!hv_hypercall_pg)
-		goto do_native;
-
-	if (cpumask_empty(cpus))
-		return;
-
-	local_irq_save(flags);
+	if (!(ms_hyperv.hints & HV_X64_EX_PROCESSOR_MASKS_RECOMMENDED))
+		return U64_MAX;
 
 	flush_pcpu = (struct hv_tlb_flush_ex **)
		     this_cpu_ptr(hyperv_pcpu_input_arg);
 
 	flush = *flush_pcpu;
 
-	if (unlikely(!flush)) {
-		local_irq_restore(flags);
-		goto do_native;
-	}
-
 	if (info->mm) {
 		/*
 		 * AddressSpace argument must match the CR3 with PCID bits
@@ -176,15 +189,10 @@ static void hyperv_flush_tlb_others_ex(const struct cpumask *cpus,
 
 	flush->hv_vp_set.valid_bank_mask = 0;
 
-	if (!cpumask_equal(cpus, cpu_present_mask)) {
-		flush->hv_vp_set.format = HV_GENERIC_SET_SPARSE_4K;
-		nr_bank = cpumask_to_vpset(&(flush->hv_vp_set), cpus);
-	}
-
-	if (!nr_bank) {
-		flush->hv_vp_set.format = HV_GENERIC_SET_ALL;
-		flush->flags |= HV_FLUSH_ALL_PROCESSORS;
-	}
+	flush->hv_vp_set.format = HV_GENERIC_SET_SPARSE_4K;
+	nr_bank = cpumask_to_vpset(&(flush->hv_vp_set), cpus);
+	if (nr_bank < 0)
+		return U64_MAX;
 
 	/*
 	 * We can flush not more than max_gvas with one hypercall. Flush the
@@ -213,12 +221,7 @@ static u64 hyperv_flush_tlb_others_ex(const struct cpumask *cpus,
					     gva_n, nr_bank, flush, NULL);
 	}
 
-	local_irq_restore(flags);
-
-	if (!(status & HV_HYPERCALL_RESULT_MASK))
-		return;
-do_native:
-	native_flush_tlb_others(cpus, info);
+	return status;
 }
 
 void hyperv_setup_mmu_ops(void)
@@ -226,11 +229,6 @@ void hyperv_setup_mmu_ops(void)
 	if (!(ms_hyperv.hints & HV_X64_REMOTE_TLB_FLUSH_RECOMMENDED))
 		return;
 
-	if (!(ms_hyperv.hints & HV_X64_EX_PROCESSOR_MASKS_RECOMMENDED)) {
-		pr_info("Using hypercall for remote TLB flush\n");
-		pv_mmu_ops.flush_tlb_others = hyperv_flush_tlb_others;
-	} else {
-		pr_info("Using ext hypercall for remote TLB flush\n");
-		pv_mmu_ops.flush_tlb_others = hyperv_flush_tlb_others_ex;
-	}
+	pr_info("Using hypercall for remote TLB flush\n");
+	pv_mmu_ops.flush_tlb_others = hyperv_flush_tlb_others;
 }
diff --git a/arch/x86/include/asm/apm.h b/arch/x86/include/asm/apm.h
index c356098b6fb9..4d4015ddcf26 100644
--- a/arch/x86/include/asm/apm.h
+++ b/arch/x86/include/asm/apm.h
@@ -7,8 +7,6 @@
 #ifndef _ASM_X86_MACH_DEFAULT_APM_H
 #define _ASM_X86_MACH_DEFAULT_APM_H
 
-#include <asm/nospec-branch.h>
-
 #ifdef APM_ZERO_SEGS
 #	define APM_DO_ZERO_SEGS \
		"pushl %%ds\n\t" \
@@ -34,7 +32,6 @@ static inline void apm_bios_call_asm(u32 func, u32 ebx_in, u32 ecx_in,
 	 * N.B. We do NOT need a cld after the BIOS call
 	 * because we always save and restore the flags.
 	 */
-	firmware_restrict_branch_speculation_start();
 	__asm__ __volatile__(APM_DO_ZERO_SEGS
		"pushl %%edi\n\t"
		"pushl %%ebp\n\t"
@@ -47,7 +44,6 @@ static inline void apm_bios_call_asm(u32 func, u32 ebx_in, u32 ecx_in,
		  "=S" (*esi)
		: "a" (func), "b" (ebx_in), "c" (ecx_in)
		: "memory", "cc");
-	firmware_restrict_branch_speculation_end();
 }
 
 static inline bool apm_bios_call_simple_asm(u32 func, u32 ebx_in,
@@ -60,7 +56,6 @@ static inline bool apm_bios_call_simple_asm(u32 func, u32 ebx_in,
 	 * N.B. We do NOT need a cld after the BIOS call
 	 * because we always save and restore the flags.
 	 */
-	firmware_restrict_branch_speculation_start();
 	__asm__ __volatile__(APM_DO_ZERO_SEGS
		"pushl %%edi\n\t"
		"pushl %%ebp\n\t"
@@ -73,7 +68,6 @@ static inline bool apm_bios_call_simple_asm(u32 func, u32 ebx_in,
		  "=S" (si)
		: "a" (func), "b" (ebx_in), "c" (ecx_in)
		: "memory", "cc");
-	firmware_restrict_branch_speculation_end();
 	return error;
 }
 
diff --git a/arch/x86/include/asm/asm.h b/arch/x86/include/asm/asm.h
index 219faaec51df..990770f9e76b 100644
--- a/arch/x86/include/asm/asm.h
+++ b/arch/x86/include/asm/asm.h
@@ -46,6 +46,65 @@
 #define _ASM_SI		__ASM_REG(si)
 #define _ASM_DI		__ASM_REG(di)
 
+#ifndef __x86_64__
+/* 32 bit */
+
+#define _ASM_ARG1	_ASM_AX
+#define _ASM_ARG2	_ASM_DX
+#define _ASM_ARG3	_ASM_CX
+
+#define _ASM_ARG1L	eax
+#define _ASM_ARG2L	edx
+#define _ASM_ARG3L	ecx
+
+#define _ASM_ARG1W	ax
+#define _ASM_ARG2W	dx
+#define _ASM_ARG3W	cx
+
+#define _ASM_ARG1B	al
+#define _ASM_ARG2B	dl
+#define _ASM_ARG3B	cl
+
+#else
+/* 64 bit */
+
+#define _ASM_ARG1	_ASM_DI
+#define _ASM_ARG2	_ASM_SI
+#define _ASM_ARG3	_ASM_DX
+#define _ASM_ARG4	_ASM_CX
+#define _ASM_ARG5	r8
+#define _ASM_ARG6	r9
+
+#define _ASM_ARG1Q	rdi
+#define _ASM_ARG2Q	rsi
+#define _ASM_ARG3Q	rdx
+#define _ASM_ARG4Q	rcx
+#define _ASM_ARG5Q	r8
+#define _ASM_ARG6Q	r9
+
+#define _ASM_ARG1L	edi
+#define _ASM_ARG2L	esi
+#define _ASM_ARG3L	edx
+#define _ASM_ARG4L	ecx
+#define _ASM_ARG5L	r8d
+#define _ASM_ARG6L	r9d
+
+#define _ASM_ARG1W	di
+#define _ASM_ARG2W	si
+#define _ASM_ARG3W	dx
+#define _ASM_ARG4W	cx
+#define _ASM_ARG5W	r8w
+#define _ASM_ARG6W	r9w
+
+#define _ASM_ARG1B	dil
+#define _ASM_ARG2B	sil
+#define _ASM_ARG3B	dl
+#define _ASM_ARG4B	cl
+#define _ASM_ARG5B	r8b
+#define _ASM_ARG6B	r9b
+
+#endif
+
 /*
 * Macros to generate condition code outputs from inline assembly,
 * The output operand must be type "bool".
diff --git a/arch/x86/include/asm/atomic.h b/arch/x86/include/asm/atomic.h
index 0db6bec95489..b143717b92b3 100644
--- a/arch/x86/include/asm/atomic.h
+++ b/arch/x86/include/asm/atomic.h
@@ -80,6 +80,7 @@ static __always_inline void arch_atomic_sub(int i, atomic_t *v)
 * true if the result is zero, or false for all
 * other cases.
 */
+#define arch_atomic_sub_and_test arch_atomic_sub_and_test
 static __always_inline bool arch_atomic_sub_and_test(int i, atomic_t *v)
 {
	GEN_BINARY_RMWcc(LOCK_PREFIX "subl", v->counter, "er", i, "%0", e);
@@ -91,6 +92,7 @@ static __always_inline bool arch_atomic_sub_and_test(int i, atomic_t *v)
 *
 * Atomically increments @v by 1.
 */
+#define arch_atomic_inc arch_atomic_inc
 static __always_inline void arch_atomic_inc(atomic_t *v)
 {
	asm volatile(LOCK_PREFIX "incl %0"
@@ -103,6 +105,7 @@ static __always_inline void arch_atomic_inc(atomic_t *v)
 *
 * Atomically decrements @v by 1.
 */
+#define arch_atomic_dec arch_atomic_dec
 static __always_inline void arch_atomic_dec(atomic_t *v)
 {
	asm volatile(LOCK_PREFIX "decl %0"
@@ -117,6 +120,7 @@ static __always_inline void arch_atomic_dec(atomic_t *v)
 * returns true if the result is 0, or false for all other
 * cases.
 */
+#define arch_atomic_dec_and_test arch_atomic_dec_and_test
 static __always_inline bool arch_atomic_dec_and_test(atomic_t *v)
 {
	GEN_UNARY_RMWcc(LOCK_PREFIX "decl", v->counter, "%0", e);
@@ -130,6 +134,7 @@ static __always_inline bool arch_atomic_dec_and_test(atomic_t *v)
 * and returns true if the result is zero, or false for all
 * other cases.
*/ +#define arch_atomic_inc_and_test arch_atomic_inc_and_test static __always_inline bool arch_atomic_inc_and_test(atomic_t *v) { GEN_UNARY_RMWcc(LOCK_PREFIX "incl", v->counter, "%0", e); @@ -144,6 +149,7 @@ static __always_inline bool arch_atomic_inc_and_test(atomic_t *v) * if the result is negative, or false when * result is greater than or equal to zero. */ +#define arch_atomic_add_negative arch_atomic_add_negative static __always_inline bool arch_atomic_add_negative(int i, atomic_t *v) { GEN_BINARY_RMWcc(LOCK_PREFIX "addl", v->counter, "er", i, "%0", s); @@ -173,9 +179,6 @@ static __always_inline int arch_atomic_sub_return(int i, atomic_t *v) return arch_atomic_add_return(-i, v); } -#define arch_atomic_inc_return(v) (arch_atomic_add_return(1, v)) -#define arch_atomic_dec_return(v) (arch_atomic_sub_return(1, v)) - static __always_inline int arch_atomic_fetch_add(int i, atomic_t *v) { return xadd(&v->counter, i); @@ -199,7 +202,7 @@ static __always_inline bool arch_atomic_try_cmpxchg(atomic_t *v, int *old, int n static inline int arch_atomic_xchg(atomic_t *v, int new) { - return xchg(&v->counter, new); + return arch_xchg(&v->counter, new); } static inline void arch_atomic_and(int i, atomic_t *v) @@ -253,27 +256,6 @@ static inline int arch_atomic_fetch_xor(int i, atomic_t *v) return val; } -/** - * __arch_atomic_add_unless - add unless the number is already a given value - * @v: pointer of type atomic_t - * @a: the amount to add to v... - * @u: ...unless v is equal to u. - * - * Atomically adds @a to @v, so long as @v was not already @u. - * Returns the old value of @v. - */ -static __always_inline int __arch_atomic_add_unless(atomic_t *v, int a, int u) -{ - int c = arch_atomic_read(v); - - do { - if (unlikely(c == u)) - break; - } while (!arch_atomic_try_cmpxchg(v, &c, c + a)); - - return c; -} - #ifdef CONFIG_X86_32 # include #else diff --git a/arch/x86/include/asm/atomic64_32.h b/arch/x86/include/asm/atomic64_32.h index 92212bf0484f..ef959f02d070 100644 --- a/arch/x86/include/asm/atomic64_32.h +++ b/arch/x86/include/asm/atomic64_32.h @@ -158,6 +158,7 @@ static inline long long arch_atomic64_inc_return(atomic64_t *v) "S" (v) : "memory", "ecx"); return a; } +#define arch_atomic64_inc_return arch_atomic64_inc_return static inline long long arch_atomic64_dec_return(atomic64_t *v) { @@ -166,6 +167,7 @@ static inline long long arch_atomic64_dec_return(atomic64_t *v) "S" (v) : "memory", "ecx"); return a; } +#define arch_atomic64_dec_return arch_atomic64_dec_return /** * arch_atomic64_add - add integer to atomic64 variable @@ -197,26 +199,13 @@ static inline long long arch_atomic64_sub(long long i, atomic64_t *v) return i; } -/** - * arch_atomic64_sub_and_test - subtract value from variable and test result - * @i: integer value to subtract - * @v: pointer to type atomic64_t - * - * Atomically subtracts @i from @v and returns - * true if the result is zero, or false for all - * other cases. - */ -static inline int arch_atomic64_sub_and_test(long long i, atomic64_t *v) -{ - return arch_atomic64_sub_return(i, v) == 0; -} - /** * arch_atomic64_inc - increment atomic64 variable * @v: pointer to type atomic64_t * * Atomically increments @v by 1. */ +#define arch_atomic64_inc arch_atomic64_inc static inline void arch_atomic64_inc(atomic64_t *v) { __alternative_atomic64(inc, inc_return, /* no output */, @@ -229,52 +218,13 @@ static inline void arch_atomic64_inc(atomic64_t *v) * * Atomically decrements @v by 1. 
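
The __arch_atomic_add_unless() removed above is the canonical arch_atomic_try_cmpxchg() retry loop, which the generic atomic layer now supplies. For reference, the same pattern applied to a hypothetical saturating increment (illustrative only, not part of the patch):

static __always_inline int atomic_inc_unless_max(atomic_t *v, int max)
{
	int c = arch_atomic_read(v);

	do {
		if (unlikely(c == max))
			return 0;	/* saturated: no RMW is issued */
		/* on failure, try_cmpxchg refreshes c with the current value */
	} while (!arch_atomic_try_cmpxchg(v, &c, c + 1));

	return 1;
}
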
*/ +#define arch_atomic64_dec arch_atomic64_dec static inline void arch_atomic64_dec(atomic64_t *v) { __alternative_atomic64(dec, dec_return, /* no output */, "S" (v) : "memory", "eax", "ecx", "edx"); } -/** - * arch_atomic64_dec_and_test - decrement and test - * @v: pointer to type atomic64_t - * - * Atomically decrements @v by 1 and - * returns true if the result is 0, or false for all other - * cases. - */ -static inline int arch_atomic64_dec_and_test(atomic64_t *v) -{ - return arch_atomic64_dec_return(v) == 0; -} - -/** - * atomic64_inc_and_test - increment and test - * @v: pointer to type atomic64_t - * - * Atomically increments @v by 1 - * and returns true if the result is zero, or false for all - * other cases. - */ -static inline int arch_atomic64_inc_and_test(atomic64_t *v) -{ - return arch_atomic64_inc_return(v) == 0; -} - -/** - * arch_atomic64_add_negative - add and test if negative - * @i: integer value to add - * @v: pointer to type atomic64_t - * - * Atomically adds @i to @v and returns true - * if the result is negative, or false when - * result is greater than or equal to zero. - */ -static inline int arch_atomic64_add_negative(long long i, atomic64_t *v) -{ - return arch_atomic64_add_return(i, v) < 0; -} - /** * arch_atomic64_add_unless - add unless the number is a given value * @v: pointer of type atomic64_t @@ -295,7 +245,7 @@ static inline int arch_atomic64_add_unless(atomic64_t *v, long long a, return (int)a; } - +#define arch_atomic64_inc_not_zero arch_atomic64_inc_not_zero static inline int arch_atomic64_inc_not_zero(atomic64_t *v) { int r; @@ -304,6 +254,7 @@ static inline int arch_atomic64_inc_not_zero(atomic64_t *v) return r; } +#define arch_atomic64_dec_if_positive arch_atomic64_dec_if_positive static inline long long arch_atomic64_dec_if_positive(atomic64_t *v) { long long r; diff --git a/arch/x86/include/asm/atomic64_64.h b/arch/x86/include/asm/atomic64_64.h index 6106b59d3260..4343d9b4f30e 100644 --- a/arch/x86/include/asm/atomic64_64.h +++ b/arch/x86/include/asm/atomic64_64.h @@ -71,6 +71,7 @@ static inline void arch_atomic64_sub(long i, atomic64_t *v) * true if the result is zero, or false for all * other cases. */ +#define arch_atomic64_sub_and_test arch_atomic64_sub_and_test static inline bool arch_atomic64_sub_and_test(long i, atomic64_t *v) { GEN_BINARY_RMWcc(LOCK_PREFIX "subq", v->counter, "er", i, "%0", e); @@ -82,6 +83,7 @@ static inline bool arch_atomic64_sub_and_test(long i, atomic64_t *v) * * Atomically increments @v by 1. */ +#define arch_atomic64_inc arch_atomic64_inc static __always_inline void arch_atomic64_inc(atomic64_t *v) { asm volatile(LOCK_PREFIX "incq %0" @@ -95,6 +97,7 @@ static __always_inline void arch_atomic64_inc(atomic64_t *v) * * Atomically decrements @v by 1. */ +#define arch_atomic64_dec arch_atomic64_dec static __always_inline void arch_atomic64_dec(atomic64_t *v) { asm volatile(LOCK_PREFIX "decq %0" @@ -110,6 +113,7 @@ static __always_inline void arch_atomic64_dec(atomic64_t *v) * returns true if the result is 0, or false for all other * cases. */ +#define arch_atomic64_dec_and_test arch_atomic64_dec_and_test static inline bool arch_atomic64_dec_and_test(atomic64_t *v) { GEN_UNARY_RMWcc(LOCK_PREFIX "decq", v->counter, "%0", e); @@ -123,6 +127,7 @@ static inline bool arch_atomic64_dec_and_test(atomic64_t *v) * and returns true if the result is zero, or false for all * other cases. 
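
The self-referential defines added throughout these headers (e.g. #define arch_atomic64_dec arch_atomic64_dec) look redundant, but they let the generic atomic layer probe with #ifdef which operations the architecture supplies and synthesize the missing ones from the return-value forms. A simplified sketch of the fallback side; the real generic header differs in detail:

#ifndef arch_atomic64_inc_and_test
/* no arch-specific version: derive it from arch_atomic64_inc_return() */
static inline bool arch_atomic64_inc_and_test(atomic64_t *v)
{
	return arch_atomic64_inc_return(v) == 0;
}
#define arch_atomic64_inc_and_test arch_atomic64_inc_and_test
#endif
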
*/ +#define arch_atomic64_inc_and_test arch_atomic64_inc_and_test static inline bool arch_atomic64_inc_and_test(atomic64_t *v) { GEN_UNARY_RMWcc(LOCK_PREFIX "incq", v->counter, "%0", e); @@ -137,6 +142,7 @@ static inline bool arch_atomic64_inc_and_test(atomic64_t *v) * if the result is negative, or false when * result is greater than or equal to zero. */ +#define arch_atomic64_add_negative arch_atomic64_add_negative static inline bool arch_atomic64_add_negative(long i, atomic64_t *v) { GEN_BINARY_RMWcc(LOCK_PREFIX "addq", v->counter, "er", i, "%0", s); @@ -169,9 +175,6 @@ static inline long arch_atomic64_fetch_sub(long i, atomic64_t *v) return xadd(&v->counter, -i); } -#define arch_atomic64_inc_return(v) (arch_atomic64_add_return(1, (v))) -#define arch_atomic64_dec_return(v) (arch_atomic64_sub_return(1, (v))) - static inline long arch_atomic64_cmpxchg(atomic64_t *v, long old, long new) { return arch_cmpxchg(&v->counter, old, new); @@ -185,46 +188,7 @@ static __always_inline bool arch_atomic64_try_cmpxchg(atomic64_t *v, s64 *old, l static inline long arch_atomic64_xchg(atomic64_t *v, long new) { - return xchg(&v->counter, new); -} - -/** - * arch_atomic64_add_unless - add unless the number is a given value - * @v: pointer of type atomic64_t - * @a: the amount to add to v... - * @u: ...unless v is equal to u. - * - * Atomically adds @a to @v, so long as it was not @u. - * Returns the old value of @v. - */ -static inline bool arch_atomic64_add_unless(atomic64_t *v, long a, long u) -{ - s64 c = arch_atomic64_read(v); - do { - if (unlikely(c == u)) - return false; - } while (!arch_atomic64_try_cmpxchg(v, &c, c + a)); - return true; -} - -#define arch_atomic64_inc_not_zero(v) arch_atomic64_add_unless((v), 1, 0) - -/* - * arch_atomic64_dec_if_positive - decrement by 1 if old value positive - * @v: pointer of type atomic_t - * - * The function returns the old value of *v minus 1, even if - * the atomic variable, v, was not decremented. - */ -static inline long arch_atomic64_dec_if_positive(atomic64_t *v) -{ - s64 dec, c = arch_atomic64_read(v); - do { - dec = c - 1; - if (unlikely(dec < 0)) - break; - } while (!arch_atomic64_try_cmpxchg(v, &c, dec)); - return dec; + return arch_xchg(&v->counter, new); } static inline void arch_atomic64_and(long i, atomic64_t *v) diff --git a/arch/x86/include/asm/barrier.h b/arch/x86/include/asm/barrier.h index 042b5e892ed1..14de0432d288 100644 --- a/arch/x86/include/asm/barrier.h +++ b/arch/x86/include/asm/barrier.h @@ -38,7 +38,7 @@ static inline unsigned long array_index_mask_nospec(unsigned long index, { unsigned long mask; - asm ("cmp %1,%2; sbb %0,%0;" + asm volatile ("cmp %1,%2; sbb %0,%0;" :"=r" (mask) :"g"(size),"r" (index) :"cc"); diff --git a/arch/x86/include/asm/cmpxchg.h b/arch/x86/include/asm/cmpxchg.h index e3efd8a06066..a55d79b233d3 100644 --- a/arch/x86/include/asm/cmpxchg.h +++ b/arch/x86/include/asm/cmpxchg.h @@ -75,7 +75,7 @@ extern void __add_wrong_size(void) * use "asm volatile" and "memory" clobbers to prevent gcc from moving * information around. */ -#define xchg(ptr, v) __xchg_op((ptr), (v), xchg, "") +#define arch_xchg(ptr, v) __xchg_op((ptr), (v), xchg, "") /* * Atomic compare and exchange. 
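
In the barrier.h hunk above, cmp %1,%2 computes index - size, so the carry flag is set exactly when index < size and sbb %0,%0 then yields an all-ones mask (or 0 when index is out of range). Roughly how that mask is consumed, assuming the array_index_nospec() pattern from <linux/nospec.h> and made-up names; the caller is assumed to have already bounds-checked architecturally, the mask only tames speculation:

static unsigned long read_entry_nospec(const unsigned long *table,
				       unsigned long nr_entries,
				       unsigned long user_index)
{
	unsigned long mask = array_index_mask_nospec(user_index, nr_entries);

	user_index &= mask;		/* forced to 0 when out of bounds */
	return table[user_index];	/* no OOB load, even speculatively */
}
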
Compare OLD with MEM, if identical, diff --git a/arch/x86/include/asm/cmpxchg_64.h b/arch/x86/include/asm/cmpxchg_64.h index bfca3b346c74..072e5459fe2f 100644 --- a/arch/x86/include/asm/cmpxchg_64.h +++ b/arch/x86/include/asm/cmpxchg_64.h @@ -10,13 +10,13 @@ static inline void set_64bit(volatile u64 *ptr, u64 val) #define arch_cmpxchg64(ptr, o, n) \ ({ \ BUILD_BUG_ON(sizeof(*(ptr)) != 8); \ - cmpxchg((ptr), (o), (n)); \ + arch_cmpxchg((ptr), (o), (n)); \ }) #define arch_cmpxchg64_local(ptr, o, n) \ ({ \ BUILD_BUG_ON(sizeof(*(ptr)) != 8); \ - cmpxchg_local((ptr), (o), (n)); \ + arch_cmpxchg_local((ptr), (o), (n)); \ }) #define system_has_cmpxchg_double() boot_cpu_has(X86_FEATURE_CX16) diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h index 5701f5cecd31..b5c60faf8429 100644 --- a/arch/x86/include/asm/cpufeatures.h +++ b/arch/x86/include/asm/cpufeatures.h @@ -219,6 +219,7 @@ #define X86_FEATURE_IBPB ( 7*32+26) /* Indirect Branch Prediction Barrier */ #define X86_FEATURE_STIBP ( 7*32+27) /* Single Thread Indirect Branch Predictors */ #define X86_FEATURE_ZEN ( 7*32+28) /* "" CPU is AMD family 0x17 (Zen) */ +#define X86_FEATURE_IBRS_ENHANCED ( 7*32+29) /* Enhanced IBRS */ /* Virtualization flags: Linux defined, word 8 */ #define X86_FEATURE_TPR_SHADOW ( 8*32+ 0) /* Intel TPR Shadow */ @@ -229,7 +230,7 @@ #define X86_FEATURE_VMMCALL ( 8*32+15) /* Prefer VMMCALL to VMCALL */ #define X86_FEATURE_XENPV ( 8*32+16) /* "" Xen paravirtual guest */ - +#define X86_FEATURE_EPT_AD ( 8*32+17) /* Intel Extended Page Table access-dirty bit */ /* Intel-defined CPU features, CPUID level 0x00000007:0 (EBX), word 9 */ #define X86_FEATURE_FSGSBASE ( 9*32+ 0) /* RDFSBASE, WRFSBASE, RDGSBASE, WRGSBASE instructions*/ diff --git a/arch/x86/include/asm/hw_breakpoint.h b/arch/x86/include/asm/hw_breakpoint.h index f59c39835a5a..a1f0e90d0818 100644 --- a/arch/x86/include/asm/hw_breakpoint.h +++ b/arch/x86/include/asm/hw_breakpoint.h @@ -49,11 +49,14 @@ static inline int hw_breakpoint_slots(int type) return HBP_NUM; } +struct perf_event_attr; struct perf_event; struct pmu; -extern int arch_check_bp_in_kernelspace(struct perf_event *bp); -extern int arch_validate_hwbkpt_settings(struct perf_event *bp); +extern int arch_check_bp_in_kernelspace(struct arch_hw_breakpoint *hw); +extern int hw_breakpoint_arch_parse(struct perf_event *bp, + const struct perf_event_attr *attr, + struct arch_hw_breakpoint *hw); extern int hw_breakpoint_exceptions_notify(struct notifier_block *unused, unsigned long val, void *data); diff --git a/arch/x86/include/asm/intel-family.h b/arch/x86/include/asm/intel-family.h index cf090e584202..7ed08a7c3398 100644 --- a/arch/x86/include/asm/intel-family.h +++ b/arch/x86/include/asm/intel-family.h @@ -76,4 +76,17 @@ #define INTEL_FAM6_XEON_PHI_KNL 0x57 /* Knights Landing */ #define INTEL_FAM6_XEON_PHI_KNM 0x85 /* Knights Mill */ +/* Useful macros */ +#define INTEL_CPU_FAM_ANY(_family, _model, _driver_data) \ +{ \ + .vendor = X86_VENDOR_INTEL, \ + .family = _family, \ + .model = _model, \ + .feature = X86_FEATURE_ANY, \ + .driver_data = (kernel_ulong_t)&_driver_data \ +} + +#define INTEL_CPU_FAM6(_model, _driver_data) \ + INTEL_CPU_FAM_ANY(6, INTEL_FAM6_##_model, _driver_data) + #endif /* _ASM_X86_INTEL_FAMILY_H */ diff --git a/arch/x86/include/asm/intel-mid.h b/arch/x86/include/asm/intel-mid.h index fe04491130ae..52f815a80539 100644 --- a/arch/x86/include/asm/intel-mid.h +++ b/arch/x86/include/asm/intel-mid.h @@ -80,35 +80,6 @@ enum intel_mid_cpu_type { extern enum 
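
A usage sketch for the new INTEL_CPU_FAM6() helper above: it expands to a struct x86_cpu_id initializer, so a match table needs one line per model. The driver data and table below are hypothetical:

static struct example_data { int quirk; } skx_data = { .quirk = 1 };

static const struct x86_cpu_id example_cpu_ids[] = {
	INTEL_CPU_FAM6(SKYLAKE_X, skx_data),
	INTEL_CPU_FAM6(XEON_PHI_KNL, skx_data),
	{}
};
MODULE_DEVICE_TABLE(x86cpu, example_cpu_ids);

/* at probe time: x86_match_cpu(example_cpu_ids) returns the matching
 * entry (or NULL), and ->driver_data points back at skx_data */
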
intel_mid_cpu_type __intel_mid_cpu_chip; -/** - * struct intel_mid_ops - Interface between intel-mid & sub archs - * @arch_setup: arch_setup function to re-initialize platform - * structures (x86_init, x86_platform_init) - * - * This structure can be extended if any new interface is required - * between intel-mid & its sub arch files. - */ -struct intel_mid_ops { - void (*arch_setup)(void); -}; - -/* Helper API's for INTEL_MID_OPS_INIT */ -#define DECLARE_INTEL_MID_OPS_INIT(cpuname, cpuid) \ - [cpuid] = get_##cpuname##_ops - -/* Maximum number of CPU ops */ -#define MAX_CPU_OPS(a) (sizeof(a)/sizeof(void *)) - -/* - * For every new cpu addition, a weak get__ops() function needs be - * declared in arch/x86/platform/intel_mid/intel_mid_weak_decls.h. - */ -#define INTEL_MID_OPS_INIT { \ - DECLARE_INTEL_MID_OPS_INIT(penwell, INTEL_MID_CPU_CHIP_PENWELL), \ - DECLARE_INTEL_MID_OPS_INIT(cloverview, INTEL_MID_CPU_CHIP_CLOVERVIEW), \ - DECLARE_INTEL_MID_OPS_INIT(tangier, INTEL_MID_CPU_CHIP_TANGIER) \ -}; - #ifdef CONFIG_X86_INTEL_MID static inline enum intel_mid_cpu_type intel_mid_identify_cpu(void) @@ -136,20 +107,6 @@ enum intel_mid_timer_options { extern enum intel_mid_timer_options intel_mid_timer_options; -/* - * Penwell uses spread spectrum clock, so the freq number is not exactly - * the same as reported by MSR based on SDM. - */ -#define FSB_FREQ_83SKU 83200 -#define FSB_FREQ_100SKU 99840 -#define FSB_FREQ_133SKU 133000 - -#define FSB_FREQ_167SKU 167000 -#define FSB_FREQ_200SKU 200000 -#define FSB_FREQ_267SKU 267000 -#define FSB_FREQ_333SKU 333000 -#define FSB_FREQ_400SKU 400000 - /* Bus Select SoC Fuse value */ #define BSEL_SOC_FUSE_MASK 0x7 /* FSB 133MHz */ diff --git a/arch/x86/include/asm/intel_ds.h b/arch/x86/include/asm/intel_ds.h index 62a9f4966b42..ae26df1c2789 100644 --- a/arch/x86/include/asm/intel_ds.h +++ b/arch/x86/include/asm/intel_ds.h @@ -8,6 +8,7 @@ /* The maximal number of PEBS events: */ #define MAX_PEBS_EVENTS 8 +#define MAX_FIXED_PEBS_EVENTS 3 /* * A debug store configuration. @@ -23,7 +24,7 @@ struct debug_store { u64 pebs_index; u64 pebs_absolute_maximum; u64 pebs_interrupt_threshold; - u64 pebs_event_reset[MAX_PEBS_EVENTS]; + u64 pebs_event_reset[MAX_PEBS_EVENTS + MAX_FIXED_PEBS_EVENTS]; } __aligned(PAGE_SIZE); DECLARE_PER_CPU_PAGE_ALIGNED(struct debug_store, cpu_debug_store); diff --git a/arch/x86/include/asm/irqflags.h b/arch/x86/include/asm/irqflags.h index 89f08955fff7..c14f2a74b2be 100644 --- a/arch/x86/include/asm/irqflags.h +++ b/arch/x86/include/asm/irqflags.h @@ -13,7 +13,9 @@ * Interrupt control: */ -static inline unsigned long native_save_fl(void) +/* Declaration required for gcc < 4.9 to prevent -Werror=missing-prototypes */ +extern inline unsigned long native_save_fl(void); +extern inline unsigned long native_save_fl(void) { unsigned long flags; diff --git a/arch/x86/include/asm/kprobes.h b/arch/x86/include/asm/kprobes.h index 367d99cff426..c8cec1b39b88 100644 --- a/arch/x86/include/asm/kprobes.h +++ b/arch/x86/include/asm/kprobes.h @@ -78,7 +78,7 @@ struct arch_specific_insn { * boostable = true: This instruction has been boosted: we have * added a relative jump after the instruction copy in insn, * so no single-step and fixup are needed (unless there's - * a post_handler or break_handler). + * a post_handler). 
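
On the irqflags.h change above: under gnu_inline semantics, extern inline emits no out-of-line body, so the extra declaration only exists to give gcc < 4.9 a prototype; the out-of-line copy that non-inlined callers need comes from the new arch/x86/kernel/irqflags.o added later in this series. The complete pairing, with the well-known pushf/pop body filled in for context (the body is not shown in the hunk above):

extern inline unsigned long native_save_fl(void);
extern inline unsigned long native_save_fl(void)
{
	unsigned long flags;

	/*
	 * "=rm" lets the compiler pop straight to memory; the "memory"
	 * clobber keeps accesses from being reordered across the read.
	 */
	asm volatile("# __raw_save_flags\n\t"
		     "pushf ; pop %0"
		     : "=rm" (flags)
		     : /* no input */
		     : "memory");
	return flags;
}
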
*/ bool boostable; bool if_modifier; @@ -111,9 +111,6 @@ struct kprobe_ctlblk { unsigned long kprobe_status; unsigned long kprobe_old_flags; unsigned long kprobe_saved_flags; - unsigned long *jprobe_saved_sp; - struct pt_regs jprobe_saved_regs; - kprobe_opcode_t jprobes_stack[MAX_STACK_SIZE]; struct prev_kprobe prev_kprobe; }; diff --git a/arch/x86/include/asm/kvm_guest.h b/arch/x86/include/asm/kvm_guest.h deleted file mode 100644 index 46185263d9c2..000000000000 --- a/arch/x86/include/asm/kvm_guest.h +++ /dev/null @@ -1,7 +0,0 @@ -/* SPDX-License-Identifier: GPL-2.0 */ -#ifndef _ASM_X86_KVM_GUEST_H -#define _ASM_X86_KVM_GUEST_H - -int kvm_setup_vsyscall_timeinfo(void); - -#endif /* _ASM_X86_KVM_GUEST_H */ diff --git a/arch/x86/include/asm/kvm_para.h b/arch/x86/include/asm/kvm_para.h index 3aea2658323a..4c723632c036 100644 --- a/arch/x86/include/asm/kvm_para.h +++ b/arch/x86/include/asm/kvm_para.h @@ -7,7 +7,6 @@ #include extern void kvmclock_init(void); -extern int kvm_register_clock(char *txt); #ifdef CONFIG_KVM_GUEST bool kvm_check_and_clear_guest_paused(void); diff --git a/arch/x86/include/asm/mmu_context.h b/arch/x86/include/asm/mmu_context.h index bbc796eb0a3b..eeeb9289c764 100644 --- a/arch/x86/include/asm/mmu_context.h +++ b/arch/x86/include/asm/mmu_context.h @@ -71,12 +71,7 @@ struct ldt_struct { static inline void *ldt_slot_va(int slot) { -#ifdef CONFIG_X86_64 return (void *)(LDT_BASE_ADDR + LDT_SLOT_STRIDE * slot); -#else - BUG(); - return (void *)fix_to_virt(FIX_HOLE); -#endif } /* diff --git a/arch/x86/include/asm/mshyperv.h b/arch/x86/include/asm/mshyperv.h index 3cd14311edfa..19886fef1dfc 100644 --- a/arch/x86/include/asm/mshyperv.h +++ b/arch/x86/include/asm/mshyperv.h @@ -9,6 +9,8 @@ #include #include +#define VP_INVAL U32_MAX + struct ms_hyperv_info { u32 features; u32 misc_features; @@ -20,7 +22,6 @@ struct ms_hyperv_info { extern struct ms_hyperv_info ms_hyperv; - /* * Generate the guest ID. */ @@ -193,6 +194,40 @@ static inline u64 hv_do_fast_hypercall8(u16 code, u64 input1) return hv_status; } +/* Fast hypercall with 16 bytes of input */ +static inline u64 hv_do_fast_hypercall16(u16 code, u64 input1, u64 input2) +{ + u64 hv_status, control = (u64)code | HV_HYPERCALL_FAST_BIT; + +#ifdef CONFIG_X86_64 + { + __asm__ __volatile__("mov %4, %%r8\n" + CALL_NOSPEC + : "=a" (hv_status), ASM_CALL_CONSTRAINT, + "+c" (control), "+d" (input1) + : "r" (input2), + THUNK_TARGET(hv_hypercall_pg) + : "cc", "r8", "r9", "r10", "r11"); + } +#else + { + u32 input1_hi = upper_32_bits(input1); + u32 input1_lo = lower_32_bits(input1); + u32 input2_hi = upper_32_bits(input2); + u32 input2_lo = lower_32_bits(input2); + + __asm__ __volatile__ (CALL_NOSPEC + : "=A"(hv_status), + "+c"(input1_lo), ASM_CALL_CONSTRAINT + : "A" (control), "b" (input1_hi), + "D"(input2_hi), "S"(input2_lo), + THUNK_TARGET(hv_hypercall_pg) + : "cc"); + } +#endif + return hv_status; +} + /* * Rep hypercalls. Callers of these functions are supposed to ensure that * rep_count and varhead_size comply with Hyper-V hypercall definition.
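
A hedged sketch of invoking the new 16-byte fast hypercall: on x86_64 the two input quadwords travel in rdx and r8 (control in rcx), so no hypercall input page is touched. The code value and inputs below are placeholders, not a real Hyper-V call site:

static u64 example_fast_call(u16 code, u64 input_lo, u64 input_hi)
{
	/* HV_HYPERCALL_FAST_BIT is OR'ed into control by the helper */
	u64 status = hv_do_fast_hypercall16(code, input_lo, input_hi);

	if ((status & HV_HYPERCALL_RESULT_MASK) != HV_STATUS_SUCCESS)
		pr_err("fast hypercall 0x%x failed: 0x%llx\n", code, status);
	return status;
}
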
@@ -281,6 +316,8 @@ static inline int cpumask_to_vpset(struct hv_vpset *vpset, */ for_each_cpu(cpu, cpus) { vcpu = hv_cpu_number_to_vp_number(cpu); + if (vcpu == VP_INVAL) + return -1; vcpu_bank = vcpu / 64; vcpu_offset = vcpu % 64; __set_bit(vcpu_offset, (unsigned long *) diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h index f6f6c63da62f..fd2a8c1b88bc 100644 --- a/arch/x86/include/asm/nospec-branch.h +++ b/arch/x86/include/asm/nospec-branch.h @@ -214,7 +214,7 @@ enum spectre_v2_mitigation { SPECTRE_V2_RETPOLINE_MINIMAL_AMD, SPECTRE_V2_RETPOLINE_GENERIC, SPECTRE_V2_RETPOLINE_AMD, - SPECTRE_V2_IBRS, + SPECTRE_V2_IBRS_ENHANCED, }; /* The Speculative Store Bypass disable variants */ diff --git a/arch/x86/include/asm/orc_types.h b/arch/x86/include/asm/orc_types.h index 9c9dc579bd7d..46f516dd80ce 100644 --- a/arch/x86/include/asm/orc_types.h +++ b/arch/x86/include/asm/orc_types.h @@ -88,6 +88,7 @@ struct orc_entry { unsigned sp_reg:4; unsigned bp_reg:4; unsigned type:2; + unsigned end:1; } __packed; /* @@ -101,6 +102,7 @@ struct unwind_hint { s16 sp_offset; u8 sp_reg; u8 type; + u8 end; }; #endif /* __ASSEMBLY__ */ diff --git a/arch/x86/include/asm/percpu.h b/arch/x86/include/asm/percpu.h index a06b07399d17..e9202a0de8f0 100644 --- a/arch/x86/include/asm/percpu.h +++ b/arch/x86/include/asm/percpu.h @@ -450,9 +450,10 @@ do { \ bool __ret; \ typeof(pcp1) __o1 = (o1), __n1 = (n1); \ typeof(pcp2) __o2 = (o2), __n2 = (n2); \ - asm volatile("cmpxchg8b "__percpu_arg(1)"\n\tsetz %0\n\t" \ - : "=a" (__ret), "+m" (pcp1), "+m" (pcp2), "+d" (__o2) \ - : "b" (__n1), "c" (__n2), "a" (__o1)); \ + asm volatile("cmpxchg8b "__percpu_arg(1) \ + CC_SET(z) \ + : CC_OUT(z) (__ret), "+m" (pcp1), "+m" (pcp2), "+a" (__o1), "+d" (__o2) \ + : "b" (__n1), "c" (__n2)); \ __ret; \ }) diff --git a/arch/x86/include/asm/pgalloc.h b/arch/x86/include/asm/pgalloc.h index ada6410fd2ec..fbd578daa66e 100644 --- a/arch/x86/include/asm/pgalloc.h +++ b/arch/x86/include/asm/pgalloc.h @@ -184,6 +184,9 @@ static inline p4d_t *p4d_alloc_one(struct mm_struct *mm, unsigned long addr) static inline void p4d_free(struct mm_struct *mm, p4d_t *p4d) { + if (!pgtable_l5_enabled()) + return; + BUG_ON((unsigned long)p4d & (PAGE_SIZE-1)); free_page((unsigned long)p4d); } diff --git a/arch/x86/include/asm/pgtable-2level.h b/arch/x86/include/asm/pgtable-2level.h index 685ffe8a0eaf..c399ea5eea41 100644 --- a/arch/x86/include/asm/pgtable-2level.h +++ b/arch/x86/include/asm/pgtable-2level.h @@ -19,6 +19,9 @@ static inline void native_set_pte(pte_t *ptep , pte_t pte) static inline void native_set_pmd(pmd_t *pmdp, pmd_t pmd) { +#ifdef CONFIG_PAGE_TABLE_ISOLATION + pmd.pud.p4d.pgd = pti_set_user_pgtbl(&pmdp->pud.p4d.pgd, pmd.pud.p4d.pgd); +#endif *pmdp = pmd; } @@ -58,6 +61,9 @@ static inline pte_t native_ptep_get_and_clear(pte_t *xp) #ifdef CONFIG_SMP static inline pmd_t native_pmdp_get_and_clear(pmd_t *xp) { +#ifdef CONFIG_PAGE_TABLE_ISOLATION + pti_set_user_pgtbl(&xp->pud.p4d.pgd, __pgd(0)); +#endif return __pmd(xchg((pmdval_t *)xp, 0)); } #else @@ -67,6 +73,9 @@ static inline pmd_t native_pmdp_get_and_clear(pmd_t *xp) #ifdef CONFIG_SMP static inline pud_t native_pudp_get_and_clear(pud_t *xp) { +#ifdef CONFIG_PAGE_TABLE_ISOLATION + pti_set_user_pgtbl(&xp->p4d.pgd, __pgd(0)); +#endif return __pud(xchg((pudval_t *)xp, 0)); } #else diff --git a/arch/x86/include/asm/pgtable-2level_types.h b/arch/x86/include/asm/pgtable-2level_types.h index f982ef808e7e..6deb6cd236e3 100644 --- 
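
On the percpu.h hunk above: CC_SET()/CC_OUT() (the asm.h condition-code-output macros) let the compiler consume the Z flag directly as a bool output, with an automatic setz fallback on compilers without flag outputs, which is why the hand-written "setz %0" disappears. A minimal standalone use of the pattern; the function is invented for illustration:

static inline bool dec_and_test_u64(u64 *p)
{
	bool ret;

	/* Z flag after the decrement becomes the output directly */
	asm volatile("decq %[val]"
		     CC_SET(z)
		     : CC_OUT(z) (ret), [val] "+m" (*p));
	return ret;
}
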
a/arch/x86/include/asm/pgtable-2level_types.h +++ b/arch/x86/include/asm/pgtable-2level_types.h @@ -35,4 +35,7 @@ typedef union { #define PTRS_PER_PTE 1024 +/* This covers all VMSPLIT_* and VMSPLIT_*_OPT variants */ +#define PGD_KERNEL_START (CONFIG_PAGE_OFFSET >> PGDIR_SHIFT) + #endif /* _ASM_X86_PGTABLE_2LEVEL_DEFS_H */ diff --git a/arch/x86/include/asm/pgtable-3level.h b/arch/x86/include/asm/pgtable-3level.h index f24df59c40b2..f2ca3139ca22 100644 --- a/arch/x86/include/asm/pgtable-3level.h +++ b/arch/x86/include/asm/pgtable-3level.h @@ -98,6 +98,9 @@ static inline void native_set_pmd(pmd_t *pmdp, pmd_t pmd) static inline void native_set_pud(pud_t *pudp, pud_t pud) { +#ifdef CONFIG_PAGE_TABLE_ISOLATION + pud.p4d.pgd = pti_set_user_pgtbl(&pudp->p4d.pgd, pud.p4d.pgd); +#endif set_64bit((unsigned long long *)(pudp), native_pud_val(pud)); } @@ -229,6 +232,10 @@ static inline pud_t native_pudp_get_and_clear(pud_t *pudp) { union split_pud res, *orig = (union split_pud *)pudp; +#ifdef CONFIG_PAGE_TABLE_ISOLATION + pti_set_user_pgtbl(&pudp->p4d.pgd, __pgd(0)); +#endif + /* xchg acts as a barrier before setting of the high bits */ res.pud_low = xchg(&orig->pud_low, 0); res.pud_high = orig->pud_high; diff --git a/arch/x86/include/asm/pgtable-3level_types.h b/arch/x86/include/asm/pgtable-3level_types.h index 6a59a6d0cc50..858358a82b14 100644 --- a/arch/x86/include/asm/pgtable-3level_types.h +++ b/arch/x86/include/asm/pgtable-3level_types.h @@ -21,9 +21,10 @@ typedef union { #endif /* !__ASSEMBLY__ */ #ifdef CONFIG_PARAVIRT -#define SHARED_KERNEL_PMD (pv_info.shared_kernel_pmd) +#define SHARED_KERNEL_PMD ((!static_cpu_has(X86_FEATURE_PTI) && \ + (pv_info.shared_kernel_pmd))) #else -#define SHARED_KERNEL_PMD 1 +#define SHARED_KERNEL_PMD (!static_cpu_has(X86_FEATURE_PTI)) #endif /* @@ -45,5 +46,6 @@ typedef union { #define PTRS_PER_PTE 512 #define MAX_POSSIBLE_PHYSMEM_BITS 36 +#define PGD_KERNEL_START (CONFIG_PAGE_OFFSET >> PGDIR_SHIFT) #endif /* _ASM_X86_PGTABLE_3LEVEL_DEFS_H */ diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h index 99ecde23c3ec..a1cb3339da8d 100644 --- a/arch/x86/include/asm/pgtable.h +++ b/arch/x86/include/asm/pgtable.h @@ -30,11 +30,14 @@ int __init __early_make_pgtable(unsigned long address, pmdval_t pmd); void ptdump_walk_pgd_level(struct seq_file *m, pgd_t *pgd); void ptdump_walk_pgd_level_debugfs(struct seq_file *m, pgd_t *pgd, bool user); void ptdump_walk_pgd_level_checkwx(void); +void ptdump_walk_user_pgd_level_checkwx(void); #ifdef CONFIG_DEBUG_WX -#define debug_checkwx() ptdump_walk_pgd_level_checkwx() +#define debug_checkwx() ptdump_walk_pgd_level_checkwx() +#define debug_checkwx_user() ptdump_walk_user_pgd_level_checkwx() #else -#define debug_checkwx() do { } while (0) +#define debug_checkwx() do { } while (0) +#define debug_checkwx_user() do { } while (0) #endif /* @@ -640,8 +643,31 @@ static inline int is_new_memtype_allowed(u64 paddr, unsigned long size, pmd_t *populate_extra_pmd(unsigned long vaddr); pte_t *populate_extra_pte(unsigned long vaddr); + +#ifdef CONFIG_PAGE_TABLE_ISOLATION +pgd_t __pti_set_user_pgtbl(pgd_t *pgdp, pgd_t pgd); + +/* + * Take a PGD location (pgdp) and a pgd value that needs to be set there. + * Populates the user and returns the resulting PGD that must be set in + * the kernel copy of the page tables. 
+ */ +static inline pgd_t pti_set_user_pgtbl(pgd_t *pgdp, pgd_t pgd) +{ + if (!static_cpu_has(X86_FEATURE_PTI)) + return pgd; + return __pti_set_user_pgtbl(pgdp, pgd); +} +#else /* CONFIG_PAGE_TABLE_ISOLATION */ +static inline pgd_t pti_set_user_pgtbl(pgd_t *pgdp, pgd_t pgd) +{ + return pgd; +} +#endif /* CONFIG_PAGE_TABLE_ISOLATION */ + #endif /* __ASSEMBLY__ */ + #ifdef CONFIG_X86_32 # include #else @@ -898,7 +924,7 @@ static inline unsigned long pgd_page_vaddr(pgd_t pgd) #define pgd_page(pgd) pfn_to_page(pgd_pfn(pgd)) /* to find an entry in a page-table-directory. */ -static __always_inline p4d_t *p4d_offset(pgd_t *pgd, unsigned long address) +static inline p4d_t *p4d_offset(pgd_t *pgd, unsigned long address) { if (!pgtable_l5_enabled()) return (p4d_t *)pgd; @@ -1154,6 +1180,70 @@ static inline pmd_t pmdp_establish(struct vm_area_struct *vma, } } #endif +/* + * Page table pages are page-aligned. The lower half of the top + * level is used for userspace and the top half for the kernel. + * + * Returns true for parts of the PGD that map userspace and + * false for the parts that map the kernel. + */ +static inline bool pgdp_maps_userspace(void *__ptr) +{ + unsigned long ptr = (unsigned long)__ptr; + + return (((ptr & ~PAGE_MASK) / sizeof(pgd_t)) < PGD_KERNEL_START); +} + +static inline int pgd_large(pgd_t pgd) { return 0; } + +#ifdef CONFIG_PAGE_TABLE_ISOLATION +/* + * All top-level PAGE_TABLE_ISOLATION page tables are order-1 pages + * (8k-aligned and 8k in size). The kernel one is at the beginning 4k and + * the user one is in the last 4k. To switch between them, you + * just need to flip the 12th bit in their addresses. + */ +#define PTI_PGTABLE_SWITCH_BIT PAGE_SHIFT + +/* + * This generates better code than the inline assembly in + * __set_bit(). 
+ */ +static inline void *ptr_set_bit(void *ptr, int bit) +{ + unsigned long __ptr = (unsigned long)ptr; + + __ptr |= BIT(bit); + return (void *)__ptr; +} +static inline void *ptr_clear_bit(void *ptr, int bit) +{ + unsigned long __ptr = (unsigned long)ptr; + + __ptr &= ~BIT(bit); + return (void *)__ptr; +} + +static inline pgd_t *kernel_to_user_pgdp(pgd_t *pgdp) +{ + return ptr_set_bit(pgdp, PTI_PGTABLE_SWITCH_BIT); +} + +static inline pgd_t *user_to_kernel_pgdp(pgd_t *pgdp) +{ + return ptr_clear_bit(pgdp, PTI_PGTABLE_SWITCH_BIT); +} + +static inline p4d_t *kernel_to_user_p4dp(p4d_t *p4dp) +{ + return ptr_set_bit(p4dp, PTI_PGTABLE_SWITCH_BIT); +} + +static inline p4d_t *user_to_kernel_p4dp(p4d_t *p4dp) +{ + return ptr_clear_bit(p4dp, PTI_PGTABLE_SWITCH_BIT); +} +#endif /* CONFIG_PAGE_TABLE_ISOLATION */ /* * clone_pgd_range(pgd_t *dst, pgd_t *src, int count); diff --git a/arch/x86/include/asm/pgtable_32.h b/arch/x86/include/asm/pgtable_32.h index 88a056b01db4..b3ec519e3982 100644 --- a/arch/x86/include/asm/pgtable_32.h +++ b/arch/x86/include/asm/pgtable_32.h @@ -34,8 +34,6 @@ static inline void check_pgt_cache(void) { } void paging_init(void); void sync_initial_page_table(void); -static inline int pgd_large(pgd_t pgd) { return 0; } - /* * Define this if things work differently on an i386 and an i486: * it will (on an i486) warn about kernel memory accesses that are diff --git a/arch/x86/include/asm/pgtable_32_types.h b/arch/x86/include/asm/pgtable_32_types.h index d9a001a4a872..b0bc0fff5f1f 100644 --- a/arch/x86/include/asm/pgtable_32_types.h +++ b/arch/x86/include/asm/pgtable_32_types.h @@ -50,13 +50,18 @@ extern bool __vmalloc_start_set; /* set once high_memory is set */ ((FIXADDR_TOT_START - PAGE_SIZE * (CPU_ENTRY_AREA_PAGES + 1)) \ & PMD_MASK) -#define PKMAP_BASE \ +#define LDT_BASE_ADDR \ ((CPU_ENTRY_AREA_BASE - PAGE_SIZE) & PMD_MASK) +#define LDT_END_ADDR (LDT_BASE_ADDR + PMD_SIZE) + +#define PKMAP_BASE \ + ((LDT_BASE_ADDR - PAGE_SIZE) & PMD_MASK) + #ifdef CONFIG_HIGHMEM # define VMALLOC_END (PKMAP_BASE - 2 * PAGE_SIZE) #else -# define VMALLOC_END (CPU_ENTRY_AREA_BASE - 2 * PAGE_SIZE) +# define VMALLOC_END (LDT_BASE_ADDR - 2 * PAGE_SIZE) #endif #define MODULES_VADDR VMALLOC_START diff --git a/arch/x86/include/asm/pgtable_64.h b/arch/x86/include/asm/pgtable_64.h index 0fdcd21dadbd..acb6970e7bcf 100644 --- a/arch/x86/include/asm/pgtable_64.h +++ b/arch/x86/include/asm/pgtable_64.h @@ -132,91 +132,7 @@ static inline pud_t native_pudp_get_and_clear(pud_t *xp) #endif } -#ifdef CONFIG_PAGE_TABLE_ISOLATION -/* - * All top-level PAGE_TABLE_ISOLATION page tables are order-1 pages - * (8k-aligned and 8k in size). The kernel one is at the beginning 4k and - * the user one is in the last 4k. To switch between them, you - * just need to flip the 12th bit in their addresses. - */ -#define PTI_PGTABLE_SWITCH_BIT PAGE_SHIFT - -/* - * This generates better code than the inline assembly in - * __set_bit(). 
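
A worked example of the switch-bit arithmetic above (addresses made up): the order-1 allocation guarantees 8k alignment, so the kernel and user halves of a PGD pair differ only in bit PAGE_SHIFT (12), and the conversion is a single OR or AND-NOT.

static void pgd_pair_example(void)
{
	/* hypothetical 8k-aligned order-1 allocation for the pair */
	pgd_t *kernel_pgd = (pgd_t *)0xffff888000244000UL;
	pgd_t *user_pgd = kernel_to_user_pgdp(kernel_pgd);

	/* user_pgd == 0xffff888000245000: only bit 12 differs */
	WARN_ON(user_to_kernel_pgdp(user_pgd) != kernel_pgd);
}
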
- */ -static inline void *ptr_set_bit(void *ptr, int bit) -{ - unsigned long __ptr = (unsigned long)ptr; - - __ptr |= BIT(bit); - return (void *)__ptr; -} -static inline void *ptr_clear_bit(void *ptr, int bit) -{ - unsigned long __ptr = (unsigned long)ptr; - - __ptr &= ~BIT(bit); - return (void *)__ptr; -} - -static inline pgd_t *kernel_to_user_pgdp(pgd_t *pgdp) -{ - return ptr_set_bit(pgdp, PTI_PGTABLE_SWITCH_BIT); -} - -static inline pgd_t *user_to_kernel_pgdp(pgd_t *pgdp) -{ - return ptr_clear_bit(pgdp, PTI_PGTABLE_SWITCH_BIT); -} - -static inline p4d_t *kernel_to_user_p4dp(p4d_t *p4dp) -{ - return ptr_set_bit(p4dp, PTI_PGTABLE_SWITCH_BIT); -} - -static inline p4d_t *user_to_kernel_p4dp(p4d_t *p4dp) -{ - return ptr_clear_bit(p4dp, PTI_PGTABLE_SWITCH_BIT); -} -#endif /* CONFIG_PAGE_TABLE_ISOLATION */ - -/* - * Page table pages are page-aligned. The lower half of the top - * level is used for userspace and the top half for the kernel. - * - * Returns true for parts of the PGD that map userspace and - * false for the parts that map the kernel. - */ -static inline bool pgdp_maps_userspace(void *__ptr) -{ - unsigned long ptr = (unsigned long)__ptr; - - return (ptr & ~PAGE_MASK) < (PAGE_SIZE / 2); -} - -#ifdef CONFIG_PAGE_TABLE_ISOLATION -pgd_t __pti_set_user_pgd(pgd_t *pgdp, pgd_t pgd); - -/* - * Take a PGD location (pgdp) and a pgd value that needs to be set there. - * Populates the user and returns the resulting PGD that must be set in - * the kernel copy of the page tables. - */ -static inline pgd_t pti_set_user_pgd(pgd_t *pgdp, pgd_t pgd) -{ - if (!static_cpu_has(X86_FEATURE_PTI)) - return pgd; - return __pti_set_user_pgd(pgdp, pgd); -} -#else -static inline pgd_t pti_set_user_pgd(pgd_t *pgdp, pgd_t pgd) -{ - return pgd; -} -#endif - -static __always_inline void native_set_p4d(p4d_t *p4dp, p4d_t p4d) +static inline void native_set_p4d(p4d_t *p4dp, p4d_t p4d) { pgd_t pgd; @@ -226,18 +142,18 @@ static __always_inline void native_set_p4d(p4d_t *p4dp, p4d_t p4d) } pgd = native_make_pgd(native_p4d_val(p4d)); - pgd = pti_set_user_pgd((pgd_t *)p4dp, pgd); + pgd = pti_set_user_pgtbl((pgd_t *)p4dp, pgd); *p4dp = native_make_p4d(native_pgd_val(pgd)); } -static __always_inline void native_p4d_clear(p4d_t *p4d) +static inline void native_p4d_clear(p4d_t *p4d) { native_set_p4d(p4d, native_make_p4d(0)); } static inline void native_set_pgd(pgd_t *pgdp, pgd_t pgd) { - *pgdp = pti_set_user_pgd(pgdp, pgd); + *pgdp = pti_set_user_pgtbl(pgdp, pgd); } static inline void native_pgd_clear(pgd_t *pgd) @@ -255,7 +171,6 @@ extern void sync_global_pgds(unsigned long start, unsigned long end); /* * Level 4 access. */ -static inline int pgd_large(pgd_t pgd) { return 0; } #define mk_kernel_pgd(address) __pgd((address) | _KERNPG_TABLE) /* PUD - Level3 access */ diff --git a/arch/x86/include/asm/pgtable_64_types.h b/arch/x86/include/asm/pgtable_64_types.h index 054765ab2da2..04edd2d58211 100644 --- a/arch/x86/include/asm/pgtable_64_types.h +++ b/arch/x86/include/asm/pgtable_64_types.h @@ -115,6 +115,7 @@ extern unsigned int ptrs_per_p4d; #define LDT_PGD_ENTRY_L5 -112UL #define LDT_PGD_ENTRY (pgtable_l5_enabled() ? 
LDT_PGD_ENTRY_L5 : LDT_PGD_ENTRY_L4) #define LDT_BASE_ADDR (LDT_PGD_ENTRY << PGDIR_SHIFT) +#define LDT_END_ADDR (LDT_BASE_ADDR + PGDIR_SIZE) #define __VMALLOC_BASE_L4 0xffffc90000000000UL #define __VMALLOC_BASE_L5 0xffa0000000000000UL @@ -153,4 +154,6 @@ extern unsigned int ptrs_per_p4d; #define EARLY_DYNAMIC_PAGE_TABLES 64 +#define PGD_KERNEL_START ((PAGE_SIZE / 2) / sizeof(pgd_t)) + #endif /* _ASM_X86_PGTABLE_64_DEFS_H */ diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h index 99fff853c944..b64acb08a62b 100644 --- a/arch/x86/include/asm/pgtable_types.h +++ b/arch/x86/include/asm/pgtable_types.h @@ -50,6 +50,7 @@ #define _PAGE_GLOBAL (_AT(pteval_t, 1) << _PAGE_BIT_GLOBAL) #define _PAGE_SOFTW1 (_AT(pteval_t, 1) << _PAGE_BIT_SOFTW1) #define _PAGE_SOFTW2 (_AT(pteval_t, 1) << _PAGE_BIT_SOFTW2) +#define _PAGE_SOFTW3 (_AT(pteval_t, 1) << _PAGE_BIT_SOFTW3) #define _PAGE_PAT (_AT(pteval_t, 1) << _PAGE_BIT_PAT) #define _PAGE_PAT_LARGE (_AT(pteval_t, 1) << _PAGE_BIT_PAT_LARGE) #define _PAGE_SPECIAL (_AT(pteval_t, 1) << _PAGE_BIT_SPECIAL) @@ -266,14 +267,37 @@ typedef struct pgprot { pgprotval_t pgprot; } pgprot_t; typedef struct { pgdval_t pgd; } pgd_t; +#ifdef CONFIG_X86_PAE + +/* + * PHYSICAL_PAGE_MASK might be non-constant when SME is compiled in, so we can't + * use it here. + */ + +#define PGD_PAE_PAGE_MASK ((signed long)PAGE_MASK) +#define PGD_PAE_PHYS_MASK (((1ULL << __PHYSICAL_MASK_SHIFT)-1) & PGD_PAE_PAGE_MASK) + +/* + * PAE allows Base Address, P, PWT, PCD and AVL bits to be set in PGD entries. + * All other bits are Reserved MBZ + */ +#define PGD_ALLOWED_BITS (PGD_PAE_PHYS_MASK | _PAGE_PRESENT | \ + _PAGE_PWT | _PAGE_PCD | \ + _PAGE_SOFTW1 | _PAGE_SOFTW2 | _PAGE_SOFTW3) + +#else +/* No need to mask any bits for !PAE */ +#define PGD_ALLOWED_BITS (~0ULL) +#endif + static inline pgd_t native_make_pgd(pgdval_t val) { - return (pgd_t) { val }; + return (pgd_t) { val & PGD_ALLOWED_BITS }; } static inline pgdval_t native_pgd_val(pgd_t pgd) { - return pgd.pgd; + return pgd.pgd & PGD_ALLOWED_BITS; } static inline pgdval_t pgd_flags(pgd_t pgd) diff --git a/arch/x86/include/asm/processor-flags.h b/arch/x86/include/asm/processor-flags.h index 625a52a5594f..02c2cbda4a74 100644 --- a/arch/x86/include/asm/processor-flags.h +++ b/arch/x86/include/asm/processor-flags.h @@ -39,10 +39,6 @@ #define CR3_PCID_MASK 0xFFFull #define CR3_NOFLUSH BIT_ULL(63) -#ifdef CONFIG_PAGE_TABLE_ISOLATION -# define X86_CR3_PTI_PCID_USER_BIT 11 -#endif - #else /* * CR3_ADDR_MASK needs at least bits 31:5 set on PAE systems, and we save @@ -53,4 +49,8 @@ #define CR3_NOFLUSH 0 #endif +#ifdef CONFIG_PAGE_TABLE_ISOLATION +# define X86_CR3_PTI_PCID_USER_BIT 11 +#endif + #endif /* _ASM_X86_PROCESSOR_FLAGS_H */ diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h index cfd29ee8c3da..59663c08c949 100644 --- a/arch/x86/include/asm/processor.h +++ b/arch/x86/include/asm/processor.h @@ -966,6 +966,7 @@ static inline uint32_t hypervisor_cpuid_base(const char *sig, uint32_t leaves) extern unsigned long arch_align_stack(unsigned long sp); extern void free_init_pages(char *what, unsigned long begin, unsigned long end); +extern void free_kernel_image_pages(void *begin, void *end); void default_idle(void); #ifdef CONFIG_XEN diff --git a/arch/x86/include/asm/pti.h b/arch/x86/include/asm/pti.h index 38a17f1d5c9d..5df09a0b80b8 100644 --- a/arch/x86/include/asm/pti.h +++ b/arch/x86/include/asm/pti.h @@ -6,10 +6,9 @@ #ifdef CONFIG_PAGE_TABLE_ISOLATION extern void pti_init(void); 
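
On the PGD_ALLOWED_BITS masking above: a PAE top-level entry is a PDPTE, where only P, PWT, PCD, the software/AVL bits and the physical address are architecturally valid, and native_make_pgd()/native_pgd_val() now strip everything else. A small illustration; the stray bit and address are chosen arbitrarily:

static void pgd_mask_example(void)
{
	/* _PAGE_RW (bit 1) is reserved-must-be-zero in a PAE PDPTE ... */
	pgd_t pgd = native_make_pgd(0x12345000ULL | _PAGE_PRESENT | _PAGE_RW);

	/* ... so the constructor has already stripped it */
	WARN_ON(native_pgd_val(pgd) & _PAGE_RW);
}
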
extern void pti_check_boottime_disable(void); -extern void pti_clone_kernel_text(void); +extern void pti_finalize(void); #else static inline void pti_check_boottime_disable(void) { } -static inline void pti_clone_kernel_text(void) { } #endif #endif /* __ASSEMBLY__ */ diff --git a/arch/x86/include/asm/qspinlock_paravirt.h b/arch/x86/include/asm/qspinlock_paravirt.h index 9ef5ee03d2d7..159622ee0674 100644 --- a/arch/x86/include/asm/qspinlock_paravirt.h +++ b/arch/x86/include/asm/qspinlock_paravirt.h @@ -43,7 +43,7 @@ asm (".pushsection .text;" "push %rdx;" "mov $0x1,%eax;" "xor %edx,%edx;" - "lock cmpxchg %dl,(%rdi);" + LOCK_PREFIX "cmpxchg %dl,(%rdi);" "cmp $0x1,%al;" "jne .slowpath;" "pop %rdx;" diff --git a/arch/x86/include/asm/refcount.h b/arch/x86/include/asm/refcount.h index 4cf11d88d3b3..19b90521954c 100644 --- a/arch/x86/include/asm/refcount.h +++ b/arch/x86/include/asm/refcount.h @@ -5,6 +5,7 @@ * PaX/grsecurity. */ #include +#include /* * This is the first portion of the refcount error handling, which lives in diff --git a/arch/x86/include/asm/sections.h b/arch/x86/include/asm/sections.h index 5c019d23d06b..4a911a382ade 100644 --- a/arch/x86/include/asm/sections.h +++ b/arch/x86/include/asm/sections.h @@ -7,6 +7,7 @@ extern char __brk_base[], __brk_limit[]; extern struct exception_table_entry __stop___ex_table[]; +extern char __end_rodata_aligned[]; #if defined(CONFIG_X86_64) extern char __end_rodata_hpage_align[]; diff --git a/arch/x86/include/asm/set_memory.h b/arch/x86/include/asm/set_memory.h index bd090367236c..34cffcef7375 100644 --- a/arch/x86/include/asm/set_memory.h +++ b/arch/x86/include/asm/set_memory.h @@ -46,6 +46,7 @@ int set_memory_np(unsigned long addr, int numpages); int set_memory_4k(unsigned long addr, int numpages); int set_memory_encrypted(unsigned long addr, int numpages); int set_memory_decrypted(unsigned long addr, int numpages); +int set_memory_np_noalias(unsigned long addr, int numpages); int set_memory_array_uc(unsigned long *addr, int addrinarray); int set_memory_array_wc(unsigned long *addr, int addrinarray); diff --git a/arch/x86/include/asm/switch_to.h b/arch/x86/include/asm/switch_to.h index eb5f7999a893..36bd243843d6 100644 --- a/arch/x86/include/asm/switch_to.h +++ b/arch/x86/include/asm/switch_to.h @@ -87,15 +87,25 @@ static inline void refresh_sysenter_cs(struct thread_struct *thread) #endif /* This is used when switching tasks or entering/exiting vm86 mode. */ -static inline void update_sp0(struct task_struct *task) +static inline void update_task_stack(struct task_struct *task) { - /* On x86_64, sp0 always points to the entry trampoline stack, which is constant: */ + /* sp0 always points to the entry trampoline stack, which is constant: */ #ifdef CONFIG_X86_32 - load_sp0(task->thread.sp0); + if (static_cpu_has(X86_FEATURE_XENPV)) + load_sp0(task->thread.sp0); + else + this_cpu_write(cpu_tss_rw.x86_tss.sp1, task->thread.sp0); #else + /* + * x86-64 updates x86_tss.sp1 via cpu_current_top_of_stack. That + * doesn't work on x86-32 because sp1 and + * cpu_current_top_of_stack have different values (because of + * the non-zero stack-padding on 32bit). 
+ */ if (static_cpu_has(X86_FEATURE_XENPV)) load_sp0(task_top_of_stack(task)); #endif + } #endif /* _ASM_X86_SWITCH_TO_H */ diff --git a/arch/x86/include/asm/text-patching.h b/arch/x86/include/asm/text-patching.h index 2ecd34e2d46c..e85ff65c43c3 100644 --- a/arch/x86/include/asm/text-patching.h +++ b/arch/x86/include/asm/text-patching.h @@ -37,5 +37,6 @@ extern void *text_poke_early(void *addr, const void *opcode, size_t len); extern void *text_poke(void *addr, const void *opcode, size_t len); extern int poke_int3_handler(struct pt_regs *regs); extern void *text_poke_bp(void *addr, const void *opcode, size_t len, void *handler); +extern int after_bootmem; #endif /* _ASM_X86_TEXT_PATCHING_H */ diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h index 6690cd3fc8b1..511bf5fae8b8 100644 --- a/arch/x86/include/asm/tlbflush.h +++ b/arch/x86/include/asm/tlbflush.h @@ -148,22 +148,6 @@ static inline unsigned long build_cr3_noflush(pgd_t *pgd, u16 asid) #define __flush_tlb_one_user(addr) __native_flush_tlb_one_user(addr) #endif -static inline bool tlb_defer_switch_to_init_mm(void) -{ - /* - * If we have PCID, then switching to init_mm is reasonably - * fast. If we don't have PCID, then switching to init_mm is - * quite slow, so we try to defer it in the hopes that we can - * avoid it entirely. The latter approach runs the risk of - * receiving otherwise unnecessary IPIs. - * - * This choice is just a heuristic. The tlb code can handle this - * function returning true or false regardless of whether we have - * PCID. - */ - return !static_cpu_has(X86_FEATURE_PCID); -} - struct tlb_context { u64 ctx_id; u64 tlb_gen; @@ -554,4 +538,9 @@ extern void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch); native_flush_tlb_others(mask, info) #endif +extern void tlb_flush_remove_tables(struct mm_struct *mm); +extern void tlb_flush_remove_tables_local(void *arg); + +#define HAVE_TLB_FLUSH_REMOVE_TABLES + #endif /* _ASM_X86_TLBFLUSH_H */ diff --git a/arch/x86/include/asm/trace/hyperv.h b/arch/x86/include/asm/trace/hyperv.h index 4253bca99989..9c0d4b588e3f 100644 --- a/arch/x86/include/asm/trace/hyperv.h +++ b/arch/x86/include/asm/trace/hyperv.h @@ -28,6 +28,21 @@ TRACE_EVENT(hyperv_mmu_flush_tlb_others, __entry->addr, __entry->end) ); +TRACE_EVENT(hyperv_send_ipi_mask, + TP_PROTO(const struct cpumask *cpus, + int vector), + TP_ARGS(cpus, vector), + TP_STRUCT__entry( + __field(unsigned int, ncpus) + __field(int, vector) + ), + TP_fast_assign(__entry->ncpus = cpumask_weight(cpus); + __entry->vector = vector; + ), + TP_printk("ncpus %d vector %x", + __entry->ncpus, __entry->vector) + ); + #endif /* CONFIG_HYPERV */ #undef TRACE_INCLUDE_PATH diff --git a/arch/x86/include/asm/tsc.h b/arch/x86/include/asm/tsc.h index 2701d221583a..eb5bbfeccb66 100644 --- a/arch/x86/include/asm/tsc.h +++ b/arch/x86/include/asm/tsc.h @@ -33,13 +33,13 @@ static inline cycles_t get_cycles(void) extern struct system_counterval_t convert_art_to_tsc(u64 art); extern struct system_counterval_t convert_art_ns_to_tsc(u64 art_ns); -extern void tsc_early_delay_calibrate(void); +extern void tsc_early_init(void); extern void tsc_init(void); extern void mark_tsc_unstable(char *reason); extern int unsynchronized_tsc(void); extern int check_tsc_unstable(void); extern void mark_tsc_async_resets(char *reason); -extern unsigned long native_calibrate_cpu(void); +extern unsigned long native_calibrate_cpu_early(void); extern unsigned long native_calibrate_tsc(void); extern unsigned long long 
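
A usage sketch for the new hyperv_send_ipi_mask tracepoint above; the surrounding IPI-send helper is assumed, not part of this hunk:

static bool __send_ipi_mask(const struct cpumask *mask, int vector)
{
	/* records cpumask_weight(mask) and the vector in the trace buffer */
	trace_hyperv_send_ipi_mask(mask, vector);

	/* ... issue the IPI hypercall, or return false for the native path ... */
	return true;
}
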
native_sched_clock_from_tsc(u64 tsc); diff --git a/arch/x86/include/asm/uaccess_64.h b/arch/x86/include/asm/uaccess_64.h index 62acb613114b..a9d637bc301d 100644 --- a/arch/x86/include/asm/uaccess_64.h +++ b/arch/x86/include/asm/uaccess_64.h @@ -52,7 +52,12 @@ copy_to_user_mcsafe(void *to, const void *from, unsigned len) unsigned long ret; __uaccess_begin(); - ret = memcpy_mcsafe(to, from, len); + /* + * Note, __memcpy_mcsafe() is explicitly used since it can + * handle exceptions / faults. memcpy_mcsafe() may fall back to + * memcpy() which lacks this handling. + */ + ret = __memcpy_mcsafe(to, from, len); __uaccess_end(); return ret; } diff --git a/arch/x86/include/asm/unwind_hints.h b/arch/x86/include/asm/unwind_hints.h index bae46fc6b9de..0bcdb1279361 100644 --- a/arch/x86/include/asm/unwind_hints.h +++ b/arch/x86/include/asm/unwind_hints.h @@ -26,7 +26,7 @@ * the debuginfo as necessary. It will also warn if it sees any * inconsistencies. */ -.macro UNWIND_HINT sp_reg=ORC_REG_SP sp_offset=0 type=ORC_TYPE_CALL +.macro UNWIND_HINT sp_reg=ORC_REG_SP sp_offset=0 type=ORC_TYPE_CALL end=0 #ifdef CONFIG_STACK_VALIDATION .Lunwind_hint_ip_\@: .pushsection .discard.unwind_hints @@ -35,12 +35,14 @@ .short \sp_offset .byte \sp_reg .byte \type + .byte \end + .balign 4 .popsection #endif .endm .macro UNWIND_HINT_EMPTY - UNWIND_HINT sp_reg=ORC_REG_UNDEFINED + UNWIND_HINT sp_reg=ORC_REG_UNDEFINED end=1 .endm .macro UNWIND_HINT_REGS base=%rsp offset=0 indirect=0 extra=1 iret=0 @@ -86,19 +88,21 @@ #else /* !__ASSEMBLY__ */ -#define UNWIND_HINT(sp_reg, sp_offset, type) \ +#define UNWIND_HINT(sp_reg, sp_offset, type, end) \ "987: \n\t" \ ".pushsection .discard.unwind_hints\n\t" \ /* struct unwind_hint */ \ ".long 987b - .\n\t" \ - ".short " __stringify(sp_offset) "\n\t" \ + ".short " __stringify(sp_offset) "\n\t" \ ".byte " __stringify(sp_reg) "\n\t" \ ".byte " __stringify(type) "\n\t" \ + ".byte " __stringify(end) "\n\t" \ + ".balign 4 \n\t" \ ".popsection\n\t" -#define UNWIND_HINT_SAVE UNWIND_HINT(0, 0, UNWIND_HINT_TYPE_SAVE) +#define UNWIND_HINT_SAVE UNWIND_HINT(0, 0, UNWIND_HINT_TYPE_SAVE, 0) -#define UNWIND_HINT_RESTORE UNWIND_HINT(0, 0, UNWIND_HINT_TYPE_RESTORE) +#define UNWIND_HINT_RESTORE UNWIND_HINT(0, 0, UNWIND_HINT_TYPE_RESTORE, 0) #endif /* __ASSEMBLY__ */ diff --git a/arch/x86/include/asm/vmx.h b/arch/x86/include/asm/vmx.h index 425e6b8b9547..6aa8499e1f62 100644 --- a/arch/x86/include/asm/vmx.h +++ b/arch/x86/include/asm/vmx.h @@ -114,6 +114,7 @@ #define VMX_MISC_PREEMPTION_TIMER_RATE_MASK 0x0000001f #define VMX_MISC_SAVE_EFER_LMA 0x00000020 #define VMX_MISC_ACTIVITY_HLT 0x00000040 +#define VMX_MISC_ZERO_LEN_INS 0x40000000 /* VMFUNC functions */ #define VMX_VMFUNC_EPTP_SWITCHING 0x00000001 @@ -351,11 +352,13 @@ enum vmcs_field { #define VECTORING_INFO_VALID_MASK INTR_INFO_VALID_MASK #define INTR_TYPE_EXT_INTR (0 << 8) /* external interrupt */ +#define INTR_TYPE_RESERVED (1 << 8) /* reserved */ #define INTR_TYPE_NMI_INTR (2 << 8) /* NMI */ #define INTR_TYPE_HARD_EXCEPTION (3 << 8) /* processor exception */ #define INTR_TYPE_SOFT_INTR (4 << 8) /* software interrupt */ #define INTR_TYPE_PRIV_SW_EXCEPTION (5 << 8) /* ICE breakpoint - undocumented */ #define INTR_TYPE_SOFT_EXCEPTION (6 << 8) /* software exception */ +#define INTR_TYPE_OTHER_EVENT (7 << 8) /* other event */ /* GUEST_INTERRUPTIBILITY_INFO flags. 
*/ #define GUEST_INTR_STATE_STI 0x00000001 diff --git a/arch/x86/kernel/Makefile b/arch/x86/kernel/Makefile index 02d6f5cf4e70..8824d01c0c35 100644 --- a/arch/x86/kernel/Makefile +++ b/arch/x86/kernel/Makefile @@ -61,6 +61,7 @@ obj-y += alternative.o i8253.o hw_breakpoint.o obj-y += tsc.o tsc_msr.o io_delay.o rtc.o obj-y += pci-iommu_table.o obj-y += resource.o +obj-y += irqflags.o obj-y += process.o obj-y += fpu/ diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c index a481763a3776..014f214da581 100644 --- a/arch/x86/kernel/alternative.c +++ b/arch/x86/kernel/alternative.c @@ -668,6 +668,7 @@ void *__init_or_module text_poke_early(void *addr, const void *opcode, local_irq_save(flags); memcpy(addr, opcode, len); local_irq_restore(flags); + sync_core(); /* Could also do a CLFLUSH here to speed up CPU recovery; but that causes hangs on some VIA CPUs. */ return addr; @@ -693,6 +694,12 @@ void *text_poke(void *addr, const void *opcode, size_t len) struct page *pages[2]; int i; + /* + * While boot memory allocator is running we cannot use struct + * pages as they are not yet initialized. + */ + BUG_ON(!after_bootmem); + if (!core_kernel_text((unsigned long)addr)) { pages[0] = vmalloc_to_page(addr); pages[1] = vmalloc_to_page(addr + PAGE_SIZE); diff --git a/arch/x86/kernel/apic/apic.c b/arch/x86/kernel/apic/apic.c index 2aabd4cb0e3f..07fa222f0c52 100644 --- a/arch/x86/kernel/apic/apic.c +++ b/arch/x86/kernel/apic/apic.c @@ -573,6 +573,9 @@ static u32 skx_deadline_rev(void) case 0x04: return 0x02000014; } + if (boot_cpu_data.x86_stepping > 4) + return 0; + return ~0U; } @@ -937,7 +940,7 @@ static int __init calibrate_APIC_clock(void) if (levt->features & CLOCK_EVT_FEAT_DUMMY) { pr_warning("APIC timer disabled due to verification failure\n"); - return -1; + return -1; } return 0; diff --git a/arch/x86/kernel/apic/vector.c b/arch/x86/kernel/apic/vector.c index 35aaee4fc028..0954315842c0 100644 --- a/arch/x86/kernel/apic/vector.c +++ b/arch/x86/kernel/apic/vector.c @@ -218,7 +218,8 @@ static int reserve_irq_vector(struct irq_data *irqd) return 0; } -static int allocate_vector(struct irq_data *irqd, const struct cpumask *dest) +static int +assign_vector_locked(struct irq_data *irqd, const struct cpumask *dest) { struct apic_chip_data *apicd = apic_chip_data(irqd); bool resvd = apicd->has_reserved; @@ -245,22 +246,12 @@ static int allocate_vector(struct irq_data *irqd, const struct cpumask *dest) return -EBUSY; vector = irq_matrix_alloc(vector_matrix, dest, resvd, &cpu); - if (vector > 0) - apic_update_vector(irqd, vector, cpu); trace_vector_alloc(irqd->irq, vector, resvd, vector); - return vector; -} - -static int assign_vector_locked(struct irq_data *irqd, - const struct cpumask *dest) -{ - struct apic_chip_data *apicd = apic_chip_data(irqd); - int vector = allocate_vector(irqd, dest); - if (vector < 0) return vector; + apic_update_vector(irqd, vector, cpu); + apic_update_irq_cfg(irqd, vector, cpu); - apic_update_irq_cfg(irqd, apicd->vector, apicd->cpu); return 0; } @@ -433,7 +424,7 @@ static int activate_managed(struct irq_data *irqd) pr_err("Managed startup irq %u, no vector available\n", irqd->irq); } - return ret; + return ret; } static int x86_vector_activate(struct irq_domain *dom, struct irq_data *irqd, diff --git a/arch/x86/kernel/apic/x2apic_uv_x.c b/arch/x86/kernel/apic/x2apic_uv_x.c index efaf2d4f9c3c..391f358ebb4c 100644 --- a/arch/x86/kernel/apic/x2apic_uv_x.c +++ b/arch/x86/kernel/apic/x2apic_uv_x.c @@ -26,6 +26,7 @@ #include #include #include +#include #include
#include @@ -392,6 +393,51 @@ extern int uv_hub_info_version(void) } EXPORT_SYMBOL(uv_hub_info_version); +/* Default UV memory block size is 2GB */ +static unsigned long mem_block_size __initdata = (2UL << 30); + +/* Kernel parameter to specify UV mem block size */ +static int __init parse_mem_block_size(char *ptr) +{ + unsigned long size = memparse(ptr, NULL); + + /* Size will be rounded down by set_block_size() below */ + mem_block_size = size; + return 0; +} +early_param("uv_memblksize", parse_mem_block_size); + +static __init int adj_blksize(u32 lgre) +{ + unsigned long base = (unsigned long)lgre << UV_GAM_RANGE_SHFT; + unsigned long size; + + for (size = mem_block_size; size > MIN_MEMORY_BLOCK_SIZE; size >>= 1) + if (IS_ALIGNED(base, size)) + break; + + if (size >= mem_block_size) + return 0; + + mem_block_size = size; + return 1; +} + +static __init void set_block_size(void) +{ + unsigned int order = ffs(mem_block_size); + + if (order) { + /* adjust for ffs return of 1..64 */ + set_memory_block_size_order(order - 1); + pr_info("UV: mem_block_size set to 0x%lx\n", mem_block_size); + } else { + /* bad or zero value, default to 1UL << 31 (2GB) */ + pr_err("UV: mem_block_size error with 0x%lx\n", mem_block_size); + set_memory_block_size_order(31); + } +} + /* Build GAM range lookup table: */ static __init void build_uv_gr_table(void) { @@ -1180,23 +1226,30 @@ static void __init decode_gam_rng_tbl(unsigned long ptr) << UV_GAM_RANGE_SHFT); int order = 0; char suffix[] = " KMGTPE"; + int flag = ' '; while (size > 9999 && order < sizeof(suffix)) { size /= 1024; order++; } + /* adjust max block size to current range start */ + if (gre->type == 1 || gre->type == 2) + if (adj_blksize(lgre)) + flag = '*'; + if (!index) { pr_info("UV: GAM Range Table...\n"); - pr_info("UV: # %20s %14s %5s %4s %5s %3s %2s\n", "Range", "", "Size", "Type", "NASID", "SID", "PN"); + pr_info("UV: # %20s %14s %6s %4s %5s %3s %2s\n", "Range", "", "Size", "Type", "NASID", "SID", "PN"); } - pr_info("UV: %2d: 0x%014lx-0x%014lx %5lu%c %3d %04x %02x %02x\n", + pr_info("UV: %2d: 0x%014lx-0x%014lx%c %5lu%c %3d %04x %02x %02x\n", index++, (unsigned long)lgre << UV_GAM_RANGE_SHFT, (unsigned long)gre->limit << UV_GAM_RANGE_SHFT, - size, suffix[order], + flag, size, suffix[order], gre->type, gre->nasid, gre->sockid, gre->pnode); + /* update to next range start */ lgre = gre->limit; if (sock_min > gre->sockid) sock_min = gre->sockid; @@ -1427,6 +1480,7 @@ static void __init uv_system_init_hub(void) build_socket_tables(); build_uv_gr_table(); + set_block_size(); uv_init_hub_info(&hub_info); uv_possible_blades = num_possible_nodes(); if (!_node_to_pnode) diff --git a/arch/x86/kernel/apm_32.c b/arch/x86/kernel/apm_32.c index 5d0de79fdab0..ec00d1ff5098 100644 --- a/arch/x86/kernel/apm_32.c +++ b/arch/x86/kernel/apm_32.c @@ -240,6 +240,7 @@ #include #include #include +#include #if defined(CONFIG_APM_DISPLAY_BLANK) && defined(CONFIG_VT) extern int (*console_blank_hook)(int); @@ -614,11 +615,13 @@ static long __apm_bios_call(void *_call) gdt[0x40 / 8] = bad_bios_desc; apm_irq_save(flags); + firmware_restrict_branch_speculation_start(); APM_DO_SAVE_SEGS; apm_bios_call_asm(call->func, call->ebx, call->ecx, &call->eax, &call->ebx, &call->ecx, &call->edx, &call->esi); APM_DO_RESTORE_SEGS; + firmware_restrict_branch_speculation_end(); apm_irq_restore(flags); gdt[0x40 / 8] = save_desc_40; put_cpu(); @@ -690,10 +693,12 @@ static long __apm_bios_call_simple(void *_call) gdt[0x40 / 8] = bad_bios_desc; apm_irq_save(flags); + 
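
Worked numbers for set_block_size()/adj_blksize(), following directly from the code above:

	/*
	 * Default: mem_block_size = 2UL << 30 = 0x80000000
	 *   ffs(0x80000000) == 32  ->  set_memory_block_size_order(31), 2GB
	 *
	 * If a type-1/2 GAM range starts at base = 0x70000000, the loop in
	 * adj_blksize() halves size until IS_ALIGNED(base, size) holds:
	 *   0x80000000 -> 0x40000000 -> 0x20000000 -> 0x10000000 (256MB)
	 * so mem_block_size drops to 256MB and that table row is flagged '*'.
	 */
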
firmware_restrict_branch_speculation_start(); APM_DO_SAVE_SEGS; error = apm_bios_call_simple_asm(call->func, call->ebx, call->ecx, &call->eax); APM_DO_RESTORE_SEGS; + firmware_restrict_branch_speculation_end(); apm_irq_restore(flags); gdt[0x40 / 8] = save_desc_40; put_cpu(); diff --git a/arch/x86/kernel/asm-offsets.c b/arch/x86/kernel/asm-offsets.c index dcb008c320fe..01de31db300d 100644 --- a/arch/x86/kernel/asm-offsets.c +++ b/arch/x86/kernel/asm-offsets.c @@ -103,4 +103,9 @@ void common(void) { OFFSET(CPU_ENTRY_AREA_entry_trampoline, cpu_entry_area, entry_trampoline); OFFSET(CPU_ENTRY_AREA_entry_stack, cpu_entry_area, entry_stack_page); DEFINE(SIZEOF_entry_stack, sizeof(struct entry_stack)); + DEFINE(MASK_entry_stack, (~(sizeof(struct entry_stack) - 1))); + + /* Offset for sp0 and sp1 into the tss_struct */ + OFFSET(TSS_sp0, tss_struct, x86_tss.sp0); + OFFSET(TSS_sp1, tss_struct, x86_tss.sp1); } diff --git a/arch/x86/kernel/asm-offsets_32.c b/arch/x86/kernel/asm-offsets_32.c index a4a3be399f4b..82826f2275cc 100644 --- a/arch/x86/kernel/asm-offsets_32.c +++ b/arch/x86/kernel/asm-offsets_32.c @@ -46,8 +46,14 @@ void foo(void) OFFSET(saved_context_gdt_desc, saved_context, gdt_desc); BLANK(); - /* Offset from the sysenter stack to tss.sp0 */ - DEFINE(TSS_sysenter_sp0, offsetof(struct cpu_entry_area, tss.x86_tss.sp0) - + /* + * Offset from the entry stack to task stack stored in TSS. Kernel entry + * happens on the per-cpu entry-stack, and the asm code switches to the + * task-stack pointer stored in x86_tss.sp1, which is a copy of + * task->thread.sp0 where entry code can find it. + */ + DEFINE(TSS_entry2task_stack, + offsetof(struct cpu_entry_area, tss.x86_tss.sp1) - offsetofend(struct cpu_entry_area, entry_stack_page.stack)); #ifdef CONFIG_STACKPROTECTOR diff --git a/arch/x86/kernel/asm-offsets_64.c b/arch/x86/kernel/asm-offsets_64.c index b2dcd161f514..3b9405e7ba2b 100644 --- a/arch/x86/kernel/asm-offsets_64.c +++ b/arch/x86/kernel/asm-offsets_64.c @@ -65,8 +65,6 @@ int main(void) #undef ENTRY OFFSET(TSS_ist, tss_struct, x86_tss.ist); - OFFSET(TSS_sp0, tss_struct, x86_tss.sp0); - OFFSET(TSS_sp1, tss_struct, x86_tss.sp1); BLANK(); #ifdef CONFIG_STACKPROTECTOR diff --git a/arch/x86/kernel/cpu/Makefile b/arch/x86/kernel/cpu/Makefile index 7a40196967cb..347137e80bf5 100644 --- a/arch/x86/kernel/cpu/Makefile +++ b/arch/x86/kernel/cpu/Makefile @@ -35,7 +35,9 @@ obj-$(CONFIG_CPU_SUP_CENTAUR) += centaur.o obj-$(CONFIG_CPU_SUP_TRANSMETA_32) += transmeta.o obj-$(CONFIG_CPU_SUP_UMC_32) += umc.o -obj-$(CONFIG_INTEL_RDT) += intel_rdt.o intel_rdt_rdtgroup.o intel_rdt_monitor.o intel_rdt_ctrlmondata.o +obj-$(CONFIG_INTEL_RDT) += intel_rdt.o intel_rdt_rdtgroup.o intel_rdt_monitor.o +obj-$(CONFIG_INTEL_RDT) += intel_rdt_ctrlmondata.o intel_rdt_pseudo_lock.o +CFLAGS_intel_rdt_pseudo_lock.o = -I$(src) obj-$(CONFIG_X86_MCE) += mcheck/ obj-$(CONFIG_MTRR) += mtrr/ diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c index 082d7875cef8..b732438c1a1e 100644 --- a/arch/x86/kernel/cpu/amd.c +++ b/arch/x86/kernel/cpu/amd.c @@ -232,8 +232,6 @@ static void init_amd_k7(struct cpuinfo_x86 *c) } } - set_cpu_cap(c, X86_FEATURE_K7); - /* calling is from identify_secondary_cpu() ? 
*/ if (!c->cpu_index) return; @@ -543,7 +541,9 @@ static void bsp_init_amd(struct cpuinfo_x86 *c) nodes_per_socket = ((value >> 3) & 7) + 1; } - if (c->x86 >= 0x15 && c->x86 <= 0x17) { + if (!boot_cpu_has(X86_FEATURE_AMD_SSBD) && + !boot_cpu_has(X86_FEATURE_VIRT_SSBD) && + c->x86 >= 0x15 && c->x86 <= 0x17) { unsigned int bit; switch (c->x86) { @@ -615,6 +615,14 @@ static void early_init_amd(struct cpuinfo_x86 *c) early_init_amd_mc(c); +#ifdef CONFIG_X86_32 + if (c->x86 == 6) + set_cpu_cap(c, X86_FEATURE_K7); +#endif + + if (c->x86 >= 0xf) + set_cpu_cap(c, X86_FEATURE_K8); + rdmsr_safe(MSR_AMD64_PATCH_LEVEL, &c->microcode, &dummy); /* @@ -861,9 +869,6 @@ static void init_amd(struct cpuinfo_x86 *c) init_amd_cacheinfo(c); - if (c->x86 >= 0xf) - set_cpu_cap(c, X86_FEATURE_K8); - if (cpu_has(c, X86_FEATURE_XMM2)) { unsigned long long val; int ret; diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c index cd0fda1fff6d..405a9a61bb89 100644 --- a/arch/x86/kernel/cpu/bugs.c +++ b/arch/x86/kernel/cpu/bugs.c @@ -27,6 +27,7 @@ #include #include #include +#include static void __init spectre_v2_select_mitigation(void); static void __init ssb_select_mitigation(void); @@ -129,6 +130,7 @@ static const char *spectre_v2_strings[] = { [SPECTRE_V2_RETPOLINE_MINIMAL_AMD] = "Vulnerable: Minimal AMD ASM retpoline", [SPECTRE_V2_RETPOLINE_GENERIC] = "Mitigation: Full generic retpoline", [SPECTRE_V2_RETPOLINE_AMD] = "Mitigation: Full AMD retpoline", + [SPECTRE_V2_IBRS_ENHANCED] = "Mitigation: Enhanced IBRS", }; #undef pr_fmt @@ -154,7 +156,8 @@ x86_virt_spec_ctrl(u64 guest_spec_ctrl, u64 guest_virt_spec_ctrl, bool setguest) guestval |= guest_spec_ctrl & x86_spec_ctrl_mask; /* SSBD controlled in MSR_SPEC_CTRL */ - if (static_cpu_has(X86_FEATURE_SPEC_CTRL_SSBD)) + if (static_cpu_has(X86_FEATURE_SPEC_CTRL_SSBD) || + static_cpu_has(X86_FEATURE_AMD_SSBD)) hostval |= ssbd_tif_to_spec_ctrl(ti->flags); if (hostval != guestval) { @@ -311,23 +314,6 @@ static enum spectre_v2_mitigation_cmd __init spectre_v2_parse_cmdline(void) return cmd; } -/* Check for Skylake-like CPUs (for RSB handling) */ -static bool __init is_skylake_era(void) -{ - if (boot_cpu_data.x86_vendor == X86_VENDOR_INTEL && - boot_cpu_data.x86 == 6) { - switch (boot_cpu_data.x86_model) { - case INTEL_FAM6_SKYLAKE_MOBILE: - case INTEL_FAM6_SKYLAKE_DESKTOP: - case INTEL_FAM6_SKYLAKE_X: - case INTEL_FAM6_KABYLAKE_MOBILE: - case INTEL_FAM6_KABYLAKE_DESKTOP: - return true; - } - } - return false; -} - static void __init spectre_v2_select_mitigation(void) { enum spectre_v2_mitigation_cmd cmd = spectre_v2_parse_cmdline(); @@ -347,6 +333,13 @@ static void __init spectre_v2_select_mitigation(void) case SPECTRE_V2_CMD_FORCE: case SPECTRE_V2_CMD_AUTO: + if (boot_cpu_has(X86_FEATURE_IBRS_ENHANCED)) { + mode = SPECTRE_V2_IBRS_ENHANCED; + /* Force it so VMEXIT will restore correctly */ + x86_spec_ctrl_base |= SPEC_CTRL_IBRS; + wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base); + goto specv2_set_mode; + } if (IS_ENABLED(CONFIG_RETPOLINE)) goto retpoline_auto; break; @@ -384,26 +377,20 @@ retpoline_auto: setup_force_cpu_cap(X86_FEATURE_RETPOLINE); } +specv2_set_mode: spectre_v2_enabled = mode; pr_info("%s\n", spectre_v2_strings[mode]); /* - * If neither SMEP nor PTI are available, there is a risk of - * hitting userspace addresses in the RSB after a context switch - * from a shallow call stack to a deeper one. To prevent this fill - * the entire RSB, even when using IBRS. 
+ * If spectre v2 protection has been enabled, unconditionally fill + * RSB during a context switch; this protects against two independent + * issues: * - * Skylake era CPUs have a separate issue with *underflow* of the - * RSB, when they will predict 'ret' targets from the generic BTB. - * The proper mitigation for this is IBRS. If IBRS is not supported - * or deactivated in favour of retpolines the RSB fill on context - * switch is required. + * - RSB underflow (and switch to BTB) on Skylake+ + * - SpectreRSB variant of spectre v2 on X86_BUG_SPECTRE_V2 CPUs */ - if ((!boot_cpu_has(X86_FEATURE_PTI) && - !boot_cpu_has(X86_FEATURE_SMEP)) || is_skylake_era()) { - setup_force_cpu_cap(X86_FEATURE_RSB_CTXSW); - pr_info("Spectre v2 mitigation: Filling RSB on context switch\n"); - } + setup_force_cpu_cap(X86_FEATURE_RSB_CTXSW); + pr_info("Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch\n"); /* Initialize Indirect Branch Prediction Barrier if supported */ if (boot_cpu_has(X86_FEATURE_IBPB)) { @@ -413,9 +400,16 @@ retpoline_auto: /* * Retpoline means the kernel is safe because it has no indirect - * branches. But firmware isn't, so use IBRS to protect that. + * branches. Enhanced IBRS protects firmware too, so enable restricted + * speculation around firmware calls only when Enhanced IBRS isn't + * supported. + * + * Use "mode" to check Enhanced IBRS instead of boot_cpu_has(), because + * the user might select retpoline on the kernel command line and if + * the CPU supports Enhanced IBRS, the kernel might unintentionally not + * enable IBRS around firmware calls. */ - if (boot_cpu_has(X86_FEATURE_IBRS)) { + if (boot_cpu_has(X86_FEATURE_IBRS) && mode != SPECTRE_V2_IBRS_ENHANCED) { setup_force_cpu_cap(X86_FEATURE_USE_IBRS_FW); pr_info("Enabling Restricted Speculation for firmware calls\n"); } @@ -532,9 +526,10 @@ static enum ssb_mitigation __init __ssb_select_mitigation(void) * Intel uses the SPEC CTRL MSR Bit(2) for this, while AMD may * use a completely different MSR and bit dependent on family.
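 * If neither X86_FEATURE_SPEC_CTRL_SSBD nor X86_FEATURE_AMD_SSBD is
 * available, the family-specific x86_amd_ssb_disable() fallback is
 * used; otherwise the SSBD bit in MSR_IA32_SPEC_CTRL is set.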
*/ - if (!static_cpu_has(X86_FEATURE_MSR_SPEC_CTRL)) + if (!static_cpu_has(X86_FEATURE_SPEC_CTRL_SSBD) && + !static_cpu_has(X86_FEATURE_AMD_SSBD)) { x86_amd_ssb_disable(); - else { + } else { x86_spec_ctrl_base |= SPEC_CTRL_SSBD; x86_spec_ctrl_mask |= SPEC_CTRL_SSBD; wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base); @@ -664,6 +659,9 @@ static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr if (boot_cpu_has(X86_FEATURE_PTI)) return sprintf(buf, "Mitigation: PTI\n"); + if (hypervisor_is_type(X86_HYPER_XEN_PV)) + return sprintf(buf, "Unknown (XEN PV detected, hypervisor mitigation required)\n"); + break; case X86_BUG_SPECTRE_V1: diff --git a/arch/x86/kernel/cpu/cacheinfo.c b/arch/x86/kernel/cpu/cacheinfo.c index 38354c66df81..0c5fcbd998cf 100644 --- a/arch/x86/kernel/cpu/cacheinfo.c +++ b/arch/x86/kernel/cpu/cacheinfo.c @@ -671,7 +671,7 @@ void cacheinfo_amd_init_llc_id(struct cpuinfo_x86 *c, int cpu, u8 node_id) num_sharing_cache = ((eax >> 14) & 0xfff) + 1; if (num_sharing_cache) { - int bits = get_count_order(num_sharing_cache) - 1; + int bits = get_count_order(num_sharing_cache); per_cpu(cpu_llc_id, cpu) = c->apicid >> bits; } diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c index 0df7151cfef4..ba6b8bb1c036 100644 --- a/arch/x86/kernel/cpu/common.c +++ b/arch/x86/kernel/cpu/common.c @@ -1,3 +1,6 @@ +/* cpu_feature_enabled() cannot be used this early */ +#define USE_EARLY_PGTABLE_L5 + #include #include #include @@ -1002,6 +1005,9 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c) !cpu_has(c, X86_FEATURE_AMD_SSB_NO)) setup_force_cpu_bug(X86_BUG_SPEC_STORE_BYPASS); + if (ia32_cap & ARCH_CAP_IBRS_ALL) + setup_force_cpu_cap(X86_FEATURE_IBRS_ENHANCED); + if (x86_match_cpu(cpu_no_meltdown)) return; @@ -1012,6 +1018,24 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c) setup_force_cpu_bug(X86_BUG_CPU_MELTDOWN); } +/* + * The NOPL instruction is supposed to exist on all CPUs of family >= 6; + * unfortunately, that's not true in practice because of early VIA + * chips and (more importantly) broken virtualizers that are not easy + * to detect. In the latter case it doesn't even *fail* reliably, so + * probing for it doesn't even work. Disable it completely on 32-bit + * unless we can find a reliable way to detect all the broken cases. + * Enable it explicitly on 64-bit for non-constant inputs of cpu_has(). + */ +static void detect_nopl(void) +{ +#ifdef CONFIG_X86_32 + setup_clear_cpu_cap(X86_FEATURE_NOPL); +#else + setup_force_cpu_cap(X86_FEATURE_NOPL); +#endif +} + /* * Do minimum CPU detection early. * Fields really needed: vendor, cpuid_level, family, model, mask, @@ -1086,6 +1110,8 @@ static void __init early_identify_cpu(struct cpuinfo_x86 *c) */ if (!pgtable_l5_enabled()) setup_clear_cpu_cap(X86_FEATURE_LA57); + + detect_nopl(); } void __init early_cpu_init(void) @@ -1121,24 +1147,6 @@ void __init early_cpu_init(void) early_identify_cpu(&boot_cpu_data); } -/* - * The NOPL instruction is supposed to exist on all CPUs of family >= 6; - * unfortunately, that's not true in practice because of early VIA - * chips and (more importantly) broken virtualizers that are not easy - * to detect. In the latter case it doesn't even *fail* reliably, so - * probing for it doesn't even work. Disable it completely on 32-bit - * unless we can find a reliable way to detect all the broken cases. - * Enable it explicitly on 64-bit for non-constant inputs of cpu_has(). 
- */ -static void detect_nopl(struct cpuinfo_x86 *c) -{ -#ifdef CONFIG_X86_32 - clear_cpu_cap(c, X86_FEATURE_NOPL); -#else - set_cpu_cap(c, X86_FEATURE_NOPL); -#endif -} - static void detect_null_seg_behavior(struct cpuinfo_x86 *c) { #ifdef CONFIG_X86_64 @@ -1201,8 +1209,6 @@ static void generic_identify(struct cpuinfo_x86 *c) get_model_name(c); /* Default name */ - detect_nopl(c); - detect_null_seg_behavior(c); /* @@ -1801,11 +1807,12 @@ void cpu_init(void) enter_lazy_tlb(&init_mm, curr); /* - * Initialize the TSS. Don't bother initializing sp0, as the initial - * task never enters user mode. + * Initialize the TSS. sp0 points to the entry trampoline stack + * regardless of what task is running. */ set_tss_desc(cpu, &get_cpu_entry_area(cpu)->tss.x86_tss); load_TR_desc(); + load_sp0((unsigned long)(cpu_entry_stack(cpu) + 1)); load_mm_ldt(&init_mm); diff --git a/arch/x86/kernel/cpu/intel.c b/arch/x86/kernel/cpu/intel.c index eb75564f2d25..c050cd6066af 100644 --- a/arch/x86/kernel/cpu/intel.c +++ b/arch/x86/kernel/cpu/intel.c @@ -465,14 +465,17 @@ static void detect_vmx_virtcap(struct cpuinfo_x86 *c) #define X86_VMX_FEATURE_PROC_CTLS2_VIRT_APIC 0x00000001 #define X86_VMX_FEATURE_PROC_CTLS2_EPT 0x00000002 #define X86_VMX_FEATURE_PROC_CTLS2_VPID 0x00000020 +#define x86_VMX_FEATURE_EPT_CAP_AD 0x00200000 u32 vmx_msr_low, vmx_msr_high, msr_ctl, msr_ctl2; + u32 msr_vpid_cap, msr_ept_cap; clear_cpu_cap(c, X86_FEATURE_TPR_SHADOW); clear_cpu_cap(c, X86_FEATURE_VNMI); clear_cpu_cap(c, X86_FEATURE_FLEXPRIORITY); clear_cpu_cap(c, X86_FEATURE_EPT); clear_cpu_cap(c, X86_FEATURE_VPID); + clear_cpu_cap(c, X86_FEATURE_EPT_AD); rdmsr(MSR_IA32_VMX_PROCBASED_CTLS, vmx_msr_low, vmx_msr_high); msr_ctl = vmx_msr_high | vmx_msr_low; @@ -487,8 +490,13 @@ static void detect_vmx_virtcap(struct cpuinfo_x86 *c) if ((msr_ctl2 & X86_VMX_FEATURE_PROC_CTLS2_VIRT_APIC) && (msr_ctl & X86_VMX_FEATURE_PROC_CTLS_TPR_SHADOW)) set_cpu_cap(c, X86_FEATURE_FLEXPRIORITY); - if (msr_ctl2 & X86_VMX_FEATURE_PROC_CTLS2_EPT) + if (msr_ctl2 & X86_VMX_FEATURE_PROC_CTLS2_EPT) { set_cpu_cap(c, X86_FEATURE_EPT); + rdmsr(MSR_IA32_VMX_EPT_VPID_CAP, + msr_ept_cap, msr_vpid_cap); + if (msr_ept_cap & x86_VMX_FEATURE_EPT_CAP_AD) + set_cpu_cap(c, X86_FEATURE_EPT_AD); + } if (msr_ctl2 & X86_VMX_FEATURE_PROC_CTLS2_VPID) set_cpu_cap(c, X86_FEATURE_VPID); } diff --git a/arch/x86/kernel/cpu/intel_rdt.c b/arch/x86/kernel/cpu/intel_rdt.c index ec4754f81cbd..abb71ac70443 100644 --- a/arch/x86/kernel/cpu/intel_rdt.c +++ b/arch/x86/kernel/cpu/intel_rdt.c @@ -859,6 +859,8 @@ static __init bool get_rdt_resources(void) return (rdt_mon_capable || rdt_alloc_capable); } +static enum cpuhp_state rdt_online; + static int __init intel_rdt_late_init(void) { struct rdt_resource *r; @@ -880,6 +882,7 @@ static int __init intel_rdt_late_init(void) cpuhp_remove_state(state); return ret; } + rdt_online = state; for_each_alloc_capable_rdt_resource(r) pr_info("Intel RDT %s allocation detected\n", r->name); @@ -891,3 +894,11 @@ static int __init intel_rdt_late_init(void) } late_initcall(intel_rdt_late_init); + +static void __exit intel_rdt_exit(void) +{ + cpuhp_remove_state(rdt_online); + rdtgroup_exit(); +} + +__exitcall(intel_rdt_exit); diff --git a/arch/x86/kernel/cpu/intel_rdt.h b/arch/x86/kernel/cpu/intel_rdt.h index 39752825e376..4e588f36228f 100644 --- a/arch/x86/kernel/cpu/intel_rdt.h +++ b/arch/x86/kernel/cpu/intel_rdt.h @@ -80,6 +80,34 @@ enum rdt_group_type { RDT_NUM_GROUP, }; +/** + * enum rdtgrp_mode - Mode of a RDT resource group + * @RDT_MODE_SHAREABLE: This 
resource group allows sharing of its allocations + * @RDT_MODE_EXCLUSIVE: No sharing of this resource group's allocations allowed + * @RDT_MODE_PSEUDO_LOCKSETUP: Resource group will be used for Pseudo-Locking + * @RDT_MODE_PSEUDO_LOCKED: No sharing of this resource group's allocations + * allowed AND the allocations are Cache Pseudo-Locked + * + * The mode of a resource group enables control over the allowed overlap + * between allocations associated with different resource groups (classes + * of service). The user can modify the mode of a resource group by + * writing to the "mode" resctrl file associated with the resource group. + * + * The "shareable", "exclusive", and "pseudo-locksetup" modes are set by + * writing the appropriate text to the "mode" file. A resource group enters + * "pseudo-locked" mode after the schemata is written while the resource + * group is in "pseudo-locksetup" mode. + */ +enum rdtgrp_mode { + RDT_MODE_SHAREABLE = 0, + RDT_MODE_EXCLUSIVE, + RDT_MODE_PSEUDO_LOCKSETUP, + RDT_MODE_PSEUDO_LOCKED, + + /* Must be last */ + RDT_NUM_MODES, +}; + /** * struct mongroup - store mon group's data in resctrl fs. * @mon_data_kn: kernfs node for the mon_data directory @@ -94,6 +122,43 @@ struct mongroup { u32 rmid; }; +/** + * struct pseudo_lock_region - pseudo-lock region information + * @r: RDT resource to which this pseudo-locked region + * belongs + * @d: RDT domain to which this pseudo-locked region + * belongs + * @cbm: bitmask of the pseudo-locked region + * @lock_thread_wq: waitqueue used to wait on the pseudo-locking thread + * completion + * @thread_done: variable used by waitqueue to test if pseudo-locking + * thread completed + * @cpu: core associated with the cache on which the setup code + * will be run + * @line_size: size of the cache lines + * @size: size of pseudo-locked region in bytes + * @kmem: the kernel memory associated with pseudo-locked region + * @minor: minor number of character device associated with this + * region + * @debugfs_dir: pointer to this region's directory in the debugfs + * filesystem + * @pm_reqs: Power management QoS requests related to this region + */ +struct pseudo_lock_region { + struct rdt_resource *r; + struct rdt_domain *d; + u32 cbm; + wait_queue_head_t lock_thread_wq; + int thread_done; + int cpu; + unsigned int line_size; + unsigned int size; + void *kmem; + unsigned int minor; + struct dentry *debugfs_dir; + struct list_head pm_reqs; +}; + /** * struct rdtgroup - store rdtgroup's data in resctrl file system.
* @kn: kernfs node @@ -106,16 +171,20 @@ struct mongroup { * @type: indicates type of this rdtgroup - either * monitor only or ctrl_mon group * @mon: mongroup related data + * @mode: mode of resource group + * @plr: pseudo-locked region */ struct rdtgroup { - struct kernfs_node *kn; - struct list_head rdtgroup_list; - u32 closid; - struct cpumask cpu_mask; - int flags; - atomic_t waitcount; - enum rdt_group_type type; - struct mongroup mon; + struct kernfs_node *kn; + struct list_head rdtgroup_list; + u32 closid; + struct cpumask cpu_mask; + int flags; + atomic_t waitcount; + enum rdt_group_type type; + struct mongroup mon; + enum rdtgrp_mode mode; + struct pseudo_lock_region *plr; }; /* rdtgroup.flags */ @@ -148,6 +217,7 @@ extern struct list_head rdt_all_groups; extern int max_name_width, max_data_width; int __init rdtgroup_init(void); +void __exit rdtgroup_exit(void); /** * struct rftype - describe each file in the resctrl file system @@ -216,22 +286,24 @@ struct mbm_state { * @mbps_val: When mba_sc is enabled, this holds the bandwidth in MBps * @new_ctrl: new ctrl value to be loaded * @have_new_ctrl: did user provide new_ctrl for this domain + * @plr: pseudo-locked region (if any) associated with domain */ struct rdt_domain { - struct list_head list; - int id; - struct cpumask cpu_mask; - unsigned long *rmid_busy_llc; - struct mbm_state *mbm_total; - struct mbm_state *mbm_local; - struct delayed_work mbm_over; - struct delayed_work cqm_limbo; - int mbm_work_cpu; - int cqm_work_cpu; - u32 *ctrl_val; - u32 *mbps_val; - u32 new_ctrl; - bool have_new_ctrl; + struct list_head list; + int id; + struct cpumask cpu_mask; + unsigned long *rmid_busy_llc; + struct mbm_state *mbm_total; + struct mbm_state *mbm_local; + struct delayed_work mbm_over; + struct delayed_work cqm_limbo; + int mbm_work_cpu; + int cqm_work_cpu; + u32 *ctrl_val; + u32 *mbps_val; + u32 new_ctrl; + bool have_new_ctrl; + struct pseudo_lock_region *plr; }; /** @@ -351,7 +423,7 @@ struct rdt_resource { struct rdt_cache cache; struct rdt_membw membw; const char *format_str; - int (*parse_ctrlval) (char *buf, struct rdt_resource *r, + int (*parse_ctrlval) (void *data, struct rdt_resource *r, struct rdt_domain *d); struct list_head evt_list; int num_rmid; @@ -359,8 +431,8 @@ struct rdt_resource { unsigned long fflags; }; -int parse_cbm(char *buf, struct rdt_resource *r, struct rdt_domain *d); -int parse_bw(char *buf, struct rdt_resource *r, struct rdt_domain *d); +int parse_cbm(void *_data, struct rdt_resource *r, struct rdt_domain *d); +int parse_bw(void *_buf, struct rdt_resource *r, struct rdt_domain *d); extern struct mutex rdtgroup_mutex; @@ -368,7 +440,7 @@ extern struct rdt_resource rdt_resources_all[]; extern struct rdtgroup rdtgroup_default; DECLARE_STATIC_KEY_FALSE(rdt_alloc_enable_key); -int __init rdtgroup_init(void); +extern struct dentry *debugfs_resctrl; enum { RDT_RESOURCE_L3, @@ -439,13 +511,32 @@ void rdt_last_cmd_printf(const char *fmt, ...); void rdt_ctrl_update(void *arg); struct rdtgroup *rdtgroup_kn_lock_live(struct kernfs_node *kn); void rdtgroup_kn_unlock(struct kernfs_node *kn); +int rdtgroup_kn_mode_restrict(struct rdtgroup *r, const char *name); +int rdtgroup_kn_mode_restore(struct rdtgroup *r, const char *name, + umode_t mask); struct rdt_domain *rdt_find_domain(struct rdt_resource *r, int id, struct list_head **pos); ssize_t rdtgroup_schemata_write(struct kernfs_open_file *of, char *buf, size_t nbytes, loff_t off); int rdtgroup_schemata_show(struct kernfs_open_file *of, struct seq_file *s, void *v); 
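The overlap tests the helpers declared next expose come down to bitmap intersection over capacity bitmasks; rdtgroup_cbm_overlaps_pseudo_locked() later in this diff does exactly this via bitmap_intersects(). A minimal sketch of the underlying check (kernel context assumed; the helper name is illustrative and not part of the patch):

#include <linux/bitmap.h>
#include <linux/types.h>

/* Two CBMs conflict when at least one capacity bit is set in both. */
static bool cbm_example_overlap(u32 cbm_a, u32 cbm_b, unsigned int cbm_len)
{
	unsigned long a = cbm_a, b = cbm_b;

	return bitmap_intersects(&a, &b, cbm_len);
}

An "exclusive" or pseudo-locked allocation must reject any new CBM for which this test returns true on the same domain.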
+bool rdtgroup_cbm_overlaps(struct rdt_resource *r, struct rdt_domain *d, + u32 _cbm, int closid, bool exclusive); +unsigned int rdtgroup_cbm_to_size(struct rdt_resource *r, struct rdt_domain *d, + u32 cbm); +enum rdtgrp_mode rdtgroup_mode_by_closid(int closid); +int rdtgroup_tasks_assigned(struct rdtgroup *r); +int rdtgroup_locksetup_enter(struct rdtgroup *rdtgrp); +int rdtgroup_locksetup_exit(struct rdtgroup *rdtgrp); +bool rdtgroup_cbm_overlaps_pseudo_locked(struct rdt_domain *d, u32 _cbm); +bool rdtgroup_pseudo_locked_in_hierarchy(struct rdt_domain *d); +int rdt_pseudo_lock_init(void); +void rdt_pseudo_lock_release(void); +int rdtgroup_pseudo_lock_create(struct rdtgroup *rdtgrp); +void rdtgroup_pseudo_lock_remove(struct rdtgroup *rdtgrp); struct rdt_domain *get_domain_from_cpu(int cpu, struct rdt_resource *r); +int update_domains(struct rdt_resource *r, int closid); +void closid_free(int closid); int alloc_rmid(void); void free_rmid(u32 rmid); int rdt_get_mon_l3_config(struct rdt_resource *r); diff --git a/arch/x86/kernel/cpu/intel_rdt_ctrlmondata.c b/arch/x86/kernel/cpu/intel_rdt_ctrlmondata.c index 116d57b248d3..af358ca05160 100644 --- a/arch/x86/kernel/cpu/intel_rdt_ctrlmondata.c +++ b/arch/x86/kernel/cpu/intel_rdt_ctrlmondata.c @@ -64,9 +64,10 @@ static bool bw_validate(char *buf, unsigned long *data, struct rdt_resource *r) return true; } -int parse_bw(char *buf, struct rdt_resource *r, struct rdt_domain *d) +int parse_bw(void *_buf, struct rdt_resource *r, struct rdt_domain *d) { unsigned long data; + char *buf = _buf; if (d->have_new_ctrl) { rdt_last_cmd_printf("duplicate domain %d\n", d->id); @@ -87,7 +88,7 @@ int parse_bw(char *buf, struct rdt_resource *r, struct rdt_domain *d) * are allowed (e.g. FFFFH, 0FF0H, 003CH, etc.). * Additionally Haswell requires at least two bits set. */ -static bool cbm_validate(char *buf, unsigned long *data, struct rdt_resource *r) +static bool cbm_validate(char *buf, u32 *data, struct rdt_resource *r) { unsigned long first_bit, zero_bit, val; unsigned int cbm_len = r->cache.cbm_len; @@ -122,22 +123,64 @@ static bool cbm_validate(char *buf, unsigned long *data, struct rdt_resource *r) return true; } +struct rdt_cbm_parse_data { + struct rdtgroup *rdtgrp; + char *buf; +}; + /* * Read one cache bit mask (hex). Check that it is valid for the current * resource type. */ -int parse_cbm(char *buf, struct rdt_resource *r, struct rdt_domain *d) +int parse_cbm(void *_data, struct rdt_resource *r, struct rdt_domain *d) { - unsigned long data; + struct rdt_cbm_parse_data *data = _data; + struct rdtgroup *rdtgrp = data->rdtgrp; + u32 cbm_val; if (d->have_new_ctrl) { rdt_last_cmd_printf("duplicate domain %d\n", d->id); return -EINVAL; } - if(!cbm_validate(buf, &data, r)) + /* + * Cannot set up more than one pseudo-locked region in a cache + * hierarchy. + */ + if (rdtgrp->mode == RDT_MODE_PSEUDO_LOCKSETUP && + rdtgroup_pseudo_locked_in_hierarchy(d)) { + rdt_last_cmd_printf("pseudo-locked region in hierarchy\n"); return -EINVAL; - d->new_ctrl = data; + } + + if (!cbm_validate(data->buf, &cbm_val, r)) + return -EINVAL; + + if ((rdtgrp->mode == RDT_MODE_EXCLUSIVE || + rdtgrp->mode == RDT_MODE_SHAREABLE) && + rdtgroup_cbm_overlaps_pseudo_locked(d, cbm_val)) { + rdt_last_cmd_printf("CBM overlaps with pseudo-locked region\n"); + return -EINVAL; + } + + /* + * The CBM may not overlap with the CBM of another closid if + * either is exclusive. 
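+ * For example, with an existing exclusive group owning CBM 0x0f0, a
+ * new CBM of 0x030 is rejected here because the two masks share
+ * capacity bits, while 0x00f could be accepted.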
+ */ + if (rdtgroup_cbm_overlaps(r, d, cbm_val, rdtgrp->closid, true)) { + rdt_last_cmd_printf("overlaps with exclusive group\n"); + return -EINVAL; + } + + if (rdtgroup_cbm_overlaps(r, d, cbm_val, rdtgrp->closid, false)) { + if (rdtgrp->mode == RDT_MODE_EXCLUSIVE || + rdtgrp->mode == RDT_MODE_PSEUDO_LOCKSETUP) { + rdt_last_cmd_printf("overlaps with other group\n"); + return -EINVAL; + } + } + + d->new_ctrl = cbm_val; d->have_new_ctrl = true; return 0; @@ -149,8 +192,10 @@ * separated by ";". The "id" is in decimal, and must match one of * the "id"s for this resource. */ -static int parse_line(char *line, struct rdt_resource *r) +static int parse_line(char *line, struct rdt_resource *r, + struct rdtgroup *rdtgrp) { + struct rdt_cbm_parse_data data; char *dom = NULL, *id; struct rdt_domain *d; unsigned long dom_id; @@ -167,15 +212,32 @@ next: dom = strim(dom); list_for_each_entry(d, &r->domains, list) { if (d->id == dom_id) { - if (r->parse_ctrlval(dom, r, d)) + data.buf = dom; + data.rdtgrp = rdtgrp; + if (r->parse_ctrlval(&data, r, d)) return -EINVAL; + if (rdtgrp->mode == RDT_MODE_PSEUDO_LOCKSETUP) { + /* + * We are in pseudo-locking setup mode and have + * just parsed a valid CBM that should be + * pseudo-locked. Only one locked region per + * resource group and domain is allowed, so + * just do the required initialization for a + * single region and return. + */ + rdtgrp->plr->r = r; + rdtgrp->plr->d = d; + rdtgrp->plr->cbm = d->new_ctrl; + d->plr = rdtgrp->plr; + return 0; + } goto next; } } return -EINVAL; } -static int update_domains(struct rdt_resource *r, int closid) +int update_domains(struct rdt_resource *r, int closid) { struct msr_param msr_param; cpumask_var_t cpu_mask; @@ -220,13 +282,14 @@ done: return 0; } -static int rdtgroup_parse_resource(char *resname, char *tok, int closid) +static int rdtgroup_parse_resource(char *resname, char *tok, + struct rdtgroup *rdtgrp) { struct rdt_resource *r; for_each_alloc_enabled_rdt_resource(r) { - if (!strcmp(resname, r->name) && closid < r->num_closid) - return parse_line(tok, r); + if (!strcmp(resname, r->name) && rdtgrp->closid < r->num_closid) + return parse_line(tok, r, rdtgrp); } rdt_last_cmd_printf("unknown/unsupported resource name '%s'\n", resname); return -EINVAL; @@ -239,7 +302,7 @@ ssize_t rdtgroup_schemata_write(struct kernfs_open_file *of, struct rdt_domain *dom; struct rdt_resource *r; char *tok, *resname; - int closid, ret = 0; + int ret = 0; /* Valid input requires a trailing newline */ if (nbytes == 0 || buf[nbytes - 1] != '\n') @@ -253,7 +316,15 @@ ssize_t rdtgroup_schemata_write(struct kernfs_open_file *of, } rdt_last_cmd_clear(); - closid = rdtgrp->closid; + /* + * No changes to a pseudo-locked region are allowed. It has to be removed + * and re-created instead.
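+ * (Concretely: the group has to be removed with rmdir, created again,
+ * switched back to "pseudo-locksetup" via its mode file, and only then
+ * given a new schemata.)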
+ */ + if (rdtgrp->mode == RDT_MODE_PSEUDO_LOCKED) { + ret = -EINVAL; + rdt_last_cmd_puts("resource group is pseudo-locked\n"); + goto out; + } for_each_alloc_enabled_rdt_resource(r) { list_for_each_entry(dom, &r->domains, list) @@ -272,17 +343,27 @@ ssize_t rdtgroup_schemata_write(struct kernfs_open_file *of, ret = -EINVAL; goto out; } - ret = rdtgroup_parse_resource(resname, tok, closid); + ret = rdtgroup_parse_resource(resname, tok, rdtgrp); if (ret) goto out; } for_each_alloc_enabled_rdt_resource(r) { - ret = update_domains(r, closid); + ret = update_domains(r, rdtgrp->closid); if (ret) goto out; } + if (rdtgrp->mode == RDT_MODE_PSEUDO_LOCKSETUP) { + /* + * If pseudo-locking fails we keep the resource group in + * mode RDT_MODE_PSEUDO_LOCKSETUP with its class of service + * active and updated for just the domain the pseudo-locked + * region was requested for. + */ + ret = rdtgroup_pseudo_lock_create(rdtgrp); + } + out: rdtgroup_kn_unlock(of->kn); return ret ?: nbytes; @@ -318,10 +399,18 @@ int rdtgroup_schemata_show(struct kernfs_open_file *of, rdtgrp = rdtgroup_kn_lock_live(of->kn); if (rdtgrp) { - closid = rdtgrp->closid; - for_each_alloc_enabled_rdt_resource(r) { - if (closid < r->num_closid) - show_doms(s, r, closid); + if (rdtgrp->mode == RDT_MODE_PSEUDO_LOCKSETUP) { + for_each_alloc_enabled_rdt_resource(r) + seq_printf(s, "%s:uninitialized\n", r->name); + } else if (rdtgrp->mode == RDT_MODE_PSEUDO_LOCKED) { + seq_printf(s, "%s:%d=%x\n", rdtgrp->plr->r->name, + rdtgrp->plr->d->id, rdtgrp->plr->cbm); + } else { + closid = rdtgrp->closid; + for_each_alloc_enabled_rdt_resource(r) { + if (closid < r->num_closid) + show_doms(s, r, closid); + } } } else { ret = -ENOENT; diff --git a/arch/x86/kernel/cpu/intel_rdt_pseudo_lock.c b/arch/x86/kernel/cpu/intel_rdt_pseudo_lock.c new file mode 100644 index 000000000000..40f3903ae5d9 --- /dev/null +++ b/arch/x86/kernel/cpu/intel_rdt_pseudo_lock.c @@ -0,0 +1,1522 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Resource Director Technology (RDT) + * + * Pseudo-locking support built on top of Cache Allocation Technology (CAT) + * + * Copyright (C) 2018 Intel Corporation + * + * Author: Reinette Chatre + */ + +#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt + +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include +#include +#include +#include + +#include "intel_rdt.h" + +#define CREATE_TRACE_POINTS +#include "intel_rdt_pseudo_lock_event.h" + +/* + * MSR_MISC_FEATURE_CONTROL register enables the modification of hardware + * prefetcher state. Details about this register can be found in the MSR + * tables for specific platforms found in Intel's SDM. + */ +#define MSR_MISC_FEATURE_CONTROL 0x000001a4 + +/* + * The bits needed to disable hardware prefetching varies based on the + * platform. During initialization we will discover which bits to use. + */ +static u64 prefetch_disable_bits; + +/* + * Major number assigned to and shared by all devices exposing + * pseudo-locked regions. + */ +static unsigned int pseudo_lock_major; +static unsigned long pseudo_lock_minor_avail = GENMASK(MINORBITS, 0); +static struct class *pseudo_lock_class; + +/** + * get_prefetch_disable_bits - prefetch disable bits of supported platforms + * + * Capture the list of platforms that have been validated to support + * pseudo-locking. 
This includes testing to ensure pseudo-locked regions + * with low cache miss rates can be created under a variety of load conditions + * as well as that these pseudo-locked regions can maintain their low cache + * miss rates under a variety of load conditions for significant lengths of time. + * + * After a platform has been validated to support pseudo-locking, its + * hardware prefetch disable bits are included here as they are documented + * in the SDM. + * + * When adding a platform here, also add support for its cache events to + * measure_cycles_perf_fn(). + * + * Return: + * If the platform is supported, the bits to disable hardware prefetchers; 0 + * if the platform is not supported. + */ +static u64 get_prefetch_disable_bits(void) +{ + if (boot_cpu_data.x86_vendor != X86_VENDOR_INTEL || + boot_cpu_data.x86 != 6) + return 0; + + switch (boot_cpu_data.x86_model) { + case INTEL_FAM6_BROADWELL_X: + /* + * SDM defines bits of MSR_MISC_FEATURE_CONTROL register + * as: + * 0 L2 Hardware Prefetcher Disable (R/W) + * 1 L2 Adjacent Cache Line Prefetcher Disable (R/W) + * 2 DCU Hardware Prefetcher Disable (R/W) + * 3 DCU IP Prefetcher Disable (R/W) + * 63:4 Reserved + */ + return 0xF; + case INTEL_FAM6_ATOM_GOLDMONT: + case INTEL_FAM6_ATOM_GEMINI_LAKE: + /* + * SDM defines bits of MSR_MISC_FEATURE_CONTROL register + * as: + * 0 L2 Hardware Prefetcher Disable (R/W) + * 1 Reserved + * 2 DCU Hardware Prefetcher Disable (R/W) + * 63:3 Reserved + */ + return 0x5; + } + + return 0; +} + +/* + * Helper to write a 64-bit value to an MSR without tracing. Used when + * use of the cache should be restricted and use of registers used + * for local variables avoided. + */ +static inline void pseudo_wrmsrl_notrace(unsigned int msr, u64 val) +{ + __wrmsr(msr, (u32)(val & 0xffffffffULL), (u32)(val >> 32)); +} + +/** + * pseudo_lock_minor_get - Obtain available minor number + * @minor: Pointer to where new minor number will be stored + * + * A bitmask is used to track available minor numbers. Here the next free + * minor number is marked as unavailable and returned. + * + * Return: 0 on success, <0 on failure. + */ +static int pseudo_lock_minor_get(unsigned int *minor) +{ + unsigned long first_bit; + + first_bit = find_first_bit(&pseudo_lock_minor_avail, MINORBITS); + + if (first_bit == MINORBITS) + return -ENOSPC; + + __clear_bit(first_bit, &pseudo_lock_minor_avail); + *minor = first_bit; + + return 0; +} + +/** + * pseudo_lock_minor_release - Return minor number to available + * @minor: The minor number made available + */ +static void pseudo_lock_minor_release(unsigned int minor) +{ + __set_bit(minor, &pseudo_lock_minor_avail); +} + +/** + * region_find_by_minor - Locate a pseudo-lock region by inode minor number + * @minor: The minor number of the device representing pseudo-locked region + * + * When the character device is accessed we need to determine which + * pseudo-locked region it belongs to. This is done by matching the minor + * number of the device to the pseudo-locked region it belongs to. + * + * Minor numbers are assigned at the time a pseudo-locked region is associated + * with a cache instance. + * + * Return: On success return pointer to resource group owning the pseudo-locked + * region, NULL on failure.
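+ * (Walking rdt_all_groups is only safe under rdtgroup_mutex, which
+ * callers of this helper are expected to hold.)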
+ */ +static struct rdtgroup *region_find_by_minor(unsigned int minor) +{ + struct rdtgroup *rdtgrp, *rdtgrp_match = NULL; + + list_for_each_entry(rdtgrp, &rdt_all_groups, rdtgroup_list) { + if (rdtgrp->plr && rdtgrp->plr->minor == minor) { + rdtgrp_match = rdtgrp; + break; + } + } + return rdtgrp_match; +} + +/** + * pseudo_lock_pm_req - A power management QoS request list entry + * @list: Entry within the @pm_reqs list for a pseudo-locked region + * @req: PM QoS request + */ +struct pseudo_lock_pm_req { + struct list_head list; + struct dev_pm_qos_request req; +}; + +static void pseudo_lock_cstates_relax(struct pseudo_lock_region *plr) +{ + struct pseudo_lock_pm_req *pm_req, *next; + + list_for_each_entry_safe(pm_req, next, &plr->pm_reqs, list) { + dev_pm_qos_remove_request(&pm_req->req); + list_del(&pm_req->list); + kfree(pm_req); + } +} + +/** + * pseudo_lock_cstates_constrain - Restrict cores from entering C6 + * + * To prevent the cache from being affected by power management, entering + * C6 has to be avoided. This is accomplished by requesting a latency + * requirement lower than the lowest C6 exit latency of all supported + * platforms as found in the cpuidle state tables in the intel_idle driver. + * At this time it is possible to do so with a single latency requirement + * for all supported platforms. + * + * Since Goldmont is supported, which is affected by X86_BUG_MONITOR, + * the ACPI latencies need to be considered while keeping in mind that C2 + * may be set to map to deeper sleep states. In this case the latency + * requirement needs to prevent entering C2 also. + */ +static int pseudo_lock_cstates_constrain(struct pseudo_lock_region *plr) +{ + struct pseudo_lock_pm_req *pm_req; + int cpu; + int ret; + + for_each_cpu(cpu, &plr->d->cpu_mask) { + pm_req = kzalloc(sizeof(*pm_req), GFP_KERNEL); + if (!pm_req) { + rdt_last_cmd_puts("fail allocating mem for PM QoS\n"); + ret = -ENOMEM; + goto out_err; + } + ret = dev_pm_qos_add_request(get_cpu_device(cpu), + &pm_req->req, + DEV_PM_QOS_RESUME_LATENCY, + 30); + if (ret < 0) { + rdt_last_cmd_printf("fail to add latency req cpu%d\n", + cpu); + kfree(pm_req); + ret = -1; + goto out_err; + } + list_add(&pm_req->list, &plr->pm_reqs); + } + + return 0; + +out_err: + pseudo_lock_cstates_relax(plr); + return ret; +} + +/** + * pseudo_lock_region_clear - Reset pseudo-lock region data + * @plr: pseudo-lock region + * + * All content of the pseudo-locked region is reset - any allocated memory is + * freed. + * + * Return: void + */ +static void pseudo_lock_region_clear(struct pseudo_lock_region *plr) +{ + plr->size = 0; + plr->line_size = 0; + kfree(plr->kmem); + plr->kmem = NULL; + plr->r = NULL; + if (plr->d) + plr->d->plr = NULL; + plr->d = NULL; + plr->cbm = 0; + plr->debugfs_dir = NULL; +} + +/** + * pseudo_lock_region_init - Initialize pseudo-lock region information + * @plr: pseudo-lock region + * + * Called after the user provided a schemata to be pseudo-locked. From the + * schemata the &struct pseudo_lock_region is on entry already initialized + * with the resource, domain, and capacity bitmask. Here the information + * required for pseudo-locking is deduced from this data and the &struct + * pseudo_lock_region is initialized further.
This information includes: + * - size in bytes of the region to be pseudo-locked + * - cache line size to know the stride with which data needs to be accessed + * to be pseudo-locked + * - a cpu associated with the cache instance on which the pseudo-locking + * flow can be executed + * + * Return: 0 on success, <0 on failure. Descriptive error will be written + * to last_cmd_status buffer. + */ +static int pseudo_lock_region_init(struct pseudo_lock_region *plr) +{ + struct cpu_cacheinfo *ci; + int ret; + int i; + + /* Pick the first cpu we find that is associated with the cache. */ + plr->cpu = cpumask_first(&plr->d->cpu_mask); + + if (!cpu_online(plr->cpu)) { + rdt_last_cmd_printf("cpu %u associated with cache not online\n", + plr->cpu); + ret = -ENODEV; + goto out_region; + } + + ci = get_cpu_cacheinfo(plr->cpu); + + plr->size = rdtgroup_cbm_to_size(plr->r, plr->d, plr->cbm); + + for (i = 0; i < ci->num_leaves; i++) { + if (ci->info_list[i].level == plr->r->cache_level) { + plr->line_size = ci->info_list[i].coherency_line_size; + return 0; + } + } + + ret = -1; + rdt_last_cmd_puts("unable to determine cache line size\n"); +out_region: + pseudo_lock_region_clear(plr); + return ret; +} + +/** + * pseudo_lock_init - Initialize a pseudo-lock region + * @rdtgrp: resource group to which new pseudo-locked region will belong + * + * A pseudo-locked region is associated with a resource group. When this + * association is created the pseudo-locked region is initialized. The + * details of the pseudo-locked region are not known at this time so only + * allocation is done and association established. + * + * Return: 0 on success, <0 on failure + */ +static int pseudo_lock_init(struct rdtgroup *rdtgrp) +{ + struct pseudo_lock_region *plr; + + plr = kzalloc(sizeof(*plr), GFP_KERNEL); + if (!plr) + return -ENOMEM; + + init_waitqueue_head(&plr->lock_thread_wq); + INIT_LIST_HEAD(&plr->pm_reqs); + rdtgrp->plr = plr; + return 0; +} + +/** + * pseudo_lock_region_alloc - Allocate kernel memory that will be pseudo-locked + * @plr: pseudo-lock region + * + * Initialize the details required to set up the pseudo-locked region and + * allocate the contiguous memory that will be pseudo-locked to the cache. + * + * Return: 0 on success, <0 on failure. Descriptive error will be written + * to last_cmd_status buffer. + */ +static int pseudo_lock_region_alloc(struct pseudo_lock_region *plr) +{ + int ret; + + ret = pseudo_lock_region_init(plr); + if (ret < 0) + return ret; + + /* + * We do not yet support contiguous regions larger than + * KMALLOC_MAX_SIZE. + */ + if (plr->size > KMALLOC_MAX_SIZE) { + rdt_last_cmd_puts("requested region exceeds maximum size\n"); + ret = -E2BIG; + goto out_region; + } + + plr->kmem = kzalloc(plr->size, GFP_KERNEL); + if (!plr->kmem) { + rdt_last_cmd_puts("unable to allocate memory\n"); + ret = -ENOMEM; + goto out_region; + } + + ret = 0; + goto out; +out_region: + pseudo_lock_region_clear(plr); +out: + return ret; +} + +/** + * pseudo_lock_free - Free a pseudo-locked region + * @rdtgrp: resource group to which pseudo-locked region belonged + * + * The pseudo-locked region's resources have already been released, or not + * yet created at this point. Now it can be freed and disassociated from the + * resource group. 
+ * + * Return: void + */ +static void pseudo_lock_free(struct rdtgroup *rdtgrp) +{ + pseudo_lock_region_clear(rdtgrp->plr); + kfree(rdtgrp->plr); + rdtgrp->plr = NULL; +} + +/** + * pseudo_lock_fn - Load kernel memory into cache + * @_rdtgrp: resource group to which pseudo-lock region belongs + * + * This is the core pseudo-locking flow. + * + * First we ensure that the kernel memory cannot be found in the cache. + * Then, while taking care that there will be as little interference as + * possible, the memory to be loaded is accessed while the core is running + * with the class of service set to the bitmask of the pseudo-locked region. + * After this is complete no future CAT allocations will be allowed to + * overlap with this bitmask. + * + * Local register variables are utilized to ensure that the memory region + * to be locked is the only memory access made during the critical locking + * loop. + * + * Return: 0. Waiter on waitqueue will be woken on completion. + */ +static int pseudo_lock_fn(void *_rdtgrp) +{ + struct rdtgroup *rdtgrp = _rdtgrp; + struct pseudo_lock_region *plr = rdtgrp->plr; + u32 rmid_p, closid_p; + unsigned long i; +#ifdef CONFIG_KASAN + /* + * The registers used for local register variables are also used + * when KASAN is active. When KASAN is active we use a regular + * variable to ensure we always use a valid pointer, but the cost + * is that this variable will enter the cache through evicting the + * memory we are trying to lock into the cache. Thus expect lower + * pseudo-locking success rate when KASAN is active. + */ + unsigned int line_size; + unsigned int size; + void *mem_r; +#else + register unsigned int line_size asm("esi"); + register unsigned int size asm("edi"); +#ifdef CONFIG_X86_64 + register void *mem_r asm("rbx"); +#else + register void *mem_r asm("ebx"); +#endif /* CONFIG_X86_64 */ +#endif /* CONFIG_KASAN */ + + /* + * Make sure none of the allocated memory is cached. If it is, we + * will get a cache hit in the loop below from outside of the pseudo-locked + * region. + * wbinvd (as opposed to clflush/clflushopt) is required to + * increase the likelihood that the allocated cache portion will be filled + * with the associated memory. + */ + native_wbinvd(); + + /* + * Always called with interrupts enabled. By disabling interrupts + * we ensure that we will not be preempted during this critical section. + */ + local_irq_disable(); + + /* + * Call wrmsr and rdmsr as directly as possible to avoid tracing + * clobbering local register variables or affecting cache accesses. + * + * Disable the hardware prefetcher so that when the end of the memory + * being pseudo-locked is reached the hardware will not read beyond + * the buffer and evict pseudo-locked memory read earlier from the + * cache. + */ + __wrmsr(MSR_MISC_FEATURE_CONTROL, prefetch_disable_bits, 0x0); + closid_p = this_cpu_read(pqr_state.cur_closid); + rmid_p = this_cpu_read(pqr_state.cur_rmid); + mem_r = plr->kmem; + size = plr->size; + line_size = plr->line_size; + /* + * Critical section begin: start by writing the closid associated + * with the capacity bitmask of the cache region being + * pseudo-locked followed by reading of kernel memory to load it + * into the cache. + */ + __wrmsr(IA32_PQR_ASSOC, rmid_p, rdtgrp->closid); + /* + * Cache was flushed earlier. Now access kernel memory to read it + * into the cache region associated with the just activated plr->closid. + * Loop over data twice: + * - In the first loop the cache region is shared with the page walker + * as it populates the paging structure caches (including TLB).
+ * - In the second loop the paging structure caches are used and the + * cache region is populated with the memory being referenced. + */ + for (i = 0; i < size; i += PAGE_SIZE) { + /* + * Add a barrier to prevent speculative execution of this + * loop reading beyond the end of the buffer. + */ + rmb(); + asm volatile("mov (%0,%1,1), %%eax\n\t" + : + : "r" (mem_r), "r" (i) + : "%eax", "memory"); + } + for (i = 0; i < size; i += line_size) { + /* + * Add a barrier to prevent speculative execution of this + * loop reading beyond the end of the buffer. + */ + rmb(); + asm volatile("mov (%0,%1,1), %%eax\n\t" + : + : "r" (mem_r), "r" (i) + : "%eax", "memory"); + } + /* + * Critical section end: restore the closid with a capacity bitmask that + * does not overlap with the pseudo-locked region. + */ + __wrmsr(IA32_PQR_ASSOC, rmid_p, closid_p); + + /* Re-enable the hardware prefetcher(s) */ + wrmsr(MSR_MISC_FEATURE_CONTROL, 0x0, 0x0); + local_irq_enable(); + + plr->thread_done = 1; + wake_up_interruptible(&plr->lock_thread_wq); + return 0; +} + +/** + * rdtgroup_monitor_in_progress - Test if monitoring in progress + * @rdtgrp: resource group being queried + * + * Return: 1 if monitor groups have been created for this resource + * group, 0 otherwise. + */ +static int rdtgroup_monitor_in_progress(struct rdtgroup *rdtgrp) +{ + return !list_empty(&rdtgrp->mon.crdtgrp_list); +} + +/** + * rdtgroup_locksetup_user_restrict - Restrict user access to group + * @rdtgrp: resource group needing access restricted + * + * A resource group used for cache pseudo-locking cannot have cpus or tasks + * assigned to it. This is communicated to the user by restricting access + * to all the files that can be used to make such changes. + * + * Permissions are restored with rdtgroup_locksetup_user_restore(). + * + * Return: 0 on success, <0 on failure. If a failure occurs during the + * restriction of access an attempt will be made to restore permissions but + * the state of the mode of these files will be uncertain when a failure + * occurs. + */ +static int rdtgroup_locksetup_user_restrict(struct rdtgroup *rdtgrp) +{ + int ret; + + ret = rdtgroup_kn_mode_restrict(rdtgrp, "tasks"); + if (ret) + return ret; + + ret = rdtgroup_kn_mode_restrict(rdtgrp, "cpus"); + if (ret) + goto err_tasks; + + ret = rdtgroup_kn_mode_restrict(rdtgrp, "cpus_list"); + if (ret) + goto err_cpus; + + if (rdt_mon_capable) { + ret = rdtgroup_kn_mode_restrict(rdtgrp, "mon_groups"); + if (ret) + goto err_cpus_list; + } + + ret = 0; + goto out; + +err_cpus_list: + rdtgroup_kn_mode_restore(rdtgrp, "cpus_list", 0777); +err_cpus: + rdtgroup_kn_mode_restore(rdtgrp, "cpus", 0777); +err_tasks: + rdtgroup_kn_mode_restore(rdtgrp, "tasks", 0777); +out: + return ret; +} + +/** + * rdtgroup_locksetup_user_restore - Restore user access to group + * @rdtgrp: resource group needing access restored + * + * Restore all file access previously removed using + * rdtgroup_locksetup_user_restrict() + * + * Return: 0 on success, <0 on failure. If a failure occurs during the + * restoration of access an attempt will be made to restrict permissions + * again but the state of the mode of these files will be uncertain when + * a failure occurs.
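+ *
+ * (The 0777 passed to rdtgroup_kn_mode_restore() acts as a mask of
+ * permission bits that may be restored, not a literal mode: each file
+ * returns to its original resctrl permissions.)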
*/ +static int rdtgroup_locksetup_user_restore(struct rdtgroup *rdtgrp) +{ + int ret; + + ret = rdtgroup_kn_mode_restore(rdtgrp, "tasks", 0777); + if (ret) + return ret; + + ret = rdtgroup_kn_mode_restore(rdtgrp, "cpus", 0777); + if (ret) + goto err_tasks; + + ret = rdtgroup_kn_mode_restore(rdtgrp, "cpus_list", 0777); + if (ret) + goto err_cpus; + + if (rdt_mon_capable) { + ret = rdtgroup_kn_mode_restore(rdtgrp, "mon_groups", 0777); + if (ret) + goto err_cpus_list; + } + + ret = 0; + goto out; + +err_cpus_list: + rdtgroup_kn_mode_restrict(rdtgrp, "cpus_list"); +err_cpus: + rdtgroup_kn_mode_restrict(rdtgrp, "cpus"); +err_tasks: + rdtgroup_kn_mode_restrict(rdtgrp, "tasks"); +out: + return ret; +} + +/** + * rdtgroup_locksetup_enter - Resource group enters locksetup mode + * @rdtgrp: resource group requested to enter locksetup mode + * + * A resource group enters locksetup mode to reflect that it would be used + * to represent a pseudo-locked region and is in the process of being set + * up to do so. A resource group used for a pseudo-locked region would + * lose the closid associated with it so we cannot allow it to have any + * tasks or cpus assigned nor permit tasks or cpus to be assigned in the + * future. Monitoring of a pseudo-locked region is not allowed either. + * + * The above and more restrictions on a pseudo-locked region are checked + * for and enforced before the resource group enters the locksetup mode. + * + * Returns: 0 if the resource group successfully entered locksetup mode, <0 + * on failure. On failure the last_cmd_status buffer is updated with text to + * communicate details of failure to the user. + */ +int rdtgroup_locksetup_enter(struct rdtgroup *rdtgrp) +{ + int ret; + + /* + * The default resource group can neither be removed nor lose the + * default closid associated with it. + */ + if (rdtgrp == &rdtgroup_default) { + rdt_last_cmd_puts("cannot pseudo-lock default group\n"); + return -EINVAL; + } + + /* + * Cache Pseudo-locking not supported when CDP is enabled. + * + * Some things to consider if you would like to enable this + * support (using L3 CDP as example): + * - When CDP is enabled two separate resources are exposed, + * L3DATA and L3CODE, but they are actually on the same cache. + * The implication for pseudo-locking is that if a + * pseudo-locked region is created on a domain of one + * resource (eg. L3CODE), then a pseudo-locked region cannot + * be created on that same domain of the other resource + * (eg. L3DATA). This is because the creation of a + * pseudo-locked region involves a call to wbinvd that will + * affect all cache allocations on that particular domain. + * - Considering the previous, it may be possible to only + * expose one of the CDP resources to pseudo-locking and + * hide the other. For example, we could consider exposing + * only L3DATA and since the L3 cache is unified it is + * still possible to place instructions there and execute them. + * - If only one region is exposed to pseudo-locking we should + * still keep in mind that availability of a portion of cache + * for pseudo-locking should take into account both resources. + * Similarly, if a pseudo-locked region is created in one + * resource, the portion of cache used by it should be made + * unavailable to all future allocations from both resources.
+ */ + if (rdt_resources_all[RDT_RESOURCE_L3DATA].alloc_enabled || + rdt_resources_all[RDT_RESOURCE_L2DATA].alloc_enabled) { + rdt_last_cmd_puts("CDP enabled\n"); + return -EINVAL; + } + + /* + * Not knowing the bits to disable prefetching implies that this + * platform does not support Cache Pseudo-Locking. + */ + prefetch_disable_bits = get_prefetch_disable_bits(); + if (prefetch_disable_bits == 0) { + rdt_last_cmd_puts("pseudo-locking not supported\n"); + return -EINVAL; + } + + if (rdtgroup_monitor_in_progress(rdtgrp)) { + rdt_last_cmd_puts("monitoring in progress\n"); + return -EINVAL; + } + + if (rdtgroup_tasks_assigned(rdtgrp)) { + rdt_last_cmd_puts("tasks assigned to resource group\n"); + return -EINVAL; + } + + if (!cpumask_empty(&rdtgrp->cpu_mask)) { + rdt_last_cmd_puts("CPUs assigned to resource group\n"); + return -EINVAL; + } + + if (rdtgroup_locksetup_user_restrict(rdtgrp)) { + rdt_last_cmd_puts("unable to modify resctrl permissions\n"); + return -EIO; + } + + ret = pseudo_lock_init(rdtgrp); + if (ret) { + rdt_last_cmd_puts("unable to init pseudo-lock region\n"); + goto out_release; + } + + /* + * If this system is capable of monitoring, an rmid would have been + * allocated when the control group was created. This is not needed + * anymore when this group is used for pseudo-locking. This + * is safe to call on platforms not capable of monitoring. + */ + free_rmid(rdtgrp->mon.rmid); + + ret = 0; + goto out; + +out_release: + rdtgroup_locksetup_user_restore(rdtgrp); +out: + return ret; +} + +/** + * rdtgroup_locksetup_exit - Resource group exits locksetup mode + * @rdtgrp: resource group + * + * When a resource group exits locksetup mode the earlier restrictions are + * lifted. + * + * Return: 0 on success, <0 on failure + */ +int rdtgroup_locksetup_exit(struct rdtgroup *rdtgrp) +{ + int ret; + + if (rdt_mon_capable) { + ret = alloc_rmid(); + if (ret < 0) { + rdt_last_cmd_puts("out of RMIDs\n"); + return ret; + } + rdtgrp->mon.rmid = ret; + } + + ret = rdtgroup_locksetup_user_restore(rdtgrp); + if (ret) { + free_rmid(rdtgrp->mon.rmid); + return ret; + } + + pseudo_lock_free(rdtgrp); + return 0; +} + +/** + * rdtgroup_cbm_overlaps_pseudo_locked - Test if CBM or portion is pseudo-locked + * @d: RDT domain + * @_cbm: CBM to test + * + * @d represents a cache instance and @_cbm a capacity bitmask that is + * considered for it. Determine if @_cbm overlaps with any existing + * pseudo-locked region on @d. + * + * Return: true if @_cbm overlaps with pseudo-locked region on @d, false + * otherwise. + */ +bool rdtgroup_cbm_overlaps_pseudo_locked(struct rdt_domain *d, u32 _cbm) +{ + unsigned long *cbm = (unsigned long *)&_cbm; + unsigned long *cbm_b; + unsigned int cbm_len; + + if (d->plr) { + cbm_len = d->plr->r->cache.cbm_len; + cbm_b = (unsigned long *)&d->plr->cbm; + if (bitmap_intersects(cbm, cbm_b, cbm_len)) + return true; + } + return false; +} + +/** + * rdtgroup_pseudo_locked_in_hierarchy - Pseudo-locked region in cache hierarchy + * @d: RDT domain under test + * + * The setup of a pseudo-locked region affects all cache instances within + * the hierarchy of the region. It is thus essential to know if any + * pseudo-locked regions exist within a cache hierarchy to prevent any + * attempts to create new pseudo-locked regions in the same hierarchy. + * + * Return: true if a pseudo-locked region exists in the hierarchy of @d or + * if it is not possible to test due to a memory allocation issue, + * false otherwise.
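+ * (Returning true on allocation failure is the conservative choice: it
+ * blocks creation of a new region rather than risk overlapping
+ * pseudo-locked regions within one cache hierarchy.)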
+ */ +bool rdtgroup_pseudo_locked_in_hierarchy(struct rdt_domain *d) +{ + cpumask_var_t cpu_with_psl; + struct rdt_resource *r; + struct rdt_domain *d_i; + bool ret = false; + + if (!zalloc_cpumask_var(&cpu_with_psl, GFP_KERNEL)) + return true; + + /* + * First determine which cpus have pseudo-locked regions + * associated with them. + */ + for_each_alloc_enabled_rdt_resource(r) { + list_for_each_entry(d_i, &r->domains, list) { + if (d_i->plr) + cpumask_or(cpu_with_psl, cpu_with_psl, + &d_i->cpu_mask); + } + } + + /* + * Next test if the new pseudo-locked region would intersect with + * an existing region. + */ + if (cpumask_intersects(&d->cpu_mask, cpu_with_psl)) + ret = true; + + free_cpumask_var(cpu_with_psl); + return ret; +} + +/** + * measure_cycles_lat_fn - Measure cycle latency to read pseudo-locked memory + * @_plr: pseudo-lock region to measure + * + * There is no deterministic way to test if a memory region is cached. One + * way is to measure how long it takes to read the memory; the speed of + * access is a good way to learn how close to the cpu the data was. Even + * more, if the prefetcher is disabled and the memory is read at a stride + * of half the cache line, then a cache miss will be easy to spot since the + * read of the first half would be significantly slower than the read of + * the second half. + * + * Return: 0. Waiter on waitqueue will be woken on completion. + */ +static int measure_cycles_lat_fn(void *_plr) +{ + struct pseudo_lock_region *plr = _plr; + unsigned long i; + u64 start, end; +#ifdef CONFIG_KASAN + /* + * The registers used for local register variables are also used + * when KASAN is active. When KASAN is active we use a regular + * variable to ensure we always use a valid pointer to access memory. + * The cost is that accessing this pointer, which could be in + * cache, will be included in the measurement of memory read latency. + */ + void *mem_r; +#else +#ifdef CONFIG_X86_64 + register void *mem_r asm("rbx"); +#else + register void *mem_r asm("ebx"); +#endif /* CONFIG_X86_64 */ +#endif /* CONFIG_KASAN */ + + local_irq_disable(); + /* + * The wrmsr call may be reordered with the assignment below it. + * Call wrmsr as directly as possible to avoid tracing clobbering + * the local register variable used for the memory pointer. + */ + __wrmsr(MSR_MISC_FEATURE_CONTROL, prefetch_disable_bits, 0x0); + mem_r = plr->kmem; + /* + * Dummy execute of the time measurement to load the needed + * instructions into the L1 instruction cache. + */ + start = rdtsc_ordered(); + for (i = 0; i < plr->size; i += 32) { + start = rdtsc_ordered(); + asm volatile("mov (%0,%1,1), %%eax\n\t" + : + : "r" (mem_r), "r" (i) + : "%eax", "memory"); + end = rdtsc_ordered(); + trace_pseudo_lock_mem_latency((u32)(end - start)); + } + wrmsr(MSR_MISC_FEATURE_CONTROL, 0x0, 0x0); + local_irq_enable(); + plr->thread_done = 1; + wake_up_interruptible(&plr->lock_thread_wq); + return 0; +} + +static int measure_cycles_perf_fn(void *_plr) +{ + unsigned long long l3_hits = 0, l3_miss = 0; + u64 l3_hit_bits = 0, l3_miss_bits = 0; + struct pseudo_lock_region *plr = _plr; + unsigned long long l2_hits, l2_miss; + u64 l2_hit_bits, l2_miss_bits; + unsigned long i; +#ifdef CONFIG_KASAN + /* + * The registers used for local register variables are also used + * when KASAN is active. When KASAN is active we use regular variables + * at the cost of including cache access latency to these variables + * in the measurements.
+ */ + unsigned int line_size; + unsigned int size; + void *mem_r; +#else + register unsigned int line_size asm("esi"); + register unsigned int size asm("edi"); +#ifdef CONFIG_X86_64 + register void *mem_r asm("rbx"); +#else + register void *mem_r asm("ebx"); +#endif /* CONFIG_X86_64 */ +#endif /* CONFIG_KASAN */ + + /* + * Non-architectural event for the Goldmont Microarchitecture + * from Intel x86 Architecture Software Developer Manual (SDM): + * MEM_LOAD_UOPS_RETIRED D1H (event number) + * Umask values: + * L1_HIT 01H + * L2_HIT 02H + * L1_MISS 08H + * L2_MISS 10H + * + * On Broadwell Microarchitecture the MEM_LOAD_UOPS_RETIRED event + * has two "no fix" errata associated with it: BDM35 and BDM100. On + * this platform we use the following events instead: + * L2_RQSTS 24H (Documented in https://download.01.org/perfmon/BDW/) + * REFERENCES FFH + * MISS 3FH + * LONGEST_LAT_CACHE 2EH (Documented in SDM) + * REFERENCE 4FH + * MISS 41H + */ + + /* + * Start by setting flags for IA32_PERFEVTSELx: + * OS (Operating system mode) 0x2 + * INT (APIC interrupt enable) 0x10 + * EN (Enable counter) 0x40 + * + * Then add the Umask value and event number to select performance + * event. + */ + + switch (boot_cpu_data.x86_model) { + case INTEL_FAM6_ATOM_GOLDMONT: + case INTEL_FAM6_ATOM_GEMINI_LAKE: + l2_hit_bits = (0x52ULL << 16) | (0x2 << 8) | 0xd1; + l2_miss_bits = (0x52ULL << 16) | (0x10 << 8) | 0xd1; + break; + case INTEL_FAM6_BROADWELL_X: + /* On BDW the l2_hit_bits count references, not hits */ + l2_hit_bits = (0x52ULL << 16) | (0xff << 8) | 0x24; + l2_miss_bits = (0x52ULL << 16) | (0x3f << 8) | 0x24; + /* On BDW the l3_hit_bits count references, not hits */ + l3_hit_bits = (0x52ULL << 16) | (0x4f << 8) | 0x2e; + l3_miss_bits = (0x52ULL << 16) | (0x41 << 8) | 0x2e; + break; + default: + goto out; + } + + local_irq_disable(); + /* + * Call wrmsr directly to prevent the local register variables from + * being overwritten due to reordering of their assignment with + * the wrmsr calls. + */ + __wrmsr(MSR_MISC_FEATURE_CONTROL, prefetch_disable_bits, 0x0); + /* Disable events and reset counters */ + pseudo_wrmsrl_notrace(MSR_ARCH_PERFMON_EVENTSEL0, 0x0); + pseudo_wrmsrl_notrace(MSR_ARCH_PERFMON_EVENTSEL0 + 1, 0x0); + pseudo_wrmsrl_notrace(MSR_ARCH_PERFMON_PERFCTR0, 0x0); + pseudo_wrmsrl_notrace(MSR_ARCH_PERFMON_PERFCTR0 + 1, 0x0); + if (l3_hit_bits > 0) { + pseudo_wrmsrl_notrace(MSR_ARCH_PERFMON_EVENTSEL0 + 2, 0x0); + pseudo_wrmsrl_notrace(MSR_ARCH_PERFMON_EVENTSEL0 + 3, 0x0); + pseudo_wrmsrl_notrace(MSR_ARCH_PERFMON_PERFCTR0 + 2, 0x0); + pseudo_wrmsrl_notrace(MSR_ARCH_PERFMON_PERFCTR0 + 3, 0x0); + } + /* Set and enable the L2 counters */ + pseudo_wrmsrl_notrace(MSR_ARCH_PERFMON_EVENTSEL0, l2_hit_bits); + pseudo_wrmsrl_notrace(MSR_ARCH_PERFMON_EVENTSEL0 + 1, l2_miss_bits); + if (l3_hit_bits > 0) { + pseudo_wrmsrl_notrace(MSR_ARCH_PERFMON_EVENTSEL0 + 2, + l3_hit_bits); + pseudo_wrmsrl_notrace(MSR_ARCH_PERFMON_EVENTSEL0 + 3, + l3_miss_bits); + } + mem_r = plr->kmem; + size = plr->size; + line_size = plr->line_size; + for (i = 0; i < size; i += line_size) { + asm volatile("mov (%0,%1,1), %%eax\n\t" + : + : "r" (mem_r), "r" (i) + : "%eax", "memory"); + } + /* + * Call wrmsr directly (no tracing) to not influence + * the cache access counters as they are disabled.
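+ * Clearing 0x40 << 16 (the EN flag of IA32_PERFEVTSELx, bit 22) below
+ * freezes each counter so that the native_read_pmc() reads which
+ * follow are not perturbed by the measurement code itself.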
+
+	switch (boot_cpu_data.x86_model) {
+	case INTEL_FAM6_ATOM_GOLDMONT:
+	case INTEL_FAM6_ATOM_GEMINI_LAKE:
+		l2_hit_bits = (0x52ULL << 16) | (0x2 << 8) | 0xd1;
+		l2_miss_bits = (0x52ULL << 16) | (0x10 << 8) | 0xd1;
+		break;
+	case INTEL_FAM6_BROADWELL_X:
+		/* On BDW the l2_hit_bits count references, not hits */
+		l2_hit_bits = (0x52ULL << 16) | (0xff << 8) | 0x24;
+		l2_miss_bits = (0x52ULL << 16) | (0x3f << 8) | 0x24;
+		/* On BDW the l3_hit_bits count references, not hits */
+		l3_hit_bits = (0x52ULL << 16) | (0x4f << 8) | 0x2e;
+		l3_miss_bits = (0x52ULL << 16) | (0x41 << 8) | 0x2e;
+		break;
+	default:
+		goto out;
+	}
+
+	local_irq_disable();
+	/*
+	 * Call wrmsr directly to keep the local register variables from
+	 * being overwritten due to reordering of their assignment with
+	 * the wrmsr calls.
+	 */
+	__wrmsr(MSR_MISC_FEATURE_CONTROL, prefetch_disable_bits, 0x0);
+	/* Disable events and reset counters */
+	pseudo_wrmsrl_notrace(MSR_ARCH_PERFMON_EVENTSEL0, 0x0);
+	pseudo_wrmsrl_notrace(MSR_ARCH_PERFMON_EVENTSEL0 + 1, 0x0);
+	pseudo_wrmsrl_notrace(MSR_ARCH_PERFMON_PERFCTR0, 0x0);
+	pseudo_wrmsrl_notrace(MSR_ARCH_PERFMON_PERFCTR0 + 1, 0x0);
+	if (l3_hit_bits > 0) {
+		pseudo_wrmsrl_notrace(MSR_ARCH_PERFMON_EVENTSEL0 + 2, 0x0);
+		pseudo_wrmsrl_notrace(MSR_ARCH_PERFMON_EVENTSEL0 + 3, 0x0);
+		pseudo_wrmsrl_notrace(MSR_ARCH_PERFMON_PERFCTR0 + 2, 0x0);
+		pseudo_wrmsrl_notrace(MSR_ARCH_PERFMON_PERFCTR0 + 3, 0x0);
+	}
+	/* Set and enable the L2 counters */
+	pseudo_wrmsrl_notrace(MSR_ARCH_PERFMON_EVENTSEL0, l2_hit_bits);
+	pseudo_wrmsrl_notrace(MSR_ARCH_PERFMON_EVENTSEL0 + 1, l2_miss_bits);
+	if (l3_hit_bits > 0) {
+		pseudo_wrmsrl_notrace(MSR_ARCH_PERFMON_EVENTSEL0 + 2,
+				      l3_hit_bits);
+		pseudo_wrmsrl_notrace(MSR_ARCH_PERFMON_EVENTSEL0 + 3,
+				      l3_miss_bits);
+	}
+	mem_r = plr->kmem;
+	size = plr->size;
+	line_size = plr->line_size;
+	for (i = 0; i < size; i += line_size) {
+		asm volatile("mov (%0,%1,1), %%eax\n\t"
+			     :
+			     : "r" (mem_r), "r" (i)
+			     : "%eax", "memory");
+	}
+	/*
+	 * Call wrmsr directly (no tracing) to not influence
+	 * the cache access counters as they are disabled.
+	 */
+	pseudo_wrmsrl_notrace(MSR_ARCH_PERFMON_EVENTSEL0,
+			      l2_hit_bits & ~(0x40ULL << 16));
+	pseudo_wrmsrl_notrace(MSR_ARCH_PERFMON_EVENTSEL0 + 1,
+			      l2_miss_bits & ~(0x40ULL << 16));
+	if (l3_hit_bits > 0) {
+		pseudo_wrmsrl_notrace(MSR_ARCH_PERFMON_EVENTSEL0 + 2,
+				      l3_hit_bits & ~(0x40ULL << 16));
+		pseudo_wrmsrl_notrace(MSR_ARCH_PERFMON_EVENTSEL0 + 3,
+				      l3_miss_bits & ~(0x40ULL << 16));
+	}
+	l2_hits = native_read_pmc(0);
+	l2_miss = native_read_pmc(1);
+	if (l3_hit_bits > 0) {
+		l3_hits = native_read_pmc(2);
+		l3_miss = native_read_pmc(3);
+	}
+	wrmsr(MSR_MISC_FEATURE_CONTROL, 0x0, 0x0);
+	local_irq_enable();
+	/*
+	 * On BDW we count references and misses, need to adjust. Sometimes
+	 * the "hits" counter is a bit more than the references, for
+	 * example, x references but x + 1 hits. To not report invalid
+	 * hit values in this case we treat that as misses equal to
+	 * references.
+	 */
+	if (boot_cpu_data.x86_model == INTEL_FAM6_BROADWELL_X)
+		l2_hits -= (l2_miss > l2_hits ? l2_hits : l2_miss);
+	trace_pseudo_lock_l2(l2_hits, l2_miss);
+	if (l3_hit_bits > 0) {
+		if (boot_cpu_data.x86_model == INTEL_FAM6_BROADWELL_X)
+			l3_hits -= (l3_miss > l3_hits ? l3_hits : l3_miss);
+		trace_pseudo_lock_l3(l3_hits, l3_miss);
+	}
+
+out:
+	plr->thread_done = 1;
+	wake_up_interruptible(&plr->lock_thread_wq);
+	return 0;
+}
+
+/**
+ * pseudo_lock_measure_cycles - Trigger latency measure to pseudo-locked region
+ *
+ * The measurement of latency to access a pseudo-locked region should be
+ * done from a cpu that is associated with that pseudo-locked region.
+ * Determine which cpu is associated with this region and start a thread on
+ * that cpu to perform the measurement, wait for that thread to complete.
+ *
+ * Return: 0 on success, <0 on failure
+ */
+static int pseudo_lock_measure_cycles(struct rdtgroup *rdtgrp, int sel)
+{
+	struct pseudo_lock_region *plr = rdtgrp->plr;
+	struct task_struct *thread;
+	unsigned int cpu;
+	int ret = -1;
+
+	cpus_read_lock();
+	mutex_lock(&rdtgroup_mutex);
+
+	if (rdtgrp->flags & RDT_DELETED) {
+		ret = -ENODEV;
+		goto out;
+	}
+
+	plr->thread_done = 0;
+	cpu = cpumask_first(&plr->d->cpu_mask);
+	if (!cpu_online(cpu)) {
+		ret = -ENODEV;
+		goto out;
+	}
+
+	if (sel == 1)
+		thread = kthread_create_on_node(measure_cycles_lat_fn, plr,
+						cpu_to_node(cpu),
+						"pseudo_lock_measure/%u",
+						cpu);
+	else if (sel == 2)
+		thread = kthread_create_on_node(measure_cycles_perf_fn, plr,
+						cpu_to_node(cpu),
+						"pseudo_lock_measure/%u",
+						cpu);
+	else
+		goto out;
+
+	if (IS_ERR(thread)) {
+		ret = PTR_ERR(thread);
+		goto out;
+	}
+	kthread_bind(thread, cpu);
+	wake_up_process(thread);
+
+	ret = wait_event_interruptible(plr->lock_thread_wq,
+				       plr->thread_done == 1);
+	if (ret < 0)
+		goto out;
+
+	ret = 0;
+
+out:
+	mutex_unlock(&rdtgroup_mutex);
+	cpus_read_unlock();
+	return ret;
+}
+
+static ssize_t pseudo_lock_measure_trigger(struct file *file,
+					   const char __user *user_buf,
+					   size_t count, loff_t *ppos)
+{
+	struct rdtgroup *rdtgrp = file->private_data;
+	size_t buf_size;
+	char buf[32];
+	int ret;
+	int sel;
+
+	buf_size = min(count, (sizeof(buf) - 1));
+	if (copy_from_user(buf, user_buf, buf_size))
+		return -EFAULT;
+
+	buf[buf_size] = '\0';
+	ret = kstrtoint(buf, 10, &sel);
+	if (ret == 0) {
+		if (sel != 1)
+			return -EINVAL;
+		ret = debugfs_file_get(file->f_path.dentry);
+		if (ret)
+			return ret;
+		ret = pseudo_lock_measure_cycles(rdtgrp, sel);
+		if (ret == 0)
+			ret = count;
+		debugfs_file_put(file->f_path.dentry);
+	}
+
+	return ret;
+}
+
+static const struct file_operations pseudo_measure_fops = {
+	.write = pseudo_lock_measure_trigger,
+	.open = simple_open,
+	.llseek = default_llseek,
+};
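Note that the write handler above accepts only "1" (the memory-latency measurement); results are reported through the pseudo_lock_mem_latency tracepoint. A rough userspace sketch of driving it, for illustration only and not part of the patch, assuming the debugfs path that debugfs_create_dir()/debugfs_create_file() produce for a resource group named "grp1" (requires root):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	/* "1" selects the memory-latency measurement thread */
	int fd = open("/sys/kernel/debug/resctrl/grp1/pseudo_lock_measure",
		      O_WRONLY);

	if (fd < 0) {
		perror("open");
		return 1;
	}
	if (write(fd, "1", 1) != 1)
		perror("write");
	close(fd);
	/* Read the latencies from the pseudo_lock_mem_latency trace events */
	return 0;
}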
+
+/**
+ * rdtgroup_pseudo_lock_create - Create a pseudo-locked region
+ * @rdtgrp: resource group to which pseudo-lock region belongs
+ *
+ * Called when a resource group in the pseudo-locksetup mode receives a
+ * valid schemata that should be pseudo-locked. Since the resource group is
+ * in pseudo-locksetup mode the &struct pseudo_lock_region has already been
+ * allocated and initialized with the essential information. If a failure
+ * occurs the resource group remains in the pseudo-locksetup mode with the
+ * &struct pseudo_lock_region associated with it, but cleared from all
+ * information and ready for the user to re-attempt pseudo-locking by
+ * writing the schemata again.
+ *
+ * Return: 0 if the pseudo-locked region was successfully pseudo-locked, <0
+ * on failure. Descriptive error will be written to last_cmd_status buffer.
+ */
+int rdtgroup_pseudo_lock_create(struct rdtgroup *rdtgrp)
+{
+	struct pseudo_lock_region *plr = rdtgrp->plr;
+	struct task_struct *thread;
+	unsigned int new_minor;
+	struct device *dev;
+	int ret;
+
+	ret = pseudo_lock_region_alloc(plr);
+	if (ret < 0)
+		return ret;
+
+	ret = pseudo_lock_cstates_constrain(plr);
+	if (ret < 0) {
+		ret = -EINVAL;
+		goto out_region;
+	}
+
+	plr->thread_done = 0;
+
+	thread = kthread_create_on_node(pseudo_lock_fn, rdtgrp,
+					cpu_to_node(plr->cpu),
+					"pseudo_lock/%u", plr->cpu);
+	if (IS_ERR(thread)) {
+		ret = PTR_ERR(thread);
+		rdt_last_cmd_printf("locking thread returned error %d\n", ret);
+		goto out_cstates;
+	}
+
+	kthread_bind(thread, plr->cpu);
+	wake_up_process(thread);
+
+	ret = wait_event_interruptible(plr->lock_thread_wq,
+				       plr->thread_done == 1);
+	if (ret < 0) {
+		/*
+		 * If the thread does not get on the CPU for whatever
+		 * reason and the process which sets up the region is
+		 * interrupted then this will leave the thread in runnable
+		 * state and once it gets on the CPU it will dereference
+		 * the cleared, but not freed, plr struct resulting in an
+		 * empty pseudo-locking loop.
+		 */
+		rdt_last_cmd_puts("locking thread interrupted\n");
+		goto out_cstates;
+	}
+
+	ret = pseudo_lock_minor_get(&new_minor);
+	if (ret < 0) {
+		rdt_last_cmd_puts("unable to obtain a new minor number\n");
+		goto out_cstates;
+	}
+
+	/*
+	 * Unlock access but do not release the reference. The
+	 * pseudo-locked region will still be here on return.
+	 *
+	 * The mutex has to be released temporarily to avoid a potential
+	 * deadlock with the mm->mmap_sem semaphore which is obtained in
+	 * the device_create() and debugfs_create_dir() callpath below
+	 * as well as before the mmap() callback is called.
+	 */
+	mutex_unlock(&rdtgroup_mutex);
+
+	if (!IS_ERR_OR_NULL(debugfs_resctrl)) {
+		plr->debugfs_dir = debugfs_create_dir(rdtgrp->kn->name,
+						      debugfs_resctrl);
+		if (!IS_ERR_OR_NULL(plr->debugfs_dir))
+			debugfs_create_file("pseudo_lock_measure", 0200,
+					    plr->debugfs_dir, rdtgrp,
+					    &pseudo_measure_fops);
+	}
+
+	dev = device_create(pseudo_lock_class, NULL,
+			    MKDEV(pseudo_lock_major, new_minor),
+			    rdtgrp, "%s", rdtgrp->kn->name);
+
+	mutex_lock(&rdtgroup_mutex);
+
+	if (IS_ERR(dev)) {
+		ret = PTR_ERR(dev);
+		rdt_last_cmd_printf("failed to create character device: %d\n",
+				    ret);
+		goto out_debugfs;
+	}
+
+	/* We released the mutex - check if group was removed while we did so */
+	if (rdtgrp->flags & RDT_DELETED) {
+		ret = -ENODEV;
+		goto out_device;
+	}
+
+	plr->minor = new_minor;
+
+	rdtgrp->mode = RDT_MODE_PSEUDO_LOCKED;
+	closid_free(rdtgrp->closid);
+	rdtgroup_kn_mode_restore(rdtgrp, "cpus", 0444);
+	rdtgroup_kn_mode_restore(rdtgrp, "cpus_list", 0444);
+
+	ret = 0;
+	goto out;
+
+out_device:
+	device_destroy(pseudo_lock_class, MKDEV(pseudo_lock_major, new_minor));
+out_debugfs:
+	debugfs_remove_recursive(plr->debugfs_dir);
+	pseudo_lock_minor_release(new_minor);
+out_cstates:
+	pseudo_lock_cstates_relax(plr);
+out_region:
+	pseudo_lock_region_clear(plr);
+out:
+	return ret;
+}
+
+/**
+ * rdtgroup_pseudo_lock_remove - Remove a pseudo-locked region
+ * @rdtgrp: resource group to which the pseudo-locked region belongs
+ *
+ * The removal of a pseudo-locked region can be initiated when the resource
+ * group is removed via a "rmdir" from userspace or by the unmount of the
+ * resctrl filesystem. On removal the resource group does not go back to
+ * pseudo-locksetup mode before it is removed, instead it is removed
+ * directly. There is thus asymmetry with the creation where the
+ * &struct pseudo_lock_region is removed here while it was not created in
+ * rdtgroup_pseudo_lock_create().
+ *
+ * Return: void
+ */
+void rdtgroup_pseudo_lock_remove(struct rdtgroup *rdtgrp)
+{
+	struct pseudo_lock_region *plr = rdtgrp->plr;
+
+	if (rdtgrp->mode == RDT_MODE_PSEUDO_LOCKSETUP) {
+		/*
+		 * Default group cannot be a pseudo-locked region so we can
+		 * free closid here.
+		 */
+		closid_free(rdtgrp->closid);
+		goto free;
+	}
+
+	pseudo_lock_cstates_relax(plr);
+	debugfs_remove_recursive(rdtgrp->plr->debugfs_dir);
+	device_destroy(pseudo_lock_class, MKDEV(pseudo_lock_major, plr->minor));
+	pseudo_lock_minor_release(plr->minor);
+
+free:
+	pseudo_lock_free(rdtgrp);
+}
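Taken together, the open and mmap handlers that follow imply a specific userspace sequence: pin the task to a CPU of the region's cache domain, open the device node (named "pseudo_lock/<group>" by pseudo_lock_devnode() further below), and map it MAP_SHARED. A rough sketch, for illustration only and not part of the patch, with CPU number, group name, and size as assumptions to adapt:

#define _GNU_SOURCE
#include <fcntl.h>
#include <sched.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	cpu_set_t cpus;
	size_t size = 256 * 1024;	/* must not exceed the region size */
	void *mem;
	int fd;

	/* Affinity must be a subset of the region's CPUs (enforced below) */
	CPU_ZERO(&cpus);
	CPU_SET(2, &cpus);		/* a CPU from the region's cache domain */
	if (sched_setaffinity(0, sizeof(cpus), &cpus))
		return 1;

	fd = open("/dev/pseudo_lock/grp1", O_RDWR);
	if (fd < 0)
		return 1;

	/* Copy-on-write mappings are rejected; MAP_SHARED is required */
	mem = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	if (mem == MAP_FAILED)
		return 1;

	/* Data placed here stays in the pseudo-locked cache region */
	munmap(mem, size);
	close(fd);
	return 0;
}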
+
+static int pseudo_lock_dev_open(struct inode *inode, struct file *filp)
+{
+	struct rdtgroup *rdtgrp;
+
+	mutex_lock(&rdtgroup_mutex);
+
+	rdtgrp = region_find_by_minor(iminor(inode));
+	if (!rdtgrp) {
+		mutex_unlock(&rdtgroup_mutex);
+		return -ENODEV;
+	}
+
+	filp->private_data = rdtgrp;
+	atomic_inc(&rdtgrp->waitcount);
+	/* Perform a non-seekable open - llseek is not supported */
+	filp->f_mode &= ~(FMODE_LSEEK | FMODE_PREAD | FMODE_PWRITE);
+
+	mutex_unlock(&rdtgroup_mutex);
+
+	return 0;
+}
+
+static int pseudo_lock_dev_release(struct inode *inode, struct file *filp)
+{
+	struct rdtgroup *rdtgrp;
+
+	mutex_lock(&rdtgroup_mutex);
+	rdtgrp = filp->private_data;
+	WARN_ON(!rdtgrp);
+	if (!rdtgrp) {
+		mutex_unlock(&rdtgroup_mutex);
+		return -ENODEV;
+	}
+	filp->private_data = NULL;
+	atomic_dec(&rdtgrp->waitcount);
+	mutex_unlock(&rdtgroup_mutex);
+	return 0;
+}
+
+static int pseudo_lock_dev_mremap(struct vm_area_struct *area)
+{
+	/* Not supported */
+	return -EINVAL;
+}
+
+static const struct vm_operations_struct pseudo_mmap_ops = {
+	.mremap = pseudo_lock_dev_mremap,
+};
+
+static int pseudo_lock_dev_mmap(struct file *filp, struct vm_area_struct *vma)
+{
+	unsigned long vsize = vma->vm_end - vma->vm_start;
+	unsigned long off = vma->vm_pgoff << PAGE_SHIFT;
+	struct pseudo_lock_region *plr;
+	struct rdtgroup *rdtgrp;
+	unsigned long physical;
+	unsigned long psize;
+
+	mutex_lock(&rdtgroup_mutex);
+
+	rdtgrp = filp->private_data;
+	WARN_ON(!rdtgrp);
+	if (!rdtgrp) {
+		mutex_unlock(&rdtgroup_mutex);
+		return -ENODEV;
+	}
+
+	plr = rdtgrp->plr;
+
+	/*
+	 * Task is required to run with affinity to the cpus associated
+	 * with the pseudo-locked region. If this is not the case the task
+	 * may be scheduled elsewhere and invalidate entries in the
+	 * pseudo-locked region.
+	 */
+	if (!cpumask_subset(&current->cpus_allowed, &plr->d->cpu_mask)) {
+		mutex_unlock(&rdtgroup_mutex);
+		return -EINVAL;
+	}
+
+	physical = __pa(plr->kmem) >> PAGE_SHIFT;
+	psize = plr->size - off;
+
+	if (off > plr->size) {
+		mutex_unlock(&rdtgroup_mutex);
+		return -ENOSPC;
+	}
+
+	/*
+	 * Ensure changes are carried directly to the memory being mapped,
+	 * do not allow copy-on-write mapping.
+ */ + if (!(vma->vm_flags & VM_SHARED)) { + mutex_unlock(&rdtgroup_mutex); + return -EINVAL; + } + + if (vsize > psize) { + mutex_unlock(&rdtgroup_mutex); + return -ENOSPC; + } + + memset(plr->kmem + off, 0, vsize); + + if (remap_pfn_range(vma, vma->vm_start, physical + vma->vm_pgoff, + vsize, vma->vm_page_prot)) { + mutex_unlock(&rdtgroup_mutex); + return -EAGAIN; + } + vma->vm_ops = &pseudo_mmap_ops; + mutex_unlock(&rdtgroup_mutex); + return 0; +} + +static const struct file_operations pseudo_lock_dev_fops = { + .owner = THIS_MODULE, + .llseek = no_llseek, + .read = NULL, + .write = NULL, + .open = pseudo_lock_dev_open, + .release = pseudo_lock_dev_release, + .mmap = pseudo_lock_dev_mmap, +}; + +static char *pseudo_lock_devnode(struct device *dev, umode_t *mode) +{ + struct rdtgroup *rdtgrp; + + rdtgrp = dev_get_drvdata(dev); + if (mode) + *mode = 0600; + return kasprintf(GFP_KERNEL, "pseudo_lock/%s", rdtgrp->kn->name); +} + +int rdt_pseudo_lock_init(void) +{ + int ret; + + ret = register_chrdev(0, "pseudo_lock", &pseudo_lock_dev_fops); + if (ret < 0) + return ret; + + pseudo_lock_major = ret; + + pseudo_lock_class = class_create(THIS_MODULE, "pseudo_lock"); + if (IS_ERR(pseudo_lock_class)) { + ret = PTR_ERR(pseudo_lock_class); + unregister_chrdev(pseudo_lock_major, "pseudo_lock"); + return ret; + } + + pseudo_lock_class->devnode = pseudo_lock_devnode; + return 0; +} + +void rdt_pseudo_lock_release(void) +{ + class_destroy(pseudo_lock_class); + pseudo_lock_class = NULL; + unregister_chrdev(pseudo_lock_major, "pseudo_lock"); + pseudo_lock_major = 0; +} diff --git a/arch/x86/kernel/cpu/intel_rdt_pseudo_lock_event.h b/arch/x86/kernel/cpu/intel_rdt_pseudo_lock_event.h new file mode 100644 index 000000000000..2c041e6d9f05 --- /dev/null +++ b/arch/x86/kernel/cpu/intel_rdt_pseudo_lock_event.h @@ -0,0 +1,43 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +#undef TRACE_SYSTEM +#define TRACE_SYSTEM resctrl + +#if !defined(_TRACE_PSEUDO_LOCK_H) || defined(TRACE_HEADER_MULTI_READ) +#define _TRACE_PSEUDO_LOCK_H + +#include + +TRACE_EVENT(pseudo_lock_mem_latency, + TP_PROTO(u32 latency), + TP_ARGS(latency), + TP_STRUCT__entry(__field(u32, latency)), + TP_fast_assign(__entry->latency = latency), + TP_printk("latency=%u", __entry->latency) + ); + +TRACE_EVENT(pseudo_lock_l2, + TP_PROTO(u64 l2_hits, u64 l2_miss), + TP_ARGS(l2_hits, l2_miss), + TP_STRUCT__entry(__field(u64, l2_hits) + __field(u64, l2_miss)), + TP_fast_assign(__entry->l2_hits = l2_hits; + __entry->l2_miss = l2_miss;), + TP_printk("hits=%llu miss=%llu", + __entry->l2_hits, __entry->l2_miss)); + +TRACE_EVENT(pseudo_lock_l3, + TP_PROTO(u64 l3_hits, u64 l3_miss), + TP_ARGS(l3_hits, l3_miss), + TP_STRUCT__entry(__field(u64, l3_hits) + __field(u64, l3_miss)), + TP_fast_assign(__entry->l3_hits = l3_hits; + __entry->l3_miss = l3_miss;), + TP_printk("hits=%llu miss=%llu", + __entry->l3_hits, __entry->l3_miss)); + +#endif /* _TRACE_PSEUDO_LOCK_H */ + +#undef TRACE_INCLUDE_PATH +#define TRACE_INCLUDE_PATH . 
+#define TRACE_INCLUDE_FILE intel_rdt_pseudo_lock_event
+#include
diff --git a/arch/x86/kernel/cpu/intel_rdt_rdtgroup.c b/arch/x86/kernel/cpu/intel_rdt_rdtgroup.c
index 749856a2e736..d6d7ea7349d0 100644
--- a/arch/x86/kernel/cpu/intel_rdt_rdtgroup.c
+++ b/arch/x86/kernel/cpu/intel_rdt_rdtgroup.c
@@ -20,7 +20,9 @@
 #define pr_fmt(fmt)	KBUILD_MODNAME ": " fmt
 
+#include
 #include
+#include
 #include
 #include
 #include
@@ -55,6 +57,8 @@ static struct kernfs_node *kn_mondata;
 static struct seq_buf last_cmd_status;
 static char last_cmd_status_buf[512];
 
+struct dentry *debugfs_resctrl;
+
 void rdt_last_cmd_clear(void)
 {
 	lockdep_assert_held(&rdtgroup_mutex);
@@ -121,11 +125,65 @@ static int closid_alloc(void)
 	return closid;
 }
 
-static void closid_free(int closid)
+void closid_free(int closid)
 {
 	closid_free_map |= 1 << closid;
 }
 
+/**
+ * closid_allocated - test if provided closid is in use
+ * @closid: closid to be tested
+ *
+ * Return: true if @closid is currently associated with a resource group,
+ * false if @closid is free
+ */
+static bool closid_allocated(unsigned int closid)
+{
+	return (closid_free_map & (1 << closid)) == 0;
+}
+
+/**
+ * rdtgroup_mode_by_closid - Return mode of resource group with closid
+ * @closid: closid of the resource group
+ *
+ * Each resource group is associated with a @closid. Here the mode
+ * of a resource group can be queried by searching for it using its closid.
+ *
+ * Return: mode as &enum rdtgrp_mode of resource group with closid @closid
+ */
+enum rdtgrp_mode rdtgroup_mode_by_closid(int closid)
+{
+	struct rdtgroup *rdtgrp;
+
+	list_for_each_entry(rdtgrp, &rdt_all_groups, rdtgroup_list) {
+		if (rdtgrp->closid == closid)
+			return rdtgrp->mode;
+	}
+
+	return RDT_NUM_MODES;
+}
+
+static const char * const rdt_mode_str[] = {
+	[RDT_MODE_SHAREABLE]		= "shareable",
+	[RDT_MODE_EXCLUSIVE]		= "exclusive",
+	[RDT_MODE_PSEUDO_LOCKSETUP]	= "pseudo-locksetup",
+	[RDT_MODE_PSEUDO_LOCKED]	= "pseudo-locked",
+};
+
+/**
+ * rdtgroup_mode_str - Return the string representation of mode
+ * @mode: the resource group mode as &enum rdtgroup_mode
+ *
+ * Return: string representation of valid mode, "unknown" otherwise
+ */
+static const char *rdtgroup_mode_str(enum rdtgrp_mode mode)
+{
+	if (mode < RDT_MODE_SHAREABLE || mode >= RDT_NUM_MODES)
+		return "unknown";
+
+	return rdt_mode_str[mode];
+}
+
 /* set uid and gid of rdtgroup dirs and files to that of the creator */
 static int rdtgroup_kn_set_ugid(struct kernfs_node *kn)
 {
@@ -207,8 +265,12 @@ static int rdtgroup_cpus_show(struct kernfs_open_file *of,
 
 	rdtgrp = rdtgroup_kn_lock_live(of->kn);
 	if (rdtgrp) {
-		seq_printf(s, is_cpu_list(of) ? "%*pbl\n" : "%*pb\n",
-			   cpumask_pr_args(&rdtgrp->cpu_mask));
+		if (rdtgrp->mode == RDT_MODE_PSEUDO_LOCKED)
+			seq_printf(s, is_cpu_list(of) ? "%*pbl\n" : "%*pb\n",
+				   cpumask_pr_args(&rdtgrp->plr->d->cpu_mask));
+		else
+			seq_printf(s, is_cpu_list(of) ?
"%*pbl\n" : "%*pb\n", + cpumask_pr_args(&rdtgrp->cpu_mask)); } else { ret = -ENOENT; } @@ -394,6 +456,13 @@ static ssize_t rdtgroup_cpus_write(struct kernfs_open_file *of, goto unlock; } + if (rdtgrp->mode == RDT_MODE_PSEUDO_LOCKED || + rdtgrp->mode == RDT_MODE_PSEUDO_LOCKSETUP) { + ret = -EINVAL; + rdt_last_cmd_puts("pseudo-locking in progress\n"); + goto unlock; + } + if (is_cpu_list(of)) ret = cpulist_parse(buf, newmask); else @@ -509,6 +578,32 @@ static int __rdtgroup_move_task(struct task_struct *tsk, return ret; } +/** + * rdtgroup_tasks_assigned - Test if tasks have been assigned to resource group + * @r: Resource group + * + * Return: 1 if tasks have been assigned to @r, 0 otherwise + */ +int rdtgroup_tasks_assigned(struct rdtgroup *r) +{ + struct task_struct *p, *t; + int ret = 0; + + lockdep_assert_held(&rdtgroup_mutex); + + rcu_read_lock(); + for_each_process_thread(p, t) { + if ((r->type == RDTCTRL_GROUP && t->closid == r->closid) || + (r->type == RDTMON_GROUP && t->rmid == r->mon.rmid)) { + ret = 1; + break; + } + } + rcu_read_unlock(); + + return ret; +} + static int rdtgroup_task_write_permission(struct task_struct *task, struct kernfs_open_file *of) { @@ -570,13 +665,22 @@ static ssize_t rdtgroup_tasks_write(struct kernfs_open_file *of, if (kstrtoint(strstrip(buf), 0, &pid) || pid < 0) return -EINVAL; rdtgrp = rdtgroup_kn_lock_live(of->kn); + if (!rdtgrp) { + rdtgroup_kn_unlock(of->kn); + return -ENOENT; + } rdt_last_cmd_clear(); - if (rdtgrp) - ret = rdtgroup_move_task(pid, rdtgrp, of); - else - ret = -ENOENT; + if (rdtgrp->mode == RDT_MODE_PSEUDO_LOCKED || + rdtgrp->mode == RDT_MODE_PSEUDO_LOCKSETUP) { + ret = -EINVAL; + rdt_last_cmd_puts("pseudo-locking in progress\n"); + goto unlock; + } + ret = rdtgroup_move_task(pid, rdtgrp, of); + +unlock: rdtgroup_kn_unlock(of->kn); return ret ?: nbytes; @@ -662,6 +766,94 @@ static int rdt_shareable_bits_show(struct kernfs_open_file *of, return 0; } +/** + * rdt_bit_usage_show - Display current usage of resources + * + * A domain is a shared resource that can now be allocated differently. Here + * we display the current regions of the domain as an annotated bitmask. 
+ * For each domain of this resource its allocation bitmask + * is annotated as below to indicate the current usage of the corresponding bit: + * 0 - currently unused + * X - currently available for sharing and used by software and hardware + * H - currently used by hardware only but available for software use + * S - currently used and shareable by software only + * E - currently used exclusively by one resource group + * P - currently pseudo-locked by one resource group + */ +static int rdt_bit_usage_show(struct kernfs_open_file *of, + struct seq_file *seq, void *v) +{ + struct rdt_resource *r = of->kn->parent->priv; + u32 sw_shareable = 0, hw_shareable = 0; + u32 exclusive = 0, pseudo_locked = 0; + struct rdt_domain *dom; + int i, hwb, swb, excl, psl; + enum rdtgrp_mode mode; + bool sep = false; + u32 *ctrl; + + mutex_lock(&rdtgroup_mutex); + hw_shareable = r->cache.shareable_bits; + list_for_each_entry(dom, &r->domains, list) { + if (sep) + seq_putc(seq, ';'); + ctrl = dom->ctrl_val; + sw_shareable = 0; + exclusive = 0; + seq_printf(seq, "%d=", dom->id); + for (i = 0; i < r->num_closid; i++, ctrl++) { + if (!closid_allocated(i)) + continue; + mode = rdtgroup_mode_by_closid(i); + switch (mode) { + case RDT_MODE_SHAREABLE: + sw_shareable |= *ctrl; + break; + case RDT_MODE_EXCLUSIVE: + exclusive |= *ctrl; + break; + case RDT_MODE_PSEUDO_LOCKSETUP: + /* + * RDT_MODE_PSEUDO_LOCKSETUP is possible + * here but not included since the CBM + * associated with this CLOSID in this mode + * is not initialized and no task or cpu can be + * assigned this CLOSID. + */ + break; + case RDT_MODE_PSEUDO_LOCKED: + case RDT_NUM_MODES: + WARN(1, + "invalid mode for closid %d\n", i); + break; + } + } + for (i = r->cache.cbm_len - 1; i >= 0; i--) { + pseudo_locked = dom->plr ? dom->plr->cbm : 0; + hwb = test_bit(i, (unsigned long *)&hw_shareable); + swb = test_bit(i, (unsigned long *)&sw_shareable); + excl = test_bit(i, (unsigned long *)&exclusive); + psl = test_bit(i, (unsigned long *)&pseudo_locked); + if (hwb && swb) + seq_putc(seq, 'X'); + else if (hwb && !swb) + seq_putc(seq, 'H'); + else if (!hwb && swb) + seq_putc(seq, 'S'); + else if (excl) + seq_putc(seq, 'E'); + else if (psl) + seq_putc(seq, 'P'); + else /* Unused bits remain */ + seq_putc(seq, '0'); + } + sep = true; + } + seq_putc(seq, '\n'); + mutex_unlock(&rdtgroup_mutex); + return 0; +} + static int rdt_min_bw_show(struct kernfs_open_file *of, struct seq_file *seq, void *v) { @@ -740,6 +932,269 @@ static ssize_t max_threshold_occ_write(struct kernfs_open_file *of, return nbytes; } +/* + * rdtgroup_mode_show - Display mode of this resource group + */ +static int rdtgroup_mode_show(struct kernfs_open_file *of, + struct seq_file *s, void *v) +{ + struct rdtgroup *rdtgrp; + + rdtgrp = rdtgroup_kn_lock_live(of->kn); + if (!rdtgrp) { + rdtgroup_kn_unlock(of->kn); + return -ENOENT; + } + + seq_printf(s, "%s\n", rdtgroup_mode_str(rdtgrp->mode)); + + rdtgroup_kn_unlock(of->kn); + return 0; +} + +/** + * rdtgroup_cbm_overlaps - Does CBM for intended closid overlap with other + * @r: Resource to which domain instance @d belongs. + * @d: The domain instance for which @closid is being tested. + * @cbm: Capacity bitmask being tested. + * @closid: Intended closid for @cbm. + * @exclusive: Only check if overlaps with exclusive resource groups + * + * Checks if provided @cbm intended to be used for @closid on domain + * @d overlaps with any other closids or other hardware usage associated + * with this domain. 
If @exclusive is true then only overlaps with + * resource groups in exclusive mode will be considered. If @exclusive + * is false then overlaps with any resource group or hardware entities + * will be considered. + * + * Return: false if CBM does not overlap, true if it does. + */ +bool rdtgroup_cbm_overlaps(struct rdt_resource *r, struct rdt_domain *d, + u32 _cbm, int closid, bool exclusive) +{ + unsigned long *cbm = (unsigned long *)&_cbm; + unsigned long *ctrl_b; + enum rdtgrp_mode mode; + u32 *ctrl; + int i; + + /* Check for any overlap with regions used by hardware directly */ + if (!exclusive) { + if (bitmap_intersects(cbm, + (unsigned long *)&r->cache.shareable_bits, + r->cache.cbm_len)) + return true; + } + + /* Check for overlap with other resource groups */ + ctrl = d->ctrl_val; + for (i = 0; i < r->num_closid; i++, ctrl++) { + ctrl_b = (unsigned long *)ctrl; + mode = rdtgroup_mode_by_closid(i); + if (closid_allocated(i) && i != closid && + mode != RDT_MODE_PSEUDO_LOCKSETUP) { + if (bitmap_intersects(cbm, ctrl_b, r->cache.cbm_len)) { + if (exclusive) { + if (mode == RDT_MODE_EXCLUSIVE) + return true; + continue; + } + return true; + } + } + } + + return false; +} + +/** + * rdtgroup_mode_test_exclusive - Test if this resource group can be exclusive + * + * An exclusive resource group implies that there should be no sharing of + * its allocated resources. At the time this group is considered to be + * exclusive this test can determine if its current schemata supports this + * setting by testing for overlap with all other resource groups. + * + * Return: true if resource group can be exclusive, false if there is overlap + * with allocations of other resource groups and thus this resource group + * cannot be exclusive. + */ +static bool rdtgroup_mode_test_exclusive(struct rdtgroup *rdtgrp) +{ + int closid = rdtgrp->closid; + struct rdt_resource *r; + struct rdt_domain *d; + + for_each_alloc_enabled_rdt_resource(r) { + list_for_each_entry(d, &r->domains, list) { + if (rdtgroup_cbm_overlaps(r, d, d->ctrl_val[closid], + rdtgrp->closid, false)) + return false; + } + } + + return true; +} + +/** + * rdtgroup_mode_write - Modify the resource group's mode + * + */ +static ssize_t rdtgroup_mode_write(struct kernfs_open_file *of, + char *buf, size_t nbytes, loff_t off) +{ + struct rdtgroup *rdtgrp; + enum rdtgrp_mode mode; + int ret = 0; + + /* Valid input requires a trailing newline */ + if (nbytes == 0 || buf[nbytes - 1] != '\n') + return -EINVAL; + buf[nbytes - 1] = '\0'; + + rdtgrp = rdtgroup_kn_lock_live(of->kn); + if (!rdtgrp) { + rdtgroup_kn_unlock(of->kn); + return -ENOENT; + } + + rdt_last_cmd_clear(); + + mode = rdtgrp->mode; + + if ((!strcmp(buf, "shareable") && mode == RDT_MODE_SHAREABLE) || + (!strcmp(buf, "exclusive") && mode == RDT_MODE_EXCLUSIVE) || + (!strcmp(buf, "pseudo-locksetup") && + mode == RDT_MODE_PSEUDO_LOCKSETUP) || + (!strcmp(buf, "pseudo-locked") && mode == RDT_MODE_PSEUDO_LOCKED)) + goto out; + + if (mode == RDT_MODE_PSEUDO_LOCKED) { + rdt_last_cmd_printf("cannot change pseudo-locked group\n"); + ret = -EINVAL; + goto out; + } + + if (!strcmp(buf, "shareable")) { + if (rdtgrp->mode == RDT_MODE_PSEUDO_LOCKSETUP) { + ret = rdtgroup_locksetup_exit(rdtgrp); + if (ret) + goto out; + } + rdtgrp->mode = RDT_MODE_SHAREABLE; + } else if (!strcmp(buf, "exclusive")) { + if (!rdtgroup_mode_test_exclusive(rdtgrp)) { + rdt_last_cmd_printf("schemata overlaps\n"); + ret = -EINVAL; + goto out; + } + if (rdtgrp->mode == RDT_MODE_PSEUDO_LOCKSETUP) { + ret = 
rdtgroup_locksetup_exit(rdtgrp); + if (ret) + goto out; + } + rdtgrp->mode = RDT_MODE_EXCLUSIVE; + } else if (!strcmp(buf, "pseudo-locksetup")) { + ret = rdtgroup_locksetup_enter(rdtgrp); + if (ret) + goto out; + rdtgrp->mode = RDT_MODE_PSEUDO_LOCKSETUP; + } else { + rdt_last_cmd_printf("unknown/unsupported mode\n"); + ret = -EINVAL; + } + +out: + rdtgroup_kn_unlock(of->kn); + return ret ?: nbytes; +} + +/** + * rdtgroup_cbm_to_size - Translate CBM to size in bytes + * @r: RDT resource to which @d belongs. + * @d: RDT domain instance. + * @cbm: bitmask for which the size should be computed. + * + * The bitmask provided associated with the RDT domain instance @d will be + * translated into how many bytes it represents. The size in bytes is + * computed by first dividing the total cache size by the CBM length to + * determine how many bytes each bit in the bitmask represents. The result + * is multiplied with the number of bits set in the bitmask. + */ +unsigned int rdtgroup_cbm_to_size(struct rdt_resource *r, + struct rdt_domain *d, u32 cbm) +{ + struct cpu_cacheinfo *ci; + unsigned int size = 0; + int num_b, i; + + num_b = bitmap_weight((unsigned long *)&cbm, r->cache.cbm_len); + ci = get_cpu_cacheinfo(cpumask_any(&d->cpu_mask)); + for (i = 0; i < ci->num_leaves; i++) { + if (ci->info_list[i].level == r->cache_level) { + size = ci->info_list[i].size / r->cache.cbm_len * num_b; + break; + } + } + + return size; +} + +/** + * rdtgroup_size_show - Display size in bytes of allocated regions + * + * The "size" file mirrors the layout of the "schemata" file, printing the + * size in bytes of each region instead of the capacity bitmask. + * + */ +static int rdtgroup_size_show(struct kernfs_open_file *of, + struct seq_file *s, void *v) +{ + struct rdtgroup *rdtgrp; + struct rdt_resource *r; + struct rdt_domain *d; + unsigned int size; + bool sep = false; + u32 cbm; + + rdtgrp = rdtgroup_kn_lock_live(of->kn); + if (!rdtgrp) { + rdtgroup_kn_unlock(of->kn); + return -ENOENT; + } + + if (rdtgrp->mode == RDT_MODE_PSEUDO_LOCKED) { + seq_printf(s, "%*s:", max_name_width, rdtgrp->plr->r->name); + size = rdtgroup_cbm_to_size(rdtgrp->plr->r, + rdtgrp->plr->d, + rdtgrp->plr->cbm); + seq_printf(s, "%d=%u\n", rdtgrp->plr->d->id, size); + goto out; + } + + for_each_alloc_enabled_rdt_resource(r) { + seq_printf(s, "%*s:", max_name_width, r->name); + list_for_each_entry(d, &r->domains, list) { + if (sep) + seq_putc(s, ';'); + if (rdtgrp->mode == RDT_MODE_PSEUDO_LOCKSETUP) { + size = 0; + } else { + cbm = d->ctrl_val[rdtgrp->closid]; + size = rdtgroup_cbm_to_size(r, d, cbm); + } + seq_printf(s, "%d=%u", d->id, size); + sep = true; + } + seq_putc(s, '\n'); + } + +out: + rdtgroup_kn_unlock(of->kn); + + return 0; +} + /* rdtgroup information files for one cache resource. 
*/ static struct rftype res_common_files[] = { { @@ -791,6 +1246,13 @@ static struct rftype res_common_files[] = { .seq_show = rdt_shareable_bits_show, .fflags = RF_CTRL_INFO | RFTYPE_RES_CACHE, }, + { + .name = "bit_usage", + .mode = 0444, + .kf_ops = &rdtgroup_kf_single_ops, + .seq_show = rdt_bit_usage_show, + .fflags = RF_CTRL_INFO | RFTYPE_RES_CACHE, + }, { .name = "min_bandwidth", .mode = 0444, @@ -853,6 +1315,22 @@ static struct rftype res_common_files[] = { .seq_show = rdtgroup_schemata_show, .fflags = RF_CTRL_BASE, }, + { + .name = "mode", + .mode = 0644, + .kf_ops = &rdtgroup_kf_single_ops, + .write = rdtgroup_mode_write, + .seq_show = rdtgroup_mode_show, + .fflags = RF_CTRL_BASE, + }, + { + .name = "size", + .mode = 0444, + .kf_ops = &rdtgroup_kf_single_ops, + .seq_show = rdtgroup_size_show, + .fflags = RF_CTRL_BASE, + }, + }; static int rdtgroup_add_files(struct kernfs_node *kn, unsigned long fflags) @@ -883,6 +1361,103 @@ error: return ret; } +/** + * rdtgroup_kn_mode_restrict - Restrict user access to named resctrl file + * @r: The resource group with which the file is associated. + * @name: Name of the file + * + * The permissions of named resctrl file, directory, or link are modified + * to not allow read, write, or execute by any user. + * + * WARNING: This function is intended to communicate to the user that the + * resctrl file has been locked down - that it is not relevant to the + * particular state the system finds itself in. It should not be relied + * on to protect from user access because after the file's permissions + * are restricted the user can still change the permissions using chmod + * from the command line. + * + * Return: 0 on success, <0 on failure. + */ +int rdtgroup_kn_mode_restrict(struct rdtgroup *r, const char *name) +{ + struct iattr iattr = {.ia_valid = ATTR_MODE,}; + struct kernfs_node *kn; + int ret = 0; + + kn = kernfs_find_and_get_ns(r->kn, name, NULL); + if (!kn) + return -ENOENT; + + switch (kernfs_type(kn)) { + case KERNFS_DIR: + iattr.ia_mode = S_IFDIR; + break; + case KERNFS_FILE: + iattr.ia_mode = S_IFREG; + break; + case KERNFS_LINK: + iattr.ia_mode = S_IFLNK; + break; + } + + ret = kernfs_setattr(kn, &iattr); + kernfs_put(kn); + return ret; +} + +/** + * rdtgroup_kn_mode_restore - Restore user access to named resctrl file + * @r: The resource group with which the file is associated. + * @name: Name of the file + * @mask: Mask of permissions that should be restored + * + * Restore the permissions of the named file. If @name is a directory the + * permissions of its parent will be used. + * + * Return: 0 on success, <0 on failure. 
+ */ +int rdtgroup_kn_mode_restore(struct rdtgroup *r, const char *name, + umode_t mask) +{ + struct iattr iattr = {.ia_valid = ATTR_MODE,}; + struct kernfs_node *kn, *parent; + struct rftype *rfts, *rft; + int ret, len; + + rfts = res_common_files; + len = ARRAY_SIZE(res_common_files); + + for (rft = rfts; rft < rfts + len; rft++) { + if (!strcmp(rft->name, name)) + iattr.ia_mode = rft->mode & mask; + } + + kn = kernfs_find_and_get_ns(r->kn, name, NULL); + if (!kn) + return -ENOENT; + + switch (kernfs_type(kn)) { + case KERNFS_DIR: + parent = kernfs_get_parent(kn); + if (parent) { + iattr.ia_mode |= parent->mode; + kernfs_put(parent); + } + iattr.ia_mode |= S_IFDIR; + break; + case KERNFS_FILE: + iattr.ia_mode |= S_IFREG; + break; + case KERNFS_LINK: + iattr.ia_mode |= S_IFLNK; + break; + } + + ret = kernfs_setattr(kn, &iattr); + kernfs_put(kn); + return ret; +} + static int rdtgroup_mkdir_info_resdir(struct rdt_resource *r, char *name, unsigned long fflags) { @@ -1224,6 +1799,9 @@ void rdtgroup_kn_unlock(struct kernfs_node *kn) if (atomic_dec_and_test(&rdtgrp->waitcount) && (rdtgrp->flags & RDT_DELETED)) { + if (rdtgrp->mode == RDT_MODE_PSEUDO_LOCKSETUP || + rdtgrp->mode == RDT_MODE_PSEUDO_LOCKED) + rdtgroup_pseudo_lock_remove(rdtgrp); kernfs_unbreak_active_protection(kn); kernfs_put(rdtgrp->kn); kfree(rdtgrp); @@ -1289,10 +1867,16 @@ static struct dentry *rdt_mount(struct file_system_type *fs_type, rdtgroup_default.mon.mon_data_kn = kn_mondata; } + ret = rdt_pseudo_lock_init(); + if (ret) { + dentry = ERR_PTR(ret); + goto out_mondata; + } + dentry = kernfs_mount(fs_type, flags, rdt_root, RDTGROUP_SUPER_MAGIC, NULL); if (IS_ERR(dentry)) - goto out_mondata; + goto out_psl; if (rdt_alloc_capable) static_branch_enable_cpuslocked(&rdt_alloc_enable_key); @@ -1310,6 +1894,8 @@ static struct dentry *rdt_mount(struct file_system_type *fs_type, goto out; +out_psl: + rdt_pseudo_lock_release(); out_mondata: if (rdt_mon_capable) kernfs_remove(kn_mondata); @@ -1447,6 +2033,10 @@ static void rmdir_all_sub(void) if (rdtgrp == &rdtgroup_default) continue; + if (rdtgrp->mode == RDT_MODE_PSEUDO_LOCKSETUP || + rdtgrp->mode == RDT_MODE_PSEUDO_LOCKED) + rdtgroup_pseudo_lock_remove(rdtgrp); + /* * Give any CPUs back to the default group. We cannot copy * cpu_online_mask because a CPU might have executed the @@ -1483,6 +2073,8 @@ static void rdt_kill_sb(struct super_block *sb) reset_all_ctrls(r); cdp_disable_all(); rmdir_all_sub(); + rdt_pseudo_lock_release(); + rdtgroup_default.mode = RDT_MODE_SHAREABLE; static_branch_disable_cpuslocked(&rdt_alloc_enable_key); static_branch_disable_cpuslocked(&rdt_mon_enable_key); static_branch_disable_cpuslocked(&rdt_enable_key); @@ -1682,6 +2274,114 @@ out_destroy: return ret; } +/** + * cbm_ensure_valid - Enforce validity on provided CBM + * @_val: Candidate CBM + * @r: RDT resource to which the CBM belongs + * + * The provided CBM represents all cache portions available for use. This + * may be represented by a bitmap that does not consist of contiguous ones + * and thus be an invalid CBM. + * Here the provided CBM is forced to be a valid CBM by only considering + * the first set of contiguous bits as valid and clearing all bits. + * The intention here is to provide a valid default CBM with which a new + * resource group is initialized. The user can follow this with a + * modification to the CBM if the default does not satisfy the + * requirements. 
+ */ +static void cbm_ensure_valid(u32 *_val, struct rdt_resource *r) +{ + /* + * Convert the u32 _val to an unsigned long required by all the bit + * operations within this function. No more than 32 bits of this + * converted value can be accessed because all bit operations are + * additionally provided with cbm_len that is initialized during + * hardware enumeration using five bits from the EAX register and + * thus never can exceed 32 bits. + */ + unsigned long *val = (unsigned long *)_val; + unsigned int cbm_len = r->cache.cbm_len; + unsigned long first_bit, zero_bit; + + if (*val == 0) + return; + + first_bit = find_first_bit(val, cbm_len); + zero_bit = find_next_zero_bit(val, cbm_len, first_bit); + + /* Clear any remaining bits to ensure contiguous region */ + bitmap_clear(val, zero_bit, cbm_len - zero_bit); +} + +/** + * rdtgroup_init_alloc - Initialize the new RDT group's allocations + * + * A new RDT group is being created on an allocation capable (CAT) + * supporting system. Set this group up to start off with all usable + * allocations. That is, all shareable and unused bits. + * + * All-zero CBM is invalid. If there are no more shareable bits available + * on any domain then the entire allocation will fail. + */ +static int rdtgroup_init_alloc(struct rdtgroup *rdtgrp) +{ + u32 used_b = 0, unused_b = 0; + u32 closid = rdtgrp->closid; + struct rdt_resource *r; + enum rdtgrp_mode mode; + struct rdt_domain *d; + int i, ret; + u32 *ctrl; + + for_each_alloc_enabled_rdt_resource(r) { + list_for_each_entry(d, &r->domains, list) { + d->have_new_ctrl = false; + d->new_ctrl = r->cache.shareable_bits; + used_b = r->cache.shareable_bits; + ctrl = d->ctrl_val; + for (i = 0; i < r->num_closid; i++, ctrl++) { + if (closid_allocated(i) && i != closid) { + mode = rdtgroup_mode_by_closid(i); + if (mode == RDT_MODE_PSEUDO_LOCKSETUP) + break; + used_b |= *ctrl; + if (mode == RDT_MODE_SHAREABLE) + d->new_ctrl |= *ctrl; + } + } + if (d->plr && d->plr->cbm > 0) + used_b |= d->plr->cbm; + unused_b = used_b ^ (BIT_MASK(r->cache.cbm_len) - 1); + unused_b &= BIT_MASK(r->cache.cbm_len) - 1; + d->new_ctrl |= unused_b; + /* + * Force the initial CBM to be valid, user can + * modify the CBM based on system availability. + */ + cbm_ensure_valid(&d->new_ctrl, r); + if (bitmap_weight((unsigned long *) &d->new_ctrl, + r->cache.cbm_len) < + r->cache.min_cbm_bits) { + rdt_last_cmd_printf("no space on %s:%d\n", + r->name, d->id); + return -ENOSPC; + } + d->have_new_ctrl = true; + } + } + + for_each_alloc_enabled_rdt_resource(r) { + ret = update_domains(r, rdtgrp->closid); + if (ret < 0) { + rdt_last_cmd_puts("failed to initialize allocations\n"); + return ret; + } + rdtgrp->mode = RDT_MODE_SHAREABLE; + } + + return 0; +} + static int mkdir_rdt_prepare(struct kernfs_node *parent_kn, struct kernfs_node *prgrp_kn, const char *name, umode_t mode, @@ -1700,6 +2400,14 @@ static int mkdir_rdt_prepare(struct kernfs_node *parent_kn, goto out_unlock; } + if (rtype == RDTMON_GROUP && + (prdtgrp->mode == RDT_MODE_PSEUDO_LOCKSETUP || + prdtgrp->mode == RDT_MODE_PSEUDO_LOCKED)) { + ret = -EINVAL; + rdt_last_cmd_puts("pseudo-locking in progress\n"); + goto out_unlock; + } + /* allocate the rdtgroup. 
*/ rdtgrp = kzalloc(sizeof(*rdtgrp), GFP_KERNEL); if (!rdtgrp) { @@ -1840,6 +2548,10 @@ static int rdtgroup_mkdir_ctrl_mon(struct kernfs_node *parent_kn, ret = 0; rdtgrp->closid = closid; + ret = rdtgroup_init_alloc(rdtgrp); + if (ret < 0) + goto out_id_free; + list_add(&rdtgrp->rdtgroup_list, &rdt_all_groups); if (rdt_mon_capable) { @@ -1850,15 +2562,16 @@ static int rdtgroup_mkdir_ctrl_mon(struct kernfs_node *parent_kn, ret = mongroup_create_dir(kn, NULL, "mon_groups", NULL); if (ret) { rdt_last_cmd_puts("kernfs subdir error\n"); - goto out_id_free; + goto out_del_list; } } goto out_unlock; +out_del_list: + list_del(&rdtgrp->rdtgroup_list); out_id_free: closid_free(closid); - list_del(&rdtgrp->rdtgroup_list); out_common_fail: mkdir_rdt_prepare_clean(rdtgrp); out_unlock: @@ -1945,6 +2658,21 @@ static int rdtgroup_rmdir_mon(struct kernfs_node *kn, struct rdtgroup *rdtgrp, return 0; } +static int rdtgroup_ctrl_remove(struct kernfs_node *kn, + struct rdtgroup *rdtgrp) +{ + rdtgrp->flags = RDT_DELETED; + list_del(&rdtgrp->rdtgroup_list); + + /* + * one extra hold on this, will drop when we kfree(rdtgrp) + * in rdtgroup_kn_unlock() + */ + kernfs_get(kn); + kernfs_remove(rdtgrp->kn); + return 0; +} + static int rdtgroup_rmdir_ctrl(struct kernfs_node *kn, struct rdtgroup *rdtgrp, cpumask_var_t tmpmask) { @@ -1970,7 +2698,6 @@ static int rdtgroup_rmdir_ctrl(struct kernfs_node *kn, struct rdtgroup *rdtgrp, cpumask_or(tmpmask, tmpmask, &rdtgrp->cpu_mask); update_closid_rmid(tmpmask, NULL); - rdtgrp->flags = RDT_DELETED; closid_free(rdtgrp->closid); free_rmid(rdtgrp->mon.rmid); @@ -1979,14 +2706,7 @@ static int rdtgroup_rmdir_ctrl(struct kernfs_node *kn, struct rdtgroup *rdtgrp, */ free_all_child_rdtgrp(rdtgrp); - list_del(&rdtgrp->rdtgroup_list); - - /* - * one extra hold on this, will drop when we kfree(rdtgrp) - * in rdtgroup_kn_unlock() - */ - kernfs_get(kn); - kernfs_remove(rdtgrp->kn); + rdtgroup_ctrl_remove(kn, rdtgrp); return 0; } @@ -2014,13 +2734,19 @@ static int rdtgroup_rmdir(struct kernfs_node *kn) * If the rdtgroup is a mon group and parent directory * is a valid "mon_groups" directory, remove the mon group. */ - if (rdtgrp->type == RDTCTRL_GROUP && parent_kn == rdtgroup_default.kn) - ret = rdtgroup_rmdir_ctrl(kn, rdtgrp, tmpmask); - else if (rdtgrp->type == RDTMON_GROUP && - is_mon_groups(parent_kn, kn->name)) + if (rdtgrp->type == RDTCTRL_GROUP && parent_kn == rdtgroup_default.kn) { + if (rdtgrp->mode == RDT_MODE_PSEUDO_LOCKSETUP || + rdtgrp->mode == RDT_MODE_PSEUDO_LOCKED) { + ret = rdtgroup_ctrl_remove(kn, rdtgrp); + } else { + ret = rdtgroup_rmdir_ctrl(kn, rdtgrp, tmpmask); + } + } else if (rdtgrp->type == RDTMON_GROUP && + is_mon_groups(parent_kn, kn->name)) { ret = rdtgroup_rmdir_mon(kn, rdtgrp, tmpmask); - else + } else { ret = -EPERM; + } out: rdtgroup_kn_unlock(kn); @@ -2046,7 +2772,8 @@ static int __init rdtgroup_setup_root(void) int ret; rdt_root = kernfs_create_root(&rdtgroup_kf_syscall_ops, - KERNFS_ROOT_CREATE_DEACTIVATED, + KERNFS_ROOT_CREATE_DEACTIVATED | + KERNFS_ROOT_EXTRA_OPEN_PERM_CHECK, &rdtgroup_default); if (IS_ERR(rdt_root)) return PTR_ERR(rdt_root); @@ -2102,6 +2829,29 @@ int __init rdtgroup_init(void) if (ret) goto cleanup_mountpoint; + /* + * Adding the resctrl debugfs directory here may not be ideal since + * it would let the resctrl debugfs directory appear on the debugfs + * filesystem before the resctrl filesystem is mounted. + * It may also be ok since that would enable debugging of RDT before + * resctrl is mounted. 
+ * The reason why the debugfs directory is created here and not in + * rdt_mount() is because rdt_mount() takes rdtgroup_mutex and + * during the debugfs directory creation also &sb->s_type->i_mutex_key + * (the lockdep class of inode->i_rwsem). Other filesystem + * interactions (eg. SyS_getdents) have the lock ordering: + * &sb->s_type->i_mutex_key --> &mm->mmap_sem + * During mmap(), called with &mm->mmap_sem, the rdtgroup_mutex + * is taken, thus creating dependency: + * &mm->mmap_sem --> rdtgroup_mutex for the latter that can cause + * issues considering the other two lock dependencies. + * By creating the debugfs directory here we avoid a dependency + * that may cause deadlock (even though file operations cannot + * occur until the filesystem is mounted, but I do not know how to + * tell lockdep that). + */ + debugfs_resctrl = debugfs_create_dir("resctrl", NULL); + return 0; cleanup_mountpoint: @@ -2111,3 +2861,11 @@ cleanup_root: return ret; } + +void __exit rdtgroup_exit(void) +{ + debugfs_remove_recursive(debugfs_resctrl); + unregister_filesystem(&rdt_fs_type); + sysfs_remove_mount_point(fs_kobj, "resctrl"); + kernfs_destroy_root(rdt_root); +} diff --git a/arch/x86/kernel/cpu/mcheck/mce-severity.c b/arch/x86/kernel/cpu/mcheck/mce-severity.c index 5bbd06f38ff6..f34d89c01edc 100644 --- a/arch/x86/kernel/cpu/mcheck/mce-severity.c +++ b/arch/x86/kernel/cpu/mcheck/mce-severity.c @@ -160,6 +160,11 @@ static struct severity { SER, MASK(MCI_STATUS_OVER|MCI_UC_SAR|MCI_ADDR|MCACOD, MCI_UC_SAR|MCI_ADDR|MCACOD_INSTR), USER ), + MCESEV( + PANIC, "Data load in unrecoverable area of kernel", + SER, MASK(MCI_STATUS_OVER|MCI_UC_SAR|MCI_ADDR|MCACOD, MCI_UC_SAR|MCI_ADDR|MCACOD_DATA), + KERNEL + ), #endif MCESEV( PANIC, "Action required: unknown MCACOD", diff --git a/arch/x86/kernel/cpu/mcheck/mce.c b/arch/x86/kernel/cpu/mcheck/mce.c index e4cf6ff1c2e1..4b767284b7f5 100644 --- a/arch/x86/kernel/cpu/mcheck/mce.c +++ b/arch/x86/kernel/cpu/mcheck/mce.c @@ -123,8 +123,8 @@ void mce_setup(struct mce *m) { memset(m, 0, sizeof(struct mce)); m->cpu = m->extcpu = smp_processor_id(); - /* We hope get_seconds stays lockless */ - m->time = get_seconds(); + /* need the internal __ version to avoid deadlocks */ + m->time = __ktime_get_real_seconds(); m->cpuvendor = boot_cpu_data.x86_vendor; m->cpuid = cpuid_eax(1); m->socketid = cpu_data(m->extcpu).phys_proc_id; @@ -772,23 +772,25 @@ EXPORT_SYMBOL_GPL(machine_check_poll); static int mce_no_way_out(struct mce *m, char **msg, unsigned long *validp, struct pt_regs *regs) { - int i, ret = 0; char *tmp; + int i; for (i = 0; i < mca_cfg.banks; i++) { m->status = mce_rdmsrl(msr_ops.status(i)); - if (m->status & MCI_STATUS_VAL) { - __set_bit(i, validp); - if (quirk_no_way_out) - quirk_no_way_out(i, m, regs); - } + if (!(m->status & MCI_STATUS_VAL)) + continue; + + __set_bit(i, validp); + if (quirk_no_way_out) + quirk_no_way_out(i, m, regs); if (mce_severity(m, mca_cfg.tolerant, &tmp, true) >= MCE_PANIC_SEVERITY) { + mce_read_aux(m, i); *msg = tmp; - ret = 1; + return 1; } } - return ret; + return 0; } /* @@ -1102,6 +1104,101 @@ static void mce_unmap_kpfn(unsigned long pfn) } #endif + +/* + * Cases where we avoid rendezvous handler timeout: + * 1) If this CPU is offline. + * + * 2) If crashing_cpu was set, e.g. we're entering kdump and we need to + * skip those CPUs which remain looping in the 1st kernel - see + * crash_nmi_callback(). 
+ * + * Note: there still is a small window between kexec-ing and the new, + * kdump kernel establishing a new #MC handler where a broadcasted MCE + * might not get handled properly. + */ +static bool __mc_check_crashing_cpu(int cpu) +{ + if (cpu_is_offline(cpu) || + (crashing_cpu != -1 && crashing_cpu != cpu)) { + u64 mcgstatus; + + mcgstatus = mce_rdmsrl(MSR_IA32_MCG_STATUS); + if (mcgstatus & MCG_STATUS_RIPV) { + mce_wrmsrl(MSR_IA32_MCG_STATUS, 0); + return true; + } + } + return false; +} + +static void __mc_scan_banks(struct mce *m, struct mce *final, + unsigned long *toclear, unsigned long *valid_banks, + int no_way_out, int *worst) +{ + struct mca_config *cfg = &mca_cfg; + int severity, i; + + for (i = 0; i < cfg->banks; i++) { + __clear_bit(i, toclear); + if (!test_bit(i, valid_banks)) + continue; + + if (!mce_banks[i].ctl) + continue; + + m->misc = 0; + m->addr = 0; + m->bank = i; + + m->status = mce_rdmsrl(msr_ops.status(i)); + if (!(m->status & MCI_STATUS_VAL)) + continue; + + /* + * Corrected or non-signaled errors are handled by + * machine_check_poll(). Leave them alone, unless this panics. + */ + if (!(m->status & (cfg->ser ? MCI_STATUS_S : MCI_STATUS_UC)) && + !no_way_out) + continue; + + /* Set taint even when machine check was not enabled. */ + add_taint(TAINT_MACHINE_CHECK, LOCKDEP_NOW_UNRELIABLE); + + severity = mce_severity(m, cfg->tolerant, NULL, true); + + /* + * When machine check was for corrected/deferred handler don't + * touch, unless we're panicking. + */ + if ((severity == MCE_KEEP_SEVERITY || + severity == MCE_UCNA_SEVERITY) && !no_way_out) + continue; + + __set_bit(i, toclear); + + /* Machine check event was not enabled. Clear, but ignore. */ + if (severity == MCE_NO_SEVERITY) + continue; + + mce_read_aux(m, i); + + /* assuming valid severity level != 0 */ + m->severity = severity; + + mce_log(m); + + if (severity > *worst) { + *final = *m; + *worst = severity; + } + } + + /* mce_clear_state will clear *final, save locally for use later */ + *m = *final; +} + /* * The actual machine check handler. This only handles real * exceptions when something got corrupted coming in through int 18. @@ -1116,68 +1213,45 @@ static void mce_unmap_kpfn(unsigned long pfn) */ void do_machine_check(struct pt_regs *regs, long error_code) { + DECLARE_BITMAP(valid_banks, MAX_NR_BANKS); + DECLARE_BITMAP(toclear, MAX_NR_BANKS); struct mca_config *cfg = &mca_cfg; + int cpu = smp_processor_id(); + char *msg = "Unknown"; struct mce m, *final; - int i; int worst = 0; - int severity; /* * Establish sequential order between the CPUs entering the machine * check handler. */ int order = -1; + /* * If no_way_out gets set, there is no safe way to recover from this * MCE. If mca_cfg.tolerant is cranked up, we'll try anyway. */ int no_way_out = 0; + /* * If kill_it gets set, there might be a way to recover from this * error. */ int kill_it = 0; - DECLARE_BITMAP(toclear, MAX_NR_BANKS); - DECLARE_BITMAP(valid_banks, MAX_NR_BANKS); - char *msg = "Unknown"; /* * MCEs are always local on AMD. Same is determined by MCG_STATUS_LMCES * on Intel. */ int lmce = 1; - int cpu = smp_processor_id(); - - /* - * Cases where we avoid rendezvous handler timeout: - * 1) If this CPU is offline. - * - * 2) If crashing_cpu was set, e.g. we're entering kdump and we need to - * skip those CPUs which remain looping in the 1st kernel - see - * crash_nmi_callback(). 
- * - * Note: there still is a small window between kexec-ing and the new, - * kdump kernel establishing a new #MC handler where a broadcasted MCE - * might not get handled properly. - */ - if (cpu_is_offline(cpu) || - (crashing_cpu != -1 && crashing_cpu != cpu)) { - u64 mcgstatus; - mcgstatus = mce_rdmsrl(MSR_IA32_MCG_STATUS); - if (mcgstatus & MCG_STATUS_RIPV) { - mce_wrmsrl(MSR_IA32_MCG_STATUS, 0); - return; - } - } + if (__mc_check_crashing_cpu(cpu)) + return; ist_enter(regs); this_cpu_inc(mce_exception_count); - if (!cfg->banks) - goto out; - mce_gather_info(&m, regs); m.tsc = rdtsc(); @@ -1205,75 +1279,20 @@ void do_machine_check(struct pt_regs *regs, long error_code) lmce = m.mcgstatus & MCG_STATUS_LMCES; /* + * Local machine check may already know that we have to panic. + * Broadcast machine check begins rendezvous in mce_start() * Go through all banks in exclusion of the other CPUs. This way we * don't report duplicated events on shared banks because the first one - * to see it will clear it. If this is a Local MCE, then no need to - * perform rendezvous. + * to see it will clear it. */ - if (!lmce) + if (lmce) { + if (no_way_out) + mce_panic("Fatal local machine check", &m, msg); + } else { order = mce_start(&no_way_out); - - for (i = 0; i < cfg->banks; i++) { - __clear_bit(i, toclear); - if (!test_bit(i, valid_banks)) - continue; - if (!mce_banks[i].ctl) - continue; - - m.misc = 0; - m.addr = 0; - m.bank = i; - - m.status = mce_rdmsrl(msr_ops.status(i)); - if ((m.status & MCI_STATUS_VAL) == 0) - continue; - - /* - * Non uncorrected or non signaled errors are handled by - * machine_check_poll. Leave them alone, unless this panics. - */ - if (!(m.status & (cfg->ser ? MCI_STATUS_S : MCI_STATUS_UC)) && - !no_way_out) - continue; - - /* - * Set taint even when machine check was not enabled. - */ - add_taint(TAINT_MACHINE_CHECK, LOCKDEP_NOW_UNRELIABLE); - - severity = mce_severity(&m, cfg->tolerant, NULL, true); - - /* - * When machine check was for corrected/deferred handler don't - * touch, unless we're panicing. - */ - if ((severity == MCE_KEEP_SEVERITY || - severity == MCE_UCNA_SEVERITY) && !no_way_out) - continue; - __set_bit(i, toclear); - if (severity == MCE_NO_SEVERITY) { - /* - * Machine check event was not enabled. Clear, but - * ignore. - */ - continue; - } - - mce_read_aux(&m, i); - - /* assuming valid severity level != 0 */ - m.severity = severity; - - mce_log(&m); - - if (severity > worst) { - *final = m; - worst = severity; - } } - /* mce_clear_state will clear *final, save locally for use later */ - m = *final; + __mc_scan_banks(&m, final, toclear, valid_banks, no_way_out, &worst); if (!no_way_out) mce_clear_state(toclear); @@ -1287,12 +1306,17 @@ void do_machine_check(struct pt_regs *regs, long error_code) no_way_out = worst >= MCE_PANIC_SEVERITY; } else { /* - * Local MCE skipped calling mce_reign() - * If we found a fatal error, we need to panic here. + * If there was a fatal machine check we should have + * already called mce_panic earlier in this function. + * Since we re-read the banks, we might have found + * something new. Check again to see if we found a + * fatal error. We call "mce_severity()" again to + * make sure we have the right "msg". 
*/ - if (worst >= MCE_PANIC_SEVERITY && mca_cfg.tolerant < 3) - mce_panic("Machine check from unknown source", - NULL, NULL); + if (worst >= MCE_PANIC_SEVERITY && mca_cfg.tolerant < 3) { + mce_severity(&m, cfg->tolerant, &msg, true); + mce_panic("Local fatal machine check!", &m, msg); + } } /* @@ -1307,7 +1331,7 @@ void do_machine_check(struct pt_regs *regs, long error_code) if (worst > 0) mce_report_event(regs); mce_wrmsrl(MSR_IA32_MCG_STATUS, 0); -out: + sync_core(); if (worst != MCE_AR_SEVERITY && !kill_it) @@ -2153,9 +2177,6 @@ static ssize_t store_int_with_restart(struct device *s, if (check_interval == old_check_interval) return ret; - if (check_interval < 1) - check_interval = 1; - mutex_lock(&mce_sysfs_mutex); mce_restart(); mutex_unlock(&mce_sysfs_mutex); diff --git a/arch/x86/kernel/cpu/microcode/intel.c b/arch/x86/kernel/cpu/microcode/intel.c index 1c2cfa0644aa..97ccf4c3b45b 100644 --- a/arch/x86/kernel/cpu/microcode/intel.c +++ b/arch/x86/kernel/cpu/microcode/intel.c @@ -190,8 +190,11 @@ static void save_microcode_patch(void *data, unsigned int size) p = memdup_patch(data, size); if (!p) pr_err("Error allocating buffer %p\n", data); - else + else { list_replace(&iter->plist, &p->plist); + kfree(iter->data); + kfree(iter); + } } } diff --git a/arch/x86/kernel/cpu/mtrr/if.c b/arch/x86/kernel/cpu/mtrr/if.c index 4021d3859499..40eee6cc4124 100644 --- a/arch/x86/kernel/cpu/mtrr/if.c +++ b/arch/x86/kernel/cpu/mtrr/if.c @@ -106,7 +106,8 @@ mtrr_write(struct file *file, const char __user *buf, size_t len, loff_t * ppos) memset(line, 0, LINE_SIZE); - length = strncpy_from_user(line, buf, LINE_SIZE - 1); + len = min_t(size_t, len, LINE_SIZE - 1); + length = strncpy_from_user(line, buf, len); if (length < 0) return length; diff --git a/arch/x86/kernel/dumpstack.c b/arch/x86/kernel/dumpstack.c index 666a284116ac..9c8652974f8e 100644 --- a/arch/x86/kernel/dumpstack.c +++ b/arch/x86/kernel/dumpstack.c @@ -22,8 +22,6 @@ #include #include -#define OPCODE_BUFSIZE 64 - int panic_on_unrecovered_nmi; int panic_on_io_nmi; static int die_counter; @@ -93,26 +91,18 @@ static void printk_stack_address(unsigned long address, int reliable, */ void show_opcodes(u8 *rip, const char *loglvl) { - unsigned int code_prologue = OPCODE_BUFSIZE * 2 / 3; +#define PROLOGUE_SIZE 42 +#define EPILOGUE_SIZE 21 +#define OPCODE_BUFSIZE (PROLOGUE_SIZE + 1 + EPILOGUE_SIZE) u8 opcodes[OPCODE_BUFSIZE]; - u8 *ip; - int i; - - printk("%sCode: ", loglvl); - - ip = (u8 *)rip - code_prologue; - if (probe_kernel_read(opcodes, ip, OPCODE_BUFSIZE)) { - pr_cont("Bad RIP value.\n"); - return; - } - for (i = 0; i < OPCODE_BUFSIZE; i++, ip++) { - if (ip == rip) - pr_cont("<%02x> ", opcodes[i]); - else - pr_cont("%02x ", opcodes[i]); + if (probe_kernel_read(opcodes, rip - PROLOGUE_SIZE, OPCODE_BUFSIZE)) { + printk("%sCode: Bad RIP value.\n", loglvl); + } else { + printk("%sCode: %" __stringify(PROLOGUE_SIZE) "ph <%02x> %" + __stringify(EPILOGUE_SIZE) "ph\n", loglvl, opcodes, + opcodes[PROLOGUE_SIZE], opcodes + PROLOGUE_SIZE + 1); } - pr_cont("\n"); } void show_ip(struct pt_regs *regs, const char *loglvl) diff --git a/arch/x86/kernel/e820.c b/arch/x86/kernel/e820.c index d1f25c831447..c88c23c658c1 100644 --- a/arch/x86/kernel/e820.c +++ b/arch/x86/kernel/e820.c @@ -1248,6 +1248,7 @@ void __init e820__memblock_setup(void) { int i; u64 end; + u64 addr = 0; /* * The bootstrap memblock region count maximum is 128 entries @@ -1264,13 +1265,21 @@ void __init e820__memblock_setup(void) struct e820_entry *entry = &e820_table->entries[i]; end = 
entry->addr + entry->size; + if (addr < entry->addr) + memblock_reserve(addr, entry->addr - addr); + addr = end; if (end != (resource_size_t)end) continue; + /* + * all !E820_TYPE_RAM ranges (including gap ranges) are put + * into memblock.reserved to make sure that struct pages in + * such regions are not left uninitialized after bootup. + */ if (entry->type != E820_TYPE_RAM && entry->type != E820_TYPE_RESERVED_KERN) - continue; - - memblock_add(entry->addr, entry->size); + memblock_reserve(entry->addr, entry->size); + else + memblock_add(entry->addr, entry->size); } /* Throw away partial pages: */ diff --git a/arch/x86/kernel/head64.c b/arch/x86/kernel/head64.c index a21d6ace648e..8047379e575a 100644 --- a/arch/x86/kernel/head64.c +++ b/arch/x86/kernel/head64.c @@ -44,7 +44,7 @@ static unsigned int __initdata next_early_pgt; pmdval_t early_pmd_flags = __PAGE_KERNEL_LARGE & ~(_PAGE_GLOBAL | _PAGE_NX); #ifdef CONFIG_X86_5LEVEL -unsigned int __pgtable_l5_enabled __initdata; +unsigned int __pgtable_l5_enabled __ro_after_init; unsigned int pgdir_shift __ro_after_init = 39; EXPORT_SYMBOL(pgdir_shift); unsigned int ptrs_per_p4d __ro_after_init = 1; diff --git a/arch/x86/kernel/head_32.S b/arch/x86/kernel/head_32.S index abe6df15a8fb..30f9cb2c0b55 100644 --- a/arch/x86/kernel/head_32.S +++ b/arch/x86/kernel/head_32.S @@ -512,11 +512,18 @@ ENTRY(initial_code) ENTRY(setup_once_ref) .long setup_once +#ifdef CONFIG_PAGE_TABLE_ISOLATION +#define PGD_ALIGN (2 * PAGE_SIZE) +#define PTI_USER_PGD_FILL 1024 +#else +#define PGD_ALIGN (PAGE_SIZE) +#define PTI_USER_PGD_FILL 0 +#endif /* * BSS section */ __PAGE_ALIGNED_BSS - .align PAGE_SIZE + .align PGD_ALIGN #ifdef CONFIG_X86_PAE .globl initial_pg_pmd initial_pg_pmd: @@ -526,14 +533,17 @@ initial_pg_pmd: initial_page_table: .fill 1024,4,0 #endif + .align PGD_ALIGN initial_pg_fixmap: .fill 1024,4,0 -.globl empty_zero_page -empty_zero_page: - .fill 4096,1,0 .globl swapper_pg_dir + .align PGD_ALIGN swapper_pg_dir: .fill 1024,4,0 + .fill PTI_USER_PGD_FILL,4,0 +.globl empty_zero_page +empty_zero_page: + .fill 4096,1,0 EXPORT_SYMBOL(empty_zero_page) /* @@ -542,7 +552,7 @@ EXPORT_SYMBOL(empty_zero_page) #ifdef CONFIG_X86_PAE __PAGE_ALIGNED_DATA /* Page-aligned for the benefit of paravirt? */ - .align PAGE_SIZE + .align PGD_ALIGN ENTRY(initial_page_table) .long pa(initial_pg_pmd+PGD_IDENT_ATTR),0 /* low identity map */ # if KPMDS == 3 diff --git a/arch/x86/kernel/head_64.S b/arch/x86/kernel/head_64.S index 8344dd2f310a..15ebc2fc166e 100644 --- a/arch/x86/kernel/head_64.S +++ b/arch/x86/kernel/head_64.S @@ -235,7 +235,7 @@ ENTRY(secondary_startup_64) * address given in m16:64. */ pushq $.Lafter_lret # put return address on stack for unwinder - xorq %rbp, %rbp # clear frame pointer + xorl %ebp, %ebp # clear frame pointer movq initial_code(%rip), %rax pushq $__KERNEL_CS # set correct cs pushq %rax # target address in negative space diff --git a/arch/x86/kernel/hw_breakpoint.c b/arch/x86/kernel/hw_breakpoint.c index 8771766d46b6..34a5c1715148 100644 --- a/arch/x86/kernel/hw_breakpoint.c +++ b/arch/x86/kernel/hw_breakpoint.c @@ -169,28 +169,29 @@ void arch_uninstall_hw_breakpoint(struct perf_event *bp) set_dr_addr_mask(0, i); } -/* - * Check for virtual address in kernel space. 
- */ -int arch_check_bp_in_kernelspace(struct perf_event *bp) +static int arch_bp_generic_len(int x86_len) { - unsigned int len; - unsigned long va; - struct arch_hw_breakpoint *info = counter_arch_bp(bp); - - va = info->address; - len = bp->attr.bp_len; - - /* - * We don't need to worry about va + len - 1 overflowing: - * we already require that va is aligned to a multiple of len. - */ - return (va >= TASK_SIZE_MAX) || ((va + len - 1) >= TASK_SIZE_MAX); + switch (x86_len) { + case X86_BREAKPOINT_LEN_1: + return HW_BREAKPOINT_LEN_1; + case X86_BREAKPOINT_LEN_2: + return HW_BREAKPOINT_LEN_2; + case X86_BREAKPOINT_LEN_4: + return HW_BREAKPOINT_LEN_4; +#ifdef CONFIG_X86_64 + case X86_BREAKPOINT_LEN_8: + return HW_BREAKPOINT_LEN_8; +#endif + default: + return -EINVAL; + } } int arch_bp_generic_fields(int x86_len, int x86_type, int *gen_len, int *gen_type) { + int len; + /* Type */ switch (x86_type) { case X86_BREAKPOINT_EXECUTE: @@ -211,42 +212,47 @@ int arch_bp_generic_fields(int x86_len, int x86_type, } /* Len */ - switch (x86_len) { - case X86_BREAKPOINT_LEN_1: - *gen_len = HW_BREAKPOINT_LEN_1; - break; - case X86_BREAKPOINT_LEN_2: - *gen_len = HW_BREAKPOINT_LEN_2; - break; - case X86_BREAKPOINT_LEN_4: - *gen_len = HW_BREAKPOINT_LEN_4; - break; -#ifdef CONFIG_X86_64 - case X86_BREAKPOINT_LEN_8: - *gen_len = HW_BREAKPOINT_LEN_8; - break; -#endif - default: + len = arch_bp_generic_len(x86_len); + if (len < 0) return -EINVAL; - } + *gen_len = len; return 0; } - -static int arch_build_bp_info(struct perf_event *bp) +/* + * Check for virtual address in kernel space. + */ +int arch_check_bp_in_kernelspace(struct arch_hw_breakpoint *hw) { - struct arch_hw_breakpoint *info = counter_arch_bp(bp); + unsigned long va; + int len; - info->address = bp->attr.bp_addr; + va = hw->address; + len = arch_bp_generic_len(hw->len); + WARN_ON_ONCE(len < 0); + + /* + * We don't need to worry about va + len - 1 overflowing: + * we already require that va is aligned to a multiple of len. + */ + return (va >= TASK_SIZE_MAX) || ((va + len - 1) >= TASK_SIZE_MAX); +} + +static int arch_build_bp_info(struct perf_event *bp, + const struct perf_event_attr *attr, + struct arch_hw_breakpoint *hw) +{ + hw->address = attr->bp_addr; + hw->mask = 0; /* Type */ - switch (bp->attr.bp_type) { + switch (attr->bp_type) { case HW_BREAKPOINT_W: - info->type = X86_BREAKPOINT_WRITE; + hw->type = X86_BREAKPOINT_WRITE; break; case HW_BREAKPOINT_W | HW_BREAKPOINT_R: - info->type = X86_BREAKPOINT_RW; + hw->type = X86_BREAKPOINT_RW; break; case HW_BREAKPOINT_X: /* @@ -254,23 +260,23 @@ static int arch_build_bp_info(struct perf_event *bp) * acceptable for kprobes. On non-kprobes kernels, we don't * allow kernel breakpoints at all. */ - if (bp->attr.bp_addr >= TASK_SIZE_MAX) { + if (attr->bp_addr >= TASK_SIZE_MAX) { #ifdef CONFIG_KPROBES - if (within_kprobe_blacklist(bp->attr.bp_addr)) + if (within_kprobe_blacklist(attr->bp_addr)) return -EINVAL; #else return -EINVAL; #endif } - info->type = X86_BREAKPOINT_EXECUTE; + hw->type = X86_BREAKPOINT_EXECUTE; /* * x86 inst breakpoints need to have a specific undefined len. * But we still need to check userspace is not trying to setup * an unsupported length, to get a range breakpoint for example. 
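The reworked arch_check_bp_in_kernelspace() above reduces to a single range test. A stand-alone sketch of that predicate, with an assumed x86-64 TASK_SIZE_MAX value and hypothetical demo_* naming:

#include <linux/types.h>

#define DEMO_TASK_SIZE_MAX 0x00007ffffffff000UL /* assumption: x86-64 limit */

static bool demo_bp_in_kernelspace(unsigned long va, unsigned int len)
{
        /* va is aligned to a multiple of len, so va + len - 1
         * cannot wrap around the address space. */
        return va >= DEMO_TASK_SIZE_MAX ||
               (va + len - 1) >= DEMO_TASK_SIZE_MAX;
}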
*/ - if (bp->attr.bp_len == sizeof(long)) { - info->len = X86_BREAKPOINT_LEN_X; + if (attr->bp_len == sizeof(long)) { + hw->len = X86_BREAKPOINT_LEN_X; return 0; } default: @@ -278,28 +284,26 @@ static int arch_build_bp_info(struct perf_event *bp) } /* Len */ - info->mask = 0; - - switch (bp->attr.bp_len) { + switch (attr->bp_len) { case HW_BREAKPOINT_LEN_1: - info->len = X86_BREAKPOINT_LEN_1; + hw->len = X86_BREAKPOINT_LEN_1; break; case HW_BREAKPOINT_LEN_2: - info->len = X86_BREAKPOINT_LEN_2; + hw->len = X86_BREAKPOINT_LEN_2; break; case HW_BREAKPOINT_LEN_4: - info->len = X86_BREAKPOINT_LEN_4; + hw->len = X86_BREAKPOINT_LEN_4; break; #ifdef CONFIG_X86_64 case HW_BREAKPOINT_LEN_8: - info->len = X86_BREAKPOINT_LEN_8; + hw->len = X86_BREAKPOINT_LEN_8; break; #endif default: /* AMD range breakpoint */ - if (!is_power_of_2(bp->attr.bp_len)) + if (!is_power_of_2(attr->bp_len)) return -EINVAL; - if (bp->attr.bp_addr & (bp->attr.bp_len - 1)) + if (attr->bp_addr & (attr->bp_len - 1)) return -EINVAL; if (!boot_cpu_has(X86_FEATURE_BPEXT)) @@ -312,8 +316,8 @@ static int arch_build_bp_info(struct perf_event *bp) * breakpoints, then we'll have to check for kprobe-blacklisted * addresses anywhere in the range. */ - info->mask = bp->attr.bp_len - 1; - info->len = X86_BREAKPOINT_LEN_1; + hw->mask = attr->bp_len - 1; + hw->len = X86_BREAKPOINT_LEN_1; } return 0; @@ -322,22 +326,23 @@ static int arch_build_bp_info(struct perf_event *bp) /* * Validate the arch-specific HW Breakpoint register settings */ -int arch_validate_hwbkpt_settings(struct perf_event *bp) +int hw_breakpoint_arch_parse(struct perf_event *bp, + const struct perf_event_attr *attr, + struct arch_hw_breakpoint *hw) { - struct arch_hw_breakpoint *info = counter_arch_bp(bp); unsigned int align; int ret; - ret = arch_build_bp_info(bp); + ret = arch_build_bp_info(bp, attr, hw); if (ret) return ret; - switch (info->len) { + switch (hw->len) { case X86_BREAKPOINT_LEN_1: align = 0; - if (info->mask) - align = info->mask; + if (hw->mask) + align = hw->mask; break; case X86_BREAKPOINT_LEN_2: align = 1; @@ -358,7 +363,7 @@ int arch_validate_hwbkpt_settings(struct perf_event *bp) * Check that the low-order bits of the address are appropriate * for the alignment implied by len. 
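The alignment check that follows treats len - 1 as a mask over the low address bits. In isolation (hypothetical demo_* name):

#include <linux/types.h>

static bool demo_bp_aligned(unsigned long addr, unsigned int len)
{
        unsigned long align = len - 1;  /* len 4 -> mask 0x3, etc. */

        return (addr & align) == 0;
}

A 4-byte breakpoint at 0x1002 fails the test (0x1002 & 0x3 == 2), while one at 0x1004 passes.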
*/ - if (info->address & align) + if (hw->address & align) return -EINVAL; return 0; diff --git a/arch/x86/kernel/irqflags.S b/arch/x86/kernel/irqflags.S new file mode 100644 index 000000000000..ddeeaac8adda --- /dev/null +++ b/arch/x86/kernel/irqflags.S @@ -0,0 +1,26 @@ +/* SPDX-License-Identifier: GPL-2.0 */ + +#include +#include +#include + +/* + * unsigned long native_save_fl(void) + */ +ENTRY(native_save_fl) + pushf + pop %_ASM_AX + ret +ENDPROC(native_save_fl) +EXPORT_SYMBOL(native_save_fl) + +/* + * void native_restore_fl(unsigned long flags) + * %eax/%rdi: flags + */ +ENTRY(native_restore_fl) + push %_ASM_ARG1 + popf + ret +ENDPROC(native_restore_fl) +EXPORT_SYMBOL(native_restore_fl) diff --git a/arch/x86/kernel/jump_label.c b/arch/x86/kernel/jump_label.c index e56c95be2808..eeea935e9bb5 100644 --- a/arch/x86/kernel/jump_label.c +++ b/arch/x86/kernel/jump_label.c @@ -37,15 +37,18 @@ static void bug_at(unsigned char *ip, int line) BUG(); } -static void __jump_label_transform(struct jump_entry *entry, - enum jump_label_type type, - void *(*poker)(void *, const void *, size_t), - int init) +static void __ref __jump_label_transform(struct jump_entry *entry, + enum jump_label_type type, + void *(*poker)(void *, const void *, size_t), + int init) { union jump_code_union code; const unsigned char default_nop[] = { STATIC_KEY_INIT_NOP }; const unsigned char *ideal_nop = ideal_nops[NOP_ATOMIC5]; + if (early_boot_irqs_disabled) + poker = text_poke_early; + if (type == JUMP_LABEL_JMP) { if (init) { /* diff --git a/arch/x86/kernel/kprobes/common.h b/arch/x86/kernel/kprobes/common.h index ae38dccf0c8f..2b949f4fd4d8 100644 --- a/arch/x86/kernel/kprobes/common.h +++ b/arch/x86/kernel/kprobes/common.h @@ -105,14 +105,4 @@ static inline unsigned long __recover_optprobed_insn(kprobe_opcode_t *buf, unsig } #endif -#ifdef CONFIG_KPROBES_ON_FTRACE -extern int skip_singlestep(struct kprobe *p, struct pt_regs *regs, - struct kprobe_ctlblk *kcb); -#else -static inline int skip_singlestep(struct kprobe *p, struct pt_regs *regs, - struct kprobe_ctlblk *kcb) -{ - return 0; -} -#endif #endif diff --git a/arch/x86/kernel/kprobes/core.c b/arch/x86/kernel/kprobes/core.c index 6f4d42377fe5..b0d1e81c96bb 100644 --- a/arch/x86/kernel/kprobes/core.c +++ b/arch/x86/kernel/kprobes/core.c @@ -66,8 +66,6 @@ #include "common.h" -void jprobe_return_end(void); - DEFINE_PER_CPU(struct kprobe *, current_kprobe) = NULL; DEFINE_PER_CPU(struct kprobe_ctlblk, kprobe_ctlblk); @@ -395,8 +393,6 @@ int __copy_instruction(u8 *dest, u8 *src, u8 *real, struct insn *insn) - (u8 *) real; if ((s64) (s32) newdisp != newdisp) { pr_err("Kprobes error: new displacement does not fit into s32 (%llx)\n", newdisp); - pr_err("\tSrc: %p, Dest: %p, old disp: %x\n", - src, real, insn->displacement.value); return 0; } disp = (u8 *) dest + insn_offset_displacement(insn); @@ -596,7 +592,6 @@ static void setup_singlestep(struct kprobe *p, struct pt_regs *regs, * stepping. */ regs->ip = (unsigned long)p->ainsn.insn; - preempt_enable_no_resched(); return; } #endif @@ -640,8 +635,7 @@ static int reenter_kprobe(struct kprobe *p, struct pt_regs *regs, * Raise a BUG or we'll continue in an endless reentering loop * and eventually a stack overflow. 
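With jprobes gone, the kprobes hunks below leave a single contract: a pre_handler that returns 0 lets the core single-step the probed instruction, while a non-zero return means the handler redirected regs->ip itself and single-stepping must be skipped. A minimal registration sketch under that contract (hypothetical demo_* names; the probed symbol is an arbitrary example):

#include <linux/kprobes.h>
#include <linux/module.h>

static int demo_pre(struct kprobe *p, struct pt_regs *regs)
{
        pr_info("hit %s\n", p->symbol_name);
        return 0;       /* 0: let the core single-step as usual */
}

static struct kprobe demo_kp = {
        .symbol_name = "do_sys_open",   /* assumed probeable symbol */
        .pre_handler = demo_pre,
};

static int __init demo_init(void) { return register_kprobe(&demo_kp); }
static void __exit demo_exit(void) { unregister_kprobe(&demo_kp); }
module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");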
*/ - printk(KERN_WARNING "Unrecoverable kprobe detected at %p.\n", - p->addr); + pr_err("Unrecoverable kprobe detected.\n"); dump_kprobe(p); BUG(); default: @@ -669,12 +663,10 @@ int kprobe_int3_handler(struct pt_regs *regs) addr = (kprobe_opcode_t *)(regs->ip - sizeof(kprobe_opcode_t)); /* - * We don't want to be preempted for the entire - * duration of kprobe processing. We conditionally - * re-enable preemption at the end of this function, - * and also in reenter_kprobe() and setup_singlestep(). + * We don't want to be preempted for the entire duration of kprobe + * processing. Since int3 and debug trap disables irqs and we clear + * IF while singlestepping, it must be no preemptible. */ - preempt_disable(); kcb = get_kprobe_ctlblk(); p = get_kprobe(addr); @@ -690,13 +682,14 @@ int kprobe_int3_handler(struct pt_regs *regs) /* * If we have no pre-handler or it returned 0, we * continue with normal processing. If we have a - * pre-handler and it returned non-zero, it prepped - * for calling the break_handler below on re-entry - * for jprobe processing, so get out doing nothing - * more here. + * pre-handler and it returned non-zero, that means + * user handler setup registers to exit to another + * instruction, we must skip the single stepping. */ if (!p->pre_handler || !p->pre_handler(p, regs)) setup_singlestep(p, regs, kcb, 0); + else + reset_current_kprobe(); return 1; } } else if (*addr != BREAKPOINT_INSTRUCTION) { @@ -710,18 +703,9 @@ int kprobe_int3_handler(struct pt_regs *regs) * the original instruction. */ regs->ip = (unsigned long)addr; - preempt_enable_no_resched(); return 1; - } else if (kprobe_running()) { - p = __this_cpu_read(current_kprobe); - if (p->break_handler && p->break_handler(p, regs)) { - if (!skip_singlestep(p, regs, kcb)) - setup_singlestep(p, regs, kcb, 0); - return 1; - } } /* else: not a kprobe fault; let the kernel handle it */ - preempt_enable_no_resched(); return 0; } NOKPROBE_SYMBOL(kprobe_int3_handler); @@ -972,8 +956,6 @@ int kprobe_debug_handler(struct pt_regs *regs) } reset_current_kprobe(); out: - preempt_enable_no_resched(); - /* * if somebody else is singlestepping across a probe point, flags * will have TF set, in which case, continue the remaining processing @@ -1020,7 +1002,6 @@ int kprobe_fault_handler(struct pt_regs *regs, int trapnr) restore_previous_kprobe(kcb); else reset_current_kprobe(); - preempt_enable_no_resched(); } else if (kcb->kprobe_status == KPROBE_HIT_ACTIVE || kcb->kprobe_status == KPROBE_HIT_SSDONE) { /* @@ -1083,93 +1064,6 @@ int kprobe_exceptions_notify(struct notifier_block *self, unsigned long val, } NOKPROBE_SYMBOL(kprobe_exceptions_notify); -int setjmp_pre_handler(struct kprobe *p, struct pt_regs *regs) -{ - struct jprobe *jp = container_of(p, struct jprobe, kp); - unsigned long addr; - struct kprobe_ctlblk *kcb = get_kprobe_ctlblk(); - - kcb->jprobe_saved_regs = *regs; - kcb->jprobe_saved_sp = stack_addr(regs); - addr = (unsigned long)(kcb->jprobe_saved_sp); - - /* - * As Linus pointed out, gcc assumes that the callee - * owns the argument space and could overwrite it, e.g. - * tailcall optimization. So, to be absolutely safe - * we also save and restore enough stack bytes to cover - * the argument area. 
- * Use __memcpy() to avoid KASAN stack out-of-bounds reports as we copy - * raw stack chunk with redzones: - */ - __memcpy(kcb->jprobes_stack, (kprobe_opcode_t *)addr, MIN_STACK_SIZE(addr)); - regs->ip = (unsigned long)(jp->entry); - - /* - * jprobes use jprobe_return() which skips the normal return - * path of the function, and this messes up the accounting of the - * function graph tracer to get messed up. - * - * Pause function graph tracing while performing the jprobe function. - */ - pause_graph_tracing(); - return 1; -} -NOKPROBE_SYMBOL(setjmp_pre_handler); - -void jprobe_return(void) -{ - struct kprobe_ctlblk *kcb = get_kprobe_ctlblk(); - - /* Unpoison stack redzones in the frames we are going to jump over. */ - kasan_unpoison_stack_above_sp_to(kcb->jprobe_saved_sp); - - asm volatile ( -#ifdef CONFIG_X86_64 - " xchg %%rbx,%%rsp \n" -#else - " xchgl %%ebx,%%esp \n" -#endif - " int3 \n" - " .globl jprobe_return_end\n" - " jprobe_return_end: \n" - " nop \n"::"b" - (kcb->jprobe_saved_sp):"memory"); -} -NOKPROBE_SYMBOL(jprobe_return); -NOKPROBE_SYMBOL(jprobe_return_end); - -int longjmp_break_handler(struct kprobe *p, struct pt_regs *regs) -{ - struct kprobe_ctlblk *kcb = get_kprobe_ctlblk(); - u8 *addr = (u8 *) (regs->ip - 1); - struct jprobe *jp = container_of(p, struct jprobe, kp); - void *saved_sp = kcb->jprobe_saved_sp; - - if ((addr > (u8 *) jprobe_return) && - (addr < (u8 *) jprobe_return_end)) { - if (stack_addr(regs) != saved_sp) { - struct pt_regs *saved_regs = &kcb->jprobe_saved_regs; - printk(KERN_ERR - "current sp %p does not match saved sp %p\n", - stack_addr(regs), saved_sp); - printk(KERN_ERR "Saved registers for jprobe %p\n", jp); - show_regs(saved_regs); - printk(KERN_ERR "Current registers\n"); - show_regs(regs); - BUG(); - } - /* It's OK to start function graph tracing again */ - unpause_graph_tracing(); - *regs = kcb->jprobe_saved_regs; - __memcpy(saved_sp, kcb->jprobes_stack, MIN_STACK_SIZE(saved_sp)); - preempt_enable_no_resched(); - return 1; - } - return 0; -} -NOKPROBE_SYMBOL(longjmp_break_handler); - bool arch_within_kprobe_blacklist(unsigned long addr) { bool is_in_entry_trampoline_section = false; diff --git a/arch/x86/kernel/kprobes/ftrace.c b/arch/x86/kernel/kprobes/ftrace.c index 8dc0161cec8f..ef819e19650b 100644 --- a/arch/x86/kernel/kprobes/ftrace.c +++ b/arch/x86/kernel/kprobes/ftrace.c @@ -25,36 +25,6 @@ #include "common.h" -static nokprobe_inline -void __skip_singlestep(struct kprobe *p, struct pt_regs *regs, - struct kprobe_ctlblk *kcb, unsigned long orig_ip) -{ - /* - * Emulate singlestep (and also recover regs->ip) - * as if there is a 5byte nop - */ - regs->ip = (unsigned long)p->addr + MCOUNT_INSN_SIZE; - if (unlikely(p->post_handler)) { - kcb->kprobe_status = KPROBE_HIT_SSDONE; - p->post_handler(p, regs, 0); - } - __this_cpu_write(current_kprobe, NULL); - if (orig_ip) - regs->ip = orig_ip; -} - -int skip_singlestep(struct kprobe *p, struct pt_regs *regs, - struct kprobe_ctlblk *kcb) -{ - if (kprobe_ftrace(p)) { - __skip_singlestep(p, regs, kcb, 0); - preempt_enable_no_resched(); - return 1; - } - return 0; -} -NOKPROBE_SYMBOL(skip_singlestep); - /* Ftrace callback handler for kprobes -- called under preepmt disabed */ void kprobe_ftrace_handler(unsigned long ip, unsigned long parent_ip, struct ftrace_ops *ops, struct pt_regs *regs) @@ -75,18 +45,25 @@ void kprobe_ftrace_handler(unsigned long ip, unsigned long parent_ip, /* Kprobe handler expects regs->ip = ip + 1 as breakpoint hit */ regs->ip = ip + sizeof(kprobe_opcode_t); - /* To emulate trap 
based kprobes, preempt_disable here */ - preempt_disable(); __this_cpu_write(current_kprobe, p); kcb->kprobe_status = KPROBE_HIT_ACTIVE; if (!p->pre_handler || !p->pre_handler(p, regs)) { - __skip_singlestep(p, regs, kcb, orig_ip); - preempt_enable_no_resched(); + /* + * Emulate singlestep (and also recover regs->ip) + * as if there is a 5byte nop + */ + regs->ip = (unsigned long)p->addr + MCOUNT_INSN_SIZE; + if (unlikely(p->post_handler)) { + kcb->kprobe_status = KPROBE_HIT_SSDONE; + p->post_handler(p, regs, 0); + } + regs->ip = orig_ip; } /* - * If pre_handler returns !0, it sets regs->ip and - * resets current kprobe, and keep preempt count +1. + * If pre_handler returns !0, it changes regs->ip. We have to + * skip emulating post_handler. */ + __this_cpu_write(current_kprobe, NULL); } } NOKPROBE_SYMBOL(kprobe_ftrace_handler); diff --git a/arch/x86/kernel/kprobes/opt.c b/arch/x86/kernel/kprobes/opt.c index 203d398802a3..eaf02f2e7300 100644 --- a/arch/x86/kernel/kprobes/opt.c +++ b/arch/x86/kernel/kprobes/opt.c @@ -491,7 +491,6 @@ int setup_detour_execution(struct kprobe *p, struct pt_regs *regs, int reenter) regs->ip = (unsigned long)op->optinsn.insn + TMPL_END_IDX; if (!reenter) reset_current_kprobe(); - preempt_enable_no_resched(); return 1; } return 0; diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c index 5b2300b818af..09aaabb2bbf1 100644 --- a/arch/x86/kernel/kvm.c +++ b/arch/x86/kernel/kvm.c @@ -45,7 +45,6 @@ #include #include #include -#include static int kvmapf = 1; @@ -66,15 +65,6 @@ static int __init parse_no_stealacc(char *arg) early_param("no-steal-acc", parse_no_stealacc); -static int kvmclock_vsyscall = 1; -static int __init parse_no_kvmclock_vsyscall(char *arg) -{ - kvmclock_vsyscall = 0; - return 0; -} - -early_param("no-kvmclock-vsyscall", parse_no_kvmclock_vsyscall); - static DEFINE_PER_CPU_DECRYPTED(struct kvm_vcpu_pv_apf_data, apf_reason) __aligned(64); static DEFINE_PER_CPU_DECRYPTED(struct kvm_steal_time, steal_time) __aligned(64); static int has_steal_clock = 0; @@ -154,7 +144,7 @@ void kvm_async_pf_task_wait(u32 token, int interrupt_kernel) for (;;) { if (!n.halted) - prepare_to_swait(&n.wq, &wait, TASK_UNINTERRUPTIBLE); + prepare_to_swait_exclusive(&n.wq, &wait, TASK_UNINTERRUPTIBLE); if (hlist_unhashed(&n.link)) break; @@ -188,7 +178,7 @@ static void apf_task_wake_one(struct kvm_task_sleep_node *n) if (n->halted) smp_send_reschedule(n->cpu); else if (swq_has_sleeper(&n->wq)) - swake_up(&n->wq); + swake_up_one(&n->wq); } static void apf_task_wake_all(void) @@ -560,9 +550,6 @@ static void __init kvm_guest_init(void) if (kvm_para_has_feature(KVM_FEATURE_PV_EOI)) apic_set_eoi_write(kvm_guest_apic_eoi_write); - if (kvmclock_vsyscall) - kvm_setup_vsyscall_timeinfo(); - #ifdef CONFIG_SMP smp_ops.smp_prepare_cpus = kvm_smp_prepare_cpus; smp_ops.smp_prepare_boot_cpu = kvm_smp_prepare_boot_cpu; @@ -628,6 +615,7 @@ const __initconst struct hypervisor_x86 x86_hyper_kvm = { .name = "KVM", .detect = kvm_detect, .type = X86_HYPER_KVM, + .init.init_platform = kvmclock_init, .init.guest_late_init = kvm_guest_init, .init.x2apic_available = kvm_para_available, }; diff --git a/arch/x86/kernel/kvmclock.c b/arch/x86/kernel/kvmclock.c index bf8d1eb7fca3..d2edd7e6c294 100644 --- a/arch/x86/kernel/kvmclock.c +++ b/arch/x86/kernel/kvmclock.c @@ -23,30 +23,56 @@ #include #include #include -#include +#include #include #include +#include +#include #include #include #include #include -static int kvmclock __ro_after_init = 1; -static int msr_kvm_system_time = MSR_KVM_SYSTEM_TIME; 
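The kvmclock rework that follows replaces the dynamically allocated hv_clock array with a static boot array plus a per-CPU pointer. The lookup pattern it relies on, sketched stand-alone (hypothetical demo_* names):

#include <linux/percpu.h>
#include <linux/types.h>

struct demo_info {
        u64 last_cycles;
};

static DEFINE_PER_CPU(struct demo_info *, demo_info_ptr);

static inline struct demo_info *this_cpu_demo(void)
{
        /* A single per-CPU load; callers that must not migrate
         * still need preemption disabled around use of the result. */
        return this_cpu_read(demo_info_ptr);
}

This is why kvm_get_wallclock() below can drop the get_cpu()/put_cpu() pair in favor of a short preempt_disable() section.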
-static int msr_kvm_wall_clock = MSR_KVM_WALL_CLOCK; -static u64 kvm_sched_clock_offset; +static int kvmclock __initdata = 1; +static int kvmclock_vsyscall __initdata = 1; +static int msr_kvm_system_time __ro_after_init = MSR_KVM_SYSTEM_TIME; +static int msr_kvm_wall_clock __ro_after_init = MSR_KVM_WALL_CLOCK; +static u64 kvm_sched_clock_offset __ro_after_init; -static int parse_no_kvmclock(char *arg) +static int __init parse_no_kvmclock(char *arg) { kvmclock = 0; return 0; } early_param("no-kvmclock", parse_no_kvmclock); -/* The hypervisor will put information about time periodically here */ -static struct pvclock_vsyscall_time_info *hv_clock; -static struct pvclock_wall_clock *wall_clock; +static int __init parse_no_kvmclock_vsyscall(char *arg) +{ + kvmclock_vsyscall = 0; + return 0; +} +early_param("no-kvmclock-vsyscall", parse_no_kvmclock_vsyscall); + +/* Aligned to page sizes to match whats mapped via vsyscalls to userspace */ +#define HV_CLOCK_SIZE (sizeof(struct pvclock_vsyscall_time_info) * NR_CPUS) +#define HVC_BOOT_ARRAY_SIZE \ + (PAGE_SIZE / sizeof(struct pvclock_vsyscall_time_info)) + +static struct pvclock_vsyscall_time_info + hv_clock_boot[HVC_BOOT_ARRAY_SIZE] __aligned(PAGE_SIZE); +static struct pvclock_wall_clock wall_clock; +static DEFINE_PER_CPU(struct pvclock_vsyscall_time_info *, hv_clock_per_cpu); + +static inline struct pvclock_vcpu_time_info *this_cpu_pvti(void) +{ + return &this_cpu_read(hv_clock_per_cpu)->pvti; +} + +static inline struct pvclock_vsyscall_time_info *this_cpu_hvclock(void) +{ + return this_cpu_read(hv_clock_per_cpu); +} /* * The wallclock is the time of day when we booted. Since then, some time may @@ -55,21 +81,10 @@ static struct pvclock_wall_clock *wall_clock; */ static void kvm_get_wallclock(struct timespec64 *now) { - struct pvclock_vcpu_time_info *vcpu_time; - int low, high; - int cpu; - - low = (int)slow_virt_to_phys(wall_clock); - high = ((u64)slow_virt_to_phys(wall_clock) >> 32); - - native_write_msr(msr_kvm_wall_clock, low, high); - - cpu = get_cpu(); - - vcpu_time = &hv_clock[cpu].pvti; - pvclock_read_wallclock(wall_clock, vcpu_time, now); - - put_cpu(); + wrmsrl(msr_kvm_wall_clock, slow_virt_to_phys(&wall_clock)); + preempt_disable(); + pvclock_read_wallclock(&wall_clock, this_cpu_pvti(), now); + preempt_enable(); } static int kvm_set_wallclock(const struct timespec64 *now) @@ -79,14 +94,10 @@ static int kvm_set_wallclock(const struct timespec64 *now) static u64 kvm_clock_read(void) { - struct pvclock_vcpu_time_info *src; u64 ret; - int cpu; preempt_disable_notrace(); - cpu = smp_processor_id(); - src = &hv_clock[cpu].pvti; - ret = pvclock_clocksource_read(src); + ret = pvclock_clocksource_read(this_cpu_pvti()); preempt_enable_notrace(); return ret; } @@ -112,11 +123,11 @@ static inline void kvm_sched_clock_init(bool stable) kvm_sched_clock_offset = kvm_clock_read(); pv_time_ops.sched_clock = kvm_sched_clock_read; - printk(KERN_INFO "kvm-clock: using sched offset of %llu cycles\n", - kvm_sched_clock_offset); + pr_info("kvm-clock: using sched offset of %llu cycles", + kvm_sched_clock_offset); BUILD_BUG_ON(sizeof(kvm_sched_clock_offset) > - sizeof(((struct pvclock_vcpu_time_info *)NULL)->system_time)); + sizeof(((struct pvclock_vcpu_time_info *)NULL)->system_time)); } /* @@ -130,18 +141,11 @@ static inline void kvm_sched_clock_init(bool stable) */ static unsigned long kvm_get_tsc_khz(void) { - struct pvclock_vcpu_time_info *src; - int cpu; - unsigned long tsc_khz; - - cpu = get_cpu(); - src = &hv_clock[cpu].pvti; - tsc_khz = 
pvclock_tsc_khz(src); - put_cpu(); - return tsc_khz; + setup_force_cpu_cap(X86_FEATURE_TSC_KNOWN_FREQ); + return pvclock_tsc_khz(this_cpu_pvti()); } -static void kvm_get_preset_lpj(void) +static void __init kvm_get_preset_lpj(void) { unsigned long khz; u64 lpj; @@ -155,49 +159,40 @@ static void kvm_get_preset_lpj(void) bool kvm_check_and_clear_guest_paused(void) { + struct pvclock_vsyscall_time_info *src = this_cpu_hvclock(); bool ret = false; - struct pvclock_vcpu_time_info *src; - int cpu = smp_processor_id(); - if (!hv_clock) + if (!src) return ret; - src = &hv_clock[cpu].pvti; - if ((src->flags & PVCLOCK_GUEST_STOPPED) != 0) { - src->flags &= ~PVCLOCK_GUEST_STOPPED; + if ((src->pvti.flags & PVCLOCK_GUEST_STOPPED) != 0) { + src->pvti.flags &= ~PVCLOCK_GUEST_STOPPED; pvclock_touch_watchdogs(); ret = true; } - return ret; } struct clocksource kvm_clock = { - .name = "kvm-clock", - .read = kvm_clock_get_cycles, - .rating = 400, - .mask = CLOCKSOURCE_MASK(64), - .flags = CLOCK_SOURCE_IS_CONTINUOUS, + .name = "kvm-clock", + .read = kvm_clock_get_cycles, + .rating = 400, + .mask = CLOCKSOURCE_MASK(64), + .flags = CLOCK_SOURCE_IS_CONTINUOUS, }; EXPORT_SYMBOL_GPL(kvm_clock); -int kvm_register_clock(char *txt) +static void kvm_register_clock(char *txt) { - int cpu = smp_processor_id(); - int low, high, ret; - struct pvclock_vcpu_time_info *src; - - if (!hv_clock) - return 0; + struct pvclock_vsyscall_time_info *src = this_cpu_hvclock(); + u64 pa; - src = &hv_clock[cpu].pvti; - low = (int)slow_virt_to_phys(src) | 1; - high = ((u64)slow_virt_to_phys(src) >> 32); - ret = native_write_msr_safe(msr_kvm_system_time, low, high); - printk(KERN_INFO "kvm-clock: cpu %d, msr %x:%x, %s\n", - cpu, high, low, txt); + if (!src) + return; - return ret; + pa = slow_virt_to_phys(&src->pvti) | 0x01ULL; + wrmsrl(msr_kvm_system_time, pa); + pr_info("kvm-clock: cpu %d, msr %llx, %s", smp_processor_id(), pa, txt); } static void kvm_save_sched_clock_state(void) @@ -212,11 +207,7 @@ static void kvm_restore_sched_clock_state(void) #ifdef CONFIG_X86_LOCAL_APIC static void kvm_setup_secondary_clock(void) { - /* - * Now that the first cpu already had this clocksource initialized, - * we shouldn't fail. - */ - WARN_ON(kvm_register_clock("secondary cpu clock")); + kvm_register_clock("secondary cpu clock"); } #endif @@ -244,98 +235,84 @@ static void kvm_shutdown(void) native_machine_shutdown(); } -static phys_addr_t __init kvm_memblock_alloc(phys_addr_t size, - phys_addr_t align) +static int __init kvm_setup_vsyscall_timeinfo(void) { - phys_addr_t mem; +#ifdef CONFIG_X86_64 + u8 flags; - mem = memblock_alloc(size, align); - if (!mem) + if (!per_cpu(hv_clock_per_cpu, 0) || !kvmclock_vsyscall) return 0; - if (sev_active()) { - if (early_set_memory_decrypted((unsigned long)__va(mem), size)) - goto e_free; - } + flags = pvclock_read_flags(&hv_clock_boot[0].pvti); + if (!(flags & PVCLOCK_TSC_STABLE_BIT)) + return 0; - return mem; -e_free: - memblock_free(mem, size); + kvm_clock.archdata.vclock_mode = VCLOCK_PVCLOCK; +#endif return 0; } +early_initcall(kvm_setup_vsyscall_timeinfo); -static void __init kvm_memblock_free(phys_addr_t addr, phys_addr_t size) +static int kvmclock_setup_percpu(unsigned int cpu) { - if (sev_active()) - early_set_memory_encrypted((unsigned long)__va(addr), size); + struct pvclock_vsyscall_time_info *p = per_cpu(hv_clock_per_cpu, cpu); + + /* + * The per cpu area setup replicates CPU0 data to all cpu + * pointers. So carefully check. CPU0 has been set up in init + * already. 
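kvmclock_setup_percpu() above is wired up through the CPU hotplug state machine rather than called directly. A minimal sketch of registering such a prepare-stage callback (hypothetical demo_* names):

#include <linux/cpuhotplug.h>
#include <linux/init.h>

static int demo_cpu_prepare(unsigned int cpu)
{
        /* Runs on the control CPU before @cpu is brought up;
         * allocate or validate that CPU's state here. */
        return 0;
}

static int __init demo_hp_init(void)
{
        int ret = cpuhp_setup_state(CPUHP_BP_PREPARE_DYN, "demo:prepare",
                                    demo_cpu_prepare, NULL);

        return ret < 0 ? ret : 0;       /* _DYN returns the state number */
}

kvmclock_init() below does exactly this and bails out if the registration fails.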
+ */ + if (!cpu || (p && p != per_cpu(hv_clock_per_cpu, 0))) + return 0; + + /* Use the static page for the first CPUs, allocate otherwise */ + if (cpu < HVC_BOOT_ARRAY_SIZE) + p = &hv_clock_boot[cpu]; + else + p = kzalloc(sizeof(*p), GFP_KERNEL); - memblock_free(addr, size); + per_cpu(hv_clock_per_cpu, cpu) = p; + return p ? 0 : -ENOMEM; } void __init kvmclock_init(void) { - struct pvclock_vcpu_time_info *vcpu_time; - unsigned long mem, mem_wall_clock; - int size, cpu, wall_clock_size; u8 flags; - size = PAGE_ALIGN(sizeof(struct pvclock_vsyscall_time_info)*NR_CPUS); - - if (!kvm_para_available()) + if (!kvm_para_available() || !kvmclock) return; - if (kvmclock && kvm_para_has_feature(KVM_FEATURE_CLOCKSOURCE2)) { + if (kvm_para_has_feature(KVM_FEATURE_CLOCKSOURCE2)) { msr_kvm_system_time = MSR_KVM_SYSTEM_TIME_NEW; msr_kvm_wall_clock = MSR_KVM_WALL_CLOCK_NEW; - } else if (!(kvmclock && kvm_para_has_feature(KVM_FEATURE_CLOCKSOURCE))) - return; - - wall_clock_size = PAGE_ALIGN(sizeof(struct pvclock_wall_clock)); - mem_wall_clock = kvm_memblock_alloc(wall_clock_size, PAGE_SIZE); - if (!mem_wall_clock) - return; - - wall_clock = __va(mem_wall_clock); - memset(wall_clock, 0, wall_clock_size); - - mem = kvm_memblock_alloc(size, PAGE_SIZE); - if (!mem) { - kvm_memblock_free(mem_wall_clock, wall_clock_size); - wall_clock = NULL; + } else if (!kvm_para_has_feature(KVM_FEATURE_CLOCKSOURCE)) { return; } - hv_clock = __va(mem); - memset(hv_clock, 0, size); - - if (kvm_register_clock("primary cpu clock")) { - hv_clock = NULL; - kvm_memblock_free(mem, size); - kvm_memblock_free(mem_wall_clock, wall_clock_size); - wall_clock = NULL; + if (cpuhp_setup_state(CPUHP_BP_PREPARE_DYN, "kvmclock:setup_percpu", + kvmclock_setup_percpu, NULL) < 0) { return; } - printk(KERN_INFO "kvm-clock: Using msrs %x and %x", + pr_info("kvm-clock: Using msrs %x and %x", msr_kvm_system_time, msr_kvm_wall_clock); + this_cpu_write(hv_clock_per_cpu, &hv_clock_boot[0]); + kvm_register_clock("primary cpu clock"); + pvclock_set_pvti_cpu0_va(hv_clock_boot); + if (kvm_para_has_feature(KVM_FEATURE_CLOCKSOURCE_STABLE_BIT)) pvclock_set_flags(PVCLOCK_TSC_STABLE_BIT); - cpu = get_cpu(); - vcpu_time = &hv_clock[cpu].pvti; - flags = pvclock_read_flags(vcpu_time); - + flags = pvclock_read_flags(&hv_clock_boot[0].pvti); kvm_sched_clock_init(flags & PVCLOCK_TSC_STABLE_BIT); - put_cpu(); x86_platform.calibrate_tsc = kvm_get_tsc_khz; x86_platform.calibrate_cpu = kvm_get_tsc_khz; x86_platform.get_wallclock = kvm_get_wallclock; x86_platform.set_wallclock = kvm_set_wallclock; #ifdef CONFIG_X86_LOCAL_APIC - x86_cpuinit.early_percpu_clock_init = - kvm_setup_secondary_clock; + x86_cpuinit.early_percpu_clock_init = kvm_setup_secondary_clock; #endif x86_platform.save_sched_clock_state = kvm_save_sched_clock_state; x86_platform.restore_sched_clock_state = kvm_restore_sched_clock_state; @@ -347,34 +324,3 @@ void __init kvmclock_init(void) clocksource_register_hz(&kvm_clock, NSEC_PER_SEC); pv_info.name = "KVM"; } - -int __init kvm_setup_vsyscall_timeinfo(void) -{ -#ifdef CONFIG_X86_64 - int cpu; - u8 flags; - struct pvclock_vcpu_time_info *vcpu_time; - unsigned int size; - - if (!hv_clock) - return 0; - - size = PAGE_ALIGN(sizeof(struct pvclock_vsyscall_time_info)*NR_CPUS); - - cpu = get_cpu(); - - vcpu_time = &hv_clock[cpu].pvti; - flags = pvclock_read_flags(vcpu_time); - - if (!(flags & PVCLOCK_TSC_STABLE_BIT)) { - put_cpu(); - return 1; - } - - pvclock_set_pvti_cpu0_va(hv_clock); - put_cpu(); - - kvm_clock.archdata.vclock_mode = VCLOCK_PVCLOCK; -#endif - 
return 0; -} diff --git a/arch/x86/kernel/ldt.c b/arch/x86/kernel/ldt.c index c9b14020f4dd..733e6ace0fa4 100644 --- a/arch/x86/kernel/ldt.c +++ b/arch/x86/kernel/ldt.c @@ -100,6 +100,102 @@ static struct ldt_struct *alloc_ldt_struct(unsigned int num_entries) return new_ldt; } +#ifdef CONFIG_PAGE_TABLE_ISOLATION + +static void do_sanity_check(struct mm_struct *mm, + bool had_kernel_mapping, + bool had_user_mapping) +{ + if (mm->context.ldt) { + /* + * We already had an LDT. The top-level entry should already + * have been allocated and synchronized with the usermode + * tables. + */ + WARN_ON(!had_kernel_mapping); + if (static_cpu_has(X86_FEATURE_PTI)) + WARN_ON(!had_user_mapping); + } else { + /* + * This is the first time we're mapping an LDT for this process. + * Sync the pgd to the usermode tables. + */ + WARN_ON(had_kernel_mapping); + if (static_cpu_has(X86_FEATURE_PTI)) + WARN_ON(had_user_mapping); + } +} + +#ifdef CONFIG_X86_PAE + +static pmd_t *pgd_to_pmd_walk(pgd_t *pgd, unsigned long va) +{ + p4d_t *p4d; + pud_t *pud; + + if (pgd->pgd == 0) + return NULL; + + p4d = p4d_offset(pgd, va); + if (p4d_none(*p4d)) + return NULL; + + pud = pud_offset(p4d, va); + if (pud_none(*pud)) + return NULL; + + return pmd_offset(pud, va); +} + +static void map_ldt_struct_to_user(struct mm_struct *mm) +{ + pgd_t *k_pgd = pgd_offset(mm, LDT_BASE_ADDR); + pgd_t *u_pgd = kernel_to_user_pgdp(k_pgd); + pmd_t *k_pmd, *u_pmd; + + k_pmd = pgd_to_pmd_walk(k_pgd, LDT_BASE_ADDR); + u_pmd = pgd_to_pmd_walk(u_pgd, LDT_BASE_ADDR); + + if (static_cpu_has(X86_FEATURE_PTI) && !mm->context.ldt) + set_pmd(u_pmd, *k_pmd); +} + +static void sanity_check_ldt_mapping(struct mm_struct *mm) +{ + pgd_t *k_pgd = pgd_offset(mm, LDT_BASE_ADDR); + pgd_t *u_pgd = kernel_to_user_pgdp(k_pgd); + bool had_kernel, had_user; + pmd_t *k_pmd, *u_pmd; + + k_pmd = pgd_to_pmd_walk(k_pgd, LDT_BASE_ADDR); + u_pmd = pgd_to_pmd_walk(u_pgd, LDT_BASE_ADDR); + had_kernel = (k_pmd->pmd != 0); + had_user = (u_pmd->pmd != 0); + + do_sanity_check(mm, had_kernel, had_user); +} + +#else /* !CONFIG_X86_PAE */ + +static void map_ldt_struct_to_user(struct mm_struct *mm) +{ + pgd_t *pgd = pgd_offset(mm, LDT_BASE_ADDR); + + if (static_cpu_has(X86_FEATURE_PTI) && !mm->context.ldt) + set_pgd(kernel_to_user_pgdp(pgd), *pgd); +} + +static void sanity_check_ldt_mapping(struct mm_struct *mm) +{ + pgd_t *pgd = pgd_offset(mm, LDT_BASE_ADDR); + bool had_kernel = (pgd->pgd != 0); + bool had_user = (kernel_to_user_pgdp(pgd)->pgd != 0); + + do_sanity_check(mm, had_kernel, had_user); +} + +#endif /* CONFIG_X86_PAE */ + /* * If PTI is enabled, this maps the LDT into the kernelmode and * usermode tables for the given mm. @@ -115,9 +211,8 @@ static struct ldt_struct *alloc_ldt_struct(unsigned int num_entries) static int map_ldt_struct(struct mm_struct *mm, struct ldt_struct *ldt, int slot) { -#ifdef CONFIG_PAGE_TABLE_ISOLATION - bool is_vmalloc, had_top_level_entry; unsigned long va; + bool is_vmalloc; spinlock_t *ptl; pgd_t *pgd; int i; @@ -131,13 +226,15 @@ map_ldt_struct(struct mm_struct *mm, struct ldt_struct *ldt, int slot) */ WARN_ON(ldt->slot != -1); + /* Check if the current mappings are sane */ + sanity_check_ldt_mapping(mm); + /* * Did we already have the top level entry allocated? We can't * use pgd_none() for this because it doens't do anything on * 4-level page table kernels. 
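As a usage sketch of the walk helpers added above (PAE configuration assumed; hypothetical demo_* name), checking whether the kernel side of the LDT range is already mapped:

static bool demo_ldt_kernel_mapped(struct mm_struct *mm)
{
        pgd_t *k_pgd = pgd_offset(mm, LDT_BASE_ADDR);
        pmd_t *k_pmd = pgd_to_pmd_walk(k_pgd, LDT_BASE_ADDR);

        return k_pmd && k_pmd->pmd != 0;
}

sanity_check_ldt_mapping() performs this test against both the kernel and user page tables before map_ldt_struct() proceeds.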
*/ pgd = pgd_offset(mm, LDT_BASE_ADDR); - had_top_level_entry = (pgd->pgd != 0); is_vmalloc = is_vmalloc_addr(ldt->entries); @@ -172,41 +269,31 @@ map_ldt_struct(struct mm_struct *mm, struct ldt_struct *ldt, int slot) pte_unmap_unlock(ptep, ptl); } - if (mm->context.ldt) { - /* - * We already had an LDT. The top-level entry should already - * have been allocated and synchronized with the usermode - * tables. - */ - WARN_ON(!had_top_level_entry); - if (static_cpu_has(X86_FEATURE_PTI)) - WARN_ON(!kernel_to_user_pgdp(pgd)->pgd); - } else { - /* - * This is the first time we're mapping an LDT for this process. - * Sync the pgd to the usermode tables. - */ - WARN_ON(had_top_level_entry); - if (static_cpu_has(X86_FEATURE_PTI)) { - WARN_ON(kernel_to_user_pgdp(pgd)->pgd); - set_pgd(kernel_to_user_pgdp(pgd), *pgd); - } - } + /* Propagate LDT mapping to the user page-table */ + map_ldt_struct_to_user(mm); va = (unsigned long)ldt_slot_va(slot); flush_tlb_mm_range(mm, va, va + LDT_SLOT_STRIDE, 0); ldt->slot = slot; -#endif return 0; } +#else /* !CONFIG_PAGE_TABLE_ISOLATION */ + +static int +map_ldt_struct(struct mm_struct *mm, struct ldt_struct *ldt, int slot) +{ + return 0; +} +#endif /* CONFIG_PAGE_TABLE_ISOLATION */ + static void free_ldt_pgtables(struct mm_struct *mm) { #ifdef CONFIG_PAGE_TABLE_ISOLATION struct mmu_gather tlb; unsigned long start = LDT_BASE_ADDR; - unsigned long end = start + (1UL << PGDIR_SHIFT); + unsigned long end = LDT_END_ADDR; if (!static_cpu_has(X86_FEATURE_PTI)) return; diff --git a/arch/x86/kernel/machine_kexec_32.c b/arch/x86/kernel/machine_kexec_32.c index d1ab07ec8c9a..5409c2800ab5 100644 --- a/arch/x86/kernel/machine_kexec_32.c +++ b/arch/x86/kernel/machine_kexec_32.c @@ -56,7 +56,7 @@ static void load_segments(void) static void machine_kexec_free_page_tables(struct kimage *image) { - free_page((unsigned long)image->arch.pgd); + free_pages((unsigned long)image->arch.pgd, PGD_ALLOCATION_ORDER); image->arch.pgd = NULL; #ifdef CONFIG_X86_PAE free_page((unsigned long)image->arch.pmd0); @@ -72,7 +72,8 @@ static void machine_kexec_free_page_tables(struct kimage *image) static int machine_kexec_alloc_page_tables(struct kimage *image) { - image->arch.pgd = (pgd_t *)get_zeroed_page(GFP_KERNEL); + image->arch.pgd = (pgd_t *)__get_free_pages(GFP_KERNEL | __GFP_ZERO, + PGD_ALLOCATION_ORDER); #ifdef CONFIG_X86_PAE image->arch.pmd0 = (pmd_t *)get_zeroed_page(GFP_KERNEL); image->arch.pmd1 = (pmd_t *)get_zeroed_page(GFP_KERNEL); diff --git a/arch/x86/kernel/paravirt.c b/arch/x86/kernel/paravirt.c index 99dc79e76bdc..930c88341e4e 100644 --- a/arch/x86/kernel/paravirt.c +++ b/arch/x86/kernel/paravirt.c @@ -88,10 +88,12 @@ unsigned paravirt_patch_call(void *insnbuf, struct branch *b = insnbuf; unsigned long delta = (unsigned long)target - (addr+5); - if (tgt_clobbers & ~site_clobbers) - return len; /* target would clobber too much for this site */ - if (len < 5) + if (len < 5) { +#ifdef CONFIG_RETPOLINE + WARN_ONCE("Failing to patch indirect CALL in %ps\n", (void *)addr); +#endif return len; /* call too long for patch site */ + } b->opcode = 0xe8; /* call */ b->delta = delta; @@ -106,8 +108,12 @@ unsigned paravirt_patch_jmp(void *insnbuf, const void *target, struct branch *b = insnbuf; unsigned long delta = (unsigned long)target - (addr+5); - if (len < 5) + if (len < 5) { +#ifdef CONFIG_RETPOLINE + WARN_ONCE("Failing to patch indirect JMP in %ps\n", (void *)addr); +#endif return len; /* call too long for patch site */ + } b->opcode = 0xe9; /* jmp */ b->delta = delta; diff --git 
a/arch/x86/kernel/paravirt_patch_64.c b/arch/x86/kernel/paravirt_patch_64.c index 9edadabf04f6..9cb98f7b07c9 100644 --- a/arch/x86/kernel/paravirt_patch_64.c +++ b/arch/x86/kernel/paravirt_patch_64.c @@ -20,7 +20,7 @@ DEF_NATIVE(, mov64, "mov %rdi, %rax"); #if defined(CONFIG_PARAVIRT_SPINLOCKS) DEF_NATIVE(pv_lock_ops, queued_spin_unlock, "movb $0, (%rdi)"); -DEF_NATIVE(pv_lock_ops, vcpu_is_preempted, "xor %rax, %rax"); +DEF_NATIVE(pv_lock_ops, vcpu_is_preempted, "xor %eax, %eax"); #endif unsigned paravirt_patch_ident_32(void *insnbuf, unsigned len) diff --git a/arch/x86/kernel/pci-iommu_table.c b/arch/x86/kernel/pci-iommu_table.c index 4dfd90a75e63..2e9006c1e240 100644 --- a/arch/x86/kernel/pci-iommu_table.c +++ b/arch/x86/kernel/pci-iommu_table.c @@ -60,7 +60,7 @@ void __init check_iommu_entries(struct iommu_table_entry *start, printk(KERN_ERR "CYCLIC DEPENDENCY FOUND! %pS depends on %pS and vice-versa. BREAKING IT.\n", p->detect, q->detect); /* Heavy handed way..*/ - x->depend = 0; + x->depend = NULL; } } diff --git a/arch/x86/kernel/pcspeaker.c b/arch/x86/kernel/pcspeaker.c index da5190a1ea16..4a710ffffd9a 100644 --- a/arch/x86/kernel/pcspeaker.c +++ b/arch/x86/kernel/pcspeaker.c @@ -9,6 +9,6 @@ static __init int add_pcspkr(void) pd = platform_device_register_simple("pcspkr", -1, NULL, 0); - return IS_ERR(pd) ? PTR_ERR(pd) : 0; + return PTR_ERR_OR_ZERO(pd); } device_initcall(add_pcspkr); diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c index 30ca2d1a9231..c93fcfdf1673 100644 --- a/arch/x86/kernel/process.c +++ b/arch/x86/kernel/process.c @@ -57,14 +57,12 @@ __visible DEFINE_PER_CPU_PAGE_ALIGNED(struct tss_struct, cpu_tss_rw) = { */ .sp0 = (1UL << (BITS_PER_LONG-1)) + 1, -#ifdef CONFIG_X86_64 /* * .sp1 is cpu_current_top_of_stack. The init task never * runs user code, but cpu_current_top_of_stack should still * be well defined before the first context switch. */ .sp1 = TOP_OF_INIT_STACK, -#endif #ifdef CONFIG_X86_32 .ss0 = __KERNEL_DS, diff --git a/arch/x86/kernel/process_32.c b/arch/x86/kernel/process_32.c index 0ae659de21eb..2924fd447e61 100644 --- a/arch/x86/kernel/process_32.c +++ b/arch/x86/kernel/process_32.c @@ -285,7 +285,7 @@ __switch_to(struct task_struct *prev_p, struct task_struct *next_p) * current_thread_info(). Refresh the SYSENTER configuration in * case prev or next is vm86. */ - update_sp0(next_p); + update_task_stack(next_p); refresh_sysenter_cs(next); this_cpu_write(cpu_current_top_of_stack, (unsigned long)task_stack_page(next_p) + diff --git a/arch/x86/kernel/process_64.c b/arch/x86/kernel/process_64.c index 12bb445fb98d..476e3ddf8890 100644 --- a/arch/x86/kernel/process_64.c +++ b/arch/x86/kernel/process_64.c @@ -478,7 +478,7 @@ __switch_to(struct task_struct *prev_p, struct task_struct *next_p) this_cpu_write(cpu_current_top_of_stack, task_top_of_stack(next_p)); /* Reload sp0. 
*/ - update_sp0(next_p); + update_task_stack(next_p); /* * Now maybe reload the debug registers and handle I/O bitmaps diff --git a/arch/x86/kernel/quirks.c b/arch/x86/kernel/quirks.c index 697a4ce04308..736348ead421 100644 --- a/arch/x86/kernel/quirks.c +++ b/arch/x86/kernel/quirks.c @@ -645,12 +645,19 @@ static void quirk_intel_brickland_xeon_ras_cap(struct pci_dev *pdev) /* Skylake */ static void quirk_intel_purley_xeon_ras_cap(struct pci_dev *pdev) { - u32 capid0; + u32 capid0, capid5; pci_read_config_dword(pdev, 0x84, &capid0); + pci_read_config_dword(pdev, 0x98, &capid5); - if ((capid0 & 0xc0) == 0xc0) + /* + * CAPID0{7:6} indicate whether this is an advanced RAS SKU + * CAPID5{8:5} indicate that various NVDIMM usage modes are + * enabled, so memory machine check recovery is also enabled. + */ + if ((capid0 & 0xc0) == 0xc0 || (capid5 & 0x1e0)) static_branch_inc(&mcsafe_key); + } DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x0ec3, quirk_intel_brickland_xeon_ras_cap); DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x2fc0, quirk_intel_brickland_xeon_ras_cap); diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c index 2f86d883dd95..5d32c55aeb8b 100644 --- a/arch/x86/kernel/setup.c +++ b/arch/x86/kernel/setup.c @@ -866,6 +866,8 @@ void __init setup_arch(char **cmdline_p) idt_setup_early_traps(); early_cpu_init(); + arch_init_ideal_nops(); + jump_label_init(); early_ioremap_init(); setup_olpc_ofw_pgd(); @@ -1012,6 +1014,7 @@ void __init setup_arch(char **cmdline_p) */ init_hypervisor_platform(); + tsc_early_init(); x86_init.resources.probe_roms(); /* after parse_early_param, so could debug it */ @@ -1197,11 +1200,6 @@ void __init setup_arch(char **cmdline_p) memblock_find_dma_reserve(); -#ifdef CONFIG_KVM_GUEST - kvmclock_init(); -#endif - - tsc_early_delay_calibrate(); if (!early_xdbc_setup_hardware()) early_xdbc_register_console(); @@ -1272,8 +1270,6 @@ void __init setup_arch(char **cmdline_p) mcheck_init(); - arch_init_ideal_nops(); - register_refined_jiffies(CLOCK_TICK_RATE); #ifdef CONFIG_EFI diff --git a/arch/x86/kernel/signal.c b/arch/x86/kernel/signal.c index 445ca11ff863..92a3b312a53c 100644 --- a/arch/x86/kernel/signal.c +++ b/arch/x86/kernel/signal.c @@ -692,7 +692,7 @@ setup_rt_frame(struct ksignal *ksig, struct pt_regs *regs) * Increment event counter and perform fixup for the pre-signal * frame. */ - rseq_signal_deliver(regs); + rseq_signal_deliver(ksig, regs); /* Set up the stack frame */ if (is_ia32_frame(ksig)) { diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c index c2f7d1d2a5c3..db9656e13ea0 100644 --- a/arch/x86/kernel/smpboot.c +++ b/arch/x86/kernel/smpboot.c @@ -221,6 +221,11 @@ static void notrace start_secondary(void *unused) #ifdef CONFIG_X86_32 /* switch away from the initial page table */ load_cr3(swapper_pg_dir); + /* + * Initialize the CR4 shadow before doing anything that could + * try to read it. 
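cr4_init_shadow() exists because CR4 is cached in a per-CPU shadow and readers consult the cache, not the register. A generic sketch of that shadow pattern (hypothetical demo_* names; the kernel's real shadow is kept in per-CPU TLB state):

#include <linux/percpu.h>
#include <asm/special_insns.h>

static DEFINE_PER_CPU(unsigned long, demo_cr4_shadow);

static inline void demo_cr4_init_shadow(void)
{
        /* Seed the cached copy once at CPU bringup so later readers
         * can use a cheap per-CPU load instead of touching CR4. */
        this_cpu_write(demo_cr4_shadow, __read_cr4());
}

Hence the ordering constraint in the hunk below: the shadow must be seeded before anything on the secondary CPU tries to read it.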
+ */ + cr4_init_shadow(); __flush_tlb_all(); #endif load_current_idt(); diff --git a/arch/x86/kernel/stacktrace.c b/arch/x86/kernel/stacktrace.c index 093f2ea5dd56..7627455047c2 100644 --- a/arch/x86/kernel/stacktrace.c +++ b/arch/x86/kernel/stacktrace.c @@ -81,16 +81,6 @@ EXPORT_SYMBOL_GPL(save_stack_trace_tsk); #ifdef CONFIG_HAVE_RELIABLE_STACKTRACE -#define STACKTRACE_DUMP_ONCE(task) ({ \ - static bool __section(.data.unlikely) __dumped; \ - \ - if (!__dumped) { \ - __dumped = true; \ - WARN_ON(1); \ - show_stack(task, NULL); \ - } \ -}) - static int __always_inline __save_stack_trace_reliable(struct stack_trace *trace, struct task_struct *task) @@ -99,30 +89,25 @@ __save_stack_trace_reliable(struct stack_trace *trace, struct pt_regs *regs; unsigned long addr; - for (unwind_start(&state, task, NULL, NULL); !unwind_done(&state); + for (unwind_start(&state, task, NULL, NULL); + !unwind_done(&state) && !unwind_error(&state); unwind_next_frame(&state)) { regs = unwind_get_entry_regs(&state, NULL); if (regs) { + /* Success path for user tasks */ + if (user_mode(regs)) + goto success; + /* * Kernel mode registers on the stack indicate an * in-kernel interrupt or exception (e.g., preemption * or a page fault), which can make frame pointers * unreliable. */ - if (!user_mode(regs)) - return -EINVAL; - /* - * The last frame contains the user mode syscall - * pt_regs. Skip it and finish the unwind. - */ - unwind_next_frame(&state); - if (!unwind_done(&state)) { - STACKTRACE_DUMP_ONCE(task); + if (IS_ENABLED(CONFIG_FRAME_POINTER)) return -EINVAL; - } - break; } addr = unwind_get_return_address(&state); @@ -132,21 +117,22 @@ __save_stack_trace_reliable(struct stack_trace *trace, * generated code which __kernel_text_address() doesn't know * about. */ - if (!addr) { - STACKTRACE_DUMP_ONCE(task); + if (!addr) return -EINVAL; - } if (save_stack_address(trace, addr, false)) return -EINVAL; } /* Check for stack corruption */ - if (unwind_error(&state)) { - STACKTRACE_DUMP_ONCE(task); + if (unwind_error(&state)) + return -EINVAL; + + /* Success path for non-user tasks, i.e. kthreads and idle tasks */ + if (!(task->flags & (PF_KTHREAD | PF_IDLE))) return -EINVAL; - } +success: if (trace->nr_entries < trace->max_entries) trace->entries[trace->nr_entries++] = ULONG_MAX; diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c index a535dd64de63..e6db475164ed 100644 --- a/arch/x86/kernel/traps.c +++ b/arch/x86/kernel/traps.c @@ -835,16 +835,18 @@ static void math_error(struct pt_regs *regs, int error_code, int trapnr) char *str = (trapnr == X86_TRAP_MF) ? 
"fpu exception" : "simd exception"; - if (notify_die(DIE_TRAP, str, regs, error_code, trapnr, SIGFPE) == NOTIFY_STOP) - return; cond_local_irq_enable(regs); if (!user_mode(regs)) { - if (!fixup_exception(regs, trapnr)) { - task->thread.error_code = error_code; - task->thread.trap_nr = trapnr; + if (fixup_exception(regs, trapnr)) + return; + + task->thread.error_code = error_code; + task->thread.trap_nr = trapnr; + + if (notify_die(DIE_TRAP, str, regs, error_code, + trapnr, SIGFPE) != NOTIFY_STOP) die(str, regs, error_code); - } return; } diff --git a/arch/x86/kernel/tsc.c b/arch/x86/kernel/tsc.c index 74392d9d51e0..1463468ba9a0 100644 --- a/arch/x86/kernel/tsc.c +++ b/arch/x86/kernel/tsc.c @@ -33,16 +33,13 @@ EXPORT_SYMBOL(cpu_khz); unsigned int __read_mostly tsc_khz; EXPORT_SYMBOL(tsc_khz); +#define KHZ 1000 + /* * TSC can be unstable due to cpufreq or due to unsynced TSCs */ static int __read_mostly tsc_unstable; -/* native_sched_clock() is called before tsc_init(), so - we must start with the TSC soft disabled to prevent - erroneous rdtsc usage on !boot_cpu_has(X86_FEATURE_TSC) processors */ -static int __read_mostly tsc_disabled = -1; - static DEFINE_STATIC_KEY_FALSE(__use_tsc); int tsc_clocksource_reliable; @@ -106,23 +103,6 @@ void cyc2ns_read_end(void) * -johnstul@us.ibm.com "math is hard, lets go shopping!" */ -static void cyc2ns_data_init(struct cyc2ns_data *data) -{ - data->cyc2ns_mul = 0; - data->cyc2ns_shift = 0; - data->cyc2ns_offset = 0; -} - -static void __init cyc2ns_init(int cpu) -{ - struct cyc2ns *c2n = &per_cpu(cyc2ns, cpu); - - cyc2ns_data_init(&c2n->data[0]); - cyc2ns_data_init(&c2n->data[1]); - - seqcount_init(&c2n->seq); -} - static inline unsigned long long cycles_2_ns(unsigned long long cyc) { struct cyc2ns_data data; @@ -138,18 +118,11 @@ static inline unsigned long long cycles_2_ns(unsigned long long cyc) return ns; } -static void set_cyc2ns_scale(unsigned long khz, int cpu, unsigned long long tsc_now) +static void __set_cyc2ns_scale(unsigned long khz, int cpu, unsigned long long tsc_now) { unsigned long long ns_now; struct cyc2ns_data data; struct cyc2ns *c2n; - unsigned long flags; - - local_irq_save(flags); - sched_clock_idle_sleep_event(); - - if (!khz) - goto done; ns_now = cycles_2_ns(tsc_now); @@ -181,12 +154,55 @@ static void set_cyc2ns_scale(unsigned long khz, int cpu, unsigned long long tsc_ c2n->data[0] = data; raw_write_seqcount_latch(&c2n->seq); c2n->data[1] = data; +} + +static void set_cyc2ns_scale(unsigned long khz, int cpu, unsigned long long tsc_now) +{ + unsigned long flags; + + local_irq_save(flags); + sched_clock_idle_sleep_event(); + + if (khz) + __set_cyc2ns_scale(khz, cpu, tsc_now); -done: sched_clock_idle_wakeup_event(); local_irq_restore(flags); } +/* + * Initialize cyc2ns for boot cpu + */ +static void __init cyc2ns_init_boot_cpu(void) +{ + struct cyc2ns *c2n = this_cpu_ptr(&cyc2ns); + + seqcount_init(&c2n->seq); + __set_cyc2ns_scale(tsc_khz, smp_processor_id(), rdtsc()); +} + +/* + * Secondary CPUs do not run through tsc_init(), so set up + * all the scale factors for all CPUs, assuming the same + * speed as the bootup CPU. 
(cpufreq notifiers will fix this + * up if their speed diverges) + */ +static void __init cyc2ns_init_secondary_cpus(void) +{ + unsigned int cpu, this_cpu = smp_processor_id(); + struct cyc2ns *c2n = this_cpu_ptr(&cyc2ns); + struct cyc2ns_data *data = c2n->data; + + for_each_possible_cpu(cpu) { + if (cpu != this_cpu) { + seqcount_init(&c2n->seq); + c2n = per_cpu_ptr(&cyc2ns, cpu); + c2n->data[0] = data[0]; + c2n->data[1] = data[1]; + } + } +} + /* * Scheduler clock - returns current time in nanosec units. */ @@ -248,8 +264,7 @@ EXPORT_SYMBOL_GPL(check_tsc_unstable); #ifdef CONFIG_X86_TSC int __init notsc_setup(char *str) { - pr_warn("Kernel compiled with CONFIG_X86_TSC, cannot disable TSC completely\n"); - tsc_disabled = 1; + mark_tsc_unstable("boot parameter notsc"); return 1; } #else @@ -665,30 +680,17 @@ static unsigned long cpu_khz_from_cpuid(void) return eax_base_mhz * 1000; } -/** - * native_calibrate_cpu - calibrate the cpu on boot +/* + * calibrate cpu using pit, hpet, and ptimer methods. They are available + * later in boot after acpi is initialized. */ -unsigned long native_calibrate_cpu(void) +static unsigned long pit_hpet_ptimer_calibrate_cpu(void) { u64 tsc1, tsc2, delta, ref1, ref2; unsigned long tsc_pit_min = ULONG_MAX, tsc_ref_min = ULONG_MAX; - unsigned long flags, latch, ms, fast_calibrate; + unsigned long flags, latch, ms; int hpet = is_hpet_enabled(), i, loopmin; - fast_calibrate = cpu_khz_from_cpuid(); - if (fast_calibrate) - return fast_calibrate; - - fast_calibrate = cpu_khz_from_msr(); - if (fast_calibrate) - return fast_calibrate; - - local_irq_save(flags); - fast_calibrate = quick_pit_calibrate(); - local_irq_restore(flags); - if (fast_calibrate) - return fast_calibrate; - /* * Run 5 calibration loops to get the lowest frequency value * (the best estimate). We use two different calibration modes @@ -831,6 +833,37 @@ unsigned long native_calibrate_cpu(void) return tsc_pit_min; } +/** + * native_calibrate_cpu_early - can calibrate the cpu early in boot + */ +unsigned long native_calibrate_cpu_early(void) +{ + unsigned long flags, fast_calibrate = cpu_khz_from_cpuid(); + + if (!fast_calibrate) + fast_calibrate = cpu_khz_from_msr(); + if (!fast_calibrate) { + local_irq_save(flags); + fast_calibrate = quick_pit_calibrate(); + local_irq_restore(flags); + } + return fast_calibrate; +} + + +/** + * native_calibrate_cpu - calibrate the cpu + */ +static unsigned long native_calibrate_cpu(void) +{ + unsigned long tsc_freq = native_calibrate_cpu_early(); + + if (!tsc_freq) + tsc_freq = pit_hpet_ptimer_calibrate_cpu(); + + return tsc_freq; +} + void recalibrate_cpu_khz(void) { #ifndef CONFIG_SMP @@ -1307,7 +1340,7 @@ unreg: static int __init init_tsc_clocksource(void) { - if (!boot_cpu_has(X86_FEATURE_TSC) || tsc_disabled > 0 || !tsc_khz) + if (!boot_cpu_has(X86_FEATURE_TSC) || !tsc_khz) return 0; if (tsc_unstable) @@ -1341,40 +1374,22 @@ unreg: */ device_initcall(init_tsc_clocksource); -void __init tsc_early_delay_calibrate(void) +static bool __init determine_cpu_tsc_frequencies(bool early) { - unsigned long lpj; - - if (!boot_cpu_has(X86_FEATURE_TSC)) - return; - - cpu_khz = x86_platform.calibrate_cpu(); - tsc_khz = x86_platform.calibrate_tsc(); - - tsc_khz = tsc_khz ? 
: cpu_khz; - if (!tsc_khz) - return; - - lpj = tsc_khz * 1000; - do_div(lpj, HZ); - loops_per_jiffy = lpj; -} - -void __init tsc_init(void) -{ - u64 lpj, cyc; - int cpu; - - if (!boot_cpu_has(X86_FEATURE_TSC)) { - setup_clear_cpu_cap(X86_FEATURE_TSC_DEADLINE_TIMER); - return; + /* Make sure that cpu and tsc are not already calibrated */ + WARN_ON(cpu_khz || tsc_khz); + + if (early) { + cpu_khz = x86_platform.calibrate_cpu(); + tsc_khz = x86_platform.calibrate_tsc(); + } else { + /* We should not be here with non-native cpu calibration */ + WARN_ON(x86_platform.calibrate_cpu != native_calibrate_cpu); + cpu_khz = pit_hpet_ptimer_calibrate_cpu(); } - cpu_khz = x86_platform.calibrate_cpu(); - tsc_khz = x86_platform.calibrate_tsc(); - /* - * Trust non-zero tsc_khz as authorative, + * Trust non-zero tsc_khz as authoritative, * and use it to sanity check cpu_khz, * which will be off if system timer is off. */ @@ -1383,52 +1398,78 @@ void __init tsc_init(void) else if (abs(cpu_khz - tsc_khz) * 10 > tsc_khz) cpu_khz = tsc_khz; - if (!tsc_khz) { - mark_tsc_unstable("could not calculate TSC khz"); - setup_clear_cpu_cap(X86_FEATURE_TSC_DEADLINE_TIMER); - return; - } + if (tsc_khz == 0) + return false; pr_info("Detected %lu.%03lu MHz processor\n", - (unsigned long)cpu_khz / 1000, - (unsigned long)cpu_khz % 1000); + (unsigned long)cpu_khz / KHZ, + (unsigned long)cpu_khz % KHZ); if (cpu_khz != tsc_khz) { pr_info("Detected %lu.%03lu MHz TSC", - (unsigned long)tsc_khz / 1000, - (unsigned long)tsc_khz % 1000); + (unsigned long)tsc_khz / KHZ, + (unsigned long)tsc_khz % KHZ); } + return true; +} + +static unsigned long __init get_loops_per_jiffy(void) +{ + unsigned long lpj = tsc_khz * KHZ; + do_div(lpj, HZ); + return lpj; +} + +static void __init tsc_enable_sched_clock(void) +{ /* Sanitize TSC ADJUST before cyc2ns gets initialized */ tsc_store_and_check_tsc_adjust(true); + cyc2ns_init_boot_cpu(); + static_branch_enable(&__use_tsc); +} + +void __init tsc_early_init(void) +{ + if (!boot_cpu_has(X86_FEATURE_TSC)) + return; + if (!determine_cpu_tsc_frequencies(true)) + return; + loops_per_jiffy = get_loops_per_jiffy(); + tsc_enable_sched_clock(); +} + +void __init tsc_init(void) +{ /* - * Secondary CPUs do not run through tsc_init(), so set up - * all the scale factors for all CPUs, assuming the same - * speed as the bootup CPU. (cpufreq notifiers will fix this - * up if their speed diverges) + * native_calibrate_cpu_early can only calibrate using methods that are + * available early in boot. 
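get_loops_per_jiffy() above relies on do_div(), which divides a u64 in place and returns the remainder. An equivalent stand-alone helper (hypothetical demo_* name):

#include <linux/types.h>
#include <asm/div64.h>

static unsigned long demo_loops_per_jiffy(unsigned long tsc_khz_val,
                                          unsigned int hz)
{
        u64 lpj = (u64)tsc_khz_val * 1000;

        do_div(lpj, hz);        /* lpj /= hz; remainder discarded */
        return (unsigned long)lpj;
}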
*/ - cyc = rdtsc(); - for_each_possible_cpu(cpu) { - cyc2ns_init(cpu); - set_cyc2ns_scale(tsc_khz, cpu, cyc); - } + if (x86_platform.calibrate_cpu == native_calibrate_cpu_early) + x86_platform.calibrate_cpu = native_calibrate_cpu; - if (tsc_disabled > 0) + if (!boot_cpu_has(X86_FEATURE_TSC)) { + setup_clear_cpu_cap(X86_FEATURE_TSC_DEADLINE_TIMER); return; + } - /* now allow native_sched_clock() to use rdtsc */ + if (!tsc_khz) { + /* We failed to determine frequencies earlier, try again */ + if (!determine_cpu_tsc_frequencies(false)) { + mark_tsc_unstable("could not calculate TSC khz"); + setup_clear_cpu_cap(X86_FEATURE_TSC_DEADLINE_TIMER); + return; + } + tsc_enable_sched_clock(); + } - tsc_disabled = 0; - static_branch_enable(&__use_tsc); + cyc2ns_init_secondary_cpus(); if (!no_sched_irq_time) enable_sched_clock_irqtime(); - lpj = ((u64)tsc_khz * 1000); - do_div(lpj, HZ); - lpj_fine = lpj; - + lpj_fine = get_loops_per_jiffy(); use_tsc_delay(); check_system_tsc_reliable(); @@ -1455,7 +1496,7 @@ unsigned long calibrate_delay_is_known(void) int constant_tsc = cpu_has(&cpu_data(cpu), X86_FEATURE_CONSTANT_TSC); const struct cpumask *mask = topology_core_cpumask(cpu); - if (tsc_disabled || !constant_tsc || !mask) + if (!constant_tsc || !mask) return 0; sibling = cpumask_any_but(mask, cpu); diff --git a/arch/x86/kernel/tsc_msr.c b/arch/x86/kernel/tsc_msr.c index 19afdbd7d0a7..27ef714d886c 100644 --- a/arch/x86/kernel/tsc_msr.c +++ b/arch/x86/kernel/tsc_msr.c @@ -1,17 +1,19 @@ +// SPDX-License-Identifier: GPL-2.0 /* - * tsc_msr.c - TSC frequency enumeration via MSR + * TSC frequency enumeration via MSR * - * Copyright (C) 2013 Intel Corporation + * Copyright (C) 2013, 2018 Intel Corporation * Author: Bin Gao - * - * This file is released under the GPLv2. */ #include -#include -#include + #include +#include +#include +#include #include +#include #define MAX_NUM_FREQS 9 @@ -23,44 +25,48 @@ * field msr_plat does. */ struct freq_desc { - u8 x86_family; /* CPU family */ - u8 x86_model; /* model */ u8 msr_plat; /* 1: use MSR_PLATFORM_INFO, 0: MSR_IA32_PERF_STATUS */ u32 freqs[MAX_NUM_FREQS]; }; -static struct freq_desc freq_desc_tables[] = { - /* PNW */ - { 6, 0x27, 0, { 0, 0, 0, 0, 0, 99840, 0, 83200 } }, - /* CLV+ */ - { 6, 0x35, 0, { 0, 133200, 0, 0, 0, 99840, 0, 83200 } }, - /* TNG - Intel Atom processor Z3400 series */ - { 6, 0x4a, 1, { 0, 100000, 133300, 0, 0, 0, 0, 0 } }, - /* VLV2 - Intel Atom processor E3000, Z3600, Z3700 series */ - { 6, 0x37, 1, { 83300, 100000, 133300, 116700, 80000, 0, 0, 0 } }, - /* ANN - Intel Atom processor Z3500 series */ - { 6, 0x5a, 1, { 83300, 100000, 133300, 100000, 0, 0, 0, 0 } }, - /* AMT - Intel Atom processor X7-Z8000 and X5-Z8000 series */ - { 6, 0x4c, 1, { 83300, 100000, 133300, 116700, - 80000, 93300, 90000, 88900, 87500 } }, +/* + * Penwell and Clovertrail use spread spectrum clock, + * so the freq number is not exactly the same as reported + * by MSR based on SDM. 
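The tsc_msr.c rewrite below keys frequency descriptors off x86_match_cpu(), with the descriptor hung on ->driver_data. A condensed sketch of one entry and its lookup (hypothetical demo_* names; freq_desc_byt is the Bay Trail descriptor defined below):

#include <linux/mod_devicetable.h>
#include <asm/cpu_device_id.h>
#include <asm/processor.h>

static const struct x86_cpu_id demo_ids[] = {
        { .vendor = X86_VENDOR_INTEL, .family = 6, .model = 0x37,
          .feature = X86_FEATURE_ANY,
          .driver_data = (kernel_ulong_t)&freq_desc_byt },
        {}      /* table must be NULL-terminated */
};

static const struct freq_desc *demo_lookup_freq_desc(void)
{
        const struct x86_cpu_id *id = x86_match_cpu(demo_ids);

        return id ? (const struct freq_desc *)id->driver_data : NULL;
}

The INTEL_CPU_FAM6() entries in the hunk expand to much the same initializers.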
+ */ +static const struct freq_desc freq_desc_pnw = { + 0, { 0, 0, 0, 0, 0, 99840, 0, 83200 } }; -static int match_cpu(u8 family, u8 model) -{ - int i; +static const struct freq_desc freq_desc_clv = { + 0, { 0, 133200, 0, 0, 0, 99840, 0, 83200 } +}; - for (i = 0; i < ARRAY_SIZE(freq_desc_tables); i++) { - if ((family == freq_desc_tables[i].x86_family) && - (model == freq_desc_tables[i].x86_model)) - return i; - } +static const struct freq_desc freq_desc_byt = { + 1, { 83300, 100000, 133300, 116700, 80000, 0, 0, 0 } +}; - return -1; -} +static const struct freq_desc freq_desc_cht = { + 1, { 83300, 100000, 133300, 116700, 80000, 93300, 90000, 88900, 87500 } +}; -/* Map CPU reference clock freq ID(0-7) to CPU reference clock freq(KHz) */ -#define id_to_freq(cpu_index, freq_id) \ - (freq_desc_tables[cpu_index].freqs[freq_id]) +static const struct freq_desc freq_desc_tng = { + 1, { 0, 100000, 133300, 0, 0, 0, 0, 0 } +}; + +static const struct freq_desc freq_desc_ann = { + 1, { 83300, 100000, 133300, 100000, 0, 0, 0, 0 } +}; + +static const struct x86_cpu_id tsc_msr_cpu_ids[] = { + INTEL_CPU_FAM6(ATOM_PENWELL, freq_desc_pnw), + INTEL_CPU_FAM6(ATOM_CLOVERVIEW, freq_desc_clv), + INTEL_CPU_FAM6(ATOM_SILVERMONT1, freq_desc_byt), + INTEL_CPU_FAM6(ATOM_AIRMONT, freq_desc_cht), + INTEL_CPU_FAM6(ATOM_MERRIFIELD, freq_desc_tng), + INTEL_CPU_FAM6(ATOM_MOOREFIELD, freq_desc_ann), + {} +}; /* * MSR-based CPU/TSC frequency discovery for certain CPUs. @@ -70,18 +76,17 @@ static int match_cpu(u8 family, u8 model) */ unsigned long cpu_khz_from_msr(void) { - u32 lo, hi, ratio, freq_id, freq; + u32 lo, hi, ratio, freq; + const struct freq_desc *freq_desc; + const struct x86_cpu_id *id; unsigned long res; - int cpu_index; - - if (boot_cpu_data.x86_vendor != X86_VENDOR_INTEL) - return 0; - cpu_index = match_cpu(boot_cpu_data.x86, boot_cpu_data.x86_model); - if (cpu_index < 0) + id = x86_match_cpu(tsc_msr_cpu_ids); + if (!id) return 0; - if (freq_desc_tables[cpu_index].msr_plat) { + freq_desc = (struct freq_desc *)id->driver_data; + if (freq_desc->msr_plat) { rdmsr(MSR_PLATFORM_INFO, lo, hi); ratio = (lo >> 8) & 0xff; } else { @@ -91,8 +96,9 @@ unsigned long cpu_khz_from_msr(void) /* Get FSB FREQ ID */ rdmsr(MSR_FSB_FREQ, lo, hi); - freq_id = lo & 0x7; - freq = id_to_freq(cpu_index, freq_id); + + /* Map CPU reference clock freq ID(0-7) to CPU reference clock freq(KHz) */ + freq = freq_desc->freqs[lo & 0x7]; /* TSC frequency = maximum resolved freq * maximum resolved bus ratio */ res = freq * ratio; diff --git a/arch/x86/kernel/unwind_orc.c b/arch/x86/kernel/unwind_orc.c index feb28fee6cea..26038eacf74a 100644 --- a/arch/x86/kernel/unwind_orc.c +++ b/arch/x86/kernel/unwind_orc.c @@ -198,7 +198,7 @@ static int orc_sort_cmp(const void *_a, const void *_b) * whitelisted .o files which didn't get objtool generation. */ orc_a = cur_orc_table + (a - cur_orc_ip_table); - return orc_a->sp_reg == ORC_REG_UNDEFINED ? -1 : 1; + return orc_a->sp_reg == ORC_REG_UNDEFINED && !orc_a->end ? -1 : 1; } #ifdef CONFIG_MODULES @@ -352,7 +352,7 @@ static bool deref_stack_iret_regs(struct unwind_state *state, unsigned long addr bool unwind_next_frame(struct unwind_state *state) { - unsigned long ip_p, sp, orig_ip, prev_sp = state->sp; + unsigned long ip_p, sp, orig_ip = state->ip, prev_sp = state->sp; enum stack_type prev_type = state->stack_info.type; struct orc_entry *orc; bool indirect = false; @@ -363,9 +363,9 @@ bool unwind_next_frame(struct unwind_state *state) /* Don't let modules unload while we're reading their ORC data. 
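On the consumer side, the err/the_end split introduced below lets callers separate corrupt ORC data from a clean end of stack. A minimal walker under the same API (hypothetical demo_* name):

#include <linux/sched.h>
#include <linux/errno.h>
#include <asm/unwind.h>

static int demo_walk_stack(struct task_struct *task)
{
        struct unwind_state state;

        for (unwind_start(&state, task, NULL, NULL);
             !unwind_done(&state); unwind_next_frame(&state)) {
                unsigned long addr = unwind_get_return_address(&state);

                if (!addr)
                        return -EINVAL; /* address not in kernel text */
        }

        /* unwind_error() is set via the err label, never via the_end. */
        return unwind_error(&state) ? -EINVAL : 0;
}

__save_stack_trace_reliable() earlier in this patch follows the same shape, additionally stopping the loop early once unwind_error() becomes true mid-walk.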
*/ preempt_disable(); - /* Have we reached the end? */ + /* End-of-stack check for user tasks: */ if (state->regs && user_mode(state->regs)) - goto done; + goto the_end; /* * Find the orc_entry associated with the text address. @@ -374,9 +374,16 @@ bool unwind_next_frame(struct unwind_state *state) * calls and calls to noreturn functions. */ orc = orc_find(state->signal ? state->ip : state->ip - 1); - if (!orc || orc->sp_reg == ORC_REG_UNDEFINED) - goto done; - orig_ip = state->ip; + if (!orc) + goto err; + + /* End-of-stack check for kernel threads: */ + if (orc->sp_reg == ORC_REG_UNDEFINED) { + if (!orc->end) + goto err; + + goto the_end; + } /* Find the previous frame's stack: */ switch (orc->sp_reg) { @@ -402,7 +409,7 @@ bool unwind_next_frame(struct unwind_state *state) if (!state->regs || !state->full_regs) { orc_warn("missing regs for base reg R10 at ip %pB\n", (void *)state->ip); - goto done; + goto err; } sp = state->regs->r10; break; @@ -411,7 +418,7 @@ bool unwind_next_frame(struct unwind_state *state) if (!state->regs || !state->full_regs) { orc_warn("missing regs for base reg R13 at ip %pB\n", (void *)state->ip); - goto done; + goto err; } sp = state->regs->r13; break; @@ -420,7 +427,7 @@ bool unwind_next_frame(struct unwind_state *state) if (!state->regs || !state->full_regs) { orc_warn("missing regs for base reg DI at ip %pB\n", (void *)state->ip); - goto done; + goto err; } sp = state->regs->di; break; @@ -429,7 +436,7 @@ bool unwind_next_frame(struct unwind_state *state) if (!state->regs || !state->full_regs) { orc_warn("missing regs for base reg DX at ip %pB\n", (void *)state->ip); - goto done; + goto err; } sp = state->regs->dx; break; @@ -437,12 +444,12 @@ bool unwind_next_frame(struct unwind_state *state) default: orc_warn("unknown SP base reg %d for ip %pB\n", orc->sp_reg, (void *)state->ip); - goto done; + goto err; } if (indirect) { if (!deref_stack_reg(state, sp, &sp)) - goto done; + goto err; } /* Find IP, SP and possibly regs: */ @@ -451,7 +458,7 @@ bool unwind_next_frame(struct unwind_state *state) ip_p = sp - sizeof(long); if (!deref_stack_reg(state, ip_p, &state->ip)) - goto done; + goto err; state->ip = ftrace_graph_ret_addr(state->task, &state->graph_idx, state->ip, (void *)ip_p); @@ -465,7 +472,7 @@ bool unwind_next_frame(struct unwind_state *state) if (!deref_stack_regs(state, sp, &state->ip, &state->sp)) { orc_warn("can't dereference registers at %p for ip %pB\n", (void *)sp, (void *)orig_ip); - goto done; + goto err; } state->regs = (struct pt_regs *)sp; @@ -477,7 +484,7 @@ bool unwind_next_frame(struct unwind_state *state) if (!deref_stack_iret_regs(state, sp, &state->ip, &state->sp)) { orc_warn("can't dereference iret registers at %p for ip %pB\n", (void *)sp, (void *)orig_ip); - goto done; + goto err; } state->regs = (void *)sp - IRET_FRAME_OFFSET; @@ -500,18 +507,18 @@ bool unwind_next_frame(struct unwind_state *state) case ORC_REG_PREV_SP: if (!deref_stack_reg(state, sp + orc->bp_offset, &state->bp)) - goto done; + goto err; break; case ORC_REG_BP: if (!deref_stack_reg(state, state->bp + orc->bp_offset, &state->bp)) - goto done; + goto err; break; default: orc_warn("unknown BP base reg %d for ip %pB\n", orc->bp_reg, (void *)orig_ip); - goto done; + goto err; } /* Prevent a recursive loop due to bad ORC data: */ @@ -520,13 +527,16 @@ bool unwind_next_frame(struct unwind_state *state) state->sp <= prev_sp) { orc_warn("stack going in the wrong direction? 
ip=%pB\n", (void *)orig_ip); - goto done; + goto err; } preempt_enable(); return true; -done: +err: + state->error = true; + +the_end: preempt_enable(); state->stack_info.type = STACK_TYPE_UNKNOWN; return false; diff --git a/arch/x86/kernel/uprobes.c b/arch/x86/kernel/uprobes.c index 58d8d800875d..deb576b23b7c 100644 --- a/arch/x86/kernel/uprobes.c +++ b/arch/x86/kernel/uprobes.c @@ -293,7 +293,7 @@ static int uprobe_init_insn(struct arch_uprobe *auprobe, struct insn *insn, bool insn_init(insn, auprobe->insn, sizeof(auprobe->insn), x86_64); /* has the side-effect of processing the entire instruction */ insn_get_length(insn); - if (WARN_ON_ONCE(!insn_complete(insn))) + if (!insn_complete(insn)) return -ENOEXEC; if (is_prefix_bad(insn)) diff --git a/arch/x86/kernel/vm86_32.c b/arch/x86/kernel/vm86_32.c index 9d0b5af7db91..1c03e4aa6474 100644 --- a/arch/x86/kernel/vm86_32.c +++ b/arch/x86/kernel/vm86_32.c @@ -149,7 +149,7 @@ void save_v86_state(struct kernel_vm86_regs *regs, int retval) preempt_disable(); tsk->thread.sp0 = vm86->saved_sp0; tsk->thread.sysenter_cs = __KERNEL_CS; - update_sp0(tsk); + update_task_stack(tsk); refresh_sysenter_cs(&tsk->thread); vm86->saved_sp0 = 0; preempt_enable(); @@ -374,7 +374,7 @@ static long do_sys_vm86(struct vm86plus_struct __user *user_vm86, bool plus) refresh_sysenter_cs(&tsk->thread); } - update_sp0(tsk); + update_task_stack(tsk); preempt_enable(); if (vm86->flags & VM86_SCREEN_BITMAP) diff --git a/arch/x86/kernel/vmlinux.lds.S b/arch/x86/kernel/vmlinux.lds.S index 5e1458f609a1..8bde0a419f86 100644 --- a/arch/x86/kernel/vmlinux.lds.S +++ b/arch/x86/kernel/vmlinux.lds.S @@ -55,19 +55,22 @@ jiffies_64 = jiffies; * so we can enable protection checks as well as retain 2MB large page * mappings for kernel text. */ -#define X64_ALIGN_RODATA_BEGIN . = ALIGN(HPAGE_SIZE); +#define X86_ALIGN_RODATA_BEGIN . = ALIGN(HPAGE_SIZE); -#define X64_ALIGN_RODATA_END \ +#define X86_ALIGN_RODATA_END \ . = ALIGN(HPAGE_SIZE); \ - __end_rodata_hpage_align = .; + __end_rodata_hpage_align = .; \ + __end_rodata_aligned = .; #define ALIGN_ENTRY_TEXT_BEGIN . = ALIGN(PMD_SIZE); #define ALIGN_ENTRY_TEXT_END . = ALIGN(PMD_SIZE); #else -#define X64_ALIGN_RODATA_BEGIN -#define X64_ALIGN_RODATA_END +#define X86_ALIGN_RODATA_BEGIN +#define X86_ALIGN_RODATA_END \ + . = ALIGN(PAGE_SIZE); \ + __end_rodata_aligned = .; #define ALIGN_ENTRY_TEXT_BEGIN #define ALIGN_ENTRY_TEXT_END @@ -141,9 +144,9 @@ SECTIONS /* .text should occupy whole number of pages */ . 
= ALIGN(PAGE_SIZE); - X64_ALIGN_RODATA_BEGIN + X86_ALIGN_RODATA_BEGIN RO_DATA(PAGE_SIZE) - X64_ALIGN_RODATA_END + X86_ALIGN_RODATA_END /* Data */ .data : AT(ADDR(.data) - LOAD_OFFSET) { diff --git a/arch/x86/kernel/x86_init.c b/arch/x86/kernel/x86_init.c index 3ab867603e81..2792b5573818 100644 --- a/arch/x86/kernel/x86_init.c +++ b/arch/x86/kernel/x86_init.c @@ -109,7 +109,7 @@ struct x86_cpuinit_ops x86_cpuinit = { static void default_nmi_init(void) { }; struct x86_platform_ops x86_platform __ro_after_init = { - .calibrate_cpu = native_calibrate_cpu, + .calibrate_cpu = native_calibrate_cpu_early, .calibrate_tsc = native_calibrate_tsc, .get_wallclock = mach_get_cmos_time, .set_wallclock = mach_set_rtc_mmss, diff --git a/arch/x86/kvm/Kconfig b/arch/x86/kvm/Kconfig index 92fd433c50b9..1bbec387d289 100644 --- a/arch/x86/kvm/Kconfig +++ b/arch/x86/kvm/Kconfig @@ -85,7 +85,7 @@ config KVM_AMD_SEV def_bool y bool "AMD Secure Encrypted Virtualization (SEV) support" depends on KVM_AMD && X86_64 - depends on CRYPTO_DEV_CCP && CRYPTO_DEV_CCP_DD && CRYPTO_DEV_SP_PSP + depends on CRYPTO_DEV_SP_PSP && !(KVM_AMD=y && CRYPTO_DEV_CCP_DD=m) ---help--- Provides support for launching Encrypted VMs on AMD processors. diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c index b5cd8465d44f..d536d457517b 100644 --- a/arch/x86/kvm/lapic.c +++ b/arch/x86/kvm/lapic.c @@ -1379,7 +1379,7 @@ static void apic_timer_expired(struct kvm_lapic *apic) * using swait_active() is safe. */ if (swait_active(q)) - swake_up(q); + swake_up_one(q); if (apic_lvtt_tscdeadline(apic)) ktimer->expired_tscdeadline = ktimer->tscdeadline; diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c index d594690d8b95..6b8f11521c41 100644 --- a/arch/x86/kvm/mmu.c +++ b/arch/x86/kvm/mmu.c @@ -890,7 +890,7 @@ static int mmu_topup_memory_cache_page(struct kvm_mmu_memory_cache *cache, if (cache->nobjs >= min) return 0; while (cache->nobjs < ARRAY_SIZE(cache->objects)) { - page = (void *)__get_free_page(GFP_KERNEL); + page = (void *)__get_free_page(GFP_KERNEL_ACCOUNT); if (!page) return -ENOMEM; cache->objects[cache->nobjs++] = page; diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c index d0dd35d582da..5d8e317c2b04 100644 --- a/arch/x86/kvm/vmx.c +++ b/arch/x86/kvm/vmx.c @@ -1705,6 +1705,17 @@ static inline bool nested_cpu_has_vmwrite_any_field(struct kvm_vcpu *vcpu) MSR_IA32_VMX_MISC_VMWRITE_SHADOW_RO_FIELDS; } +static inline bool nested_cpu_has_zero_length_injection(struct kvm_vcpu *vcpu) +{ + return to_vmx(vcpu)->nested.msrs.misc_low & VMX_MISC_ZERO_LEN_INS; +} + +static inline bool nested_cpu_supports_monitor_trap_flag(struct kvm_vcpu *vcpu) +{ + return to_vmx(vcpu)->nested.msrs.procbased_ctls_high & + CPU_BASED_MONITOR_TRAP_FLAG; +} + static inline bool nested_cpu_has(struct vmcs12 *vmcs12, u32 bit) { return vmcs12->cpu_based_vm_exec_control & bit; @@ -2560,6 +2571,7 @@ static void vmx_save_host_state(struct kvm_vcpu *vcpu) struct vcpu_vmx *vmx = to_vmx(vcpu); #ifdef CONFIG_X86_64 int cpu = raw_smp_processor_id(); + unsigned long fs_base, kernel_gs_base; #endif int i; @@ -2575,12 +2587,20 @@ static void vmx_save_host_state(struct kvm_vcpu *vcpu) vmx->host_state.gs_ldt_reload_needed = vmx->host_state.ldt_sel; #ifdef CONFIG_X86_64 - save_fsgs_for_kvm(); - vmx->host_state.fs_sel = current->thread.fsindex; - vmx->host_state.gs_sel = current->thread.gsindex; -#else - savesegment(fs, vmx->host_state.fs_sel); - savesegment(gs, vmx->host_state.gs_sel); + if (likely(is_64bit_mm(current->mm))) { + save_fsgs_for_kvm(); + vmx->host_state.fs_sel = 
current->thread.fsindex;
+		vmx->host_state.gs_sel = current->thread.gsindex;
+		fs_base = current->thread.fsbase;
+		kernel_gs_base = current->thread.gsbase;
+	} else {
+#endif
+		savesegment(fs, vmx->host_state.fs_sel);
+		savesegment(gs, vmx->host_state.gs_sel);
+#ifdef CONFIG_X86_64
+		fs_base = read_msr(MSR_FS_BASE);
+		kernel_gs_base = read_msr(MSR_KERNEL_GS_BASE);
+	}
 #endif
 	if (!(vmx->host_state.fs_sel & 7)) {
 		vmcs_write16(HOST_FS_SELECTOR, vmx->host_state.fs_sel);
@@ -2600,10 +2620,10 @@ static void vmx_save_host_state(struct kvm_vcpu *vcpu)
 	savesegment(ds, vmx->host_state.ds_sel);
 	savesegment(es, vmx->host_state.es_sel);
 
-	vmcs_writel(HOST_FS_BASE, current->thread.fsbase);
+	vmcs_writel(HOST_FS_BASE, fs_base);
 	vmcs_writel(HOST_GS_BASE, cpu_kernelmode_gs_base(cpu));
 
-	vmx->msr_host_kernel_gs_base = current->thread.gsbase;
+	vmx->msr_host_kernel_gs_base = kernel_gs_base;
 	if (is_long_mode(&vmx->vcpu))
 		wrmsrl(MSR_KERNEL_GS_BASE, vmx->msr_guest_kernel_gs_base);
 #else
@@ -4311,11 +4331,7 @@ static __init int setup_vmcs_config(struct vmcs_config *vmcs_conf)
 	vmcs_conf->order = get_order(vmcs_conf->size);
 	vmcs_conf->basic_cap = vmx_msr_high & ~0x1fff;
 
-	/* KVM supports Enlightened VMCS v1 only */
-	if (static_branch_unlikely(&enable_evmcs))
-		vmcs_conf->revision_id = KVM_EVMCS_VERSION;
-	else
-		vmcs_conf->revision_id = vmx_msr_low;
+	vmcs_conf->revision_id = vmx_msr_low;
 
 	vmcs_conf->pin_based_exec_ctrl = _pin_based_exec_control;
 	vmcs_conf->cpu_based_exec_ctrl = _cpu_based_exec_control;
@@ -4385,7 +4401,13 @@ static struct vmcs *alloc_vmcs_cpu(int cpu)
 		return NULL;
 	vmcs = page_address(pages);
 	memset(vmcs, 0, vmcs_config.size);
-	vmcs->revision_id = vmcs_config.revision_id; /* vmcs revision id */
+
+	/* KVM supports Enlightened VMCS v1 only */
+	if (static_branch_unlikely(&enable_evmcs))
+		vmcs->revision_id = KVM_EVMCS_VERSION;
+	else
+		vmcs->revision_id = vmcs_config.revision_id;
+
 	return vmcs;
 }
 
@@ -4429,16 +4451,14 @@ static int alloc_loaded_vmcs(struct loaded_vmcs *loaded_vmcs)
 			goto out_vmcs;
 		memset(loaded_vmcs->msr_bitmap, 0xff, PAGE_SIZE);
 
-#if IS_ENABLED(CONFIG_HYPERV)
-		if (static_branch_unlikely(&enable_evmcs) &&
+		if (IS_ENABLED(CONFIG_HYPERV) &&
+		    static_branch_unlikely(&enable_evmcs) &&
 		    (ms_hyperv.nested_features & HV_X64_NESTED_MSR_BITMAP)) {
 			struct hv_enlightened_vmcs *evmcs =
 				(struct hv_enlightened_vmcs *)loaded_vmcs->vmcs;
 
 			evmcs->hv_enlightenments_control.msr_bitmap = 1;
 		}
-#endif
-
 	}
 
 	return 0;
@@ -4555,6 +4575,19 @@ static __init int alloc_kvm_area(void)
 			return -ENOMEM;
 		}
 
+		/*
+		 * When eVMCS is enabled, alloc_vmcs_cpu() sets
+		 * vmcs->revision_id to KVM_EVMCS_VERSION instead of
+		 * revision_id reported by MSR_IA32_VMX_BASIC.
+		 *
+		 * However, even though not explicitly documented by
+		 * TLFS, VMXArea passed as VMXON argument should
+		 * still be marked with revision_id reported by
+		 * physical CPU.
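The distinction the comment draws, eVMCS version for ordinary VMCS pages versus the hardware revision id for the VMXON region, reduces to one small decision. A standalone sketch with stand-in names (this is not KVM's API, only the shape of the logic above):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define KVM_EVMCS_VERSION 1

struct vmcs_hdr { uint32_t revision_id; };

/* Ordinary VMCS pages carry the eVMCS version when enlightened VMCS is
 * in use; the per-cpu VMXON region must keep the id from MSR_IA32_VMX_BASIC. */
static void set_revision(struct vmcs_hdr *vmcs, bool enable_evmcs,
			 bool is_vmxon_region, uint32_t basic_msr_rev)
{
	if (enable_evmcs && !is_vmxon_region)
		vmcs->revision_id = KVM_EVMCS_VERSION;
	else
		vmcs->revision_id = basic_msr_rev;
}

int main(void)
{
	struct vmcs_hdr vmcs, vmxon;

	set_revision(&vmcs, true, false, 4);	/* guest VMCS under eVMCS */
	set_revision(&vmxon, true, true, 4);	/* VMXON region */
	printf("vmcs=%u vmxon=%u\n", vmcs.revision_id, vmxon.revision_id);
	return 0;
}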
+ */ + if (static_branch_unlikely(&enable_evmcs)) + vmcs->revision_id = vmcs_config.revision_id; + per_cpu(vmxarea, cpu) = vmcs; } return 0; @@ -7860,6 +7893,8 @@ static int enter_vmx_operation(struct kvm_vcpu *vcpu) HRTIMER_MODE_REL_PINNED); vmx->nested.preemption_timer.function = vmx_preemption_timer_fn; + vmx->nested.vpid02 = allocate_vpid(); + vmx->nested.vmxon = true; return 0; @@ -8447,21 +8482,20 @@ static int handle_vmptrld(struct kvm_vcpu *vcpu) /* Emulate the VMPTRST instruction */ static int handle_vmptrst(struct kvm_vcpu *vcpu) { - unsigned long exit_qualification = vmcs_readl(EXIT_QUALIFICATION); - u32 vmx_instruction_info = vmcs_read32(VMX_INSTRUCTION_INFO); - gva_t vmcs_gva; + unsigned long exit_qual = vmcs_readl(EXIT_QUALIFICATION); + u32 instr_info = vmcs_read32(VMX_INSTRUCTION_INFO); + gpa_t current_vmptr = to_vmx(vcpu)->nested.current_vmptr; struct x86_exception e; + gva_t gva; if (!nested_vmx_check_permission(vcpu)) return 1; - if (get_vmx_mem_address(vcpu, exit_qualification, - vmx_instruction_info, true, &vmcs_gva)) + if (get_vmx_mem_address(vcpu, exit_qual, instr_info, true, &gva)) return 1; /* *_system ok, nested_vmx_check_permission has verified cpl=0 */ - if (kvm_write_guest_virt_system(vcpu, vmcs_gva, - (void *)&to_vmx(vcpu)->nested.current_vmptr, - sizeof(u64), &e)) { + if (kvm_write_guest_virt_system(vcpu, gva, (void *)¤t_vmptr, + sizeof(gpa_t), &e)) { kvm_inject_page_fault(vcpu, &e); return 1; } @@ -10337,11 +10371,9 @@ static struct kvm_vcpu *vmx_create_vcpu(struct kvm *kvm, unsigned int id) goto free_vmcs; } - if (nested) { + if (nested) nested_vmx_setup_ctls_msrs(&vmx->nested.msrs, kvm_vcpu_apicv_active(&vmx->vcpu)); - vmx->nested.vpid02 = allocate_vpid(); - } vmx->nested.posted_intr_nv = -1; vmx->nested.current_vmptr = -1ull; @@ -10358,7 +10390,6 @@ static struct kvm_vcpu *vmx_create_vcpu(struct kvm *kvm, unsigned int id) return &vmx->vcpu; free_vmcs: - free_vpid(vmx->nested.vpid02); free_loaded_vmcs(vmx->loaded_vmcs); free_msrs: kfree(vmx->guest_msrs); @@ -11622,6 +11653,62 @@ static int check_vmentry_prereqs(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12) !nested_cr3_valid(vcpu, vmcs12->host_cr3)) return VMXERR_ENTRY_INVALID_HOST_STATE_FIELD; + /* + * From the Intel SDM, volume 3: + * Fields relevant to VM-entry event injection must be set properly. + * These fields are the VM-entry interruption-information field, the + * VM-entry exception error code, and the VM-entry instruction length. 
+ */ + if (vmcs12->vm_entry_intr_info_field & INTR_INFO_VALID_MASK) { + u32 intr_info = vmcs12->vm_entry_intr_info_field; + u8 vector = intr_info & INTR_INFO_VECTOR_MASK; + u32 intr_type = intr_info & INTR_INFO_INTR_TYPE_MASK; + bool has_error_code = intr_info & INTR_INFO_DELIVER_CODE_MASK; + bool should_have_error_code; + bool urg = nested_cpu_has2(vmcs12, + SECONDARY_EXEC_UNRESTRICTED_GUEST); + bool prot_mode = !urg || vmcs12->guest_cr0 & X86_CR0_PE; + + /* VM-entry interruption-info field: interruption type */ + if (intr_type == INTR_TYPE_RESERVED || + (intr_type == INTR_TYPE_OTHER_EVENT && + !nested_cpu_supports_monitor_trap_flag(vcpu))) + return VMXERR_ENTRY_INVALID_CONTROL_FIELD; + + /* VM-entry interruption-info field: vector */ + if ((intr_type == INTR_TYPE_NMI_INTR && vector != NMI_VECTOR) || + (intr_type == INTR_TYPE_HARD_EXCEPTION && vector > 31) || + (intr_type == INTR_TYPE_OTHER_EVENT && vector != 0)) + return VMXERR_ENTRY_INVALID_CONTROL_FIELD; + + /* VM-entry interruption-info field: deliver error code */ + should_have_error_code = + intr_type == INTR_TYPE_HARD_EXCEPTION && prot_mode && + x86_exception_has_error_code(vector); + if (has_error_code != should_have_error_code) + return VMXERR_ENTRY_INVALID_CONTROL_FIELD; + + /* VM-entry exception error code */ + if (has_error_code && + vmcs12->vm_entry_exception_error_code & GENMASK(31, 15)) + return VMXERR_ENTRY_INVALID_CONTROL_FIELD; + + /* VM-entry interruption-info field: reserved bits */ + if (intr_info & INTR_INFO_RESVD_BITS_MASK) + return VMXERR_ENTRY_INVALID_CONTROL_FIELD; + + /* VM-entry instruction length */ + switch (intr_type) { + case INTR_TYPE_SOFT_EXCEPTION: + case INTR_TYPE_SOFT_INTR: + case INTR_TYPE_PRIV_SW_EXCEPTION: + if ((vmcs12->vm_entry_instruction_len > 15) || + (vmcs12->vm_entry_instruction_len == 0 && + !nested_cpu_has_zero_length_injection(vcpu))) + return VMXERR_ENTRY_INVALID_CONTROL_FIELD; + } + } + return 0; } @@ -11688,7 +11775,6 @@ static int enter_vmx_non_root_mode(struct kvm_vcpu *vcpu) { struct vcpu_vmx *vmx = to_vmx(vcpu); struct vmcs12 *vmcs12 = get_vmcs12(vcpu); - u32 msr_entry_idx; u32 exit_qual; int r; @@ -11710,10 +11796,10 @@ static int enter_vmx_non_root_mode(struct kvm_vcpu *vcpu) nested_get_vmcs12_pages(vcpu, vmcs12); r = EXIT_REASON_MSR_LOAD_FAIL; - msr_entry_idx = nested_vmx_load_msr(vcpu, - vmcs12->vm_entry_msr_load_addr, - vmcs12->vm_entry_msr_load_count); - if (msr_entry_idx) + exit_qual = nested_vmx_load_msr(vcpu, + vmcs12->vm_entry_msr_load_addr, + vmcs12->vm_entry_msr_load_count); + if (exit_qual) goto fail; /* diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index 6bcecc325e7e..2b812b3c5088 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -1097,6 +1097,7 @@ static u32 msr_based_features[] = { MSR_F10H_DECFG, MSR_IA32_UCODE_REV, + MSR_IA32_ARCH_CAPABILITIES, }; static unsigned int num_msr_based_features; @@ -1105,7 +1106,8 @@ static int kvm_get_msr_feature(struct kvm_msr_entry *msr) { switch (msr->index) { case MSR_IA32_UCODE_REV: - rdmsrl(msr->index, msr->data); + case MSR_IA32_ARCH_CAPABILITIES: + rdmsrl_safe(msr->index, &msr->data); break; default: if (kvm_x86_ops->get_msr_feature(msr)) @@ -8567,7 +8569,7 @@ int kvm_arch_hardware_setup(void) /* * Make sure the user can only configure tsc_khz values that * fit into a signed integer. - * A min value is not calculated needed because it will always + * A min value is not calculated because it will always * be 1 on all machines. 
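x86_exception_has_error_code(), added in x86.h below, encodes the hardware rule as a bitmask: only #DF(8), #TS(10), #NP(11), #SS(12), #GP(13), #PF(14) and #AC(17) push an error code. A standalone rendering of that test:

#include <stdio.h>
#include <stdint.h>

#define BIT(n) (1u << (n))

int main(void)
{
	/* DF=8, TS=10, NP=11, SS=12, GP=13, PF=14, AC=17 */
	uint32_t has_error_code = BIT(8) | BIT(10) | BIT(11) | BIT(12) |
				  BIT(13) | BIT(14) | BIT(17);

	for (unsigned v = 0; v < 32; v++)
		if ((1u << v) & has_error_code)
			printf("vector %u pushes an error code\n", v);
	return 0;
}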
*/ u64 max = min(0x7fffffffULL, diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h index 331993c49dae..257f27620bc2 100644 --- a/arch/x86/kvm/x86.h +++ b/arch/x86/kvm/x86.h @@ -110,6 +110,15 @@ static inline bool is_la57_mode(struct kvm_vcpu *vcpu) #endif } +static inline bool x86_exception_has_error_code(unsigned int vector) +{ + static u32 exception_has_error_code = BIT(DF_VECTOR) | BIT(TS_VECTOR) | + BIT(NP_VECTOR) | BIT(SS_VECTOR) | BIT(GP_VECTOR) | + BIT(PF_VECTOR) | BIT(AC_VECTOR); + + return (1U << vector) & exception_has_error_code; +} + static inline bool mmu_is_nested(struct kvm_vcpu *vcpu) { return vcpu->arch.walk_mmu == &vcpu->arch.nested_mmu; diff --git a/arch/x86/lib/memcpy_64.S b/arch/x86/lib/memcpy_64.S index 298ef1479240..3b24dc05251c 100644 --- a/arch/x86/lib/memcpy_64.S +++ b/arch/x86/lib/memcpy_64.S @@ -256,7 +256,7 @@ ENTRY(__memcpy_mcsafe) /* Copy successful. Return zero */ .L_done_memcpy_trap: - xorq %rax, %rax + xorl %eax, %eax ret ENDPROC(__memcpy_mcsafe) EXPORT_SYMBOL_GPL(__memcpy_mcsafe) diff --git a/arch/x86/mm/dump_pagetables.c b/arch/x86/mm/dump_pagetables.c index 2f3c9196b834..a12afff146d1 100644 --- a/arch/x86/mm/dump_pagetables.c +++ b/arch/x86/mm/dump_pagetables.c @@ -111,6 +111,8 @@ static struct addr_marker address_markers[] = { [END_OF_SPACE_NR] = { -1, NULL } }; +#define INIT_PGD ((pgd_t *) &init_top_pgt) + #else /* CONFIG_X86_64 */ enum address_markers_idx { @@ -120,6 +122,9 @@ enum address_markers_idx { VMALLOC_END_NR, #ifdef CONFIG_HIGHMEM PKMAP_BASE_NR, +#endif +#ifdef CONFIG_MODIFY_LDT_SYSCALL + LDT_NR, #endif CPU_ENTRY_AREA_NR, FIXADDR_START_NR, @@ -133,12 +138,17 @@ static struct addr_marker address_markers[] = { [VMALLOC_END_NR] = { 0UL, "vmalloc() End" }, #ifdef CONFIG_HIGHMEM [PKMAP_BASE_NR] = { 0UL, "Persistent kmap() Area" }, +#endif +#ifdef CONFIG_MODIFY_LDT_SYSCALL + [LDT_NR] = { 0UL, "LDT remap" }, #endif [CPU_ENTRY_AREA_NR] = { 0UL, "CPU entry area" }, [FIXADDR_START_NR] = { 0UL, "Fixmap area" }, [END_OF_SPACE_NR] = { -1, NULL } }; +#define INIT_PGD (swapper_pg_dir) + #endif /* !CONFIG_X86_64 */ /* Multipliers for offsets within the PTEs */ @@ -496,11 +506,7 @@ static inline bool is_hypervisor_range(int idx) static void ptdump_walk_pgd_level_core(struct seq_file *m, pgd_t *pgd, bool checkwx, bool dmesg) { -#ifdef CONFIG_X86_64 - pgd_t *start = (pgd_t *) &init_top_pgt; -#else - pgd_t *start = swapper_pg_dir; -#endif + pgd_t *start = INIT_PGD; pgprotval_t prot, eff; int i; struct pg_state st = {}; @@ -563,12 +569,13 @@ void ptdump_walk_pgd_level_debugfs(struct seq_file *m, pgd_t *pgd, bool user) } EXPORT_SYMBOL_GPL(ptdump_walk_pgd_level_debugfs); -static void ptdump_walk_user_pgd_level_checkwx(void) +void ptdump_walk_user_pgd_level_checkwx(void) { #ifdef CONFIG_PAGE_TABLE_ISOLATION - pgd_t *pgd = (pgd_t *) &init_top_pgt; + pgd_t *pgd = INIT_PGD; - if (!static_cpu_has(X86_FEATURE_PTI)) + if (!(__supported_pte_mask & _PAGE_NX) || + !static_cpu_has(X86_FEATURE_PTI)) return; pr_info("x86/mm: Checking user space page tables\n"); @@ -580,7 +587,6 @@ static void ptdump_walk_user_pgd_level_checkwx(void) void ptdump_walk_pgd_level_checkwx(void) { ptdump_walk_pgd_level_core(NULL, NULL, true, false); - ptdump_walk_user_pgd_level_checkwx(); } static int __init pt_dump_init(void) @@ -609,6 +615,9 @@ static int __init pt_dump_init(void) # endif address_markers[FIXADDR_START_NR].start_address = FIXADDR_START; address_markers[CPU_ENTRY_AREA_NR].start_address = CPU_ENTRY_AREA_BASE; +# ifdef CONFIG_MODIFY_LDT_SYSCALL + 
address_markers[LDT_NR].start_address = LDT_BASE_ADDR; +# endif #endif return 0; } diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c index 9a84a0d08727..db1c042e9853 100644 --- a/arch/x86/mm/fault.c +++ b/arch/x86/mm/fault.c @@ -317,8 +317,6 @@ static noinline int vmalloc_fault(unsigned long address) if (!(address >= VMALLOC_START && address < VMALLOC_END)) return -1; - WARN_ON_ONCE(in_nmi()); - /* * Synchronize this task's top level page-table * with the 'reference' page table. @@ -641,11 +639,6 @@ static int is_f00f_bug(struct pt_regs *regs, unsigned long address) return 0; } -static const char nx_warning[] = KERN_CRIT -"kernel tried to execute NX-protected page - exploit attempt? (uid: %d)\n"; -static const char smep_warning[] = KERN_CRIT -"unable to execute userspace code (SMEP?) (uid: %d)\n"; - static void show_fault_oops(struct pt_regs *regs, unsigned long error_code, unsigned long address) @@ -664,20 +657,18 @@ show_fault_oops(struct pt_regs *regs, unsigned long error_code, pte = lookup_address_in_pgd(pgd, address, &level); if (pte && pte_present(*pte) && !pte_exec(*pte)) - printk(nx_warning, from_kuid(&init_user_ns, current_uid())); + pr_crit("kernel tried to execute NX-protected page - exploit attempt? (uid: %d)\n", + from_kuid(&init_user_ns, current_uid())); if (pte && pte_present(*pte) && pte_exec(*pte) && (pgd_flags(*pgd) & _PAGE_USER) && (__read_cr4() & X86_CR4_SMEP)) - printk(smep_warning, from_kuid(&init_user_ns, current_uid())); + pr_crit("unable to execute userspace code (SMEP?) (uid: %d)\n", + from_kuid(&init_user_ns, current_uid())); } - printk(KERN_ALERT "BUG: unable to handle kernel "); - if (address < PAGE_SIZE) - printk(KERN_CONT "NULL pointer dereference"); - else - printk(KERN_CONT "paging request"); - - printk(KERN_CONT " at %px\n", (void *) address); + pr_alert("BUG: unable to handle kernel %s at %px\n", + address < PAGE_SIZE ? "NULL pointer dereference" : "paging request", + (void *)address); dump_pagetable(address); } diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c index fec82b577c18..74b157ac078d 100644 --- a/arch/x86/mm/init.c +++ b/arch/x86/mm/init.c @@ -706,7 +706,9 @@ void __init init_mem_mapping(void) */ int devmem_is_allowed(unsigned long pagenr) { - if (page_is_ram(pagenr)) { + if (region_intersects(PFN_PHYS(pagenr), PAGE_SIZE, + IORESOURCE_SYSTEM_RAM, IORES_DESC_NONE) + != REGION_DISJOINT) { /* * For disallowed memory regions in the low 1MB range, * request that the page be shown as all zeros. @@ -771,13 +773,44 @@ void free_init_pages(char *what, unsigned long begin, unsigned long end) } } +/* + * begin/end can be in the direct map or the "high kernel mapping" + * used for the kernel image only. free_init_pages() will do the + * right thing for either kind of address. + */ +void free_kernel_image_pages(void *begin, void *end) +{ + unsigned long begin_ul = (unsigned long)begin; + unsigned long end_ul = (unsigned long)end; + unsigned long len_pages = (end_ul - begin_ul) >> PAGE_SHIFT; + + + free_init_pages("unused kernel image", begin_ul, end_ul); + + /* + * PTI maps some of the kernel into userspace. For performance, + * this includes some kernel areas that do not contain secrets. + * Those areas might be adjacent to the parts of the kernel image + * being freed, which may contain secrets. Remove the "high kernel + * image mapping" for these freed areas, ensuring they are not even + * potentially vulnerable to Meltdown regardless of the specific + * optimizations PTI is currently using. 
+ * + * The "noalias" prevents unmapping the direct map alias which is + * needed to access the freed pages. + * + * This is only valid for 64bit kernels. 32bit has only one mapping + * which can't be treated in this way for obvious reasons. + */ + if (IS_ENABLED(CONFIG_X86_64) && cpu_feature_enabled(X86_FEATURE_PTI)) + set_memory_np_noalias(begin_ul, len_pages); +} + void __ref free_initmem(void) { e820__reallocate_tables(); - free_init_pages("unused kernel", - (unsigned long)(&__init_begin), - (unsigned long)(&__init_end)); + free_kernel_image_pages(&__init_begin, &__init_end); } #ifdef CONFIG_BLK_DEV_INITRD diff --git a/arch/x86/mm/init_32.c b/arch/x86/mm/init_32.c index c893c6a3d707..979e0a02cbe1 100644 --- a/arch/x86/mm/init_32.c +++ b/arch/x86/mm/init_32.c @@ -692,7 +692,7 @@ void __init initmem_init(void) high_memory = (void *) __va(max_low_pfn * PAGE_SIZE - 1) + 1; #endif - memblock_set_node(0, (phys_addr_t)ULLONG_MAX, &memblock.memory, 0); + memblock_set_node(0, PHYS_ADDR_MAX, &memblock.memory, 0); sparse_memory_present_with_active_regions(0); #ifdef CONFIG_FLATMEM diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c index 17383f9677fa..dd519f372169 100644 --- a/arch/x86/mm/init_64.c +++ b/arch/x86/mm/init_64.c @@ -742,7 +742,7 @@ kernel_physical_mapping_init(unsigned long paddr_start, #ifndef CONFIG_NUMA void __init initmem_init(void) { - memblock_set_node(0, (phys_addr_t)ULLONG_MAX, &memblock.memory, 0); + memblock_set_node(0, PHYS_ADDR_MAX, &memblock.memory, 0); } #endif @@ -1283,20 +1283,10 @@ void mark_rodata_ro(void) set_memory_ro(start, (end-start) >> PAGE_SHIFT); #endif - free_init_pages("unused kernel", - (unsigned long) __va(__pa_symbol(text_end)), - (unsigned long) __va(__pa_symbol(rodata_start))); - free_init_pages("unused kernel", - (unsigned long) __va(__pa_symbol(rodata_end)), - (unsigned long) __va(__pa_symbol(_sdata))); + free_kernel_image_pages((void *)text_end, (void *)rodata_start); + free_kernel_image_pages((void *)rodata_end, (void *)_sdata); debug_checkwx(); - - /* - * Do this after all of the manipulation of the - * kernel text page tables are complete. 
- */ - pti_clone_kernel_text(); } int kern_addr_valid(unsigned long addr) @@ -1350,16 +1340,28 @@ int kern_addr_valid(unsigned long addr) /* Amount of ram needed to start using large blocks */ #define MEM_SIZE_FOR_LARGE_BLOCK (64UL << 30) +/* Adjustable memory block size */ +static unsigned long set_memory_block_size; +int __init set_memory_block_size_order(unsigned int order) +{ + unsigned long size = 1UL << order; + + if (size > MEM_SIZE_FOR_LARGE_BLOCK || size < MIN_MEMORY_BLOCK_SIZE) + return -EINVAL; + + set_memory_block_size = size; + return 0; +} + static unsigned long probe_memory_block_size(void) { unsigned long boot_mem_end = max_pfn << PAGE_SHIFT; unsigned long bz; - /* If this is UV system, always set 2G block size */ - if (is_uv_system()) { - bz = MAX_BLOCK_SIZE; + /* If memory block size has been set, then use it */ + bz = set_memory_block_size; + if (bz) goto done; - } /* Use regular block if RAM is smaller than MEM_SIZE_FOR_LARGE_BLOCK */ if (boot_mem_end < MEM_SIZE_FOR_LARGE_BLOCK) { diff --git a/arch/x86/mm/numa_emulation.c b/arch/x86/mm/numa_emulation.c index 34a2a3bfde9c..b54d52a2d00a 100644 --- a/arch/x86/mm/numa_emulation.c +++ b/arch/x86/mm/numa_emulation.c @@ -61,7 +61,7 @@ static int __init emu_setup_memblk(struct numa_meminfo *ei, eb->nid = nid; if (emu_nid_to_phys[nid] == NUMA_NO_NODE) - emu_nid_to_phys[nid] = nid; + emu_nid_to_phys[nid] = pb->nid; pb->start += size; if (pb->start >= pb->end) { @@ -198,40 +198,73 @@ static u64 __init find_end_of_node(u64 start, u64 max_addr, u64 size) return end; } +static u64 uniform_size(u64 max_addr, u64 base, u64 hole, int nr_nodes) +{ + unsigned long max_pfn = PHYS_PFN(max_addr); + unsigned long base_pfn = PHYS_PFN(base); + unsigned long hole_pfns = PHYS_PFN(hole); + + return PFN_PHYS((max_pfn - base_pfn - hole_pfns) / nr_nodes); +} + /* * Sets up fake nodes of `size' interleaved over physical nodes ranging from * `addr' to `max_addr'. * * Returns zero on success or negative on error. */ -static int __init split_nodes_size_interleave(struct numa_meminfo *ei, +static int __init split_nodes_size_interleave_uniform(struct numa_meminfo *ei, struct numa_meminfo *pi, - u64 addr, u64 max_addr, u64 size) + u64 addr, u64 max_addr, u64 size, + int nr_nodes, struct numa_memblk *pblk, + int nid) { nodemask_t physnode_mask = numa_nodes_parsed; + int i, ret, uniform = 0; u64 min_size; - int nid = 0; - int i, ret; - if (!size) + if ((!size && !nr_nodes) || (nr_nodes && !pblk)) return -1; + /* - * The limit on emulated nodes is MAX_NUMNODES, so the size per node is - * increased accordingly if the requested size is too small. This - * creates a uniform distribution of node sizes across the entire - * machine (but not necessarily over physical nodes). + * In the 'uniform' case split the passed in physical node by + * nr_nodes, in the non-uniform case, ignore the passed in + * physical block and try to create nodes of at least size + * @size. + * + * In the uniform case, split the nodes strictly by physical + * capacity, i.e. ignore holes. In the non-uniform case account + * for holes and treat @size as a minimum floor. 
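uniform_size() above is plain PFN arithmetic and can be exercised on its own; the sketch below reuses its body with local stand-ins for the kernel's PFN macros (4 KiB pages assumed):

#include <stdio.h>
#include <stdint.h>

#define PAGE_SHIFT	12
#define PHYS_PFN(x)	((x) >> PAGE_SHIFT)
#define PFN_PHYS(x)	((uint64_t)(x) << PAGE_SHIFT)

static uint64_t uniform_size(uint64_t max_addr, uint64_t base,
			     uint64_t hole, int nr_nodes)
{
	/* Work in whole pages so each node's share stays page aligned. */
	uint64_t max_pfn   = PHYS_PFN(max_addr);
	uint64_t base_pfn  = PHYS_PFN(base);
	uint64_t hole_pfns = PHYS_PFN(hole);

	return PFN_PHYS((max_pfn - base_pfn - hole_pfns) / nr_nodes);
}

int main(void)
{
	/* Assumed example: split a 16 GiB physical node into 4 fake nodes. */
	uint64_t sz = uniform_size(16ULL << 30, 0, 0, 4);

	printf("%llu MiB per fake node\n", (unsigned long long)(sz >> 20));
	return 0;
}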
*/ - min_size = (max_addr - addr - mem_hole_size(addr, max_addr)) / MAX_NUMNODES; - min_size = max(min_size, FAKE_NODE_MIN_SIZE); - if ((min_size & FAKE_NODE_MIN_HASH_MASK) < min_size) - min_size = (min_size + FAKE_NODE_MIN_SIZE) & - FAKE_NODE_MIN_HASH_MASK; + if (!nr_nodes) + nr_nodes = MAX_NUMNODES; + else { + nodes_clear(physnode_mask); + node_set(pblk->nid, physnode_mask); + uniform = 1; + } + + if (uniform) { + min_size = uniform_size(max_addr, addr, 0, nr_nodes); + size = min_size; + } else { + /* + * The limit on emulated nodes is MAX_NUMNODES, so the + * size per node is increased accordingly if the + * requested size is too small. This creates a uniform + * distribution of node sizes across the entire machine + * (but not necessarily over physical nodes). + */ + min_size = uniform_size(max_addr, addr, + mem_hole_size(addr, max_addr), nr_nodes); + } + min_size = ALIGN(max(min_size, FAKE_NODE_MIN_SIZE), FAKE_NODE_MIN_SIZE); if (size < min_size) { pr_err("Fake node size %LuMB too small, increasing to %LuMB\n", size >> 20, min_size >> 20); size = min_size; } - size &= FAKE_NODE_MIN_HASH_MASK; + size = ALIGN_DOWN(size, FAKE_NODE_MIN_SIZE); /* * Fill physical nodes with fake nodes of size until there is no memory @@ -248,10 +281,14 @@ static int __init split_nodes_size_interleave(struct numa_meminfo *ei, node_clear(i, physnode_mask); continue; } + start = pi->blk[phys_blk].start; limit = pi->blk[phys_blk].end; - end = find_end_of_node(start, limit, size); + if (uniform) + end = start + size; + else + end = find_end_of_node(start, limit, size); /* * If there won't be at least FAKE_NODE_MIN_SIZE of * non-reserved memory in ZONE_DMA32 for the next node, @@ -266,7 +303,8 @@ static int __init split_nodes_size_interleave(struct numa_meminfo *ei, * next node, this one must extend to the end of the * physical node. */ - if (limit - end - mem_hole_size(end, limit) < size) + if ((limit - end - mem_hole_size(end, limit) < size) + && !uniform) end = limit; ret = emu_setup_memblk(ei, pi, nid++ % MAX_NUMNODES, @@ -276,7 +314,15 @@ static int __init split_nodes_size_interleave(struct numa_meminfo *ei, return ret; } } - return 0; + return nid; +} + +static int __init split_nodes_size_interleave(struct numa_meminfo *ei, + struct numa_meminfo *pi, + u64 addr, u64 max_addr, u64 size) +{ + return split_nodes_size_interleave_uniform(ei, pi, addr, max_addr, size, + 0, NULL, NUMA_NO_NODE); } int __init setup_emu2phys_nid(int *dfl_phys_nid) @@ -346,7 +392,28 @@ void __init numa_emulation(struct numa_meminfo *numa_meminfo, int numa_dist_cnt) * the fixed node size. Otherwise, if it is just a single number N, * split the system RAM into N fake nodes. 
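The command-line dispatch described above (a plain number, an 'U' suffix for uniform splits, or 'M'/'G' for a fixed node size) can be mimicked with standard C; memparse()'s unit scaling is elided in this sketch:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
	const char *samples[] = { "8", "4U", "512M", "2G" };

	for (size_t i = 0; i < sizeof(samples) / sizeof(samples[0]); i++) {
		const char *s = samples[i];
		char *end;
		unsigned long n = strtoul(s, &end, 0);

		if (strchr(s, 'U'))
			printf("%s: %lu uniform nodes per physical node\n", s, n);
		else if (strchr(s, 'M') || strchr(s, 'G'))
			printf("%s: fixed node size, '%c' suffix\n", s, *end);
		else
			printf("%s: %lu interleaved fake nodes\n", s, n);
	}
	return 0;
}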
*/ - if (strchr(emu_cmdline, 'M') || strchr(emu_cmdline, 'G')) { + if (strchr(emu_cmdline, 'U')) { + nodemask_t physnode_mask = numa_nodes_parsed; + unsigned long n; + int nid = 0; + + n = simple_strtoul(emu_cmdline, &emu_cmdline, 0); + ret = -1; + for_each_node_mask(i, physnode_mask) { + ret = split_nodes_size_interleave_uniform(&ei, &pi, + pi.blk[i].start, pi.blk[i].end, 0, + n, &pi.blk[i], nid); + if (ret < 0) + break; + if (ret < n) { + pr_info("%s: phys: %d only got %d of %ld nodes, failing\n", + __func__, i, ret, n); + ret = -1; + break; + } + nid = ret; + } + } else if (strchr(emu_cmdline, 'M') || strchr(emu_cmdline, 'G')) { u64 size; size = memparse(emu_cmdline, &emu_cmdline); diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c index 3bded76e8d5c..0a74996a1149 100644 --- a/arch/x86/mm/pageattr.c +++ b/arch/x86/mm/pageattr.c @@ -53,6 +53,7 @@ static DEFINE_SPINLOCK(cpa_lock); #define CPA_FLUSHTLB 1 #define CPA_ARRAY 2 #define CPA_PAGES_ARRAY 4 +#define CPA_NO_CHECK_ALIAS 8 /* Do not search for aliases */ #ifdef CONFIG_PROC_FS static unsigned long direct_pages_count[PG_LEVEL_NUM]; @@ -1486,6 +1487,9 @@ static int change_page_attr_set_clr(unsigned long *addr, int numpages, /* No alias checking for _NX bit modifications */ checkalias = (pgprot_val(mask_set) | pgprot_val(mask_clr)) != _PAGE_NX; + /* Has caller explicitly disabled alias checking? */ + if (in_flag & CPA_NO_CHECK_ALIAS) + checkalias = 0; ret = __change_page_attr_set_clr(&cpa, checkalias); @@ -1772,6 +1776,15 @@ int set_memory_np(unsigned long addr, int numpages) return change_page_attr_clear(&addr, numpages, __pgprot(_PAGE_PRESENT), 0); } +int set_memory_np_noalias(unsigned long addr, int numpages) +{ + int cpa_flags = CPA_NO_CHECK_ALIAS; + + return change_page_attr_set_clr(&addr, numpages, __pgprot(0), + __pgprot(_PAGE_PRESENT), 0, + cpa_flags, NULL); +} + int set_memory_4k(unsigned long addr, int numpages) { return change_page_attr_set_clr(&addr, numpages, __pgprot(0), @@ -1784,6 +1797,12 @@ int set_memory_nonglobal(unsigned long addr, int numpages) __pgprot(_PAGE_GLOBAL), 0); } +int set_memory_global(unsigned long addr, int numpages) +{ + return change_page_attr_set(&addr, numpages, + __pgprot(_PAGE_GLOBAL), 0); +} + static int __set_memory_enc_dec(unsigned long addr, int numpages, bool enc) { struct cpa_data cpa; diff --git a/arch/x86/mm/pgtable.c b/arch/x86/mm/pgtable.c index 47b5951e592b..3ef095c70ae3 100644 --- a/arch/x86/mm/pgtable.c +++ b/arch/x86/mm/pgtable.c @@ -182,6 +182,14 @@ static void pgd_dtor(pgd_t *pgd) */ #define PREALLOCATED_PMDS UNSHARED_PTRS_PER_PGD +/* + * We allocate separate PMDs for the kernel part of the user page-table + * when PTI is enabled. We need them to map the per-process LDT into the + * user-space page-table. + */ +#define PREALLOCATED_USER_PMDS (static_cpu_has(X86_FEATURE_PTI) ? \ + KERNEL_PGD_PTRS : 0) + void pud_populate(struct mm_struct *mm, pud_t *pudp, pmd_t *pmd) { paravirt_alloc_pmd(mm, __pa(pmd) >> PAGE_SHIFT); @@ -202,14 +210,14 @@ void pud_populate(struct mm_struct *mm, pud_t *pudp, pmd_t *pmd) /* No need to prepopulate any pagetable entries in non-PAE modes. 
*/ #define PREALLOCATED_PMDS 0 - +#define PREALLOCATED_USER_PMDS 0 #endif /* CONFIG_X86_PAE */ -static void free_pmds(struct mm_struct *mm, pmd_t *pmds[]) +static void free_pmds(struct mm_struct *mm, pmd_t *pmds[], int count) { int i; - for(i = 0; i < PREALLOCATED_PMDS; i++) + for (i = 0; i < count; i++) if (pmds[i]) { pgtable_pmd_page_dtor(virt_to_page(pmds[i])); free_page((unsigned long)pmds[i]); @@ -217,7 +225,7 @@ static void free_pmds(struct mm_struct *mm, pmd_t *pmds[]) } } -static int preallocate_pmds(struct mm_struct *mm, pmd_t *pmds[]) +static int preallocate_pmds(struct mm_struct *mm, pmd_t *pmds[], int count) { int i; bool failed = false; @@ -226,7 +234,7 @@ static int preallocate_pmds(struct mm_struct *mm, pmd_t *pmds[]) if (mm == &init_mm) gfp &= ~__GFP_ACCOUNT; - for(i = 0; i < PREALLOCATED_PMDS; i++) { + for (i = 0; i < count; i++) { pmd_t *pmd = (pmd_t *)__get_free_page(gfp); if (!pmd) failed = true; @@ -241,7 +249,7 @@ static int preallocate_pmds(struct mm_struct *mm, pmd_t *pmds[]) } if (failed) { - free_pmds(mm, pmds); + free_pmds(mm, pmds, count); return -ENOMEM; } @@ -254,23 +262,38 @@ static int preallocate_pmds(struct mm_struct *mm, pmd_t *pmds[]) * preallocate which never got a corresponding vma will need to be * freed manually. */ +static void mop_up_one_pmd(struct mm_struct *mm, pgd_t *pgdp) +{ + pgd_t pgd = *pgdp; + + if (pgd_val(pgd) != 0) { + pmd_t *pmd = (pmd_t *)pgd_page_vaddr(pgd); + + *pgdp = native_make_pgd(0); + + paravirt_release_pmd(pgd_val(pgd) >> PAGE_SHIFT); + pmd_free(mm, pmd); + mm_dec_nr_pmds(mm); + } +} + static void pgd_mop_up_pmds(struct mm_struct *mm, pgd_t *pgdp) { int i; - for(i = 0; i < PREALLOCATED_PMDS; i++) { - pgd_t pgd = pgdp[i]; + for (i = 0; i < PREALLOCATED_PMDS; i++) + mop_up_one_pmd(mm, &pgdp[i]); - if (pgd_val(pgd) != 0) { - pmd_t *pmd = (pmd_t *)pgd_page_vaddr(pgd); +#ifdef CONFIG_PAGE_TABLE_ISOLATION - pgdp[i] = native_make_pgd(0); + if (!static_cpu_has(X86_FEATURE_PTI)) + return; - paravirt_release_pmd(pgd_val(pgd) >> PAGE_SHIFT); - pmd_free(mm, pmd); - mm_dec_nr_pmds(mm); - } - } + pgdp = kernel_to_user_pgdp(pgdp); + + for (i = 0; i < PREALLOCATED_USER_PMDS; i++) + mop_up_one_pmd(mm, &pgdp[i + KERNEL_PGD_BOUNDARY]); +#endif } static void pgd_prepopulate_pmd(struct mm_struct *mm, pgd_t *pgd, pmd_t *pmds[]) @@ -296,6 +319,38 @@ static void pgd_prepopulate_pmd(struct mm_struct *mm, pgd_t *pgd, pmd_t *pmds[]) } } +#ifdef CONFIG_PAGE_TABLE_ISOLATION +static void pgd_prepopulate_user_pmd(struct mm_struct *mm, + pgd_t *k_pgd, pmd_t *pmds[]) +{ + pgd_t *s_pgd = kernel_to_user_pgdp(swapper_pg_dir); + pgd_t *u_pgd = kernel_to_user_pgdp(k_pgd); + p4d_t *u_p4d; + pud_t *u_pud; + int i; + + u_p4d = p4d_offset(u_pgd, 0); + u_pud = pud_offset(u_p4d, 0); + + s_pgd += KERNEL_PGD_BOUNDARY; + u_pud += KERNEL_PGD_BOUNDARY; + + for (i = 0; i < PREALLOCATED_USER_PMDS; i++, u_pud++, s_pgd++) { + pmd_t *pmd = pmds[i]; + + memcpy(pmd, (pmd_t *)pgd_page_vaddr(*s_pgd), + sizeof(pmd_t) * PTRS_PER_PMD); + + pud_populate(mm, u_pud, pmd); + } + +} +#else +static void pgd_prepopulate_user_pmd(struct mm_struct *mm, + pgd_t *k_pgd, pmd_t *pmds[]) +{ +} +#endif /* * Xen paravirt assumes pgd table should be in one page. 64 bit kernel also * assumes that pgd should be in one page. 
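preallocate_pmds()/free_pmds() above follow a common allocate-all-or-roll-back shape; stripped of the page-table specifics it looks roughly like this (malloc stands in for __get_free_page(), and the helper names are invented):

#include <stdlib.h>

static void free_all(void *ptrs[], int count)
{
	for (int i = 0; i < count; i++)
		free(ptrs[i]);	/* free(NULL) is a no-op */
}

static int prealloc(void *ptrs[], int count)
{
	int failed = 0;

	for (int i = 0; i < count; i++) {
		ptrs[i] = malloc(4096);	/* stand-in for one page */
		if (!ptrs[i])
			failed = 1;
	}

	/* Keep going on failure so every slot is initialized, then roll back. */
	if (failed) {
		free_all(ptrs, count);
		return -1;
	}
	return 0;
}

int main(void)
{
	void *p[4] = { 0 };

	return prealloc(p, 4) ? 1 : (free_all(p, 4), 0);
}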
@@ -329,9 +384,6 @@ static int __init pgd_cache_init(void) */ pgd_cache = kmem_cache_create("pgd_cache", PGD_SIZE, PGD_ALIGN, SLAB_PANIC, NULL); - if (!pgd_cache) - return -ENOMEM; - return 0; } core_initcall(pgd_cache_init); @@ -343,7 +395,8 @@ static inline pgd_t *_pgd_alloc(void) * We allocate one page for pgd. */ if (!SHARED_KERNEL_PMD) - return (pgd_t *)__get_free_page(PGALLOC_GFP); + return (pgd_t *)__get_free_pages(PGALLOC_GFP, + PGD_ALLOCATION_ORDER); /* * Now PAE kernel is not running as a Xen domain. We can allocate @@ -355,7 +408,7 @@ static inline pgd_t *_pgd_alloc(void) static inline void _pgd_free(pgd_t *pgd) { if (!SHARED_KERNEL_PMD) - free_page((unsigned long)pgd); + free_pages((unsigned long)pgd, PGD_ALLOCATION_ORDER); else kmem_cache_free(pgd_cache, pgd); } @@ -375,6 +428,7 @@ static inline void _pgd_free(pgd_t *pgd) pgd_t *pgd_alloc(struct mm_struct *mm) { pgd_t *pgd; + pmd_t *u_pmds[PREALLOCATED_USER_PMDS]; pmd_t *pmds[PREALLOCATED_PMDS]; pgd = _pgd_alloc(); @@ -384,12 +438,15 @@ pgd_t *pgd_alloc(struct mm_struct *mm) mm->pgd = pgd; - if (preallocate_pmds(mm, pmds) != 0) + if (preallocate_pmds(mm, pmds, PREALLOCATED_PMDS) != 0) goto out_free_pgd; - if (paravirt_pgd_alloc(mm) != 0) + if (preallocate_pmds(mm, u_pmds, PREALLOCATED_USER_PMDS) != 0) goto out_free_pmds; + if (paravirt_pgd_alloc(mm) != 0) + goto out_free_user_pmds; + /* * Make sure that pre-populating the pmds is atomic with * respect to anything walking the pgd_list, so that they @@ -399,13 +456,16 @@ pgd_t *pgd_alloc(struct mm_struct *mm) pgd_ctor(mm, pgd); pgd_prepopulate_pmd(mm, pgd, pmds); + pgd_prepopulate_user_pmd(mm, pgd, u_pmds); spin_unlock(&pgd_lock); return pgd; +out_free_user_pmds: + free_pmds(mm, u_pmds, PREALLOCATED_USER_PMDS); out_free_pmds: - free_pmds(mm, pmds); + free_pmds(mm, pmds, PREALLOCATED_PMDS); out_free_pgd: _pgd_free(pgd); out: @@ -719,28 +779,50 @@ int pmd_clear_huge(pmd_t *pmd) return 0; } +#ifdef CONFIG_X86_64 /** * pud_free_pmd_page - Clear pud entry and free pmd page. * @pud: Pointer to a PUD. + * @addr: Virtual address associated with pud. * - * Context: The pud range has been unmaped and TLB purged. + * Context: The pud range has been unmapped and TLB purged. * Return: 1 if clearing the entry succeeded. 0 otherwise. + * + * NOTE: Callers must allow a single page allocation. */ -int pud_free_pmd_page(pud_t *pud) +int pud_free_pmd_page(pud_t *pud, unsigned long addr) { - pmd_t *pmd; + pmd_t *pmd, *pmd_sv; + pte_t *pte; int i; if (pud_none(*pud)) return 1; pmd = (pmd_t *)pud_page_vaddr(*pud); + pmd_sv = (pmd_t *)__get_free_page(GFP_KERNEL); + if (!pmd_sv) + return 0; - for (i = 0; i < PTRS_PER_PMD; i++) - if (!pmd_free_pte_page(&pmd[i])) - return 0; + for (i = 0; i < PTRS_PER_PMD; i++) { + pmd_sv[i] = pmd[i]; + if (!pmd_none(pmd[i])) + pmd_clear(&pmd[i]); + } pud_clear(pud); + + /* INVLPG to clear all paging-structure caches */ + flush_tlb_kernel_range(addr, addr + PAGE_SIZE-1); + + for (i = 0; i < PTRS_PER_PMD; i++) { + if (!pmd_none(pmd_sv[i])) { + pte = (pte_t *)pmd_page_vaddr(pmd_sv[i]); + free_page((unsigned long)pte); + } + } + + free_page((unsigned long)pmd_sv); free_page((unsigned long)pmd); return 1; @@ -749,11 +831,12 @@ int pud_free_pmd_page(pud_t *pud) /** * pmd_free_pte_page - Clear pmd entry and free pte page. * @pmd: Pointer to a PMD. + * @addr: Virtual address associated with pmd. * - * Context: The pmd range has been unmaped and TLB purged. + * Context: The pmd range has been unmapped and TLB purged. * Return: 1 if clearing the entry succeeded. 0 otherwise. 
*/ -int pmd_free_pte_page(pmd_t *pmd) +int pmd_free_pte_page(pmd_t *pmd, unsigned long addr) { pte_t *pte; @@ -762,8 +845,30 @@ int pmd_free_pte_page(pmd_t *pmd) pte = (pte_t *)pmd_page_vaddr(*pmd); pmd_clear(pmd); + + /* INVLPG to clear all paging-structure caches */ + flush_tlb_kernel_range(addr, addr + PAGE_SIZE-1); + free_page((unsigned long)pte); return 1; } + +#else /* !CONFIG_X86_64 */ + +int pud_free_pmd_page(pud_t *pud, unsigned long addr) +{ + return pud_none(*pud); +} + +/* + * Disable free page handling on x86-PAE. This assures that ioremap() + * does not update sync'd pmd entries. See vmalloc_sync_one(). + */ +int pmd_free_pte_page(pmd_t *pmd, unsigned long addr) +{ + return pmd_none(*pmd); +} + +#endif /* CONFIG_X86_64 */ #endif /* CONFIG_HAVE_ARCH_HUGE_VMAP */ diff --git a/arch/x86/mm/pti.c b/arch/x86/mm/pti.c index 4d418e705878..d58b4aba9510 100644 --- a/arch/x86/mm/pti.c +++ b/arch/x86/mm/pti.c @@ -54,6 +54,16 @@ #define __GFP_NOTRACK 0 #endif +/* + * Define the page-table levels we clone for user-space on 32 + * and 64 bit. + */ +#ifdef CONFIG_X86_64 +#define PTI_LEVEL_KERNEL_IMAGE PTI_CLONE_PMD +#else +#define PTI_LEVEL_KERNEL_IMAGE PTI_CLONE_PTE +#endif + static void __init pti_print_if_insecure(const char *reason) { if (boot_cpu_has_bug(X86_BUG_CPU_MELTDOWN)) @@ -117,7 +127,7 @@ enable: setup_force_cpu_cap(X86_FEATURE_PTI); } -pgd_t __pti_set_user_pgd(pgd_t *pgdp, pgd_t pgd) +pgd_t __pti_set_user_pgtbl(pgd_t *pgdp, pgd_t pgd) { /* * Changes to the high (kernel) portion of the kernelmode page @@ -176,7 +186,7 @@ static p4d_t *pti_user_pagetable_walk_p4d(unsigned long address) if (pgd_none(*pgd)) { unsigned long new_p4d_page = __get_free_page(gfp); - if (!new_p4d_page) + if (WARN_ON_ONCE(!new_p4d_page)) return NULL; set_pgd(pgd, __pgd(_KERNPG_TABLE | __pa(new_p4d_page))); @@ -195,13 +205,17 @@ static p4d_t *pti_user_pagetable_walk_p4d(unsigned long address) static pmd_t *pti_user_pagetable_walk_pmd(unsigned long address) { gfp_t gfp = (GFP_KERNEL | __GFP_NOTRACK | __GFP_ZERO); - p4d_t *p4d = pti_user_pagetable_walk_p4d(address); + p4d_t *p4d; pud_t *pud; + p4d = pti_user_pagetable_walk_p4d(address); + if (!p4d) + return NULL; + BUILD_BUG_ON(p4d_large(*p4d) != 0); if (p4d_none(*p4d)) { unsigned long new_pud_page = __get_free_page(gfp); - if (!new_pud_page) + if (WARN_ON_ONCE(!new_pud_page)) return NULL; set_p4d(p4d, __p4d(_KERNPG_TABLE | __pa(new_pud_page))); @@ -215,7 +229,7 @@ static pmd_t *pti_user_pagetable_walk_pmd(unsigned long address) } if (pud_none(*pud)) { unsigned long new_pmd_page = __get_free_page(gfp); - if (!new_pmd_page) + if (WARN_ON_ONCE(!new_pmd_page)) return NULL; set_pud(pud, __pud(_KERNPG_TABLE | __pa(new_pmd_page))); @@ -224,7 +238,6 @@ static pmd_t *pti_user_pagetable_walk_pmd(unsigned long address) return pmd_offset(pud, address); } -#ifdef CONFIG_X86_VSYSCALL_EMULATION /* * Walk the shadow copy of the page tables (optionally) trying to allocate * page table pages on the way down. Does not support large pages. @@ -237,9 +250,13 @@ static pmd_t *pti_user_pagetable_walk_pmd(unsigned long address) static __init pte_t *pti_user_pagetable_walk_pte(unsigned long address) { gfp_t gfp = (GFP_KERNEL | __GFP_NOTRACK | __GFP_ZERO); - pmd_t *pmd = pti_user_pagetable_walk_pmd(address); + pmd_t *pmd; pte_t *pte; + pmd = pti_user_pagetable_walk_pmd(address); + if (!pmd) + return NULL; + /* We can't do anything sensible if we hit a large mapping. 
 */
 	if (pmd_large(*pmd)) {
 		WARN_ON(1);
 		return NULL;
 	}
@@ -262,6 +279,7 @@ static __init pte_t *pti_user_pagetable_walk_pte(unsigned long address)
 	return pte;
 }
 
+#ifdef CONFIG_X86_VSYSCALL_EMULATION
 static void __init pti_setup_vsyscall(void)
 {
 	pte_t *pte, *target_pte;
@@ -282,8 +300,14 @@ static void __init pti_setup_vsyscall(void)
 static void __init pti_setup_vsyscall(void) { }
 #endif
 
+enum pti_clone_level {
+	PTI_CLONE_PMD,
+	PTI_CLONE_PTE,
+};
+
 static void
-pti_clone_pmds(unsigned long start, unsigned long end, pmdval_t clear)
+pti_clone_pgtable(unsigned long start, unsigned long end,
+		  enum pti_clone_level level)
 {
 	unsigned long addr;
 
@@ -291,59 +315,105 @@
 	 * Clone the populated PMDs which cover start to end. These PMD areas
 	 * can have holes.
 	 */
-	for (addr = start; addr < end; addr += PMD_SIZE) {
+	for (addr = start; addr < end;) {
+		pte_t *pte, *target_pte;
 		pmd_t *pmd, *target_pmd;
 		pgd_t *pgd;
 		p4d_t *p4d;
 		pud_t *pud;
 
+		/* Overflow check */
+		if (addr < start)
+			break;
+
 		pgd = pgd_offset_k(addr);
 		if (WARN_ON(pgd_none(*pgd)))
 			return;
 		p4d = p4d_offset(pgd, addr);
 		if (WARN_ON(p4d_none(*p4d)))
 			return;
+
 		pud = pud_offset(p4d, addr);
-		if (pud_none(*pud))
+		if (pud_none(*pud)) {
+			addr += PUD_SIZE;
 			continue;
+		}
+
 		pmd = pmd_offset(pud, addr);
-		if (pmd_none(*pmd))
+		if (pmd_none(*pmd)) {
+			addr += PMD_SIZE;
 			continue;
+		}
 
-		target_pmd = pti_user_pagetable_walk_pmd(addr);
-		if (WARN_ON(!target_pmd))
-			return;
-
-		/*
-		 * Only clone present PMDs. This ensures only setting
-		 * _PAGE_GLOBAL on present PMDs. This should only be
-		 * called on well-known addresses anyway, so a non-
-		 * present PMD would be a surprise.
-		 */
-		if (WARN_ON(!(pmd_flags(*pmd) & _PAGE_PRESENT)))
-			return;
-
-		/*
-		 * Setting 'target_pmd' below creates a mapping in both
-		 * the user and kernel page tables. It is effectively
-		 * global, so set it as global in both copies. Note:
-		 * the X86_FEATURE_PGE check is not _required_ because
-		 * the CPU ignores _PAGE_GLOBAL when PGE is not
-		 * supported. The check keeps consistentency with
-		 * code that only set this bit when supported.
-		 */
-		if (boot_cpu_has(X86_FEATURE_PGE))
-			*pmd = pmd_set_flags(*pmd, _PAGE_GLOBAL);
-
-		/*
-		 * Copy the PMD. That is, the kernelmode and usermode
-		 * tables will share the last-level page tables of this
-		 * address range
-		 */
-		*target_pmd = pmd_clear_flags(*pmd, clear);
+		if (pmd_large(*pmd) || level == PTI_CLONE_PMD) {
+			target_pmd = pti_user_pagetable_walk_pmd(addr);
+			if (WARN_ON(!target_pmd))
+				return;
+
+			/*
+			 * Only clone present PMDs. This ensures only setting
+			 * _PAGE_GLOBAL on present PMDs. This should only be
+			 * called on well-known addresses anyway, so a non-
+			 * present PMD would be a surprise.
+			 */
+			if (WARN_ON(!(pmd_flags(*pmd) & _PAGE_PRESENT)))
+				return;
+
+			/*
+			 * Setting 'target_pmd' below creates a mapping in both
+			 * the user and kernel page tables. It is effectively
+			 * global, so set it as global in both copies. Note:
+			 * the X86_FEATURE_PGE check is not _required_ because
+			 * the CPU ignores _PAGE_GLOBAL when PGE is not
+			 * supported. The check keeps consistency with
+			 * code that only set this bit when supported.
+			 */
+			if (boot_cpu_has(X86_FEATURE_PGE))
+				*pmd = pmd_set_flags(*pmd, _PAGE_GLOBAL);
+
+			/*
+			 * Copy the PMD.
That is, the kernelmode and usermode + * tables will share the last-level page tables of this + * address range + */ + *target_pmd = *pmd; + + addr += PMD_SIZE; + + } else if (level == PTI_CLONE_PTE) { + + /* Walk the page-table down to the pte level */ + pte = pte_offset_kernel(pmd, addr); + if (pte_none(*pte)) { + addr += PAGE_SIZE; + continue; + } + + /* Only clone present PTEs */ + if (WARN_ON(!(pte_flags(*pte) & _PAGE_PRESENT))) + return; + + /* Allocate PTE in the user page-table */ + target_pte = pti_user_pagetable_walk_pte(addr); + if (WARN_ON(!target_pte)) + return; + + /* Set GLOBAL bit in both PTEs */ + if (boot_cpu_has(X86_FEATURE_PGE)) + *pte = pte_set_flags(*pte, _PAGE_GLOBAL); + + /* Clone the PTE */ + *target_pte = *pte; + + addr += PAGE_SIZE; + + } else { + BUG(); + } } } +#ifdef CONFIG_X86_64 /* * Clone a single p4d (i.e. a top-level entry on 4-level systems and a * next-level entry on 5-level systems. @@ -354,6 +424,9 @@ static void __init pti_clone_p4d(unsigned long addr) pgd_t *kernel_pgd; user_p4d = pti_user_pagetable_walk_p4d(addr); + if (!user_p4d) + return; + kernel_pgd = pgd_offset_k(addr); kernel_p4d = p4d_offset(kernel_pgd, addr); *user_p4d = *kernel_p4d; @@ -367,6 +440,25 @@ static void __init pti_clone_user_shared(void) pti_clone_p4d(CPU_ENTRY_AREA_BASE); } +#else /* CONFIG_X86_64 */ + +/* + * On 32 bit PAE systems with 1GB of Kernel address space there is only + * one pgd/p4d for the whole kernel. Cloning that would map the whole + * address space into the user page-tables, making PTI useless. So clone + * the page-table on the PMD level to prevent that. + */ +static void __init pti_clone_user_shared(void) +{ + unsigned long start, end; + + start = CPU_ENTRY_AREA_BASE; + end = start + (PAGE_SIZE * CPU_ENTRY_AREA_PAGES); + + pti_clone_pgtable(start, end, PTI_CLONE_PMD); +} +#endif /* CONFIG_X86_64 */ + /* * Clone the ESPFIX P4D into the user space visible page table */ @@ -380,11 +472,11 @@ static void __init pti_setup_espfix64(void) /* * Clone the populated PMDs of the entry and irqentry text and force it RO. */ -static void __init pti_clone_entry_text(void) +static void pti_clone_entry_text(void) { - pti_clone_pmds((unsigned long) __entry_text_start, - (unsigned long) __irqentry_text_end, - _PAGE_RW); + pti_clone_pgtable((unsigned long) __entry_text_start, + (unsigned long) __irqentry_text_end, + PTI_CLONE_PMD); } /* @@ -434,11 +526,18 @@ static inline bool pti_kernel_image_global_ok(void) return true; } +/* + * This is the only user for these and it is not arch-generic + * like the other set_memory.h functions. Just extern them. + */ +extern int set_memory_nonglobal(unsigned long addr, int numpages); +extern int set_memory_global(unsigned long addr, int numpages); + /* * For some configurations, map all of kernel text into the user page * tables. This reduces TLB misses, especially on non-PCID systems. */ -void pti_clone_kernel_text(void) +static void pti_clone_kernel_text(void) { /* * rodata is part of the kernel image and is normally @@ -446,7 +545,8 @@ void pti_clone_kernel_text(void) * clone the areas past rodata, they might contain secrets. */ unsigned long start = PFN_ALIGN(_text); - unsigned long end = (unsigned long)__end_rodata_hpage_align; + unsigned long end_clone = (unsigned long)__end_rodata_aligned; + unsigned long end_global = PFN_ALIGN((unsigned long)__stop___ex_table); if (!pti_kernel_image_global_ok()) return; @@ -458,14 +558,18 @@ void pti_clone_kernel_text(void) * pti_set_kernel_image_nonglobal() did to clear the * global bit. 
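The set_memory_global()/set_memory_nonglobal() calls in this function take a page count derived from symbol addresses with the usual shift; a small arithmetic check (the addresses are made up, not real kernel symbols):

#include <stdio.h>
#include <stdint.h>

#define PAGE_SHIFT	12
#define PAGE_SIZE	(1ULL << PAGE_SHIFT)
#define PFN_ALIGN(x)	(((uint64_t)(x) + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1))

int main(void)
{
	/* Assumed start of kernel text and end of the global region. */
	uint64_t start      = PFN_ALIGN(0xffffffff81000000ULL);
	uint64_t end_global = PFN_ALIGN(0xffffffff81e00000ULL);

	printf("set_memory_global() would cover %llu pages\n",
	       (unsigned long long)((end_global - start) >> PAGE_SHIFT));
	return 0;
}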
 */
-	pti_clone_pmds(start, end, _PAGE_RW);
+	pti_clone_pgtable(start, end_clone, PTI_LEVEL_KERNEL_IMAGE);
+
+	/*
+	 * pti_clone_pgtable() will set the global bit in any PMDs
+	 * that it clones, but we also need to get any PTEs in
+	 * the last level for areas that are not huge-page-aligned.
+	 */
+
+	/* Set the global bit for normal non-__init kernel text: */
+	set_memory_global(start, (end_global - start) >> PAGE_SHIFT);
 }
 
-/*
- * This is the only user for it and it is not arch-generic like
- * the other set_memory.h functions. Just extern it.
- */
-extern int set_memory_nonglobal(unsigned long addr, int numpages);
 void pti_set_kernel_image_nonglobal(void)
 {
 	/*
@@ -477,9 +581,11 @@ void pti_set_kernel_image_nonglobal(void)
 	unsigned long start = PFN_ALIGN(_text);
 	unsigned long end = ALIGN((unsigned long)_end, PMD_PAGE_SIZE);
 
-	if (pti_kernel_image_global_ok())
-		return;
-
+	/*
+	 * This clears _PAGE_GLOBAL from the entire kernel image.
+	 * pti_clone_kernel_text() may put _PAGE_GLOBAL back for
+	 * areas that are mapped to userspace.
+	 */
 	set_memory_nonglobal(start, (end - start) >> PAGE_SHIFT);
 }
 
@@ -493,6 +599,28 @@ void __init pti_init(void)
 
 	pr_info("enabled\n");
 
+#ifdef CONFIG_X86_32
+	/*
+	 * We check for X86_FEATURE_PCID here. But the init-code will
+	 * clear the feature flag on 32 bit because the feature is not
+	 * supported on 32 bit anyway. To print the warning we need to
+	 * check with cpuid directly again.
+	 */
+	if (cpuid_ecx(0x1) & BIT(17)) {
+		/* Use printk to work around pr_fmt() */
+		printk(KERN_WARNING "\n");
+		printk(KERN_WARNING "************************************************************\n");
+		printk(KERN_WARNING "** WARNING! WARNING! WARNING! WARNING! WARNING! WARNING!  **\n");
+		printk(KERN_WARNING "**                                                        **\n");
+		printk(KERN_WARNING "** You are using 32-bit PTI on a 64-bit PCID-capable CPU. **\n");
+		printk(KERN_WARNING "** Your performance will increase dramatically if you     **\n");
+		printk(KERN_WARNING "** switch to a 64-bit kernel!                             **\n");
+		printk(KERN_WARNING "**                                                        **\n");
+		printk(KERN_WARNING "** WARNING! WARNING! WARNING! WARNING! WARNING! WARNING!  **\n");
+		printk(KERN_WARNING "************************************************************\n");
+	}
+#endif
+
 	pti_clone_user_shared();
 
 	/* Undo all global bits from the init pagetables in head_64.S: */
@@ -502,3 +630,22 @@ void __init pti_init(void)
 	pti_setup_espfix64();
 	pti_setup_vsyscall();
 }
+
+/*
+ * Finalize the kernel mappings in the userspace page-table. Some of the
+ * mappings for the kernel image might have changed since pti_init()
+ * cloned them. This is because parts of the kernel image have been
+ * mapped RO and/or NX. These changes need to be cloned again to the
+ * userspace page-table.
+ */
+void pti_finalize(void)
+{
+	/*
+	 * We need to clone everything (again) that maps parts of the
+	 * kernel image.
+	 */
+	pti_clone_entry_text();
+	pti_clone_kernel_text();
+
+	debug_checkwx_user();
+}
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 6eb1f34c3c85..752dbf4e0e50 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -7,6 +7,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 
@@ -35,7 +36,7 @@
  * necessary invalidation by clearing out the 'ctx_id' which
  * forces a TLB flush when the context is loaded.
 */
-void clear_asid_other(void)
+static void clear_asid_other(void)
 {
 	u16 asid;
@@ -185,8 +186,11 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
 {
 	struct mm_struct *real_prev = this_cpu_read(cpu_tlbstate.loaded_mm);
 	u16 prev_asid = this_cpu_read(cpu_tlbstate.loaded_mm_asid);
+	bool was_lazy = this_cpu_read(cpu_tlbstate.is_lazy);
 	unsigned cpu = smp_processor_id();
 	u64 next_tlb_gen;
+	bool need_flush;
+	u16 new_asid;
 
 	/*
 	 * NB: The scheduler will call us with prev == next when switching
@@ -240,20 +244,41 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
 			   next->context.ctx_id);
 
 		/*
-		 * We don't currently support having a real mm loaded without
-		 * our cpu set in mm_cpumask(). We have all the bookkeeping
-		 * in place to figure out whether we would need to flush
-		 * if our cpu were cleared in mm_cpumask(), but we don't
-		 * currently use it.
+		 * Even in lazy TLB mode, the CPU should stay set in the
+		 * mm_cpumask. The TLB shootdown code can figure out from
+		 * cpu_tlbstate.is_lazy whether or not to send an IPI.
 		 */
 		if (WARN_ON_ONCE(real_prev != &init_mm &&
 				 !cpumask_test_cpu(cpu, mm_cpumask(next))))
 			cpumask_set_cpu(cpu, mm_cpumask(next));
 
-		return;
+		/*
+		 * If the CPU is not in lazy TLB mode, we are just switching
+		 * from one thread in a process to another thread in the same
+		 * process. No TLB flush required.
+		 */
+		if (!was_lazy)
+			return;
+
+		/*
+		 * Read the tlb_gen to check whether a flush is needed.
+		 * If the TLB is up to date, just use it.
+		 * The barrier synchronizes with the tlb_gen increment in
+		 * the TLB shootdown code.
+		 */
+		smp_mb();
+		next_tlb_gen = atomic64_read(&next->context.tlb_gen);
+		if (this_cpu_read(cpu_tlbstate.ctxs[prev_asid].tlb_gen) ==
+				next_tlb_gen)
+			return;
+
+		/*
+		 * TLB contents went out of date while we were in lazy
+		 * mode. Fall through to the TLB switching code below.
+		 */
+		new_asid = prev_asid;
+		need_flush = true;
 	} else {
-		u16 new_asid;
-		bool need_flush;
 		u64 last_ctx_id = this_cpu_read(cpu_tlbstate.last_ctx_id);
 
 		/*
@@ -285,53 +310,60 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
 			sync_current_stack_to_mm(next);
 		}
 
-		/* Stop remote flushes for the previous mm */
-		VM_WARN_ON_ONCE(!cpumask_test_cpu(cpu, mm_cpumask(real_prev)) &&
-				real_prev != &init_mm);
-		cpumask_clear_cpu(cpu, mm_cpumask(real_prev));
+		/*
+		 * Stop remote flushes for the previous mm.
+		 * Skip kernel threads; we never send init_mm TLB flushing IPIs,
+		 * but the bitmap manipulation can cause cache line contention.
+		 */
+		if (real_prev != &init_mm) {
+			VM_WARN_ON_ONCE(!cpumask_test_cpu(cpu,
+						mm_cpumask(real_prev)));
+			cpumask_clear_cpu(cpu, mm_cpumask(real_prev));
+		}
 
 		/*
 		 * Start remote flushes and then read tlb_gen.
 		 */
-		cpumask_set_cpu(cpu, mm_cpumask(next));
+		if (next != &init_mm)
+			cpumask_set_cpu(cpu, mm_cpumask(next));
 		next_tlb_gen = atomic64_read(&next->context.tlb_gen);
 
 		choose_new_asid(next, next_tlb_gen, &new_asid, &need_flush);
+	}
 
-		if (need_flush) {
-			this_cpu_write(cpu_tlbstate.ctxs[new_asid].ctx_id, next->context.ctx_id);
-			this_cpu_write(cpu_tlbstate.ctxs[new_asid].tlb_gen, next_tlb_gen);
-			load_new_mm_cr3(next->pgd, new_asid, true);
-
-			/*
-			 * NB: This gets called via leave_mm() in the idle path
-			 * where RCU functions differently. Tracing normally
-			 * uses RCU, so we need to use the _rcuidle variant.
-			 *
-			 * (There is no good reason for this. The idle code should
-			 * be rearranged to call this before rcu_idle_enter().)
- */ - trace_tlb_flush_rcuidle(TLB_FLUSH_ON_TASK_SWITCH, TLB_FLUSH_ALL); - } else { - /* The new ASID is already up to date. */ - load_new_mm_cr3(next->pgd, new_asid, false); - - /* See above wrt _rcuidle. */ - trace_tlb_flush_rcuidle(TLB_FLUSH_ON_TASK_SWITCH, 0); - } + if (need_flush) { + this_cpu_write(cpu_tlbstate.ctxs[new_asid].ctx_id, next->context.ctx_id); + this_cpu_write(cpu_tlbstate.ctxs[new_asid].tlb_gen, next_tlb_gen); + load_new_mm_cr3(next->pgd, new_asid, true); /* - * Record last user mm's context id, so we can avoid - * flushing branch buffer with IBPB if we switch back - * to the same user. + * NB: This gets called via leave_mm() in the idle path + * where RCU functions differently. Tracing normally + * uses RCU, so we need to use the _rcuidle variant. + * + * (There is no good reason for this. The idle code should + * be rearranged to call this before rcu_idle_enter().) */ - if (next != &init_mm) - this_cpu_write(cpu_tlbstate.last_ctx_id, next->context.ctx_id); + trace_tlb_flush_rcuidle(TLB_FLUSH_ON_TASK_SWITCH, TLB_FLUSH_ALL); + } else { + /* The new ASID is already up to date. */ + load_new_mm_cr3(next->pgd, new_asid, false); - this_cpu_write(cpu_tlbstate.loaded_mm, next); - this_cpu_write(cpu_tlbstate.loaded_mm_asid, new_asid); + /* See above wrt _rcuidle. */ + trace_tlb_flush_rcuidle(TLB_FLUSH_ON_TASK_SWITCH, 0); } + /* + * Record last user mm's context id, so we can avoid + * flushing branch buffer with IBPB if we switch back + * to the same user. + */ + if (next != &init_mm) + this_cpu_write(cpu_tlbstate.last_ctx_id, next->context.ctx_id); + + this_cpu_write(cpu_tlbstate.loaded_mm, next); + this_cpu_write(cpu_tlbstate.loaded_mm_asid, new_asid); + load_mm_cr4(next); switch_ldt(real_prev, next); } @@ -354,20 +386,7 @@ void enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk) if (this_cpu_read(cpu_tlbstate.loaded_mm) == &init_mm) return; - if (tlb_defer_switch_to_init_mm()) { - /* - * There's a significant optimization that may be possible - * here. We have accurate enough TLB flush tracking that we - * don't need to maintain coherence of TLB per se when we're - * lazy. We do, however, need to maintain coherence of - * paging-structure caches. We could, in principle, leave our - * old mm loaded and only switch to init_mm when - * tlb_remove_page() happens. - */ - this_cpu_write(cpu_tlbstate.is_lazy, true); - } else { - switch_mm(NULL, &init_mm, NULL); - } + this_cpu_write(cpu_tlbstate.is_lazy, true); } /* @@ -454,6 +473,9 @@ static void flush_tlb_func_common(const struct flush_tlb_info *f, * paging-structure cache to avoid speculatively reading * garbage into our TLB. Since switching to init_mm is barely * slower than a minimal flush, just switch to init_mm. + * + * This should be rare, with native_flush_tlb_others skipping + * IPIs to lazy TLB mode CPUs. */ switch_mm_irqs_off(NULL, &init_mm, NULL); return; @@ -560,6 +582,9 @@ static void flush_tlb_func_remote(void *info) void native_flush_tlb_others(const struct cpumask *cpumask, const struct flush_tlb_info *info) { + cpumask_var_t lazymask; + unsigned int cpu; + count_vm_tlb_event(NR_TLB_REMOTE_FLUSH); if (info->end == TLB_FLUSH_ALL) trace_tlb_flush(TLB_REMOTE_SEND_IPI, TLB_FLUSH_ALL); @@ -583,8 +608,6 @@ void native_flush_tlb_others(const struct cpumask *cpumask, * that UV should be updated so that smp_call_function_many(), * etc, are optimal on UV. 
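[Editor's note: the last_ctx_id bookkeeping above exists so the IBPB mitigation can be skipped when returning to the same user mm. A rough sketch of that intent, with hypothetical helper names (the real barrier decision lives elsewhere in switch_mm):]

#include <stdio.h>

static unsigned long long last_ctx_id;	/* stand-in for cpu_tlbstate.last_ctx_id */

static void ibpb(void)
{
	puts("IBPB");	/* placeholder for the real barrier MSR write */
}

static void switch_user_context(unsigned long long next_ctx_id)
{
	/* Only pay for the barrier when the user context actually changes. */
	if (next_ctx_id != last_ctx_id)
		ibpb();
	last_ctx_id = next_ctx_id;
}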
*/ - unsigned int cpu; - cpu = smp_processor_id(); cpumask = uv_flush_tlb_others(cpumask, info); if (cpumask) @@ -592,8 +615,29 @@ void native_flush_tlb_others(const struct cpumask *cpumask, (void *)info, 1); return; } - smp_call_function_many(cpumask, flush_tlb_func_remote, + + /* + * A temporary cpumask is used in order to skip sending IPIs + * to CPUs in lazy TLB state, while keeping them in mm_cpumask(mm). + * If the allocation fails, simply IPI every CPU in mm_cpumask. + */ + if (!alloc_cpumask_var(&lazymask, GFP_ATOMIC)) { + smp_call_function_many(cpumask, flush_tlb_func_remote, (void *)info, 1); + return; + } + + cpumask_copy(lazymask, cpumask); + + for_each_cpu(cpu, lazymask) { + if (per_cpu(cpu_tlbstate.is_lazy, cpu)) + cpumask_clear_cpu(cpu, lazymask); + } + + smp_call_function_many(lazymask, flush_tlb_func_remote, + (void *)info, 1); + + free_cpumask_var(lazymask); } /* @@ -646,6 +690,68 @@ void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start, put_cpu(); } +void tlb_flush_remove_tables_local(void *arg) +{ + struct mm_struct *mm = arg; + + if (this_cpu_read(cpu_tlbstate.loaded_mm) == mm && + this_cpu_read(cpu_tlbstate.is_lazy)) { + /* + * We're in lazy mode. We need to at least flush our + * paging-structure cache to avoid speculatively reading + * garbage into our TLB. Since switching to init_mm is barely + * slower than a minimal flush, just switch to init_mm. + */ + switch_mm_irqs_off(NULL, &init_mm, NULL); + } +} + +static void mm_fill_lazy_tlb_cpu_mask(struct mm_struct *mm, + struct cpumask *lazy_cpus) +{ + int cpu; + + for_each_cpu(cpu, mm_cpumask(mm)) { + if (!per_cpu(cpu_tlbstate.is_lazy, cpu)) + cpumask_set_cpu(cpu, lazy_cpus); + } +} + +void tlb_flush_remove_tables(struct mm_struct *mm) +{ + int cpu = get_cpu(); + cpumask_var_t lazy_cpus; + + if (cpumask_any_but(mm_cpumask(mm), cpu) >= nr_cpu_ids) { + put_cpu(); + return; + } + + if (!zalloc_cpumask_var(&lazy_cpus, GFP_ATOMIC)) { + /* + * If the cpumask allocation fails, do a brute force flush + * on all the CPUs that have this mm loaded. + */ + smp_call_function_many(mm_cpumask(mm), + tlb_flush_remove_tables_local, (void *)mm, 1); + put_cpu(); + return; + } + + /* + * CPUs with !is_lazy either received a TLB flush IPI while the user + * pages in this address range were unmapped, or have context switched + * and reloaded %CR3 since then. + * + * Shootdown IPIs at page table freeing time only need to be sent to + * CPUs that may have out of date TLB contents. 
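[Editor's note: both native_flush_tlb_others() and tlb_flush_remove_tables() above filter the IPI target set against cpu_tlbstate.is_lazy; CPUs left out resynchronize through tlb_gen instead of an interrupt. A self-contained sketch of that filtering, with a plain 64-bit mask standing in for cpumask_var_t:]

#include <stdbool.h>
#include <stdint.h>

#define NR_CPUS 64

static bool cpu_is_lazy[NR_CPUS];	/* stand-in for cpu_tlbstate.is_lazy */

/* Drop lazy CPUs from the IPI targets; they catch up via tlb_gen instead. */
static uint64_t ipi_targets(uint64_t mm_mask)
{
	uint64_t mask = mm_mask;

	for (int cpu = 0; cpu < NR_CPUS; cpu++)
		if (cpu_is_lazy[cpu])
			mask &= ~(1ULL << cpu);
	return mask;
}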
+ */ + mm_fill_lazy_tlb_cpu_mask(mm, lazy_cpus); + smp_call_function_many(lazy_cpus, + tlb_flush_remove_tables_local, (void *)mm, 1); + free_cpumask_var(lazy_cpus); + put_cpu(); +} static void do_flush_tlb_all(void *info) { diff --git a/arch/x86/net/bpf_jit_comp32.c b/arch/x86/net/bpf_jit_comp32.c index 55799873ebe5..8f6cc71e0848 100644 --- a/arch/x86/net/bpf_jit_comp32.c +++ b/arch/x86/net/bpf_jit_comp32.c @@ -1441,8 +1441,8 @@ static void emit_prologue(u8 **pprog, u32 stack_depth) /* sub esp,STACK_SIZE */ EMIT2_off32(0x81, 0xEC, STACK_SIZE); - /* sub ebp,SCRATCH_SIZE+4+12*/ - EMIT3(0x83, add_1reg(0xE8, IA32_EBP), SCRATCH_SIZE + 16); + /* sub ebp,SCRATCH_SIZE+12*/ + EMIT3(0x83, add_1reg(0xE8, IA32_EBP), SCRATCH_SIZE + 12); /* xor ebx,ebx */ EMIT2(0x31, add_2reg(0xC0, IA32_EBX, IA32_EBX)); @@ -1475,8 +1475,8 @@ static void emit_epilogue(u8 **pprog, u32 stack_depth) /* mov edx,dword ptr [ebp+off]*/ EMIT3(0x8B, add_2reg(0x40, IA32_EBP, IA32_EDX), STACK_VAR(r0[1])); - /* add ebp,SCRATCH_SIZE+4+12*/ - EMIT3(0x83, add_1reg(0xC0, IA32_EBP), SCRATCH_SIZE + 16); + /* add ebp,SCRATCH_SIZE+12*/ + EMIT3(0x83, add_1reg(0xC0, IA32_EBP), SCRATCH_SIZE + 12); /* mov ebx,dword ptr [ebp-12]*/ EMIT3(0x8B, add_2reg(0x40, IA32_EBP, IA32_EBX), -12); diff --git a/arch/x86/platform/efi/efi_64.c b/arch/x86/platform/efi/efi_64.c index e01f7ceb9e7a..ee5d08f25ce4 100644 --- a/arch/x86/platform/efi/efi_64.c +++ b/arch/x86/platform/efi/efi_64.c @@ -166,14 +166,14 @@ void __init efi_call_phys_epilog(pgd_t *save_pgd) pgd = pgd_offset_k(pgd_idx * PGDIR_SIZE); set_pgd(pgd_offset_k(pgd_idx * PGDIR_SIZE), save_pgd[pgd_idx]); - if (!(pgd_val(*pgd) & _PAGE_PRESENT)) + if (!pgd_present(*pgd)) continue; for (i = 0; i < PTRS_PER_P4D; i++) { p4d = p4d_offset(pgd, pgd_idx * PGDIR_SIZE + i * P4D_SIZE); - if (!(p4d_val(*p4d) & _PAGE_PRESENT)) + if (!p4d_present(*p4d)) continue; pud = (pud_t *)p4d_page_vaddr(*p4d); @@ -417,7 +417,7 @@ static void __init __map_region(efi_memory_desc_t *md, u64 va) if (!(md->attribute & EFI_MEMORY_WB)) flags |= _PAGE_PCD; - if (sev_active()) + if (sev_active() && md->type != EFI_MEMORY_MAPPED_IO) flags |= _PAGE_ENC; pfn = md->phys_addr >> PAGE_SHIFT; @@ -636,6 +636,8 @@ void efi_switch_mm(struct mm_struct *mm) #ifdef CONFIG_EFI_MIXED extern efi_status_t efi64_thunk(u32, ...); +static DEFINE_SPINLOCK(efi_runtime_lock); + #define runtime_service32(func) \ ({ \ u32 table = (u32)(unsigned long)efi.systab; \ @@ -657,17 +659,14 @@ extern efi_status_t efi64_thunk(u32, ...); #define efi_thunk(f, ...) 
\ ({ \ efi_status_t __s; \ - unsigned long __flags; \ u32 __func; \ \ - local_irq_save(__flags); \ arch_efi_call_virt_setup(); \ \ __func = runtime_service32(f); \ __s = efi64_thunk(__func, __VA_ARGS__); \ \ arch_efi_call_virt_teardown(); \ - local_irq_restore(__flags); \ \ __s; \ }) @@ -702,14 +701,17 @@ static efi_status_t efi_thunk_get_time(efi_time_t *tm, efi_time_cap_t *tc) { efi_status_t status; u32 phys_tm, phys_tc; + unsigned long flags; spin_lock(&rtc_lock); + spin_lock_irqsave(&efi_runtime_lock, flags); phys_tm = virt_to_phys_or_null(tm); phys_tc = virt_to_phys_or_null(tc); status = efi_thunk(get_time, phys_tm, phys_tc); + spin_unlock_irqrestore(&efi_runtime_lock, flags); spin_unlock(&rtc_lock); return status; @@ -719,13 +721,16 @@ static efi_status_t efi_thunk_set_time(efi_time_t *tm) { efi_status_t status; u32 phys_tm; + unsigned long flags; spin_lock(&rtc_lock); + spin_lock_irqsave(&efi_runtime_lock, flags); phys_tm = virt_to_phys_or_null(tm); status = efi_thunk(set_time, phys_tm); + spin_unlock_irqrestore(&efi_runtime_lock, flags); spin_unlock(&rtc_lock); return status; @@ -737,8 +742,10 @@ efi_thunk_get_wakeup_time(efi_bool_t *enabled, efi_bool_t *pending, { efi_status_t status; u32 phys_enabled, phys_pending, phys_tm; + unsigned long flags; spin_lock(&rtc_lock); + spin_lock_irqsave(&efi_runtime_lock, flags); phys_enabled = virt_to_phys_or_null(enabled); phys_pending = virt_to_phys_or_null(pending); @@ -747,6 +754,7 @@ efi_thunk_get_wakeup_time(efi_bool_t *enabled, efi_bool_t *pending, status = efi_thunk(get_wakeup_time, phys_enabled, phys_pending, phys_tm); + spin_unlock_irqrestore(&efi_runtime_lock, flags); spin_unlock(&rtc_lock); return status; @@ -757,13 +765,16 @@ efi_thunk_set_wakeup_time(efi_bool_t enabled, efi_time_t *tm) { efi_status_t status; u32 phys_tm; + unsigned long flags; spin_lock(&rtc_lock); + spin_lock_irqsave(&efi_runtime_lock, flags); phys_tm = virt_to_phys_or_null(tm); status = efi_thunk(set_wakeup_time, enabled, phys_tm); + spin_unlock_irqrestore(&efi_runtime_lock, flags); spin_unlock(&rtc_lock); return status; @@ -781,6 +792,9 @@ efi_thunk_get_variable(efi_char16_t *name, efi_guid_t *vendor, efi_status_t status; u32 phys_name, phys_vendor, phys_attr; u32 phys_data_size, phys_data; + unsigned long flags; + + spin_lock_irqsave(&efi_runtime_lock, flags); phys_data_size = virt_to_phys_or_null(data_size); phys_vendor = virt_to_phys_or_null(vendor); @@ -791,6 +805,8 @@ efi_thunk_get_variable(efi_char16_t *name, efi_guid_t *vendor, status = efi_thunk(get_variable, phys_name, phys_vendor, phys_attr, phys_data_size, phys_data); + spin_unlock_irqrestore(&efi_runtime_lock, flags); + return status; } @@ -800,6 +816,34 @@ efi_thunk_set_variable(efi_char16_t *name, efi_guid_t *vendor, { u32 phys_name, phys_vendor, phys_data; efi_status_t status; + unsigned long flags; + + spin_lock_irqsave(&efi_runtime_lock, flags); + + phys_name = virt_to_phys_or_null_size(name, efi_name_size(name)); + phys_vendor = virt_to_phys_or_null(vendor); + phys_data = virt_to_phys_or_null_size(data, data_size); + + /* If data_size is > sizeof(u32) we've got problems */ + status = efi_thunk(set_variable, phys_name, phys_vendor, + attr, data_size, phys_data); + + spin_unlock_irqrestore(&efi_runtime_lock, flags); + + return status; +} + +static efi_status_t +efi_thunk_set_variable_nonblocking(efi_char16_t *name, efi_guid_t *vendor, + u32 attr, unsigned long data_size, + void *data) +{ + u32 phys_name, phys_vendor, phys_data; + efi_status_t status; + unsigned long flags; + + if 
(!spin_trylock_irqsave(&efi_runtime_lock, flags)) + return EFI_NOT_READY; phys_name = virt_to_phys_or_null_size(name, efi_name_size(name)); phys_vendor = virt_to_phys_or_null(vendor); @@ -809,6 +853,8 @@ efi_thunk_set_variable(efi_char16_t *name, efi_guid_t *vendor, status = efi_thunk(set_variable, phys_name, phys_vendor, attr, data_size, phys_data); + spin_unlock_irqrestore(&efi_runtime_lock, flags); + return status; } @@ -819,6 +865,9 @@ efi_thunk_get_next_variable(unsigned long *name_size, { efi_status_t status; u32 phys_name_size, phys_name, phys_vendor; + unsigned long flags; + + spin_lock_irqsave(&efi_runtime_lock, flags); phys_name_size = virt_to_phys_or_null(name_size); phys_vendor = virt_to_phys_or_null(vendor); @@ -827,6 +876,8 @@ efi_thunk_get_next_variable(unsigned long *name_size, status = efi_thunk(get_next_variable, phys_name_size, phys_name, phys_vendor); + spin_unlock_irqrestore(&efi_runtime_lock, flags); + return status; } @@ -835,10 +886,15 @@ efi_thunk_get_next_high_mono_count(u32 *count) { efi_status_t status; u32 phys_count; + unsigned long flags; + + spin_lock_irqsave(&efi_runtime_lock, flags); phys_count = virt_to_phys_or_null(count); status = efi_thunk(get_next_high_mono_count, phys_count); + spin_unlock_irqrestore(&efi_runtime_lock, flags); + return status; } @@ -847,10 +903,15 @@ efi_thunk_reset_system(int reset_type, efi_status_t status, unsigned long data_size, efi_char16_t *data) { u32 phys_data; + unsigned long flags; + + spin_lock_irqsave(&efi_runtime_lock, flags); phys_data = virt_to_phys_or_null_size(data, data_size); efi_thunk(reset_system, reset_type, status, data_size, phys_data); + + spin_unlock_irqrestore(&efi_runtime_lock, flags); } static efi_status_t @@ -872,10 +933,40 @@ efi_thunk_query_variable_info(u32 attr, u64 *storage_space, { efi_status_t status; u32 phys_storage, phys_remaining, phys_max; + unsigned long flags; + + if (efi.runtime_version < EFI_2_00_SYSTEM_TABLE_REVISION) + return EFI_UNSUPPORTED; + + spin_lock_irqsave(&efi_runtime_lock, flags); + + phys_storage = virt_to_phys_or_null(storage_space); + phys_remaining = virt_to_phys_or_null(remaining_space); + phys_max = virt_to_phys_or_null(max_variable_size); + + status = efi_thunk(query_variable_info, attr, phys_storage, + phys_remaining, phys_max); + + spin_unlock_irqrestore(&efi_runtime_lock, flags); + + return status; +} + +static efi_status_t +efi_thunk_query_variable_info_nonblocking(u32 attr, u64 *storage_space, + u64 *remaining_space, + u64 *max_variable_size) +{ + efi_status_t status; + u32 phys_storage, phys_remaining, phys_max; + unsigned long flags; if (efi.runtime_version < EFI_2_00_SYSTEM_TABLE_REVISION) return EFI_UNSUPPORTED; + if (!spin_trylock_irqsave(&efi_runtime_lock, flags)) + return EFI_NOT_READY; + phys_storage = virt_to_phys_or_null(storage_space); phys_remaining = virt_to_phys_or_null(remaining_space); phys_max = virt_to_phys_or_null(max_variable_size); @@ -883,6 +974,8 @@ efi_thunk_query_variable_info(u32 attr, u64 *storage_space, status = efi_thunk(query_variable_info, attr, phys_storage, phys_remaining, phys_max); + spin_unlock_irqrestore(&efi_runtime_lock, flags); + return status; } @@ -908,9 +1001,11 @@ void efi_thunk_runtime_setup(void) efi.get_variable = efi_thunk_get_variable; efi.get_next_variable = efi_thunk_get_next_variable; efi.set_variable = efi_thunk_set_variable; + efi.set_variable_nonblocking = efi_thunk_set_variable_nonblocking; efi.get_next_high_mono_count = efi_thunk_get_next_high_mono_count; efi.reset_system = efi_thunk_reset_system; 
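[Editor's note: the *_nonblocking thunk variants above follow a common pattern: a caller that must not sleep or spin (for instance while already holding another lock) tries the lock once and reports "not ready" instead of waiting. A user-space sketch of the same shape, with demo status codes standing in for EFI_NOT_READY:]

#include <pthread.h>

#define DEMO_SUCCESS	0
#define DEMO_NOT_READY	1	/* stand-in for EFI_NOT_READY */

static pthread_mutex_t runtime_lock = PTHREAD_MUTEX_INITIALIZER;

static int set_variable_nonblocking_demo(void)
{
	/* Must not block here, so try once and bail out if contended. */
	if (pthread_mutex_trylock(&runtime_lock) != 0)
		return DEMO_NOT_READY;	/* caller may retry later */

	/* ... the actual firmware call would go here ... */

	pthread_mutex_unlock(&runtime_lock);
	return DEMO_SUCCESS;
}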
efi.query_variable_info = efi_thunk_query_variable_info; + efi.query_variable_info_nonblocking = efi_thunk_query_variable_info_nonblocking; efi.update_capsule = efi_thunk_update_capsule; efi.query_capsule_caps = efi_thunk_query_capsule_caps; } diff --git a/arch/x86/platform/efi/quirks.c b/arch/x86/platform/efi/quirks.c index 36c1f8b9f7e0..844d31cb8a0c 100644 --- a/arch/x86/platform/efi/quirks.c +++ b/arch/x86/platform/efi/quirks.c @@ -105,12 +105,11 @@ early_param("efi_no_storage_paranoia", setup_storage_paranoia); */ void efi_delete_dummy_variable(void) { - efi.set_variable((efi_char16_t *)efi_dummy_name, - &EFI_DUMMY_GUID, - EFI_VARIABLE_NON_VOLATILE | - EFI_VARIABLE_BOOTSERVICE_ACCESS | - EFI_VARIABLE_RUNTIME_ACCESS, - 0, NULL); + efi.set_variable_nonblocking((efi_char16_t *)efi_dummy_name, + &EFI_DUMMY_GUID, + EFI_VARIABLE_NON_VOLATILE | + EFI_VARIABLE_BOOTSERVICE_ACCESS | + EFI_VARIABLE_RUNTIME_ACCESS, 0, NULL); } /* @@ -249,7 +248,8 @@ void __init efi_arch_mem_reserve(phys_addr_t addr, u64 size) int num_entries; void *new; - if (efi_mem_desc_lookup(addr, &md)) { + if (efi_mem_desc_lookup(addr, &md) || + md.type != EFI_BOOT_SERVICES_DATA) { pr_err("Failed to lookup EFI memory descriptor for %pa\n", &addr); return; } diff --git a/arch/x86/platform/intel-mid/Makefile b/arch/x86/platform/intel-mid/Makefile index fa021dfab088..5cf886c867c2 100644 --- a/arch/x86/platform/intel-mid/Makefile +++ b/arch/x86/platform/intel-mid/Makefile @@ -1,4 +1,4 @@ -obj-$(CONFIG_X86_INTEL_MID) += intel-mid.o intel_mid_vrtc.o mfld.o mrfld.o pwr.o +obj-$(CONFIG_X86_INTEL_MID) += intel-mid.o intel_mid_vrtc.o pwr.o # SFI specific code ifdef CONFIG_X86_INTEL_MID diff --git a/arch/x86/platform/intel-mid/intel-mid.c b/arch/x86/platform/intel-mid/intel-mid.c index 2ebdf31d9996..56f66eafb94f 100644 --- a/arch/x86/platform/intel-mid/intel-mid.c +++ b/arch/x86/platform/intel-mid/intel-mid.c @@ -36,8 +36,6 @@ #include #include -#include "intel_mid_weak_decls.h" - /* * the clockevent devices on Moorestown/Medfield can be APBT or LAPIC clock, * cmdline option x86_intel_mid_timer can be used to override the configuration @@ -61,10 +59,6 @@ enum intel_mid_timer_options intel_mid_timer_options; -/* intel_mid_ops to store sub arch ops */ -static struct intel_mid_ops *intel_mid_ops; -/* getter function for sub arch ops*/ -static void *(*get_intel_mid_ops[])(void) = INTEL_MID_OPS_INIT; enum intel_mid_cpu_type __intel_mid_cpu_chip; EXPORT_SYMBOL_GPL(__intel_mid_cpu_chip); @@ -82,11 +76,6 @@ static void intel_mid_reboot(void) intel_scu_ipc_simple_command(IPCMSG_COLD_RESET, 0); } -static unsigned long __init intel_mid_calibrate_tsc(void) -{ - return 0; -} - static void __init intel_mid_setup_bp_timer(void) { apbt_time_init(); @@ -133,6 +122,7 @@ static void intel_mid_arch_setup(void) case 0x3C: case 0x4A: __intel_mid_cpu_chip = INTEL_MID_CPU_CHIP_TANGIER; + x86_platform.legacy.rtc = 1; break; case 0x27: default: @@ -140,17 +130,7 @@ static void intel_mid_arch_setup(void) break; } - if (__intel_mid_cpu_chip < MAX_CPU_OPS(get_intel_mid_ops)) - intel_mid_ops = get_intel_mid_ops[__intel_mid_cpu_chip](); - else { - intel_mid_ops = get_intel_mid_ops[INTEL_MID_CPU_CHIP_PENWELL](); - pr_info("ARCH: Unknown SoC, assuming Penwell!\n"); - } - out: - if (intel_mid_ops->arch_setup) - intel_mid_ops->arch_setup(); - /* * Intel MID platforms are using explicitly defined regulators. 
* @@ -191,7 +171,6 @@ void __init x86_intel_mid_early_setup(void) x86_cpuinit.setup_percpu_clockev = apbt_setup_secondary_clock; - x86_platform.calibrate_tsc = intel_mid_calibrate_tsc; x86_platform.get_nmi_reason = intel_mid_get_nmi_reason; x86_init.pci.arch_init = intel_mid_pci_init; diff --git a/arch/x86/platform/intel-mid/intel_mid_weak_decls.h b/arch/x86/platform/intel-mid/intel_mid_weak_decls.h deleted file mode 100644 index 3c1c3866d82b..000000000000 --- a/arch/x86/platform/intel-mid/intel_mid_weak_decls.h +++ /dev/null @@ -1,18 +0,0 @@ -/* - * intel_mid_weak_decls.h: Weak declarations of intel-mid.c - * - * (C) Copyright 2013 Intel Corporation - * - * This program is free software; you can redistribute it and/or - * modify it under the terms of the GNU General Public License - * as published by the Free Software Foundation; version 2 - * of the License. - */ - - -/* For every CPU addition a new get__ops interface needs - * to be added. - */ -extern void *get_penwell_ops(void); -extern void *get_cloverview_ops(void); -extern void *get_tangier_ops(void); diff --git a/arch/x86/platform/intel-mid/mfld.c b/arch/x86/platform/intel-mid/mfld.c deleted file mode 100644 index e42978d4deaf..000000000000 --- a/arch/x86/platform/intel-mid/mfld.c +++ /dev/null @@ -1,70 +0,0 @@ -/* - * mfld.c: Intel Medfield platform setup code - * - * (C) Copyright 2013 Intel Corporation - * - * This program is free software; you can redistribute it and/or - * modify it under the terms of the GNU General Public License - * as published by the Free Software Foundation; version 2 - * of the License. - */ - -#include - -#include -#include -#include - -#include "intel_mid_weak_decls.h" - -static unsigned long __init mfld_calibrate_tsc(void) -{ - unsigned long fast_calibrate; - u32 lo, hi, ratio, fsb; - - rdmsr(MSR_IA32_PERF_STATUS, lo, hi); - pr_debug("IA32 perf status is 0x%x, 0x%0x\n", lo, hi); - ratio = (hi >> 8) & 0x1f; - pr_debug("ratio is %d\n", ratio); - if (!ratio) { - pr_err("read a zero ratio, should be incorrect!\n"); - pr_err("force tsc ratio to 16 ...\n"); - ratio = 16; - } - rdmsr(MSR_FSB_FREQ, lo, hi); - if ((lo & 0x7) == 0x7) - fsb = FSB_FREQ_83SKU; - else - fsb = FSB_FREQ_100SKU; - fast_calibrate = ratio * fsb; - pr_debug("read penwell tsc %lu khz\n", fast_calibrate); - lapic_timer_frequency = fsb * 1000 / HZ; - - /* - * TSC on Intel Atom SoCs is reliable and of known frequency. - * See tsc_msr.c for details. - */ - setup_force_cpu_cap(X86_FEATURE_TSC_KNOWN_FREQ); - setup_force_cpu_cap(X86_FEATURE_TSC_RELIABLE); - - return fast_calibrate; -} - -static void __init penwell_arch_setup(void) -{ - x86_platform.calibrate_tsc = mfld_calibrate_tsc; -} - -static struct intel_mid_ops penwell_ops = { - .arch_setup = penwell_arch_setup, -}; - -void *get_penwell_ops(void) -{ - return &penwell_ops; -} - -void *get_cloverview_ops(void) -{ - return &penwell_ops; -} diff --git a/arch/x86/platform/intel-mid/mrfld.c b/arch/x86/platform/intel-mid/mrfld.c deleted file mode 100644 index ae7bdeb0e507..000000000000 --- a/arch/x86/platform/intel-mid/mrfld.c +++ /dev/null @@ -1,105 +0,0 @@ -/* - * Intel Merrifield platform specific setup code - * - * (C) Copyright 2013 Intel Corporation - * - * This program is free software; you can redistribute it and/or - * modify it under the terms of the GNU General Public License - * as published by the Free Software Foundation; version 2 - * of the License. 
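[Editor's note: the deleted mfld.c above computed the TSC frequency as bus ratio times FSB frequency, duplicating what tsc_msr.c now does for these SoCs. The arithmetic, for reference, with hypothetical fixed inputs in place of the MSR reads:]

#include <stdio.h>

#define HZ		1000
#define FSB_100SKU_KHZ	100000	/* illustrative, not the exact SKU value */

int main(void)
{
	/* In the removed code these came from MSR_IA32_PERF_STATUS
	 * and MSR_FSB_FREQ; fixed here for the sake of the example. */
	unsigned int ratio = 16;
	unsigned int fsb_khz = FSB_100SKU_KHZ;

	unsigned long tsc_khz = (unsigned long)ratio * fsb_khz;	/* 1.6 GHz */
	unsigned int lapic_per_jiffy = fsb_khz * 1000 / HZ;

	printf("TSC %lu kHz, LAPIC timer %u counts/jiffy\n",
	       tsc_khz, lapic_per_jiffy);
	return 0;
}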
- */ - -#include - -#include -#include - -#include "intel_mid_weak_decls.h" - -static unsigned long __init tangier_calibrate_tsc(void) -{ - unsigned long fast_calibrate; - u32 lo, hi, ratio, fsb, bus_freq; - - /* *********************** */ - /* Compute TSC:Ratio * FSB */ - /* *********************** */ - - /* Compute Ratio */ - rdmsr(MSR_PLATFORM_INFO, lo, hi); - pr_debug("IA32 PLATFORM_INFO is 0x%x : %x\n", hi, lo); - - ratio = (lo >> 8) & 0xFF; - pr_debug("ratio is %d\n", ratio); - if (!ratio) { - pr_err("Read a zero ratio, force tsc ratio to 4 ...\n"); - ratio = 4; - } - - /* Compute FSB */ - rdmsr(MSR_FSB_FREQ, lo, hi); - pr_debug("Actual FSB frequency detected by SOC 0x%x : %x\n", - hi, lo); - - bus_freq = lo & 0x7; - pr_debug("bus_freq = 0x%x\n", bus_freq); - - if (bus_freq == 0) - fsb = FSB_FREQ_100SKU; - else if (bus_freq == 1) - fsb = FSB_FREQ_100SKU; - else if (bus_freq == 2) - fsb = FSB_FREQ_133SKU; - else if (bus_freq == 3) - fsb = FSB_FREQ_167SKU; - else if (bus_freq == 4) - fsb = FSB_FREQ_83SKU; - else if (bus_freq == 5) - fsb = FSB_FREQ_400SKU; - else if (bus_freq == 6) - fsb = FSB_FREQ_267SKU; - else if (bus_freq == 7) - fsb = FSB_FREQ_333SKU; - else { - BUG(); - pr_err("Invalid bus_freq! Setting to minimal value!\n"); - fsb = FSB_FREQ_100SKU; - } - - /* TSC = FSB Freq * Resolved HFM Ratio */ - fast_calibrate = ratio * fsb; - pr_debug("calculate tangier tsc %lu KHz\n", fast_calibrate); - - /* ************************************ */ - /* Calculate Local APIC Timer Frequency */ - /* ************************************ */ - lapic_timer_frequency = (fsb * 1000) / HZ; - - pr_debug("Setting lapic_timer_frequency = %d\n", - lapic_timer_frequency); - - /* - * TSC on Intel Atom SoCs is reliable and of known frequency. - * See tsc_msr.c for details. - */ - setup_force_cpu_cap(X86_FEATURE_TSC_KNOWN_FREQ); - setup_force_cpu_cap(X86_FEATURE_TSC_RELIABLE); - - return fast_calibrate; -} - -static void __init tangier_arch_setup(void) -{ - x86_platform.calibrate_tsc = tangier_calibrate_tsc; - x86_platform.legacy.rtc = 1; -} - -/* tangier arch ops */ -static struct intel_mid_ops tangier_ops = { - .arch_setup = tangier_arch_setup, -}; - -void *get_tangier_ops(void) -{ - return &tangier_ops; -} diff --git a/arch/x86/platform/olpc/olpc.c b/arch/x86/platform/olpc/olpc.c index 7c3077e58fa0..f0e920fb98ad 100644 --- a/arch/x86/platform/olpc/olpc.c +++ b/arch/x86/platform/olpc/olpc.c @@ -311,10 +311,8 @@ static int __init add_xo1_platform_devices(void) return PTR_ERR(pdev); pdev = platform_device_register_simple("olpc-xo1", -1, NULL, 0); - if (IS_ERR(pdev)) - return PTR_ERR(pdev); - return 0; + return PTR_ERR_OR_ZERO(pdev); } static int olpc_xo1_ec_probe(struct platform_device *pdev) diff --git a/arch/x86/platform/uv/tlb_uv.c b/arch/x86/platform/uv/tlb_uv.c index ca446da48fd2..e26dfad507c8 100644 --- a/arch/x86/platform/uv/tlb_uv.c +++ b/arch/x86/platform/uv/tlb_uv.c @@ -1607,8 +1607,6 @@ static int parse_tunables_write(struct bau_control *bcp, char *instr, *tunables[cnt].tunp = val; continue; } - if (q == p) - break; } return 0; } diff --git a/arch/x86/power/hibernate_asm_64.S b/arch/x86/power/hibernate_asm_64.S index ce8da3a0412c..fd369a6e9ff8 100644 --- a/arch/x86/power/hibernate_asm_64.S +++ b/arch/x86/power/hibernate_asm_64.S @@ -137,7 +137,7 @@ ENTRY(restore_registers) /* Saved in save_processor_state. 
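[Editor's note: the deleted tangier_calibrate_tsc() above decodes a 3-bit bus-frequency field with a long if/else ladder; the same mapping reads more clearly as a lookup table. A sketch, with illustrative kHz values in place of the FSB_FREQ_* constants:]

static const unsigned int tangier_fsb_khz[8] = {
	100000,	/* 0: FSB_FREQ_100SKU */
	100000,	/* 1: FSB_FREQ_100SKU */
	133000,	/* 2: FSB_FREQ_133SKU */
	167000,	/* 3: FSB_FREQ_167SKU */
	 83200,	/* 4: FSB_FREQ_83SKU  */
	400000,	/* 5: FSB_FREQ_400SKU */
	267000,	/* 6: FSB_FREQ_267SKU */
	333000,	/* 7: FSB_FREQ_333SKU */
};

static unsigned int tangier_fsb(unsigned int bus_freq)
{
	return tangier_fsb_khz[bus_freq & 0x7];
}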
*/ lgdt saved_context_gdt_desc(%rax) - xorq %rax, %rax + xorl %eax, %eax /* tell the hibernation core that we've just restored the memory */ movq %rax, in_suspend(%rip) diff --git a/arch/x86/purgatory/Makefile b/arch/x86/purgatory/Makefile index 2e9ee023e6bc..81a8e33115ad 100644 --- a/arch/x86/purgatory/Makefile +++ b/arch/x86/purgatory/Makefile @@ -6,7 +6,7 @@ purgatory-y := purgatory.o stack.o setup-x86_$(BITS).o sha256.o entry64.o string targets += $(purgatory-y) PURGATORY_OBJS = $(addprefix $(obj)/,$(purgatory-y)) -$(obj)/sha256.o: $(srctree)/lib/sha256.c +$(obj)/sha256.o: $(srctree)/lib/sha256.c FORCE $(call if_changed_rule,cc_o_c) LDFLAGS_purgatory.ro := -e purgatory_start -r --no-undefined -nostdlib -z nodefaultlib diff --git a/arch/x86/tools/relocs.c b/arch/x86/tools/relocs.c index 220e97841e49..3a6c8ebc8032 100644 --- a/arch/x86/tools/relocs.c +++ b/arch/x86/tools/relocs.c @@ -67,6 +67,7 @@ static const char * const sym_regex_kernel[S_NSYMTYPES] = { "__tracedata_(start|end)|" "__(start|stop)_notes|" "__end_rodata|" + "__end_rodata_aligned|" "__initramfs_start|" "(jiffies|jiffies_64)|" #if ELF_BITS == 64 diff --git a/arch/x86/um/mem_32.c b/arch/x86/um/mem_32.c index 744afdc18cf3..56c44d865f7b 100644 --- a/arch/x86/um/mem_32.c +++ b/arch/x86/um/mem_32.c @@ -16,7 +16,7 @@ static int __init gate_vma_init(void) if (!FIXADDR_USER_START) return 0; - gate_vma.vm_mm = NULL; + vma_init(&gate_vma, NULL); gate_vma.vm_start = FIXADDR_USER_START; gate_vma.vm_end = FIXADDR_USER_END; gate_vma.vm_flags = VM_READ | VM_MAYREAD | VM_EXEC | VM_MAYEXEC; diff --git a/arch/x86/um/vdso/.gitignore b/arch/x86/um/vdso/.gitignore index 9cac6d072199..f8b69d84238e 100644 --- a/arch/x86/um/vdso/.gitignore +++ b/arch/x86/um/vdso/.gitignore @@ -1,2 +1 @@ -vdso-syms.lds vdso.lds diff --git a/arch/x86/um/vdso/Makefile b/arch/x86/um/vdso/Makefile index b2d6967262b2..822ccdba93ad 100644 --- a/arch/x86/um/vdso/Makefile +++ b/arch/x86/um/vdso/Makefile @@ -53,22 +53,6 @@ $(vobjs): KBUILD_CFLAGS += $(CFL) CFLAGS_REMOVE_vdso-note.o = -pg -fprofile-arcs -ftest-coverage CFLAGS_REMOVE_um_vdso.o = -pg -fprofile-arcs -ftest-coverage -targets += vdso-syms.lds -extra-$(VDSO64-y) += vdso-syms.lds - -# -# Match symbols in the DSO that look like VDSO*; produce a file of constants. -# -sed-vdsosym := -e 's/^00*/0/' \ - -e 's/^\([0-9a-fA-F]*\) . \(VDSO[a-zA-Z0-9_]*\)$$/\2 = 0x\1;/p' -quiet_cmd_vdsosym = VDSOSYM $@ -define cmd_vdsosym - $(NM) $< | LC_ALL=C sed -n $(sed-vdsosym) | LC_ALL=C sort > $@ -endef - -$(obj)/%-syms.lds: $(obj)/%.so.dbg FORCE - $(call if_changed,vdsosym) - # # The DSO images are built using a special linker script. # diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c index c9081c6671f0..3b5318505c69 100644 --- a/arch/x86/xen/enlighten.c +++ b/arch/x86/xen/enlighten.c @@ -64,6 +64,13 @@ struct shared_info xen_dummy_shared_info; __read_mostly int xen_have_vector_callback; EXPORT_SYMBOL_GPL(xen_have_vector_callback); +/* + * NB: needs to live in .data because it's used by xen_prepare_pvh which runs + * before clearing the bss. + */ +uint32_t xen_start_flags __attribute__((section(".data"))) = 0; +EXPORT_SYMBOL(xen_start_flags); + /* * Point at some empty memory to start with. We map the real shared_info * page as soon as fixmap is up and running. 
diff --git a/arch/x86/xen/enlighten_pv.c b/arch/x86/xen/enlighten_pv.c index 357969a3697c..105a57d73701 100644 --- a/arch/x86/xen/enlighten_pv.c +++ b/arch/x86/xen/enlighten_pv.c @@ -119,6 +119,27 @@ static void __init xen_banner(void) version >> 16, version & 0xffff, extra.extraversion, xen_feature(XENFEAT_mmu_pt_update_preserve_ad) ? " (preserve-AD)" : ""); } + +static void __init xen_pv_init_platform(void) +{ + set_fixmap(FIX_PARAVIRT_BOOTMAP, xen_start_info->shared_info); + HYPERVISOR_shared_info = (void *)fix_to_virt(FIX_PARAVIRT_BOOTMAP); + + /* xen clock uses per-cpu vcpu_info, need to init it for boot cpu */ + xen_vcpu_info_reset(0); + + /* pvclock is in shared info area */ + xen_init_time_ops(); +} + +static void __init xen_pv_guest_late_init(void) +{ +#ifndef CONFIG_SMP + /* Setup shared vcpu info for non-smp configurations */ + xen_setup_vcpu_info_placement(); +#endif +} + /* Check if running on Xen version (major, minor) or later */ bool xen_running_on_version_or_later(unsigned int major, unsigned int minor) @@ -947,34 +968,8 @@ static void xen_write_msr(unsigned int msr, unsigned low, unsigned high) xen_write_msr_safe(msr, low, high); } -void xen_setup_shared_info(void) -{ - set_fixmap(FIX_PARAVIRT_BOOTMAP, xen_start_info->shared_info); - - HYPERVISOR_shared_info = - (struct shared_info *)fix_to_virt(FIX_PARAVIRT_BOOTMAP); - - xen_setup_mfn_list_list(); - - if (system_state == SYSTEM_BOOTING) { -#ifndef CONFIG_SMP - /* - * In UP this is as good a place as any to set up shared info. - * Limit this to boot only, at restore vcpu setup is done via - * xen_vcpu_restore(). - */ - xen_setup_vcpu_info_placement(); -#endif - /* - * Now that shared info is set up we can start using routines - * that point to pvclock area. - */ - xen_init_time_ops(); - } -} - /* This is called once we have the cpu_possible_mask */ -void __ref xen_setup_vcpu_info_placement(void) +void __init xen_setup_vcpu_info_placement(void) { int cpu; @@ -1203,15 +1198,24 @@ asmlinkage __visible void __init xen_start_kernel(void) return; xen_domain_type = XEN_PV_DOMAIN; + xen_start_flags = xen_start_info->flags; xen_setup_features(); - xen_setup_machphys_mapping(); - /* Install Xen paravirt ops */ pv_info = xen_info; pv_init_ops.patch = paravirt_patch_default; pv_cpu_ops = xen_cpu_ops; + xen_init_irq_ops(); + + /* + * Setup xen_vcpu early because it is needed for + * local_irq_disable(), irqs_disabled(), e.g. in printk(). + * + * Don't do the full vcpu_info placement stuff until we have + * the cpu_possible_mask and a non-dummy shared_info. + */ + xen_vcpu_info_reset(0); x86_platform.get_nmi_reason = xen_get_nmi_reason; @@ -1219,15 +1223,19 @@ asmlinkage __visible void __init xen_start_kernel(void) x86_init.irqs.intr_mode_init = x86_init_noop; x86_init.oem.arch_setup = xen_arch_setup; x86_init.oem.banner = xen_banner; + x86_init.hyper.init_platform = xen_pv_init_platform; + x86_init.hyper.guest_late_init = xen_pv_guest_late_init; /* * Set up some pagetable state before starting to set any ptes. */ + xen_setup_machphys_mapping(); xen_init_mmu_ops(); /* Prevent unwanted bits from being set in PTEs. */ __supported_pte_mask &= ~_PAGE_GLOBAL; + __default_kernel_pte_mask &= ~_PAGE_GLOBAL; /* * Prevent page tables from being allocated in highmem, even @@ -1248,20 +1256,9 @@ asmlinkage __visible void __init xen_start_kernel(void) get_cpu_cap(&boot_cpu_data); x86_configure_nx(); - xen_init_irq_ops(); - /* Let's presume PV guests always boot on vCPU with id 0. 
*/ per_cpu(xen_vcpu_id, 0) = 0; - /* - * Setup xen_vcpu early because idt_setup_early_handler needs it for - * local_irq_disable(), irqs_disabled(). - * - * Don't do the full vcpu_info placement stuff until we have - * the cpu_possible_mask and a non-dummy shared_info. - */ - xen_vcpu_info_reset(0); - idt_setup_early_handler(); xen_init_capabilities(); diff --git a/arch/x86/xen/enlighten_pvh.c b/arch/x86/xen/enlighten_pvh.c index aa1c6a6831a9..c85d1a88f476 100644 --- a/arch/x86/xen/enlighten_pvh.c +++ b/arch/x86/xen/enlighten_pvh.c @@ -97,6 +97,7 @@ void __init xen_prepare_pvh(void) } xen_pvh = 1; + xen_start_flags = pvh_start_info.flags; msr = cpuid_ebx(xen_cpuid_base() + 2); pfn = __pa(hypercall_page); diff --git a/arch/x86/xen/irq.c b/arch/x86/xen/irq.c index 74179852e46c..7515a19fd324 100644 --- a/arch/x86/xen/irq.c +++ b/arch/x86/xen/irq.c @@ -128,8 +128,6 @@ static const struct pv_irq_ops xen_irq_ops __initconst = { void __init xen_init_irq_ops(void) { - /* For PVH we use default pv_irq_ops settings. */ - if (!xen_feature(XENFEAT_hvm_callback_vector)) - pv_irq_ops = xen_irq_ops; + pv_irq_ops = xen_irq_ops; x86_init.irqs.intr_init = xen_init_IRQ; } diff --git a/arch/x86/xen/mmu_pv.c b/arch/x86/xen/mmu_pv.c index 2c30cabfda90..52206ad81e4b 100644 --- a/arch/x86/xen/mmu_pv.c +++ b/arch/x86/xen/mmu_pv.c @@ -1230,8 +1230,7 @@ static void __init xen_pagetable_p2m_free(void) * We roundup to the PMD, which means that if anybody at this stage is * using the __ka address of xen_start_info or * xen_start_info->shared_info they are in going to crash. Fortunatly - * we have already revectored in xen_setup_kernel_pagetable and in - * xen_setup_shared_info. + * we have already revectored in xen_setup_kernel_pagetable. */ size = roundup(size, PMD_SIZE); @@ -1292,8 +1291,7 @@ static void __init xen_pagetable_init(void) /* Remap memory freed due to conflicts with E820 map */ xen_remap_memory(); - - xen_setup_shared_info(); + xen_setup_mfn_list_list(); } static void xen_write_cr2(unsigned long cr2) { diff --git a/arch/x86/xen/smp_pv.c b/arch/x86/xen/smp_pv.c index 2e20ae2fa2d6..e3b18ad49889 100644 --- a/arch/x86/xen/smp_pv.c +++ b/arch/x86/xen/smp_pv.c @@ -32,6 +32,7 @@ #include #include +#include #include #include @@ -70,6 +71,8 @@ static void cpu_bringup(void) cpu_data(cpu).x86_max_cores = 1; set_cpu_sibling_map(cpu); + speculative_store_bypass_ht_init(); + xen_setup_cpu_clockevents(); notify_cpu_starting(cpu); @@ -250,6 +253,8 @@ static void __init xen_pv_smp_prepare_cpus(unsigned int max_cpus) } set_cpu_sibling_map(0); + speculative_store_bypass_ht_init(); + xen_pmu_init(0); if (xen_smp_intr_init(0) || xen_smp_intr_init_pv(0)) diff --git a/arch/x86/xen/suspend_pv.c b/arch/x86/xen/suspend_pv.c index a2e0f110af56..8303b58c79a9 100644 --- a/arch/x86/xen/suspend_pv.c +++ b/arch/x86/xen/suspend_pv.c @@ -27,8 +27,9 @@ void xen_pv_pre_suspend(void) void xen_pv_post_suspend(int suspend_cancelled) { xen_build_mfn_list_list(); - - xen_setup_shared_info(); + set_fixmap(FIX_PARAVIRT_BOOTMAP, xen_start_info->shared_info); + HYPERVISOR_shared_info = (void *)fix_to_virt(FIX_PARAVIRT_BOOTMAP); + xen_setup_mfn_list_list(); if (suspend_cancelled) { xen_start_info->store_mfn = diff --git a/arch/x86/xen/time.c b/arch/x86/xen/time.c index e0f1bcf01d63..c84f1e039d84 100644 --- a/arch/x86/xen/time.c +++ b/arch/x86/xen/time.c @@ -31,6 +31,8 @@ /* Xen may fire a timer up to this many ns early */ #define TIMER_SLOP 100000 +static u64 xen_sched_clock_offset __read_mostly; + /* Get the TSC speed from Xen */ static unsigned 
long xen_tsc_khz(void) { @@ -40,7 +42,7 @@ static unsigned long xen_tsc_khz(void) return pvclock_tsc_khz(info); } -u64 xen_clocksource_read(void) +static u64 xen_clocksource_read(void) { struct pvclock_vcpu_time_info *src; u64 ret; @@ -57,6 +59,11 @@ static u64 xen_clocksource_get_cycles(struct clocksource *cs) return xen_clocksource_read(); } +static u64 xen_sched_clock(void) +{ + return xen_clocksource_read() - xen_sched_clock_offset; +} + static void xen_read_wallclock(struct timespec64 *ts) { struct shared_info *s = HYPERVISOR_shared_info; @@ -367,7 +374,7 @@ void xen_timer_resume(void) } static const struct pv_time_ops xen_time_ops __initconst = { - .sched_clock = xen_clocksource_read, + .sched_clock = xen_sched_clock, .steal_clock = xen_steal_clock, }; @@ -503,8 +510,9 @@ static void __init xen_time_init(void) pvclock_gtod_register_notifier(&xen_pvclock_gtod_notifier); } -void __ref xen_init_time_ops(void) +void __init xen_init_time_ops(void) { + xen_sched_clock_offset = xen_clocksource_read(); pv_time_ops = xen_time_ops; x86_init.timers.timer_init = xen_time_init; @@ -542,11 +550,11 @@ void __init xen_hvm_init_time_ops(void) return; if (!xen_feature(XENFEAT_hvm_safe_pvclock)) { - printk(KERN_INFO "Xen doesn't support pvclock on HVM," - "disable pv timer\n"); + pr_info("Xen doesn't support pvclock on HVM, disable pv timer"); return; } + xen_sched_clock_offset = xen_clocksource_read(); pv_time_ops = xen_time_ops; x86_init.timers.setup_percpu_clockev = xen_time_init; x86_cpuinit.setup_percpu_clockev = xen_hvm_setup_cpu_clockevents; diff --git a/arch/x86/xen/xen-ops.h b/arch/x86/xen/xen-ops.h index 3b34745d0a52..e78684597f57 100644 --- a/arch/x86/xen/xen-ops.h +++ b/arch/x86/xen/xen-ops.h @@ -31,7 +31,6 @@ extern struct shared_info xen_dummy_shared_info; extern struct shared_info *HYPERVISOR_shared_info; void xen_setup_mfn_list_list(void); -void xen_setup_shared_info(void); void xen_build_mfn_list_list(void); void xen_setup_machphys_mapping(void); void xen_setup_kernel_pagetable(pgd_t *pgd, unsigned long max_pfn); @@ -68,12 +67,11 @@ void xen_init_irq_ops(void); void xen_setup_timer(int cpu); void xen_setup_runstate_info(int cpu); void xen_teardown_timer(int cpu); -u64 xen_clocksource_read(void); void xen_setup_cpu_clockevents(void); void xen_save_time_memory_area(void); void xen_restore_time_memory_area(void); -void __ref xen_init_time_ops(void); -void __init xen_hvm_init_time_ops(void); +void xen_init_time_ops(void); +void xen_hvm_init_time_ops(void); irqreturn_t xen_debug_interrupt(int irq, void *dev_id); diff --git a/arch/xtensa/Kconfig b/arch/xtensa/Kconfig index 17df332269b2..d575e8701955 100644 --- a/arch/xtensa/Kconfig +++ b/arch/xtensa/Kconfig @@ -17,7 +17,6 @@ config XTENSA select GENERIC_SCHED_CLOCK select GENERIC_STRNCPY_FROM_USER if KASAN select HAVE_ARCH_KASAN if MMU - select HAVE_CC_STACKPROTECTOR select HAVE_DEBUG_KMEMLEAK select HAVE_DMA_CONTIGUOUS select HAVE_EXIT_THREAD @@ -28,6 +27,7 @@ config XTENSA select HAVE_MEMBLOCK select HAVE_OPROFILE select HAVE_PERF_EVENTS + select HAVE_STACKPROTECTOR select IRQ_DOMAIN select MODULES_USE_ELF_RELA select NO_BOOTMEM diff --git a/arch/xtensa/include/asm/atomic.h b/arch/xtensa/include/asm/atomic.h index e7a23f2a519a..7de0149e1cf7 100644 --- a/arch/xtensa/include/asm/atomic.h +++ b/arch/xtensa/include/asm/atomic.h @@ -197,107 +197,9 @@ ATOMIC_OPS(xor) #undef ATOMIC_OP_RETURN #undef ATOMIC_OP -/** - * atomic_sub_and_test - subtract value from variable and test result - * @i: integer value to subtract - * @v: pointer of type 
atomic_t - * - * Atomically subtracts @i from @v and returns - * true if the result is zero, or false for all - * other cases. - */ -#define atomic_sub_and_test(i,v) (atomic_sub_return((i),(v)) == 0) - -/** - * atomic_inc - increment atomic variable - * @v: pointer of type atomic_t - * - * Atomically increments @v by 1. - */ -#define atomic_inc(v) atomic_add(1,(v)) - -/** - * atomic_inc - increment atomic variable - * @v: pointer of type atomic_t - * - * Atomically increments @v by 1. - */ -#define atomic_inc_return(v) atomic_add_return(1,(v)) - -/** - * atomic_dec - decrement atomic variable - * @v: pointer of type atomic_t - * - * Atomically decrements @v by 1. - */ -#define atomic_dec(v) atomic_sub(1,(v)) - -/** - * atomic_dec_return - decrement atomic variable - * @v: pointer of type atomic_t - * - * Atomically decrements @v by 1. - */ -#define atomic_dec_return(v) atomic_sub_return(1,(v)) - -/** - * atomic_dec_and_test - decrement and test - * @v: pointer of type atomic_t - * - * Atomically decrements @v by 1 and - * returns true if the result is 0, or false for all other - * cases. - */ -#define atomic_dec_and_test(v) (atomic_sub_return(1,(v)) == 0) - -/** - * atomic_inc_and_test - increment and test - * @v: pointer of type atomic_t - * - * Atomically increments @v by 1 - * and returns true if the result is zero, or false for all - * other cases. - */ -#define atomic_inc_and_test(v) (atomic_add_return(1,(v)) == 0) - -/** - * atomic_add_negative - add and test if negative - * @v: pointer of type atomic_t - * @i: integer value to add - * - * Atomically adds @i to @v and returns true - * if the result is negative, or false when - * result is greater than or equal to zero. - */ -#define atomic_add_negative(i,v) (atomic_add_return((i),(v)) < 0) - #define atomic_cmpxchg(v, o, n) ((int)cmpxchg(&((v)->counter), (o), (n))) #define atomic_xchg(v, new) (xchg(&((v)->counter), new)) -/** - * __atomic_add_unless - add unless the number is a given value - * @v: pointer of type atomic_t - * @a: the amount to add to v... - * @u: ...unless v is equal to u. - * - * Atomically adds @a to @v, so long as it was not @u. - * Returns the old value of @v. - */ -static __inline__ int __atomic_add_unless(atomic_t *v, int a, int u) -{ - int c, old; - c = atomic_read(v); - for (;;) { - if (unlikely(c == (u))) - break; - old = atomic_cmpxchg((v), c, c + (a)); - if (likely(old == c)) - break; - c = old; - } - return c; -} - #endif /* __KERNEL__ */ #endif /* _XTENSA_ATOMIC_H */ diff --git a/arch/xtensa/include/asm/cacheflush.h b/arch/xtensa/include/asm/cacheflush.h index 397d6a1a4224..a0d50be5a8cb 100644 --- a/arch/xtensa/include/asm/cacheflush.h +++ b/arch/xtensa/include/asm/cacheflush.h @@ -88,7 +88,7 @@ static inline void __invalidate_icache_page_alias(unsigned long virt, * * Pages can get remapped. Because this might change the 'color' of that page, * we have to flush the cache before the PTE is changed. 
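[Editor's note: the xtensa-private __atomic_add_unless() removed above is the classic compare-and-swap retry loop, now supplied by generic code. The same pattern rendered in C11 atomics, as a sketch rather than kernel code:]

#include <stdatomic.h>

/* Returns the old value; adds 'a' only if the value was not 'u'. */
static int add_unless(_Atomic int *v, int a, int u)
{
	int c = atomic_load(v);

	while (c != u) {
		/* On failure 'c' is refreshed with the current value. */
		if (atomic_compare_exchange_weak(v, &c, c + a))
			break;
	}
	return c;
}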
- * (see also Documentation/cachetlb.txt) + * (see also Documentation/core-api/cachetlb.rst) */ #if defined(CONFIG_MMU) && \ @@ -152,7 +152,7 @@ void local_flush_cache_page(struct vm_area_struct *vma, __invalidate_icache_range(start,(end) - (start)); \ } while (0) -/* This is not required, see Documentation/cachetlb.txt */ +/* This is not required, see Documentation/core-api/cachetlb.rst */ #define flush_icache_page(vma,page) do { } while (0) #define flush_dcache_mmap_lock(mapping) do { } while (0) diff --git a/arch/xtensa/include/asm/hw_breakpoint.h b/arch/xtensa/include/asm/hw_breakpoint.h index dbe3053b284a..9f119c1ca0b5 100644 --- a/arch/xtensa/include/asm/hw_breakpoint.h +++ b/arch/xtensa/include/asm/hw_breakpoint.h @@ -30,13 +30,16 @@ struct arch_hw_breakpoint { u16 type; }; +struct perf_event_attr; struct perf_event; struct pt_regs; struct task_struct; int hw_breakpoint_slots(int type); -int arch_check_bp_in_kernelspace(struct perf_event *bp); -int arch_validate_hwbkpt_settings(struct perf_event *bp); +int arch_check_bp_in_kernelspace(struct arch_hw_breakpoint *hw); +int hw_breakpoint_arch_parse(struct perf_event *bp, + const struct perf_event_attr *attr, + struct arch_hw_breakpoint *hw); int hw_breakpoint_exceptions_notify(struct notifier_block *unused, unsigned long val, void *data); diff --git a/arch/xtensa/kernel/hw_breakpoint.c b/arch/xtensa/kernel/hw_breakpoint.c index b35656ab7dbd..c2e387c19cda 100644 --- a/arch/xtensa/kernel/hw_breakpoint.c +++ b/arch/xtensa/kernel/hw_breakpoint.c @@ -33,14 +33,13 @@ int hw_breakpoint_slots(int type) } } -int arch_check_bp_in_kernelspace(struct perf_event *bp) +int arch_check_bp_in_kernelspace(struct arch_hw_breakpoint *hw) { unsigned int len; unsigned long va; - struct arch_hw_breakpoint *info = counter_arch_bp(bp); - va = info->address; - len = bp->attr.bp_len; + va = hw->address; + len = hw->len; return (va >= TASK_SIZE) && ((va + len - 1) >= TASK_SIZE); } @@ -48,50 +47,41 @@ int arch_check_bp_in_kernelspace(struct perf_event *bp) /* * Construct an arch_hw_breakpoint from a perf_event. */ -static int arch_build_bp_info(struct perf_event *bp) +int hw_breakpoint_arch_parse(struct perf_event *bp, + const struct perf_event_attr *attr, + struct arch_hw_breakpoint *hw) { - struct arch_hw_breakpoint *info = counter_arch_bp(bp); - /* Type */ - switch (bp->attr.bp_type) { + switch (attr->bp_type) { case HW_BREAKPOINT_X: - info->type = XTENSA_BREAKPOINT_EXECUTE; + hw->type = XTENSA_BREAKPOINT_EXECUTE; break; case HW_BREAKPOINT_R: - info->type = XTENSA_BREAKPOINT_LOAD; + hw->type = XTENSA_BREAKPOINT_LOAD; break; case HW_BREAKPOINT_W: - info->type = XTENSA_BREAKPOINT_STORE; + hw->type = XTENSA_BREAKPOINT_STORE; break; case HW_BREAKPOINT_RW: - info->type = XTENSA_BREAKPOINT_LOAD | XTENSA_BREAKPOINT_STORE; + hw->type = XTENSA_BREAKPOINT_LOAD | XTENSA_BREAKPOINT_STORE; break; default: return -EINVAL; } /* Len */ - info->len = bp->attr.bp_len; - if (info->len < 1 || info->len > 64 || !is_power_of_2(info->len)) + hw->len = attr->bp_len; + if (hw->len < 1 || hw->len > 64 || !is_power_of_2(hw->len)) return -EINVAL; /* Address */ - info->address = bp->attr.bp_addr; - if (info->address & (info->len - 1)) + hw->address = attr->bp_addr; + if (hw->address & (hw->len - 1)) return -EINVAL; return 0; } -int arch_validate_hwbkpt_settings(struct perf_event *bp) -{ - int ret; - - /* Build the arch_hw_breakpoint. 
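[Editor's note: hw_breakpoint_arch_parse() above enforces two hardware constraints on a watchpoint: the length must be a power of two no larger than 64 bytes, and the address must be naturally aligned to that length. The checks in isolation, as a small sketch:]

#include <stdbool.h>

static bool bp_len_addr_valid(unsigned long addr, unsigned int len)
{
	/* Length: a power of two between 1 and 64 bytes. */
	if (len < 1 || len > 64 || (len & (len - 1)))
		return false;
	/* Address: naturally aligned to the length. */
	return (addr & (len - 1)) == 0;
}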
*/ - ret = arch_build_bp_info(bp); - return ret; -} - int hw_breakpoint_exceptions_notify(struct notifier_block *unused, unsigned long val, void *data) { diff --git a/block/Kconfig b/block/Kconfig index 28ec55752b68..eb50fd4977c2 100644 --- a/block/Kconfig +++ b/block/Kconfig @@ -114,7 +114,7 @@ config BLK_DEV_THROTTLING one needs to mount and use blkio cgroup controller for creating cgroups and specifying per device IO rate policies. - See Documentation/cgroups/blkio-controller.txt for more information. + See Documentation/cgroup-v1/blkio-controller.txt for more information. config BLK_DEV_THROTTLING_LOW bool "Block throttling .low limit interface support (EXPERIMENTAL)" diff --git a/block/bio.c b/block/bio.c index 9710e275f230..047c5dca6d90 100644 --- a/block/bio.c +++ b/block/bio.c @@ -903,25 +903,27 @@ int bio_add_page(struct bio *bio, struct page *page, EXPORT_SYMBOL(bio_add_page); /** - * bio_iov_iter_get_pages - pin user or kernel pages and add them to a bio + * __bio_iov_iter_get_pages - pin user or kernel pages and add them to a bio * @bio: bio to add pages to * @iter: iov iterator describing the region to be mapped * - * Pins as many pages from *iter and appends them to @bio's bvec array. The + * Pins pages from *iter and appends them to @bio's bvec array. The * pages will have to be released using put_page() when done. + * For multi-segment *iter, this function only adds pages from the + * the next non-empty segment of the iov iterator. */ -int bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter) +static int __bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter) { - unsigned short nr_pages = bio->bi_max_vecs - bio->bi_vcnt; + unsigned short nr_pages = bio->bi_max_vecs - bio->bi_vcnt, idx; struct bio_vec *bv = bio->bi_io_vec + bio->bi_vcnt; struct page **pages = (struct page **)bv; - size_t offset, diff; + size_t offset; ssize_t size; size = iov_iter_get_pages(iter, pages, LONG_MAX, nr_pages, &offset); if (unlikely(size <= 0)) return size ? size : -EFAULT; - nr_pages = (size + offset + PAGE_SIZE - 1) / PAGE_SIZE; + idx = nr_pages = (size + offset + PAGE_SIZE - 1) / PAGE_SIZE; /* * Deep magic below: We need to walk the pinned pages backwards @@ -934,21 +936,46 @@ int bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter) bio->bi_iter.bi_size += size; bio->bi_vcnt += nr_pages; - diff = (nr_pages * PAGE_SIZE - offset) - size; - while (nr_pages--) { - bv[nr_pages].bv_page = pages[nr_pages]; - bv[nr_pages].bv_len = PAGE_SIZE; - bv[nr_pages].bv_offset = 0; + while (idx--) { + bv[idx].bv_page = pages[idx]; + bv[idx].bv_len = PAGE_SIZE; + bv[idx].bv_offset = 0; } bv[0].bv_offset += offset; bv[0].bv_len -= offset; - if (diff) - bv[bio->bi_vcnt - 1].bv_len -= diff; + bv[nr_pages - 1].bv_len -= nr_pages * PAGE_SIZE - offset - size; iov_iter_advance(iter, size); return 0; } + +/** + * bio_iov_iter_get_pages - pin user or kernel pages and add them to a bio + * @bio: bio to add pages to + * @iter: iov iterator describing the region to be mapped + * + * Pins pages from *iter and appends them to @bio's bvec array. The + * pages will have to be released using put_page() when done. + * The function tries, but does not guarantee, to pin as many pages as + * fit into the bio, or are requested in *iter, whatever is smaller. + * If MM encounters an error pinning the requested pages, it stops. + * Error is returned only if 0 pages could be pinned. 
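[Editor's note: a worked example of the bvec arithmetic in __bio_iov_iter_get_pages() above, where 'size' bytes start 'offset' bytes into the first pinned page: the first vector is shortened at the front and the last one trimmed at the back. Numbers chosen for illustration:]

#include <stdio.h>

#define PAGE_SIZE 4096UL

int main(void)
{
	unsigned long offset = 1000, size = 10000;	/* example request */
	unsigned long nr_pages = (size + offset + PAGE_SIZE - 1) / PAGE_SIZE;
	unsigned long first_len = PAGE_SIZE - offset;
	unsigned long last_len =
		PAGE_SIZE - (nr_pages * PAGE_SIZE - offset - size);

	/* 3 pages: 3096 + 4096 + 2808 = 10000 bytes */
	printf("%lu pages, first %lu, last %lu\n",
	       nr_pages, first_len, last_len);
	return 0;
}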
+ */ +int bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter) +{ + unsigned short orig_vcnt = bio->bi_vcnt; + + do { + int ret = __bio_iov_iter_get_pages(bio, iter); + + if (unlikely(ret)) + return bio->bi_vcnt > orig_vcnt ? 0 : ret; + + } while (iov_iter_count(iter) && !bio_full(bio)); + + return 0; +} EXPORT_SYMBOL_GPL(bio_iov_iter_get_pages); static void submit_bio_wait_endio(struct bio *bio) @@ -1807,9 +1834,6 @@ again: if (!bio_integrity_endio(bio)) return; - if (WARN_ONCE(bio->bi_next, "driver left bi_next not NULL")) - bio->bi_next = NULL; - /* * Need to have a real endio function for chained bios, otherwise * various corner cases will break (like stacking block devices that @@ -1869,6 +1893,7 @@ struct bio *bio_split(struct bio *bio, int sectors, bio_integrity_trim(split); bio_advance(bio, split->bi_iter.bi_size); + bio->bi_iter.bi_done = 0; if (bio_flagged(bio, BIO_TRACE_COMPLETION)) bio_set_flag(split, BIO_TRACE_COMPLETION); diff --git a/block/blk-core.c b/block/blk-core.c index cf0ee764b908..ee33590f54eb 100644 --- a/block/blk-core.c +++ b/block/blk-core.c @@ -273,10 +273,6 @@ static void req_bio_endio(struct request *rq, struct bio *bio, bio_advance(bio, nbytes); /* don't actually finish bio if it's part of flush sequence */ - /* - * XXX this code looks suspicious - it's not consistent with advancing - * req->bio in caller - */ if (bio->bi_iter.bi_size == 0 && !(rq->rq_flags & RQF_FLUSH_SEQ)) bio_endio(bio); } @@ -2159,11 +2155,12 @@ static inline bool bio_check_ro(struct bio *bio, struct hd_struct *part) if (part->policy && op_is_write(bio_op(bio))) { char b[BDEVNAME_SIZE]; - printk(KERN_ERR + WARN_ONCE(1, "generic_make_request: Trying to write " "to read-only block-device %s (partno %d)\n", bio_devname(bio, b), part->partno); - return true; + /* Older lvm-tools actually trigger this */ + return false; } return false; @@ -3081,10 +3078,8 @@ bool blk_update_request(struct request *req, blk_status_t error, struct bio *bio = req->bio; unsigned bio_bytes = min(bio->bi_iter.bi_size, nr_bytes); - if (bio_bytes == bio->bi_iter.bi_size) { + if (bio_bytes == bio->bi_iter.bi_size) req->bio = bio->bi_next; - bio->bi_next = NULL; - } /* Completion has already been traced */ bio_clear_flag(bio, BIO_TRACE_COMPLETION); @@ -3479,6 +3474,10 @@ static void __blk_rq_prep_clone(struct request *dst, struct request *src) dst->cpu = src->cpu; dst->__sector = blk_rq_pos(src); dst->__data_len = blk_rq_bytes(src); + if (src->rq_flags & RQF_SPECIAL_PAYLOAD) { + dst->rq_flags |= RQF_SPECIAL_PAYLOAD; + dst->special_vec = src->special_vec; + } dst->nr_phys_segments = src->nr_phys_segments; dst->ioprio = src->ioprio; dst->extra_len = src->extra_len; diff --git a/block/blk-mq-debugfs.c b/block/blk-mq-debugfs.c index ffa622366922..1c4532e92938 100644 --- a/block/blk-mq-debugfs.c +++ b/block/blk-mq-debugfs.c @@ -356,7 +356,7 @@ static const char *const blk_mq_rq_state_name_array[] = { static const char *blk_mq_rq_state_name(enum mq_rq_state rq_state) { - if (WARN_ON_ONCE((unsigned int)rq_state > + if (WARN_ON_ONCE((unsigned int)rq_state >= ARRAY_SIZE(blk_mq_rq_state_name_array))) return "(?)"; return blk_mq_rq_state_name_array[rq_state]; diff --git a/block/blk-mq-tag.c b/block/blk-mq-tag.c index 70356a2a11ab..3de0836163c2 100644 --- a/block/blk-mq-tag.c +++ b/block/blk-mq-tag.c @@ -271,7 +271,7 @@ static bool bt_tags_iter(struct sbitmap *bitmap, unsigned int bitnr, void *data) * test and set the bit before assining ->rqs[]. 
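[Editor's note: the blk-mq-debugfs change above, like the sed-opal change further down, fixes the classic off-by-one in a bounds check: for an array of N entries the last valid index is N - 1, so the guard must reject index >= N, not index > N. The corrected shape in miniature:]

#define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))

static const char *const rq_state_name[] = { "idle", "in_flight", "complete" };

static const char *state_name(unsigned int idx)
{
	/* Reject idx == ARRAY_SIZE too; that slot does not exist. */
	if (idx >= ARRAY_SIZE(rq_state_name))
		return "(?)";
	return rq_state_name[idx];
}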
*/ rq = tags->rqs[bitnr]; - if (rq && blk_mq_rq_state(rq) == MQ_RQ_IN_FLIGHT) + if (rq && blk_mq_request_started(rq)) iter_data->fn(rq, iter_data->data, reserved); return true; @@ -311,35 +311,6 @@ void blk_mq_tagset_busy_iter(struct blk_mq_tag_set *tagset, } EXPORT_SYMBOL(blk_mq_tagset_busy_iter); -int blk_mq_tagset_iter(struct blk_mq_tag_set *set, void *data, - int (fn)(void *, struct request *)) -{ - int i, j, ret = 0; - - if (WARN_ON_ONCE(!fn)) - goto out; - - for (i = 0; i < set->nr_hw_queues; i++) { - struct blk_mq_tags *tags = set->tags[i]; - - if (!tags) - continue; - - for (j = 0; j < tags->nr_tags; j++) { - if (!tags->static_rqs[j]) - continue; - - ret = fn(data, tags->static_rqs[j]); - if (ret) - goto out; - } - } - -out: - return ret; -} -EXPORT_SYMBOL_GPL(blk_mq_tagset_iter); - void blk_mq_queue_tag_busy_iter(struct request_queue *q, busy_iter_fn *fn, void *priv) { diff --git a/block/blk-mq.c b/block/blk-mq.c index e9da5e6a8526..654b0dc7e001 100644 --- a/block/blk-mq.c +++ b/block/blk-mq.c @@ -558,10 +558,8 @@ static void __blk_mq_complete_request(struct request *rq) bool shared = false; int cpu; - if (cmpxchg(&rq->state, MQ_RQ_IN_FLIGHT, MQ_RQ_COMPLETE) != - MQ_RQ_IN_FLIGHT) + if (!blk_mq_mark_complete(rq)) return; - if (rq->internal_tag != -1) blk_mq_sched_completed_request(rq); @@ -671,6 +669,7 @@ static void __blk_mq_requeue_request(struct request *rq) if (blk_mq_request_started(rq)) { WRITE_ONCE(rq->state, MQ_RQ_IDLE); + rq->rq_flags &= ~RQF_TIMED_OUT; if (q->dma_drain_size && blk_rq_bytes(rq)) rq->nr_phys_segments--; } @@ -770,6 +769,7 @@ EXPORT_SYMBOL(blk_mq_tag_to_rq); static void blk_mq_rq_timed_out(struct request *req, bool reserved) { + req->rq_flags |= RQF_TIMED_OUT; if (req->q->mq_ops->timeout) { enum blk_eh_timer_return ret; @@ -788,6 +788,8 @@ static bool blk_mq_req_expired(struct request *rq, unsigned long *next) if (blk_mq_rq_state(rq) != MQ_RQ_IN_FLIGHT) return false; + if (rq->rq_flags & RQF_TIMED_OUT) + return false; deadline = blk_rq_deadline(rq); if (time_after_eq(jiffies, deadline)) @@ -1071,6 +1073,9 @@ static bool blk_mq_mark_tag_wait(struct blk_mq_hw_ctx **hctx, #define BLK_MQ_RESOURCE_DELAY 3 /* ms units */ +/* + * Returns true if we did some work AND can potentially do more. + */ bool blk_mq_dispatch_rq_list(struct request_queue *q, struct list_head *list, bool got_budget) { @@ -1201,8 +1206,17 @@ bool blk_mq_dispatch_rq_list(struct request_queue *q, struct list_head *list, blk_mq_run_hw_queue(hctx, true); else if (needs_restart && (ret == BLK_STS_RESOURCE)) blk_mq_delay_run_hw_queue(hctx, BLK_MQ_RESOURCE_DELAY); + + return false; } + /* + * If the host/device is unable to accept more work, inform the + * caller of that. 
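[Editor's note: the RQF_TIMED_OUT flag threaded through the hunks above marks a request that the driver's ->timeout() handler has already seen, so the expiry scan leaves it alone until a requeue clears the flag and rearms the timer. A minimal sketch of that check; the real code uses the wrap-safe time_after_eq() rather than a plain comparison:]

#include <stdbool.h>

#define RQF_TIMED_OUT (1u << 0)	/* illustrative bit */

struct rq_demo {
	unsigned int rq_flags;
	unsigned long deadline;
};

/* Expiry scan: skip requests already handed to ->timeout() once. */
static bool rq_expired(const struct rq_demo *rq, unsigned long now)
{
	if (rq->rq_flags & RQF_TIMED_OUT)
		return false;
	return now >= rq->deadline;
}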
+ */ + if (ret == BLK_STS_RESOURCE || ret == BLK_STS_DEV_RESOURCE) + return false; + return (queued + errors) != 0; } @@ -2349,7 +2363,6 @@ static void blk_mq_del_queue_tag_set(struct request_queue *q) mutex_lock(&set->tag_list_lock); list_del_rcu(&q->tag_set_list); - INIT_LIST_HEAD(&q->tag_set_list); if (list_is_singular(&set->tag_list)) { /* just transitioned to unshared */ set->flags &= ~BLK_MQ_F_TAG_SHARED; @@ -2357,8 +2370,8 @@ static void blk_mq_del_queue_tag_set(struct request_queue *q) blk_mq_update_tag_set_depth(set, false); } mutex_unlock(&set->tag_list_lock); - synchronize_rcu(); + INIT_LIST_HEAD(&q->tag_set_list); } static void blk_mq_add_queue_tag_set(struct blk_mq_tag_set *set, diff --git a/block/blk-softirq.c b/block/blk-softirq.c index 01e2b353a2b9..15c1f5e12eb8 100644 --- a/block/blk-softirq.c +++ b/block/blk-softirq.c @@ -144,6 +144,7 @@ do_local: local_irq_restore(flags); } +EXPORT_SYMBOL(__blk_complete_request); /** * blk_complete_request - end I/O on a request diff --git a/block/blk-tag.c b/block/blk-tag.c index 24b20d86bcbc..fbc153aef166 100644 --- a/block/blk-tag.c +++ b/block/blk-tag.c @@ -188,7 +188,6 @@ int blk_queue_init_tags(struct request_queue *q, int depth, */ q->queue_tags = tags; queue_flag_set_unlocked(QUEUE_FLAG_QUEUED, q); - INIT_LIST_HEAD(&q->tag_busy_list); return 0; } EXPORT_SYMBOL(blk_queue_init_tags); @@ -374,27 +373,6 @@ int blk_queue_start_tag(struct request_queue *q, struct request *rq) rq->tag = tag; bqt->tag_index[tag] = rq; blk_start_request(rq); - list_add(&rq->queuelist, &q->tag_busy_list); return 0; } EXPORT_SYMBOL(blk_queue_start_tag); - -/** - * blk_queue_invalidate_tags - invalidate all pending tags - * @q: the request queue for the device - * - * Description: - * Hardware conditions may dictate a need to stop all pending requests. - * In this case, we will safely clear the block side of the tag queue and - * readd all requests to the request queue in the right order. 
- **/ -void blk_queue_invalidate_tags(struct request_queue *q) -{ - struct list_head *tmp, *n; - - lockdep_assert_held(q->queue_lock); - - list_for_each_safe(tmp, n, &q->tag_busy_list) - blk_requeue_request(q, list_entry_rq(tmp)); -} -EXPORT_SYMBOL(blk_queue_invalidate_tags); diff --git a/block/blk-timeout.c b/block/blk-timeout.c index 4b8a48d48ba1..f2cfd56e1606 100644 --- a/block/blk-timeout.c +++ b/block/blk-timeout.c @@ -210,6 +210,7 @@ void blk_add_timer(struct request *req) if (!req->timeout) req->timeout = q->rq_timeout; + req->rq_flags &= ~RQF_TIMED_OUT; blk_rq_set_deadline(req, jiffies + req->timeout); /* diff --git a/block/bsg.c b/block/bsg.c index 132e657e2d91..3da540faf673 100644 --- a/block/bsg.c +++ b/block/bsg.c @@ -267,8 +267,6 @@ bsg_map_hdr(struct request_queue *q, struct sg_io_v4 *hdr, fmode_t mode) } else if (hdr->din_xfer_len) { ret = blk_rq_map_user(q, rq, NULL, uptr64(hdr->din_xferp), hdr->din_xfer_len, GFP_KERNEL); - } else { - ret = blk_rq_map_user(q, rq, NULL, NULL, 0, GFP_KERNEL); } if (ret) @@ -693,6 +691,8 @@ static struct bsg_device *bsg_add_device(struct inode *inode, struct bsg_device *bd; unsigned char buf[32]; + lockdep_assert_held(&bsg_mutex); + if (!blk_get_queue(rq)) return ERR_PTR(-ENXIO); @@ -707,14 +707,12 @@ static struct bsg_device *bsg_add_device(struct inode *inode, bsg_set_block(bd, file); atomic_set(&bd->ref_count, 1); - mutex_lock(&bsg_mutex); hlist_add_head(&bd->dev_list, bsg_dev_idx_hash(iminor(inode))); strncpy(bd->name, dev_name(rq->bsg_dev.class_dev), sizeof(bd->name) - 1); bsg_dbg(bd, "bound to <%s>, max queue %d\n", format_dev_t(buf, inode->i_rdev), bd->max_queue); - mutex_unlock(&bsg_mutex); return bd; } @@ -722,7 +720,7 @@ static struct bsg_device *__bsg_get_device(int minor, struct request_queue *q) { struct bsg_device *bd; - mutex_lock(&bsg_mutex); + lockdep_assert_held(&bsg_mutex); hlist_for_each_entry(bd, bsg_dev_idx_hash(minor), dev_list) { if (bd->queue == q) { @@ -732,7 +730,6 @@ static struct bsg_device *__bsg_get_device(int minor, struct request_queue *q) } bd = NULL; found: - mutex_unlock(&bsg_mutex); return bd; } @@ -746,17 +743,18 @@ static struct bsg_device *bsg_get_device(struct inode *inode, struct file *file) */ mutex_lock(&bsg_mutex); bcd = idr_find(&bsg_minor_idr, iminor(inode)); - mutex_unlock(&bsg_mutex); - if (!bcd) - return ERR_PTR(-ENODEV); + if (!bcd) { + bd = ERR_PTR(-ENODEV); + goto out_unlock; + } bd = __bsg_get_device(iminor(inode), bcd->queue); - if (bd) - return bd; - - bd = bsg_add_device(inode, bcd->queue, file); + if (!bd) + bd = bsg_add_device(inode, bcd->queue, file); +out_unlock: + mutex_unlock(&bsg_mutex); return bd; } diff --git a/block/sed-opal.c b/block/sed-opal.c index 945f4b8610e0..e0de4dd448b3 100644 --- a/block/sed-opal.c +++ b/block/sed-opal.c @@ -877,7 +877,7 @@ static size_t response_get_string(const struct parsed_resp *resp, int n, return 0; } - if (n > resp->num) { + if (n >= resp->num) { pr_debug("Response has %d tokens. Can't access %d\n", resp->num, n); return 0; @@ -916,7 +916,7 @@ static u64 response_get_u64(const struct parsed_resp *resp, int n) return 0; } - if (n > resp->num) { + if (n >= resp->num) { pr_debug("Response has %d tokens. 
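The bsg hunks above pull the bsg_mutex lock/unlock pairs out of the helpers and into bsg_get_device(), so the idr lookup, the hash-list search, and the device setup happen under one critical section; the helpers now state their contract with lockdep_assert_held(). A rough userspace analogue of the "caller holds the lock" refactor, using an owner flag in place of lockdep (all names here are illustrative):

#include <pthread.h>
#include <assert.h>
#include <stddef.h>

struct device;

static pthread_mutex_t registry_lock = PTHREAD_MUTEX_INITIALIZER;
static int registry_locked;	/* crude stand-in for lockdep tracking */

/* Contract: caller holds registry_lock, as the reworked bsg helpers do. */
static struct device *lookup_locked(int minor)
{
	assert(registry_locked);	/* lockdep_assert_held() analogue */
	/* ... walk the hash list; no lock/unlock in here ... */
	(void)minor;
	return NULL;
}

static struct device *get_device_by_minor(int minor)
{
	struct device *dev;

	pthread_mutex_lock(&registry_lock);
	registry_locked = 1;
	dev = lookup_locked(minor);	/* lookup and setup stay atomic */
	registry_locked = 0;
	pthread_mutex_unlock(&registry_lock);
	return dev;
}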
Can't access %d\n", resp->num, n); return 0; diff --git a/certs/Kconfig b/certs/Kconfig index 5f7663df6e8e..c94e93d8bccf 100644 --- a/certs/Kconfig +++ b/certs/Kconfig @@ -13,7 +13,7 @@ config MODULE_SIG_KEY If this option is unchanged from its default "certs/signing_key.pem", then the kernel will automatically generate the private key and - certificate as described in Documentation/module-signing.txt + certificate as described in Documentation/admin-guide/module-signing.rst config SYSTEM_TRUSTED_KEYRING bool "Provide system-wide ring of trusted keys" diff --git a/certs/blacklist.h b/certs/blacklist.h index 150d82da8e99..1efd6fa0dc60 100644 --- a/certs/blacklist.h +++ b/certs/blacklist.h @@ -1,3 +1,3 @@ #include -extern const char __initdata *const blacklist_hashes[]; +extern const char __initconst *const blacklist_hashes[]; diff --git a/crypto/af_alg.c b/crypto/af_alg.c index 49fa8582138b..c166f424871c 100644 --- a/crypto/af_alg.c +++ b/crypto/af_alg.c @@ -1060,12 +1060,19 @@ void af_alg_async_cb(struct crypto_async_request *_req, int err) } EXPORT_SYMBOL_GPL(af_alg_async_cb); -__poll_t af_alg_poll_mask(struct socket *sock, __poll_t events) +/** + * af_alg_poll - poll system call handler + */ +__poll_t af_alg_poll(struct file *file, struct socket *sock, + poll_table *wait) { struct sock *sk = sock->sk; struct alg_sock *ask = alg_sk(sk); struct af_alg_ctx *ctx = ask->private; - __poll_t mask = 0; + __poll_t mask; + + sock_poll_wait(file, sk_sleep(sk), wait); + mask = 0; if (!ctx->more || ctx->used) mask |= EPOLLIN | EPOLLRDNORM; @@ -1075,7 +1082,7 @@ __poll_t af_alg_poll_mask(struct socket *sock, __poll_t events) return mask; } -EXPORT_SYMBOL_GPL(af_alg_poll_mask); +EXPORT_SYMBOL_GPL(af_alg_poll); /** * af_alg_alloc_areq - allocate struct af_alg_async_req @@ -1148,8 +1155,10 @@ int af_alg_get_rsgl(struct sock *sk, struct msghdr *msg, int flags, /* make one iovec available as scatterlist */ err = af_alg_make_sg(&rsgl->sgl, &msg->msg_iter, seglen); - if (err < 0) + if (err < 0) { + rsgl->sg_num_bytes = 0; return err; + } /* chain the new scatterlist with previous one */ if (areq->last_rsgl) diff --git a/crypto/algif_aead.c b/crypto/algif_aead.c index 825524f27438..c40a8c7ee8ae 100644 --- a/crypto/algif_aead.c +++ b/crypto/algif_aead.c @@ -375,7 +375,7 @@ static struct proto_ops algif_aead_ops = { .sendmsg = aead_sendmsg, .sendpage = af_alg_sendpage, .recvmsg = aead_recvmsg, - .poll_mask = af_alg_poll_mask, + .poll = af_alg_poll, }; static int aead_check_key(struct socket *sock) @@ -471,7 +471,7 @@ static struct proto_ops algif_aead_ops_nokey = { .sendmsg = aead_sendmsg_nokey, .sendpage = aead_sendpage_nokey, .recvmsg = aead_recvmsg_nokey, - .poll_mask = af_alg_poll_mask, + .poll = af_alg_poll, }; static void *aead_bind(const char *name, u32 type, u32 mask) diff --git a/crypto/algif_skcipher.c b/crypto/algif_skcipher.c index 4c04eb9888ad..cfdaab2b7d76 100644 --- a/crypto/algif_skcipher.c +++ b/crypto/algif_skcipher.c @@ -206,7 +206,7 @@ static struct proto_ops algif_skcipher_ops = { .sendmsg = skcipher_sendmsg, .sendpage = af_alg_sendpage, .recvmsg = skcipher_recvmsg, - .poll_mask = af_alg_poll_mask, + .poll = af_alg_poll, }; static int skcipher_check_key(struct socket *sock) @@ -302,7 +302,7 @@ static struct proto_ops algif_skcipher_ops_nokey = { .sendmsg = skcipher_sendmsg_nokey, .sendpage = skcipher_sendpage_nokey, .recvmsg = skcipher_recvmsg_nokey, - .poll_mask = af_alg_poll_mask, + .poll = af_alg_poll, }; static void *skcipher_bind(const char *name, u32 type, u32 mask) diff --git 
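The two sed-opal hunks above are a classic off-by-one: resp->num counts tokens, so valid indices are 0..num-1, and the old guard `n > resp->num` let `n == resp->num` read one element past the end. A minimal bounds-checked accessor showing the corrected comparison (the struct here is a simplified stand-in, not the real parsed_resp):

#include <stddef.h>

struct resp_tok { int type; };

struct parsed_resp_sketch {
	size_t num;		/* count of valid tokens */
	struct resp_tok toks[64];
};

/* Tokens occupy indices 0..num-1, so the guard must be n >= num. */
static const struct resp_tok *resp_tok_at(const struct parsed_resp_sketch *r,
					  size_t n)
{
	if (n >= r->num)	/* 'n > r->num' let n == num slip past the end */
		return NULL;
	return &r->toks[n];
}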
a/crypto/asymmetric_keys/asymmetric_type.c b/crypto/asymmetric_keys/asymmetric_type.c index 39aecad286fe..26539e9a8bda 100644 --- a/crypto/asymmetric_keys/asymmetric_type.c +++ b/crypto/asymmetric_keys/asymmetric_type.c @@ -1,6 +1,6 @@ /* Asymmetric public-key cryptography key type * - * See Documentation/security/asymmetric-keys.txt + * See Documentation/crypto/asymmetric-keys.txt * * Copyright (C) 2012 Red Hat, Inc. All Rights Reserved. * Written by David Howells (dhowells@redhat.com) diff --git a/crypto/asymmetric_keys/signature.c b/crypto/asymmetric_keys/signature.c index 11b7ba170904..28198314bc39 100644 --- a/crypto/asymmetric_keys/signature.c +++ b/crypto/asymmetric_keys/signature.c @@ -1,6 +1,6 @@ /* Signature verification with an asymmetric key * - * See Documentation/security/asymmetric-keys.txt + * See Documentation/crypto/asymmetric-keys.txt * * Copyright (C) 2012 Red Hat, Inc. All Rights Reserved. * Written by David Howells (dhowells@redhat.com) diff --git a/crypto/asymmetric_keys/x509_cert_parser.c b/crypto/asymmetric_keys/x509_cert_parser.c index 7d81e6bb461a..b6cabac4b62b 100644 --- a/crypto/asymmetric_keys/x509_cert_parser.c +++ b/crypto/asymmetric_keys/x509_cert_parser.c @@ -249,6 +249,15 @@ int x509_note_signature(void *context, size_t hdrlen, return -EINVAL; } + if (strcmp(ctx->cert->sig->pkey_algo, "rsa") == 0) { + /* Discard the BIT STRING metadata */ + if (vlen < 1 || *(const u8 *)value != 0) + return -EBADMSG; + + value++; + vlen--; + } + ctx->cert->raw_sig = value; ctx->cert->raw_sig_size = vlen; return 0; diff --git a/crypto/morus640.c b/crypto/morus640.c index 9fbcde307daf..5eede3749e64 100644 --- a/crypto/morus640.c +++ b/crypto/morus640.c @@ -274,8 +274,9 @@ static void crypto_morus640_decrypt_chunk(struct morus640_state *state, u8 *dst, union morus640_block_in tail; memcpy(tail.bytes, src, size); + memset(tail.bytes + size, 0, MORUS640_BLOCK_SIZE - size); - crypto_morus640_load_a(&m, src); + crypto_morus640_load_a(&m, tail.bytes); crypto_morus640_core(state, &m); crypto_morus640_store_a(tail.bytes, &m); memset(tail.bytes + size, 0, MORUS640_BLOCK_SIZE - size); diff --git a/crypto/sha3_generic.c b/crypto/sha3_generic.c index 264ec12c0b9c..7f6735d9003f 100644 --- a/crypto/sha3_generic.c +++ b/crypto/sha3_generic.c @@ -152,7 +152,7 @@ static SHA3_INLINE void keccakf_round(u64 st[25]) st[24] ^= bc[ 4]; } -static void __optimize("O3") keccakf(u64 st[25]) +static void keccakf(u64 st[25]) { int round; diff --git a/drivers/acpi/acpi_lpss.c b/drivers/acpi/acpi_lpss.c index 38a286975c31..9706613eecf9 100644 --- a/drivers/acpi/acpi_lpss.c +++ b/drivers/acpi/acpi_lpss.c @@ -22,6 +22,7 @@ #include #include #include +#include #include #include "internal.h" @@ -878,6 +879,7 @@ static void acpi_lpss_dismiss(struct device *dev) #define LPSS_GPIODEF0_DMA_LLP BIT(13) static DEFINE_MUTEX(lpss_iosf_mutex); +static bool lpss_iosf_d3_entered; static void lpss_iosf_enter_d3_state(void) { @@ -920,6 +922,9 @@ static void lpss_iosf_enter_d3_state(void) iosf_mbi_modify(LPSS_IOSF_UNIT_LPIOEP, MBI_CR_WRITE, LPSS_IOSF_GPIODEF0, value1, mask1); + + lpss_iosf_d3_entered = true; + exit: mutex_unlock(&lpss_iosf_mutex); } @@ -934,6 +939,11 @@ static void lpss_iosf_exit_d3_state(void) mutex_lock(&lpss_iosf_mutex); + if (!lpss_iosf_d3_entered) + goto exit; + + lpss_iosf_d3_entered = false; + iosf_mbi_modify(LPSS_IOSF_UNIT_LPIOEP, MBI_CR_WRITE, LPSS_IOSF_GPIODEF0, value1, mask1); @@ -943,6 +953,7 @@ static void lpss_iosf_exit_d3_state(void) iosf_mbi_modify(LPSS_IOSF_UNIT_LPIO1, MBI_CFG_WRITE, 
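The morus640 hunk above fixes an out-of-bounds read: the final partial block was loaded straight from src as a full 16-byte block, instead of from the zero-padded copy in tail.bytes. A hedged sketch of the safe partial-block pattern (constants and names illustrative):

#include <string.h>
#include <stdint.h>
#include <stddef.h>

#define BLOCK_SIZE 16

/* Process a trailing partial block (size < BLOCK_SIZE) without reading
 * past the end of the input buffer. */
static void process_tail(uint8_t *state, const uint8_t *src, size_t size)
{
	uint8_t block[BLOCK_SIZE];

	memcpy(block, src, size);			/* only 'size' valid bytes */
	memset(block + size, 0, BLOCK_SIZE - size);	/* pad before loading */
	/* ... load and transform 'block' as a full block, never 'src' ... */
	(void)state;
}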
LPSS_IOSF_PMCSR, value2, mask2); +exit: mutex_unlock(&lpss_iosf_mutex); } @@ -961,7 +972,8 @@ static int acpi_lpss_suspend(struct device *dev, bool wakeup) * wrong status for devices being about to be powered off. See * lpss_iosf_enter_d3_state() for further information. */ - if (lpss_quirks & LPSS_QUIRK_ALWAYS_POWER_ON && iosf_mbi_available()) + if (acpi_target_system_state() == ACPI_STATE_S0 && + lpss_quirks & LPSS_QUIRK_ALWAYS_POWER_ON && iosf_mbi_available()) lpss_iosf_enter_d3_state(); return ret; diff --git a/drivers/acpi/acpica/hwsleep.c b/drivers/acpi/acpica/hwsleep.c index fc0c2e2328cd..fe9d46d81750 100644 --- a/drivers/acpi/acpica/hwsleep.c +++ b/drivers/acpi/acpica/hwsleep.c @@ -51,16 +51,23 @@ acpi_status acpi_hw_legacy_sleep(u8 sleep_state) return_ACPI_STATUS(status); } - /* - * 1) Disable all GPEs - * 2) Enable all wakeup GPEs - */ + /* Disable all GPEs */ status = acpi_hw_disable_all_gpes(); if (ACPI_FAILURE(status)) { return_ACPI_STATUS(status); } + /* + * If the target sleep state is S5, clear all GPEs and fixed events too + */ + if (sleep_state == ACPI_STATE_S5) { + status = acpi_hw_clear_acpi_status(); + if (ACPI_FAILURE(status)) { + return_ACPI_STATUS(status); + } + } acpi_gbl_system_awake_and_running = FALSE; + /* Enable all wakeup GPEs */ status = acpi_hw_enable_all_wakeup_gpes(); if (ACPI_FAILURE(status)) { return_ACPI_STATUS(status); diff --git a/drivers/acpi/acpica/psloop.c b/drivers/acpi/acpica/psloop.c index bc5f05906bd1..44f35ab3347d 100644 --- a/drivers/acpi/acpica/psloop.c +++ b/drivers/acpi/acpica/psloop.c @@ -497,6 +497,18 @@ acpi_status acpi_ps_parse_loop(struct acpi_walk_state *walk_state) status = acpi_ps_create_op(walk_state, aml_op_start, &op); if (ACPI_FAILURE(status)) { + /* + * ACPI_PARSE_MODULE_LEVEL means that we are loading a table by + * executing it as a control method. However, if we encounter + * an error while loading the table, we need to keep trying to + * load the table rather than aborting the table load. Set the + * status to AE_OK to proceed with the table load. + */ + if ((walk_state-> + parse_flags & ACPI_PARSE_MODULE_LEVEL) + && status == AE_ALREADY_EXISTS) { + status = AE_OK; + } if (status == AE_CTRL_PARSE_CONTINUE) { continue; } @@ -694,6 +706,25 @@ acpi_status acpi_ps_parse_loop(struct acpi_walk_state *walk_state) acpi_ps_next_parse_state(walk_state, op, status); if (status == AE_CTRL_PENDING) { status = AE_OK; + } else + if ((walk_state-> + parse_flags & ACPI_PARSE_MODULE_LEVEL) + && status != AE_CTRL_TRANSFER + && ACPI_FAILURE(status)) { + /* + * ACPI_PARSE_MODULE_LEVEL flag means that we are currently + * loading a table by executing it as a control method. + * However, if we encounter an error while loading the table, + * we need to keep trying to load the table rather than + * aborting the table load (setting the status to AE_OK + * continues the table load). If we get a failure at this + * point, it means that the dispatcher got an error while + * processing Op (most likely an AML operand error) or a + * control method was called from module level and the + * dispatcher returned AE_CTRL_TRANSFER. In the latter case, + * leave the status alone, there's nothing wrong with it. 
+ */ + status = AE_OK; } } diff --git a/drivers/acpi/acpica/uterror.c b/drivers/acpi/acpica/uterror.c index 5a64ddaed8a3..e47430272692 100644 --- a/drivers/acpi/acpica/uterror.c +++ b/drivers/acpi/acpica/uterror.c @@ -182,19 +182,19 @@ acpi_ut_prefixed_namespace_error(const char *module_name, switch (lookup_status) { case AE_ALREADY_EXISTS: - acpi_os_printf("\n" ACPI_MSG_BIOS_ERROR); + acpi_os_printf(ACPI_MSG_BIOS_ERROR); message = "Failure creating"; break; case AE_NOT_FOUND: - acpi_os_printf("\n" ACPI_MSG_BIOS_ERROR); + acpi_os_printf(ACPI_MSG_BIOS_ERROR); message = "Could not resolve"; break; default: - acpi_os_printf("\n" ACPI_MSG_ERROR); + acpi_os_printf(ACPI_MSG_ERROR); message = "Failure resolving"; break; } diff --git a/drivers/acpi/battery.c b/drivers/acpi/battery.c index b0113a5802a3..d79ad844c78f 100644 --- a/drivers/acpi/battery.c +++ b/drivers/acpi/battery.c @@ -717,10 +717,11 @@ void battery_hook_register(struct acpi_battery_hook *hook) */ pr_err("extension failed to load: %s", hook->name); __battery_hook_unregister(hook, 0); - return; + goto end; } } pr_info("new extension: %s\n", hook->name); +end: mutex_unlock(&hook_mutex); } EXPORT_SYMBOL_GPL(battery_hook_register); @@ -732,7 +733,7 @@ EXPORT_SYMBOL_GPL(battery_hook_register); */ static void battery_hook_add_battery(struct acpi_battery *battery) { - struct acpi_battery_hook *hook_node; + struct acpi_battery_hook *hook_node, *tmp; mutex_lock(&hook_mutex); INIT_LIST_HEAD(&battery->list); @@ -744,15 +745,15 @@ static void battery_hook_add_battery(struct acpi_battery *battery) * when a battery gets hotplugged or initialized * during the battery module initialization. */ - list_for_each_entry(hook_node, &battery_hook_list, list) { + list_for_each_entry_safe(hook_node, tmp, &battery_hook_list, list) { if (hook_node->add_battery(battery->bat)) { /* * The notification of the extensions has failed, to * prevent further errors we will unload the extension. */ - __battery_hook_unregister(hook_node, 0); pr_err("error in extension, unloading: %s", hook_node->name); + __battery_hook_unregister(hook_node, 0); } } mutex_unlock(&hook_mutex); diff --git a/drivers/acpi/ec.c b/drivers/acpi/ec.c index bb94cf0731fe..917f77f4cb55 100644 --- a/drivers/acpi/ec.c +++ b/drivers/acpi/ec.c @@ -2037,6 +2037,17 @@ static inline void acpi_ec_query_exit(void) } } +static const struct dmi_system_id acpi_ec_no_wakeup[] = { + { + .ident = "Thinkpad X1 Carbon 6th", + .matches = { + DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"), + DMI_MATCH(DMI_PRODUCT_FAMILY, "Thinkpad X1 Carbon 6th"), + }, + }, + { }, +}; + int __init acpi_ec_init(void) { int result; @@ -2047,6 +2058,15 @@ int __init acpi_ec_init(void) if (result) return result; + /* + * Disable EC wakeup on following systems to prevent periodic + * wakeup from EC GPE. 
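The battery_hook_add_battery() hunk above switches to list_for_each_entry_safe() because a hook whose add_battery() callback fails is unregistered, i.e. unlinked, in the middle of the walk; the _safe variant caches the next pointer before the callback can remove the current entry. The same idea on a bare singly linked list (illustrative, not the kernel list API):

struct hook {
	struct hook *next;
	int (*add_battery)(void *bat);
};

/* Walk with a lookahead pointer so the current entry may be unlinked
 * without derailing the iteration. */
static void add_battery_all(struct hook **head, void *bat)
{
	struct hook **prev = head, *h, *next;

	for (h = *head; h; h = next) {
		next = h->next;		/* grab before a possible unlink */
		if (h->add_battery(bat)) {
			*prev = next;	/* unregister the failing hook */
			continue;	/* 'next' is still valid */
		}
		prev = &h->next;
	}
}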
+ */ + if (dmi_check_system(acpi_ec_no_wakeup)) { + ec_no_wakeup = true; + pr_debug("Disabling EC wakeup on suspend-to-idle\n"); + } + /* Drivers must be started after acpi_ec_query_init() */ dsdt_fail = acpi_bus_register_driver(&acpi_ec_driver); /* diff --git a/drivers/acpi/nfit/core.c b/drivers/acpi/nfit/core.c index d15814e1727f..7c479002e798 100644 --- a/drivers/acpi/nfit/core.c +++ b/drivers/acpi/nfit/core.c @@ -408,6 +408,8 @@ int acpi_nfit_ctl(struct nvdimm_bus_descriptor *nd_desc, struct nvdimm *nvdimm, const guid_t *guid; int rc, i; + if (cmd_rc) + *cmd_rc = -EINVAL; func = cmd; if (cmd == ND_CMD_CALL) { call_pkg = buf; @@ -518,6 +520,8 @@ int acpi_nfit_ctl(struct nvdimm_bus_descriptor *nd_desc, struct nvdimm *nvdimm, * If we return an error (like elsewhere) then caller wouldn't * be able to rely upon data returned to make calculation. */ + if (cmd_rc) + *cmd_rc = 0; return 0; } @@ -1273,7 +1277,7 @@ static ssize_t scrub_show(struct device *dev, mutex_lock(&acpi_desc->init_mutex); rc = sprintf(buf, "%d%s", acpi_desc->scrub_count, - work_busy(&acpi_desc->dwork.work) + acpi_desc->scrub_busy && !acpi_desc->cancel ? "+\n" : "\n"); mutex_unlock(&acpi_desc->init_mutex); } @@ -2939,6 +2943,32 @@ static unsigned int __acpi_nfit_scrub(struct acpi_nfit_desc *acpi_desc, return 0; } +static void __sched_ars(struct acpi_nfit_desc *acpi_desc, unsigned int tmo) +{ + lockdep_assert_held(&acpi_desc->init_mutex); + + acpi_desc->scrub_busy = 1; + /* note this should only be set from within the workqueue */ + if (tmo) + acpi_desc->scrub_tmo = tmo; + queue_delayed_work(nfit_wq, &acpi_desc->dwork, tmo * HZ); +} + +static void sched_ars(struct acpi_nfit_desc *acpi_desc) +{ + __sched_ars(acpi_desc, 0); +} + +static void notify_ars_done(struct acpi_nfit_desc *acpi_desc) +{ + lockdep_assert_held(&acpi_desc->init_mutex); + + acpi_desc->scrub_busy = 0; + acpi_desc->scrub_count++; + if (acpi_desc->scrub_count_state) + sysfs_notify_dirent(acpi_desc->scrub_count_state); +} + static void acpi_nfit_scrub(struct work_struct *work) { struct acpi_nfit_desc *acpi_desc; @@ -2949,14 +2979,10 @@ static void acpi_nfit_scrub(struct work_struct *work) mutex_lock(&acpi_desc->init_mutex); query_rc = acpi_nfit_query_poison(acpi_desc); tmo = __acpi_nfit_scrub(acpi_desc, query_rc); - if (tmo) { - queue_delayed_work(nfit_wq, &acpi_desc->dwork, tmo * HZ); - acpi_desc->scrub_tmo = tmo; - } else { - acpi_desc->scrub_count++; - if (acpi_desc->scrub_count_state) - sysfs_notify_dirent(acpi_desc->scrub_count_state); - } + if (tmo) + __sched_ars(acpi_desc, tmo); + else + notify_ars_done(acpi_desc); memset(acpi_desc->ars_status, 0, acpi_desc->max_ars); mutex_unlock(&acpi_desc->init_mutex); } @@ -3037,7 +3063,7 @@ static int acpi_nfit_register_regions(struct acpi_nfit_desc *acpi_desc) break; } - queue_delayed_work(nfit_wq, &acpi_desc->dwork, 0); + sched_ars(acpi_desc); return 0; } @@ -3239,7 +3265,7 @@ int acpi_nfit_ars_rescan(struct acpi_nfit_desc *acpi_desc, unsigned long flags) } } if (scheduled) { - queue_delayed_work(nfit_wq, &acpi_desc->dwork, 0); + sched_ars(acpi_desc); dev_dbg(dev, "ars_scan triggered\n"); } mutex_unlock(&acpi_desc->init_mutex); diff --git a/drivers/acpi/nfit/nfit.h b/drivers/acpi/nfit/nfit.h index 7d15856a739f..a97ff42fe311 100644 --- a/drivers/acpi/nfit/nfit.h +++ b/drivers/acpi/nfit/nfit.h @@ -203,6 +203,7 @@ struct acpi_nfit_desc { unsigned int max_ars; unsigned int scrub_count; unsigned int scrub_mode; + unsigned int scrub_busy:1; unsigned int cancel:1; unsigned long dimm_cmd_force_en; unsigned long 
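The acpi_nfit_ctl() hunks above initialize the *cmd_rc out-parameter up front and on the zero-length-transfer path, so no early return leaves the caller reading an uninitialized status. A minimal sketch of the defensive out-parameter pattern (hypothetical function, not the nfit API):

#include <errno.h>

/* Set the out-parameter before any early return so callers never read junk. */
static int do_command(int cmd, int *cmd_rc)
{
	if (cmd_rc)
		*cmd_rc = -EINVAL;	/* pessimistic default */

	if (cmd < 0)
		return -EINVAL;		/* *cmd_rc is already meaningful */

	/* ... issue the command ... */

	if (cmd_rc)
		*cmd_rc = 0;		/* success path overwrites the default */
	return 0;
}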
bus_cmd_force_en; diff --git a/drivers/acpi/osl.c b/drivers/acpi/osl.c index 7ca41bf023c9..8df9abfa947b 100644 --- a/drivers/acpi/osl.c +++ b/drivers/acpi/osl.c @@ -45,6 +45,8 @@ #include #include +#include "acpica/accommon.h" +#include "acpica/acnamesp.h" #include "internal.h" #define _COMPONENT ACPI_OS_SERVICES @@ -1490,6 +1492,76 @@ int acpi_check_region(resource_size_t start, resource_size_t n, } EXPORT_SYMBOL(acpi_check_region); +static acpi_status acpi_deactivate_mem_region(acpi_handle handle, u32 level, + void *_res, void **return_value) +{ + struct acpi_mem_space_context **mem_ctx; + union acpi_operand_object *handler_obj; + union acpi_operand_object *region_obj2; + union acpi_operand_object *region_obj; + struct resource *res = _res; + acpi_status status; + + region_obj = acpi_ns_get_attached_object(handle); + if (!region_obj) + return AE_OK; + + handler_obj = region_obj->region.handler; + if (!handler_obj) + return AE_OK; + + if (region_obj->region.space_id != ACPI_ADR_SPACE_SYSTEM_MEMORY) + return AE_OK; + + if (!(region_obj->region.flags & AOPOBJ_SETUP_COMPLETE)) + return AE_OK; + + region_obj2 = acpi_ns_get_secondary_object(region_obj); + if (!region_obj2) + return AE_OK; + + mem_ctx = (void *)®ion_obj2->extra.region_context; + + if (!(mem_ctx[0]->address >= res->start && + mem_ctx[0]->address < res->end)) + return AE_OK; + + status = handler_obj->address_space.setup(region_obj, + ACPI_REGION_DEACTIVATE, + NULL, (void **)mem_ctx); + if (ACPI_SUCCESS(status)) + region_obj->region.flags &= ~(AOPOBJ_SETUP_COMPLETE); + + return status; +} + +/** + * acpi_release_memory - Release any mappings done to a memory region + * @handle: Handle to namespace node + * @res: Memory resource + * @level: A level that terminates the search + * + * Walks through @handle and unmaps all SystemMemory Operation Regions that + * overlap with @res and that have already been activated (mapped). + * + * This is a helper that allows drivers to place special requirements on memory + * region that may overlap with operation regions, primarily allowing them to + * safely map the region as non-cached memory. + * + * The unmapped Operation Regions will be automatically remapped next time they + * are called, so the drivers do not need to do anything else. + */ +acpi_status acpi_release_memory(acpi_handle handle, struct resource *res, + u32 level) +{ + if (!(res->flags & IORESOURCE_MEM)) + return AE_TYPE; + + return acpi_walk_namespace(ACPI_TYPE_REGION, handle, level, + acpi_deactivate_mem_region, NULL, res, NULL); +} +EXPORT_SYMBOL_GPL(acpi_release_memory); + /* * Let drivers know whether the resource checks are effective */ diff --git a/drivers/acpi/pptt.c b/drivers/acpi/pptt.c index e5ea1974d1e3..d1e26cb599bf 100644 --- a/drivers/acpi/pptt.c +++ b/drivers/acpi/pptt.c @@ -481,8 +481,14 @@ static int topology_get_acpi_cpu_tag(struct acpi_table_header *table, if (cpu_node) { cpu_node = acpi_find_processor_package_id(table, cpu_node, level, flag); - /* Only the first level has a guaranteed id */ - if (level == 0) + /* + * As per specification if the processor structure represents + * an actual processor, then ACPI processor ID must be valid. 
+ * For processor containers ACPI_PPTT_ACPI_PROCESSOR_ID_VALID + * should be set if the UID is valid + */ + if (level == 0 || + cpu_node->flags & ACPI_PPTT_ACPI_PROCESSOR_ID_VALID) return cpu_node->acpi_processor_id; return ACPI_PTR_DIFF(cpu_node, table); } diff --git a/drivers/ata/Kconfig b/drivers/ata/Kconfig index 2b16e7c8fff3..39b181d6bd0d 100644 --- a/drivers/ata/Kconfig +++ b/drivers/ata/Kconfig @@ -398,7 +398,6 @@ config SATA_DWC_VDEBUG config SATA_HIGHBANK tristate "Calxeda Highbank SATA support" - depends on HAS_DMA depends on ARCH_HIGHBANK || COMPILE_TEST help This option enables support for the Calxeda Highbank SoC's @@ -408,7 +407,6 @@ config SATA_HIGHBANK config SATA_MV tristate "Marvell SATA support" - depends on HAS_DMA depends on PCI || ARCH_DOVE || ARCH_MV78XX0 || \ ARCH_MVEBU || ARCH_ORION5X || COMPILE_TEST select GENERIC_PHY diff --git a/drivers/ata/ahci.c b/drivers/ata/ahci.c index 738fb22978dd..b2b9eba1d214 100644 --- a/drivers/ata/ahci.c +++ b/drivers/ata/ahci.c @@ -400,6 +400,7 @@ static const struct pci_device_id ahci_pci_tbl[] = { { PCI_VDEVICE(INTEL, 0x0f23), board_ahci_mobile }, /* Bay Trail AHCI */ { PCI_VDEVICE(INTEL, 0x22a3), board_ahci_mobile }, /* Cherry Tr. AHCI */ { PCI_VDEVICE(INTEL, 0x5ae3), board_ahci_mobile }, /* ApolloLake AHCI */ + { PCI_VDEVICE(INTEL, 0x34d3), board_ahci_mobile }, /* Ice Lake LP AHCI */ /* JMicron 360/1/3/5/6, match class to avoid IDE function */ { PCI_VENDOR_ID_JMICRON, PCI_ANY_ID, PCI_ANY_ID, PCI_ANY_ID, @@ -1280,6 +1281,59 @@ static bool ahci_broken_suspend(struct pci_dev *pdev) return strcmp(buf, dmi->driver_data) < 0; } +static bool ahci_broken_lpm(struct pci_dev *pdev) +{ + static const struct dmi_system_id sysids[] = { + /* Various Lenovo 50 series have LPM issues with older BIOSen */ + { + .matches = { + DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"), + DMI_MATCH(DMI_PRODUCT_VERSION, "ThinkPad X250"), + }, + .driver_data = "20180406", /* 1.31 */ + }, + { + .matches = { + DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"), + DMI_MATCH(DMI_PRODUCT_VERSION, "ThinkPad L450"), + }, + .driver_data = "20180420", /* 1.28 */ + }, + { + .matches = { + DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"), + DMI_MATCH(DMI_PRODUCT_VERSION, "ThinkPad T450s"), + }, + .driver_data = "20180315", /* 1.33 */ + }, + { + .matches = { + DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"), + DMI_MATCH(DMI_PRODUCT_VERSION, "ThinkPad W541"), + }, + /* + * Note date based on release notes, 2.35 has been + * reported to be good, but I've been unable to get + * a hold of the reporter to get the DMI BIOS date. + * TODO: fix this. 
+ */ + .driver_data = "20180310", /* 2.35 */ + }, + { } /* terminate list */ + }; + const struct dmi_system_id *dmi = dmi_first_match(sysids); + int year, month, date; + char buf[9]; + + if (!dmi) + return false; + + dmi_get_date(DMI_BIOS_DATE, &year, &month, &date); + snprintf(buf, sizeof(buf), "%04d%02d%02d", year, month, date); + + return strcmp(buf, dmi->driver_data) < 0; +} + static bool ahci_broken_online(struct pci_dev *pdev) { #define ENCODE_BUSDEVFN(bus, slot, func) \ @@ -1694,6 +1748,12 @@ static int ahci_init_one(struct pci_dev *pdev, const struct pci_device_id *ent) "quirky BIOS, skipping spindown on poweroff\n"); } + if (ahci_broken_lpm(pdev)) { + pi.flags |= ATA_FLAG_NO_LPM; + dev_warn(&pdev->dev, + "BIOS update required for Link Power Management support\n"); + } + if (ahci_broken_suspend(pdev)) { hpriv->flags |= AHCI_HFLAG_NO_SUSPEND; dev_warn(&pdev->dev, diff --git a/drivers/ata/ahci_mvebu.c b/drivers/ata/ahci_mvebu.c index 0045dacd814b..72d90b4c3aae 100644 --- a/drivers/ata/ahci_mvebu.c +++ b/drivers/ata/ahci_mvebu.c @@ -82,7 +82,7 @@ static void ahci_mvebu_regret_option(struct ahci_host_priv *hpriv) * * Return: 0 on success; Error code otherwise. */ -int ahci_mvebu_stop_engine(struct ata_port *ap) +static int ahci_mvebu_stop_engine(struct ata_port *ap) { void __iomem *port_mmio = ahci_port_base(ap); u32 tmp, port_fbs; diff --git a/drivers/ata/libahci.c b/drivers/ata/libahci.c index 965842a08743..09620c2ffa0f 100644 --- a/drivers/ata/libahci.c +++ b/drivers/ata/libahci.c @@ -35,6 +35,7 @@ #include #include #include +#include #include #include #include @@ -1146,10 +1147,12 @@ static ssize_t ahci_led_store(struct ata_port *ap, const char *buf, /* get the slot number from the message */ pmp = (state & EM_MSG_LED_PMP_SLOT) >> 8; - if (pmp < EM_MAX_SLOTS) + if (pmp < EM_MAX_SLOTS) { + pmp = array_index_nospec(pmp, EM_MAX_SLOTS); emp = &pp->em_priv[pmp]; - else + } else { return -EINVAL; + } /* mask off the activity bits if we are in sw_activity * mode, user should turn off sw_activity before setting diff --git a/drivers/ata/libata-core.c b/drivers/ata/libata-core.c index 27d15ed7fa3d..cc71c63df381 100644 --- a/drivers/ata/libata-core.c +++ b/drivers/ata/libata-core.c @@ -2493,6 +2493,9 @@ int ata_dev_configure(struct ata_device *dev) (id[ATA_ID_SATA_CAPABILITY] & 0xe) == 0x2) dev->horkage |= ATA_HORKAGE_NOLPM; + if (ap->flags & ATA_FLAG_NO_LPM) + dev->horkage |= ATA_HORKAGE_NOLPM; + if (dev->horkage & ATA_HORKAGE_NOLPM) { ata_dev_warn(dev, "LPM support broken, forcing max_power\n"); dev->link->ap->target_lpm_policy = ATA_LPM_MAX_POWER; diff --git a/drivers/ata/libata-eh.c b/drivers/ata/libata-eh.c index d5412145d76d..01306c018398 100644 --- a/drivers/ata/libata-eh.c +++ b/drivers/ata/libata-eh.c @@ -614,8 +614,7 @@ void ata_scsi_cmd_error_handler(struct Scsi_Host *host, struct ata_port *ap, list_for_each_entry_safe(scmd, tmp, eh_work_q, eh_entry) { struct ata_queued_cmd *qc; - for (i = 0; i < ATA_MAX_QUEUE; i++) { - qc = __ata_qc_from_tag(ap, i); + ata_qc_for_each_raw(ap, qc, i) { if (qc->flags & ATA_QCFLAG_ACTIVE && qc->scsicmd == scmd) break; @@ -818,14 +817,13 @@ EXPORT_SYMBOL_GPL(ata_port_wait_eh); static int ata_eh_nr_in_flight(struct ata_port *ap) { + struct ata_queued_cmd *qc; unsigned int tag; int nr = 0; /* count only non-internal commands */ - for (tag = 0; tag < ATA_MAX_QUEUE; tag++) { - if (ata_tag_internal(tag)) - continue; - if (ata_qc_from_tag(ap, tag)) + ata_qc_for_each(ap, qc, tag) { + if (qc) nr++; } @@ -847,13 +845,13 @@ void ata_eh_fastdrain_timerfn(struct 
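ahci_broken_lpm() above reuses a trick also seen in ahci_broken_suspend(): render the DMI BIOS date as a zero-padded "YYYYMMDD" string, after which plain strcmp() orders dates chronologically and "older than the fixed BIOS" is a single comparison. A self-contained sketch:

#include <stdio.h>
#include <string.h>
#include <stdbool.h>

/* Zero-padded YYYYMMDD strings sort the same way the dates do. */
static bool bios_older_than(int year, int month, int day, const char *cutoff)
{
	char buf[9];

	snprintf(buf, sizeof(buf), "%04d%02d%02d", year, month, day);
	return strcmp(buf, cutoff) < 0;	/* lexicographic == chronological */
}

/* e.g. bios_older_than(2018, 3, 1, "20180406") -> true: apply the LPM quirk */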
timer_list *t) goto out_unlock; if (cnt == ap->fastdrain_cnt) { + struct ata_queued_cmd *qc; unsigned int tag; /* No progress during the last interval, tag all * in-flight qcs as timed out and freeze the port. */ - for (tag = 0; tag < ATA_MAX_QUEUE; tag++) { - struct ata_queued_cmd *qc = ata_qc_from_tag(ap, tag); + ata_qc_for_each(ap, qc, tag) { if (qc) qc->err_mask |= AC_ERR_TIMEOUT; } @@ -999,6 +997,7 @@ void ata_port_schedule_eh(struct ata_port *ap) static int ata_do_link_abort(struct ata_port *ap, struct ata_link *link) { + struct ata_queued_cmd *qc; int tag, nr_aborted = 0; WARN_ON(!ap->ops->error_handler); @@ -1007,9 +1006,7 @@ static int ata_do_link_abort(struct ata_port *ap, struct ata_link *link) ata_eh_set_pending(ap, 0); /* include internal tag in iteration */ - for (tag = 0; tag <= ATA_MAX_QUEUE; tag++) { - struct ata_queued_cmd *qc = ata_qc_from_tag(ap, tag); - + ata_qc_for_each_with_internal(ap, qc, tag) { if (qc && (!link || qc->dev->link == link)) { qc->flags |= ATA_QCFLAG_FAILED; ata_qc_complete(qc); @@ -1712,9 +1709,7 @@ void ata_eh_analyze_ncq_error(struct ata_link *link) return; /* has LLDD analyzed already? */ - for (tag = 0; tag < ATA_MAX_QUEUE; tag++) { - qc = __ata_qc_from_tag(ap, tag); - + ata_qc_for_each_raw(ap, qc, tag) { if (!(qc->flags & ATA_QCFLAG_FAILED)) continue; @@ -2136,6 +2131,7 @@ static void ata_eh_link_autopsy(struct ata_link *link) { struct ata_port *ap = link->ap; struct ata_eh_context *ehc = &link->eh_context; + struct ata_queued_cmd *qc; struct ata_device *dev; unsigned int all_err_mask = 0, eflags = 0; int tag, nr_failed = 0, nr_quiet = 0; @@ -2168,9 +2164,7 @@ static void ata_eh_link_autopsy(struct ata_link *link) all_err_mask |= ehc->i.err_mask; - for (tag = 0; tag < ATA_MAX_QUEUE; tag++) { - struct ata_queued_cmd *qc = __ata_qc_from_tag(ap, tag); - + ata_qc_for_each_raw(ap, qc, tag) { if (!(qc->flags & ATA_QCFLAG_FAILED) || ata_dev_phys_link(qc->dev) != link) continue; @@ -2436,6 +2430,7 @@ static void ata_eh_link_report(struct ata_link *link) { struct ata_port *ap = link->ap; struct ata_eh_context *ehc = &link->eh_context; + struct ata_queued_cmd *qc; const char *frozen, *desc; char tries_buf[6] = ""; int tag, nr_failed = 0; @@ -2447,9 +2442,7 @@ static void ata_eh_link_report(struct ata_link *link) if (ehc->i.desc[0] != '\0') desc = ehc->i.desc; - for (tag = 0; tag < ATA_MAX_QUEUE; tag++) { - struct ata_queued_cmd *qc = __ata_qc_from_tag(ap, tag); - + ata_qc_for_each_raw(ap, qc, tag) { if (!(qc->flags & ATA_QCFLAG_FAILED) || ata_dev_phys_link(qc->dev) != link || ((qc->flags & ATA_QCFLAG_QUIET) && @@ -2511,8 +2504,7 @@ static void ata_eh_link_report(struct ata_link *link) ehc->i.serror & SERR_DEV_XCHG ? 
"DevExch " : ""); #endif - for (tag = 0; tag < ATA_MAX_QUEUE; tag++) { - struct ata_queued_cmd *qc = __ata_qc_from_tag(ap, tag); + ata_qc_for_each_raw(ap, qc, tag) { struct ata_taskfile *cmd = &qc->tf, *res = &qc->result_tf; char data_buf[20] = ""; char cdb_buf[70] = ""; @@ -3992,12 +3984,11 @@ int ata_eh_recover(struct ata_port *ap, ata_prereset_fn_t prereset, */ void ata_eh_finish(struct ata_port *ap) { + struct ata_queued_cmd *qc; int tag; /* retry or finish qcs */ - for (tag = 0; tag < ATA_MAX_QUEUE; tag++) { - struct ata_queued_cmd *qc = __ata_qc_from_tag(ap, tag); - + ata_qc_for_each_raw(ap, qc, tag) { if (!(qc->flags & ATA_QCFLAG_FAILED)) continue; diff --git a/drivers/ata/libata-scsi.c b/drivers/ata/libata-scsi.c index 6a91d04351d9..aad1b01447de 100644 --- a/drivers/ata/libata-scsi.c +++ b/drivers/ata/libata-scsi.c @@ -3805,10 +3805,20 @@ static unsigned int ata_scsi_zbc_out_xlat(struct ata_queued_cmd *qc) */ goto invalid_param_len; } - if (block > dev->n_sectors) - goto out_of_range; all = cdb[14] & 0x1; + if (all) { + /* + * Ignore the block address (zone ID) as defined by ZBC. + */ + block = 0; + } else if (block >= dev->n_sectors) { + /* + * Block must be a valid zone ID (a zone start LBA). + */ + fp = 2; + goto invalid_fld; + } if (ata_ncq_enabled(qc->dev) && ata_fpdma_zac_mgmt_out_supported(qc->dev)) { @@ -3837,10 +3847,6 @@ static unsigned int ata_scsi_zbc_out_xlat(struct ata_queued_cmd *qc) invalid_fld: ata_scsi_set_invalid_field(qc->dev, scmd, fp, 0xff); return 1; - out_of_range: - /* "Logical Block Address out of range" */ - ata_scsi_set_sense(qc->dev, scmd, ILLEGAL_REQUEST, 0x21, 0x00); - return 1; invalid_param_len: /* "Parameter list length error" */ ata_scsi_set_sense(qc->dev, scmd, ILLEGAL_REQUEST, 0x1a, 0x0); diff --git a/drivers/ata/sata_fsl.c b/drivers/ata/sata_fsl.c index b8d9cfc60374..4dc528bf8e85 100644 --- a/drivers/ata/sata_fsl.c +++ b/drivers/ata/sata_fsl.c @@ -395,12 +395,6 @@ static inline unsigned int sata_fsl_tag(unsigned int tag, { /* We let libATA core do actual (queue) tag allocation */ - /* all non NCQ/queued commands should have tag#0 */ - if (ata_tag_internal(tag)) { - DPRINTK("mapping internal cmds to tag#0\n"); - return 0; - } - if (unlikely(tag >= SATA_FSL_QUEUE_DEPTH)) { DPRINTK("tag %d invalid : out of range\n", tag); return 0; @@ -1229,8 +1223,7 @@ static void sata_fsl_host_intr(struct ata_port *ap) /* Workaround for data length mismatch errata */ if (unlikely(hstatus & INT_ON_DATA_LENGTH_MISMATCH)) { - for (tag = 0; tag < ATA_MAX_QUEUE; tag++) { - qc = ata_qc_from_tag(ap, tag); + ata_qc_for_each_with_internal(ap, qc, tag) { if (qc && ata_is_atapi(qc->tf.protocol)) { u32 hcontrol; /* Set HControl[27] to clear error registers */ diff --git a/drivers/ata/sata_nv.c b/drivers/ata/sata_nv.c index 10ae11aa1926..72c9b922a77b 100644 --- a/drivers/ata/sata_nv.c +++ b/drivers/ata/sata_nv.c @@ -675,7 +675,6 @@ static int nv_adma_slave_config(struct scsi_device *sdev) struct ata_port *ap = ata_shost_to_port(sdev->host); struct nv_adma_port_priv *pp = ap->private_data; struct nv_adma_port_priv *port0, *port1; - struct scsi_device *sdev0, *sdev1; struct pci_dev *pdev = to_pci_dev(ap->host->dev); unsigned long segment_boundary, flags; unsigned short sg_tablesize; @@ -736,8 +735,6 @@ static int nv_adma_slave_config(struct scsi_device *sdev) port0 = ap->host->ports[0]->private_data; port1 = ap->host->ports[1]->private_data; - sdev0 = ap->host->ports[0]->link.device[0].sdev; - sdev1 = ap->host->ports[1]->link.device[0].sdev; if ((port0->flags & 
NV_ADMA_ATAPI_SETUP_COMPLETE) || (port1->flags & NV_ADMA_ATAPI_SETUP_COMPLETE)) { /* diff --git a/drivers/atm/iphase.c b/drivers/atm/iphase.c index ff81a576347e..82532c299bb5 100644 --- a/drivers/atm/iphase.c +++ b/drivers/atm/iphase.c @@ -1618,7 +1618,7 @@ static int rx_init(struct atm_dev *dev) skb_queue_head_init(&iadev->rx_dma_q); iadev->rx_free_desc_qhead = NULL; - iadev->rx_open = kcalloc(4, iadev->num_vc, GFP_KERNEL); + iadev->rx_open = kcalloc(iadev->num_vc, sizeof(void *), GFP_KERNEL); if (!iadev->rx_open) { printk(KERN_ERR DEV_LABEL "itf %d couldn't get free page\n", dev->number); diff --git a/drivers/atm/zatm.c b/drivers/atm/zatm.c index a8d2eb0ceb8d..2c288d1f42bb 100644 --- a/drivers/atm/zatm.c +++ b/drivers/atm/zatm.c @@ -1483,6 +1483,8 @@ static int zatm_ioctl(struct atm_dev *dev,unsigned int cmd,void __user *arg) return -EFAULT; if (pool < 0 || pool > ZATM_LAST_POOL) return -EINVAL; + pool = array_index_nospec(pool, + ZATM_LAST_POOL + 1); if (copy_from_user(&info, &((struct zatm_pool_req __user *) arg)->info, sizeof(info))) return -EFAULT; diff --git a/drivers/base/Makefile b/drivers/base/Makefile index b074f242a435..704f44295810 100644 --- a/drivers/base/Makefile +++ b/drivers/base/Makefile @@ -8,10 +8,7 @@ obj-y := component.o core.o bus.o dd.o syscore.o \ topology.o container.o property.o cacheinfo.o \ devcon.o obj-$(CONFIG_DEVTMPFS) += devtmpfs.o -obj-$(CONFIG_DMA_CMA) += dma-contiguous.o obj-y += power/ -obj-$(CONFIG_HAS_DMA) += dma-mapping.o -obj-$(CONFIG_HAVE_GENERIC_DMA_COHERENT) += dma-coherent.o obj-$(CONFIG_ISA_BUS_API) += isa.o obj-y += firmware_loader/ obj-$(CONFIG_NUMA) += node.o diff --git a/drivers/base/core.c b/drivers/base/core.c index 36622b52e419..df3e1a44707a 100644 --- a/drivers/base/core.c +++ b/drivers/base/core.c @@ -236,6 +236,13 @@ struct device_link *device_link_add(struct device *consumer, link->rpm_active = true; } pm_runtime_new_link(consumer); + /* + * If the link is being added by the consumer driver at probe + * time, balance the decrementation of the supplier's runtime PM + * usage counter after consumer probe in driver_probe_device(). + */ + if (consumer->links.status == DL_DEV_PROBING) + pm_runtime_get_noresume(supplier); } get_device(supplier); link->supplier = supplier; @@ -255,12 +262,12 @@ struct device_link *device_link_add(struct device *consumer, switch (consumer->links.status) { case DL_DEV_PROBING: /* - * Balance the decrementation of the supplier's - * runtime PM usage counter after consumer probe - * in driver_probe_device(). + * Some callers expect the link creation during + * consumer driver probe to resume the supplier + * even without DL_FLAG_RPM_ACTIVE. 
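The zatm and libahci hunks above insert array_index_nospec() after a user-controlled index passes its bounds check: on a mispredicted branch the CPU may still speculatively use the unclamped index, so the value is clamped branchlessly before the array access. A sketch mirroring the generic mask construction (simplified; the kernel version also has per-arch variants):

/* All-ones when idx < size, zero otherwise -- no branch to mispredict. */
static inline unsigned long index_mask(unsigned long idx, unsigned long size)
{
	return ~(long)(idx | (size - 1UL - idx)) >> (sizeof(long) * 8 - 1);
}

static unsigned long index_nospec(unsigned long idx, unsigned long size)
{
	return idx & index_mask(idx, size);	/* out-of-range collapses to 0 */
}

/* e.g. after 'pool <= ZATM_LAST_POOL' passes:
 * pool = index_nospec(pool, ZATM_LAST_POOL + 1); */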
*/ if (flags & DL_FLAG_PM_RUNTIME) - pm_runtime_get_sync(supplier); + pm_runtime_resume(supplier); link->status = DL_STATE_CONSUMER_PROBE; break; diff --git a/drivers/base/dd.c b/drivers/base/dd.c index 1435d7281c66..6ebcd65d64b6 100644 --- a/drivers/base/dd.c +++ b/drivers/base/dd.c @@ -434,14 +434,6 @@ re_probe: goto probe_failed; } - /* - * Ensure devices are listed in devices_kset in correct order - * It's important to move Dev to the end of devices_kset before - * calling .probe, because it could be recursive and parent Dev - * should always go first - */ - devices_kset_move_last(dev); - if (dev->bus->probe) { ret = dev->bus->probe(dev); if (ret) diff --git a/drivers/base/power/domain.c b/drivers/base/power/domain.c index 4925af5c4cf0..9e8484189034 100644 --- a/drivers/base/power/domain.c +++ b/drivers/base/power/domain.c @@ -2235,7 +2235,7 @@ static void genpd_dev_pm_sync(struct device *dev) } static int __genpd_dev_pm_attach(struct device *dev, struct device_node *np, - unsigned int index) + unsigned int index, bool power_on) { struct of_phandle_args pd_args; struct generic_pm_domain *pd; @@ -2271,9 +2271,11 @@ static int __genpd_dev_pm_attach(struct device *dev, struct device_node *np, dev->pm_domain->detach = genpd_dev_pm_detach; dev->pm_domain->sync = genpd_dev_pm_sync; - genpd_lock(pd); - ret = genpd_power_on(pd, 0); - genpd_unlock(pd); + if (power_on) { + genpd_lock(pd); + ret = genpd_power_on(pd, 0); + genpd_unlock(pd); + } if (ret) genpd_remove_device(pd, dev); @@ -2307,7 +2309,7 @@ int genpd_dev_pm_attach(struct device *dev) "#power-domain-cells") != 1) return 0; - return __genpd_dev_pm_attach(dev, dev->of_node, 0); + return __genpd_dev_pm_attach(dev, dev->of_node, 0, true); } EXPORT_SYMBOL_GPL(genpd_dev_pm_attach); @@ -2359,14 +2361,14 @@ struct device *genpd_dev_pm_attach_by_id(struct device *dev, } /* Try to attach the device to the PM domain at the specified index. */ - ret = __genpd_dev_pm_attach(genpd_dev, dev->of_node, index); + ret = __genpd_dev_pm_attach(genpd_dev, dev->of_node, index, false); if (ret < 1) { device_unregister(genpd_dev); return ret ? ERR_PTR(ret) : NULL; } - pm_runtime_set_active(genpd_dev); pm_runtime_enable(genpd_dev); + genpd_queue_power_off_work(dev_to_genpd(genpd_dev)); return genpd_dev; } @@ -2487,10 +2489,9 @@ EXPORT_SYMBOL_GPL(of_genpd_parse_idle_states); * power domain corresponding to a DT node's "required-opps" property. * * @dev: Device for which the performance-state needs to be found. - * @opp_node: DT node where the "required-opps" property is present. This can be + * @np: DT node where the "required-opps" property is present. This can be * the device node itself (if it doesn't have an OPP table) or a node * within the OPP table of a device (if device has an OPP table). - * @state: Pointer to return performance state. * * Returns performance state corresponding to the "required-opps" property of * a DT node. This calls platform specific genpd->opp_to_performance_state() @@ -2499,7 +2500,7 @@ EXPORT_SYMBOL_GPL(of_genpd_parse_idle_states); * Returns performance state on success and 0 on failure. 
*/ unsigned int of_genpd_opp_to_performance_state(struct device *dev, - struct device_node *opp_node) + struct device_node *np) { struct generic_pm_domain *genpd; struct dev_pm_opp *opp; @@ -2514,7 +2515,7 @@ unsigned int of_genpd_opp_to_performance_state(struct device *dev, genpd_lock(genpd); - opp = of_dev_pm_opp_find_required_opp(&genpd->dev, opp_node); + opp = of_dev_pm_opp_find_required_opp(&genpd->dev, np); if (IS_ERR(opp)) { dev_err(dev, "Failed to find required OPP: %ld\n", PTR_ERR(opp)); diff --git a/drivers/block/drbd/drbd_req.c b/drivers/block/drbd/drbd_req.c index a47e4987ee46..d146fedc38bb 100644 --- a/drivers/block/drbd/drbd_req.c +++ b/drivers/block/drbd/drbd_req.c @@ -1244,8 +1244,8 @@ drbd_request_prepare(struct drbd_device *device, struct bio *bio, unsigned long _drbd_start_io_acct(device, req); /* process discards always from our submitter thread */ - if ((bio_op(bio) & REQ_OP_WRITE_ZEROES) || - (bio_op(bio) & REQ_OP_DISCARD)) + if (bio_op(bio) == REQ_OP_WRITE_ZEROES || + bio_op(bio) == REQ_OP_DISCARD) goto queue_for_submitter_thread; if (rw == WRITE && req->private_bio && req->i.size diff --git a/drivers/block/drbd/drbd_worker.c b/drivers/block/drbd/drbd_worker.c index 1476cb3439f4..5e793dd7adfb 100644 --- a/drivers/block/drbd/drbd_worker.c +++ b/drivers/block/drbd/drbd_worker.c @@ -282,8 +282,8 @@ void drbd_request_endio(struct bio *bio) what = COMPLETED_OK; } - bio_put(req->private_bio); req->private_bio = ERR_PTR(blk_status_to_errno(bio->bi_status)); + bio_put(bio); /* not req_mod(), we need irqsave here! */ spin_lock_irqsave(&device->resource->req_lock, flags); diff --git a/drivers/block/loop.c b/drivers/block/loop.c index d6b6f434fd4b..4cb1d1be3cfb 100644 --- a/drivers/block/loop.c +++ b/drivers/block/loop.c @@ -1613,6 +1613,7 @@ static int lo_compat_ioctl(struct block_device *bdev, fmode_t mode, arg = (unsigned long) compat_ptr(arg); case LOOP_SET_FD: case LOOP_CHANGE_FD: + case LOOP_SET_BLOCK_SIZE: err = lo_ioctl(bdev, mode, cmd, arg); break; default: diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c index 3b7083b8ecbb..3fb95c8d9fd8 100644 --- a/drivers/block/nbd.c +++ b/drivers/block/nbd.c @@ -76,6 +76,7 @@ struct link_dead_args { #define NBD_HAS_CONFIG_REF 4 #define NBD_BOUND 5 #define NBD_DESTROY_ON_DISCONNECT 6 +#define NBD_DISCONNECT_ON_CLOSE 7 struct nbd_config { u32 flags; @@ -111,12 +112,16 @@ struct nbd_device { struct task_struct *task_setup; }; +#define NBD_CMD_REQUEUED 1 + struct nbd_cmd { struct nbd_device *nbd; + struct mutex lock; int index; int cookie; - struct completion send_complete; blk_status_t status; + unsigned long flags; + u32 cmd_cookie; }; #if IS_ENABLED(CONFIG_DEBUG_FS) @@ -138,12 +143,42 @@ static void nbd_config_put(struct nbd_device *nbd); static void nbd_connect_reply(struct genl_info *info, int index); static int nbd_genl_status(struct sk_buff *skb, struct genl_info *info); static void nbd_dead_link_work(struct work_struct *work); +static void nbd_disconnect_and_put(struct nbd_device *nbd); static inline struct device *nbd_to_dev(struct nbd_device *nbd) { return disk_to_dev(nbd->disk); } +static void nbd_requeue_cmd(struct nbd_cmd *cmd) +{ + struct request *req = blk_mq_rq_from_pdu(cmd); + + if (!test_and_set_bit(NBD_CMD_REQUEUED, &cmd->flags)) + blk_mq_requeue_request(req, true); +} + +#define NBD_COOKIE_BITS 32 + +static u64 nbd_cmd_handle(struct nbd_cmd *cmd) +{ + struct request *req = blk_mq_rq_from_pdu(cmd); + u32 tag = blk_mq_unique_tag(req); + u64 cookie = cmd->cmd_cookie; + + return (cookie << NBD_COOKIE_BITS) | tag; 
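The nbd changes in this region widen the wire handle from a bare blk-mq tag to a 64-bit value that also carries a per-command reuse counter, so a late reply for an old incarnation of a tag can be detected and dropped. A self-contained sketch of the pack/unpack scheme (names illustrative):

#include <stdint.h>

#define COOKIE_BITS 32

/* Pack a reuse-counter cookie and the blk-mq tag into one wire handle. */
static uint64_t pack_handle(uint32_t cookie, uint32_t tag)
{
	return ((uint64_t)cookie << COOKIE_BITS) | tag;
}

static uint32_t handle_to_tag(uint64_t handle)
{
	return (uint32_t)handle;
}

static uint32_t handle_to_cookie(uint64_t handle)
{
	return (uint32_t)(handle >> COOKIE_BITS);
}

/* On reply: look the request up by tag, then require
 * handle_to_cookie(handle) == cmd->cmd_cookie, so a stale reply to a
 * recycled tag is rejected instead of completing the wrong I/O. */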
+} + +static u32 nbd_handle_to_tag(u64 handle) +{ + return (u32)handle; +} + +static u32 nbd_handle_to_cookie(u64 handle) +{ + return (u32)(handle >> NBD_COOKIE_BITS); +} + static const char *nbdcmd_to_ascii(int cmd) { switch (cmd) { @@ -317,6 +352,9 @@ static enum blk_eh_timer_return nbd_xmit_timeout(struct request *req, } config = nbd->config; + if (!mutex_trylock(&cmd->lock)) + return BLK_EH_RESET_TIMER; + if (config->num_connections > 1) { dev_err_ratelimited(nbd_to_dev(nbd), "Connection timed out, retrying (%d/%d alive)\n", @@ -341,7 +379,8 @@ static enum blk_eh_timer_return nbd_xmit_timeout(struct request *req, nbd_mark_nsock_dead(nbd, nsock, 1); mutex_unlock(&nsock->tx_lock); } - blk_mq_requeue_request(req, true); + mutex_unlock(&cmd->lock); + nbd_requeue_cmd(cmd); nbd_config_put(nbd); return BLK_EH_DONE; } @@ -351,6 +390,7 @@ static enum blk_eh_timer_return nbd_xmit_timeout(struct request *req, } set_bit(NBD_TIMEDOUT, &config->runtime_flags); cmd->status = BLK_STS_IOERR; + mutex_unlock(&cmd->lock); sock_shutdown(nbd); nbd_config_put(nbd); done: @@ -428,9 +468,9 @@ static int nbd_send_cmd(struct nbd_device *nbd, struct nbd_cmd *cmd, int index) struct iov_iter from; unsigned long size = blk_rq_bytes(req); struct bio *bio; + u64 handle; u32 type; u32 nbd_cmd_flags = 0; - u32 tag = blk_mq_unique_tag(req); int sent = nsock->sent, skip = 0; iov_iter_kvec(&from, WRITE | ITER_KVEC, &iov, 1, sizeof(request)); @@ -472,6 +512,8 @@ static int nbd_send_cmd(struct nbd_device *nbd, struct nbd_cmd *cmd, int index) goto send_pages; } iov_iter_advance(&from, sent); + } else { + cmd->cmd_cookie++; } cmd->index = index; cmd->cookie = nsock->cookie; @@ -480,7 +522,8 @@ static int nbd_send_cmd(struct nbd_device *nbd, struct nbd_cmd *cmd, int index) request.from = cpu_to_be64((u64)blk_rq_pos(req) << 9); request.len = htonl(size); } - memcpy(request.handle, &tag, sizeof(tag)); + handle = nbd_cmd_handle(cmd); + memcpy(request.handle, &handle, sizeof(handle)); dev_dbg(nbd_to_dev(nbd), "request %p: sending control (%s@%llu,%uB)\n", req, nbdcmd_to_ascii(type), @@ -498,6 +541,7 @@ static int nbd_send_cmd(struct nbd_device *nbd, struct nbd_cmd *cmd, int index) nsock->pending = req; nsock->sent = sent; } + set_bit(NBD_CMD_REQUEUED, &cmd->flags); return BLK_STS_RESOURCE; } dev_err_ratelimited(disk_to_dev(nbd->disk), @@ -539,6 +583,7 @@ send_pages: */ nsock->pending = req; nsock->sent = sent; + set_bit(NBD_CMD_REQUEUED, &cmd->flags); return BLK_STS_RESOURCE; } dev_err(disk_to_dev(nbd->disk), @@ -571,10 +616,12 @@ static struct nbd_cmd *nbd_read_stat(struct nbd_device *nbd, int index) struct nbd_reply reply; struct nbd_cmd *cmd; struct request *req = NULL; + u64 handle; u16 hwq; u32 tag; struct kvec iov = {.iov_base = &reply, .iov_len = sizeof(reply)}; struct iov_iter to; + int ret = 0; reply.magic = 0; iov_iter_kvec(&to, READ | ITER_KVEC, &iov, 1, sizeof(reply)); @@ -592,8 +639,8 @@ static struct nbd_cmd *nbd_read_stat(struct nbd_device *nbd, int index) return ERR_PTR(-EPROTO); } - memcpy(&tag, reply.handle, sizeof(u32)); - + memcpy(&handle, reply.handle, sizeof(handle)); + tag = nbd_handle_to_tag(handle); hwq = blk_mq_unique_tag_to_hwq(tag); if (hwq < nbd->tag_set.nr_hw_queues) req = blk_mq_tag_to_rq(nbd->tag_set.tags[hwq], @@ -604,11 +651,25 @@ static struct nbd_cmd *nbd_read_stat(struct nbd_device *nbd, int index) return ERR_PTR(-ENOENT); } cmd = blk_mq_rq_to_pdu(req); + + mutex_lock(&cmd->lock); + if (cmd->cmd_cookie != nbd_handle_to_cookie(handle)) { + dev_err(disk_to_dev(nbd->disk), "Double reply on req %p, 
cmd_cookie %u, handle cookie %u\n", + req, cmd->cmd_cookie, nbd_handle_to_cookie(handle)); + ret = -ENOENT; + goto out; + } + if (test_bit(NBD_CMD_REQUEUED, &cmd->flags)) { + dev_err(disk_to_dev(nbd->disk), "Raced with timeout on req %p\n", + req); + ret = -ENOENT; + goto out; + } if (ntohl(reply.error)) { dev_err(disk_to_dev(nbd->disk), "Other side returned error (%d)\n", ntohl(reply.error)); cmd->status = BLK_STS_IOERR; - return cmd; + goto out; } dev_dbg(nbd_to_dev(nbd), "request %p: got reply\n", req); @@ -633,18 +694,18 @@ static struct nbd_cmd *nbd_read_stat(struct nbd_device *nbd, int index) if (nbd_disconnected(config) || config->num_connections <= 1) { cmd->status = BLK_STS_IOERR; - return cmd; + goto out; } - return ERR_PTR(-EIO); + ret = -EIO; + goto out; } dev_dbg(nbd_to_dev(nbd), "request %p: got %d bytes data\n", req, bvec.bv_len); } - } else { - /* See the comment in nbd_queue_rq. */ - wait_for_completion(&cmd->send_complete); } - return cmd; +out: + mutex_unlock(&cmd->lock); + return ret ? ERR_PTR(ret) : cmd; } static void recv_work(struct work_struct *work) @@ -803,7 +864,7 @@ again: */ blk_mq_start_request(req); if (unlikely(nsock->pending && nsock->pending != req)) { - blk_mq_requeue_request(req, true); + nbd_requeue_cmd(cmd); ret = 0; goto out; } @@ -816,7 +877,7 @@ again: dev_err_ratelimited(disk_to_dev(nbd->disk), "Request send failed, requeueing\n"); nbd_mark_nsock_dead(nbd, nsock, 1); - blk_mq_requeue_request(req, true); + nbd_requeue_cmd(cmd); ret = 0; } out: @@ -840,7 +901,8 @@ static blk_status_t nbd_queue_rq(struct blk_mq_hw_ctx *hctx, * that the server is misbehaving (or there was an error) before we're * done sending everything over the wire. */ - init_completion(&cmd->send_complete); + mutex_lock(&cmd->lock); + clear_bit(NBD_CMD_REQUEUED, &cmd->flags); /* We can be called directly from the user space process, which means we * could possibly have signals pending so our sendmsg will fail. 
In @@ -852,7 +914,7 @@ static blk_status_t nbd_queue_rq(struct blk_mq_hw_ctx *hctx, ret = BLK_STS_IOERR; else if (!ret) ret = BLK_STS_OK; - complete(&cmd->send_complete); + mutex_unlock(&cmd->lock); return ret; } @@ -1305,6 +1367,12 @@ out: static void nbd_release(struct gendisk *disk, fmode_t mode) { struct nbd_device *nbd = disk->private_data; + struct block_device *bdev = bdget_disk(disk, 0); + + if (test_bit(NBD_DISCONNECT_ON_CLOSE, &nbd->config->runtime_flags) && + bdev->bd_openers == 0) + nbd_disconnect_and_put(nbd); + nbd_config_put(nbd); nbd_put(nbd); } @@ -1452,6 +1520,8 @@ static int nbd_init_request(struct blk_mq_tag_set *set, struct request *rq, { struct nbd_cmd *cmd = blk_mq_rq_to_pdu(rq); cmd->nbd = set->driver_data; + cmd->flags = 0; + mutex_init(&cmd->lock); return 0; } @@ -1705,6 +1775,10 @@ again: &config->runtime_flags); put_dev = true; } + if (flags & NBD_CFLAG_DISCONNECT_ON_CLOSE) { + set_bit(NBD_DISCONNECT_ON_CLOSE, + &config->runtime_flags); + } } if (info->attrs[NBD_ATTR_SOCKETS]) { @@ -1749,6 +1823,17 @@ out: return ret; } +static void nbd_disconnect_and_put(struct nbd_device *nbd) +{ + mutex_lock(&nbd->config_lock); + nbd_disconnect(nbd); + nbd_clear_sock(nbd); + mutex_unlock(&nbd->config_lock); + if (test_and_clear_bit(NBD_HAS_CONFIG_REF, + &nbd->config->runtime_flags)) + nbd_config_put(nbd); +} + static int nbd_genl_disconnect(struct sk_buff *skb, struct genl_info *info) { struct nbd_device *nbd; @@ -1781,13 +1866,7 @@ static int nbd_genl_disconnect(struct sk_buff *skb, struct genl_info *info) nbd_put(nbd); return 0; } - mutex_lock(&nbd->config_lock); - nbd_disconnect(nbd); - nbd_clear_sock(nbd); - mutex_unlock(&nbd->config_lock); - if (test_and_clear_bit(NBD_HAS_CONFIG_REF, - &nbd->config->runtime_flags)) - nbd_config_put(nbd); + nbd_disconnect_and_put(nbd); nbd_config_put(nbd); nbd_put(nbd); return 0; @@ -1798,7 +1877,7 @@ static int nbd_genl_reconfigure(struct sk_buff *skb, struct genl_info *info) struct nbd_device *nbd = NULL; struct nbd_config *config; int index; - int ret = -EINVAL; + int ret = 0; bool put_dev = false; if (!netlink_capable(skb, CAP_SYS_ADMIN)) @@ -1838,6 +1917,7 @@ static int nbd_genl_reconfigure(struct sk_buff *skb, struct genl_info *info) !nbd->task_recv) { dev_err(nbd_to_dev(nbd), "not configured, cannot reconfigure\n"); + ret = -EINVAL; goto out; } @@ -1862,6 +1942,14 @@ static int nbd_genl_reconfigure(struct sk_buff *skb, struct genl_info *info) &config->runtime_flags)) refcount_inc(&nbd->refs); } + + if (flags & NBD_CFLAG_DISCONNECT_ON_CLOSE) { + set_bit(NBD_DISCONNECT_ON_CLOSE, + &config->runtime_flags); + } else { + clear_bit(NBD_DISCONNECT_ON_CLOSE, + &config->runtime_flags); + } } if (info->attrs[NBD_ATTR_SOCKETS]) { diff --git a/drivers/block/null_blk.c b/drivers/block/null_blk.c index 7948049f6c43..042c778e5a4e 100644 --- a/drivers/block/null_blk.c +++ b/drivers/block/null_blk.c @@ -1365,7 +1365,7 @@ static blk_qc_t null_queue_bio(struct request_queue *q, struct bio *bio) static enum blk_eh_timer_return null_rq_timed_out_fn(struct request *rq) { pr_info("null: rq %p timed out\n", rq); - blk_mq_complete_request(rq); + __blk_complete_request(rq); return BLK_EH_DONE; } diff --git a/drivers/block/rbd.c b/drivers/block/rbd.c index af354047ac4b..d81c653b9bf6 100644 --- a/drivers/block/rbd.c +++ b/drivers/block/rbd.c @@ -61,7 +61,7 @@ static int atomic_inc_return_safe(atomic_t *v) { unsigned int counter; - counter = (unsigned int)__atomic_add_unless(v, 1, 0); + counter = (unsigned int)atomic_fetch_add_unless(v, 1, 0); if (counter <= 
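The rbd hunk here is part of the tree-wide rename of __atomic_add_unless() to atomic_fetch_add_unless(); the operation itself is the conditional-increment used for refcounts that must never resurrect a zero count. A C11 sketch of its semantics as a compare-exchange loop:

#include <stdatomic.h>

/* Add 'a' to '*v' unless it currently equals 'u'; return the old value. */
static int fetch_add_unless(_Atomic int *v, int a, int u)
{
	int c = atomic_load(v);

	while (c != u && !atomic_compare_exchange_weak(v, &c, c + a))
		;	/* 'c' is reloaded on each failed exchange */
	return c;
}

/* refcount use: fetch_add_unless(&ref, 1, 0) never revives a dead object */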
(unsigned int)INT_MAX) return (int)counter; @@ -2339,6 +2339,7 @@ static bool is_zero_bvecs(struct bio_vec *bvecs, u32 bytes) static int rbd_obj_issue_copyup(struct rbd_obj_request *obj_req, u32 bytes) { unsigned int num_osd_ops = obj_req->osd_req->r_num_ops; + int ret; dout("%s obj_req %p bytes %u\n", __func__, obj_req, bytes); rbd_assert(obj_req->osd_req->r_ops[0].op == CEPH_OSD_OP_STAT); @@ -2353,6 +2354,11 @@ static int rbd_obj_issue_copyup(struct rbd_obj_request *obj_req, u32 bytes) if (!obj_req->osd_req) return -ENOMEM; + ret = osd_req_op_cls_init(obj_req->osd_req, 0, CEPH_OSD_OP_CALL, "rbd", + "copyup"); + if (ret) + return ret; + /* * Only send non-zero copyup data to save some I/O and network * bandwidth -- zero copyup data is equivalent to the object not @@ -2362,9 +2368,6 @@ static int rbd_obj_issue_copyup(struct rbd_obj_request *obj_req, u32 bytes) dout("%s obj_req %p detected zeroes\n", __func__, obj_req); bytes = 0; } - - osd_req_op_cls_init(obj_req->osd_req, 0, CEPH_OSD_OP_CALL, "rbd", - "copyup"); osd_req_op_cls_request_data_bvecs(obj_req->osd_req, 0, obj_req->copyup_bvecs, obj_req->copyup_bvec_count, @@ -3397,7 +3400,6 @@ static void cancel_tasks_sync(struct rbd_device *rbd_dev) { dout("%s rbd_dev %p\n", __func__, rbd_dev); - cancel_delayed_work_sync(&rbd_dev->watch_dwork); cancel_work_sync(&rbd_dev->acquired_lock_work); cancel_work_sync(&rbd_dev->released_lock_work); cancel_delayed_work_sync(&rbd_dev->lock_dwork); @@ -3415,6 +3417,7 @@ static void rbd_unregister_watch(struct rbd_device *rbd_dev) rbd_dev->watch_state = RBD_WATCH_STATE_UNREGISTERED; mutex_unlock(&rbd_dev->watch_mutex); + cancel_delayed_work_sync(&rbd_dev->watch_dwork); ceph_osdc_flush_notifies(&rbd_dev->rbd_client->client->osdc); } diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c index 7436b2d27fa3..a390c6d4f72d 100644 --- a/drivers/block/zram/zram_drv.c +++ b/drivers/block/zram/zram_drv.c @@ -298,7 +298,8 @@ static void reset_bdev(struct zram *zram) zram->backing_dev = NULL; zram->old_block_size = 0; zram->bdev = NULL; - + zram->disk->queue->backing_dev_info->capabilities |= + BDI_CAP_SYNCHRONOUS_IO; kvfree(zram->bitmap); zram->bitmap = NULL; } @@ -400,6 +401,18 @@ static ssize_t backing_dev_store(struct device *dev, zram->backing_dev = backing_dev; zram->bitmap = bitmap; zram->nr_pages = nr_pages; + /* + * With writeback feature, zram does asynchronous IO so it's no longer + * synchronous device so let's remove synchronous io flag. Othewise, + * upper layer(e.g., swap) could wait IO completion rather than + * (submit and return), which will cause system sluggish. + * Furthermore, when the IO function returns(e.g., swap_readpage), + * upper layer expects IO was done so it could deallocate the page + * freely but in fact, IO is going on so finally could cause + * use-after-free when the IO is really done. 
+ */ + zram->disk->queue->backing_dev_info->capabilities &= + ~BDI_CAP_SYNCHRONOUS_IO; up_write(&zram->init_lock); pr_info("setup backing device %s\n", file_name); diff --git a/drivers/bluetooth/hci_nokia.c b/drivers/bluetooth/hci_nokia.c index 14d159e2042d..2dc33e65d2d0 100644 --- a/drivers/bluetooth/hci_nokia.c +++ b/drivers/bluetooth/hci_nokia.c @@ -29,7 +29,7 @@ #include #include #include -#include +#include #include #include diff --git a/drivers/bus/ti-sysc.c b/drivers/bus/ti-sysc.c index 1cc29629d238..80d60f43db56 100644 --- a/drivers/bus/ti-sysc.c +++ b/drivers/bus/ti-sysc.c @@ -169,9 +169,9 @@ static int sysc_get_clocks(struct sysc *ddata) const char *name; int nr_fck = 0, nr_ick = 0, i, error = 0; - ddata->clock_roles = devm_kzalloc(ddata->dev, - sizeof(*ddata->clock_roles) * + ddata->clock_roles = devm_kcalloc(ddata->dev, SYSC_MAX_CLOCKS, + sizeof(*ddata->clock_roles), GFP_KERNEL); if (!ddata->clock_roles) return -ENOMEM; @@ -200,8 +200,8 @@ static int sysc_get_clocks(struct sysc *ddata) return -EINVAL; } - ddata->clocks = devm_kzalloc(ddata->dev, - sizeof(*ddata->clocks) * ddata->nr_clocks, + ddata->clocks = devm_kcalloc(ddata->dev, + ddata->nr_clocks, sizeof(*ddata->clocks), GFP_KERNEL); if (!ddata->clocks) return -ENOMEM; diff --git a/drivers/char/Kconfig b/drivers/char/Kconfig index 410c30c42120..212f447938ae 100644 --- a/drivers/char/Kconfig +++ b/drivers/char/Kconfig @@ -81,7 +81,7 @@ config PRINTER corresponding drivers into the kernel. To compile this driver as a module, choose M here and read - . The module will be called lp. + . The module will be called lp. If you have several parallel ports, you can specify which ports to use with the "lp" kernel command line option. (Try "man bootparam" diff --git a/drivers/char/agp/alpha-agp.c b/drivers/char/agp/alpha-agp.c index 53fe633df1e8..c9bf2c219841 100644 --- a/drivers/char/agp/alpha-agp.c +++ b/drivers/char/agp/alpha-agp.c @@ -11,7 +11,7 @@ #include "agp.h" -static int alpha_core_agp_vm_fault(struct vm_fault *vmf) +static vm_fault_t alpha_core_agp_vm_fault(struct vm_fault *vmf) { alpha_agp_info *agp = agp_bridge->dev_private_data; dma_addr_t dma_addr; diff --git a/drivers/char/agp/amd64-agp.c b/drivers/char/agp/amd64-agp.c index e50c29c97ca7..c69e39fdd02b 100644 --- a/drivers/char/agp/amd64-agp.c +++ b/drivers/char/agp/amd64-agp.c @@ -156,7 +156,7 @@ static u64 amd64_configure(struct pci_dev *hammer, u64 gatt_table) /* Address to map to */ pci_read_config_dword(hammer, AMD64_GARTAPERTUREBASE, &tmp); - aperturebase = tmp << 25; + aperturebase = (u64)tmp << 25; aper_base = (aperturebase & PCI_BASE_ADDRESS_MEM_MASK); enable_gart_translation(hammer, gatt_table); @@ -277,7 +277,7 @@ static int fix_northbridge(struct pci_dev *nb, struct pci_dev *agp, u16 cap) pci_read_config_dword(nb, AMD64_GARTAPERTURECTL, &nb_order); nb_order = (nb_order >> 1) & 7; pci_read_config_dword(nb, AMD64_GARTAPERTUREBASE, &nb_base); - nb_aper = nb_base << 25; + nb_aper = (u64)nb_base << 25; /* Northbridge seems to contain crap. Try the AGP bridge. 
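The ti-sysc hunks above (like the earlier iphase one) convert open-coded `kzalloc(count * size)` calls to kcalloc()/devm_kcalloc(): the two-argument form checks the multiplication for overflow instead of silently wrapping and under-allocating. A userspace sketch of what that check buys (calloc() performs the same test internally):

#include <stdint.h>
#include <stdlib.h>

/* Overflow-checked array allocation -- the point of kcalloc() over a
 * bare kzalloc(n * size, ...). */
static void *alloc_array(size_t n, size_t size)
{
	if (size != 0 && n > SIZE_MAX / size)
		return NULL;		/* n * size would wrap */
	return calloc(n, size);		/* zeroed, like kcalloc() */
}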
*/ diff --git a/drivers/char/hw_random/core.c b/drivers/char/hw_random/core.c index 91bb98c42a1c..aaf9e5afaad4 100644 --- a/drivers/char/hw_random/core.c +++ b/drivers/char/hw_random/core.c @@ -516,11 +516,18 @@ EXPORT_SYMBOL_GPL(hwrng_register); void hwrng_unregister(struct hwrng *rng) { + int err; + mutex_lock(&rng_mutex); list_del(&rng->list); - if (current_rng == rng) - enable_best_rng(); + if (current_rng == rng) { + err = enable_best_rng(); + if (err) { + drop_current_rng(); + cur_rng_set_by_user = 0; + } + } if (list_empty(&rng_list)) { mutex_unlock(&rng_mutex); diff --git a/drivers/char/ipmi/ipmi_si_intf.c b/drivers/char/ipmi/ipmi_si_intf.c index ad353be871bf..90ec010bffbd 100644 --- a/drivers/char/ipmi/ipmi_si_intf.c +++ b/drivers/char/ipmi/ipmi_si_intf.c @@ -2088,8 +2088,10 @@ static int try_smi_init(struct smi_info *new_smi) return 0; out_err: - ipmi_unregister_smi(new_smi->intf); - new_smi->intf = NULL; + if (new_smi->intf) { + ipmi_unregister_smi(new_smi->intf); + new_smi->intf = NULL; + } kfree(init_name); diff --git a/drivers/char/ipmi/kcs_bmc.c b/drivers/char/ipmi/kcs_bmc.c index fbfc05e3f3d1..bb882ab161fe 100644 --- a/drivers/char/ipmi/kcs_bmc.c +++ b/drivers/char/ipmi/kcs_bmc.c @@ -210,34 +210,23 @@ static void kcs_bmc_handle_cmd(struct kcs_bmc *kcs_bmc) int kcs_bmc_handle_event(struct kcs_bmc *kcs_bmc) { unsigned long flags; - int ret = 0; + int ret = -ENODATA; u8 status; spin_lock_irqsave(&kcs_bmc->lock, flags); - if (!kcs_bmc->running) { - kcs_force_abort(kcs_bmc); - ret = -ENODEV; - goto out_unlock; - } - - status = read_status(kcs_bmc) & (KCS_STATUS_IBF | KCS_STATUS_CMD_DAT); - - switch (status) { - case KCS_STATUS_IBF | KCS_STATUS_CMD_DAT: - kcs_bmc_handle_cmd(kcs_bmc); - break; - - case KCS_STATUS_IBF: - kcs_bmc_handle_data(kcs_bmc); - break; + status = read_status(kcs_bmc); + if (status & KCS_STATUS_IBF) { + if (!kcs_bmc->running) + kcs_force_abort(kcs_bmc); + else if (status & KCS_STATUS_CMD_DAT) + kcs_bmc_handle_cmd(kcs_bmc); + else + kcs_bmc_handle_data(kcs_bmc); - default: - ret = -ENODATA; - break; + ret = 0; } -out_unlock: spin_unlock_irqrestore(&kcs_bmc->lock, flags); return ret; diff --git a/drivers/char/mem.c b/drivers/char/mem.c index ffeb60d3434c..df66a9dd0aae 100644 --- a/drivers/char/mem.c +++ b/drivers/char/mem.c @@ -708,6 +708,7 @@ static int mmap_zero(struct file *file, struct vm_area_struct *vma) #endif if (vma->vm_flags & VM_SHARED) return shmem_zero_setup(vma); + vma_set_anonymous(vma); return 0; } diff --git a/drivers/char/random.c b/drivers/char/random.c index a8fb0020ba5c..bd449ad52442 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -402,7 +402,8 @@ static struct poolinfo { /* * Static global variables */ -static DECLARE_WAIT_QUEUE_HEAD(random_wait); +static DECLARE_WAIT_QUEUE_HEAD(random_read_wait); +static DECLARE_WAIT_QUEUE_HEAD(random_write_wait); static struct fasync_struct *fasync; static DEFINE_SPINLOCK(random_ready_list_lock); @@ -721,8 +722,8 @@ retry: /* should we wake readers? 
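 * (readers of /dev/random now sleep on the dedicated random_read_wait
 * queue; with the old shared random_wait queue this wakeup needed
 * wake_up_interruptible_poll() with a POLLIN key so that sleeping
 * writers were not woken as well)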
*/ if (entropy_bits >= random_read_wakeup_bits && - wq_has_sleeper(&random_wait)) { - wake_up_interruptible_poll(&random_wait, POLLIN); + wq_has_sleeper(&random_read_wait)) { + wake_up_interruptible(&random_read_wait); kill_fasync(&fasync, SIGIO, POLL_IN); } /* If the input pool is getting full, send some @@ -1396,7 +1397,7 @@ retry: trace_debit_entropy(r->name, 8 * ibytes); if (ibytes && (r->entropy_count >> ENTROPY_SHIFT) < random_write_wakeup_bits) { - wake_up_interruptible_poll(&random_wait, POLLOUT); + wake_up_interruptible(&random_write_wait); kill_fasync(&fasync, SIGIO, POLL_OUT); } @@ -1838,7 +1839,7 @@ _random_read(int nonblock, char __user *buf, size_t nbytes) if (nonblock) return -EAGAIN; - wait_event_interruptible(random_wait, + wait_event_interruptible(random_read_wait, ENTROPY_BITS(&input_pool) >= random_read_wakeup_bits); if (signal_pending(current)) @@ -1875,17 +1876,14 @@ urandom_read(struct file *file, char __user *buf, size_t nbytes, loff_t *ppos) return ret; } -static struct wait_queue_head * -random_get_poll_head(struct file *file, __poll_t events) -{ - return &random_wait; -} - static __poll_t -random_poll_mask(struct file *file, __poll_t events) +random_poll(struct file *file, poll_table * wait) { - __poll_t mask = 0; + __poll_t mask; + poll_wait(file, &random_read_wait, wait); + poll_wait(file, &random_write_wait, wait); + mask = 0; if (ENTROPY_BITS(&input_pool) >= random_read_wakeup_bits) mask |= EPOLLIN | EPOLLRDNORM; if (ENTROPY_BITS(&input_pool) < random_write_wakeup_bits) @@ -1897,14 +1895,22 @@ static int write_pool(struct entropy_store *r, const char __user *buffer, size_t count) { size_t bytes; - __u32 buf[16]; + __u32 t, buf[16]; const char __user *p = buffer; while (count > 0) { + int b, i = 0; + bytes = min(count, sizeof(buf)); if (copy_from_user(&buf, p, bytes)) return -EFAULT; + for (b = bytes ; b > 0 ; b -= sizeof(__u32), i++) { + if (!arch_get_random_int(&t)) + break; + buf[i] ^= t; + } + count -= bytes; p += bytes; @@ -1992,8 +1998,7 @@ static int random_fasync(int fd, struct file *filp, int on) const struct file_operations random_fops = { .read = random_read, .write = random_write, - .get_poll_head = random_get_poll_head, - .poll_mask = random_poll_mask, + .poll = random_poll, .unlocked_ioctl = random_ioctl, .fasync = random_fasync, .llseek = noop_llseek, @@ -2326,7 +2331,7 @@ void add_hwgenerator_randomness(const char *buffer, size_t count, * We'll be woken up again once below random_write_wakeup_thresh, * or when the calling thread is about to terminate. 
*/ - wait_event_interruptible(random_wait, kthread_should_stop() || + wait_event_interruptible(random_write_wait, kthread_should_stop() || ENTROPY_BITS(&input_pool) <= random_write_wakeup_bits); mix_pool_bytes(poolp, buffer, count); credit_entropy_bits(poolp, entropy); diff --git a/drivers/clk/Makefile b/drivers/clk/Makefile index ae40cbe770f0..0bb25dd009d1 100644 --- a/drivers/clk/Makefile +++ b/drivers/clk/Makefile @@ -96,7 +96,7 @@ obj-$(CONFIG_ARCH_SPRD) += sprd/ obj-$(CONFIG_ARCH_STI) += st/ obj-$(CONFIG_ARCH_STRATIX10) += socfpga/ obj-$(CONFIG_ARCH_SUNXI) += sunxi/ -obj-$(CONFIG_ARCH_SUNXI) += sunxi-ng/ +obj-$(CONFIG_SUNXI_CCU) += sunxi-ng/ obj-$(CONFIG_ARCH_TEGRA) += tegra/ obj-y += ti/ obj-$(CONFIG_CLK_UNIPHIER) += uniphier/ diff --git a/drivers/clk/clk-aspeed.c b/drivers/clk/clk-aspeed.c index 38b366b00c57..7b70a074095d 100644 --- a/drivers/clk/clk-aspeed.c +++ b/drivers/clk/clk-aspeed.c @@ -24,7 +24,7 @@ #define ASPEED_MPLL_PARAM 0x20 #define ASPEED_HPLL_PARAM 0x24 #define AST2500_HPLL_BYPASS_EN BIT(20) -#define AST2400_HPLL_STRAPPED BIT(18) +#define AST2400_HPLL_PROGRAMMED BIT(18) #define AST2400_HPLL_BYPASS_EN BIT(17) #define ASPEED_MISC_CTRL 0x2c #define UART_DIV13_EN BIT(12) @@ -91,8 +91,8 @@ static const struct aspeed_gate_data aspeed_gates[] = { [ASPEED_CLK_GATE_GCLK] = { 1, 7, "gclk-gate", NULL, 0 }, /* 2D engine */ [ASPEED_CLK_GATE_MCLK] = { 2, -1, "mclk-gate", "mpll", CLK_IS_CRITICAL }, /* SDRAM */ [ASPEED_CLK_GATE_VCLK] = { 3, 6, "vclk-gate", NULL, 0 }, /* Video Capture */ - [ASPEED_CLK_GATE_BCLK] = { 4, 8, "bclk-gate", "bclk", 0 }, /* PCIe/PCI */ - [ASPEED_CLK_GATE_DCLK] = { 5, -1, "dclk-gate", NULL, 0 }, /* DAC */ + [ASPEED_CLK_GATE_BCLK] = { 4, 8, "bclk-gate", "bclk", CLK_IS_CRITICAL }, /* PCIe/PCI */ + [ASPEED_CLK_GATE_DCLK] = { 5, -1, "dclk-gate", NULL, CLK_IS_CRITICAL }, /* DAC */ [ASPEED_CLK_GATE_REFCLK] = { 6, -1, "refclk-gate", "clkin", CLK_IS_CRITICAL }, [ASPEED_CLK_GATE_USBPORT2CLK] = { 7, 3, "usb-port2-gate", NULL, 0 }, /* USB2.0 Host port 2 */ [ASPEED_CLK_GATE_LCLK] = { 8, 5, "lclk-gate", NULL, 0 }, /* LPC */ @@ -212,9 +212,22 @@ static int aspeed_clk_is_enabled(struct clk_hw *hw) { struct aspeed_clk_gate *gate = to_aspeed_clk_gate(hw); u32 clk = BIT(gate->clock_idx); + u32 rst = BIT(gate->reset_idx); u32 enval = (gate->flags & CLK_GATE_SET_TO_DISABLE) ? 0 : clk; u32 reg; + /* + * If the IP is in reset, treat the clock as not enabled, + * this happens with some clocks such as the USB one when + * coming from cold reset. Without this, aspeed_clk_enable() + * will fail to lift the reset. + */ + if (gate->reset_idx >= 0) { + regmap_read(gate->map, ASPEED_RESET_CTRL, ®); + if (reg & rst) + return 0; + } + regmap_read(gate->map, ASPEED_CLK_STOP_CTRL, ®); return ((reg & clk) == enval) ? 
1 : 0; @@ -565,29 +578,45 @@ builtin_platform_driver(aspeed_clk_driver); static void __init aspeed_ast2400_cc(struct regmap *map) { struct clk_hw *hw; - u32 val, freq, div; + u32 val, div, clkin, hpll; + const u16 hpll_rates[][4] = { + {384, 360, 336, 408}, + {400, 375, 350, 425}, + }; + int rate; /* * CLKIN is the crystal oscillator, 24, 48 or 25MHz selected by * strapping */ regmap_read(map, ASPEED_STRAP, &val); - if (val & CLKIN_25MHZ_EN) - freq = 25000000; - else if (val & AST2400_CLK_SOURCE_SEL) - freq = 48000000; - else - freq = 24000000; - hw = clk_hw_register_fixed_rate(NULL, "clkin", NULL, 0, freq); - pr_debug("clkin @%u MHz\n", freq / 1000000); + rate = (val >> 8) & 3; + if (val & CLKIN_25MHZ_EN) { + clkin = 25000000; + hpll = hpll_rates[1][rate]; + } else if (val & AST2400_CLK_SOURCE_SEL) { + clkin = 48000000; + hpll = hpll_rates[0][rate]; + } else { + clkin = 24000000; + hpll = hpll_rates[0][rate]; + } + hw = clk_hw_register_fixed_rate(NULL, "clkin", NULL, 0, clkin); + pr_debug("clkin @%u MHz\n", clkin / 1000000); /* * High-speed PLL clock derived from the crystal. This is the CPU clock, - * and we assume that it is enabled + * and we assume that it is enabled. It can be configured through the + * HPLL_PARAM register, or set to a specified frequency by strapping. */ regmap_read(map, ASPEED_HPLL_PARAM, &val); - WARN(val & AST2400_HPLL_STRAPPED, "hpll is strapped not configured"); - aspeed_clk_data->hws[ASPEED_CLK_HPLL] = aspeed_ast2400_calc_pll("hpll", val); + if (val & AST2400_HPLL_PROGRAMMED) + hw = aspeed_ast2400_calc_pll("hpll", val); + else + hw = clk_hw_register_fixed_rate(NULL, "hpll", "clkin", 0, + hpll * 1000000); + + aspeed_clk_data->hws[ASPEED_CLK_HPLL] = hw; /* * Strap bits 11:10 define the CPU/AHB clock frequency ratio (aka HCLK) diff --git a/drivers/clk/clk.c b/drivers/clk/clk.c index a24a6afb50b6..e2ed078abd90 100644 --- a/drivers/clk/clk.c +++ b/drivers/clk/clk.c @@ -6,7 +6,7 @@ * it under the terms of the GNU General Public License version 2 as * published by the Free Software Foundation. * - * Standard functionality for the common clock API. See Documentation/clk.txt + * Standard functionality for the common clock API. See Documentation/driver-api/clk.rst */ #include @@ -24,7 +24,6 @@ #include #include #include -#include #include "clk.h" @@ -2559,7 +2558,7 @@ static const struct { unsigned long flag; const char *name; } clk_flags[] = { -#define ENTRY(f) { f, __stringify(f) } +#define ENTRY(f) { f, #f } ENTRY(CLK_SET_RATE_GATE), ENTRY(CLK_SET_PARENT_GATE), ENTRY(CLK_SET_RATE_PARENT), @@ -2747,7 +2746,7 @@ static int __clk_core_init(struct clk_core *core) goto out; } - /* check that clk_ops are sane. See Documentation/clk.txt */ + /* check that clk_ops are sane. See Documentation/driver-api/clk.rst */ if (core->ops->set_rate && !((core->ops->round_rate || core->ops->determine_rate) && core->ops->recalc_rate)) { diff --git a/drivers/clk/clkdev.c b/drivers/clk/clkdev.c index 7513411140b6..9ab3db8b3988 100644 --- a/drivers/clk/clkdev.c +++ b/drivers/clk/clkdev.c @@ -35,9 +35,6 @@ static struct clk *__of_clk_get(struct device_node *np, int index, struct clk *clk; int rc; - if (index < 0) - return ERR_PTR(-EINVAL); - rc = of_parse_phandle_with_args(np, "clocks", "#clock-cells", index, &clkspec); if (rc) @@ -199,7 +196,7 @@ struct clk *clk_get(struct device *dev, const char *con_id) const char *dev_id = dev ?
dev_name(dev) : NULL; struct clk *clk; - if (dev) { + if (dev && dev->of_node) { clk = __of_clk_get_by_name(dev->of_node, dev_id, con_id); if (!IS_ERR(clk) || PTR_ERR(clk) == -EPROBE_DEFER) return clk; diff --git a/drivers/clk/davinci/da8xx-cfgchip.c b/drivers/clk/davinci/da8xx-cfgchip.c index aae62a5b8734..d1bbee19ed0f 100644 --- a/drivers/clk/davinci/da8xx-cfgchip.c +++ b/drivers/clk/davinci/da8xx-cfgchip.c @@ -672,7 +672,7 @@ static int of_da8xx_usb_phy_clk_init(struct device *dev, struct regmap *regmap) usb1 = da8xx_cfgchip_register_usb1_clk48(dev, regmap); if (IS_ERR(usb1)) { - if (PTR_ERR(usb0) == -EPROBE_DEFER) + if (PTR_ERR(usb1) == -EPROBE_DEFER) return -EPROBE_DEFER; dev_warn(dev, "Failed to register usb1_clk48 (%ld)\n", diff --git a/drivers/clk/davinci/psc.h b/drivers/clk/davinci/psc.h index 6a42529d31a9..cc5614567a70 100644 --- a/drivers/clk/davinci/psc.h +++ b/drivers/clk/davinci/psc.h @@ -107,7 +107,7 @@ extern const struct davinci_psc_init_data of_da850_psc1_init_data; #ifdef CONFIG_ARCH_DAVINCI_DM355 extern const struct davinci_psc_init_data dm355_psc_init_data; #endif -#ifdef CONFIG_ARCH_DAVINCI_DM356 +#ifdef CONFIG_ARCH_DAVINCI_DM365 extern const struct davinci_psc_init_data dm365_psc_init_data; #endif #ifdef CONFIG_ARCH_DAVINCI_DM644x diff --git a/drivers/clk/ingenic/cgu.h b/drivers/clk/ingenic/cgu.h index 542192376ebf..502bcbb61b04 100644 --- a/drivers/clk/ingenic/cgu.h +++ b/drivers/clk/ingenic/cgu.h @@ -194,7 +194,7 @@ struct ingenic_cgu { /** * struct ingenic_clk - private data for a clock - * @hw: see Documentation/clk.txt + * @hw: see Documentation/driver-api/clk.rst * @cgu: a pointer to the CGU data * @idx: the index of this clock in cgu->clock_info */ diff --git a/drivers/clk/meson/clk-audio-divider.c b/drivers/clk/meson/clk-audio-divider.c index 58f546e04807..e4cf96ba704e 100644 --- a/drivers/clk/meson/clk-audio-divider.c +++ b/drivers/clk/meson/clk-audio-divider.c @@ -51,7 +51,7 @@ static unsigned long audio_divider_recalc_rate(struct clk_hw *hw, struct meson_clk_audio_div_data *adiv = meson_clk_audio_div_data(clk); unsigned long divider; - divider = meson_parm_read(clk->map, &adiv->div); + divider = meson_parm_read(clk->map, &adiv->div) + 1; return DIV_ROUND_UP_ULL((u64)parent_rate, divider); } diff --git a/drivers/clk/meson/gxbb.c b/drivers/clk/meson/gxbb.c index 240658404367..177fffb9ebef 100644 --- a/drivers/clk/meson/gxbb.c +++ b/drivers/clk/meson/gxbb.c @@ -498,6 +498,7 @@ static struct clk_regmap gxbb_fclk_div2 = { .ops = &clk_regmap_gate_ops, .parent_names = (const char *[]){ "fclk_div2_div" }, .num_parents = 1, + .flags = CLK_IS_CRITICAL, }, }; diff --git a/drivers/clk/mvebu/armada-37xx-periph.c b/drivers/clk/mvebu/armada-37xx-periph.c index 6860bd5a37c5..44e4e27eddad 100644 --- a/drivers/clk/mvebu/armada-37xx-periph.c +++ b/drivers/clk/mvebu/armada-37xx-periph.c @@ -35,6 +35,7 @@ #define CLK_SEL 0x10 #define CLK_DIS 0x14 +#define ARMADA_37XX_DVFS_LOAD_1 1 #define LOAD_LEVEL_NR 4 #define ARMADA_37XX_NB_L0L1 0x18 @@ -507,6 +508,40 @@ static long clk_pm_cpu_round_rate(struct clk_hw *hw, unsigned long rate, return -EINVAL; } +/* + * Switching the CPU from the L2 or L3 frequencies (300 and 200 Mhz + * respectively) to L0 frequency (1.2 Ghz) requires a significant + * amount of time to let VDD stabilize to the appropriate + * voltage. This amount of time is large enough that it cannot be + * covered by the hardware countdown register. Due to this, the CPU + * might start operating at L0 before the voltage is stabilized, + * leading to CPU stalls. 
+ * + * To work around this problem, we prevent switching directly from the + * L2/L3 frequencies to the L0 frequency, and instead switch to the L1 + * frequency in-between. The sequence therefore becomes: + * 1. First switch from L2/L3 (200/300MHz) to L1 (600MHz) + * 2. Sleep 20ms to let the VDD voltage stabilize + * 3. Then switch from L1 (600MHz) to L0 (1200MHz). + */ +static void clk_pm_cpu_set_rate_wa(unsigned long rate, struct regmap *base) +{ + unsigned int cur_level; + + if (rate != 1200 * 1000 * 1000) + return; + + regmap_read(base, ARMADA_37XX_NB_CPU_LOAD, &cur_level); + cur_level &= ARMADA_37XX_NB_CPU_LOAD_MASK; + if (cur_level <= ARMADA_37XX_DVFS_LOAD_1) + return; + + regmap_update_bits(base, ARMADA_37XX_NB_CPU_LOAD, + ARMADA_37XX_NB_CPU_LOAD_MASK, + ARMADA_37XX_DVFS_LOAD_1); + msleep(20); +} + static int clk_pm_cpu_set_rate(struct clk_hw *hw, unsigned long rate, unsigned long parent_rate) { @@ -537,6 +572,9 @@ static int clk_pm_cpu_set_rate(struct clk_hw *hw, unsigned long rate, */ reg = ARMADA_37XX_NB_CPU_LOAD; mask = ARMADA_37XX_NB_CPU_LOAD_MASK; + + clk_pm_cpu_set_rate_wa(rate, base); + regmap_update_bits(base, reg, mask, load_level); return rate; diff --git a/drivers/clk/qcom/gcc-msm8996.c b/drivers/clk/qcom/gcc-msm8996.c index 9f35b3fe1d97..ff8d66fd94e6 100644 --- a/drivers/clk/qcom/gcc-msm8996.c +++ b/drivers/clk/qcom/gcc-msm8996.c @@ -2781,6 +2781,7 @@ static struct clk_branch gcc_ufs_rx_cfg_clk = { static struct clk_branch gcc_ufs_tx_symbol_0_clk = { .halt_reg = 0x75018, + .halt_check = BRANCH_HALT_SKIP, .clkr = { .enable_reg = 0x75018, .enable_mask = BIT(0), diff --git a/drivers/clk/qcom/mmcc-msm8996.c b/drivers/clk/qcom/mmcc-msm8996.c index 1a25ee4f3658..4b20d1b67a1b 100644 --- a/drivers/clk/qcom/mmcc-msm8996.c +++ b/drivers/clk/qcom/mmcc-msm8996.c @@ -2910,6 +2910,7 @@ static struct gdsc mmagic_bimc_gdsc = { .name = "mmagic_bimc", }, .pwrsts = PWRSTS_OFF_ON, + .flags = ALWAYS_ON, }; static struct gdsc mmagic_video_gdsc = { diff --git a/drivers/clk/sunxi-ng/Makefile b/drivers/clk/sunxi-ng/Makefile index acaa14cfa25c..49454700f2e5 100644 --- a/drivers/clk/sunxi-ng/Makefile +++ b/drivers/clk/sunxi-ng/Makefile @@ -1,24 +1,24 @@ # SPDX-License-Identifier: GPL-2.0 # Common objects -lib-$(CONFIG_SUNXI_CCU) += ccu_common.o -lib-$(CONFIG_SUNXI_CCU) += ccu_mmc_timing.o -lib-$(CONFIG_SUNXI_CCU) += ccu_reset.o +obj-y += ccu_common.o +obj-y += ccu_mmc_timing.o +obj-y += ccu_reset.o # Base clock types -lib-$(CONFIG_SUNXI_CCU) += ccu_div.o -lib-$(CONFIG_SUNXI_CCU) += ccu_frac.o -lib-$(CONFIG_SUNXI_CCU) += ccu_gate.o -lib-$(CONFIG_SUNXI_CCU) += ccu_mux.o -lib-$(CONFIG_SUNXI_CCU) += ccu_mult.o -lib-$(CONFIG_SUNXI_CCU) += ccu_phase.o -lib-$(CONFIG_SUNXI_CCU) += ccu_sdm.o +obj-y += ccu_div.o +obj-y += ccu_frac.o +obj-y += ccu_gate.o +obj-y += ccu_mux.o +obj-y += ccu_mult.o +obj-y += ccu_phase.o +obj-y += ccu_sdm.o # Multi-factor clocks -lib-$(CONFIG_SUNXI_CCU) += ccu_nk.o -lib-$(CONFIG_SUNXI_CCU) += ccu_nkm.o -lib-$(CONFIG_SUNXI_CCU) += ccu_nkmp.o -lib-$(CONFIG_SUNXI_CCU) += ccu_nm.o -lib-$(CONFIG_SUNXI_CCU) += ccu_mp.o +obj-y += ccu_nk.o +obj-y += ccu_nkm.o +obj-y += ccu_nkmp.o +obj-y += ccu_nm.o +obj-y += ccu_mp.o # SoC support obj-$(CONFIG_SUN50I_A64_CCU) += ccu-sun50i-a64.o @@ -38,12 +38,3 @@ obj-$(CONFIG_SUN8I_R40_CCU) += ccu-sun8i-r40.o obj-$(CONFIG_SUN9I_A80_CCU) += ccu-sun9i-a80.o obj-$(CONFIG_SUN9I_A80_CCU) += ccu-sun9i-a80-de.o obj-$(CONFIG_SUN9I_A80_CCU) += ccu-sun9i-a80-usb.o - -# The lib-y file goal is supposed to work only in arch/*/lib or lib/.
In our -# case, we want to use that goal, but even though lib.a will be properly -# generated, it will not be linked in, eventually resulting in a linker error -# for missing symbols. -# -# We can work around that by explicitly adding lib.a to the obj-y goal. This is -# an undocumented behaviour, but works well for now. -obj-$(CONFIG_SUNXI_CCU) += lib.a diff --git a/drivers/clocksource/Makefile b/drivers/clocksource/Makefile index 00caf37e52f9..c070cc7992e9 100644 --- a/drivers/clocksource/Makefile +++ b/drivers/clocksource/Makefile @@ -49,7 +49,7 @@ obj-$(CONFIG_CLKSRC_SAMSUNG_PWM) += samsung_pwm_timer.o obj-$(CONFIG_FSL_FTM_TIMER) += fsl_ftm_timer.o obj-$(CONFIG_VF_PIT_TIMER) += vf_pit_timer.o obj-$(CONFIG_CLKSRC_QCOM) += qcom-timer.o -obj-$(CONFIG_MTK_TIMER) += mtk_timer.o +obj-$(CONFIG_MTK_TIMER) += timer-mediatek.o obj-$(CONFIG_CLKSRC_PISTACHIO) += time-pistachio.o obj-$(CONFIG_CLKSRC_TI_32K) += timer-ti-32k.o obj-$(CONFIG_CLKSRC_NPS) += timer-nps.o diff --git a/drivers/clocksource/arm_arch_timer.c b/drivers/clocksource/arm_arch_timer.c index 57cb2f00fc07..d8c7f5750cdb 100644 --- a/drivers/clocksource/arm_arch_timer.c +++ b/drivers/clocksource/arm_arch_timer.c @@ -735,7 +735,7 @@ static void __arch_timer_setup(unsigned type, clk->features |= CLOCK_EVT_FEAT_DYNIRQ; clk->name = "arch_mem_timer"; clk->rating = 400; - clk->cpumask = cpu_all_mask; + clk->cpumask = cpu_possible_mask; if (arch_timer_mem_use_virtual) { clk->set_state_shutdown = arch_timer_shutdown_virt_mem; clk->set_state_oneshot_stopped = arch_timer_shutdown_virt_mem; diff --git a/drivers/clocksource/mtk_timer.c b/drivers/clocksource/mtk_timer.c deleted file mode 100644 index f9b724fd9950..000000000000 --- a/drivers/clocksource/mtk_timer.c +++ /dev/null @@ -1,268 +0,0 @@ -/* - * Mediatek SoCs General-Purpose Timer handling. - * - * Copyright (C) 2014 Matthias Brugger - * - * Matthias Brugger - * - * This program is free software; you can redistribute it and/or modify - * it under the terms of the GNU General Public License as published by - * the Free Software Foundation; either version 2 of the License, or - * (at your option) any later version. - * - * This program is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the - * GNU General Public License for more details. 
- */ - -#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt - -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include - -#define GPT_IRQ_EN_REG 0x00 -#define GPT_IRQ_ENABLE(val) BIT((val) - 1) -#define GPT_IRQ_ACK_REG 0x08 -#define GPT_IRQ_ACK(val) BIT((val) - 1) - -#define TIMER_CTRL_REG(val) (0x10 * (val)) -#define TIMER_CTRL_OP(val) (((val) & 0x3) << 4) -#define TIMER_CTRL_OP_ONESHOT (0) -#define TIMER_CTRL_OP_REPEAT (1) -#define TIMER_CTRL_OP_FREERUN (3) -#define TIMER_CTRL_CLEAR (2) -#define TIMER_CTRL_ENABLE (1) -#define TIMER_CTRL_DISABLE (0) - -#define TIMER_CLK_REG(val) (0x04 + (0x10 * (val))) -#define TIMER_CLK_SRC(val) (((val) & 0x1) << 4) -#define TIMER_CLK_SRC_SYS13M (0) -#define TIMER_CLK_SRC_RTC32K (1) -#define TIMER_CLK_DIV1 (0x0) -#define TIMER_CLK_DIV2 (0x1) - -#define TIMER_CNT_REG(val) (0x08 + (0x10 * (val))) -#define TIMER_CMP_REG(val) (0x0C + (0x10 * (val))) - -#define GPT_CLK_EVT 1 -#define GPT_CLK_SRC 2 - -struct mtk_clock_event_device { - void __iomem *gpt_base; - u32 ticks_per_jiffy; - struct clock_event_device dev; -}; - -static void __iomem *gpt_sched_reg __read_mostly; - -static u64 notrace mtk_read_sched_clock(void) -{ - return readl_relaxed(gpt_sched_reg); -} - -static inline struct mtk_clock_event_device *to_mtk_clk( - struct clock_event_device *c) -{ - return container_of(c, struct mtk_clock_event_device, dev); -} - -static void mtk_clkevt_time_stop(struct mtk_clock_event_device *evt, u8 timer) -{ - u32 val; - - val = readl(evt->gpt_base + TIMER_CTRL_REG(timer)); - writel(val & ~TIMER_CTRL_ENABLE, evt->gpt_base + - TIMER_CTRL_REG(timer)); -} - -static void mtk_clkevt_time_setup(struct mtk_clock_event_device *evt, - unsigned long delay, u8 timer) -{ - writel(delay, evt->gpt_base + TIMER_CMP_REG(timer)); -} - -static void mtk_clkevt_time_start(struct mtk_clock_event_device *evt, - bool periodic, u8 timer) -{ - u32 val; - - /* Acknowledge interrupt */ - writel(GPT_IRQ_ACK(timer), evt->gpt_base + GPT_IRQ_ACK_REG); - - val = readl(evt->gpt_base + TIMER_CTRL_REG(timer)); - - /* Clear 2 bit timer operation mode field */ - val &= ~TIMER_CTRL_OP(0x3); - - if (periodic) - val |= TIMER_CTRL_OP(TIMER_CTRL_OP_REPEAT); - else - val |= TIMER_CTRL_OP(TIMER_CTRL_OP_ONESHOT); - - writel(val | TIMER_CTRL_ENABLE | TIMER_CTRL_CLEAR, - evt->gpt_base + TIMER_CTRL_REG(timer)); -} - -static int mtk_clkevt_shutdown(struct clock_event_device *clk) -{ - mtk_clkevt_time_stop(to_mtk_clk(clk), GPT_CLK_EVT); - return 0; -} - -static int mtk_clkevt_set_periodic(struct clock_event_device *clk) -{ - struct mtk_clock_event_device *evt = to_mtk_clk(clk); - - mtk_clkevt_time_stop(evt, GPT_CLK_EVT); - mtk_clkevt_time_setup(evt, evt->ticks_per_jiffy, GPT_CLK_EVT); - mtk_clkevt_time_start(evt, true, GPT_CLK_EVT); - return 0; -} - -static int mtk_clkevt_next_event(unsigned long event, - struct clock_event_device *clk) -{ - struct mtk_clock_event_device *evt = to_mtk_clk(clk); - - mtk_clkevt_time_stop(evt, GPT_CLK_EVT); - mtk_clkevt_time_setup(evt, event, GPT_CLK_EVT); - mtk_clkevt_time_start(evt, false, GPT_CLK_EVT); - - return 0; -} - -static irqreturn_t mtk_timer_interrupt(int irq, void *dev_id) -{ - struct mtk_clock_event_device *evt = dev_id; - - /* Acknowledge timer0 irq */ - writel(GPT_IRQ_ACK(GPT_CLK_EVT), evt->gpt_base + GPT_IRQ_ACK_REG); - evt->dev.event_handler(&evt->dev); - - return IRQ_HANDLED; -} - -static void -__init mtk_timer_setup(struct mtk_clock_event_device *evt, u8 timer, u8 option) -{ - writel(TIMER_CTRL_CLEAR | TIMER_CTRL_DISABLE, - 
evt->gpt_base + TIMER_CTRL_REG(timer)); - - writel(TIMER_CLK_SRC(TIMER_CLK_SRC_SYS13M) | TIMER_CLK_DIV1, - evt->gpt_base + TIMER_CLK_REG(timer)); - - writel(0x0, evt->gpt_base + TIMER_CMP_REG(timer)); - - writel(TIMER_CTRL_OP(option) | TIMER_CTRL_ENABLE, - evt->gpt_base + TIMER_CTRL_REG(timer)); -} - -static void mtk_timer_enable_irq(struct mtk_clock_event_device *evt, u8 timer) -{ - u32 val; - - /* Disable all interrupts */ - writel(0x0, evt->gpt_base + GPT_IRQ_EN_REG); - - /* Acknowledge all spurious pending interrupts */ - writel(0x3f, evt->gpt_base + GPT_IRQ_ACK_REG); - - val = readl(evt->gpt_base + GPT_IRQ_EN_REG); - writel(val | GPT_IRQ_ENABLE(timer), - evt->gpt_base + GPT_IRQ_EN_REG); -} - -static int __init mtk_timer_init(struct device_node *node) -{ - struct mtk_clock_event_device *evt; - struct resource res; - unsigned long rate = 0; - struct clk *clk; - - evt = kzalloc(sizeof(*evt), GFP_KERNEL); - if (!evt) - return -ENOMEM; - - evt->dev.name = "mtk_tick"; - evt->dev.rating = 300; - evt->dev.features = CLOCK_EVT_FEAT_PERIODIC | CLOCK_EVT_FEAT_ONESHOT; - evt->dev.set_state_shutdown = mtk_clkevt_shutdown; - evt->dev.set_state_periodic = mtk_clkevt_set_periodic; - evt->dev.set_state_oneshot = mtk_clkevt_shutdown; - evt->dev.tick_resume = mtk_clkevt_shutdown; - evt->dev.set_next_event = mtk_clkevt_next_event; - evt->dev.cpumask = cpu_possible_mask; - - evt->gpt_base = of_io_request_and_map(node, 0, "mtk-timer"); - if (IS_ERR(evt->gpt_base)) { - pr_err("Can't get resource\n"); - goto err_kzalloc; - } - - evt->dev.irq = irq_of_parse_and_map(node, 0); - if (evt->dev.irq <= 0) { - pr_err("Can't parse IRQ\n"); - goto err_mem; - } - - clk = of_clk_get(node, 0); - if (IS_ERR(clk)) { - pr_err("Can't get timer clock\n"); - goto err_irq; - } - - if (clk_prepare_enable(clk)) { - pr_err("Can't prepare clock\n"); - goto err_clk_put; - } - rate = clk_get_rate(clk); - - if (request_irq(evt->dev.irq, mtk_timer_interrupt, - IRQF_TIMER | IRQF_IRQPOLL, "mtk_timer", evt)) { - pr_err("failed to setup irq %d\n", evt->dev.irq); - goto err_clk_disable; - } - - evt->ticks_per_jiffy = DIV_ROUND_UP(rate, HZ); - - /* Configure clock source */ - mtk_timer_setup(evt, GPT_CLK_SRC, TIMER_CTRL_OP_FREERUN); - clocksource_mmio_init(evt->gpt_base + TIMER_CNT_REG(GPT_CLK_SRC), - node->name, rate, 300, 32, clocksource_mmio_readl_up); - gpt_sched_reg = evt->gpt_base + TIMER_CNT_REG(GPT_CLK_SRC); - sched_clock_register(mtk_read_sched_clock, 32, rate); - - /* Configure clock event */ - mtk_timer_setup(evt, GPT_CLK_EVT, TIMER_CTRL_OP_REPEAT); - clockevents_config_and_register(&evt->dev, rate, 0x3, - 0xffffffff); - - mtk_timer_enable_irq(evt, GPT_CLK_EVT); - - return 0; - -err_clk_disable: - clk_disable_unprepare(clk); -err_clk_put: - clk_put(clk); -err_irq: - irq_dispose_mapping(evt->dev.irq); -err_mem: - iounmap(evt->gpt_base); - of_address_to_resource(node, 0, &res); - release_mem_region(res.start, resource_size(&res)); -err_kzalloc: - kfree(evt); - - return -EINVAL; -} -TIMER_OF_DECLARE(mtk_mt6577, "mediatek,mt6577-timer", mtk_timer_init); diff --git a/drivers/clocksource/tegra20_timer.c b/drivers/clocksource/tegra20_timer.c index c337a8100a7b..aa624885e0e2 100644 --- a/drivers/clocksource/tegra20_timer.c +++ b/drivers/clocksource/tegra20_timer.c @@ -230,7 +230,7 @@ static int __init tegra20_init_timer(struct device_node *np) return ret; } - tegra_clockevent.cpumask = cpu_all_mask; + tegra_clockevent.cpumask = cpu_possible_mask; tegra_clockevent.irq = tegra_timer_irq.irq; 
clockevents_config_and_register(&tegra_clockevent, 1000000, 0x1, 0x1fffffff); @@ -259,6 +259,6 @@ static int __init tegra20_init_rtc(struct device_node *np) else clk_prepare_enable(clk); - return register_persistent_clock(NULL, tegra_read_persistent_clock64); + return register_persistent_clock(tegra_read_persistent_clock64); } TIMER_OF_DECLARE(tegra20_rtc, "nvidia,tegra20-rtc", tegra20_init_rtc); diff --git a/drivers/clocksource/timer-atcpit100.c b/drivers/clocksource/timer-atcpit100.c index 5e23d7b4a722..b4bd2f5b801d 100644 --- a/drivers/clocksource/timer-atcpit100.c +++ b/drivers/clocksource/timer-atcpit100.c @@ -185,7 +185,7 @@ static struct timer_of to = { .set_state_oneshot = atcpit100_clkevt_set_oneshot, .tick_resume = atcpit100_clkevt_shutdown, .set_next_event = atcpit100_clkevt_next_event, - .cpumask = cpu_all_mask, + .cpumask = cpu_possible_mask, }, .of_irq = { diff --git a/drivers/clocksource/timer-keystone.c b/drivers/clocksource/timer-keystone.c index 0eee03250cfc..f5b2eda30bf3 100644 --- a/drivers/clocksource/timer-keystone.c +++ b/drivers/clocksource/timer-keystone.c @@ -211,7 +211,7 @@ static int __init keystone_timer_init(struct device_node *np) event_dev->set_state_shutdown = keystone_shutdown; event_dev->set_state_periodic = keystone_set_periodic; event_dev->set_state_oneshot = keystone_shutdown; - event_dev->cpumask = cpu_all_mask; + event_dev->cpumask = cpu_possible_mask; event_dev->owner = THIS_MODULE; event_dev->name = TIMER_NAME; event_dev->irq = irq; diff --git a/drivers/clocksource/timer-mediatek.c b/drivers/clocksource/timer-mediatek.c new file mode 100644 index 000000000000..eb10321f8517 --- /dev/null +++ b/drivers/clocksource/timer-mediatek.c @@ -0,0 +1,328 @@ +/* + * Mediatek SoCs General-Purpose Timer handling. + * + * Copyright (C) 2014 Matthias Brugger + * + * Matthias Brugger + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License as published by + * the Free Software Foundation; either version 2 of the License, or + * (at your option) any later version. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. 
+ */ + +#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt + +#include +#include +#include +#include +#include +#include +#include "timer-of.h" + +#define TIMER_CLK_EVT (1) +#define TIMER_CLK_SRC (2) + +#define TIMER_SYNC_TICKS (3) + +/* gpt */ +#define GPT_IRQ_EN_REG 0x00 +#define GPT_IRQ_ENABLE(val) BIT((val) - 1) +#define GPT_IRQ_ACK_REG 0x08 +#define GPT_IRQ_ACK(val) BIT((val) - 1) + +#define GPT_CTRL_REG(val) (0x10 * (val)) +#define GPT_CTRL_OP(val) (((val) & 0x3) << 4) +#define GPT_CTRL_OP_ONESHOT (0) +#define GPT_CTRL_OP_REPEAT (1) +#define GPT_CTRL_OP_FREERUN (3) +#define GPT_CTRL_CLEAR (2) +#define GPT_CTRL_ENABLE (1) +#define GPT_CTRL_DISABLE (0) + +#define GPT_CLK_REG(val) (0x04 + (0x10 * (val))) +#define GPT_CLK_SRC(val) (((val) & 0x1) << 4) +#define GPT_CLK_SRC_SYS13M (0) +#define GPT_CLK_SRC_RTC32K (1) +#define GPT_CLK_DIV1 (0x0) +#define GPT_CLK_DIV2 (0x1) + +#define GPT_CNT_REG(val) (0x08 + (0x10 * (val))) +#define GPT_CMP_REG(val) (0x0C + (0x10 * (val))) + +/* system timer */ +#define SYST_BASE (0x40) + +#define SYST_CON (SYST_BASE + 0x0) +#define SYST_VAL (SYST_BASE + 0x4) + +#define SYST_CON_REG(to) (timer_of_base(to) + SYST_CON) +#define SYST_VAL_REG(to) (timer_of_base(to) + SYST_VAL) + +/* + * SYST_CON_EN: Clock enable. Shall be set to + * - Start timer countdown. + * - Allow timeout ticks being updated. + * - Allow changing interrupt functions. + * + * SYST_CON_IRQ_EN: Set to allow interrupt. + * + * SYST_CON_IRQ_CLR: Set to clear interrupt. + */ +#define SYST_CON_EN BIT(0) +#define SYST_CON_IRQ_EN BIT(1) +#define SYST_CON_IRQ_CLR BIT(4) + +static void __iomem *gpt_sched_reg __read_mostly; + +static void mtk_syst_ack_irq(struct timer_of *to) +{ + /* Clear and disable interrupt */ + writel(SYST_CON_IRQ_CLR | SYST_CON_EN, SYST_CON_REG(to)); +} + +static irqreturn_t mtk_syst_handler(int irq, void *dev_id) +{ + struct clock_event_device *clkevt = dev_id; + struct timer_of *to = to_timer_of(clkevt); + + mtk_syst_ack_irq(to); + clkevt->event_handler(clkevt); + + return IRQ_HANDLED; +} + +static int mtk_syst_clkevt_next_event(unsigned long ticks, + struct clock_event_device *clkevt) +{ + struct timer_of *to = to_timer_of(clkevt); + + /* Enable clock to allow timeout tick update later */ + writel(SYST_CON_EN, SYST_CON_REG(to)); + + /* + * Write new timeout ticks. Timer shall start countdown + * after timeout ticks are updated. 
+ */ + writel(ticks, SYST_VAL_REG(to)); + + /* Enable interrupt */ + writel(SYST_CON_EN | SYST_CON_IRQ_EN, SYST_CON_REG(to)); + + return 0; +} + +static int mtk_syst_clkevt_shutdown(struct clock_event_device *clkevt) +{ + /* Disable timer */ + writel(0, SYST_CON_REG(to_timer_of(clkevt))); + + return 0; +} + +static int mtk_syst_clkevt_resume(struct clock_event_device *clkevt) +{ + return mtk_syst_clkevt_shutdown(clkevt); +} + +static int mtk_syst_clkevt_oneshot(struct clock_event_device *clkevt) +{ + return 0; +} + +static u64 notrace mtk_gpt_read_sched_clock(void) +{ + return readl_relaxed(gpt_sched_reg); +} + +static void mtk_gpt_clkevt_time_stop(struct timer_of *to, u8 timer) +{ + u32 val; + + val = readl(timer_of_base(to) + GPT_CTRL_REG(timer)); + writel(val & ~GPT_CTRL_ENABLE, timer_of_base(to) + + GPT_CTRL_REG(timer)); +} + +static void mtk_gpt_clkevt_time_setup(struct timer_of *to, + unsigned long delay, u8 timer) +{ + writel(delay, timer_of_base(to) + GPT_CMP_REG(timer)); +} + +static void mtk_gpt_clkevt_time_start(struct timer_of *to, + bool periodic, u8 timer) +{ + u32 val; + + /* Acknowledge interrupt */ + writel(GPT_IRQ_ACK(timer), timer_of_base(to) + GPT_IRQ_ACK_REG); + + val = readl(timer_of_base(to) + GPT_CTRL_REG(timer)); + + /* Clear 2 bit timer operation mode field */ + val &= ~GPT_CTRL_OP(0x3); + + if (periodic) + val |= GPT_CTRL_OP(GPT_CTRL_OP_REPEAT); + else + val |= GPT_CTRL_OP(GPT_CTRL_OP_ONESHOT); + + writel(val | GPT_CTRL_ENABLE | GPT_CTRL_CLEAR, + timer_of_base(to) + GPT_CTRL_REG(timer)); +} + +static int mtk_gpt_clkevt_shutdown(struct clock_event_device *clk) +{ + mtk_gpt_clkevt_time_stop(to_timer_of(clk), TIMER_CLK_EVT); + + return 0; +} + +static int mtk_gpt_clkevt_set_periodic(struct clock_event_device *clk) +{ + struct timer_of *to = to_timer_of(clk); + + mtk_gpt_clkevt_time_stop(to, TIMER_CLK_EVT); + mtk_gpt_clkevt_time_setup(to, to->of_clk.period, TIMER_CLK_EVT); + mtk_gpt_clkevt_time_start(to, true, TIMER_CLK_EVT); + + return 0; +} + +static int mtk_gpt_clkevt_next_event(unsigned long event, + struct clock_event_device *clk) +{ + struct timer_of *to = to_timer_of(clk); + + mtk_gpt_clkevt_time_stop(to, TIMER_CLK_EVT); + mtk_gpt_clkevt_time_setup(to, event, TIMER_CLK_EVT); + mtk_gpt_clkevt_time_start(to, false, TIMER_CLK_EVT); + + return 0; +} + +static irqreturn_t mtk_gpt_interrupt(int irq, void *dev_id) +{ + struct clock_event_device *clkevt = (struct clock_event_device *)dev_id; + struct timer_of *to = to_timer_of(clkevt); + + /* Acknowledge timer0 irq */ + writel(GPT_IRQ_ACK(TIMER_CLK_EVT), timer_of_base(to) + GPT_IRQ_ACK_REG); + clkevt->event_handler(clkevt); + + return IRQ_HANDLED; +} + +static void +__init mtk_gpt_setup(struct timer_of *to, u8 timer, u8 option) +{ + writel(GPT_CTRL_CLEAR | GPT_CTRL_DISABLE, + timer_of_base(to) + GPT_CTRL_REG(timer)); + + writel(GPT_CLK_SRC(GPT_CLK_SRC_SYS13M) | GPT_CLK_DIV1, + timer_of_base(to) + GPT_CLK_REG(timer)); + + writel(0x0, timer_of_base(to) + GPT_CMP_REG(timer)); + + writel(GPT_CTRL_OP(option) | GPT_CTRL_ENABLE, + timer_of_base(to) + GPT_CTRL_REG(timer)); +} + +static void mtk_gpt_enable_irq(struct timer_of *to, u8 timer) +{ + u32 val; + + /* Disable all interrupts */ + writel(0x0, timer_of_base(to) + GPT_IRQ_EN_REG); + + /* Acknowledge all spurious pending interrupts */ + writel(0x3f, timer_of_base(to) + GPT_IRQ_ACK_REG); + + val = readl(timer_of_base(to) + GPT_IRQ_EN_REG); + writel(val | GPT_IRQ_ENABLE(timer), + timer_of_base(to) + GPT_IRQ_EN_REG); +} + +static struct timer_of to = { + .flags = 
TIMER_OF_IRQ | TIMER_OF_BASE | TIMER_OF_CLOCK, + + .clkevt = { + .name = "mtk-clkevt", + .rating = 300, + .cpumask = cpu_possible_mask, + }, + + .of_irq = { + .flags = IRQF_TIMER | IRQF_IRQPOLL, + }, +}; + +static int __init mtk_syst_init(struct device_node *node) +{ + int ret; + + to.clkevt.features = CLOCK_EVT_FEAT_DYNIRQ | CLOCK_EVT_FEAT_ONESHOT; + to.clkevt.set_state_shutdown = mtk_syst_clkevt_shutdown; + to.clkevt.set_state_oneshot = mtk_syst_clkevt_oneshot; + to.clkevt.tick_resume = mtk_syst_clkevt_resume; + to.clkevt.set_next_event = mtk_syst_clkevt_next_event; + to.of_irq.handler = mtk_syst_handler; + + ret = timer_of_init(node, &to); + if (ret) + goto err; + + clockevents_config_and_register(&to.clkevt, timer_of_rate(&to), + TIMER_SYNC_TICKS, 0xffffffff); + + return 0; +err: + timer_of_cleanup(&to); + return ret; +} + +static int __init mtk_gpt_init(struct device_node *node) +{ + int ret; + + to.clkevt.features = CLOCK_EVT_FEAT_PERIODIC | CLOCK_EVT_FEAT_ONESHOT; + to.clkevt.set_state_shutdown = mtk_gpt_clkevt_shutdown; + to.clkevt.set_state_periodic = mtk_gpt_clkevt_set_periodic; + to.clkevt.set_state_oneshot = mtk_gpt_clkevt_shutdown; + to.clkevt.tick_resume = mtk_gpt_clkevt_shutdown; + to.clkevt.set_next_event = mtk_gpt_clkevt_next_event; + to.of_irq.handler = mtk_gpt_interrupt; + + ret = timer_of_init(node, &to); + if (ret) + goto err; + + /* Configure clock source */ + mtk_gpt_setup(&to, TIMER_CLK_SRC, GPT_CTRL_OP_FREERUN); + clocksource_mmio_init(timer_of_base(&to) + GPT_CNT_REG(TIMER_CLK_SRC), + node->name, timer_of_rate(&to), 300, 32, + clocksource_mmio_readl_up); + gpt_sched_reg = timer_of_base(&to) + GPT_CNT_REG(TIMER_CLK_SRC); + sched_clock_register(mtk_gpt_read_sched_clock, 32, timer_of_rate(&to)); + + /* Configure clock event */ + mtk_gpt_setup(&to, TIMER_CLK_EVT, GPT_CTRL_OP_REPEAT); + clockevents_config_and_register(&to.clkevt, timer_of_rate(&to), + TIMER_SYNC_TICKS, 0xffffffff); + + mtk_gpt_enable_irq(&to, TIMER_CLK_EVT); + + return 0; +err: + timer_of_cleanup(&to); + return ret; +} +TIMER_OF_DECLARE(mtk_mt6577, "mediatek,mt6577-timer", mtk_gpt_init); +TIMER_OF_DECLARE(mtk_mt6765, "mediatek,mt6765-timer", mtk_syst_init); diff --git a/drivers/clocksource/timer-sprd.c b/drivers/clocksource/timer-sprd.c index ef9ebeafb3ed..430cb99d8d79 100644 --- a/drivers/clocksource/timer-sprd.c +++ b/drivers/clocksource/timer-sprd.c @@ -156,4 +156,54 @@ static int __init sprd_timer_init(struct device_node *np) return 0; } +static struct timer_of suspend_to = { + .flags = TIMER_OF_BASE | TIMER_OF_CLOCK, +}; + +static u64 sprd_suspend_timer_read(struct clocksource *cs) +{ + return ~(u64)readl_relaxed(timer_of_base(&suspend_to) + + TIMER_VALUE_SHDW_LO) & cs->mask; +} + +static int sprd_suspend_timer_enable(struct clocksource *cs) +{ + sprd_timer_update_counter(timer_of_base(&suspend_to), + TIMER_VALUE_LO_MASK); + sprd_timer_enable(timer_of_base(&suspend_to), TIMER_CTL_PERIOD_MODE); + + return 0; +} + +static void sprd_suspend_timer_disable(struct clocksource *cs) +{ + sprd_timer_disable(timer_of_base(&suspend_to)); +} + +static struct clocksource suspend_clocksource = { + .name = "sprd_suspend_timer", + .rating = 200, + .read = sprd_suspend_timer_read, + .enable = sprd_suspend_timer_enable, + .disable = sprd_suspend_timer_disable, + .mask = CLOCKSOURCE_MASK(32), + .flags = CLOCK_SOURCE_IS_CONTINUOUS | CLOCK_SOURCE_SUSPEND_NONSTOP, +}; + +static int __init sprd_suspend_timer_init(struct device_node *np) +{ + int ret; + + ret = timer_of_init(np, &suspend_to); + if (ret) + return ret; + + 
clocksource_register_hz(&suspend_clocksource, + timer_of_rate(&suspend_to)); + + return 0; +} + TIMER_OF_DECLARE(sc9860_timer, "sprd,sc9860-timer", sprd_timer_init); +TIMER_OF_DECLARE(sc9860_persistent_timer, "sprd,sc9860-suspend-timer", + sprd_suspend_timer_init); diff --git a/drivers/clocksource/timer-stm32.c b/drivers/clocksource/timer-stm32.c index e5cdc3af684c..2717f88c7904 100644 --- a/drivers/clocksource/timer-stm32.c +++ b/drivers/clocksource/timer-stm32.c @@ -304,8 +304,10 @@ static int __init stm32_timer_init(struct device_node *node) to->private_data = kzalloc(sizeof(struct stm32_timer_private), GFP_KERNEL); - if (!to->private_data) + if (!to->private_data) { + ret = -ENOMEM; goto deinit; + } rstc = of_reset_control_get(node, NULL); if (!IS_ERR(rstc)) { diff --git a/drivers/clocksource/timer-ti-32k.c b/drivers/clocksource/timer-ti-32k.c index 880a861ab3c8..29e2e1a78a43 100644 --- a/drivers/clocksource/timer-ti-32k.c +++ b/drivers/clocksource/timer-ti-32k.c @@ -78,8 +78,7 @@ static struct ti_32k ti_32k_timer = { .rating = 250, .read = ti_32k_read_cycles, .mask = CLOCKSOURCE_MASK(32), - .flags = CLOCK_SOURCE_IS_CONTINUOUS | - CLOCK_SOURCE_SUSPEND_NONSTOP, + .flags = CLOCK_SOURCE_IS_CONTINUOUS, }, }; diff --git a/drivers/clocksource/zevio-timer.c b/drivers/clocksource/zevio-timer.c index a6a0338eea77..f74689334f7c 100644 --- a/drivers/clocksource/zevio-timer.c +++ b/drivers/clocksource/zevio-timer.c @@ -162,7 +162,7 @@ static int __init zevio_timer_add(struct device_node *node) timer->clkevt.set_state_oneshot = zevio_timer_set_oneshot; timer->clkevt.tick_resume = zevio_timer_set_oneshot; timer->clkevt.rating = 200; - timer->clkevt.cpumask = cpu_all_mask; + timer->clkevt.cpumask = cpu_possible_mask; timer->clkevt.features = CLOCK_EVT_FEAT_ONESHOT; timer->clkevt.irq = irqnr; diff --git a/drivers/cpufreq/intel_pstate.c b/drivers/cpufreq/intel_pstate.c index 1de5ec8d5ea3..d4ed0022b0dd 100644 --- a/drivers/cpufreq/intel_pstate.c +++ b/drivers/cpufreq/intel_pstate.c @@ -294,6 +294,7 @@ struct pstate_funcs { static struct pstate_funcs pstate_funcs __read_mostly; static int hwp_active __read_mostly; +static int hwp_mode_bdw __read_mostly; static bool per_cpu_limits __read_mostly; static bool hwp_boost __read_mostly; @@ -310,12 +311,20 @@ static DEFINE_MUTEX(intel_pstate_limits_lock); #ifdef CONFIG_ACPI -static bool intel_pstate_get_ppc_enable_status(void) +static bool intel_pstate_acpi_pm_profile_server(void) { if (acpi_gbl_FADT.preferred_profile == PM_ENTERPRISE_SERVER || acpi_gbl_FADT.preferred_profile == PM_PERFORMANCE_SERVER) return true; + return false; +} + +static bool intel_pstate_get_ppc_enable_status(void) +{ + if (intel_pstate_acpi_pm_profile_server()) + return true; + return acpi_ppc; } @@ -458,6 +467,11 @@ static inline void intel_pstate_init_acpi_perf_limits(struct cpufreq_policy *pol static inline void intel_pstate_exit_perf_limits(struct cpufreq_policy *policy) { } + +static inline bool intel_pstate_acpi_pm_profile_server(void) +{ + return false; +} #endif static inline void update_turbo_state(void) @@ -1413,7 +1427,15 @@ static void intel_pstate_get_cpu_pstates(struct cpudata *cpu) cpu->pstate.turbo_pstate = pstate_funcs.get_turbo(); cpu->pstate.scaling = pstate_funcs.get_scaling(); cpu->pstate.max_freq = cpu->pstate.max_pstate * cpu->pstate.scaling; - cpu->pstate.turbo_freq = cpu->pstate.turbo_pstate * cpu->pstate.scaling; + + if (hwp_active && !hwp_mode_bdw) { + unsigned int phy_max, current_max; + + intel_pstate_get_hwp_max(cpu->cpu, &phy_max, ¤t_max); + 
cpu->pstate.turbo_freq = phy_max * cpu->pstate.scaling; + } else { + cpu->pstate.turbo_freq = cpu->pstate.turbo_pstate * cpu->pstate.scaling; + } if (pstate_funcs.get_aperf_mperf_shift) cpu->aperf_mperf_shift = pstate_funcs.get_aperf_mperf_shift(); @@ -1832,7 +1854,7 @@ static int intel_pstate_init_cpu(unsigned int cpunum) intel_pstate_hwp_enable(cpu); id = x86_match_cpu(intel_pstate_hwp_boost_ids); - if (id) + if (id && intel_pstate_acpi_pm_profile_server()) hwp_boost = true; } @@ -2385,6 +2407,18 @@ static bool __init intel_pstate_no_acpi_pss(void) return true; } +static bool __init intel_pstate_no_acpi_pcch(void) +{ + acpi_status status; + acpi_handle handle; + + status = acpi_get_handle(NULL, "\\_SB", &handle); + if (ACPI_FAILURE(status)) + return true; + + return !acpi_has_method(handle, "PCCH"); +} + static bool __init intel_pstate_has_acpi_ppc(void) { int i; @@ -2444,7 +2478,10 @@ static bool __init intel_pstate_platform_pwr_mgmt_exists(void) switch (plat_info[idx].data) { case PSS: - return intel_pstate_no_acpi_pss(); + if (!intel_pstate_no_acpi_pss()) + return false; + + return intel_pstate_no_acpi_pcch(); case PPC: return intel_pstate_has_acpi_ppc() && !force_load; } @@ -2467,28 +2504,36 @@ static inline bool intel_pstate_has_acpi_ppc(void) { return false; } static inline void intel_pstate_request_control_from_smm(void) {} #endif /* CONFIG_ACPI */ +#define INTEL_PSTATE_HWP_BROADWELL 0x01 + +#define ICPU_HWP(model, hwp_mode) \ + { X86_VENDOR_INTEL, 6, model, X86_FEATURE_HWP, hwp_mode } + static const struct x86_cpu_id hwp_support_ids[] __initconst = { - { X86_VENDOR_INTEL, 6, X86_MODEL_ANY, X86_FEATURE_HWP }, + ICPU_HWP(INTEL_FAM6_BROADWELL_X, INTEL_PSTATE_HWP_BROADWELL), + ICPU_HWP(INTEL_FAM6_BROADWELL_XEON_D, INTEL_PSTATE_HWP_BROADWELL), + ICPU_HWP(X86_MODEL_ANY, 0), {} }; static int __init intel_pstate_init(void) { + const struct x86_cpu_id *id; int rc; if (no_load) return -ENODEV; - if (x86_match_cpu(hwp_support_ids)) { + id = x86_match_cpu(hwp_support_ids); + if (id) { copy_cpu_funcs(&core_funcs); if (!no_hwp) { hwp_active++; + hwp_mode_bdw = id->driver_data; intel_pstate.attr = hwp_cpufreq_attrs; goto hwp_cpu_matched; } } else { - const struct x86_cpu_id *id; - id = x86_match_cpu(intel_pstate_cpu_ids); if (!id) return -ENODEV; diff --git a/drivers/cpufreq/pcc-cpufreq.c b/drivers/cpufreq/pcc-cpufreq.c index 3f0ce2ae35ee..0c56c9759672 100644 --- a/drivers/cpufreq/pcc-cpufreq.c +++ b/drivers/cpufreq/pcc-cpufreq.c @@ -580,6 +580,10 @@ static int __init pcc_cpufreq_init(void) { int ret; + /* Skip initialization if another cpufreq driver is there. 
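+	 * (cpufreq_get_current_driver() returns the registered driver's
+	 * name or NULL, so this bails out when another cpufreq driver,
+	 * e.g. intel_pstate, has already claimed the CPUs.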
*/ + if (cpufreq_get_current_driver()) + return 0; + if (acpi_disabled) return 0; diff --git a/drivers/cpufreq/qcom-cpufreq-kryo.c b/drivers/cpufreq/qcom-cpufreq-kryo.c index d049fe4b80c4..efc9a7ae4857 100644 --- a/drivers/cpufreq/qcom-cpufreq-kryo.c +++ b/drivers/cpufreq/qcom-cpufreq-kryo.c @@ -42,6 +42,8 @@ enum _msm8996_version { NUM_OF_MSM8996_VERSIONS, }; +struct platform_device *cpufreq_dt_pdev, *kryo_cpufreq_pdev; + static enum _msm8996_version __init qcom_cpufreq_kryo_get_msm_id(void) { size_t len; @@ -74,7 +76,6 @@ static enum _msm8996_version __init qcom_cpufreq_kryo_get_msm_id(void) static int qcom_cpufreq_kryo_probe(struct platform_device *pdev) { struct opp_table *opp_tables[NR_CPUS] = {0}; - struct platform_device *cpufreq_dt_pdev; enum _msm8996_version msm8996_version; struct nvmem_cell *speedbin_nvmem; struct device_node *np; @@ -86,8 +87,8 @@ static int qcom_cpufreq_kryo_probe(struct platform_device *pdev) int ret; cpu_dev = get_cpu_device(0); - if (NULL == cpu_dev) - ret = -ENODEV; + if (!cpu_dev) + return -ENODEV; msm8996_version = qcom_cpufreq_kryo_get_msm_id(); if (NUM_OF_MSM8996_VERSIONS == msm8996_version) { @@ -96,8 +97,8 @@ static int qcom_cpufreq_kryo_probe(struct platform_device *pdev) } np = dev_pm_opp_of_get_opp_desc_node(cpu_dev); - if (IS_ERR(np)) - return PTR_ERR(np); + if (!np) + return -ENOENT; ret = of_device_is_compatible(np, "operating-points-v2-kryo-cpu"); if (!ret) { @@ -115,6 +116,8 @@ static int qcom_cpufreq_kryo_probe(struct platform_device *pdev) speedbin = nvmem_cell_read(speedbin_nvmem, &len); nvmem_cell_put(speedbin_nvmem); + if (IS_ERR(speedbin)) + return PTR_ERR(speedbin); switch (msm8996_version) { case MSM8996_V3: @@ -127,6 +130,7 @@ static int qcom_cpufreq_kryo_probe(struct platform_device *pdev) BUG(); break; } + kfree(speedbin); for_each_possible_cpu(cpu) { cpu_dev = get_cpu_device(cpu); @@ -162,8 +166,15 @@ free_opp: return ret; } +static int qcom_cpufreq_kryo_remove(struct platform_device *pdev) +{ + platform_device_unregister(cpufreq_dt_pdev); + return 0; +} + static struct platform_driver qcom_cpufreq_kryo_driver = { .probe = qcom_cpufreq_kryo_probe, + .remove = qcom_cpufreq_kryo_remove, .driver = { .name = "qcom-cpufreq-kryo", }, @@ -172,6 +183,7 @@ static struct platform_driver qcom_cpufreq_kryo_driver = { static const struct of_device_id qcom_cpufreq_kryo_match_list[] __initconst = { { .compatible = "qcom,apq8096", }, { .compatible = "qcom,msm8996", }, + {} }; /* @@ -198,8 +210,9 @@ static int __init qcom_cpufreq_kryo_init(void) if (unlikely(ret < 0)) return ret; - ret = PTR_ERR_OR_ZERO(platform_device_register_simple( - "qcom-cpufreq-kryo", -1, NULL, 0)); + kryo_cpufreq_pdev = platform_device_register_simple( + "qcom-cpufreq-kryo", -1, NULL, 0); + ret = PTR_ERR_OR_ZERO(kryo_cpufreq_pdev); if (0 == ret) return 0; @@ -208,5 +221,12 @@ static int __init qcom_cpufreq_kryo_init(void) } module_init(qcom_cpufreq_kryo_init); +static void __init qcom_cpufreq_kryo_exit(void) +{ + platform_device_unregister(kryo_cpufreq_pdev); + platform_driver_unregister(&qcom_cpufreq_kryo_driver); +} +module_exit(qcom_cpufreq_kryo_exit); + MODULE_DESCRIPTION("Qualcomm Technologies, Inc. 
Kryo CPUfreq driver"); MODULE_LICENSE("GPL v2"); diff --git a/drivers/crypto/chelsio/chtls/chtls_io.c b/drivers/crypto/chelsio/chtls/chtls_io.c index 00c7aab8e7d0..afebbd87c4aa 100644 --- a/drivers/crypto/chelsio/chtls/chtls_io.c +++ b/drivers/crypto/chelsio/chtls/chtls_io.c @@ -1548,15 +1548,14 @@ skip_copy: tp->urg_data = 0; if ((avail + offset) >= skb->len) { - if (likely(skb)) - chtls_free_skb(sk, skb); - buffers_freed++; if (ULP_SKB_CB(skb)->flags & ULPCB_FLAG_TLS_HDR) { tp->copied_seq += skb->len; hws->rcvpld = skb->hdr_len; } else { tp->copied_seq += hws->rcvpld; } + chtls_free_skb(sk, skb); + buffers_freed++; hws->copied_seq = 0; if (copied >= target && !skb_peek(&sk->sk_receive_queue)) diff --git a/drivers/crypto/padlock-aes.c b/drivers/crypto/padlock-aes.c index 1c6cbda56afe..09d823d36d3a 100644 --- a/drivers/crypto/padlock-aes.c +++ b/drivers/crypto/padlock-aes.c @@ -266,6 +266,8 @@ static inline void padlock_xcrypt_ecb(const u8 *input, u8 *output, void *key, return; } + count -= initial; + if (initial) asm volatile (".byte 0xf3,0x0f,0xa7,0xc8" /* rep xcryptecb */ : "+S"(input), "+D"(output) @@ -273,7 +275,7 @@ static inline void padlock_xcrypt_ecb(const u8 *input, u8 *output, void *key, asm volatile (".byte 0xf3,0x0f,0xa7,0xc8" /* rep xcryptecb */ : "+S"(input), "+D"(output) - : "d"(control_word), "b"(key), "c"(count - initial)); + : "d"(control_word), "b"(key), "c"(count)); } static inline u8 *padlock_xcrypt_cbc(const u8 *input, u8 *output, void *key, @@ -284,6 +286,8 @@ static inline u8 *padlock_xcrypt_cbc(const u8 *input, u8 *output, void *key, if (count < cbc_fetch_blocks) return cbc_crypt(input, output, key, iv, control_word, count); + count -= initial; + if (initial) asm volatile (".byte 0xf3,0x0f,0xa7,0xd0" /* rep xcryptcbc */ : "+S" (input), "+D" (output), "+a" (iv) @@ -291,7 +295,7 @@ static inline u8 *padlock_xcrypt_cbc(const u8 *input, u8 *output, void *key, asm volatile (".byte 0xf3,0x0f,0xa7,0xd0" /* rep xcryptcbc */ : "+S" (input), "+D" (output), "+a" (iv) - : "d" (control_word), "b" (key), "c" (count-initial)); + : "d" (control_word), "b" (key), "c" (count)); return iv; } diff --git a/drivers/dax/device.c b/drivers/dax/device.c index de2f8297a210..108c37fca782 100644 --- a/drivers/dax/device.c +++ b/drivers/dax/device.c @@ -189,14 +189,16 @@ static int check_vma(struct dev_dax *dev_dax, struct vm_area_struct *vma, /* prevent private mappings from being established */ if ((vma->vm_flags & VM_MAYSHARE) != VM_MAYSHARE) { - dev_info(dev, "%s: %s: fail, attempted private mapping\n", + dev_info_ratelimited(dev, + "%s: %s: fail, attempted private mapping\n", current->comm, func); return -EINVAL; } mask = dax_region->align - 1; if (vma->vm_start & mask || vma->vm_end & mask) { - dev_info(dev, "%s: %s: fail, unaligned vma (%#lx - %#lx, %#lx)\n", + dev_info_ratelimited(dev, + "%s: %s: fail, unaligned vma (%#lx - %#lx, %#lx)\n", current->comm, func, vma->vm_start, vma->vm_end, mask); return -EINVAL; @@ -204,13 +206,15 @@ static int check_vma(struct dev_dax *dev_dax, struct vm_area_struct *vma, if ((dax_region->pfn_flags & (PFN_DEV|PFN_MAP)) == PFN_DEV && (vma->vm_flags & VM_DONTCOPY) == 0) { - dev_info(dev, "%s: %s: fail, dax range requires MADV_DONTFORK\n", + dev_info_ratelimited(dev, + "%s: %s: fail, dax range requires MADV_DONTFORK\n", current->comm, func); return -EINVAL; } if (!vma_is_dax(vma)) { - dev_info(dev, "%s: %s: fail, vma is not DAX capable\n", + dev_info_ratelimited(dev, + "%s: %s: fail, vma is not DAX capable\n", current->comm, func); return -EINVAL; } diff 
--git a/drivers/dax/super.c b/drivers/dax/super.c index 903d9c473749..45276abf03aa 100644 --- a/drivers/dax/super.c +++ b/drivers/dax/super.c @@ -86,6 +86,7 @@ bool __bdev_dax_supported(struct block_device *bdev, int blocksize) { struct dax_device *dax_dev; bool dax_enabled = false; + struct request_queue *q; pgoff_t pgoff; int err, id; void *kaddr; @@ -99,6 +100,13 @@ bool __bdev_dax_supported(struct block_device *bdev, int blocksize) return false; } + q = bdev_get_queue(bdev); + if (!q || !blk_queue_dax(q)) { + pr_debug("%s: error: request queue doesn't support dax\n", + bdevname(bdev, buf)); + return false; + } + err = bdev_dax_pgoff(bdev, 0, PAGE_SIZE, &pgoff); if (err) { pr_debug("%s: error: unaligned partition for dax\n", diff --git a/drivers/dma/dmaengine.c b/drivers/dma/dmaengine.c index b451354735d3..08ba8473a284 100644 --- a/drivers/dma/dmaengine.c +++ b/drivers/dma/dmaengine.c @@ -38,7 +38,7 @@ * Each device has a channels list, which runs unlocked but is never modified * once the device is registered, it's just setup by the driver. * - * See Documentation/dmaengine.txt for more details + * See Documentation/driver-api/dmaengine for more details */ #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt diff --git a/drivers/dma/k3dma.c b/drivers/dma/k3dma.c index fa31cccbe04f..6bfa217ed6d0 100644 --- a/drivers/dma/k3dma.c +++ b/drivers/dma/k3dma.c @@ -794,7 +794,7 @@ static struct dma_chan *k3_of_dma_simple_xlate(struct of_phandle_args *dma_spec, struct k3_dma_dev *d = ofdma->of_dma_data; unsigned int request = dma_spec->args[0]; - if (request > d->dma_requests) + if (request >= d->dma_requests) return NULL; return dma_get_slave_channel(&(d->chans[request].vc.chan)); diff --git a/drivers/dma/pl330.c b/drivers/dma/pl330.c index defcdde4d358..de0957fe9668 100644 --- a/drivers/dma/pl330.c +++ b/drivers/dma/pl330.c @@ -3033,7 +3033,7 @@ pl330_probe(struct amba_device *adev, const struct amba_id *id) pd->src_addr_widths = PL330_DMA_BUSWIDTHS; pd->dst_addr_widths = PL330_DMA_BUSWIDTHS; pd->directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV); - pd->residue_granularity = DMA_RESIDUE_GRANULARITY_SEGMENT; + pd->residue_granularity = DMA_RESIDUE_GRANULARITY_BURST; pd->max_burst = ((pl330->quirks & PL330_QUIRK_BROKEN_NO_FLUSHP) ? 
1 : PL330_MAX_BURST); diff --git a/drivers/dma/ti/omap-dma.c b/drivers/dma/ti/omap-dma.c index 9b5ca8691f27..a4a931ddf6f6 100644 --- a/drivers/dma/ti/omap-dma.c +++ b/drivers/dma/ti/omap-dma.c @@ -1485,7 +1485,11 @@ static int omap_dma_probe(struct platform_device *pdev) od->ddev.src_addr_widths = OMAP_DMA_BUSWIDTHS; od->ddev.dst_addr_widths = OMAP_DMA_BUSWIDTHS; od->ddev.directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV); - od->ddev.residue_granularity = DMA_RESIDUE_GRANULARITY_BURST; + if (__dma_omap15xx(od->plat->dma_attr)) + od->ddev.residue_granularity = + DMA_RESIDUE_GRANULARITY_DESCRIPTOR; + else + od->ddev.residue_granularity = DMA_RESIDUE_GRANULARITY_BURST; od->ddev.max_burst = SZ_16M - 1; /* CCEN: 24bit unsigned */ od->ddev.dev = &pdev->dev; INIT_LIST_HEAD(&od->ddev.channels); diff --git a/drivers/firmware/dmi-id.c b/drivers/firmware/dmi-id.c index 951b6c79f166..624a11cb07e2 100644 --- a/drivers/firmware/dmi-id.c +++ b/drivers/firmware/dmi-id.c @@ -47,6 +47,7 @@ DEFINE_DMI_ATTR_WITH_SHOW(product_name, 0444, DMI_PRODUCT_NAME); DEFINE_DMI_ATTR_WITH_SHOW(product_version, 0444, DMI_PRODUCT_VERSION); DEFINE_DMI_ATTR_WITH_SHOW(product_serial, 0400, DMI_PRODUCT_SERIAL); DEFINE_DMI_ATTR_WITH_SHOW(product_uuid, 0400, DMI_PRODUCT_UUID); +DEFINE_DMI_ATTR_WITH_SHOW(product_sku, 0444, DMI_PRODUCT_SKU); DEFINE_DMI_ATTR_WITH_SHOW(product_family, 0444, DMI_PRODUCT_FAMILY); DEFINE_DMI_ATTR_WITH_SHOW(board_vendor, 0444, DMI_BOARD_VENDOR); DEFINE_DMI_ATTR_WITH_SHOW(board_name, 0444, DMI_BOARD_NAME); @@ -193,6 +194,7 @@ static void __init dmi_id_init_attr_table(void) ADD_DMI_ATTR(product_serial, DMI_PRODUCT_SERIAL); ADD_DMI_ATTR(product_uuid, DMI_PRODUCT_UUID); ADD_DMI_ATTR(product_family, DMI_PRODUCT_FAMILY); + ADD_DMI_ATTR(product_sku, DMI_PRODUCT_SKU); ADD_DMI_ATTR(board_vendor, DMI_BOARD_VENDOR); ADD_DMI_ATTR(board_name, DMI_BOARD_NAME); ADD_DMI_ATTR(board_version, DMI_BOARD_VERSION); diff --git a/drivers/firmware/dmi_scan.c b/drivers/firmware/dmi_scan.c index 54e66adef252..f2483548cde9 100644 --- a/drivers/firmware/dmi_scan.c +++ b/drivers/firmware/dmi_scan.c @@ -447,6 +447,7 @@ static void __init dmi_decode(const struct dmi_header *dm, void *dummy) dmi_save_ident(dm, DMI_PRODUCT_VERSION, 6); dmi_save_ident(dm, DMI_PRODUCT_SERIAL, 7); dmi_save_uuid(dm, DMI_PRODUCT_UUID, 8); + dmi_save_ident(dm, DMI_PRODUCT_SKU, 25); dmi_save_ident(dm, DMI_PRODUCT_FAMILY, 26); break; case 2: /* Base Board Information */ diff --git a/drivers/firmware/efi/Kconfig b/drivers/firmware/efi/Kconfig index 781a4a337557..d8e159feb573 100644 --- a/drivers/firmware/efi/Kconfig +++ b/drivers/firmware/efi/Kconfig @@ -87,6 +87,18 @@ config EFI_RUNTIME_WRAPPERS config EFI_ARMSTUB bool +config EFI_ARMSTUB_DTB_LOADER + bool "Enable the DTB loader" + depends on EFI_ARMSTUB + help + Select this config option to add support for the dtb= command + line parameter, allowing a device tree blob to be loaded into + memory from the EFI System Partition by the stub. + + The device tree is typically provided by the platform or by + the bootloader, so this option is mostly for development + purposes only. + config EFI_BOOTLOADER_CONTROL tristate "EFI Bootloader Control" depends on EFI_VARS diff --git a/drivers/firmware/efi/arm-init.c b/drivers/firmware/efi/arm-init.c index 80d1a885def5..b5214c143fee 100644 --- a/drivers/firmware/efi/arm-init.c +++ b/drivers/firmware/efi/arm-init.c @@ -193,7 +193,7 @@ static __init void reserve_regions(void) * uses its own memory map instead. 
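 * (the switch to PHYS_ADDR_MAX just below is about type width: with a
 * 32-bit phys_addr_t, (phys_addr_t)ULLONG_MAX only meant "everything"
 * because the cast happened to truncate to the 32-bit maximum)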
*/ memblock_dump_all(); - memblock_remove(0, (phys_addr_t)ULLONG_MAX); + memblock_remove(0, PHYS_ADDR_MAX); for_each_efi_memory_desc(md) { paddr = md->phys_addr; diff --git a/drivers/firmware/efi/cper.c b/drivers/firmware/efi/cper.c index 3bf0dca378a6..a7902fccdcfa 100644 --- a/drivers/firmware/efi/cper.c +++ b/drivers/firmware/efi/cper.c @@ -48,8 +48,21 @@ u64 cper_next_record_id(void) { static atomic64_t seq; - if (!atomic64_read(&seq)) - atomic64_set(&seq, ((u64)get_seconds()) << 32); + if (!atomic64_read(&seq)) { + time64_t time = ktime_get_real_seconds(); + + /* + * This code is unlikely to still be needed in year 2106, + * but just in case, let's use a few more bits for timestamps + * after y2038 to be sure they keep increasing monotonically + * for the next few hundred years... + */ + if (time < 0x80000000) + atomic64_set(&seq, (ktime_get_real_seconds()) << 32); + else + atomic64_set(&seq, 0x8000000000000000ull | + ktime_get_real_seconds() << 24); + } return atomic64_inc_return(&seq); } @@ -459,7 +472,7 @@ cper_estatus_print_section(const char *pfx, struct acpi_hest_generic_data *gdata else goto err_section_too_small; #if defined(CONFIG_ARM64) || defined(CONFIG_ARM) - } else if (!uuid_le_cmp(*sec_type, CPER_SEC_PROC_ARM)) { + } else if (guid_equal(sec_type, &CPER_SEC_PROC_ARM)) { struct cper_sec_proc_arm *arm_err = acpi_hest_get_payload(gdata); printk("%ssection_type: ARM processor error\n", newpfx); diff --git a/drivers/firmware/efi/efi-pstore.c b/drivers/firmware/efi/efi-pstore.c index 5a0fa939d70f..cfe87b465819 100644 --- a/drivers/firmware/efi/efi-pstore.c +++ b/drivers/firmware/efi/efi-pstore.c @@ -28,10 +28,9 @@ static int efi_pstore_close(struct pstore_info *psi) return 0; } -static inline u64 generic_id(unsigned long timestamp, - unsigned int part, int count) +static inline u64 generic_id(u64 timestamp, unsigned int part, int count) { - return ((u64) timestamp * 100 + part) * 1000 + count; + return (timestamp * 100 + part) * 1000 + count; } static int efi_pstore_read_func(struct efivar_entry *entry, @@ -42,7 +41,8 @@ static int efi_pstore_read_func(struct efivar_entry *entry, int i; int cnt; unsigned int part; - unsigned long time, size; + unsigned long size; + u64 time; if (efi_guidcmp(entry->var.VendorGuid, vendor)) return 0; @@ -50,7 +50,7 @@ static int efi_pstore_read_func(struct efivar_entry *entry, for (i = 0; i < DUMP_NAME_LEN; i++) name[i] = entry->var.VariableName[i]; - if (sscanf(name, "dump-type%u-%u-%d-%lu-%c", + if (sscanf(name, "dump-type%u-%u-%d-%llu-%c", &record->type, &part, &cnt, &time, &data_type) == 5) { record->id = generic_id(time, part, cnt); record->part = part; @@ -62,7 +62,7 @@ static int efi_pstore_read_func(struct efivar_entry *entry, else record->compressed = false; record->ecc_notice_size = 0; - } else if (sscanf(name, "dump-type%u-%u-%d-%lu", + } else if (sscanf(name, "dump-type%u-%u-%d-%llu", &record->type, &part, &cnt, &time) == 4) { record->id = generic_id(time, part, cnt); record->part = part; @@ -71,7 +71,7 @@ static int efi_pstore_read_func(struct efivar_entry *entry, record->time.tv_nsec = 0; record->compressed = false; record->ecc_notice_size = 0; - } else if (sscanf(name, "dump-type%u-%u-%lu", + } else if (sscanf(name, "dump-type%u-%u-%llu", &record->type, &part, &time) == 3) { /* * Check if an old format, @@ -250,9 +250,10 @@ static int efi_pstore_write(struct pstore_record *record) /* Since we copy the entire length of name, make sure it is wiped. 
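/*
 * Illustrative sketch (function name invented) of the record-ID seeding
 * the cper.c hunk above introduces: below 0x80000000 seconds (~year 2038)
 * the timestamp occupies the top 32 bits; afterwards the top bit flags
 * the new layout and the timestamp is shifted by only 24 bits, covering
 * a few hundred more years while the remaining low bits keep serving the
 * atomic64_inc_return() sequence counter.
 */
#include <stdint.h>

static uint64_t cper_seed_sketch(int64_t seconds)
{
	if (seconds < 0x80000000)		/* pre-2038 layout */
		return (uint64_t)seconds << 32;
	return 0x8000000000000000ull | ((uint64_t)seconds << 24);
}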
*/ memset(name, 0, sizeof(name)); - snprintf(name, sizeof(name), "dump-type%u-%u-%d-%lu-%c", + snprintf(name, sizeof(name), "dump-type%u-%u-%d-%lld-%c", record->type, record->part, record->count, - record->time.tv_sec, record->compressed ? 'C' : 'D'); + (long long)record->time.tv_sec, + record->compressed ? 'C' : 'D'); for (i = 0; i < DUMP_NAME_LEN; i++) efi_name[i] = name[i]; @@ -327,15 +328,15 @@ static int efi_pstore_erase(struct pstore_record *record) char name[DUMP_NAME_LEN]; int ret; - snprintf(name, sizeof(name), "dump-type%u-%u-%d-%lu", + snprintf(name, sizeof(name), "dump-type%u-%u-%d-%lld", record->type, record->part, record->count, - record->time.tv_sec); + (long long)record->time.tv_sec); ret = efi_pstore_erase_name(name); if (ret != -ENOENT) return ret; - snprintf(name, sizeof(name), "dump-type%u-%u-%lu", - record->type, record->part, record->time.tv_sec); + snprintf(name, sizeof(name), "dump-type%u-%u-%lld", + record->type, record->part, (long long)record->time.tv_sec); ret = efi_pstore_erase_name(name); return ret; diff --git a/drivers/firmware/efi/efi.c b/drivers/firmware/efi/efi.c index 232f4915223b..2a29dd9c986d 100644 --- a/drivers/firmware/efi/efi.c +++ b/drivers/firmware/efi/efi.c @@ -82,8 +82,11 @@ struct mm_struct efi_mm = { .mmap_sem = __RWSEM_INITIALIZER(efi_mm.mmap_sem), .page_table_lock = __SPIN_LOCK_UNLOCKED(efi_mm.page_table_lock), .mmlist = LIST_HEAD_INIT(efi_mm.mmlist), + .cpu_bitmap = { [BITS_TO_LONGS(NR_CPUS)] = 0}, }; +struct workqueue_struct *efi_rts_wq; + static bool disable_runtime; static int __init setup_noefi(char *arg) { @@ -337,6 +340,18 @@ static int __init efisubsys_init(void) if (!efi_enabled(EFI_BOOT)) return 0; + /* + * Since we process only one efi_runtime_service() at a time, an + * ordered workqueue (which creates only one execution context) + * should suffice for all our needs. + */ + efi_rts_wq = alloc_ordered_workqueue("efi_rts_wq", 0); + if (!efi_rts_wq) { + pr_err("Creating efi_rts_wq failed, EFI runtime services disabled.\n"); + clear_bit(EFI_RUNTIME_SERVICES, &efi.flags); + return 0; + } + /* We register the efi directory at /sys/firmware/efi */ efi_kobj = kobject_create_and_add("efi", firmware_kobj); if (!efi_kobj) { @@ -388,7 +403,7 @@ subsys_initcall(efisubsys_init); * and if so, populate the supplied memory descriptor with the appropriate * data.
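/*
 * Minimal self-contained sketch (all names hypothetical) of the pattern
 * efisubsys_init() sets up above and runtime-wrappers.c uses later: an
 * ordered workqueue executes at most one item at a time, in queueing
 * order, so an on-stack work item plus a completion yields serialized,
 * synchronous dispatch.
 */
#include <linux/kernel.h>
#include <linux/workqueue.h>
#include <linux/completion.h>
#include <linux/errno.h>
#include <linux/init.h>

static struct workqueue_struct *demo_wq;	/* counterpart of efi_rts_wq */

struct demo_work {
	struct work_struct work;
	struct completion done;
};

static void demo_fn(struct work_struct *work)
{
	struct demo_work *dw = container_of(work, struct demo_work, work);

	/* single execution context: no other demo_fn() runs concurrently */
	complete(&dw->done);
}

static int demo_call(void)
{
	struct demo_work dw;

	init_completion(&dw.done);
	INIT_WORK_ONSTACK(&dw.work, demo_fn);
	if (!queue_work(demo_wq, &dw.work))
		return -EBUSY;
	wait_for_completion(&dw.done);	/* block until the worker has run */
	destroy_work_on_stack(&dw.work);
	return 0;
}

static int __init demo_init(void)
{
	demo_wq = alloc_ordered_workqueue("demo_wq", 0);
	return demo_wq ? 0 : -ENOMEM;
}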
*/ -int __init efi_mem_desc_lookup(u64 phys_addr, efi_memory_desc_t *out_md) +int efi_mem_desc_lookup(u64 phys_addr, efi_memory_desc_t *out_md) { efi_memory_desc_t *md; @@ -406,12 +421,6 @@ int __init efi_mem_desc_lookup(u64 phys_addr, efi_memory_desc_t *out_md) u64 size; u64 end; - if (!(md->attribute & EFI_MEMORY_RUNTIME) && - md->type != EFI_BOOT_SERVICES_DATA && - md->type != EFI_RUNTIME_SERVICES_DATA) { - continue; - } - size = md->num_pages << EFI_PAGE_SHIFT; end = md->phys_addr + size; if (phys_addr >= md->phys_addr && phys_addr < end) { diff --git a/drivers/firmware/efi/esrt.c b/drivers/firmware/efi/esrt.c index 1ab80e06e7c5..5d06bd247d07 100644 --- a/drivers/firmware/efi/esrt.c +++ b/drivers/firmware/efi/esrt.c @@ -250,7 +250,10 @@ void __init efi_esrt_init(void) return; rc = efi_mem_desc_lookup(efi.esrt, &md); - if (rc < 0) { + if (rc < 0 || + (!(md.attribute & EFI_MEMORY_RUNTIME) && + md.type != EFI_BOOT_SERVICES_DATA && + md.type != EFI_RUNTIME_SERVICES_DATA)) { pr_warn("ESRT header is not in the memory map.\n"); return; } @@ -326,7 +329,8 @@ void __init efi_esrt_init(void) end = esrt_data + size; pr_info("Reserving ESRT space from %pa to %pa.\n", &esrt_data, &end); - efi_mem_reserve(esrt_data, esrt_data_size); + if (md.type == EFI_BOOT_SERVICES_DATA) + efi_mem_reserve(esrt_data, esrt_data_size); pr_debug("esrt-init: loaded.\n"); } diff --git a/drivers/firmware/efi/libstub/arm-stub.c b/drivers/firmware/efi/libstub/arm-stub.c index 01a9d78ee415..6920033de6d4 100644 --- a/drivers/firmware/efi/libstub/arm-stub.c +++ b/drivers/firmware/efi/libstub/arm-stub.c @@ -40,31 +40,6 @@ static u64 virtmap_base = EFI_RT_VIRTUAL_BASE; -efi_status_t efi_open_volume(efi_system_table_t *sys_table_arg, - void *__image, void **__fh) -{ - efi_file_io_interface_t *io; - efi_loaded_image_t *image = __image; - efi_file_handle_t *fh; - efi_guid_t fs_proto = EFI_FILE_SYSTEM_GUID; - efi_status_t status; - void *handle = (void *)(unsigned long)image->device_handle; - - status = sys_table_arg->boottime->handle_protocol(handle, - &fs_proto, (void **)&io); - if (status != EFI_SUCCESS) { - efi_printk(sys_table_arg, "Failed to handle fs_proto\n"); - return status; - } - - status = io->open_volume(io, &fh); - if (status != EFI_SUCCESS) - efi_printk(sys_table_arg, "Failed to open volume\n"); - - *__fh = fh; - return status; -} - void efi_char16_printk(efi_system_table_t *sys_table_arg, efi_char16_t *str) { @@ -202,9 +177,10 @@ unsigned long efi_entry(void *handle, efi_system_table_t *sys_table, * 'dtb=' unless UEFI Secure Boot is disabled. We assume that secure * boot is enabled if we can't determine its state. 
*/ - if (secure_boot != efi_secureboot_mode_disabled && - strstr(cmdline_ptr, "dtb=")) { - pr_efi(sys_table, "Ignoring DTB from command line.\n"); + if (!IS_ENABLED(CONFIG_EFI_ARMSTUB_DTB_LOADER) || + secure_boot != efi_secureboot_mode_disabled) { + if (strstr(cmdline_ptr, "dtb=")) + pr_efi(sys_table, "Ignoring DTB from command line.\n"); } else { status = handle_cmdline_files(sys_table, image, cmdline_ptr, "dtb=", diff --git a/drivers/firmware/efi/libstub/efi-stub-helper.c b/drivers/firmware/efi/libstub/efi-stub-helper.c index 50a9cab5a834..e94975f4655b 100644 --- a/drivers/firmware/efi/libstub/efi-stub-helper.c +++ b/drivers/firmware/efi/libstub/efi-stub-helper.c @@ -413,6 +413,34 @@ static efi_status_t efi_file_close(void *handle) return efi_call_proto(efi_file_handle, close, handle); } +static efi_status_t efi_open_volume(efi_system_table_t *sys_table_arg, + efi_loaded_image_t *image, + efi_file_handle_t **__fh) +{ + efi_file_io_interface_t *io; + efi_file_handle_t *fh; + efi_guid_t fs_proto = EFI_FILE_SYSTEM_GUID; + efi_status_t status; + void *handle = (void *)(unsigned long)efi_table_attr(efi_loaded_image, + device_handle, + image); + + status = efi_call_early(handle_protocol, handle, + &fs_proto, (void **)&io); + if (status != EFI_SUCCESS) { + efi_printk(sys_table_arg, "Failed to handle fs_proto\n"); + return status; + } + + status = efi_call_proto(efi_file_io_interface, open_volume, io, &fh); + if (status != EFI_SUCCESS) + efi_printk(sys_table_arg, "Failed to open volume\n"); + else + *__fh = fh; + + return status; +} + /* * Parse the ASCII string 'cmdline' for EFI options, denoted by the efi= * option, e.g. efi=nochunk. @@ -563,8 +591,7 @@ efi_status_t handle_cmdline_files(efi_system_table_t *sys_table_arg, /* Only open the volume once. */ if (!i) { - status = efi_open_volume(sys_table_arg, image, - (void **)&fh); + status = efi_open_volume(sys_table_arg, image, &fh); if (status != EFI_SUCCESS) goto free_files; } diff --git a/drivers/firmware/efi/libstub/efistub.h b/drivers/firmware/efi/libstub/efistub.h index f59564b72ddc..32799cf039ef 100644 --- a/drivers/firmware/efi/libstub/efistub.h +++ b/drivers/firmware/efi/libstub/efistub.h @@ -36,9 +36,6 @@ extern int __pure is_quiet(void); void efi_char16_printk(efi_system_table_t *, efi_char16_t *); -efi_status_t efi_open_volume(efi_system_table_t *sys_table_arg, void *__image, - void **__fh); - unsigned long get_dram_base(efi_system_table_t *sys_table_arg); efi_status_t allocate_new_fdt_and_exit_boot(efi_system_table_t *sys_table, diff --git a/drivers/firmware/efi/libstub/tpm.c b/drivers/firmware/efi/libstub/tpm.c index caa37a6dd9d4..a90b0b8fc69a 100644 --- a/drivers/firmware/efi/libstub/tpm.c +++ b/drivers/firmware/efi/libstub/tpm.c @@ -64,7 +64,7 @@ static void efi_retrieve_tpm2_eventlog_1_2(efi_system_table_t *sys_table_arg) efi_guid_t tcg2_guid = EFI_TCG2_PROTOCOL_GUID; efi_guid_t linux_eventlog_guid = LINUX_EFI_TPM_EVENT_LOG_GUID; efi_status_t status; - efi_physical_addr_t log_location, log_last_entry; + efi_physical_addr_t log_location = 0, log_last_entry = 0; struct linux_efi_tpm_eventlog *log_tbl = NULL; unsigned long first_entry_addr, last_entry_addr; size_t log_size, last_entry_size; diff --git a/drivers/firmware/efi/runtime-wrappers.c b/drivers/firmware/efi/runtime-wrappers.c index ae54870b2788..aa66cbf23512 100644 --- a/drivers/firmware/efi/runtime-wrappers.c +++ b/drivers/firmware/efi/runtime-wrappers.c @@ -1,6 +1,15 @@ /* * runtime-wrappers.c - Runtime Services function call wrappers * + * Implementation summary: + * 
----------------------- + * 1. When user/kernel thread requests to execute efi_runtime_service(), + * enqueue work to efi_rts_wq. + * 2. Caller thread waits for completion until the work is finished + * because it's dependent on the return status and execution of + * efi_runtime_service(). + * For instance, get_variable() and get_next_variable(). + * * Copyright (C) 2014 Linaro Ltd. * * Split off from arch/x86/platform/efi/efi.c @@ -22,6 +31,9 @@ #include #include #include +#include +#include + #include /* @@ -33,6 +45,76 @@ #define __efi_call_virt(f, args...) \ __efi_call_virt_pointer(efi.systab->runtime, f, args) +/* efi_runtime_service() function identifiers */ +enum efi_rts_ids { + GET_TIME, + SET_TIME, + GET_WAKEUP_TIME, + SET_WAKEUP_TIME, + GET_VARIABLE, + GET_NEXT_VARIABLE, + SET_VARIABLE, + QUERY_VARIABLE_INFO, + GET_NEXT_HIGH_MONO_COUNT, + UPDATE_CAPSULE, + QUERY_CAPSULE_CAPS, +}; + +/* + * efi_runtime_work: Details of EFI Runtime Service work + * @arg<1-5>: EFI Runtime Service function arguments + * @status: Status of executing EFI Runtime Service + * @efi_rts_id: EFI Runtime Service function identifier + * @efi_rts_comp: Struct used for handling completions + */ +struct efi_runtime_work { + void *arg1; + void *arg2; + void *arg3; + void *arg4; + void *arg5; + efi_status_t status; + struct work_struct work; + enum efi_rts_ids efi_rts_id; + struct completion efi_rts_comp; +}; + +/* + * efi_queue_work: Queue efi_runtime_service() and wait until it's done + * @rts: efi_runtime_service() function identifier + * @rts_arg<1-5>: efi_runtime_service() function arguments + * + * Accesses to efi_runtime_services() are serialized by a binary + * semaphore (efi_runtime_lock) and caller waits until the work is + * finished, hence _only_ one work is queued at a time and the caller + * thread waits for completion. + */ +#define efi_queue_work(_rts, _arg1, _arg2, _arg3, _arg4, _arg5) \ +({ \ + struct efi_runtime_work efi_rts_work; \ + efi_rts_work.status = EFI_ABORTED; \ + \ + init_completion(&efi_rts_work.efi_rts_comp); \ + INIT_WORK_ONSTACK(&efi_rts_work.work, efi_call_rts); \ + efi_rts_work.arg1 = _arg1; \ + efi_rts_work.arg2 = _arg2; \ + efi_rts_work.arg3 = _arg3; \ + efi_rts_work.arg4 = _arg4; \ + efi_rts_work.arg5 = _arg5; \ + efi_rts_work.efi_rts_id = _rts; \ + \ + /* \ + * queue_work() returns 0 if work was already on queue, \ + * _ideally_ this should never happen. \ + */ \ + if (queue_work(efi_rts_wq, &efi_rts_work.work)) \ + wait_for_completion(&efi_rts_work.efi_rts_comp); \ + else \ + pr_err("Failed to queue work to efi_rts_wq.\n"); \ + \ + efi_rts_work.status; \ +}) + void efi_call_virt_check_flags(unsigned long flags, const char *call) { unsigned long cur_flags, mismatch; @@ -90,13 +172,98 @@ void efi_call_virt_check_flags(unsigned long flags, const char *call) */ static DEFINE_SEMAPHORE(efi_runtime_lock); +/* + * Calls the appropriate efi_runtime_service() with the appropriate + * arguments. + * + * Semantics followed by efi_call_rts() to understand efi_runtime_work: + * 1. If argument was a pointer, recast it from void pointer to original + * pointer type. + * 2. If argument was a value, recast it from void pointer to original + * pointer type and dereference it. 
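+ *
+ * Example (hypothetical caller, mirroring virt_efi_set_variable() later
+ * in this file; locking via efi_runtime_lock omitted for brevity):
+ *
+ *	static efi_status_t demo_set_variable(efi_char16_t *name,
+ *					      efi_guid_t *vendor, u32 attr,
+ *					      unsigned long data_size,
+ *					      void *data)
+ *	{
+ *		return efi_queue_work(SET_VARIABLE, name, vendor, &attr,
+ *				      &data_size, data);
+ *	}
+ *
+ * Pointer arguments travel through the void * slots unchanged, while
+ * plain values such as attr and data_size go in by address and are
+ * dereferenced on the worker side. Taking the address of locals is safe
+ * only because efi_queue_work() blocks until the work item completes.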
+ */ +static void efi_call_rts(struct work_struct *work) +{ + struct efi_runtime_work *efi_rts_work; + void *arg1, *arg2, *arg3, *arg4, *arg5; + efi_status_t status = EFI_NOT_FOUND; + + efi_rts_work = container_of(work, struct efi_runtime_work, work); + arg1 = efi_rts_work->arg1; + arg2 = efi_rts_work->arg2; + arg3 = efi_rts_work->arg3; + arg4 = efi_rts_work->arg4; + arg5 = efi_rts_work->arg5; + + switch (efi_rts_work->efi_rts_id) { + case GET_TIME: + status = efi_call_virt(get_time, (efi_time_t *)arg1, + (efi_time_cap_t *)arg2); + break; + case SET_TIME: + status = efi_call_virt(set_time, (efi_time_t *)arg1); + break; + case GET_WAKEUP_TIME: + status = efi_call_virt(get_wakeup_time, (efi_bool_t *)arg1, + (efi_bool_t *)arg2, (efi_time_t *)arg3); + break; + case SET_WAKEUP_TIME: + status = efi_call_virt(set_wakeup_time, *(efi_bool_t *)arg1, + (efi_time_t *)arg2); + break; + case GET_VARIABLE: + status = efi_call_virt(get_variable, (efi_char16_t *)arg1, + (efi_guid_t *)arg2, (u32 *)arg3, + (unsigned long *)arg4, (void *)arg5); + break; + case GET_NEXT_VARIABLE: + status = efi_call_virt(get_next_variable, (unsigned long *)arg1, + (efi_char16_t *)arg2, + (efi_guid_t *)arg3); + break; + case SET_VARIABLE: + status = efi_call_virt(set_variable, (efi_char16_t *)arg1, + (efi_guid_t *)arg2, *(u32 *)arg3, + *(unsigned long *)arg4, (void *)arg5); + break; + case QUERY_VARIABLE_INFO: + status = efi_call_virt(query_variable_info, *(u32 *)arg1, + (u64 *)arg2, (u64 *)arg3, (u64 *)arg4); + break; + case GET_NEXT_HIGH_MONO_COUNT: + status = efi_call_virt(get_next_high_mono_count, (u32 *)arg1); + break; + case UPDATE_CAPSULE: + status = efi_call_virt(update_capsule, + (efi_capsule_header_t **)arg1, + *(unsigned long *)arg2, + *(unsigned long *)arg3); + break; + case QUERY_CAPSULE_CAPS: + status = efi_call_virt(query_capsule_caps, + (efi_capsule_header_t **)arg1, + *(unsigned long *)arg2, (u64 *)arg3, + (int *)arg4); + break; + default: + /* + * Ideally, we should never reach here because a caller of this + * function should have put the right efi_runtime_service() + * function identifier into efi_rts_work->efi_rts_id + */ + pr_err("Requested executing invalid EFI Runtime Service.\n"); + } + efi_rts_work->status = status; + complete(&efi_rts_work->efi_rts_comp); +} + static efi_status_t virt_efi_get_time(efi_time_t *tm, efi_time_cap_t *tc) { efi_status_t status; if (down_interruptible(&efi_runtime_lock)) return EFI_ABORTED; - status = efi_call_virt(get_time, tm, tc); + status = efi_queue_work(GET_TIME, tm, tc, NULL, NULL, NULL); up(&efi_runtime_lock); return status; } @@ -107,7 +274,7 @@ static efi_status_t virt_efi_set_time(efi_time_t *tm) if (down_interruptible(&efi_runtime_lock)) return EFI_ABORTED; - status = efi_call_virt(set_time, tm); + status = efi_queue_work(SET_TIME, tm, NULL, NULL, NULL, NULL); up(&efi_runtime_lock); return status; } @@ -120,7 +287,8 @@ static efi_status_t virt_efi_get_wakeup_time(efi_bool_t *enabled, if (down_interruptible(&efi_runtime_lock)) return EFI_ABORTED; - status = efi_call_virt(get_wakeup_time, enabled, pending, tm); + status = efi_queue_work(GET_WAKEUP_TIME, enabled, pending, tm, NULL, + NULL); up(&efi_runtime_lock); return status; } @@ -131,7 +299,8 @@ static efi_status_t virt_efi_set_wakeup_time(efi_bool_t enabled, efi_time_t *tm) if (down_interruptible(&efi_runtime_lock)) return EFI_ABORTED; - status = efi_call_virt(set_wakeup_time, enabled, tm); + status = efi_queue_work(SET_WAKEUP_TIME, &enabled, tm, NULL, NULL, + NULL); up(&efi_runtime_lock); return status; } @@ 
-146,8 +315,8 @@ static efi_status_t virt_efi_get_variable(efi_char16_t *name, if (down_interruptible(&efi_runtime_lock)) return EFI_ABORTED; - status = efi_call_virt(get_variable, name, vendor, attr, data_size, - data); + status = efi_queue_work(GET_VARIABLE, name, vendor, attr, data_size, + data); up(&efi_runtime_lock); return status; } @@ -160,7 +329,8 @@ static efi_status_t virt_efi_get_next_variable(unsigned long *name_size, if (down_interruptible(&efi_runtime_lock)) return EFI_ABORTED; - status = efi_call_virt(get_next_variable, name_size, name, vendor); + status = efi_queue_work(GET_NEXT_VARIABLE, name_size, name, vendor, + NULL, NULL); up(&efi_runtime_lock); return status; } @@ -175,8 +345,8 @@ static efi_status_t virt_efi_set_variable(efi_char16_t *name, if (down_interruptible(&efi_runtime_lock)) return EFI_ABORTED; - status = efi_call_virt(set_variable, name, vendor, attr, data_size, - data); + status = efi_queue_work(SET_VARIABLE, name, vendor, &attr, &data_size, + data); up(&efi_runtime_lock); return status; } @@ -210,8 +380,8 @@ static efi_status_t virt_efi_query_variable_info(u32 attr, if (down_interruptible(&efi_runtime_lock)) return EFI_ABORTED; - status = efi_call_virt(query_variable_info, attr, storage_space, - remaining_space, max_variable_size); + status = efi_queue_work(QUERY_VARIABLE_INFO, &attr, storage_space, + remaining_space, max_variable_size, NULL); up(&efi_runtime_lock); return status; } @@ -242,7 +412,8 @@ static efi_status_t virt_efi_get_next_high_mono_count(u32 *count) if (down_interruptible(&efi_runtime_lock)) return EFI_ABORTED; - status = efi_call_virt(get_next_high_mono_count, count); + status = efi_queue_work(GET_NEXT_HIGH_MONO_COUNT, count, NULL, NULL, + NULL, NULL); up(&efi_runtime_lock); return status; } @@ -272,7 +443,8 @@ static efi_status_t virt_efi_update_capsule(efi_capsule_header_t **capsules, if (down_interruptible(&efi_runtime_lock)) return EFI_ABORTED; - status = efi_call_virt(update_capsule, capsules, count, sg_list); + status = efi_queue_work(UPDATE_CAPSULE, capsules, &count, &sg_list, + NULL, NULL); up(&efi_runtime_lock); return status; } @@ -289,8 +461,8 @@ static efi_status_t virt_efi_query_capsule_caps(efi_capsule_header_t **capsules, if (down_interruptible(&efi_runtime_lock)) return EFI_ABORTED; - status = efi_call_virt(query_capsule_caps, capsules, count, max_size, - reset_type); + status = efi_queue_work(QUERY_CAPSULE_CAPS, capsules, &count, + max_size, reset_type, NULL); up(&efi_runtime_lock); return status; } diff --git a/drivers/fpga/altera-cvp.c b/drivers/fpga/altera-cvp.c index dd4edd8f22ce..7fa793672a7a 100644 --- a/drivers/fpga/altera-cvp.c +++ b/drivers/fpga/altera-cvp.c @@ -455,8 +455,10 @@ static int altera_cvp_probe(struct pci_dev *pdev, mgr = fpga_mgr_create(&pdev->dev, conf->mgr_name, &altera_cvp_ops, conf); - if (!mgr) - return -ENOMEM; + if (!mgr) { + ret = -ENOMEM; + goto err_unmap; + } pci_set_drvdata(pdev, mgr); diff --git a/drivers/gpio/gpio-uniphier.c b/drivers/gpio/gpio-uniphier.c index d3cf9502e7e7..58faeb1cef63 100644 --- a/drivers/gpio/gpio-uniphier.c +++ b/drivers/gpio/gpio-uniphier.c @@ -181,7 +181,11 @@ static int uniphier_gpio_to_irq(struct gpio_chip *chip, unsigned int offset) fwspec.fwnode = of_node_to_fwnode(chip->parent->of_node); fwspec.param_count = 2; fwspec.param[0] = offset - UNIPHIER_GPIO_IRQ_OFFSET; - fwspec.param[1] = IRQ_TYPE_NONE; + /* + * IRQ_TYPE_NONE is rejected by the parent irq domain. Set LEVEL_HIGH + * temporarily. Anyway, ->irq_set_type() will override it later. 
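+ * (Illustrative consumer, names invented: a driver that later does irq = gpiod_to_irq(desc); request_irq(irq, handler, IRQF_TRIGGER_FALLING, "demo", NULL); ends up in the irqchip's ->irq_set_type(), which replaces the LEVEL_HIGH placeholder set below with the trigger actually requested.)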
+ */ + fwspec.param[1] = IRQ_TYPE_LEVEL_HIGH; return irq_create_fwspec_mapping(&fwspec); } diff --git a/drivers/gpio/gpiolib-acpi.c b/drivers/gpio/gpiolib-acpi.c index e2232cbcec8b..addd9fecc198 100644 --- a/drivers/gpio/gpiolib-acpi.c +++ b/drivers/gpio/gpiolib-acpi.c @@ -25,6 +25,7 @@ struct acpi_gpio_event { struct list_head node; + struct list_head initial_sync_list; acpi_handle handle; unsigned int pin; unsigned int irq; @@ -50,6 +51,9 @@ struct acpi_gpio_chip { struct list_head events; }; +static LIST_HEAD(acpi_gpio_initial_sync_list); +static DEFINE_MUTEX(acpi_gpio_initial_sync_list_lock); + static int acpi_gpiochip_find(struct gpio_chip *gc, void *data) { if (!gc->parent) @@ -85,6 +89,21 @@ static struct gpio_desc *acpi_get_gpiod(char *path, int pin) return gpiochip_get_desc(chip, pin); } +static void acpi_gpio_add_to_initial_sync_list(struct acpi_gpio_event *event) +{ + mutex_lock(&acpi_gpio_initial_sync_list_lock); + list_add(&event->initial_sync_list, &acpi_gpio_initial_sync_list); + mutex_unlock(&acpi_gpio_initial_sync_list_lock); +} + +static void acpi_gpio_del_from_initial_sync_list(struct acpi_gpio_event *event) +{ + mutex_lock(&acpi_gpio_initial_sync_list_lock); + if (!list_empty(&event->initial_sync_list)) + list_del_init(&event->initial_sync_list); + mutex_unlock(&acpi_gpio_initial_sync_list_lock); +} + static irqreturn_t acpi_gpio_irq_handler(int irq, void *data) { struct acpi_gpio_event *event = data; @@ -136,7 +155,7 @@ static acpi_status acpi_gpiochip_request_interrupt(struct acpi_resource *ares, irq_handler_t handler = NULL; struct gpio_desc *desc; unsigned long irqflags; - int ret, pin, irq; + int ret, pin, irq, value; if (!acpi_gpio_get_irq_resource(ares, &agpio)) return AE_OK; @@ -167,6 +186,8 @@ static acpi_status acpi_gpiochip_request_interrupt(struct acpi_resource *ares, gpiod_direction_input(desc); + value = gpiod_get_value(desc); + ret = gpiochip_lock_as_irq(chip, pin); if (ret) { dev_err(chip->parent, "Failed to lock GPIO as interrupt\n"); @@ -208,6 +229,7 @@ static acpi_status acpi_gpiochip_request_interrupt(struct acpi_resource *ares, event->irq = irq; event->pin = pin; event->desc = desc; + INIT_LIST_HEAD(&event->initial_sync_list); ret = request_threaded_irq(event->irq, NULL, handler, irqflags, "ACPI:Event", event); @@ -222,6 +244,18 @@ static acpi_status acpi_gpiochip_request_interrupt(struct acpi_resource *ares, enable_irq_wake(irq); list_add_tail(&event->node, &acpi_gpio->events); + + /* + * Make sure we trigger the initial state of the IRQ when using RISING + * or FALLING. Note we run the handlers on late_init, the AML code + * may refer to OperationRegions from other (builtin) drivers which + * may be probed after us. 
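+ * + * Illustration of the check below, derived from the condition itself: with IRQF_TRIGGER_RISING and the pin already reading 1, or IRQF_TRIGGER_FALLING and the pin already reading 0, the edge of interest may already have fired before the handler was installed, so the event is queued for one synthetic handler invocation at boot; the other two combinations are left alone.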
+ */ + if (handler == acpi_gpio_irq_handler && + (((irqflags & IRQF_TRIGGER_RISING) && value == 1) || + ((irqflags & IRQF_TRIGGER_FALLING) && value == 0))) + acpi_gpio_add_to_initial_sync_list(event); + return AE_OK; fail_free_event: @@ -294,6 +328,8 @@ void acpi_gpiochip_free_interrupts(struct gpio_chip *chip) list_for_each_entry_safe_reverse(event, ep, &acpi_gpio->events, node) { struct gpio_desc *desc; + acpi_gpio_del_from_initial_sync_list(event); + if (irqd_is_wakeup_set(irq_get_irq_data(event->irq))) disable_irq_wake(event->irq); @@ -1158,3 +1194,21 @@ bool acpi_can_fallback_to_crs(struct acpi_device *adev, const char *con_id) return con_id == NULL; } + +/* Sync the initial state of handlers after all builtin drivers have probed */ +static int acpi_gpio_initial_sync(void) +{ + struct acpi_gpio_event *event, *ep; + + mutex_lock(&acpi_gpio_initial_sync_list_lock); + list_for_each_entry_safe(event, ep, &acpi_gpio_initial_sync_list, + initial_sync_list) { + acpi_evaluate_object(event->handle, NULL, NULL, NULL); + list_del_init(&event->initial_sync_list); + } + mutex_unlock(&acpi_gpio_initial_sync_list_lock); + + return 0; +} +/* We must use _sync so that this runs after the first deferred_probe run */ +late_initcall_sync(acpi_gpio_initial_sync); diff --git a/drivers/gpio/gpiolib-of.c b/drivers/gpio/gpiolib-of.c index 28d968088131..53a14ee8ad6d 100644 --- a/drivers/gpio/gpiolib-of.c +++ b/drivers/gpio/gpiolib-of.c @@ -64,7 +64,8 @@ static void of_gpio_flags_quirks(struct device_node *np, * Note that active low is the default. */ if (IS_ENABLED(CONFIG_REGULATOR) && - (of_device_is_compatible(np, "reg-fixed-voltage") || + (of_device_is_compatible(np, "regulator-fixed") || + of_device_is_compatible(np, "reg-fixed-voltage") || of_device_is_compatible(np, "regulator-gpio"))) { /* * The regulator GPIO handles are specified such that the diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h index a59c07590cee..7dcbac8af9a7 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h @@ -190,6 +190,7 @@ struct amdgpu_job; struct amdgpu_irq_src; struct amdgpu_fpriv; struct amdgpu_bo_va_mapping; +struct amdgpu_atif; enum amdgpu_cp_irq { AMDGPU_CP_IRQ_GFX_EOP = 0, @@ -1269,43 +1270,6 @@ struct amdgpu_vram_scratch { /* * ACPI */ -struct amdgpu_atif_notification_cfg { - bool enabled; - int command_code; -}; - -struct amdgpu_atif_notifications { - bool display_switch; - bool expansion_mode_change; - bool thermal_state; - bool forced_power_state; - bool system_power_state; - bool display_conf_change; - bool px_gfx_switch; - bool brightness_change; - bool dgpu_display_event; -}; - -struct amdgpu_atif_functions { - bool system_params; - bool sbios_requests; - bool select_active_disp; - bool lid_state; - bool get_tv_standard; - bool set_tv_standard; - bool get_panel_expansion_mode; - bool set_panel_expansion_mode; - bool temperature_change; - bool graphics_device_types; -}; - -struct amdgpu_atif { - struct amdgpu_atif_notifications notifications; - struct amdgpu_atif_functions functions; - struct amdgpu_atif_notification_cfg notification_cfg; - struct amdgpu_encoder *encoder_for_bl; -}; - struct amdgpu_atcs_functions { bool get_ext_state; bool pcie_perf_req; @@ -1466,7 +1430,7 @@ struct amdgpu_device { #if defined(CONFIG_DEBUG_FS) struct dentry *debugfs_regs[AMDGPU_DEBUGFS_MAX_COMPONENTS]; #endif - struct amdgpu_atif atif; + struct amdgpu_atif *atif; struct amdgpu_atcs atcs; struct mutex srbm_mutex; /* GRBM index mutex. 
Protects concurrent access to GRBM index */ @@ -1894,6 +1858,12 @@ static inline bool amdgpu_atpx_dgpu_req_power_for_displays(void) { return false; static inline bool amdgpu_has_atpx(void) { return false; } #endif +#if defined(CONFIG_VGA_SWITCHEROO) && defined(CONFIG_ACPI) +void *amdgpu_atpx_get_dhandle(void); +#else +static inline void *amdgpu_atpx_get_dhandle(void) { return NULL; } +#endif + /* * KMS */ diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_acp.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_acp.c index f4c474a95875..71efcf38f11b 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_acp.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_acp.c @@ -57,6 +57,10 @@ #define ACP_I2S_COMP2_CAP_REG_OFFSET 0xa8 #define ACP_I2S_COMP1_PLAY_REG_OFFSET 0x6c #define ACP_I2S_COMP2_PLAY_REG_OFFSET 0x68 +#define ACP_BT_PLAY_REGS_START 0x14970 +#define ACP_BT_PLAY_REGS_END 0x14a24 +#define ACP_BT_COMP1_REG_OFFSET 0xac +#define ACP_BT_COMP2_REG_OFFSET 0xa8 #define mmACP_PGFSM_RETAIN_REG 0x51c9 #define mmACP_PGFSM_CONFIG_REG 0x51ca @@ -77,7 +81,7 @@ #define ACP_SOFT_RESET_DONE_TIME_OUT_VALUE 0x000000FF #define ACP_TIMEOUT_LOOP 0x000000FF -#define ACP_DEVS 3 +#define ACP_DEVS 4 #define ACP_SRC_ID 162 enum { @@ -316,14 +320,13 @@ static int acp_hw_init(void *handle) if (adev->acp.acp_cell == NULL) return -ENOMEM; - adev->acp.acp_res = kcalloc(4, sizeof(struct resource), GFP_KERNEL); - + adev->acp.acp_res = kcalloc(5, sizeof(struct resource), GFP_KERNEL); if (adev->acp.acp_res == NULL) { kfree(adev->acp.acp_cell); return -ENOMEM; } - i2s_pdata = kcalloc(2, sizeof(struct i2s_platform_data), GFP_KERNEL); + i2s_pdata = kcalloc(3, sizeof(struct i2s_platform_data), GFP_KERNEL); if (i2s_pdata == NULL) { kfree(adev->acp.acp_res); kfree(adev->acp.acp_cell); @@ -358,6 +361,20 @@ static int acp_hw_init(void *handle) i2s_pdata[1].i2s_reg_comp1 = ACP_I2S_COMP1_CAP_REG_OFFSET; i2s_pdata[1].i2s_reg_comp2 = ACP_I2S_COMP2_CAP_REG_OFFSET; + i2s_pdata[2].quirks = DW_I2S_QUIRK_COMP_REG_OFFSET; + switch (adev->asic_type) { + case CHIP_STONEY: + i2s_pdata[2].quirks |= DW_I2S_QUIRK_16BIT_IDX_OVERRIDE; + break; + default: + break; + } + + i2s_pdata[2].cap = DWC_I2S_PLAY | DWC_I2S_RECORD; + i2s_pdata[2].snd_rates = SNDRV_PCM_RATE_8000_96000; + i2s_pdata[2].i2s_reg_comp1 = ACP_BT_COMP1_REG_OFFSET; + i2s_pdata[2].i2s_reg_comp2 = ACP_BT_COMP2_REG_OFFSET; + adev->acp.acp_res[0].name = "acp2x_dma"; adev->acp.acp_res[0].flags = IORESOURCE_MEM; adev->acp.acp_res[0].start = acp_base; @@ -373,13 +390,18 @@ static int acp_hw_init(void *handle) adev->acp.acp_res[2].start = acp_base + ACP_I2S_CAP_REGS_START; adev->acp.acp_res[2].end = acp_base + ACP_I2S_CAP_REGS_END; - adev->acp.acp_res[3].name = "acp2x_dma_irq"; - adev->acp.acp_res[3].flags = IORESOURCE_IRQ; - adev->acp.acp_res[3].start = amdgpu_irq_create_mapping(adev, 162); - adev->acp.acp_res[3].end = adev->acp.acp_res[3].start; + adev->acp.acp_res[3].name = "acp2x_dw_bt_i2s_play_cap"; + adev->acp.acp_res[3].flags = IORESOURCE_MEM; + adev->acp.acp_res[3].start = acp_base + ACP_BT_PLAY_REGS_START; + adev->acp.acp_res[3].end = acp_base + ACP_BT_PLAY_REGS_END; + + adev->acp.acp_res[4].name = "acp2x_dma_irq"; + adev->acp.acp_res[4].flags = IORESOURCE_IRQ; + adev->acp.acp_res[4].start = amdgpu_irq_create_mapping(adev, 162); + adev->acp.acp_res[4].end = adev->acp.acp_res[4].start; adev->acp.acp_cell[0].name = "acp_audio_dma"; - adev->acp.acp_cell[0].num_resources = 4; + adev->acp.acp_cell[0].num_resources = 5; adev->acp.acp_cell[0].resources = &adev->acp.acp_res[0]; adev->acp.acp_cell[0].platform_data = 
&adev->asic_type; adev->acp.acp_cell[0].pdata_size = sizeof(adev->asic_type); @@ -396,6 +418,12 @@ static int acp_hw_init(void *handle) adev->acp.acp_cell[2].platform_data = &i2s_pdata[1]; adev->acp.acp_cell[2].pdata_size = sizeof(struct i2s_platform_data); + adev->acp.acp_cell[3].name = "designware-i2s"; + adev->acp.acp_cell[3].num_resources = 1; + adev->acp.acp_cell[3].resources = &adev->acp.acp_res[3]; + adev->acp.acp_cell[3].platform_data = &i2s_pdata[2]; + adev->acp.acp_cell[3].pdata_size = sizeof(struct i2s_platform_data); + r = mfd_add_hotplug_devices(adev->acp.parent, adev->acp.acp_cell, ACP_DEVS); if (r) @@ -451,7 +479,6 @@ static int acp_hw_init(void *handle) val = cgs_read_register(adev->acp.cgs_device, mmACP_SOFT_RESET); val &= ~ACP_SOFT_RESET__SoftResetAud_MASK; cgs_write_register(adev->acp.cgs_device, mmACP_SOFT_RESET, val); - return 0; } diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c index 8fa850a070e0..0d8c3fc6eace 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c @@ -34,6 +34,45 @@ #include "amd_acpi.h" #include "atom.h" +struct amdgpu_atif_notification_cfg { + bool enabled; + int command_code; +}; + +struct amdgpu_atif_notifications { + bool display_switch; + bool expansion_mode_change; + bool thermal_state; + bool forced_power_state; + bool system_power_state; + bool display_conf_change; + bool px_gfx_switch; + bool brightness_change; + bool dgpu_display_event; +}; + +struct amdgpu_atif_functions { + bool system_params; + bool sbios_requests; + bool select_active_disp; + bool lid_state; + bool get_tv_standard; + bool set_tv_standard; + bool get_panel_expansion_mode; + bool set_panel_expansion_mode; + bool temperature_change; + bool graphics_device_types; +}; + +struct amdgpu_atif { + acpi_handle handle; + + struct amdgpu_atif_notifications notifications; + struct amdgpu_atif_functions functions; + struct amdgpu_atif_notification_cfg notification_cfg; + struct amdgpu_encoder *encoder_for_bl; +}; + /* Call the ATIF method */ /** @@ -46,8 +85,9 @@ * Executes the requested ATIF function (all asics). * Returns a pointer to the acpi output buffer. */ -static union acpi_object *amdgpu_atif_call(acpi_handle handle, int function, - struct acpi_buffer *params) +static union acpi_object *amdgpu_atif_call(struct amdgpu_atif *atif, + int function, + struct acpi_buffer *params) { acpi_status status; union acpi_object atif_arg_elements[2]; @@ -70,7 +110,8 @@ static union acpi_object *amdgpu_atif_call(acpi_handle handle, int function, atif_arg_elements[1].integer.value = 0; } - status = acpi_evaluate_object(handle, "ATIF", &atif_arg, &buffer); + status = acpi_evaluate_object(atif->handle, NULL, &atif_arg, + &buffer); /* Fail only if calling the method fails and ATIF is supported */ if (ACPI_FAILURE(status) && status != AE_NOT_FOUND) { @@ -141,15 +182,14 @@ static void amdgpu_atif_parse_functions(struct amdgpu_atif_functions *f, u32 mas * (all asics). * returns 0 on success, error on failure. 
*/ -static int amdgpu_atif_verify_interface(acpi_handle handle, - struct amdgpu_atif *atif) +static int amdgpu_atif_verify_interface(struct amdgpu_atif *atif) { union acpi_object *info; struct atif_verify_interface output; size_t size; int err = 0; - info = amdgpu_atif_call(handle, ATIF_FUNCTION_VERIFY_INTERFACE, NULL); + info = amdgpu_atif_call(atif, ATIF_FUNCTION_VERIFY_INTERFACE, NULL); if (!info) return -EIO; @@ -176,6 +216,35 @@ out: return err; } +static acpi_handle amdgpu_atif_probe_handle(acpi_handle dhandle) +{ + acpi_handle handle = NULL; + char acpi_method_name[255] = { 0 }; + struct acpi_buffer buffer = { sizeof(acpi_method_name), acpi_method_name }; + acpi_status status; + + /* For PX/HG systems, ATIF and ATPX are in the iGPU's namespace; on dGPU-only + * systems, ATIF is in the dGPU's namespace. + */ + status = acpi_get_handle(dhandle, "ATIF", &handle); + if (ACPI_SUCCESS(status)) + goto out; + + if (amdgpu_has_atpx()) { + status = acpi_get_handle(amdgpu_atpx_get_dhandle(), "ATIF", + &handle); + if (ACPI_SUCCESS(status)) + goto out; + } + + DRM_DEBUG_DRIVER("No ATIF handle found\n"); + return NULL; +out: + acpi_get_name(handle, ACPI_FULL_PATHNAME, &buffer); + DRM_DEBUG_DRIVER("Found ATIF handle %s\n", acpi_method_name); + return handle; +} + /** * amdgpu_atif_get_notification_params - determine notify configuration * @@ -188,15 +257,16 @@ out: * where n is specified in the result if a notifier is used. * Returns 0 on success, error on failure. */ -static int amdgpu_atif_get_notification_params(acpi_handle handle, - struct amdgpu_atif_notification_cfg *n) +static int amdgpu_atif_get_notification_params(struct amdgpu_atif *atif) { union acpi_object *info; + struct amdgpu_atif_notification_cfg *n = &atif->notification_cfg; struct atif_system_params params; size_t size; int err = 0; - info = amdgpu_atif_call(handle, ATIF_FUNCTION_GET_SYSTEM_PARAMETERS, NULL); + info = amdgpu_atif_call(atif, ATIF_FUNCTION_GET_SYSTEM_PARAMETERS, + NULL); if (!info) { err = -EIO; goto out; @@ -250,14 +320,15 @@ out: * (all asics). * Returns 0 on success, error on failure.
*/ -static int amdgpu_atif_get_sbios_requests(acpi_handle handle, - struct atif_sbios_requests *req) +static int amdgpu_atif_get_sbios_requests(struct amdgpu_atif *atif, + struct atif_sbios_requests *req) { union acpi_object *info; size_t size; int count = 0; - info = amdgpu_atif_call(handle, ATIF_FUNCTION_GET_SYSTEM_BIOS_REQUESTS, NULL); + info = amdgpu_atif_call(atif, ATIF_FUNCTION_GET_SYSTEM_BIOS_REQUESTS, + NULL); if (!info) return -EIO; @@ -290,11 +361,10 @@ out: * Returns NOTIFY code */ static int amdgpu_atif_handler(struct amdgpu_device *adev, - struct acpi_bus_event *event) + struct acpi_bus_event *event) { - struct amdgpu_atif *atif = &adev->atif; + struct amdgpu_atif *atif = adev->atif; struct atif_sbios_requests req; - acpi_handle handle; int count; DRM_DEBUG_DRIVER("event, device_class = %s, type = %#x\n", @@ -303,14 +373,14 @@ static int amdgpu_atif_handler(struct amdgpu_device *adev, if (strcmp(event->device_class, ACPI_VIDEO_CLASS) != 0) return NOTIFY_DONE; - if (!atif->notification_cfg.enabled || + if (!atif || + !atif->notification_cfg.enabled || event->type != atif->notification_cfg.command_code) /* Not our event */ return NOTIFY_DONE; /* Check pending SBIOS requests */ - handle = ACPI_HANDLE(&adev->pdev->dev); - count = amdgpu_atif_get_sbios_requests(handle, &req); + count = amdgpu_atif_get_sbios_requests(atif, &req); if (count <= 0) return NOTIFY_DONE; @@ -641,8 +711,8 @@ static int amdgpu_acpi_event(struct notifier_block *nb, */ int amdgpu_acpi_init(struct amdgpu_device *adev) { - acpi_handle handle; - struct amdgpu_atif *atif = &adev->atif; + acpi_handle handle, atif_handle; + struct amdgpu_atif *atif; struct amdgpu_atcs *atcs = &adev->atcs; int ret; @@ -658,12 +728,26 @@ int amdgpu_acpi_init(struct amdgpu_device *adev) DRM_DEBUG_DRIVER("Call to ATCS verify_interface failed: %d\n", ret); } + /* Probe for ATIF, and initialize it if found */ + atif_handle = amdgpu_atif_probe_handle(handle); + if (!atif_handle) + goto out; + + atif = kzalloc(sizeof(*atif), GFP_KERNEL); + if (!atif) { + DRM_WARN("Not enough memory to initialize ATIF\n"); + goto out; + } + atif->handle = atif_handle; + /* Call the ATIF method */ - ret = amdgpu_atif_verify_interface(handle, atif); + ret = amdgpu_atif_verify_interface(atif); if (ret) { DRM_DEBUG_DRIVER("Call to ATIF verify_interface failed: %d\n", ret); + kfree(atif); goto out; } + adev->atif = atif; if (atif->notifications.brightness_change) { struct drm_encoder *tmp; @@ -693,8 +777,7 @@ int amdgpu_acpi_init(struct amdgpu_device *adev) } if (atif->functions.system_params) { - ret = amdgpu_atif_get_notification_params(handle, - &atif->notification_cfg); + ret = amdgpu_atif_get_notification_params(atif); if (ret) { DRM_DEBUG_DRIVER("Call to GET_SYSTEM_PARAMS failed: %d\n", ret); @@ -720,4 +803,6 @@ out: void amdgpu_acpi_fini(struct amdgpu_device *adev) { unregister_acpi_notifier(&adev->acpi_nb); + if (adev->atif) + kfree(adev->atif); } diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c index 8f6f45567bfa..305143fcc1ce 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c @@ -342,15 +342,12 @@ void get_local_mem_info(struct kgd_dev *kgd, mem_info->local_mem_size_public, mem_info->local_mem_size_private); - if (amdgpu_emu_mode == 1) { - mem_info->mem_clk_max = 100; - return; - } - if (amdgpu_sriov_vf(adev)) mem_info->mem_clk_max = adev->clock.default_mclk / 100; - else + else if (adev->powerplay.pp_funcs) mem_info->mem_clk_max = 
amdgpu_dpm_get_mclk(adev, false) / 100; + else + mem_info->mem_clk_max = 100; } uint64_t get_gpu_clock_counter(struct kgd_dev *kgd) @@ -367,13 +364,12 @@ uint32_t get_max_engine_clock_in_mhz(struct kgd_dev *kgd) struct amdgpu_device *adev = (struct amdgpu_device *)kgd; /* the sclk is in quantas of 10kHz */ - if (amdgpu_emu_mode == 1) - return 100; - if (amdgpu_sriov_vf(adev)) return adev->clock.default_sclk / 100; - - return amdgpu_dpm_get_sclk(adev, false) / 100; + else if (adev->powerplay.pp_funcs) + return amdgpu_dpm_get_sclk(adev, false) / 100; + else + return 100; } void get_cu_info(struct kgd_dev *kgd, struct kfd_cu_info *cu_info) diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_atpx_handler.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_atpx_handler.c index 1bcb2b247335..ca8bf1c9a98e 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_atpx_handler.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_atpx_handler.c @@ -90,6 +90,12 @@ bool amdgpu_atpx_dgpu_req_power_for_displays(void) { return amdgpu_atpx_priv.atpx.dgpu_req_power_for_displays; } +#if defined(CONFIG_ACPI) +void *amdgpu_atpx_get_dhandle(void) { + return amdgpu_atpx_priv.dhandle; +} +#endif + /** * amdgpu_atpx_call - call an ATPX method * @@ -569,7 +575,7 @@ static const struct amdgpu_px_quirk amdgpu_px_quirk_list[] = { { 0x1002, 0x6900, 0x1002, 0x0124, AMDGPU_PX_QUIRK_FORCE_ATPX }, { 0x1002, 0x6900, 0x1028, 0x0812, AMDGPU_PX_QUIRK_FORCE_ATPX }, { 0x1002, 0x6900, 0x1028, 0x0813, AMDGPU_PX_QUIRK_FORCE_ATPX }, - { 0x1002, 0x67DF, 0x1028, 0x0774, AMDGPU_PX_QUIRK_FORCE_ATPX }, + { 0x1002, 0x6900, 0x1025, 0x125A, AMDGPU_PX_QUIRK_FORCE_ATPX }, { 0, 0, 0, 0, 0 }, }; diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c index 9c1d491d742e..9c85a90be293 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c @@ -522,6 +522,9 @@ static int amdgpu_cs_parser_bos(struct amdgpu_cs_parser *p, struct amdgpu_bo_list_entry *e; struct list_head duplicates; unsigned i, tries = 10; + struct amdgpu_bo *gds; + struct amdgpu_bo *gws; + struct amdgpu_bo *oa; int r; INIT_LIST_HEAD(&p->validated); @@ -652,31 +655,36 @@ static int amdgpu_cs_parser_bos(struct amdgpu_cs_parser *p, amdgpu_cs_report_moved_bytes(p->adev, p->bytes_moved, p->bytes_moved_vis); + if (p->bo_list) { - struct amdgpu_bo *gds = p->bo_list->gds_obj; - struct amdgpu_bo *gws = p->bo_list->gws_obj; - struct amdgpu_bo *oa = p->bo_list->oa_obj; struct amdgpu_vm *vm = &fpriv->vm; unsigned i; + gds = p->bo_list->gds_obj; + gws = p->bo_list->gws_obj; + oa = p->bo_list->oa_obj; for (i = 0; i < p->bo_list->num_entries; i++) { struct amdgpu_bo *bo = p->bo_list->array[i].robj; p->bo_list->array[i].bo_va = amdgpu_vm_bo_find(vm, bo); } + } else { + gds = p->adev->gds.gds_gfx_bo; + gws = p->adev->gds.gws_gfx_bo; + oa = p->adev->gds.oa_gfx_bo; + } - if (gds) { - p->job->gds_base = amdgpu_bo_gpu_offset(gds); - p->job->gds_size = amdgpu_bo_size(gds); - } - if (gws) { - p->job->gws_base = amdgpu_bo_gpu_offset(gws); - p->job->gws_size = amdgpu_bo_size(gws); - } - if (oa) { - p->job->oa_base = amdgpu_bo_gpu_offset(oa); - p->job->oa_size = amdgpu_bo_size(oa); - } + if (gds) { + p->job->gds_base = amdgpu_bo_gpu_offset(gds); + p->job->gds_size = amdgpu_bo_size(gds); + } + if (gws) { + p->job->gws_base = amdgpu_bo_gpu_offset(gws); + p->job->gws_size = amdgpu_bo_size(gws); + } + if (oa) { + p->job->oa_base = amdgpu_bo_gpu_offset(oa); + p->job->oa_size = amdgpu_bo_size(oa); } if (!r && p->uf_entry.robj) { @@ -919,6 +927,10 @@ static int 
amdgpu_cs_ib_vm_chunk(struct amdgpu_device *adev, r = amdgpu_bo_vm_update_pte(p); if (r) return r; + + r = reservation_object_reserve_shared(vm->root.base.bo->tbo.resv); + if (r) + return r; } return amdgpu_cs_sync_rings(p); diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c index 290e279abf0d..2c5f093e79e3 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c @@ -1730,6 +1730,18 @@ static int amdgpu_device_ip_late_set_cg_state(struct amdgpu_device *adev) } } } + + if (adev->powerplay.pp_feature & PP_GFXOFF_MASK) { + /* enable gfx powergating */ + amdgpu_device_ip_set_powergating_state(adev, + AMD_IP_BLOCK_TYPE_GFX, + AMD_PG_STATE_GATE); + /* enable gfxoff */ + amdgpu_device_ip_set_powergating_state(adev, + AMD_IP_BLOCK_TYPE_SMC, + AMD_PG_STATE_GATE); + } + return 0; } @@ -2146,10 +2158,18 @@ bool amdgpu_device_asic_has_dc_support(enum amd_asic_type asic_type) switch (asic_type) { #if defined(CONFIG_DRM_AMD_DC) case CHIP_BONAIRE: - case CHIP_HAWAII: case CHIP_KAVERI: case CHIP_KABINI: case CHIP_MULLINS: + /* + * We have systems in the wild with these ASICs that require + * LVDS and VGA support which is not supported with DC. + * + * Fallback to the non-DC driver here by default so as not to + * cause regressions. + */ + return amdgpu_dc > 0; + case CHIP_HAWAII: case CHIP_CARRIZO: case CHIP_STONEY: case CHIP_POLARIS10: @@ -2727,6 +2747,9 @@ int amdgpu_device_resume(struct drm_device *dev, bool resume, bool fbcon) if (r) return r; + /* Make sure IB tests flushed */ + flush_delayed_work(&adev->late_init_work); + /* blat the mode back in */ if (fbcon) { if (!amdgpu_device_has_dc_support(adev)) { diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c index 39ec6b8890a1..e74d620d9699 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c @@ -376,7 +376,7 @@ int amdgpu_fence_driver_start_ring(struct amdgpu_ring *ring, struct amdgpu_device *adev = ring->adev; uint64_t index; - if (ring != &adev->uvd.inst[ring->me].ring) { + if (ring->funcs->type != AMDGPU_RING_TYPE_UVD) { ring->fence_drv.cpu_addr = &adev->wb.wb[ring->fence_offs]; ring->fence_drv.gpu_addr = adev->wb.gpu_addr + (ring->fence_offs * 4); } else { diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c index 2c8e27370284..5fb156a01774 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c @@ -30,6 +30,7 @@ #include #include #include "amdgpu.h" +#include "amdgpu_display.h" void amdgpu_gem_object_free(struct drm_gem_object *gobj) { @@ -235,6 +236,13 @@ int amdgpu_gem_create_ioctl(struct drm_device *dev, void *data, /* create a gem object to contain this object in */ if (args->in.domains & (AMDGPU_GEM_DOMAIN_GDS | AMDGPU_GEM_DOMAIN_GWS | AMDGPU_GEM_DOMAIN_OA)) { + if (flags & AMDGPU_GEM_CREATE_VM_ALWAYS_VALID) { + /* if gds bo is created from user space, it must be + * passed to bo list + */ + DRM_ERROR("GDS bo cannot be per-vm-bo\n"); + return -EINVAL; + } flags |= AMDGPU_GEM_CREATE_NO_CPU_ACCESS; if (args->in.domains == AMDGPU_GEM_DOMAIN_GDS) size = size << AMDGPU_GDS_SHIFT; @@ -749,15 +757,16 @@ int amdgpu_mode_dumb_create(struct drm_file *file_priv, struct amdgpu_device *adev = dev->dev_private; struct drm_gem_object *gobj; uint32_t handle; + u32 domain; int r; args->pitch = amdgpu_align_pitch(adev, args->width, DIV_ROUND_UP(args->bpp, 8), 0); args->size = 
(u64)args->pitch * args->height; args->size = ALIGN(args->size, PAGE_SIZE); - - r = amdgpu_gem_object_create(adev, args->size, 0, - AMDGPU_GEM_DOMAIN_VRAM, + domain = amdgpu_bo_get_preferred_pin_domain(adev, + amdgpu_display_supported_domains(adev)); + r = amdgpu_gem_object_create(adev, args->size, 0, domain, AMDGPU_GEM_CREATE_CPU_ACCESS_REQUIRED, false, NULL, &gobj); if (r) diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c index f70eeed9ed76..7aaa263ad8c7 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c @@ -231,6 +231,12 @@ int amdgpu_ib_schedule(struct amdgpu_ring *ring, unsigned num_ibs, if (ib->flags & AMDGPU_IB_FLAG_TC_WB_NOT_INVALIDATE) fence_flags |= AMDGPU_FENCE_FLAG_TC_WB_ONLY; + /* wrap the last IB with fence */ + if (job && job->uf_addr) { + amdgpu_ring_emit_fence(ring, job->uf_addr, job->uf_sequence, + fence_flags | AMDGPU_FENCE_FLAG_64BIT); + } + r = amdgpu_fence_emit(ring, f, fence_flags); if (r) { dev_err(adev->dev, "failed to emit fence (%d)\n", r); @@ -243,12 +249,6 @@ int amdgpu_ib_schedule(struct amdgpu_ring *ring, unsigned num_ibs, if (ring->funcs->insert_end) ring->funcs->insert_end(ring); - /* wrap the last IB with fence */ - if (job && job->uf_addr) { - amdgpu_ring_emit_fence(ring, job->uf_addr, job->uf_sequence, - fence_flags | AMDGPU_FENCE_FLAG_64BIT); - } - if (patch_offset != ~0 && ring->funcs->patch_cond_exec) amdgpu_ring_patch_cond_exec(ring, patch_offset); diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c index 6a9e46ae7f0a..3526efa8960e 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c @@ -703,11 +703,7 @@ int amdgpu_bo_pin_restricted(struct amdgpu_bo *bo, u32 domain, /* This assumes only APU display buffers are pinned with (VRAM|GTT). 
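* Worked example, derived from amdgpu_bo_get_preferred_pin_domain() further down: a scanout BO created with VRAM|GTT on an APU whose real_vram_size is at or below AMDGPU_SG_THRESHOLD gets pinned to GTT (scatter/gather scanout from system memory); on a larger carve-out it gets pinned to VRAM.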
* See function amdgpu_display_supported_domains() */ - if (domain == (AMDGPU_GEM_DOMAIN_VRAM | AMDGPU_GEM_DOMAIN_GTT)) { - domain = AMDGPU_GEM_DOMAIN_VRAM; - if (adev->gmc.real_vram_size <= AMDGPU_SG_THRESHOLD) - domain = AMDGPU_GEM_DOMAIN_GTT; - } + domain = amdgpu_bo_get_preferred_pin_domain(adev, domain); if (bo->pin_count) { uint32_t mem_type = bo->tbo.mem.mem_type; @@ -766,8 +762,7 @@ int amdgpu_bo_pin_restricted(struct amdgpu_bo *bo, u32 domain, domain = amdgpu_mem_type_to_domain(bo->tbo.mem.mem_type); if (domain == AMDGPU_GEM_DOMAIN_VRAM) { adev->vram_pin_size += amdgpu_bo_size(bo); - if (bo->flags & AMDGPU_GEM_CREATE_NO_CPU_ACCESS) - adev->invisible_pin_size += amdgpu_bo_size(bo); + adev->invisible_pin_size += amdgpu_vram_mgr_bo_invisible_size(bo); } else if (domain == AMDGPU_GEM_DOMAIN_GTT) { adev->gart_pin_size += amdgpu_bo_size(bo); } @@ -794,25 +789,22 @@ int amdgpu_bo_unpin(struct amdgpu_bo *bo) bo->pin_count--; if (bo->pin_count) return 0; - for (i = 0; i < bo->placement.num_placement; i++) { - bo->placements[i].lpfn = 0; - bo->placements[i].flags &= ~TTM_PL_FLAG_NO_EVICT; - } - r = ttm_bo_validate(&bo->tbo, &bo->placement, &ctx); - if (unlikely(r)) { - dev_err(adev->dev, "%p validate failed for unpin\n", bo); - goto error; - } if (bo->tbo.mem.mem_type == TTM_PL_VRAM) { adev->vram_pin_size -= amdgpu_bo_size(bo); - if (bo->flags & AMDGPU_GEM_CREATE_NO_CPU_ACCESS) - adev->invisible_pin_size -= amdgpu_bo_size(bo); + adev->invisible_pin_size -= amdgpu_vram_mgr_bo_invisible_size(bo); } else if (bo->tbo.mem.mem_type == TTM_PL_TT) { adev->gart_pin_size -= amdgpu_bo_size(bo); } -error: + for (i = 0; i < bo->placement.num_placement; i++) { + bo->placements[i].lpfn = 0; + bo->placements[i].flags &= ~TTM_PL_FLAG_NO_EVICT; + } + r = ttm_bo_validate(&bo->tbo, &bo->placement, &ctx); + if (unlikely(r)) + dev_err(adev->dev, "%p validate failed for unpin\n", bo); + return r; } @@ -1066,3 +1058,14 @@ u64 amdgpu_bo_gpu_offset(struct amdgpu_bo *bo) return bo->tbo.offset; } + +uint32_t amdgpu_bo_get_preferred_pin_domain(struct amdgpu_device *adev, + uint32_t domain) +{ + if (domain == (AMDGPU_GEM_DOMAIN_VRAM | AMDGPU_GEM_DOMAIN_GTT)) { + domain = AMDGPU_GEM_DOMAIN_VRAM; + if (adev->gmc.real_vram_size <= AMDGPU_SG_THRESHOLD) + domain = AMDGPU_GEM_DOMAIN_GTT; + } + return domain; +} diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h index 540e03fa159f..731748033878 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h @@ -289,7 +289,8 @@ int amdgpu_bo_restore_from_shadow(struct amdgpu_device *adev, struct reservation_object *resv, struct dma_fence **fence, bool direct); - +uint32_t amdgpu_bo_get_preferred_pin_domain(struct amdgpu_device *adev, + uint32_t domain); /* * sub allocation diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c index b455da487782..fc818b4d849c 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c @@ -1882,7 +1882,7 @@ void amdgpu_pm_compute_clocks(struct amdgpu_device *adev) if (!amdgpu_device_has_dc_support(adev)) { mutex_lock(&adev->pm.mutex); amdgpu_dpm_get_active_displays(adev); - adev->pm.pm_display_cfg.num_display = adev->pm.dpm.new_active_crtcs; + adev->pm.pm_display_cfg.num_display = adev->pm.dpm.new_active_crtc_count; adev->pm.pm_display_cfg.vrefresh = amdgpu_dpm_get_vrefresh(adev); adev->pm.pm_display_cfg.min_vblank_time = amdgpu_dpm_get_vblank_time(adev); /* we have issues with mclk 
switching with refresh rates over 120 hz on the non-DC code. */ diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h index e969c879d87e..e5da4654b630 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h @@ -73,6 +73,7 @@ bool amdgpu_gtt_mgr_has_gart_addr(struct ttm_mem_reg *mem); uint64_t amdgpu_gtt_mgr_usage(struct ttm_mem_type_manager *man); int amdgpu_gtt_mgr_recover(struct ttm_mem_type_manager *man); +u64 amdgpu_vram_mgr_bo_invisible_size(struct amdgpu_bo *bo); uint64_t amdgpu_vram_mgr_usage(struct ttm_mem_type_manager *man); uint64_t amdgpu_vram_mgr_vis_usage(struct ttm_mem_type_manager *man); diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c index bcf68f80bbf0..3ff08e326838 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c @@ -130,7 +130,7 @@ int amdgpu_uvd_sw_init(struct amdgpu_device *adev) unsigned version_major, version_minor, family_id; int i, j, r; - INIT_DELAYED_WORK(&adev->uvd.inst->idle_work, amdgpu_uvd_idle_work_handler); + INIT_DELAYED_WORK(&adev->uvd.idle_work, amdgpu_uvd_idle_work_handler); switch (adev->asic_type) { #ifdef CONFIG_DRM_AMDGPU_CIK @@ -314,12 +314,12 @@ int amdgpu_uvd_suspend(struct amdgpu_device *adev) void *ptr; int i, j; + cancel_delayed_work_sync(&adev->uvd.idle_work); + for (j = 0; j < adev->uvd.num_uvd_inst; ++j) { if (adev->uvd.inst[j].vcpu_bo == NULL) continue; - cancel_delayed_work_sync(&adev->uvd.inst[j].idle_work); - /* only valid for physical mode */ if (adev->asic_type < CHIP_POLARIS10) { for (i = 0; i < adev->uvd.max_handles; ++i) @@ -1145,7 +1145,7 @@ int amdgpu_uvd_get_destroy_msg(struct amdgpu_ring *ring, uint32_t handle, static void amdgpu_uvd_idle_work_handler(struct work_struct *work) { struct amdgpu_device *adev = - container_of(work, struct amdgpu_device, uvd.inst->idle_work.work); + container_of(work, struct amdgpu_device, uvd.idle_work.work); unsigned fences = 0, i, j; for (i = 0; i < adev->uvd.num_uvd_inst; ++i) { @@ -1167,7 +1167,7 @@ static void amdgpu_uvd_idle_work_handler(struct work_struct *work) AMD_CG_STATE_GATE); } } else { - schedule_delayed_work(&adev->uvd.inst->idle_work, UVD_IDLE_TIMEOUT); + schedule_delayed_work(&adev->uvd.idle_work, UVD_IDLE_TIMEOUT); } } @@ -1179,7 +1179,7 @@ void amdgpu_uvd_ring_begin_use(struct amdgpu_ring *ring) if (amdgpu_sriov_vf(adev)) return; - set_clocks = !cancel_delayed_work_sync(&adev->uvd.inst->idle_work); + set_clocks = !cancel_delayed_work_sync(&adev->uvd.idle_work); if (set_clocks) { if (adev->pm.dpm_enabled) { amdgpu_dpm_enable_uvd(adev, true); @@ -1196,7 +1196,7 @@ void amdgpu_uvd_ring_begin_use(struct amdgpu_ring *ring) void amdgpu_uvd_ring_end_use(struct amdgpu_ring *ring) { if (!amdgpu_sriov_vf(ring->adev)) - schedule_delayed_work(&ring->adev->uvd.inst->idle_work, UVD_IDLE_TIMEOUT); + schedule_delayed_work(&ring->adev->uvd.idle_work, UVD_IDLE_TIMEOUT); } /** diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.h index b1579fba134c..8b23a1b00c76 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.h +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.h @@ -44,7 +44,6 @@ struct amdgpu_uvd_inst { void *saved_bo; atomic_t handles[AMDGPU_MAX_UVD_HANDLES]; struct drm_file *filp[AMDGPU_MAX_UVD_HANDLES]; - struct delayed_work idle_work; struct amdgpu_ring ring; struct amdgpu_ring ring_enc[AMDGPU_MAX_UVD_ENC_RINGS]; struct amdgpu_irq_src irq; @@ -62,6 +61,7 @@ struct amdgpu_uvd { 
bool address_64_bit; bool use_ctx_buf; struct amdgpu_uvd_inst inst[AMDGPU_MAX_UVD_INSTANCES]; + struct delayed_work idle_work; }; int amdgpu_uvd_sw_init(struct amdgpu_device *adev); diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c index 8851bcdfc260..1b4ad9b2a755 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c @@ -49,12 +49,10 @@ static void amdgpu_vcn_idle_work_handler(struct work_struct *work); int amdgpu_vcn_sw_init(struct amdgpu_device *adev) { - struct amdgpu_ring *ring; - struct drm_sched_rq *rq; unsigned long bo_size; const char *fw_name; const struct common_firmware_header *hdr; - unsigned version_major, version_minor, family_id; + unsigned char fw_check; int r; INIT_DELAYED_WORK(&adev->vcn.idle_work, amdgpu_vcn_idle_work_handler); @@ -84,12 +82,34 @@ int amdgpu_vcn_sw_init(struct amdgpu_device *adev) } hdr = (const struct common_firmware_header *)adev->vcn.fw->data; - family_id = le32_to_cpu(hdr->ucode_version) & 0xff; - version_major = (le32_to_cpu(hdr->ucode_version) >> 24) & 0xff; - version_minor = (le32_to_cpu(hdr->ucode_version) >> 8) & 0xff; - DRM_INFO("Found VCN firmware Version: %hu.%hu Family ID: %hu\n", - version_major, version_minor, family_id); + adev->vcn.fw_version = le32_to_cpu(hdr->ucode_version); + + /* Bits 20-23 hold the encode major version and are non-zero for the new + * naming convention. This field was part of version minor and + * DRM_DISABLED_FLAG in the old naming convention. Since the latest version + * minor is 0x5B and DRM_DISABLED_FLAG is zero in the old naming convention, + * this field is always zero so far. + * These four bits are used to tell which naming convention is present. + */ + fw_check = (le32_to_cpu(hdr->ucode_version) >> 20) & 0xf; + if (fw_check) { + unsigned int dec_ver, enc_major, enc_minor, vep, fw_rev; + + fw_rev = le32_to_cpu(hdr->ucode_version) & 0xfff; + enc_minor = (le32_to_cpu(hdr->ucode_version) >> 12) & 0xff; + enc_major = fw_check; + dec_ver = (le32_to_cpu(hdr->ucode_version) >> 24) & 0xf; + vep = (le32_to_cpu(hdr->ucode_version) >> 28) & 0xf; + DRM_INFO("Found VCN firmware Version ENC: %hu.%hu DEC: %hu VEP: %hu Revision: %hu\n", + enc_major, enc_minor, dec_ver, vep, fw_rev); + } else { + unsigned int version_major, version_minor, family_id; + family_id = le32_to_cpu(hdr->ucode_version) & 0xff; + version_major = (le32_to_cpu(hdr->ucode_version) >> 24) & 0xff; + version_minor = (le32_to_cpu(hdr->ucode_version) >> 8) & 0xff; + DRM_INFO("Found VCN firmware Version: %hu.%hu Family ID: %hu\n", + version_major, version_minor, family_id); + } bo_size = AMDGPU_GPU_PAGE_ALIGN(le32_to_cpu(hdr->ucode_size_bytes) + 8) + AMDGPU_VCN_STACK_SIZE + AMDGPU_VCN_HEAP_SIZE @@ -102,24 +122,6 @@ int amdgpu_vcn_sw_init(struct amdgpu_device *adev) return r; } - ring = &adev->vcn.ring_dec; - rq = &ring->sched.sched_rq[DRM_SCHED_PRIORITY_NORMAL]; - r = drm_sched_entity_init(&ring->sched, &adev->vcn.entity_dec, - rq, NULL); - if (r != 0) { - DRM_ERROR("Failed setting up VCN dec run queue.\n"); - return r; - } - - ring = &adev->vcn.ring_enc[0]; - rq = &ring->sched.sched_rq[DRM_SCHED_PRIORITY_NORMAL]; - r = drm_sched_entity_init(&ring->sched, &adev->vcn.entity_enc, - rq, NULL); - if (r != 0) { - DRM_ERROR("Failed setting up VCN enc run queue.\n"); - return r; - } - return 0; } @@ -129,10 +131,6 @@ int amdgpu_vcn_sw_fini(struct amdgpu_device *adev) { kfree(adev->vcn.saved_bo); - drm_sched_entity_fini(&adev->vcn.ring_dec.sched, &adev->vcn.entity_dec); -
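/*
 * Bit layout implied by the amdgpu_vcn_sw_init() parsing above (new
 * naming convention; widths taken from the shifts and masks, labels
 * from the local variable names):
 *
 *	bits  0-11  fw_rev
 *	bits 12-19  enc_minor
 *	bits 20-23  enc_major (non-zero selects this layout)
 *	bits 24-27  dec_ver
 *	bits 28-31  vep
 *
 * Old convention: family_id in bits 0-7, version_minor in bits 8-15,
 * version_major in bits 24-31.
 */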
drm_sched_entity_fini(&adev->vcn.ring_enc[0].sched, &adev->vcn.entity_enc); - amdgpu_bo_free_kernel(&adev->vcn.vcpu_bo, &adev->vcn.gpu_addr, (void **)&adev->vcn.cpu_addr); @@ -278,7 +276,7 @@ int amdgpu_vcn_dec_ring_test_ring(struct amdgpu_ring *ring) } static int amdgpu_vcn_dec_send_msg(struct amdgpu_ring *ring, - struct amdgpu_bo *bo, bool direct, + struct amdgpu_bo *bo, struct dma_fence **fence) { struct amdgpu_device *adev = ring->adev; @@ -306,19 +304,12 @@ static int amdgpu_vcn_dec_send_msg(struct amdgpu_ring *ring, } ib->length_dw = 16; - if (direct) { - r = amdgpu_ib_schedule(ring, 1, ib, NULL, &f); - job->fence = dma_fence_get(f); - if (r) - goto err_free; + r = amdgpu_ib_schedule(ring, 1, ib, NULL, &f); + job->fence = dma_fence_get(f); + if (r) + goto err_free; - amdgpu_job_free(job); - } else { - r = amdgpu_job_submit(job, ring, &adev->vcn.entity_dec, - AMDGPU_FENCE_OWNER_UNDEFINED, &f); - if (r) - goto err_free; - } + amdgpu_job_free(job); amdgpu_bo_fence(bo, f, false); amdgpu_bo_unreserve(bo); @@ -370,11 +361,11 @@ static int amdgpu_vcn_dec_get_create_msg(struct amdgpu_ring *ring, uint32_t hand for (i = 14; i < 1024; ++i) msg[i] = cpu_to_le32(0x0); - return amdgpu_vcn_dec_send_msg(ring, bo, true, fence); + return amdgpu_vcn_dec_send_msg(ring, bo, fence); } static int amdgpu_vcn_dec_get_destroy_msg(struct amdgpu_ring *ring, uint32_t handle, - bool direct, struct dma_fence **fence) + struct dma_fence **fence) { struct amdgpu_device *adev = ring->adev; struct amdgpu_bo *bo = NULL; @@ -396,7 +387,7 @@ static int amdgpu_vcn_dec_get_destroy_msg(struct amdgpu_ring *ring, uint32_t han for (i = 6; i < 1024; ++i) msg[i] = cpu_to_le32(0x0); - return amdgpu_vcn_dec_send_msg(ring, bo, direct, fence); + return amdgpu_vcn_dec_send_msg(ring, bo, fence); } int amdgpu_vcn_dec_ring_test_ib(struct amdgpu_ring *ring, long timeout) @@ -410,7 +401,7 @@ int amdgpu_vcn_dec_ring_test_ib(struct amdgpu_ring *ring, long timeout) goto error; } - r = amdgpu_vcn_dec_get_destroy_msg(ring, 1, true, &fence); + r = amdgpu_vcn_dec_get_destroy_msg(ring, 1, &fence); if (r) { DRM_ERROR("amdgpu: failed to get destroy ib (%ld).\n", r); goto error; diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.h index 181e6afa9847..773010b9ff15 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.h +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.h @@ -67,8 +67,6 @@ struct amdgpu_vcn { struct amdgpu_ring ring_dec; struct amdgpu_ring ring_enc[AMDGPU_VCN_MAX_ENC_RINGS]; struct amdgpu_irq_src irq; - struct drm_sched_entity entity_dec; - struct drm_sched_entity entity_enc; unsigned num_enc_rings; }; diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c index ccba88cc8c54..fdcb498f6d19 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c @@ -107,6 +107,9 @@ static void amdgpu_vm_bo_base_init(struct amdgpu_vm_bo_base *base, return; list_add_tail(&base->bo_list, &bo->va); + if (bo->tbo.type == ttm_bo_type_kernel) + list_move(&base->vm_status, &vm->relocated); + if (bo->tbo.resv != vm->root.base.bo->tbo.resv) return; @@ -468,7 +471,6 @@ static int amdgpu_vm_alloc_levels(struct amdgpu_device *adev, pt->parent = amdgpu_bo_ref(parent->base.bo); amdgpu_vm_bo_base_init(&entry->base, vm, pt); - list_move(&entry->base.vm_status, &vm->relocated); } if (level < AMDGPU_VM_PTB) { @@ -1463,7 +1465,9 @@ static int amdgpu_vm_bo_split_mapping(struct amdgpu_device *adev, uint64_t count; max_entries = min(max_entries, 16ull * 1024ull); - 
for (count = 1; count < max_entries; ++count) { + for (count = 1; + count < max_entries / (PAGE_SIZE / AMDGPU_GPU_PAGE_SIZE); + ++count) { uint64_t idx = pfn + count; if (pages_addr[idx] != @@ -1476,7 +1480,7 @@ static int amdgpu_vm_bo_split_mapping(struct amdgpu_device *adev, dma_addr = pages_addr; } else { addr = pages_addr[pfn]; - max_entries = count; + max_entries = count * (PAGE_SIZE / AMDGPU_GPU_PAGE_SIZE); } } else if (flags & AMDGPU_PTE_VALID) { @@ -1491,7 +1495,7 @@ static int amdgpu_vm_bo_split_mapping(struct amdgpu_device *adev, if (r) return r; - pfn += last - start + 1; + pfn += (last - start + 1) / (PAGE_SIZE / AMDGPU_GPU_PAGE_SIZE); if (nodes && nodes->size == pfn) { pfn = 0; ++nodes; @@ -2123,7 +2127,8 @@ int amdgpu_vm_bo_clear_mappings(struct amdgpu_device *adev, before->last = saddr - 1; before->offset = tmp->offset; before->flags = tmp->flags; - list_add(&before->list, &tmp->list); + before->bo_va = tmp->bo_va; + list_add(&before->list, &tmp->bo_va->invalids); } /* Remember mapping split at the end */ @@ -2133,7 +2138,8 @@ int amdgpu_vm_bo_clear_mappings(struct amdgpu_device *adev, after->offset = tmp->offset; after->offset += after->start - tmp->start; after->flags = tmp->flags; - list_add(&after->list, &tmp->list); + after->bo_va = tmp->bo_va; + list_add(&after->list, &tmp->bo_va->invalids); } list_del(&tmp->list); diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c index 9aca653bec07..b6333f92ba45 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c @@ -96,6 +96,38 @@ static u64 amdgpu_vram_mgr_vis_size(struct amdgpu_device *adev, adev->gmc.visible_vram_size : end) - start; } +/** + * amdgpu_vram_mgr_bo_invisible_size - CPU invisible BO size + * + * @bo: &amdgpu_bo buffer object (must be in VRAM) + * + * Returns: + * How much of the given &amdgpu_bo buffer object lies in CPU invisible VRAM. + */ +u64 amdgpu_vram_mgr_bo_invisible_size(struct amdgpu_bo *bo) +{ + struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev); + struct ttm_mem_reg *mem = &bo->tbo.mem; + struct drm_mm_node *nodes = mem->mm_node; + unsigned pages = mem->num_pages; + u64 usage = 0; + + if (adev->gmc.visible_vram_size == adev->gmc.real_vram_size) + return 0; + + if (mem->start >= adev->gmc.visible_vram_size >> PAGE_SHIFT) + return amdgpu_bo_size(bo); + + while (nodes && pages) { + usage += nodes->size << PAGE_SHIFT; + usage -= amdgpu_vram_mgr_vis_size(adev, nodes); + pages -= nodes->size; + ++nodes; + } + + return usage; +} + /** * amdgpu_vram_mgr_new - allocate new ranges * @@ -135,7 +167,8 @@ static int amdgpu_vram_mgr_new(struct ttm_mem_type_manager *man, num_nodes = DIV_ROUND_UP(mem->num_pages, pages_per_node); } - nodes = kcalloc(num_nodes, sizeof(*nodes), GFP_KERNEL); + nodes = kvmalloc_array(num_nodes, sizeof(*nodes), + GFP_KERNEL | __GFP_ZERO); if (!nodes) return -ENOMEM; @@ -190,7 +223,7 @@ error: drm_mm_remove_node(&nodes[i]); spin_unlock(&mgr->lock); - kfree(nodes); + kvfree(nodes); return r == -ENOSPC ? 
0 : r; } @@ -229,7 +262,7 @@ static void amdgpu_vram_mgr_del(struct ttm_mem_type_manager *man, atomic64_sub(usage, &mgr->usage); atomic64_sub(vis_usage, &mgr->vis_usage); - kfree(mem->mm_node); + kvfree(mem->mm_node); mem->mm_node = NULL; } diff --git a/drivers/gpu/drm/amd/amdgpu/df_v3_6.c b/drivers/gpu/drm/amd/amdgpu/df_v3_6.c index 60608b3df881..d5ebe566809b 100644 --- a/drivers/gpu/drm/amd/amdgpu/df_v3_6.c +++ b/drivers/gpu/drm/amd/amdgpu/df_v3_6.c @@ -64,7 +64,7 @@ static u32 df_v3_6_get_hbm_channel_number(struct amdgpu_device *adev) int fb_channel_number; fb_channel_number = adev->df_funcs->get_fb_channel_number(adev); - if (fb_channel_number > ARRAY_SIZE(df_v3_6_channel_number)) + if (fb_channel_number >= ARRAY_SIZE(df_v3_6_channel_number)) fb_channel_number = 0; return df_v3_6_channel_number[fb_channel_number]; diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c index d7530fdfaad5..a69153435ea7 100644 --- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c +++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c @@ -111,6 +111,7 @@ static const struct soc15_reg_golden golden_settings_gc_9_0_vg10[] = static const struct soc15_reg_golden golden_settings_gc_9_0_vg20[] = { + SOC15_REG_GOLDEN_VALUE(GC, 0, mmCB_DCC_CONFIG, 0x0f000080, 0x04000080), SOC15_REG_GOLDEN_VALUE(GC, 0, mmCB_HW_CONTROL_2, 0x0f000000, 0x0a000000), SOC15_REG_GOLDEN_VALUE(GC, 0, mmCB_HW_CONTROL_3, 0x30000000, 0x10000000), SOC15_REG_GOLDEN_VALUE(GC, 0, mmGB_ADDR_CONFIG, 0xf3e777ff, 0x22014042), @@ -1837,13 +1838,15 @@ static void gfx_v9_1_parse_ind_reg_list(int *register_list_format, int indirect_offset, int list_size, int *unique_indirect_regs, - int *unique_indirect_reg_count, + int unique_indirect_reg_count, int *indirect_start_offsets, - int *indirect_start_offsets_count) + int *indirect_start_offsets_count, + int max_start_offsets_count) { int idx; for (; indirect_offset < list_size; indirect_offset++) { + WARN_ON(*indirect_start_offsets_count >= max_start_offsets_count); indirect_start_offsets[*indirect_start_offsets_count] = indirect_offset; *indirect_start_offsets_count = *indirect_start_offsets_count + 1; @@ -1851,14 +1854,14 @@ static void gfx_v9_1_parse_ind_reg_list(int *register_list_format, indirect_offset += 2; /* look for the matching indice */ - for (idx = 0; idx < *unique_indirect_reg_count; idx++) { + for (idx = 0; idx < unique_indirect_reg_count; idx++) { if (unique_indirect_regs[idx] == register_list_format[indirect_offset] || !unique_indirect_regs[idx]) break; } - BUG_ON(idx >= *unique_indirect_reg_count); + BUG_ON(idx >= unique_indirect_reg_count); if (!unique_indirect_regs[idx]) unique_indirect_regs[idx] = register_list_format[indirect_offset]; @@ -1893,9 +1896,10 @@ static int gfx_v9_1_init_rlc_save_restore_list(struct amdgpu_device *adev) adev->gfx.rlc.reg_list_format_direct_reg_list_length, adev->gfx.rlc.reg_list_format_size_bytes >> 2, unique_indirect_regs, - &unique_indirect_reg_count, + unique_indirect_reg_count, indirect_start_offsets, - &indirect_start_offsets_count); + &indirect_start_offsets_count, + ARRAY_SIZE(indirect_start_offsets)); /* enable auto inc in case it is disabled */ tmp = RREG32(SOC15_REG_OFFSET(GC, 0, mmRLC_SRM_CNTL)); @@ -3404,11 +3408,6 @@ static int gfx_v9_0_late_init(void *handle) if (r) return r; - r = amdgpu_device_ip_set_powergating_state(adev, AMD_IP_BLOCK_TYPE_GFX, - AMD_PG_STATE_GATE); - if (r) - return r; - return 0; } diff --git a/drivers/gpu/drm/amd/amdgpu/psp_v3_1.c b/drivers/gpu/drm/amd/amdgpu/psp_v3_1.c index 0c768e388ace..727071fee6f6 
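/* [Editor's note] Background on the kcalloc -> kvmalloc_array switch in
 * amdgpu_vram_mgr_new()/amdgpu_vram_mgr_del() above; editorial, not part of
 * the patch. A large BO carved into pages_per_node chunks can need a
 * drm_mm_node array bigger than the page allocator comfortably serves, and
 * kvmalloc_array() falls back to vmalloc for such sizes. Allocation and
 * free must then stay paired on the kv* helpers:
 *
 *	nodes = kvmalloc_array(num_nodes, sizeof(*nodes),
 *			       GFP_KERNEL | __GFP_ZERO);
 *	...
 *	kvfree(nodes);
 *
 * kvfree() handles both kmalloc- and vmalloc-backed pointers, so the same
 * cleanup path works for either outcome.
 */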
100644
--- a/drivers/gpu/drm/amd/amdgpu/psp_v3_1.c
+++ b/drivers/gpu/drm/amd/amdgpu/psp_v3_1.c
@@ -47,6 +47,8 @@ MODULE_FIRMWARE("amdgpu/vega20_asd.bin");

 #define smnMP1_FIRMWARE_FLAGS 0x3010028

+static uint32_t sos_old_versions[] = {1517616, 1510592, 1448594, 1446554};
+
 static int psp_v3_1_get_fw_type(struct amdgpu_firmware_info *ucode,
 				enum psp_gfx_fw_type *type)
 {
@@ -210,12 +212,31 @@ static int psp_v3_1_bootloader_load_sysdrv(struct psp_context *psp)
 	return ret;
 }

+static bool psp_v3_1_match_version(struct amdgpu_device *adev, uint32_t ver)
+{
+	int i;
+
+	if (ver == adev->psp.sos_fw_version)
+		return true;
+
+	/*
+	 * Double check against the latest four legacy versions.
+	 * If one matches, it is still the right version.
+	 */
+	for (i = 0; i < sizeof(sos_old_versions) / sizeof(uint32_t); i++) {
+		if (sos_old_versions[i] == adev->psp.sos_fw_version)
+			return true;
+	}
+
+	return false;
+}
+
 static int psp_v3_1_bootloader_load_sos(struct psp_context *psp)
 {
 	int ret;
 	unsigned int psp_gfxdrv_command_reg = 0;
 	struct amdgpu_device *adev = psp->adev;
-	uint32_t sol_reg;
+	uint32_t sol_reg, ver;

 	/* Check sOS sign of life register to confirm sys driver and sOS
 	 * are already been loaded.
@@ -248,6 +269,10 @@ static int psp_v3_1_bootloader_load_sos(struct psp_context *psp)
 			   RREG32_SOC15(MP0, 0, mmMP0_SMN_C2PMSG_81),
 			   0, true);

+	ver = RREG32_SOC15(MP0, 0, mmMP0_SMN_C2PMSG_58);
+	if (!psp_v3_1_match_version(adev, ver))
+		DRM_WARN("SOS version doesn't match\n");
+
 	return ret;
 }

diff --git a/drivers/gpu/drm/amd/amdgpu/soc15.c b/drivers/gpu/drm/amd/amdgpu/soc15.c
index 68b4a22a8892..83f2717fcf81 100644
--- a/drivers/gpu/drm/amd/amdgpu/soc15.c
+++ b/drivers/gpu/drm/amd/amdgpu/soc15.c
@@ -685,6 +685,7 @@ static int soc15_common_early_init(void *handle)
 			AMD_CG_SUPPORT_BIF_MGCG |
 			AMD_CG_SUPPORT_BIF_LS |
 			AMD_CG_SUPPORT_HDP_MGCG |
+			AMD_CG_SUPPORT_HDP_LS |
 			AMD_CG_SUPPORT_ROM_MGCG |
 			AMD_CG_SUPPORT_VCE_MGCG |
 			AMD_CG_SUPPORT_UVD_MGCG;
diff --git a/drivers/gpu/drm/amd/amdgpu/vce_v3_0.c b/drivers/gpu/drm/amd/amdgpu/vce_v3_0.c
index 0999c843f623..a71b97519cc0 100644
--- a/drivers/gpu/drm/amd/amdgpu/vce_v3_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/vce_v3_0.c
@@ -900,7 +900,7 @@ static const struct amdgpu_ring_funcs vce_v3_0_ring_phys_funcs = {
 	.emit_frame_size =
 		4 + /* vce_v3_0_emit_pipeline_sync */
 		6, /* amdgpu_vce_ring_emit_fence x1 no user fence */
-	.emit_ib_size = 5, /* vce_v3_0_ring_emit_ib */
+	.emit_ib_size = 4, /* amdgpu_vce_ring_emit_ib */
 	.emit_ib = amdgpu_vce_ring_emit_ib,
 	.emit_fence = amdgpu_vce_ring_emit_fence,
 	.test_ring = amdgpu_vce_ring_test_ring,
@@ -924,7 +924,7 @@ static const struct amdgpu_ring_funcs vce_v3_0_ring_vm_funcs = {
 		6 + /* vce_v3_0_emit_vm_flush */
 		4 + /* vce_v3_0_emit_pipeline_sync */
 		6 + 6, /* amdgpu_vce_ring_emit_fence x2 vm fence */
-	.emit_ib_size = 4, /* amdgpu_vce_ring_emit_ib */
+	.emit_ib_size = 5, /* vce_v3_0_ring_emit_ib */
 	.emit_ib = vce_v3_0_ring_emit_ib,
 	.emit_vm_flush = vce_v3_0_emit_vm_flush,
 	.emit_pipeline_sync = vce_v3_0_emit_pipeline_sync,
diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c b/drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c
index 110b294ebed3..29684c3ea4ef 100644
--- a/drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c
@@ -769,14 +769,14 @@ static int vcn_v1_0_stop(struct amdgpu_device *adev)
 	return 0;
 }

-bool vcn_v1_0_is_idle(void *handle)
+static bool vcn_v1_0_is_idle(void *handle)
 {
 	struct amdgpu_device *adev = (struct amdgpu_device *)handle;

 	return (RREG32_SOC15(VCN, 0, mmUVD_STATUS) == 0x2);
 }

-int vcn_v1_0_wait_for_idle(void *handle)
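/* [Editor's note] Aside on psp_v3_1_match_version() above, not part of the
 * patch: the element count is spelled sizeof(sos_old_versions) /
 * sizeof(uint32_t), which works but repeats the element type. The usual
 * kernel idiom is ARRAY_SIZE() from <linux/kernel.h>, which derives the
 * count from the array itself and cannot silently break if the element
 * type changes:
 *
 *	for (i = 0; i < ARRAY_SIZE(sos_old_versions); i++)
 *		if (sos_old_versions[i] == adev->psp.sos_fw_version)
 *			return true;
 */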
+static int vcn_v1_0_wait_for_idle(void *handle) { struct amdgpu_device *adev = (struct amdgpu_device *)handle; int ret = 0; diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c index f9b9ab90558c..770c6b24be0b 100644 --- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c @@ -46,6 +46,7 @@ #include #include #include +#include #include #include @@ -2095,12 +2096,6 @@ convert_color_depth_from_display_info(const struct drm_connector *connector) { uint32_t bpc = connector->display_info.bpc; - /* Limited color depth to 8bit - * TODO: Still need to handle deep color - */ - if (bpc > 8) - bpc = 8; - switch (bpc) { case 0: /* Temporary Work around, DRM don't parse color depth for @@ -2180,6 +2175,46 @@ get_output_color_space(const struct dc_crtc_timing *dc_crtc_timing) return color_space; } +static void reduce_mode_colour_depth(struct dc_crtc_timing *timing_out) +{ + if (timing_out->display_color_depth <= COLOR_DEPTH_888) + return; + + timing_out->display_color_depth--; +} + +static void adjust_colour_depth_from_display_info(struct dc_crtc_timing *timing_out, + const struct drm_display_info *info) +{ + int normalized_clk; + if (timing_out->display_color_depth <= COLOR_DEPTH_888) + return; + do { + normalized_clk = timing_out->pix_clk_khz; + /* YCbCr 4:2:0 requires additional adjustment of 1/2 */ + if (timing_out->pixel_encoding == PIXEL_ENCODING_YCBCR420) + normalized_clk /= 2; + /* Adjusting pix clock following on HDMI spec based on colour depth */ + switch (timing_out->display_color_depth) { + case COLOR_DEPTH_101010: + normalized_clk = (normalized_clk * 30) / 24; + break; + case COLOR_DEPTH_121212: + normalized_clk = (normalized_clk * 36) / 24; + break; + case COLOR_DEPTH_161616: + normalized_clk = (normalized_clk * 48) / 24; + break; + default: + return; + } + if (normalized_clk <= info->max_tmds_clock) + return; + reduce_mode_colour_depth(timing_out); + + } while (timing_out->display_color_depth > COLOR_DEPTH_888); + +} /*****************************************************************************/ static void @@ -2188,6 +2223,7 @@ fill_stream_properties_from_drm_display_mode(struct dc_stream_state *stream, const struct drm_connector *connector) { struct dc_crtc_timing *timing_out = &stream->timing; + const struct drm_display_info *info = &connector->display_info; memset(timing_out, 0, sizeof(struct dc_crtc_timing)); @@ -2196,8 +2232,10 @@ fill_stream_properties_from_drm_display_mode(struct dc_stream_state *stream, timing_out->v_border_top = 0; timing_out->v_border_bottom = 0; /* TODO: un-hardcode */ - - if ((connector->display_info.color_formats & DRM_COLOR_FORMAT_YCRCB444) + if (drm_mode_is_420_only(info, mode_in) + && stream->sink->sink_signal == SIGNAL_TYPE_HDMI_TYPE_A) + timing_out->pixel_encoding = PIXEL_ENCODING_YCBCR420; + else if ((connector->display_info.color_formats & DRM_COLOR_FORMAT_YCRCB444) && stream->sink->sink_signal == SIGNAL_TYPE_HDMI_TYPE_A) timing_out->pixel_encoding = PIXEL_ENCODING_YCBCR444; else @@ -2233,6 +2271,8 @@ fill_stream_properties_from_drm_display_mode(struct dc_stream_state *stream, stream->out_transfer_func->type = TF_TYPE_PREDEFINED; stream->out_transfer_func->tf = TRANSFER_FUNCTION_SRGB; + if (stream->sink->sink_signal == SIGNAL_TYPE_HDMI_TYPE_A) + adjust_colour_depth_from_display_info(timing_out, info); } static void fill_audio_info(struct audio_info *audio_info, @@ -2316,27 +2356,22 @@ decide_crtc_timing_for_drm_display_mode(struct 
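/* [Editor's note] The TMDS-clock loop in
 * adjust_colour_depth_from_display_info() above, worked through with an
 * assumed sink (the numbers are illustrative, not from the patch):
 * 1080p60 (pix_clk_khz = 148500) on an HDMI sink whose max_tmds_clock is
 * 165000 kHz, starting at 12 bpc RGB:
 *
 *	12 bpc: 148500 * 36 / 24 = 222750 kHz  > 165000 -> step down
 *	10 bpc: 148500 * 30 / 24 = 185625 kHz  > 165000 -> step down
 *	 8 bpc: COLOR_DEPTH_888 reached, loop exits; mode runs at 8 bpc
 *
 * A YCbCr 4:2:0 mode would first halve the normalized clock, so the same
 * sink could keep a deeper colour depth.
 */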
drm_display_mode *drm_mode, } } -static int create_fake_sink(struct amdgpu_dm_connector *aconnector) +static struct dc_sink * +create_fake_sink(struct amdgpu_dm_connector *aconnector) { - struct dc_sink *sink = NULL; struct dc_sink_init_data sink_init_data = { 0 }; - + struct dc_sink *sink = NULL; sink_init_data.link = aconnector->dc_link; sink_init_data.sink_signal = aconnector->dc_link->connector_signal; sink = dc_sink_create(&sink_init_data); if (!sink) { DRM_ERROR("Failed to create sink!\n"); - return -ENOMEM; + return NULL; } - sink->sink_signal = SIGNAL_TYPE_VIRTUAL; - aconnector->fake_enable = true; - - aconnector->dc_sink = sink; - aconnector->dc_link->local_sink = sink; - return 0; + return sink; } static void set_multisync_trigger_params( @@ -2399,7 +2434,7 @@ create_stream_for_sink(struct amdgpu_dm_connector *aconnector, struct dc_stream_state *stream = NULL; struct drm_display_mode mode = *drm_mode; bool native_mode_found = false; - + struct dc_sink *sink = NULL; if (aconnector == NULL) { DRM_ERROR("aconnector is NULL!\n"); return stream; @@ -2417,15 +2452,18 @@ create_stream_for_sink(struct amdgpu_dm_connector *aconnector, return stream; } - if (create_fake_sink(aconnector)) + sink = create_fake_sink(aconnector); + if (!sink) return stream; + } else { + sink = aconnector->dc_sink; } - stream = dc_create_stream_for_sink(aconnector->dc_sink); + stream = dc_create_stream_for_sink(sink); if (stream == NULL) { DRM_ERROR("Failed to create stream for sink!\n"); - return stream; + goto finish; } list_for_each_entry(preferred_mode, &aconnector->base.modes, head) { @@ -2464,12 +2502,15 @@ create_stream_for_sink(struct amdgpu_dm_connector *aconnector, fill_audio_info( &stream->audio_info, drm_connector, - aconnector->dc_sink); + sink); update_stream_signal(stream); if (dm_state && dm_state->freesync_capable) stream->ignore_msa_timing_param = true; +finish: + if (sink && sink->sink_signal == SIGNAL_TYPE_VIRTUAL) + dc_sink_release(sink); return stream; } @@ -2714,6 +2755,9 @@ void amdgpu_dm_connector_funcs_reset(struct drm_connector *connector) struct dm_connector_state *state = to_dm_connector_state(connector->state); + if (connector->state) + __drm_atomic_helper_connector_destroy_state(connector->state); + kfree(state); state = kzalloc(sizeof(*state), GFP_KERNEL); @@ -2724,8 +2768,7 @@ void amdgpu_dm_connector_funcs_reset(struct drm_connector *connector) state->underscan_hborder = 0; state->underscan_vborder = 0; - connector->state = &state->base; - connector->state->connector = connector; + __drm_atomic_helper_connector_reset(connector, &state->base); } } @@ -3083,17 +3126,6 @@ static int dm_plane_helper_prepare_fb(struct drm_plane *plane, } } - /* It's a hack for s3 since in 4.9 kernel filter out cursor buffer - * prepare and cleanup in drm_atomic_helper_prepare_planes - * and drm_atomic_helper_cleanup_planes because fb doens't in s3. - * IN 4.10 kernel this code should be removed and amdgpu_device_suspend - * code touching fram buffers should be avoided for DC. 
- */ - if (plane->type == DRM_PLANE_TYPE_CURSOR) { - struct amdgpu_crtc *acrtc = to_amdgpu_crtc(new_state->crtc); - - acrtc->cursor_bo = obj; - } return 0; } @@ -3941,10 +3973,11 @@ static void amdgpu_dm_do_flip(struct drm_crtc *crtc, if (acrtc->base.state->event) prepare_flip_isr(acrtc); + spin_unlock_irqrestore(&crtc->dev->event_lock, flags); + surface_updates->surface = dc_stream_get_status(acrtc_state->stream)->plane_states[0]; surface_updates->flip_addr = &addr; - dc_commit_updates_for_stream(adev->dm.dc, surface_updates, 1, @@ -3957,9 +3990,6 @@ static void amdgpu_dm_do_flip(struct drm_crtc *crtc, __func__, addr.address.grph.addr.high_part, addr.address.grph.addr.low_part); - - - spin_unlock_irqrestore(&crtc->dev->event_lock, flags); } /* @@ -4219,6 +4249,7 @@ static void amdgpu_dm_atomic_commit_tail(struct drm_atomic_state *state) struct drm_connector *connector; struct drm_connector_state *old_con_state, *new_con_state; struct dm_crtc_state *dm_old_crtc_state, *dm_new_crtc_state; + int crtc_disable_count = 0; drm_atomic_helper_update_legacy_modeset_state(dev, state); @@ -4281,6 +4312,8 @@ static void amdgpu_dm_atomic_commit_tail(struct drm_atomic_state *state) if (dm_old_crtc_state->stream) remove_stream(adev, acrtc, dm_old_crtc_state->stream); + pm_runtime_get_noresume(dev->dev); + acrtc->enabled = true; acrtc->hw_mode = new_crtc_state->mode; crtc->hwmode = new_crtc_state->mode; @@ -4421,6 +4454,9 @@ static void amdgpu_dm_atomic_commit_tail(struct drm_atomic_state *state) struct amdgpu_crtc *acrtc = to_amdgpu_crtc(crtc); bool modeset_needed; + if (old_crtc_state->active && !new_crtc_state->active) + crtc_disable_count++; + dm_new_crtc_state = to_dm_crtc_state(new_crtc_state); dm_old_crtc_state = to_dm_crtc_state(old_crtc_state); modeset_needed = modeset_required( @@ -4469,6 +4505,14 @@ static void amdgpu_dm_atomic_commit_tail(struct drm_atomic_state *state) drm_atomic_helper_wait_for_flip_done(dev, state); drm_atomic_helper_cleanup_planes(dev, state); + + /* Finally, drop a runtime PM reference for each newly disabled CRTC, + * so we can put the GPU into runtime suspend if we're not driving any + * displays anymore + */ + for (i = 0; i < crtc_disable_count; i++) + pm_runtime_put_autosuspend(dev->dev); + pm_runtime_mark_last_busy(dev->dev); } diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_irq.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_irq.c index 4be21bf54749..a910f01838ab 100644 --- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_irq.c +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_irq.c @@ -555,6 +555,9 @@ static inline int dm_irq_state(struct amdgpu_device *adev, return 0; } + if (acrtc->otg_inst == -1) + return 0; + irq_source = dal_irq_type + acrtc->otg_inst; st = (state == AMDGPU_IRQ_STATE_ENABLE); diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c index 4304d9e408b8..ace9ad578ca0 100644 --- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c @@ -83,22 +83,21 @@ static ssize_t dm_dp_aux_transfer(struct drm_dp_aux *aux, enum i2c_mot_mode mot = (msg->request & DP_AUX_I2C_MOT) ? 
I2C_MOT_TRUE : I2C_MOT_FALSE; enum ddc_result res; - uint32_t read_bytes = msg->size; + ssize_t read_bytes; if (WARN_ON(msg->size > 16)) return -E2BIG; switch (msg->request & ~DP_AUX_I2C_MOT) { case DP_AUX_NATIVE_READ: - res = dal_ddc_service_read_dpcd_data( + read_bytes = dal_ddc_service_read_dpcd_data( TO_DM_AUX(aux)->ddc_service, false, I2C_MOT_UNDEF, msg->address, msg->buffer, - msg->size, - &read_bytes); - break; + msg->size); + return read_bytes; case DP_AUX_NATIVE_WRITE: res = dal_ddc_service_write_dpcd_data( TO_DM_AUX(aux)->ddc_service, @@ -109,15 +108,14 @@ static ssize_t dm_dp_aux_transfer(struct drm_dp_aux *aux, msg->size); break; case DP_AUX_I2C_READ: - res = dal_ddc_service_read_dpcd_data( + read_bytes = dal_ddc_service_read_dpcd_data( TO_DM_AUX(aux)->ddc_service, true, mot, msg->address, msg->buffer, - msg->size, - &read_bytes); - break; + msg->size); + return read_bytes; case DP_AUX_I2C_WRITE: res = dal_ddc_service_write_dpcd_data( TO_DM_AUX(aux)->ddc_service, @@ -139,9 +137,7 @@ static ssize_t dm_dp_aux_transfer(struct drm_dp_aux *aux, r == DDC_RESULT_SUCESSFULL); #endif - if (res != DDC_RESULT_SUCESSFULL) - return -EIO; - return read_bytes; + return msg->size; } static enum drm_connector_status diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_services.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_services.c index 0229c7edb8ad..5a2e952c5bea 100644 --- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_services.c +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_services.c @@ -234,6 +234,34 @@ static void pp_to_dc_clock_levels( } } +static void pp_to_dc_clock_levels_with_latency( + const struct pp_clock_levels_with_latency *pp_clks, + struct dm_pp_clock_levels_with_latency *clk_level_info, + enum dm_pp_clock_type dc_clk_type) +{ + uint32_t i; + + if (pp_clks->num_levels > DM_PP_MAX_CLOCK_LEVELS) { + DRM_INFO("DM_PPLIB: Warning: %s clock: number of levels %d exceeds maximum of %d!\n", + DC_DECODE_PP_CLOCK_TYPE(dc_clk_type), + pp_clks->num_levels, + DM_PP_MAX_CLOCK_LEVELS); + + clk_level_info->num_levels = DM_PP_MAX_CLOCK_LEVELS; + } else + clk_level_info->num_levels = pp_clks->num_levels; + + DRM_DEBUG("DM_PPLIB: values for %s clock\n", + DC_DECODE_PP_CLOCK_TYPE(dc_clk_type)); + + for (i = 0; i < clk_level_info->num_levels; i++) { + DRM_DEBUG("DM_PPLIB:\t %d in 10kHz\n", pp_clks->data[i].clocks_in_khz); + /* translate 10kHz to kHz */ + clk_level_info->data[i].clocks_in_khz = pp_clks->data[i].clocks_in_khz * 10; + clk_level_info->data[i].latency_in_us = pp_clks->data[i].latency_in_us; + } +} + bool dm_pp_get_clock_levels_by_type( const struct dc_context *ctx, enum dm_pp_clock_type clk_type, @@ -311,8 +339,22 @@ bool dm_pp_get_clock_levels_by_type_with_latency( enum dm_pp_clock_type clk_type, struct dm_pp_clock_levels_with_latency *clk_level_info) { - /* TODO: to be implemented */ - return false; + struct amdgpu_device *adev = ctx->driver_context; + void *pp_handle = adev->powerplay.pp_handle; + struct pp_clock_levels_with_latency pp_clks = { 0 }; + const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs; + + if (!pp_funcs || !pp_funcs->get_clock_by_type_with_latency) + return false; + + if (pp_funcs->get_clock_by_type_with_latency(pp_handle, + dc_to_pp_clock_type(clk_type), + &pp_clks)) + return false; + + pp_to_dc_clock_levels_with_latency(&pp_clks, clk_level_info, clk_type); + + return true; } bool dm_pp_get_clock_levels_by_type_with_voltage( diff --git a/drivers/gpu/drm/amd/display/dc/basics/fixpt31_32.c 
b/drivers/gpu/drm/amd/display/dc/basics/fixpt31_32.c index e61dd97d0928..f28989860fd8 100644 --- a/drivers/gpu/drm/amd/display/dc/basics/fixpt31_32.c +++ b/drivers/gpu/drm/amd/display/dc/basics/fixpt31_32.c @@ -449,6 +449,11 @@ static inline unsigned int clamp_ux_dy( return min_clamp; } +unsigned int dc_fixpt_u3d19(struct fixed31_32 arg) +{ + return ux_dy(arg.value, 3, 19); +} + unsigned int dc_fixpt_u2d19(struct fixed31_32 arg) { return ux_dy(arg.value, 2, 19); diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link_ddc.c b/drivers/gpu/drm/amd/display/dc/core/dc_link_ddc.c index ae48d603ebd6..49c2face1e7a 100644 --- a/drivers/gpu/drm/amd/display/dc/core/dc_link_ddc.c +++ b/drivers/gpu/drm/amd/display/dc/core/dc_link_ddc.c @@ -629,14 +629,13 @@ bool dal_ddc_service_query_ddc_data( return ret; } -enum ddc_result dal_ddc_service_read_dpcd_data( +ssize_t dal_ddc_service_read_dpcd_data( struct ddc_service *ddc, bool i2c, enum i2c_mot_mode mot, uint32_t address, uint8_t *data, - uint32_t len, - uint32_t *read) + uint32_t len) { struct aux_payload read_payload = { .i2c_over_aux = i2c, @@ -653,8 +652,6 @@ enum ddc_result dal_ddc_service_read_dpcd_data( .mot = mot }; - *read = 0; - if (len > DEFAULT_AUX_MAX_DATA_SIZE) { BREAK_TO_DEBUGGER(); return DDC_RESULT_FAILED_INVALID_OPERATION; @@ -664,8 +661,7 @@ enum ddc_result dal_ddc_service_read_dpcd_data( ddc->ctx->i2caux, ddc->ddc_pin, &command)) { - *read = command.payloads->length; - return DDC_RESULT_SUCESSFULL; + return (ssize_t)command.payloads->length; } return DDC_RESULT_FAILED_OPERATION; diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c b/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c index 7d609c71394b..bdd121485cbc 100644 --- a/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c +++ b/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c @@ -1630,17 +1630,42 @@ static enum dc_status read_hpd_rx_irq_data( struct dc_link *link, union hpd_irq_data *irq_data) { + static enum dc_status retval; + /* The HW reads 16 bytes from 200h on HPD, * but if we get an AUX_DEFER, the HW cannot retry * and this causes the CTS tests 4.3.2.1 - 3.2.4 to * fail, so we now explicitly read 6 bytes which is * the req from the above mentioned test cases. + * + * For DP 1.4 we need to read those from 2002h range. */ - return core_link_read_dpcd( - link, - DP_SINK_COUNT, - irq_data->raw, - sizeof(union hpd_irq_data)); + if (link->dpcd_caps.dpcd_rev.raw < DPCD_REV_14) + retval = core_link_read_dpcd( + link, + DP_SINK_COUNT, + irq_data->raw, + sizeof(union hpd_irq_data)); + else { + /* Read 2 bytes at this location,... */ + retval = core_link_read_dpcd( + link, + DP_SINK_COUNT_ESI, + irq_data->raw, + 2); + + if (retval != DC_OK) + return retval; + + /* ... 
then read remaining 4 at the other location */ + retval = core_link_read_dpcd( + link, + DP_LANE0_1_STATUS_ESI, + &irq_data->raw[2], + 4); + } + + return retval; } static bool allow_hpd_rx_irq(const struct dc_link *link) @@ -1742,12 +1767,10 @@ static void dp_test_send_link_training(struct dc_link *link) dp_retrain_link_dp_test(link, &link_settings, false); } -/* TODO hbr2 compliance eye output is unstable +/* TODO Raven hbr2 compliance eye output is unstable * (toggling on and off) with debugger break * This caueses intermittent PHY automation failure * Need to look into the root cause */ -static uint8_t force_tps4_for_cp2520 = 1; - static void dp_test_send_phy_test_pattern(struct dc_link *link) { union phy_test_pattern dpcd_test_pattern; @@ -1807,13 +1830,13 @@ static void dp_test_send_phy_test_pattern(struct dc_link *link) break; case PHY_TEST_PATTERN_CP2520_1: /* CP2520 pattern is unstable, temporarily use TPS4 instead */ - test_pattern = (force_tps4_for_cp2520 == 1) ? + test_pattern = (link->dc->caps.force_dp_tps4_for_cp2520 == 1) ? DP_TEST_PATTERN_TRAINING_PATTERN4 : DP_TEST_PATTERN_HBR2_COMPLIANCE_EYE; break; case PHY_TEST_PATTERN_CP2520_2: /* CP2520 pattern is unstable, temporarily use TPS4 instead */ - test_pattern = (force_tps4_for_cp2520 == 1) ? + test_pattern = (link->dc->caps.force_dp_tps4_for_cp2520 == 1) ? DP_TEST_PATTERN_TRAINING_PATTERN4 : DP_TEST_PATTERN_HBR2_COMPLIANCE_EYE; break; @@ -2278,7 +2301,7 @@ static void dp_wa_power_up_0010FA(struct dc_link *link, uint8_t *dpcd_data, static bool retrieve_link_cap(struct dc_link *link) { - uint8_t dpcd_data[DP_TRAINING_AUX_RD_INTERVAL - DP_DPCD_REV + 1]; + uint8_t dpcd_data[DP_ADAPTER_CAP - DP_DPCD_REV + 1]; union down_stream_port_count down_strm_port_count; union edp_configuration_cap edp_config_cap; diff --git a/drivers/gpu/drm/amd/display/dc/dc.h b/drivers/gpu/drm/amd/display/dc/dc.h index 9cfde0ccf4e9..53c71296f3dd 100644 --- a/drivers/gpu/drm/amd/display/dc/dc.h +++ b/drivers/gpu/drm/amd/display/dc/dc.h @@ -76,6 +76,7 @@ struct dc_caps { bool is_apu; bool dual_link_dvi; bool post_blend_color_processing; + bool force_dp_tps4_for_cp2520; }; struct dc_dcc_surface_param { diff --git a/drivers/gpu/drm/amd/display/dc/dce/dce_mem_input.c b/drivers/gpu/drm/amd/display/dc/dce/dce_mem_input.c index b235a75355b8..bae752332a9f 100644 --- a/drivers/gpu/drm/amd/display/dc/dce/dce_mem_input.c +++ b/drivers/gpu/drm/amd/display/dc/dce/dce_mem_input.c @@ -741,6 +741,29 @@ static struct mem_input_funcs dce_mi_funcs = { .mem_input_is_flip_pending = dce_mi_is_flip_pending }; +static struct mem_input_funcs dce112_mi_funcs = { + .mem_input_program_display_marks = dce112_mi_program_display_marks, + .allocate_mem_input = dce_mi_allocate_dmif, + .free_mem_input = dce_mi_free_dmif, + .mem_input_program_surface_flip_and_addr = + dce_mi_program_surface_flip_and_addr, + .mem_input_program_pte_vm = dce_mi_program_pte_vm, + .mem_input_program_surface_config = + dce_mi_program_surface_config, + .mem_input_is_flip_pending = dce_mi_is_flip_pending +}; + +static struct mem_input_funcs dce120_mi_funcs = { + .mem_input_program_display_marks = dce120_mi_program_display_marks, + .allocate_mem_input = dce_mi_allocate_dmif, + .free_mem_input = dce_mi_free_dmif, + .mem_input_program_surface_flip_and_addr = + dce_mi_program_surface_flip_and_addr, + .mem_input_program_pte_vm = dce_mi_program_pte_vm, + .mem_input_program_surface_config = + dce_mi_program_surface_config, + .mem_input_is_flip_pending = dce_mi_is_flip_pending +}; void dce_mem_input_construct( struct 
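/* [Editor's note] For reference, the DPCD addresses behind the two-read
 * sequence in read_hpd_rx_irq_data() above. The names and values are the
 * standard DRM DPCD defines, quoted as background rather than introduced
 * by this patch:
 *
 *	DP_SINK_COUNT				0x200	pre-DP-1.4 block (6 bytes)
 *	DP_SINK_COUNT_ESI			0x2002	DP 1.4: byte 0
 *	DP_DEVICE_SERVICE_IRQ_VECTOR_ESI0	0x2003	DP 1.4: byte 1
 *	DP_LANE0_1_STATUS_ESI			0x200c	DP 1.4: bytes 2..5
 *
 * The second 4-byte read thus lands lane 0/1 status, lane 2/3 status,
 * align status and sink status into irq_data->raw[2..5], mirroring the
 * single 6-byte layout that the pre-1.4 path reads from 0x200.
 */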
dce_mem_input *dce_mi, @@ -769,7 +792,7 @@ void dce112_mem_input_construct( const struct dce_mem_input_mask *mi_mask) { dce_mem_input_construct(dce_mi, ctx, inst, regs, mi_shift, mi_mask); - dce_mi->base.funcs->mem_input_program_display_marks = dce112_mi_program_display_marks; + dce_mi->base.funcs = &dce112_mi_funcs; } void dce120_mem_input_construct( @@ -781,5 +804,5 @@ void dce120_mem_input_construct( const struct dce_mem_input_mask *mi_mask) { dce_mem_input_construct(dce_mi, ctx, inst, regs, mi_shift, mi_mask); - dce_mi->base.funcs->mem_input_program_display_marks = dce120_mi_program_display_marks; + dce_mi->base.funcs = &dce120_mi_funcs; } diff --git a/drivers/gpu/drm/amd/display/dc/dce/dce_stream_encoder.c b/drivers/gpu/drm/amd/display/dc/dce/dce_stream_encoder.c index 0a6d483dc046..c0e813c7ddd4 100644 --- a/drivers/gpu/drm/amd/display/dc/dce/dce_stream_encoder.c +++ b/drivers/gpu/drm/amd/display/dc/dce/dce_stream_encoder.c @@ -72,7 +72,8 @@ static void dce110_update_generic_info_packet( uint32_t max_retries = 50; /*we need turn on clock before programming AFMT block*/ - REG_UPDATE(AFMT_CNTL, AFMT_AUDIO_CLOCK_EN, 1); + if (REG(AFMT_CNTL)) + REG_UPDATE(AFMT_CNTL, AFMT_AUDIO_CLOCK_EN, 1); if (REG(AFMT_VBI_PACKET_CONTROL1)) { if (packet_index >= 8) @@ -719,7 +720,8 @@ static void dce110_stream_encoder_update_hdmi_info_packets( const uint32_t *content = (const uint32_t *) &info_frame->avi.sb[0]; /*we need turn on clock before programming AFMT block*/ - REG_UPDATE(AFMT_CNTL, AFMT_AUDIO_CLOCK_EN, 1); + if (REG(AFMT_CNTL)) + REG_UPDATE(AFMT_CNTL, AFMT_AUDIO_CLOCK_EN, 1); REG_WRITE(AFMT_AVI_INFO0, content[0]); diff --git a/drivers/gpu/drm/amd/display/dc/dce100/dce100_resource.c b/drivers/gpu/drm/amd/display/dc/dce100/dce100_resource.c index 38ec0d609297..344dd2e69e7c 100644 --- a/drivers/gpu/drm/amd/display/dc/dce100/dce100_resource.c +++ b/drivers/gpu/drm/amd/display/dc/dce100/dce100_resource.c @@ -678,9 +678,22 @@ bool dce100_validate_bandwidth( struct dc *dc, struct dc_state *context) { - /* TODO implement when needed but for now hardcode max value*/ - context->bw.dce.dispclk_khz = 681000; - context->bw.dce.yclk_khz = 250000 * MEMORY_TYPE_MULTIPLIER; + int i; + bool at_least_one_pipe = false; + + for (i = 0; i < dc->res_pool->pipe_count; i++) { + if (context->res_ctx.pipe_ctx[i].stream) + at_least_one_pipe = true; + } + + if (at_least_one_pipe) { + /* TODO implement when needed but for now hardcode max value*/ + context->bw.dce.dispclk_khz = 681000; + context->bw.dce.yclk_khz = 250000 * MEMORY_TYPE_MULTIPLIER; + } else { + context->bw.dce.dispclk_khz = 0; + context->bw.dce.yclk_khz = 0; + } return true; } diff --git a/drivers/gpu/drm/amd/display/dc/dce110/dce110_compressor.c b/drivers/gpu/drm/amd/display/dc/dce110/dce110_compressor.c index 9150d2694450..e2994d337044 100644 --- a/drivers/gpu/drm/amd/display/dc/dce110/dce110_compressor.c +++ b/drivers/gpu/drm/amd/display/dc/dce110/dce110_compressor.c @@ -121,10 +121,10 @@ static void reset_lb_on_vblank(struct dc_context *ctx) frame_count = dm_read_reg(ctx, mmCRTC_STATUS_FRAME_COUNT); - for (retry = 100; retry > 0; retry--) { + for (retry = 10000; retry > 0; retry--) { if (frame_count != dm_read_reg(ctx, mmCRTC_STATUS_FRAME_COUNT)) break; - msleep(1); + udelay(10); } if (!retry) dm_error("Frame count did not increase for 100ms.\n"); @@ -147,14 +147,14 @@ static void wait_for_fbc_state_changed( uint32_t addr = mmFBC_STATUS; uint32_t value; - while (counter < 10) { + while (counter < 1000) { value = dm_read_reg(cp110->base.ctx, addr); if 
(get_reg_field_value( value, FBC_STATUS, FBC_ENABLE_STATUS) == enabled) break; - msleep(10); + udelay(100); counter++; } diff --git a/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c b/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c index a92fb0aa2ff3..c29052b6da5a 100644 --- a/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c +++ b/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c @@ -1004,9 +1004,9 @@ void dce110_disable_stream(struct pipe_ctx *pipe_ctx, int option) /*don't free audio if it is from retrain or internal disable stream*/ if (option == FREE_ACQUIRED_RESOURCE && dc->caps.dynamic_audio == true) { /*we have to dynamic arbitrate the audio endpoints*/ - pipe_ctx->stream_res.audio = NULL; /*we free the resource, need reset is_audio_acquired*/ update_audio_usage(&dc->current_state->res_ctx, dc->res_pool, pipe_ctx->stream_res.audio, false); + pipe_ctx->stream_res.audio = NULL; } /* TODO: notify audio driver for if audio modes list changed diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_dpp.c b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_dpp.c index 46a35c7f01df..c69fa4bfab0a 100644 --- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_dpp.c +++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_dpp.c @@ -132,8 +132,7 @@ void dpp_set_gamut_remap_bypass(struct dcn10_dpp *dpp) #define IDENTITY_RATIO(ratio) (dc_fixpt_u2d19(ratio) == (1 << 19)) - -bool dpp_get_optimal_number_of_taps( +static bool dpp_get_optimal_number_of_taps( struct dpp *dpp, struct scaler_data *scl_data, const struct scaling_taps *in_taps) diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_dpp.h b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_dpp.h index 5944a3ba0409..e862cafa6501 100644 --- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_dpp.h +++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_dpp.h @@ -1424,12 +1424,8 @@ void dpp1_set_degamma( enum ipp_degamma_mode mode); void dpp1_set_degamma_pwl(struct dpp *dpp_base, - const struct pwl_params *params); + const struct pwl_params *params); -bool dpp_get_optimal_number_of_taps( - struct dpp *dpp, - struct scaler_data *scl_data, - const struct scaling_taps *in_taps); void dpp_read_state(struct dpp *dpp_base, struct dcn_dpp_state *s); diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_dpp_dscl.c b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_dpp_dscl.c index 4ddd6273d5a5..f862fd148cca 100644 --- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_dpp_dscl.c +++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_dpp_dscl.c @@ -565,16 +565,16 @@ static void dpp1_dscl_set_manual_ratio_init( uint32_t init_int = 0; REG_SET(SCL_HORZ_FILTER_SCALE_RATIO, 0, - SCL_H_SCALE_RATIO, dc_fixpt_u2d19(data->ratios.horz) << 5); + SCL_H_SCALE_RATIO, dc_fixpt_u3d19(data->ratios.horz) << 5); REG_SET(SCL_VERT_FILTER_SCALE_RATIO, 0, - SCL_V_SCALE_RATIO, dc_fixpt_u2d19(data->ratios.vert) << 5); + SCL_V_SCALE_RATIO, dc_fixpt_u3d19(data->ratios.vert) << 5); REG_SET(SCL_HORZ_FILTER_SCALE_RATIO_C, 0, - SCL_H_SCALE_RATIO_C, dc_fixpt_u2d19(data->ratios.horz_c) << 5); + SCL_H_SCALE_RATIO_C, dc_fixpt_u3d19(data->ratios.horz_c) << 5); REG_SET(SCL_VERT_FILTER_SCALE_RATIO_C, 0, - SCL_V_SCALE_RATIO_C, dc_fixpt_u2d19(data->ratios.vert_c) << 5); + SCL_V_SCALE_RATIO_C, dc_fixpt_u3d19(data->ratios.vert_c) << 5); /* * 0.24 format for fraction, first five bits zeroed diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hubp.c b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hubp.c index d2ab78b35a7a..c28085be39ff 100644 --- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hubp.c +++ 
b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hubp.c @@ -396,11 +396,15 @@ bool hubp1_program_surface_flip_and_addr( if (address->grph_stereo.right_addr.quad_part == 0) break; - REG_UPDATE_4(DCSURF_SURFACE_CONTROL, + REG_UPDATE_8(DCSURF_SURFACE_CONTROL, PRIMARY_SURFACE_TMZ, address->tmz_surface, PRIMARY_SURFACE_TMZ_C, address->tmz_surface, PRIMARY_META_SURFACE_TMZ, address->tmz_surface, - PRIMARY_META_SURFACE_TMZ_C, address->tmz_surface); + PRIMARY_META_SURFACE_TMZ_C, address->tmz_surface, + SECONDARY_SURFACE_TMZ, address->tmz_surface, + SECONDARY_SURFACE_TMZ_C, address->tmz_surface, + SECONDARY_META_SURFACE_TMZ, address->tmz_surface, + SECONDARY_META_SURFACE_TMZ_C, address->tmz_surface); if (address->grph_stereo.right_meta_addr.quad_part != 0) { @@ -459,9 +463,11 @@ void hubp1_dcc_control(struct hubp *hubp, bool enable, uint32_t dcc_ind_64b_blk = independent_64b_blks ? 1 : 0; struct dcn10_hubp *hubp1 = TO_DCN10_HUBP(hubp); - REG_UPDATE_2(DCSURF_SURFACE_CONTROL, + REG_UPDATE_4(DCSURF_SURFACE_CONTROL, PRIMARY_SURFACE_DCC_EN, dcc_en, - PRIMARY_SURFACE_DCC_IND_64B_BLK, dcc_ind_64b_blk); + PRIMARY_SURFACE_DCC_IND_64B_BLK, dcc_ind_64b_blk, + SECONDARY_SURFACE_DCC_EN, dcc_en, + SECONDARY_SURFACE_DCC_IND_64B_BLK, dcc_ind_64b_blk); } void hubp1_program_surface_config( diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hubp.h b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hubp.h index af384034398f..d901d5092969 100644 --- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hubp.h +++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hubp.h @@ -312,6 +312,12 @@ HUBP_SF(HUBPREQ0_DCSURF_SURFACE_CONTROL, PRIMARY_META_SURFACE_TMZ_C, mask_sh),\ HUBP_SF(HUBPREQ0_DCSURF_SURFACE_CONTROL, PRIMARY_SURFACE_DCC_EN, mask_sh),\ HUBP_SF(HUBPREQ0_DCSURF_SURFACE_CONTROL, PRIMARY_SURFACE_DCC_IND_64B_BLK, mask_sh),\ + HUBP_SF(HUBPREQ0_DCSURF_SURFACE_CONTROL, SECONDARY_SURFACE_TMZ, mask_sh),\ + HUBP_SF(HUBPREQ0_DCSURF_SURFACE_CONTROL, SECONDARY_SURFACE_TMZ_C, mask_sh),\ + HUBP_SF(HUBPREQ0_DCSURF_SURFACE_CONTROL, SECONDARY_META_SURFACE_TMZ, mask_sh),\ + HUBP_SF(HUBPREQ0_DCSURF_SURFACE_CONTROL, SECONDARY_META_SURFACE_TMZ_C, mask_sh),\ + HUBP_SF(HUBPREQ0_DCSURF_SURFACE_CONTROL, SECONDARY_SURFACE_DCC_EN, mask_sh),\ + HUBP_SF(HUBPREQ0_DCSURF_SURFACE_CONTROL, SECONDARY_SURFACE_DCC_IND_64B_BLK, mask_sh),\ HUBP_SF(HUBPRET0_HUBPRET_CONTROL, DET_BUF_PLANE1_BASE_ADDRESS, mask_sh),\ HUBP_SF(HUBPRET0_HUBPRET_CONTROL, CROSSBAR_SRC_CB_B, mask_sh),\ HUBP_SF(HUBPRET0_HUBPRET_CONTROL, CROSSBAR_SRC_CR_R, mask_sh),\ @@ -489,6 +495,8 @@ type SECONDARY_META_SURFACE_TMZ_C;\ type PRIMARY_SURFACE_DCC_EN;\ type PRIMARY_SURFACE_DCC_IND_64B_BLK;\ + type SECONDARY_SURFACE_DCC_EN;\ + type SECONDARY_SURFACE_DCC_IND_64B_BLK;\ type DET_BUF_PLANE1_BASE_ADDRESS;\ type CROSSBAR_SRC_CB_B;\ type CROSSBAR_SRC_CR_R;\ diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_resource.c b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_resource.c index df5cb2d1d164..34dac84066a0 100644 --- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_resource.c +++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_resource.c @@ -1027,6 +1027,8 @@ static bool construct( dc->caps.max_slave_planes = 1; dc->caps.is_apu = true; dc->caps.post_blend_color_processing = false; + /* Raven DP PHY HBR2 eye diagram pattern is not stable. 
Use TP4 */ + dc->caps.force_dp_tps4_for_cp2520 = true; if (dc->ctx->dce_environment == DCE_ENV_PRODUCTION_DRV) dc->debug = debug_defaults_drv; diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_stream_encoder.c b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_stream_encoder.c index 653b7b2efe2e..c928ee4cd382 100644 --- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_stream_encoder.c +++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_stream_encoder.c @@ -319,6 +319,10 @@ void enc1_stream_encoder_dp_set_stream_attribute( REG_UPDATE(DP_PIXEL_FORMAT, DP_COMPONENT_DEPTH, DP_COMPONENT_PIXEL_DEPTH_12BPC); break; + case COLOR_DEPTH_161616: + REG_UPDATE(DP_PIXEL_FORMAT, DP_COMPONENT_DEPTH, + DP_COMPONENT_PIXEL_DEPTH_16BPC); + break; default: REG_UPDATE(DP_PIXEL_FORMAT, DP_COMPONENT_DEPTH, DP_COMPONENT_PIXEL_DEPTH_6BPC); diff --git a/drivers/gpu/drm/amd/display/dc/inc/dc_link_ddc.h b/drivers/gpu/drm/amd/display/dc/inc/dc_link_ddc.h index 30b3a08b91be..090b7a8dd67b 100644 --- a/drivers/gpu/drm/amd/display/dc/inc/dc_link_ddc.h +++ b/drivers/gpu/drm/amd/display/dc/inc/dc_link_ddc.h @@ -102,14 +102,13 @@ bool dal_ddc_service_query_ddc_data( uint8_t *read_buf, uint32_t read_size); -enum ddc_result dal_ddc_service_read_dpcd_data( +ssize_t dal_ddc_service_read_dpcd_data( struct ddc_service *ddc, bool i2c, enum i2c_mot_mode mot, uint32_t address, uint8_t *data, - uint32_t len, - uint32_t *read); + uint32_t len); enum ddc_result dal_ddc_service_write_dpcd_data( struct ddc_service *ddc, diff --git a/drivers/gpu/drm/amd/display/include/fixed31_32.h b/drivers/gpu/drm/amd/display/include/fixed31_32.h index bb0d4ebba9f0..a981b3e99ab3 100644 --- a/drivers/gpu/drm/amd/display/include/fixed31_32.h +++ b/drivers/gpu/drm/amd/display/include/fixed31_32.h @@ -496,6 +496,8 @@ static inline int dc_fixpt_ceil(struct fixed31_32 arg) * fractional */ +unsigned int dc_fixpt_u3d19(struct fixed31_32 arg); + unsigned int dc_fixpt_u2d19(struct fixed31_32 arg); unsigned int dc_fixpt_u0d19(struct fixed31_32 arg); diff --git a/drivers/gpu/drm/amd/include/asic_reg/df/df_3_6_sh_mask.h b/drivers/gpu/drm/amd/include/asic_reg/df/df_3_6_sh_mask.h index 88f7c69df6b9..06fac509e987 100644 --- a/drivers/gpu/drm/amd/include/asic_reg/df/df_3_6_sh_mask.h +++ b/drivers/gpu/drm/amd/include/asic_reg/df/df_3_6_sh_mask.h @@ -36,13 +36,13 @@ /* DF_CS_AON0_DramBaseAddress0 */ #define DF_CS_UMC_AON0_DramBaseAddress0__AddrRngVal__SHIFT 0x0 #define DF_CS_UMC_AON0_DramBaseAddress0__LgcyMmioHoleEn__SHIFT 0x1 -#define DF_CS_UMC_AON0_DramBaseAddress0__IntLvNumChan__SHIFT 0x4 -#define DF_CS_UMC_AON0_DramBaseAddress0__IntLvAddrSel__SHIFT 0x8 +#define DF_CS_UMC_AON0_DramBaseAddress0__IntLvNumChan__SHIFT 0x2 +#define DF_CS_UMC_AON0_DramBaseAddress0__IntLvAddrSel__SHIFT 0x9 #define DF_CS_UMC_AON0_DramBaseAddress0__DramBaseAddr__SHIFT 0xc #define DF_CS_UMC_AON0_DramBaseAddress0__AddrRngVal_MASK 0x00000001L #define DF_CS_UMC_AON0_DramBaseAddress0__LgcyMmioHoleEn_MASK 0x00000002L -#define DF_CS_UMC_AON0_DramBaseAddress0__IntLvNumChan_MASK 0x000000F0L -#define DF_CS_UMC_AON0_DramBaseAddress0__IntLvAddrSel_MASK 0x00000700L +#define DF_CS_UMC_AON0_DramBaseAddress0__IntLvNumChan_MASK 0x0000003CL +#define DF_CS_UMC_AON0_DramBaseAddress0__IntLvAddrSel_MASK 0x00000E00L #define DF_CS_UMC_AON0_DramBaseAddress0__DramBaseAddr_MASK 0xFFFFF000L #endif diff --git a/drivers/gpu/drm/amd/include/atomfirmware.h b/drivers/gpu/drm/amd/include/atomfirmware.h index c6c1666ac120..33b4de4ad66e 100644 --- a/drivers/gpu/drm/amd/include/atomfirmware.h +++ 
b/drivers/gpu/drm/amd/include/atomfirmware.h @@ -1433,7 +1433,10 @@ struct atom_smc_dpm_info_v4_1 uint8_t acggfxclkspreadpercent; uint16_t acggfxclkspreadfreq; - uint32_t boardreserved[10]; + uint8_t Vr2_I2C_address; + uint8_t padding_vr2[3]; + + uint32_t boardreserved[9]; }; /* @@ -2026,17 +2029,15 @@ enum atom_smu11_syspll_id { SMU11_SYSPLL3_1_ID = 6, }; - enum atom_smu11_syspll0_clock_id { - SMU11_SYSPLL0_SOCCLK_ID = 0, // SOCCLK - SMU11_SYSPLL0_MP0CLK_ID = 1, // MP0CLK - SMU11_SYSPLL0_DCLK_ID = 2, // DCLK - SMU11_SYSPLL0_VCLK_ID = 3, // VCLK - SMU11_SYSPLL0_ECLK_ID = 4, // ECLK + SMU11_SYSPLL0_ECLK_ID = 0, // ECLK + SMU11_SYSPLL0_SOCCLK_ID = 1, // SOCCLK + SMU11_SYSPLL0_MP0CLK_ID = 2, // MP0CLK + SMU11_SYSPLL0_DCLK_ID = 3, // DCLK + SMU11_SYSPLL0_VCLK_ID = 4, // VCLK SMU11_SYSPLL0_DCEFCLK_ID = 5, // DCEFCLK }; - enum atom_smu11_syspll1_0_clock_id { SMU11_SYSPLL1_0_UCLKA_ID = 0, // UCLK_a }; diff --git a/drivers/gpu/drm/amd/powerplay/amd_powerplay.c b/drivers/gpu/drm/amd/powerplay/amd_powerplay.c index b493369e6d0f..d567be49c31b 100644 --- a/drivers/gpu/drm/amd/powerplay/amd_powerplay.c +++ b/drivers/gpu/drm/amd/powerplay/amd_powerplay.c @@ -180,7 +180,6 @@ static int pp_late_init(void *handle) { struct amdgpu_device *adev = handle; struct pp_hwmgr *hwmgr = adev->powerplay.pp_handle; - int ret; if (hwmgr && hwmgr->pm_en) { mutex_lock(&hwmgr->smu_lock); @@ -191,13 +190,6 @@ static int pp_late_init(void *handle) if (adev->pm.smu_prv_buffer_size != 0) pp_reserve_vram_for_smu(adev); - if (hwmgr->hwmgr_func->gfx_off_control && - (hwmgr->feature_mask & PP_GFXOFF_MASK)) { - ret = hwmgr->hwmgr_func->gfx_off_control(hwmgr, true); - if (ret) - pr_err("gfx off enabling failed!\n"); - } - return 0; } @@ -245,7 +237,7 @@ static int pp_set_powergating_state(void *handle, } if (hwmgr->hwmgr_func->enable_per_cu_power_gating == NULL) { - pr_info("%s was not implemented.\n", __func__); + pr_debug("%s was not implemented.\n", __func__); return 0; } diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/pp_psm.c b/drivers/gpu/drm/amd/powerplay/hwmgr/pp_psm.c index e45a1fcc7f08..91ffb7bc4ee7 100644 --- a/drivers/gpu/drm/amd/powerplay/hwmgr/pp_psm.c +++ b/drivers/gpu/drm/amd/powerplay/hwmgr/pp_psm.c @@ -265,19 +265,18 @@ int psm_adjust_power_state_dynamic(struct pp_hwmgr *hwmgr, bool skip, if (skip) return 0; - if (!hwmgr->ps) - /* - * for vega12/vega20 which does not support power state manager - * DAL clock limits should also be honoured - */ - phm_apply_clock_adjust_rules(hwmgr); - phm_pre_display_configuration_changed(hwmgr); phm_display_configuration_changed(hwmgr); if (hwmgr->ps) power_state_management(hwmgr, new_ps); + else + /* + * for vega12/vega20 which does not support power state manager + * DAL clock limits should also be honoured + */ + phm_apply_clock_adjust_rules(hwmgr); phm_notify_smc_display_config_after_ps_adjustment(hwmgr); diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/ppatomfwctrl.c b/drivers/gpu/drm/amd/powerplay/hwmgr/ppatomfwctrl.c index c97b0e5ba43b..d27c1c9df286 100644 --- a/drivers/gpu/drm/amd/powerplay/hwmgr/ppatomfwctrl.c +++ b/drivers/gpu/drm/amd/powerplay/hwmgr/ppatomfwctrl.c @@ -496,7 +496,9 @@ int pp_atomfwctrl_get_clk_information_by_clkid(struct pp_hwmgr *hwmgr, BIOS_CLKI uint32_t ix; parameters.clk_id = id; + parameters.syspll_id = 0; parameters.command = GET_SMU_CLOCK_INFO_V3_1_GET_CLOCK_FREQ; + parameters.dfsdid = 0; ix = GetIndexIntoMasterCmdTable(getsmuclockinfo); @@ -505,19 +507,87 @@ int pp_atomfwctrl_get_clk_information_by_clkid(struct pp_hwmgr *hwmgr, BIOS_CLKI return 
-EINVAL; output = (struct atom_get_smu_clock_info_output_parameters_v3_1 *)¶meters; - *frequency = output->atom_smu_outputclkfreq.smu_clock_freq_hz / 10000; + *frequency = le32_to_cpu(output->atom_smu_outputclkfreq.smu_clock_freq_hz) / 10000; return 0; } +static void pp_atomfwctrl_copy_vbios_bootup_values_3_2(struct pp_hwmgr *hwmgr, + struct pp_atomfwctrl_bios_boot_up_values *boot_values, + struct atom_firmware_info_v3_2 *fw_info) +{ + uint32_t frequency = 0; + + boot_values->ulRevision = fw_info->firmware_revision; + boot_values->ulGfxClk = fw_info->bootup_sclk_in10khz; + boot_values->ulUClk = fw_info->bootup_mclk_in10khz; + boot_values->usVddc = fw_info->bootup_vddc_mv; + boot_values->usVddci = fw_info->bootup_vddci_mv; + boot_values->usMvddc = fw_info->bootup_mvddc_mv; + boot_values->usVddGfx = fw_info->bootup_vddgfx_mv; + boot_values->ucCoolingID = fw_info->coolingsolution_id; + boot_values->ulSocClk = 0; + boot_values->ulDCEFClk = 0; + + if (!pp_atomfwctrl_get_clk_information_by_clkid(hwmgr, SMU11_SYSPLL0_SOCCLK_ID, &frequency)) + boot_values->ulSocClk = frequency; + + if (!pp_atomfwctrl_get_clk_information_by_clkid(hwmgr, SMU11_SYSPLL0_DCEFCLK_ID, &frequency)) + boot_values->ulDCEFClk = frequency; + + if (!pp_atomfwctrl_get_clk_information_by_clkid(hwmgr, SMU11_SYSPLL0_ECLK_ID, &frequency)) + boot_values->ulEClk = frequency; + + if (!pp_atomfwctrl_get_clk_information_by_clkid(hwmgr, SMU11_SYSPLL0_VCLK_ID, &frequency)) + boot_values->ulVClk = frequency; + + if (!pp_atomfwctrl_get_clk_information_by_clkid(hwmgr, SMU11_SYSPLL0_DCLK_ID, &frequency)) + boot_values->ulDClk = frequency; +} + +static void pp_atomfwctrl_copy_vbios_bootup_values_3_1(struct pp_hwmgr *hwmgr, + struct pp_atomfwctrl_bios_boot_up_values *boot_values, + struct atom_firmware_info_v3_1 *fw_info) +{ + uint32_t frequency = 0; + + boot_values->ulRevision = fw_info->firmware_revision; + boot_values->ulGfxClk = fw_info->bootup_sclk_in10khz; + boot_values->ulUClk = fw_info->bootup_mclk_in10khz; + boot_values->usVddc = fw_info->bootup_vddc_mv; + boot_values->usVddci = fw_info->bootup_vddci_mv; + boot_values->usMvddc = fw_info->bootup_mvddc_mv; + boot_values->usVddGfx = fw_info->bootup_vddgfx_mv; + boot_values->ucCoolingID = fw_info->coolingsolution_id; + boot_values->ulSocClk = 0; + boot_values->ulDCEFClk = 0; + + if (!pp_atomfwctrl_get_clk_information_by_clkid(hwmgr, SMU9_SYSPLL0_SOCCLK_ID, &frequency)) + boot_values->ulSocClk = frequency; + + if (!pp_atomfwctrl_get_clk_information_by_clkid(hwmgr, SMU9_SYSPLL0_DCEFCLK_ID, &frequency)) + boot_values->ulDCEFClk = frequency; + + if (!pp_atomfwctrl_get_clk_information_by_clkid(hwmgr, SMU9_SYSPLL0_ECLK_ID, &frequency)) + boot_values->ulEClk = frequency; + + if (!pp_atomfwctrl_get_clk_information_by_clkid(hwmgr, SMU9_SYSPLL0_VCLK_ID, &frequency)) + boot_values->ulVClk = frequency; + + if (!pp_atomfwctrl_get_clk_information_by_clkid(hwmgr, SMU9_SYSPLL0_DCLK_ID, &frequency)) + boot_values->ulDClk = frequency; +} + int pp_atomfwctrl_get_vbios_bootup_values(struct pp_hwmgr *hwmgr, struct pp_atomfwctrl_bios_boot_up_values *boot_values) { - struct atom_firmware_info_v3_1 *info = NULL; + struct atom_firmware_info_v3_2 *fwinfo_3_2; + struct atom_firmware_info_v3_1 *fwinfo_3_1; + struct atom_common_table_header *info = NULL; uint16_t ix; ix = GetIndexIntoMasterDataTable(firmwareinfo); - info = (struct atom_firmware_info_v3_1 *) + info = (struct atom_common_table_header *) smu_atom_get_data_table(hwmgr->adev, ix, NULL, NULL, NULL); @@ -526,16 +596,18 @@ int 
pp_atomfwctrl_get_vbios_bootup_values(struct pp_hwmgr *hwmgr, return -EINVAL; } - boot_values->ulRevision = info->firmware_revision; - boot_values->ulGfxClk = info->bootup_sclk_in10khz; - boot_values->ulUClk = info->bootup_mclk_in10khz; - boot_values->usVddc = info->bootup_vddc_mv; - boot_values->usVddci = info->bootup_vddci_mv; - boot_values->usMvddc = info->bootup_mvddc_mv; - boot_values->usVddGfx = info->bootup_vddgfx_mv; - boot_values->ucCoolingID = info->coolingsolution_id; - boot_values->ulSocClk = 0; - boot_values->ulDCEFClk = 0; + if ((info->format_revision == 3) && (info->content_revision == 2)) { + fwinfo_3_2 = (struct atom_firmware_info_v3_2 *)info; + pp_atomfwctrl_copy_vbios_bootup_values_3_2(hwmgr, + boot_values, fwinfo_3_2); + } else if ((info->format_revision == 3) && (info->content_revision == 1)) { + fwinfo_3_1 = (struct atom_firmware_info_v3_1 *)info; + pp_atomfwctrl_copy_vbios_bootup_values_3_1(hwmgr, + boot_values, fwinfo_3_1); + } else { + pr_info("Fw info table revision does not match!"); + return -EINVAL; + } return 0; } @@ -627,5 +699,7 @@ int pp_atomfwctrl_get_smc_dpm_information(struct pp_hwmgr *hwmgr, param->acggfxclkspreadpercent = info->acggfxclkspreadpercent; param->acggfxclkspreadfreq = info->acggfxclkspreadfreq; + param->Vr2_I2C_address = info->Vr2_I2C_address; + return 0; } diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/ppatomfwctrl.h b/drivers/gpu/drm/amd/powerplay/hwmgr/ppatomfwctrl.h index fe10aa4db5e6..22e21668c93a 100644 --- a/drivers/gpu/drm/amd/powerplay/hwmgr/ppatomfwctrl.h +++ b/drivers/gpu/drm/amd/powerplay/hwmgr/ppatomfwctrl.h @@ -136,6 +136,9 @@ struct pp_atomfwctrl_bios_boot_up_values { uint32_t ulUClk; uint32_t ulSocClk; uint32_t ulDCEFClk; + uint32_t ulEClk; + uint32_t ulVClk; + uint32_t ulDClk; uint16_t usVddc; uint16_t usVddci; uint16_t usMvddc; @@ -207,6 +210,8 @@ struct pp_atomfwctrl_smc_dpm_parameters uint8_t acggfxclkspreadenabled; uint8_t acggfxclkspreadpercent; uint16_t acggfxclkspreadfreq; + + uint8_t Vr2_I2C_address; }; int pp_atomfwctrl_get_gpu_pll_dividers_vega10(struct pp_hwmgr *hwmgr, diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/process_pptables_v1_0.c b/drivers/gpu/drm/amd/powerplay/hwmgr/process_pptables_v1_0.c index f0d48b183d22..35bd9870ab10 100644 --- a/drivers/gpu/drm/amd/powerplay/hwmgr/process_pptables_v1_0.c +++ b/drivers/gpu/drm/amd/powerplay/hwmgr/process_pptables_v1_0.c @@ -870,12 +870,6 @@ static int init_over_drive_limits( hwmgr->platform_descriptor.maxOverdriveVDDC = 0; hwmgr->platform_descriptor.overdriveVDDCStep = 0; - if (hwmgr->platform_descriptor.overdriveLimit.engineClock == 0 \ - || hwmgr->platform_descriptor.overdriveLimit.memoryClock == 0) { - hwmgr->od_enabled = false; - pr_debug("OverDrive feature not support by VBIOS\n"); - } - return 0; } diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/processpptables.c b/drivers/gpu/drm/amd/powerplay/hwmgr/processpptables.c index ce64dfabd34b..925e17104f90 100644 --- a/drivers/gpu/drm/amd/powerplay/hwmgr/processpptables.c +++ b/drivers/gpu/drm/amd/powerplay/hwmgr/processpptables.c @@ -1074,12 +1074,6 @@ static int init_overdrive_limits(struct pp_hwmgr *hwmgr, powerplay_table, (const ATOM_FIRMWARE_INFO_V2_1 *)fw_info); - if (hwmgr->platform_descriptor.overdriveLimit.engineClock == 0 - && hwmgr->platform_descriptor.overdriveLimit.memoryClock == 0) { - hwmgr->od_enabled = false; - pr_debug("OverDrive feature not support by VBIOS\n"); - } - return result; } diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/smu10_hwmgr.c 
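/* [Editor's note] The shape of the dispatch added to
 * pp_atomfwctrl_get_vbios_bootup_values() above, reduced to a sketch.
 * get_table() and copy_values_*() are placeholder names for illustration,
 * not functions from the patch:
 *
 *	struct atom_common_table_header *hdr = get_table(...);
 *
 *	if (hdr->format_revision == 3 && hdr->content_revision == 2)
 *		copy_values_3_2((struct atom_firmware_info_v3_2 *)hdr, boot_values);
 *	else if (hdr->format_revision == 3 && hdr->content_revision == 1)
 *		copy_values_3_1((struct atom_firmware_info_v3_1 *)hdr, boot_values);
 *	else
 *		return -EINVAL;
 *
 * Reading the table through the common header first means the revision
 * check happens before any version-specific field is touched.
 */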
b/drivers/gpu/drm/amd/powerplay/hwmgr/smu10_hwmgr.c index 85f84f4d8be5..d4bc83e81389 100644 --- a/drivers/gpu/drm/amd/powerplay/hwmgr/smu10_hwmgr.c +++ b/drivers/gpu/drm/amd/powerplay/hwmgr/smu10_hwmgr.c @@ -53,8 +53,37 @@ static const unsigned long SMU10_Magic = (unsigned long) PHM_Rv_Magic; static int smu10_display_clock_voltage_request(struct pp_hwmgr *hwmgr, - struct pp_display_clock_request *clock_req); + struct pp_display_clock_request *clock_req) +{ + struct smu10_hwmgr *smu10_data = (struct smu10_hwmgr *)(hwmgr->backend); + enum amd_pp_clock_type clk_type = clock_req->clock_type; + uint32_t clk_freq = clock_req->clock_freq_in_khz / 1000; + PPSMC_Msg msg; + switch (clk_type) { + case amd_pp_dcf_clock: + if (clk_freq == smu10_data->dcf_actual_hard_min_freq) + return 0; + msg = PPSMC_MSG_SetHardMinDcefclkByFreq; + smu10_data->dcf_actual_hard_min_freq = clk_freq; + break; + case amd_pp_soc_clock: + msg = PPSMC_MSG_SetHardMinSocclkByFreq; + break; + case amd_pp_f_clock: + if (clk_freq == smu10_data->f_actual_hard_min_freq) + return 0; + smu10_data->f_actual_hard_min_freq = clk_freq; + msg = PPSMC_MSG_SetHardMinFclkByFreq; + break; + default: + pr_info("[DisplayClockVoltageRequest]Invalid Clock Type!"); + return -EINVAL; + } + smum_send_msg_to_smc_with_parameter(hwmgr, msg, clk_freq); + + return 0; +} static struct smu10_power_state *cast_smu10_ps(struct pp_hw_power_state *hw_ps) { @@ -284,7 +313,7 @@ static int smu10_disable_gfx_off(struct pp_hwmgr *hwmgr) static int smu10_disable_dpm_tasks(struct pp_hwmgr *hwmgr) { - return smu10_disable_gfx_off(hwmgr); + return 0; } static int smu10_enable_gfx_off(struct pp_hwmgr *hwmgr) @@ -299,7 +328,7 @@ static int smu10_enable_gfx_off(struct pp_hwmgr *hwmgr) static int smu10_enable_dpm_tasks(struct pp_hwmgr *hwmgr) { - return smu10_enable_gfx_off(hwmgr); + return 0; } static int smu10_gfx_off_control(struct pp_hwmgr *hwmgr, bool enable) @@ -1000,6 +1029,12 @@ static int smu10_get_clock_by_type_with_voltage(struct pp_hwmgr *hwmgr, case amd_pp_soc_clock: pclk_vol_table = pinfo->vdd_dep_on_socclk; break; + case amd_pp_disp_clock: + pclk_vol_table = pinfo->vdd_dep_on_dispclk; + break; + case amd_pp_phy_clock: + pclk_vol_table = pinfo->vdd_dep_on_phyclk; + break; default: return -EINVAL; } @@ -1017,39 +1052,7 @@ static int smu10_get_clock_by_type_with_voltage(struct pp_hwmgr *hwmgr, return 0; } -static int smu10_display_clock_voltage_request(struct pp_hwmgr *hwmgr, - struct pp_display_clock_request *clock_req) -{ - struct smu10_hwmgr *smu10_data = (struct smu10_hwmgr *)(hwmgr->backend); - enum amd_pp_clock_type clk_type = clock_req->clock_type; - uint32_t clk_freq = clock_req->clock_freq_in_khz / 1000; - PPSMC_Msg msg; - switch (clk_type) { - case amd_pp_dcf_clock: - if (clk_freq == smu10_data->dcf_actual_hard_min_freq) - return 0; - msg = PPSMC_MSG_SetHardMinDcefclkByFreq; - smu10_data->dcf_actual_hard_min_freq = clk_freq; - break; - case amd_pp_soc_clock: - msg = PPSMC_MSG_SetHardMinSocclkByFreq; - break; - case amd_pp_f_clock: - if (clk_freq == smu10_data->f_actual_hard_min_freq) - return 0; - smu10_data->f_actual_hard_min_freq = clk_freq; - msg = PPSMC_MSG_SetHardMinFclkByFreq; - break; - default: - pr_info("[DisplayClockVoltageRequest]Invalid Clock Type!"); - return -EINVAL; - } - - smum_send_msg_to_smc_with_parameter(hwmgr, msg, clk_freq); - - return 0; -} static int smu10_get_max_high_clocks(struct pp_hwmgr *hwmgr, struct amd_pp_simple_clock_info *clocks) { @@ -1182,6 +1185,7 @@ static const struct pp_hwmgr_func smu10_hwmgr_funcs = { 
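The relocated smu10_display_clock_voltage_request() above maps each display clock type to its SMC message and skips the message entirely when the requested hard minimum matches the cached value. A minimal sketch of that caching idea, with made-up message IDs and a stubbed send routine:

    #include <stdint.h>

    enum clk_type { CLK_DCEF, CLK_SOC, CLK_F };

    struct clk_cache { uint32_t dcf_min_mhz; uint32_t f_min_mhz; };

    /* Stand-in for smum_send_msg_to_smc_with_parameter(). */
    static void send_smc(int msg, uint32_t param) { (void)msg; (void)param; }

    /* Skip the SMC round-trip when the requested floor is already set. */
    static int request_hard_min(struct clk_cache *c, enum clk_type t, uint32_t khz)
    {
        uint32_t mhz = khz / 1000;

        switch (t) {
        case CLK_DCEF:
            if (mhz == c->dcf_min_mhz)
                return 0;           /* unchanged, nothing to send */
            c->dcf_min_mhz = mhz;
            send_smc(/* SetHardMinDcefclkByFreq */ 1, mhz);
            return 0;
        case CLK_F:
            if (mhz == c->f_min_mhz)
                return 0;
            c->f_min_mhz = mhz;
            send_smc(/* SetHardMinFclkByFreq */ 2, mhz);
            return 0;
        case CLK_SOC:
            send_smc(/* SetHardMinSocclkByFreq */ 3, mhz);
            return 0;
        }
        return -1;                  /* -EINVAL for unknown clock types */
    }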
.set_mmhub_powergating_by_smu = smu10_set_mmhub_powergating_by_smu, .smus_notify_pwe = smu10_smus_notify_pwe, .gfx_off_control = smu10_gfx_off_control, + .display_clock_voltage_request = smu10_display_clock_voltage_request, }; int smu10_init_function_pointers(struct pp_hwmgr *hwmgr) diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c b/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c index 45e9b8cb169d..f8e866ceda02 100644 --- a/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c +++ b/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c @@ -791,7 +791,8 @@ static int smu7_setup_dpm_tables_v1(struct pp_hwmgr *hwmgr) data->dpm_table.sclk_table.count++; } } - + if (hwmgr->platform_descriptor.overdriveLimit.engineClock == 0) + hwmgr->platform_descriptor.overdriveLimit.engineClock = dep_sclk_table->entries[i-1].clk; /* Initialize Mclk DPM table based on allow Mclk values */ data->dpm_table.mclk_table.count = 0; for (i = 0; i < dep_mclk_table->count; i++) { @@ -806,6 +807,8 @@ static int smu7_setup_dpm_tables_v1(struct pp_hwmgr *hwmgr) } } + if (hwmgr->platform_descriptor.overdriveLimit.memoryClock == 0) + hwmgr->platform_descriptor.overdriveLimit.memoryClock = dep_mclk_table->entries[i-1].clk; return 0; } @@ -3752,14 +3755,17 @@ static int smu7_trim_dpm_states(struct pp_hwmgr *hwmgr, static int smu7_generate_dpm_level_enable_mask( struct pp_hwmgr *hwmgr, const void *input) { - int result; + int result = 0; const struct phm_set_power_state_input *states = (const struct phm_set_power_state_input *)input; struct smu7_hwmgr *data = (struct smu7_hwmgr *)(hwmgr->backend); const struct smu7_power_state *smu7_ps = cast_const_phw_smu7_power_state(states->pnew_state); - result = smu7_trim_dpm_states(hwmgr, smu7_ps); + /*skip the trim if od is enabled*/ + if (!hwmgr->od_enabled) + result = smu7_trim_dpm_states(hwmgr, smu7_ps); + if (result) return result; diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/vega10_hwmgr.c b/drivers/gpu/drm/amd/powerplay/hwmgr/vega10_hwmgr.c index d156b7bb92ae..05e680d55dbb 100644 --- a/drivers/gpu/drm/amd/powerplay/hwmgr/vega10_hwmgr.c +++ b/drivers/gpu/drm/amd/powerplay/hwmgr/vega10_hwmgr.c @@ -321,8 +321,12 @@ static int vega10_odn_initial_default_setting(struct pp_hwmgr *hwmgr) odn_table->min_vddc = dep_table[0]->entries[0].vddc; i = od_table[2]->count - 1; - od_table[2]->entries[i].clk = hwmgr->platform_descriptor.overdriveLimit.memoryClock; - od_table[2]->entries[i].vddc = odn_table->max_vddc; + od_table[2]->entries[i].clk = hwmgr->platform_descriptor.overdriveLimit.memoryClock > od_table[2]->entries[i].clk ? + hwmgr->platform_descriptor.overdriveLimit.memoryClock : + od_table[2]->entries[i].clk; + od_table[2]->entries[i].vddc = odn_table->max_vddc > od_table[2]->entries[i].vddc ? 
+ odn_table->max_vddc : + od_table[2]->entries[i].vddc; return 0; } @@ -1311,6 +1315,9 @@ static int vega10_setup_default_dpm_tables(struct pp_hwmgr *hwmgr) vega10_setup_default_single_dpm_table(hwmgr, dpm_table, dep_gfx_table); + if (hwmgr->platform_descriptor.overdriveLimit.engineClock == 0) + hwmgr->platform_descriptor.overdriveLimit.engineClock = + dpm_table->dpm_levels[dpm_table->count-1].value; vega10_init_dpm_state(&(dpm_table->dpm_state)); /* Initialize Mclk DPM table based on allow Mclk values */ @@ -1319,6 +1326,10 @@ static int vega10_setup_default_dpm_tables(struct pp_hwmgr *hwmgr) vega10_setup_default_single_dpm_table(hwmgr, dpm_table, dep_mclk_table); + if (hwmgr->platform_descriptor.overdriveLimit.memoryClock == 0) + hwmgr->platform_descriptor.overdriveLimit.memoryClock = + dpm_table->dpm_levels[dpm_table->count-1].value; + vega10_init_dpm_state(&(dpm_table->dpm_state)); data->dpm_table.eclk_table.count = 0; diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/vega10_powertune.c b/drivers/gpu/drm/amd/powerplay/hwmgr/vega10_powertune.c index a9efd8554fbc..22364875a943 100644 --- a/drivers/gpu/drm/amd/powerplay/hwmgr/vega10_powertune.c +++ b/drivers/gpu/drm/amd/powerplay/hwmgr/vega10_powertune.c @@ -1090,7 +1090,7 @@ static int vega10_disable_se_edc_config(struct pp_hwmgr *hwmgr) static int vega10_enable_psm_gc_edc_config(struct pp_hwmgr *hwmgr) { struct amdgpu_device *adev = hwmgr->adev; - int result; + int result = 0; uint32_t num_se = 0; uint32_t count, data; @@ -1104,7 +1104,7 @@ static int vega10_enable_psm_gc_edc_config(struct pp_hwmgr *hwmgr) for (count = 0; count < num_se; count++) { data = GRBM_GFX_INDEX__INSTANCE_BROADCAST_WRITES_MASK | GRBM_GFX_INDEX__SH_BROADCAST_WRITES_MASK | ( count << GRBM_GFX_INDEX__SE_INDEX__SHIFT); WREG32_SOC15(GC, 0, mmGRBM_GFX_INDEX, data); - result |= vega10_program_didt_config_registers(hwmgr, PSMSEEDCStallPatternConfig_Vega10, VEGA10_CONFIGREG_DIDT); + result = vega10_program_didt_config_registers(hwmgr, PSMSEEDCStallPatternConfig_Vega10, VEGA10_CONFIGREG_DIDT); result |= vega10_program_didt_config_registers(hwmgr, PSMSEEDCStallDelayConfig_Vega10, VEGA10_CONFIGREG_DIDT); result |= vega10_program_didt_config_registers(hwmgr, PSMSEEDCCtrlResetConfig_Vega10, VEGA10_CONFIGREG_DIDT); result |= vega10_program_didt_config_registers(hwmgr, PSMSEEDCCtrlConfig_Vega10, VEGA10_CONFIGREG_DIDT); diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/vega10_processpptables.c b/drivers/gpu/drm/amd/powerplay/hwmgr/vega10_processpptables.c index 0768d259c07c..16b1a9cf6cf0 100644 --- a/drivers/gpu/drm/amd/powerplay/hwmgr/vega10_processpptables.c +++ b/drivers/gpu/drm/amd/powerplay/hwmgr/vega10_processpptables.c @@ -267,12 +267,6 @@ static int init_over_drive_limits( hwmgr->platform_descriptor.maxOverdriveVDDC = 0; hwmgr->platform_descriptor.overdriveVDDCStep = 0; - if (hwmgr->platform_descriptor.overdriveLimit.engineClock == 0 || - hwmgr->platform_descriptor.overdriveLimit.memoryClock == 0) { - hwmgr->od_enabled = false; - pr_debug("OverDrive feature not support by VBIOS\n"); - } - return 0; } diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c b/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c index 782e2098824d..c98e5de777cd 100644 --- a/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c +++ b/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c @@ -81,6 +81,7 @@ static void vega12_set_default_registry_data(struct pp_hwmgr *hwmgr) data->registry_data.disallowed_features = 0x0; data->registry_data.od_state_in_dc_support = 0; + 
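The vega10 hunk above stops blindly overwriting the top OD table entry and instead keeps the larger of the VBIOS overdrive limit and the existing entry. The patch's ternaries amount to a max(); a tiny sketch, with an assumed entry layout:

    #include <stdint.h>

    struct od_entry { uint32_t clk; uint16_t vddc; };

    /* Raise the top entry to at least the VBIOS limit; never lower it. */
    static void clamp_entry_up(struct od_entry *e, uint32_t limit_clk, uint16_t max_vddc)
    {
        e->clk = limit_clk > e->clk ? limit_clk : e->clk;
        e->vddc = max_vddc > e->vddc ? max_vddc : e->vddc;
    }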
data->registry_data.thermal_support = 1; data->registry_data.skip_baco_hardware = 0; data->registry_data.log_avfs_param = 0; @@ -803,6 +804,9 @@ static int vega12_init_smc_table(struct pp_hwmgr *hwmgr) data->vbios_boot_state.soc_clock = boot_up_values.ulSocClk; data->vbios_boot_state.dcef_clock = boot_up_values.ulDCEFClk; data->vbios_boot_state.uc_cooling_id = boot_up_values.ucCoolingID; + data->vbios_boot_state.eclock = boot_up_values.ulEClk; + data->vbios_boot_state.dclock = boot_up_values.ulDClk; + data->vbios_boot_state.vclock = boot_up_values.ulVClk; smum_send_msg_to_smc_with_parameter(hwmgr, PPSMC_MSG_SetMinDeepSleepDcefclk, (uint32_t)(data->vbios_boot_state.dcef_clock / 100)); diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.h b/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.h index e81ded1ec198..49b38df8c7f2 100644 --- a/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.h +++ b/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.h @@ -167,6 +167,9 @@ struct vega12_vbios_boot_state { uint32_t mem_clock; uint32_t soc_clock; uint32_t dcef_clock; + uint32_t eclock; + uint32_t dclock; + uint32_t vclock; }; #define DPMTABLE_OD_UPDATE_SCLK 0x00000001 diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_processpptables.c b/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_processpptables.c index 888ddca902d8..29914700ee82 100644 --- a/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_processpptables.c +++ b/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_processpptables.c @@ -230,6 +230,8 @@ static int append_vbios_pptable(struct pp_hwmgr *hwmgr, PPTable_t *ppsmc_pptable ppsmc_pptable->AcgThresholdFreqLow = 0xFFFF; } + ppsmc_pptable->Vr2_I2C_address = smc_dpm_table.Vr2_I2C_address; + return 0; } diff --git a/drivers/gpu/drm/amd/powerplay/inc/vega12/smu9_driver_if.h b/drivers/gpu/drm/amd/powerplay/inc/vega12/smu9_driver_if.h index 2f8a3b983cce..b08526fd1619 100644 --- a/drivers/gpu/drm/amd/powerplay/inc/vega12/smu9_driver_if.h +++ b/drivers/gpu/drm/amd/powerplay/inc/vega12/smu9_driver_if.h @@ -499,7 +499,10 @@ typedef struct { uint8_t AcgGfxclkSpreadPercent; uint16_t AcgGfxclkSpreadFreq; - uint32_t BoardReserved[10]; + uint8_t Vr2_I2C_address; + uint8_t padding_vr2[3]; + + uint32_t BoardReserved[9]; uint32_t MmHubPadding[7]; diff --git a/drivers/gpu/drm/amd/powerplay/smumgr/smu7_smumgr.c b/drivers/gpu/drm/amd/powerplay/smumgr/smu7_smumgr.c index d644a9bb9078..9f407c48d4f0 100644 --- a/drivers/gpu/drm/amd/powerplay/smumgr/smu7_smumgr.c +++ b/drivers/gpu/drm/amd/powerplay/smumgr/smu7_smumgr.c @@ -381,6 +381,7 @@ int smu7_request_smu_load_fw(struct pp_hwmgr *hwmgr) uint32_t fw_to_load; int result = 0; struct SMU_DRAMData_TOC *toc; + uint32_t num_entries = 0; if (!hwmgr->reload_fw) { pr_info("skip reloading...\n"); @@ -422,41 +423,41 @@ int smu7_request_smu_load_fw(struct pp_hwmgr *hwmgr) } toc = (struct SMU_DRAMData_TOC *)smu_data->header; - toc->num_entries = 0; toc->structure_version = 1; PP_ASSERT_WITH_CODE(0 == smu7_populate_single_firmware_entry(hwmgr, - UCODE_ID_RLC_G, &toc->entry[toc->num_entries++]), + UCODE_ID_RLC_G, &toc->entry[num_entries++]), "Failed to Get Firmware Entry.", return -EINVAL); PP_ASSERT_WITH_CODE(0 == smu7_populate_single_firmware_entry(hwmgr, - UCODE_ID_CP_CE, &toc->entry[toc->num_entries++]), + UCODE_ID_CP_CE, &toc->entry[num_entries++]), "Failed to Get Firmware Entry.", return -EINVAL); PP_ASSERT_WITH_CODE(0 == smu7_populate_single_firmware_entry(hwmgr, - UCODE_ID_CP_PFP, &toc->entry[toc->num_entries++]), + UCODE_ID_CP_PFP, &toc->entry[num_entries++]), "Failed to Get 
Firmware Entry.", return -EINVAL); PP_ASSERT_WITH_CODE(0 == smu7_populate_single_firmware_entry(hwmgr, - UCODE_ID_CP_ME, &toc->entry[toc->num_entries++]), + UCODE_ID_CP_ME, &toc->entry[num_entries++]), "Failed to Get Firmware Entry.", return -EINVAL); PP_ASSERT_WITH_CODE(0 == smu7_populate_single_firmware_entry(hwmgr, - UCODE_ID_CP_MEC, &toc->entry[toc->num_entries++]), + UCODE_ID_CP_MEC, &toc->entry[num_entries++]), "Failed to Get Firmware Entry.", return -EINVAL); PP_ASSERT_WITH_CODE(0 == smu7_populate_single_firmware_entry(hwmgr, - UCODE_ID_CP_MEC_JT1, &toc->entry[toc->num_entries++]), + UCODE_ID_CP_MEC_JT1, &toc->entry[num_entries++]), "Failed to Get Firmware Entry.", return -EINVAL); PP_ASSERT_WITH_CODE(0 == smu7_populate_single_firmware_entry(hwmgr, - UCODE_ID_CP_MEC_JT2, &toc->entry[toc->num_entries++]), + UCODE_ID_CP_MEC_JT2, &toc->entry[num_entries++]), "Failed to Get Firmware Entry.", return -EINVAL); PP_ASSERT_WITH_CODE(0 == smu7_populate_single_firmware_entry(hwmgr, - UCODE_ID_SDMA0, &toc->entry[toc->num_entries++]), + UCODE_ID_SDMA0, &toc->entry[num_entries++]), "Failed to Get Firmware Entry.", return -EINVAL); PP_ASSERT_WITH_CODE(0 == smu7_populate_single_firmware_entry(hwmgr, - UCODE_ID_SDMA1, &toc->entry[toc->num_entries++]), + UCODE_ID_SDMA1, &toc->entry[num_entries++]), "Failed to Get Firmware Entry.", return -EINVAL); if (!hwmgr->not_vf) PP_ASSERT_WITH_CODE(0 == smu7_populate_single_firmware_entry(hwmgr, - UCODE_ID_MEC_STORAGE, &toc->entry[toc->num_entries++]), + UCODE_ID_MEC_STORAGE, &toc->entry[num_entries++]), "Failed to Get Firmware Entry.", return -EINVAL); + toc->num_entries = num_entries; smu7_send_msg_to_smc_with_parameter(hwmgr, PPSMC_MSG_DRV_DRAM_ADDR_HI, upper_32_bits(smu_data->header_buffer.mc_addr)); smu7_send_msg_to_smc_with_parameter(hwmgr, PPSMC_MSG_DRV_DRAM_ADDR_LO, lower_32_bits(smu_data->header_buffer.mc_addr)); diff --git a/drivers/gpu/drm/arm/malidp_drv.c b/drivers/gpu/drm/arm/malidp_drv.c index 8d20faa198cf..0a788d76ed5f 100644 --- a/drivers/gpu/drm/arm/malidp_drv.c +++ b/drivers/gpu/drm/arm/malidp_drv.c @@ -278,7 +278,6 @@ static int malidp_init(struct drm_device *drm) static void malidp_fini(struct drm_device *drm) { - drm_atomic_helper_shutdown(drm); drm_mode_config_cleanup(drm); } @@ -646,6 +645,7 @@ vblank_fail: malidp_de_irq_fini(drm); drm->irq_enabled = false; irq_init_fail: + drm_atomic_helper_shutdown(drm); component_unbind_all(dev, drm); bind_fail: of_node_put(malidp->crtc.port); @@ -681,6 +681,7 @@ static void malidp_unbind(struct device *dev) malidp_se_irq_fini(drm); malidp_de_irq_fini(drm); drm->irq_enabled = false; + drm_atomic_helper_shutdown(drm); component_unbind_all(dev, drm); of_node_put(malidp->crtc.port); malidp->crtc.port = NULL; diff --git a/drivers/gpu/drm/arm/malidp_hw.c b/drivers/gpu/drm/arm/malidp_hw.c index d789b46dc817..069783e715f1 100644 --- a/drivers/gpu/drm/arm/malidp_hw.c +++ b/drivers/gpu/drm/arm/malidp_hw.c @@ -634,7 +634,8 @@ const struct malidp_hw malidp_device[MALIDP_MAX_DEVICES] = { .vsync_irq = MALIDP500_DE_IRQ_VSYNC, }, .se_irq_map = { - .irq_mask = MALIDP500_SE_IRQ_CONF_MODE, + .irq_mask = MALIDP500_SE_IRQ_CONF_MODE | + MALIDP500_SE_IRQ_GLOBAL, .vsync_irq = 0, }, .dc_irq_map = { diff --git a/drivers/gpu/drm/arm/malidp_planes.c b/drivers/gpu/drm/arm/malidp_planes.c index 7a44897c50fe..29409a65d864 100644 --- a/drivers/gpu/drm/arm/malidp_planes.c +++ b/drivers/gpu/drm/arm/malidp_planes.c @@ -23,6 +23,7 @@ /* Layer specific register offsets */ #define MALIDP_LAYER_FORMAT 0x000 +#define LAYER_FORMAT_MASK 0x3f 
#define MALIDP_LAYER_CONTROL 0x004 #define LAYER_ENABLE (1 << 0) #define LAYER_FLOWCFG_MASK 7 @@ -235,8 +236,8 @@ static int malidp_de_plane_check(struct drm_plane *plane, if (state->rotation & MALIDP_ROTATED_MASK) { int val; - val = mp->hwdev->hw->rotmem_required(mp->hwdev, state->crtc_h, - state->crtc_w, + val = mp->hwdev->hw->rotmem_required(mp->hwdev, state->crtc_w, + state->crtc_h, fb->format->format); if (val < 0) return val; @@ -337,7 +338,9 @@ static void malidp_de_plane_update(struct drm_plane *plane, dest_w = plane->state->crtc_w; dest_h = plane->state->crtc_h; - malidp_hw_write(mp->hwdev, ms->format, mp->layer->base); + val = malidp_hw_read(mp->hwdev, mp->layer->base); + val = (val & ~LAYER_FORMAT_MASK) | ms->format; + malidp_hw_write(mp->hwdev, val, mp->layer->base); for (i = 0; i < ms->n_planes; i++) { /* calculate the offset for the layer's plane registers */ diff --git a/drivers/gpu/drm/armada/armada_crtc.c b/drivers/gpu/drm/armada/armada_crtc.c index 03eeee11dd5b..42a40daff132 100644 --- a/drivers/gpu/drm/armada/armada_crtc.c +++ b/drivers/gpu/drm/armada/armada_crtc.c @@ -519,8 +519,9 @@ static irqreturn_t armada_drm_irq(int irq, void *arg) u32 v, stat = readl_relaxed(dcrtc->base + LCD_SPU_IRQ_ISR); /* - * This is rediculous - rather than writing bits to clear, we - * have to set the actual status register value. This is racy. + * Reading the ISR appears to clear bits provided CLEAN_SPU_IRQ_ISR + * is set. Writing has some other effect to acknowledge the IRQ - + * without this, we only get a single IRQ. */ writel_relaxed(0, dcrtc->base + LCD_SPU_IRQ_ISR); @@ -1116,16 +1117,22 @@ armada_drm_crtc_set_property(struct drm_crtc *crtc, static int armada_drm_crtc_enable_vblank(struct drm_crtc *crtc) { struct armada_crtc *dcrtc = drm_to_armada_crtc(crtc); + unsigned long flags; + spin_lock_irqsave(&dcrtc->irq_lock, flags); armada_drm_crtc_enable_irq(dcrtc, VSYNC_IRQ_ENA); + spin_unlock_irqrestore(&dcrtc->irq_lock, flags); return 0; } static void armada_drm_crtc_disable_vblank(struct drm_crtc *crtc) { struct armada_crtc *dcrtc = drm_to_armada_crtc(crtc); + unsigned long flags; + spin_lock_irqsave(&dcrtc->irq_lock, flags); armada_drm_crtc_disable_irq(dcrtc, VSYNC_IRQ_ENA); + spin_unlock_irqrestore(&dcrtc->irq_lock, flags); } static const struct drm_crtc_funcs armada_crtc_funcs = { @@ -1415,6 +1422,7 @@ static int armada_drm_crtc_create(struct drm_device *drm, struct device *dev, CFG_PDWN64x66, dcrtc->base + LCD_SPU_SRAM_PARA1); writel_relaxed(0x2032ff81, dcrtc->base + LCD_SPU_DMA_CTRL1); writel_relaxed(dcrtc->irq_ena, dcrtc->base + LCD_SPU_IRQ_ENA); + readl_relaxed(dcrtc->base + LCD_SPU_IRQ_ISR); writel_relaxed(0, dcrtc->base + LCD_SPU_IRQ_ISR); ret = devm_request_irq(dev, irq, armada_drm_irq, 0, "armada_drm_crtc", diff --git a/drivers/gpu/drm/armada/armada_hw.h b/drivers/gpu/drm/armada/armada_hw.h index 27319a8335e2..345dc4d0851e 100644 --- a/drivers/gpu/drm/armada/armada_hw.h +++ b/drivers/gpu/drm/armada/armada_hw.h @@ -160,6 +160,7 @@ enum { CFG_ALPHAM_GRA = 0x1 << 16, CFG_ALPHAM_CFG = 0x2 << 16, CFG_ALPHA_MASK = 0xff << 8, +#define CFG_ALPHA(x) ((x) << 8) CFG_PIXCMD_MASK = 0xff, }; diff --git a/drivers/gpu/drm/armada/armada_overlay.c b/drivers/gpu/drm/armada/armada_overlay.c index c391955009d6..afa7ded3ae31 100644 --- a/drivers/gpu/drm/armada/armada_overlay.c +++ b/drivers/gpu/drm/armada/armada_overlay.c @@ -28,6 +28,7 @@ struct armada_ovl_plane_properties { uint16_t contrast; uint16_t saturation; uint32_t colorkey_mode; + uint32_t colorkey_enable; }; struct armada_ovl_plane { @@ 
-54,11 +55,13 @@ armada_ovl_update_attr(struct armada_ovl_plane_properties *prop, writel_relaxed(0x00002000, dcrtc->base + LCD_SPU_CBSH_HUE); spin_lock_irq(&dcrtc->irq_lock); - armada_updatel(prop->colorkey_mode | CFG_ALPHAM_GRA, - CFG_CKMODE_MASK | CFG_ALPHAM_MASK | CFG_ALPHA_MASK, - dcrtc->base + LCD_SPU_DMA_CTRL1); - - armada_updatel(ADV_GRACOLORKEY, 0, dcrtc->base + LCD_SPU_ADV_REG); + armada_updatel(prop->colorkey_mode, + CFG_CKMODE_MASK | CFG_ALPHAM_MASK | CFG_ALPHA_MASK, + dcrtc->base + LCD_SPU_DMA_CTRL1); + if (dcrtc->variant->has_spu_adv_reg) + armada_updatel(prop->colorkey_enable, + ADV_GRACOLORKEY | ADV_VIDCOLORKEY, + dcrtc->base + LCD_SPU_ADV_REG); spin_unlock_irq(&dcrtc->irq_lock); } @@ -321,8 +324,17 @@ static int armada_ovl_plane_set_property(struct drm_plane *plane, dplane->prop.colorkey_vb |= K2B(val); update_attr = true; } else if (property == priv->colorkey_mode_prop) { - dplane->prop.colorkey_mode &= ~CFG_CKMODE_MASK; - dplane->prop.colorkey_mode |= CFG_CKMODE(val); + if (val == CKMODE_DISABLE) { + dplane->prop.colorkey_mode = + CFG_CKMODE(CKMODE_DISABLE) | + CFG_ALPHAM_CFG | CFG_ALPHA(255); + dplane->prop.colorkey_enable = 0; + } else { + dplane->prop.colorkey_mode = + CFG_CKMODE(val) | + CFG_ALPHAM_GRA | CFG_ALPHA(0); + dplane->prop.colorkey_enable = ADV_GRACOLORKEY; + } update_attr = true; } else if (property == priv->brightness_prop) { dplane->prop.brightness = val - 256; @@ -453,7 +465,9 @@ int armada_overlay_plane_create(struct drm_device *dev, unsigned long crtcs) dplane->prop.colorkey_yr = 0xfefefe00; dplane->prop.colorkey_ug = 0x01010100; dplane->prop.colorkey_vb = 0x01010100; - dplane->prop.colorkey_mode = CFG_CKMODE(CKMODE_RGB); + dplane->prop.colorkey_mode = CFG_CKMODE(CKMODE_RGB) | + CFG_ALPHAM_GRA | CFG_ALPHA(0); + dplane->prop.colorkey_enable = ADV_GRACOLORKEY; dplane->prop.brightness = 0; dplane->prop.contrast = 0x4000; dplane->prop.saturation = 0x4000; diff --git a/drivers/gpu/drm/atmel-hlcdc/atmel_hlcdc_plane.c b/drivers/gpu/drm/atmel-hlcdc/atmel_hlcdc_plane.c index 73c875db45f4..47e0992f3908 100644 --- a/drivers/gpu/drm/atmel-hlcdc/atmel_hlcdc_plane.c +++ b/drivers/gpu/drm/atmel-hlcdc/atmel_hlcdc_plane.c @@ -839,7 +839,7 @@ static int atmel_hlcdc_plane_init_properties(struct atmel_hlcdc_plane *plane) return ret; } - if (desc->layout.xstride && desc->layout.pstride) { + if (desc->layout.xstride[0] && desc->layout.pstride[0]) { int ret; ret = drm_plane_create_rotation_property(&plane->base, diff --git a/drivers/gpu/drm/bridge/adv7511/adv7511_drv.c b/drivers/gpu/drm/bridge/adv7511/adv7511_drv.c index 73021b388e12..dd3ff2f2cdce 100644 --- a/drivers/gpu/drm/bridge/adv7511/adv7511_drv.c +++ b/drivers/gpu/drm/bridge/adv7511/adv7511_drv.c @@ -429,6 +429,18 @@ static void adv7511_hpd_work(struct work_struct *work) else status = connector_status_disconnected; + /* + * The bridge resets its registers on unplug. So when we get a plug + * event and we're already supposed to be powered, cycle the bridge to + * restore its state. 
+ */ + if (status == connector_status_connected && + adv7511->connector.status == connector_status_disconnected && + adv7511->powered) { + regcache_mark_dirty(adv7511->regmap); + adv7511_power_on(adv7511); + } + if (adv7511->connector.status != status) { adv7511->connector.status = status; if (status == connector_status_disconnected) diff --git a/drivers/gpu/drm/bridge/sil-sii8620.c b/drivers/gpu/drm/bridge/sil-sii8620.c index 7ab36042a822..a6e8f4591e63 100644 --- a/drivers/gpu/drm/bridge/sil-sii8620.c +++ b/drivers/gpu/drm/bridge/sil-sii8620.c @@ -14,6 +14,7 @@ #include #include #include +#include #include #include @@ -36,8 +37,11 @@ #define SII8620_BURST_BUF_LEN 288 #define VAL_RX_HDMI_CTRL2_DEFVAL VAL_RX_HDMI_CTRL2_IDLE_CNT(3) -#define MHL1_MAX_LCLK 225000 -#define MHL3_MAX_LCLK 600000 + +#define MHL1_MAX_PCLK 75000 +#define MHL1_MAX_PCLK_PP_MODE 150000 +#define MHL3_MAX_PCLK 200000 +#define MHL3_MAX_PCLK_PP_MODE 300000 enum sii8620_mode { CM_DISCONNECTED, @@ -69,9 +73,7 @@ struct sii8620 { struct regulator_bulk_data supplies[2]; struct mutex lock; /* context lock, protects fields below */ int error; - int pixel_clock; unsigned int use_packed_pixel:1; - int video_code; enum sii8620_mode mode; enum sii8620_sink_type sink_type; u8 cbus_status; @@ -79,7 +81,9 @@ struct sii8620 { u8 xstat[MHL_XDS_SIZE]; u8 devcap[MHL_DCAP_SIZE]; u8 xdevcap[MHL_XDC_SIZE]; - u8 avif[HDMI_INFOFRAME_SIZE(AVI)]; + bool feature_complete; + bool devcap_read; + bool sink_detected; struct edid *edid; unsigned int gen2_write_burst:1; enum sii8620_mt_state mt_state; @@ -476,7 +480,7 @@ static void sii8620_update_array(u8 *dst, u8 *src, int count) } } -static void sii8620_sink_detected(struct sii8620 *ctx, int ret) +static void sii8620_identify_sink(struct sii8620 *ctx) { static const char * const sink_str[] = { [SINK_NONE] = "NONE", @@ -487,7 +491,7 @@ static void sii8620_sink_detected(struct sii8620 *ctx, int ret) char sink_name[20]; struct device *dev = ctx->dev; - if (ret < 0) + if (!ctx->sink_detected || !ctx->devcap_read) return; sii8620_fetch_edid(ctx); @@ -496,6 +500,7 @@ static void sii8620_sink_detected(struct sii8620 *ctx, int ret) sii8620_mhl_disconnected(ctx); return; } + sii8620_set_upstream_edid(ctx); if (drm_detect_hdmi_monitor(ctx->edid)) ctx->sink_type = SINK_HDMI; @@ -508,53 +513,6 @@ static void sii8620_sink_detected(struct sii8620 *ctx, int ret) sink_str[ctx->sink_type], sink_name); } -static void sii8620_hsic_init(struct sii8620 *ctx) -{ - if (!sii8620_is_mhl3(ctx)) - return; - - sii8620_write(ctx, REG_FCGC, - BIT_FCGC_HSIC_HOSTMODE | BIT_FCGC_HSIC_ENABLE); - sii8620_setbits(ctx, REG_HRXCTRL3, - BIT_HRXCTRL3_HRX_STAY_RESET | BIT_HRXCTRL3_STATUS_EN, ~0); - sii8620_setbits(ctx, REG_TTXNUMB, MSK_TTXNUMB_TTX_NUMBPS, 4); - sii8620_setbits(ctx, REG_TRXCTRL, BIT_TRXCTRL_TRX_FROM_SE_COC, ~0); - sii8620_setbits(ctx, REG_HTXCTRL, BIT_HTXCTRL_HTX_DRVCONN1, 0); - sii8620_setbits(ctx, REG_KEEPER, MSK_KEEPER_MODE, VAL_KEEPER_MODE_HOST); - sii8620_write_seq_static(ctx, - REG_TDMLLCTL, 0, - REG_UTSRST, BIT_UTSRST_HRX_SRST | BIT_UTSRST_HTX_SRST | - BIT_UTSRST_KEEPER_SRST | BIT_UTSRST_FC_SRST, - REG_UTSRST, BIT_UTSRST_HRX_SRST | BIT_UTSRST_HTX_SRST, - REG_HRXINTL, 0xff, - REG_HRXINTH, 0xff, - REG_TTXINTL, 0xff, - REG_TTXINTH, 0xff, - REG_TRXINTL, 0xff, - REG_TRXINTH, 0xff, - REG_HTXINTL, 0xff, - REG_HTXINTH, 0xff, - REG_FCINTR0, 0xff, - REG_FCINTR1, 0xff, - REG_FCINTR2, 0xff, - REG_FCINTR3, 0xff, - REG_FCINTR4, 0xff, - REG_FCINTR5, 0xff, - REG_FCINTR6, 0xff, - REG_FCINTR7, 0xff - ); -} - -static void 
sii8620_edid_read(struct sii8620 *ctx, int ret) -{ - if (ret < 0) - return; - - sii8620_set_upstream_edid(ctx); - sii8620_hsic_init(ctx); - sii8620_enable_hpd(ctx); -} - static void sii8620_mr_devcap(struct sii8620 *ctx) { u8 dcap[MHL_DCAP_SIZE]; @@ -570,6 +528,8 @@ static void sii8620_mr_devcap(struct sii8620 *ctx) dcap[MHL_DCAP_ADOPTER_ID_H], dcap[MHL_DCAP_ADOPTER_ID_L], dcap[MHL_DCAP_DEVICE_ID_H], dcap[MHL_DCAP_DEVICE_ID_L]); sii8620_update_array(ctx->devcap, dcap, MHL_DCAP_SIZE); + ctx->devcap_read = true; + sii8620_identify_sink(ctx); } static void sii8620_mr_xdevcap(struct sii8620 *ctx) @@ -807,6 +767,7 @@ static void sii8620_burst_rx_all(struct sii8620 *ctx) static void sii8620_fetch_edid(struct sii8620 *ctx) { u8 lm_ddc, ddc_cmd, int3, cbus; + unsigned long timeout; int fetched, i; int edid_len = EDID_LENGTH; u8 *edid; @@ -856,23 +817,31 @@ static void sii8620_fetch_edid(struct sii8620 *ctx) REG_DDC_CMD, ddc_cmd | VAL_DDC_CMD_ENH_DDC_READ_NO_ACK ); - do { - int3 = sii8620_readb(ctx, REG_INTR3); + int3 = 0; + timeout = jiffies + msecs_to_jiffies(200); + for (;;) { cbus = sii8620_readb(ctx, REG_CBUS_STATUS); - - if (int3 & BIT_DDC_CMD_DONE) - break; - - if (!(cbus & BIT_CBUS_STATUS_CBUS_CONNECTED)) { + if (~cbus & BIT_CBUS_STATUS_CBUS_CONNECTED) { + kfree(edid); + edid = NULL; + goto end; + } + if (int3 & BIT_DDC_CMD_DONE) { + if (sii8620_readb(ctx, REG_DDC_DOUT_CNT) + >= FETCH_SIZE) + break; + } else { + int3 = sii8620_readb(ctx, REG_INTR3); + } + if (time_is_before_jiffies(timeout)) { + ctx->error = -ETIMEDOUT; + dev_err(ctx->dev, "timeout during EDID read\n"); kfree(edid); edid = NULL; goto end; } - } while (1); - - sii8620_readb(ctx, REG_DDC_STATUS); - while (sii8620_readb(ctx, REG_DDC_DOUT_CNT) < FETCH_SIZE) usleep_range(10, 20); + } sii8620_read_buf(ctx, REG_DDC_DATA, edid + fetched, FETCH_SIZE); if (fetched + FETCH_SIZE == EDID_LENGTH) { @@ -971,8 +940,17 @@ static int sii8620_hw_on(struct sii8620 *ctx) ret = regulator_bulk_enable(ARRAY_SIZE(ctx->supplies), ctx->supplies); if (ret) return ret; + usleep_range(10000, 20000); - return clk_prepare_enable(ctx->clk_xtal); + ret = clk_prepare_enable(ctx->clk_xtal); + if (ret) + return ret; + + msleep(100); + gpiod_set_value(ctx->gpio_reset, 0); + msleep(100); + + return 0; } static int sii8620_hw_off(struct sii8620 *ctx) @@ -982,17 +960,6 @@ static int sii8620_hw_off(struct sii8620 *ctx) return regulator_bulk_disable(ARRAY_SIZE(ctx->supplies), ctx->supplies); } -static void sii8620_hw_reset(struct sii8620 *ctx) -{ - usleep_range(10000, 20000); - gpiod_set_value(ctx->gpio_reset, 0); - usleep_range(5000, 20000); - gpiod_set_value(ctx->gpio_reset, 1); - usleep_range(10000, 20000); - gpiod_set_value(ctx->gpio_reset, 0); - msleep(300); -} - static void sii8620_cbus_reset(struct sii8620 *ctx) { sii8620_write(ctx, REG_PWD_SRST, BIT_PWD_SRST_CBUS_RST @@ -1055,23 +1022,23 @@ static void sii8620_set_format(struct sii8620 *ctx) BIT_M3_P0CTRL_MHL3_P0_PIXEL_MODE_PACKED, ctx->use_packed_pixel ? 
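The reworked EDID fetch above bounds its DDC polling with a jiffies deadline instead of looping until the command completes. A portable sketch of that deadline-poll shape, substituting a monotonic clock for jiffies (the driver additionally sleeps 10 to 20 microseconds between polls, which this version leaves out):

    #include <stdbool.h>
    #include <time.h>

    static long long now_ms(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec * 1000LL + ts.tv_nsec / 1000000;
    }

    /* Poll done() until it reports completion or the deadline passes. */
    static int poll_with_deadline(bool (*done)(void), int timeout_ms)
    {
        long long deadline = now_ms() + timeout_ms;

        for (;;) {
            if (done())
                return 0;
            if (now_ms() > deadline)
                return -1;  /* -ETIMEDOUT in the driver */
        }
    }

Bounding the loop is what turns a wedged DDC transaction into a logged error instead of a hung worker.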
~0 : 0); } else { - if (ctx->use_packed_pixel) + if (ctx->use_packed_pixel) { sii8620_write_seq_static(ctx, REG_VID_MODE, BIT_VID_MODE_M1080P, REG_MHL_TOP_CTL, BIT_MHL_TOP_CTL_MHL_PP_SEL | 1, REG_MHLTX_CTL6, 0x60 ); - else + } else { sii8620_write_seq_static(ctx, REG_VID_MODE, 0, REG_MHL_TOP_CTL, 1, REG_MHLTX_CTL6, 0xa0 ); + } } if (ctx->use_packed_pixel) - out_fmt = VAL_TPI_FORMAT(YCBCR422, FULL) | - BIT_TPI_OUTPUT_CSCMODE709; + out_fmt = VAL_TPI_FORMAT(YCBCR422, FULL); else out_fmt = VAL_TPI_FORMAT(RGB, FULL); @@ -1128,18 +1095,28 @@ static ssize_t mhl3_infoframe_pack(struct mhl3_infoframe *frame, return frm_len; } -static void sii8620_set_infoframes(struct sii8620 *ctx) +static void sii8620_set_infoframes(struct sii8620 *ctx, + struct drm_display_mode *mode) { struct mhl3_infoframe mhl_frm; union hdmi_infoframe frm; u8 buf[31]; int ret; + ret = drm_hdmi_avi_infoframe_from_display_mode(&frm.avi, + mode, + true); + if (ctx->use_packed_pixel) + frm.avi.colorspace = HDMI_COLORSPACE_YUV422; + + if (!ret) + ret = hdmi_avi_infoframe_pack(&frm.avi, buf, ARRAY_SIZE(buf)); + if (ret > 0) + sii8620_write_buf(ctx, REG_TPI_AVI_CHSUM, buf + 3, ret - 3); + if (!sii8620_is_mhl3(ctx) || !ctx->use_packed_pixel) { sii8620_write(ctx, REG_TPI_SC, BIT_TPI_SC_TPI_OUTPUT_MODE_0_HDMI); - sii8620_write_buf(ctx, REG_TPI_AVI_CHSUM, ctx->avif + 3, - ARRAY_SIZE(ctx->avif) - 3); sii8620_write(ctx, REG_PKT_FILTER_0, BIT_PKT_FILTER_0_DROP_CEA_GAMUT_PKT | BIT_PKT_FILTER_0_DROP_MPEG_PKT | @@ -1148,16 +1125,6 @@ static void sii8620_set_infoframes(struct sii8620 *ctx) return; } - ret = hdmi_avi_infoframe_init(&frm.avi); - frm.avi.colorspace = HDMI_COLORSPACE_YUV422; - frm.avi.active_aspect = HDMI_ACTIVE_ASPECT_PICTURE; - frm.avi.picture_aspect = HDMI_PICTURE_ASPECT_16_9; - frm.avi.colorimetry = HDMI_COLORIMETRY_ITU_709; - frm.avi.video_code = ctx->video_code; - if (!ret) - ret = hdmi_avi_infoframe_pack(&frm.avi, buf, ARRAY_SIZE(buf)); - if (ret > 0) - sii8620_write_buf(ctx, REG_TPI_AVI_CHSUM, buf + 3, ret - 3); sii8620_write(ctx, REG_PKT_FILTER_0, BIT_PKT_FILTER_0_DROP_CEA_GAMUT_PKT | BIT_PKT_FILTER_0_DROP_MPEG_PKT | @@ -1177,6 +1144,9 @@ static void sii8620_set_infoframes(struct sii8620 *ctx) static void sii8620_start_video(struct sii8620 *ctx) { + struct drm_display_mode *mode = + &ctx->bridge.encoder->crtc->state->adjusted_mode; + if (!sii8620_is_mhl3(ctx)) sii8620_stop_video(ctx); @@ -1195,8 +1165,14 @@ static void sii8620_start_video(struct sii8620 *ctx) sii8620_set_format(ctx); if (!sii8620_is_mhl3(ctx)) { - sii8620_mt_write_stat(ctx, MHL_DST_REG(LINK_MODE), - MHL_DST_LM_CLK_MODE_NORMAL | MHL_DST_LM_PATH_ENABLED); + u8 link_mode = MHL_DST_LM_PATH_ENABLED; + + if (ctx->use_packed_pixel) + link_mode |= MHL_DST_LM_CLK_MODE_PACKED_PIXEL; + else + link_mode |= MHL_DST_LM_CLK_MODE_NORMAL; + + sii8620_mt_write_stat(ctx, MHL_DST_REG(LINK_MODE), link_mode); sii8620_set_auto_zone(ctx); } else { static const struct { @@ -1213,10 +1189,10 @@ static void sii8620_start_video(struct sii8620 *ctx) MHL_XDS_LINK_RATE_6_0_GBPS, 0x40 }, }; u8 p0_ctrl = BIT_M3_P0CTRL_MHL3_P0_PORT_EN; - int clk = ctx->pixel_clock * (ctx->use_packed_pixel ? 2 : 3); + int clk = mode->clock * (ctx->use_packed_pixel ? 
2 : 3); int i; - for (i = 0; i < ARRAY_SIZE(clk_spec); ++i) + for (i = 0; i < ARRAY_SIZE(clk_spec) - 1; ++i) if (clk < clk_spec[i].max_clk) break; @@ -1242,7 +1218,7 @@ static void sii8620_start_video(struct sii8620 *ctx) clk_spec[i].link_rate); } - sii8620_set_infoframes(ctx); + sii8620_set_infoframes(ctx, mode); } static void sii8620_disable_hpd(struct sii8620 *ctx) @@ -1534,6 +1510,16 @@ static void sii8620_set_mode(struct sii8620 *ctx, enum sii8620_mode mode) ); } +static void sii8620_hpd_unplugged(struct sii8620 *ctx) +{ + sii8620_disable_hpd(ctx); + ctx->sink_type = SINK_NONE; + ctx->sink_detected = false; + ctx->feature_complete = false; + kfree(ctx->edid); + ctx->edid = NULL; +} + static void sii8620_disconnect(struct sii8620 *ctx) { sii8620_disable_gen2_write_burst(ctx); @@ -1561,7 +1547,7 @@ static void sii8620_disconnect(struct sii8620 *ctx) REG_MHL_DP_CTL6, 0x2A, REG_MHL_DP_CTL7, 0x03 ); - sii8620_disable_hpd(ctx); + sii8620_hpd_unplugged(ctx); sii8620_write_seq_static(ctx, REG_M3_CTRL, VAL_M3_CTRL_MHL3_VALUE, REG_MHL_COC_CTL1, 0x07, @@ -1609,10 +1595,8 @@ static void sii8620_disconnect(struct sii8620 *ctx) memset(ctx->xstat, 0, sizeof(ctx->xstat)); memset(ctx->devcap, 0, sizeof(ctx->devcap)); memset(ctx->xdevcap, 0, sizeof(ctx->xdevcap)); + ctx->devcap_read = false; ctx->cbus_status = 0; - ctx->sink_type = SINK_NONE; - kfree(ctx->edid); - ctx->edid = NULL; sii8620_mt_cleanup(ctx); } @@ -1699,17 +1683,18 @@ static void sii8620_status_dcap_ready(struct sii8620 *ctx) static void sii8620_status_changed_path(struct sii8620 *ctx) { - if (ctx->stat[MHL_DST_LINK_MODE] & MHL_DST_LM_PATH_ENABLED) { - sii8620_mt_write_stat(ctx, MHL_DST_REG(LINK_MODE), - MHL_DST_LM_CLK_MODE_NORMAL - | MHL_DST_LM_PATH_ENABLED); - if (!sii8620_is_mhl3(ctx)) - sii8620_mt_read_devcap(ctx, false); - sii8620_mt_set_cont(ctx, sii8620_sink_detected); - } else { - sii8620_mt_write_stat(ctx, MHL_DST_REG(LINK_MODE), - MHL_DST_LM_CLK_MODE_NORMAL); - } + u8 link_mode; + + if (ctx->use_packed_pixel) + link_mode = MHL_DST_LM_CLK_MODE_PACKED_PIXEL; + else + link_mode = MHL_DST_LM_CLK_MODE_NORMAL; + + if (ctx->stat[MHL_DST_LINK_MODE] & MHL_DST_LM_PATH_ENABLED) + link_mode |= MHL_DST_LM_PATH_ENABLED; + + sii8620_mt_write_stat(ctx, MHL_DST_REG(LINK_MODE), + link_mode); } static void sii8620_msc_mr_write_stat(struct sii8620 *ctx) @@ -1722,9 +1707,14 @@ static void sii8620_msc_mr_write_stat(struct sii8620 *ctx) sii8620_update_array(ctx->stat, st, MHL_DST_SIZE); sii8620_update_array(ctx->xstat, xst, MHL_XDS_SIZE); - if (ctx->stat[MHL_DST_CONNECTED_RDY] & MHL_DST_CONN_DCAP_RDY) + if (ctx->stat[MHL_DST_CONNECTED_RDY] & st[MHL_DST_CONNECTED_RDY] & + MHL_DST_CONN_DCAP_RDY) { sii8620_status_dcap_ready(ctx); + if (!sii8620_is_mhl3(ctx)) + sii8620_mt_read_devcap(ctx, false); + } + if (st[MHL_DST_LINK_MODE] & MHL_DST_LM_PATH_ENABLED) sii8620_status_changed_path(ctx); } @@ -1808,8 +1798,11 @@ static void sii8620_msc_mr_set_int(struct sii8620 *ctx) } if (ints[MHL_INT_RCHANGE] & MHL_INT_RC_FEAT_REQ) sii8620_send_features(ctx); - if (ints[MHL_INT_RCHANGE] & MHL_INT_RC_FEAT_COMPLETE) - sii8620_edid_read(ctx, 0); + if (ints[MHL_INT_RCHANGE] & MHL_INT_RC_FEAT_COMPLETE) { + ctx->feature_complete = true; + if (ctx->edid) + sii8620_enable_hpd(ctx); + } } static struct sii8620_mt_msg *sii8620_msc_msg_first(struct sii8620 *ctx) @@ -1884,6 +1877,15 @@ static void sii8620_irq_msc(struct sii8620 *ctx) if (stat & BIT_CBUS_MSC_MR_WRITE_STAT) sii8620_msc_mr_write_stat(ctx); + if (stat & BIT_CBUS_HPD_CHG) { + if (ctx->cbus_status & 
BIT_CBUS_STATUS_CBUS_HPD) { + ctx->sink_detected = true; + sii8620_identify_sink(ctx); + } else { + sii8620_hpd_unplugged(ctx); + } + } + if (stat & BIT_CBUS_MSC_MR_SET_INT) sii8620_msc_mr_set_int(ctx); @@ -1931,14 +1933,6 @@ static void sii8620_irq_edid(struct sii8620 *ctx) ctx->mt_state = MT_STATE_DONE; } -static void sii8620_scdt_high(struct sii8620 *ctx) -{ - sii8620_write_seq_static(ctx, - REG_INTR8_MASK, BIT_CEA_NEW_AVI | BIT_CEA_NEW_VSI, - REG_TPI_SC, BIT_TPI_SC_TPI_OUTPUT_MODE_0_HDMI, - ); -} - static void sii8620_irq_scdt(struct sii8620 *ctx) { u8 stat = sii8620_readb(ctx, REG_INTR5); @@ -1946,53 +1940,13 @@ static void sii8620_irq_scdt(struct sii8620 *ctx) if (stat & BIT_INTR_SCDT_CHANGE) { u8 cstat = sii8620_readb(ctx, REG_TMDS_CSTAT_P3); - if (cstat & BIT_TMDS_CSTAT_P3_SCDT) { - if (ctx->sink_type == SINK_HDMI) - /* enable infoframe interrupt */ - sii8620_scdt_high(ctx); - else - sii8620_start_video(ctx); - } + if (cstat & BIT_TMDS_CSTAT_P3_SCDT) + sii8620_start_video(ctx); } sii8620_write(ctx, REG_INTR5, stat); } -static void sii8620_new_vsi(struct sii8620 *ctx) -{ - u8 vsif[11]; - - sii8620_write(ctx, REG_RX_HDMI_CTRL2, - VAL_RX_HDMI_CTRL2_DEFVAL | - BIT_RX_HDMI_CTRL2_VSI_MON_SEL_VSI); - sii8620_read_buf(ctx, REG_RX_HDMI_MON_PKT_HEADER1, vsif, - ARRAY_SIZE(vsif)); -} - -static void sii8620_new_avi(struct sii8620 *ctx) -{ - sii8620_write(ctx, REG_RX_HDMI_CTRL2, VAL_RX_HDMI_CTRL2_DEFVAL); - sii8620_read_buf(ctx, REG_RX_HDMI_MON_PKT_HEADER1, ctx->avif, - ARRAY_SIZE(ctx->avif)); -} - -static void sii8620_irq_infr(struct sii8620 *ctx) -{ - u8 stat = sii8620_readb(ctx, REG_INTR8) - & (BIT_CEA_NEW_VSI | BIT_CEA_NEW_AVI); - - sii8620_write(ctx, REG_INTR8, stat); - - if (stat & BIT_CEA_NEW_VSI) - sii8620_new_vsi(ctx); - - if (stat & BIT_CEA_NEW_AVI) - sii8620_new_avi(ctx); - - if (stat & (BIT_CEA_NEW_VSI | BIT_CEA_NEW_AVI)) - sii8620_start_video(ctx); -} - static void sii8620_got_xdevcap(struct sii8620 *ctx, int ret) { if (ret < 0) @@ -2043,11 +1997,11 @@ static void sii8620_irq_ddc(struct sii8620 *ctx) if (stat & BIT_DDC_CMD_DONE) { sii8620_write(ctx, REG_INTR3_MASK, 0); - if (sii8620_is_mhl3(ctx)) + if (sii8620_is_mhl3(ctx) && !ctx->feature_complete) sii8620_mt_set_int(ctx, MHL_INT_REG(RCHANGE), MHL_INT_RC_FEAT_REQ); else - sii8620_edid_read(ctx, 0); + sii8620_enable_hpd(ctx); } sii8620_write(ctx, REG_INTR3, stat); } @@ -2074,7 +2028,6 @@ static irqreturn_t sii8620_irq_thread(int irq, void *data) { BIT_FAST_INTR_STAT_EDID, sii8620_irq_edid }, { BIT_FAST_INTR_STAT_DDC, sii8620_irq_ddc }, { BIT_FAST_INTR_STAT_SCDT, sii8620_irq_scdt }, - { BIT_FAST_INTR_STAT_INFR, sii8620_irq_infr }, }; struct sii8620 *ctx = data; u8 stats[LEN_FAST_INTR_STAT]; @@ -2112,7 +2065,6 @@ static void sii8620_cable_in(struct sii8620 *ctx) dev_err(dev, "Error powering on, %d.\n", ret); return; } - sii8620_hw_reset(ctx); sii8620_read_buf(ctx, REG_VND_IDL, ver, ARRAY_SIZE(ver)); ret = sii8620_clear_error(ctx); @@ -2268,17 +2220,43 @@ static void sii8620_detach(struct drm_bridge *bridge) rc_unregister_device(ctx->rc_dev); } +static int sii8620_is_packing_required(struct sii8620 *ctx, + const struct drm_display_mode *mode) +{ + int max_pclk, max_pclk_pp_mode; + + if (sii8620_is_mhl3(ctx)) { + max_pclk = MHL3_MAX_PCLK; + max_pclk_pp_mode = MHL3_MAX_PCLK_PP_MODE; + } else { + max_pclk = MHL1_MAX_PCLK; + max_pclk_pp_mode = MHL1_MAX_PCLK_PP_MODE; + } + + if (mode->clock < max_pclk) + return 0; + else if (mode->clock < max_pclk_pp_mode) + return 1; + else + return -1; +} + static enum drm_mode_status 
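sii8620_is_packing_required() above condenses the two MHL pixel-clock budgets into a tri-state answer that both mode_valid and mode_fixup then consume. A compact sketch of that contract, with the MODE_* results reduced to plain ints:

    /* Tri-state: 0 = plain mode fits, 1 = packed-pixel needed, -1 = neither. */
    static int packing_required(int clock_khz, int max_pclk, int max_pclk_pp)
    {
        if (clock_khz < max_pclk)
            return 0;
        if (clock_khz < max_pclk_pp)
            return 1;
        return -1;
    }

    /* mode_valid-style consumer: packing only helps if the sink supports it. */
    static int mode_is_valid(int clock_khz, int sink_can_pack,
                             int max_pclk, int max_pclk_pp)
    {
        switch (packing_required(clock_khz, max_pclk, max_pclk_pp)) {
        case 0:
            return 1;               /* MODE_OK */
        case 1:
            return sink_can_pack;   /* MODE_OK only when packing works */
        default:
            return 0;               /* MODE_CLOCK_HIGH */
        }
    }

Centralizing the decision keeps mode_valid and mode_fixup from drifting apart, which is exactly the bug class the original duplicated clock math invited.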
sii8620_mode_valid(struct drm_bridge *bridge, const struct drm_display_mode *mode) { struct sii8620 *ctx = bridge_to_sii8620(bridge); + int pack_required = sii8620_is_packing_required(ctx, mode); bool can_pack = ctx->devcap[MHL_DCAP_VID_LINK_MODE] & MHL_DCAP_VID_LINK_PPIXEL; - unsigned int max_pclk = sii8620_is_mhl3(ctx) ? MHL3_MAX_LCLK : - MHL1_MAX_LCLK; - max_pclk /= can_pack ? 2 : 3; - return (mode->clock > max_pclk) ? MODE_CLOCK_HIGH : MODE_OK; + switch (pack_required) { + case 0: + return MODE_OK; + case 1: + return (can_pack) ? MODE_OK : MODE_CLOCK_HIGH; + default: + return MODE_CLOCK_HIGH; + } } static bool sii8620_mode_fixup(struct drm_bridge *bridge, @@ -2286,43 +2264,14 @@ static bool sii8620_mode_fixup(struct drm_bridge *bridge, struct drm_display_mode *adjusted_mode) { struct sii8620 *ctx = bridge_to_sii8620(bridge); - int max_lclk; - bool ret = true; mutex_lock(&ctx->lock); - max_lclk = sii8620_is_mhl3(ctx) ? MHL3_MAX_LCLK : MHL1_MAX_LCLK; - if (max_lclk > 3 * adjusted_mode->clock) { - ctx->use_packed_pixel = 0; - goto end; - } - if ((ctx->devcap[MHL_DCAP_VID_LINK_MODE] & MHL_DCAP_VID_LINK_PPIXEL) && - max_lclk > 2 * adjusted_mode->clock) { - ctx->use_packed_pixel = 1; - goto end; - } - ret = false; -end: - if (ret) { - u8 vic = drm_match_cea_mode(adjusted_mode); - - if (!vic) { - union hdmi_infoframe frm; - u8 mhl_vic[] = { 0, 95, 94, 93, 98 }; - - /* FIXME: We need the connector here */ - drm_hdmi_vendor_infoframe_from_display_mode( - &frm.vendor.hdmi, NULL, adjusted_mode); - vic = frm.vendor.hdmi.vic; - if (vic >= ARRAY_SIZE(mhl_vic)) - vic = 0; - vic = mhl_vic[vic]; - } - ctx->video_code = vic; - ctx->pixel_clock = adjusted_mode->clock; - } + ctx->use_packed_pixel = sii8620_is_packing_required(ctx, adjusted_mode); + mutex_unlock(&ctx->lock); - return ret; + + return true; } static const struct drm_bridge_funcs sii8620_bridge_funcs = { diff --git a/drivers/gpu/drm/drm_atomic_helper.c b/drivers/gpu/drm/drm_atomic_helper.c index 130da5195f3b..81e32199d3ef 100644 --- a/drivers/gpu/drm/drm_atomic_helper.c +++ b/drivers/gpu/drm/drm_atomic_helper.c @@ -1510,8 +1510,9 @@ int drm_atomic_helper_async_check(struct drm_device *dev, { struct drm_crtc *crtc; struct drm_crtc_state *crtc_state; - struct drm_plane *plane; - struct drm_plane_state *old_plane_state, *new_plane_state; + struct drm_plane *plane = NULL; + struct drm_plane_state *old_plane_state = NULL; + struct drm_plane_state *new_plane_state = NULL; const struct drm_plane_helper_funcs *funcs; int i, n_planes = 0; @@ -1527,7 +1528,8 @@ int drm_atomic_helper_async_check(struct drm_device *dev, if (n_planes != 1) return -EINVAL; - if (!new_plane_state->crtc) + if (!new_plane_state->crtc || + old_plane_state->crtc != new_plane_state->crtc) return -EINVAL; funcs = plane->helper_private; diff --git a/drivers/gpu/drm/drm_context.c b/drivers/gpu/drm/drm_context.c index 3c4000facb36..f973d287696a 100644 --- a/drivers/gpu/drm/drm_context.c +++ b/drivers/gpu/drm/drm_context.c @@ -372,7 +372,7 @@ int drm_legacy_addctx(struct drm_device *dev, void *data, ctx->handle = drm_legacy_ctxbitmap_next(dev); } DRM_DEBUG("%d\n", ctx->handle); - if (ctx->handle == -1) { + if (ctx->handle < 0) { DRM_DEBUG("Not enough free contexts.\n"); /* Should this return -EBUSY instead? 
*/ return -ENOMEM; diff --git a/drivers/gpu/drm/drm_drv.c b/drivers/gpu/drm/drm_drv.c index b553a6f2ff0e..7af748ed1c58 100644 --- a/drivers/gpu/drm/drm_drv.c +++ b/drivers/gpu/drm/drm_drv.c @@ -369,13 +369,6 @@ EXPORT_SYMBOL(drm_dev_exit); */ void drm_dev_unplug(struct drm_device *dev) { - drm_dev_unregister(dev); - - mutex_lock(&drm_global_mutex); - if (dev->open_count == 0) - drm_dev_put(dev); - mutex_unlock(&drm_global_mutex); - /* * After synchronizing any critical read section is guaranteed to see * the new value of ->unplugged, and any critical section which might @@ -384,6 +377,13 @@ void drm_dev_unplug(struct drm_device *dev) */ dev->unplugged = true; synchronize_srcu(&drm_unplug_srcu); + + drm_dev_unregister(dev); + + mutex_lock(&drm_global_mutex); + if (dev->open_count == 0) + drm_dev_put(dev); + mutex_unlock(&drm_global_mutex); } EXPORT_SYMBOL(drm_dev_unplug); diff --git a/drivers/gpu/drm/drm_lease.c b/drivers/gpu/drm/drm_lease.c index 50c73c0a20b9..b54fb78a283c 100644 --- a/drivers/gpu/drm/drm_lease.c +++ b/drivers/gpu/drm/drm_lease.c @@ -553,24 +553,13 @@ int drm_mode_create_lease_ioctl(struct drm_device *dev, /* Clone the lessor file to create a new file for us */ DRM_DEBUG_LEASE("Allocating lease file\n"); - path_get(&lessor_file->f_path); - lessee_file = alloc_file(&lessor_file->f_path, - lessor_file->f_mode, - fops_get(lessor_file->f_inode->i_fop)); - + lessee_file = file_clone_open(lessor_file); if (IS_ERR(lessee_file)) { ret = PTR_ERR(lessee_file); goto out_lessee; } - /* Initialize the new file for DRM */ - DRM_DEBUG_LEASE("Initializing the file with %p\n", lessee_file->f_op->open); - ret = lessee_file->f_op->open(lessee_file->f_inode, lessee_file); - if (ret) - goto out_lessee_file; - lessee_priv = lessee_file->private_data; - /* Change the file to a master one */ drm_master_put(&lessee_priv->master); lessee_priv->master = lessee; @@ -588,9 +577,6 @@ int drm_mode_create_lease_ioctl(struct drm_device *dev, DRM_DEBUG_LEASE("drm_mode_create_lease_ioctl succeeded\n"); return 0; -out_lessee_file: - fput(lessee_file); - out_lessee: drm_master_put(&lessee); diff --git a/drivers/gpu/drm/drm_property.c b/drivers/gpu/drm/drm_property.c index 1f8031e30f53..cdb10f885a4f 100644 --- a/drivers/gpu/drm/drm_property.c +++ b/drivers/gpu/drm/drm_property.c @@ -532,7 +532,7 @@ static void drm_property_free_blob(struct kref *kref) drm_mode_object_unregister(blob->dev, &blob->base); - kfree(blob); + kvfree(blob); } /** @@ -559,7 +559,7 @@ drm_property_create_blob(struct drm_device *dev, size_t length, if (!length || length > ULONG_MAX - sizeof(struct drm_property_blob)) return ERR_PTR(-EINVAL); - blob = kzalloc(sizeof(struct drm_property_blob)+length, GFP_KERNEL); + blob = kvzalloc(sizeof(struct drm_property_blob)+length, GFP_KERNEL); if (!blob) return ERR_PTR(-ENOMEM); @@ -576,7 +576,7 @@ drm_property_create_blob(struct drm_device *dev, size_t length, ret = __drm_mode_object_add(dev, &blob->base, DRM_MODE_OBJECT_BLOB, true, drm_property_free_blob); if (ret) { - kfree(blob); + kvfree(blob); return ERR_PTR(-EINVAL); } diff --git a/drivers/gpu/drm/etnaviv/etnaviv_drv.c b/drivers/gpu/drm/etnaviv/etnaviv_drv.c index e5013a999147..540b59fb4103 100644 --- a/drivers/gpu/drm/etnaviv/etnaviv_drv.c +++ b/drivers/gpu/drm/etnaviv/etnaviv_drv.c @@ -631,8 +631,11 @@ static struct platform_driver etnaviv_platform_driver = { }, }; +static struct platform_device *etnaviv_drm; + static int __init etnaviv_init(void) { + struct platform_device *pdev; int ret; struct device_node *np; @@ -644,7 +647,7 @@ static 
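The drm_dev_unplug() reorder above sets the unplugged flag and waits out SRCU readers before unregistering, so no ioctl can race a half-torn-down device. A crude userspace analogue of that ordering, assuming a spinning writer in place of synchronize_srcu():

    #include <stdatomic.h>
    #include <stdbool.h>

    static atomic_bool unplugged;
    static atomic_int readers;

    /* Reader side: mirrors drm_dev_enter()/drm_dev_exit(). */
    static bool dev_enter(void)
    {
        atomic_fetch_add(&readers, 1);
        if (atomic_load(&unplugged)) {
            atomic_fetch_sub(&readers, 1);
            return false;   /* device is gone, bail out */
        }
        return true;
    }

    static void dev_exit(void) { atomic_fetch_sub(&readers, 1); }

    /* Unplug side: flag first, drain readers, only then tear down. */
    static void dev_unplug(void)
    {
        atomic_store(&unplugged, true);
        while (atomic_load(&readers) > 0)
            ;   /* stand-in for synchronize_srcu() */
        /* ...safe to unregister and drop the final reference here... */
    }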
int __init etnaviv_init(void) ret = platform_driver_register(&etnaviv_platform_driver); if (ret != 0) - platform_driver_unregister(&etnaviv_gpu_driver); + goto unregister_gpu_driver; /* * If the DT contains at least one available GPU device, instantiate @@ -653,20 +656,33 @@ static int __init etnaviv_init(void) for_each_compatible_node(np, NULL, "vivante,gc") { if (!of_device_is_available(np)) continue; - - platform_device_register_simple("etnaviv", -1, NULL, 0); + pdev = platform_device_register_simple("etnaviv", -1, + NULL, 0); + if (IS_ERR(pdev)) { + ret = PTR_ERR(pdev); + of_node_put(np); + goto unregister_platform_driver; + } + etnaviv_drm = pdev; of_node_put(np); break; } + return 0; + +unregister_platform_driver: + platform_driver_unregister(&etnaviv_platform_driver); +unregister_gpu_driver: + platform_driver_unregister(&etnaviv_gpu_driver); return ret; } module_init(etnaviv_init); static void __exit etnaviv_exit(void) { - platform_driver_unregister(&etnaviv_gpu_driver); + platform_device_unregister(etnaviv_drm); platform_driver_unregister(&etnaviv_platform_driver); + platform_driver_unregister(&etnaviv_gpu_driver); } module_exit(etnaviv_exit); diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gpu.h b/drivers/gpu/drm/etnaviv/etnaviv_gpu.h index dd430f0f8ff5..90f17ff7888e 100644 --- a/drivers/gpu/drm/etnaviv/etnaviv_gpu.h +++ b/drivers/gpu/drm/etnaviv/etnaviv_gpu.h @@ -131,6 +131,9 @@ struct etnaviv_gpu { struct work_struct sync_point_work; int sync_point_event; + /* hang detection */ + u32 hangcheck_dma_addr; + void __iomem *mmio; int irq; diff --git a/drivers/gpu/drm/etnaviv/etnaviv_sched.c b/drivers/gpu/drm/etnaviv/etnaviv_sched.c index a74eb57af15b..50d6b88cb7aa 100644 --- a/drivers/gpu/drm/etnaviv/etnaviv_sched.c +++ b/drivers/gpu/drm/etnaviv/etnaviv_sched.c @@ -10,6 +10,7 @@ #include "etnaviv_gem.h" #include "etnaviv_gpu.h" #include "etnaviv_sched.h" +#include "state.xml.h" static int etnaviv_job_hang_limit = 0; module_param_named(job_hang_limit, etnaviv_job_hang_limit, int , 0444); @@ -85,6 +86,29 @@ static void etnaviv_sched_timedout_job(struct drm_sched_job *sched_job) { struct etnaviv_gem_submit *submit = to_etnaviv_submit(sched_job); struct etnaviv_gpu *gpu = submit->gpu; + u32 dma_addr; + int change; + + /* + * If the GPU managed to complete this job's fence, the timeout is + * spurious. Bail out. + */ + if (fence_completed(gpu, submit->out_fence->seqno)) + return; + + /* + * If the GPU is still making forward progress on the front-end (which + should never loop) we shift out the timeout to give it a chance to + finish the job.
+ */ + dma_addr = gpu_read(gpu, VIVS_FE_DMA_ADDRESS); + change = dma_addr - gpu->hangcheck_dma_addr; + if (change < 0 || change > 16) { + gpu->hangcheck_dma_addr = dma_addr; + schedule_delayed_work(&sched_job->work_tdr, + sched_job->sched->timeout); + return; + } /* block scheduler */ kthread_park(gpu->sched.thread); diff --git a/drivers/gpu/drm/exynos/exynos5433_drm_decon.c b/drivers/gpu/drm/exynos/exynos5433_drm_decon.c index 82c95c34447f..e868773ea509 100644 --- a/drivers/gpu/drm/exynos/exynos5433_drm_decon.c +++ b/drivers/gpu/drm/exynos/exynos5433_drm_decon.c @@ -265,7 +265,7 @@ static void decon_win_set_pixfmt(struct decon_context *ctx, unsigned int win, unsigned long val; val = readl(ctx->addr + DECON_WINCONx(win)); - val &= ~WINCONx_BPPMODE_MASK; + val &= WINCONx_ENWIN_F; switch (fb->format->format) { case DRM_FORMAT_XRGB1555: @@ -356,8 +356,8 @@ static void decon_update_plane(struct exynos_drm_crtc *crtc, writel(val, ctx->addr + DECON_VIDOSDxB(win)); } - val = VIDOSD_Wx_ALPHA_R_F(0x0) | VIDOSD_Wx_ALPHA_G_F(0x0) | - VIDOSD_Wx_ALPHA_B_F(0x0); + val = VIDOSD_Wx_ALPHA_R_F(0xff) | VIDOSD_Wx_ALPHA_G_F(0xff) | + VIDOSD_Wx_ALPHA_B_F(0xff); writel(val, ctx->addr + DECON_VIDOSDxC(win)); val = VIDOSD_Wx_ALPHA_R_F(0x0) | VIDOSD_Wx_ALPHA_G_F(0x0) | diff --git a/drivers/gpu/drm/exynos/exynos_drm_drv.c b/drivers/gpu/drm/exynos/exynos_drm_drv.c index a81b4a5e24a7..ed3cc2989f93 100644 --- a/drivers/gpu/drm/exynos/exynos_drm_drv.c +++ b/drivers/gpu/drm/exynos/exynos_drm_drv.c @@ -420,7 +420,7 @@ err_mode_config_cleanup: err_free_private: kfree(private); err_free_drm: - drm_dev_unref(drm); + drm_dev_put(drm); return ret; } @@ -444,7 +444,7 @@ static void exynos_drm_unbind(struct device *dev) drm->dev_private = NULL; dev_set_drvdata(dev, NULL); - drm_dev_unref(drm); + drm_dev_put(drm); } static const struct component_master_ops exynos_drm_ops = { diff --git a/drivers/gpu/drm/exynos/exynos_drm_fb.c b/drivers/gpu/drm/exynos/exynos_drm_fb.c index 7fcc1a7ab1a0..27b7d34d776c 100644 --- a/drivers/gpu/drm/exynos/exynos_drm_fb.c +++ b/drivers/gpu/drm/exynos/exynos_drm_fb.c @@ -138,7 +138,7 @@ exynos_user_fb_create(struct drm_device *dev, struct drm_file *file_priv, err: while (i--) - drm_gem_object_unreference_unlocked(&exynos_gem[i]->base); + drm_gem_object_put_unlocked(&exynos_gem[i]->base); return ERR_PTR(ret); } diff --git a/drivers/gpu/drm/exynos/exynos_drm_fimc.c b/drivers/gpu/drm/exynos/exynos_drm_fimc.c index 6127ef25acd6..e8d0670bb5f8 100644 --- a/drivers/gpu/drm/exynos/exynos_drm_fimc.c +++ b/drivers/gpu/drm/exynos/exynos_drm_fimc.c @@ -470,17 +470,18 @@ static void fimc_src_set_transf(struct fimc_context *ctx, unsigned int rotation) static void fimc_set_window(struct fimc_context *ctx, struct exynos_drm_ipp_buffer *buf) { + unsigned int real_width = buf->buf.pitch[0] / buf->format->cpp[0]; u32 cfg, h1, h2, v1, v2; /* cropped image */ h1 = buf->rect.x; - h2 = buf->buf.width - buf->rect.w - buf->rect.x; + h2 = real_width - buf->rect.w - buf->rect.x; v1 = buf->rect.y; v2 = buf->buf.height - buf->rect.h - buf->rect.y; DRM_DEBUG_KMS("x[%d]y[%d]w[%d]h[%d]hsize[%d]vsize[%d]\n", buf->rect.x, buf->rect.y, buf->rect.w, buf->rect.h, - buf->buf.width, buf->buf.height); + real_width, buf->buf.height); DRM_DEBUG_KMS("h1[%d]h2[%d]v1[%d]v2[%d]\n", h1, h2, v1, v2); /* @@ -503,12 +504,13 @@ static void fimc_set_window(struct fimc_context *ctx, static void fimc_src_set_size(struct fimc_context *ctx, struct exynos_drm_ipp_buffer *buf) { + unsigned int real_width = buf->buf.pitch[0] / buf->format->cpp[0]; u32 cfg; - 
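The etnaviv timeout handler above only declares a job hung once the front-end DMA address stops moving; otherwise it re-arms the timer. A simplified sketch of the heuristic (the real check also treats movement of at most 16 bytes as looping in place, which this version omits):

    #include <stdint.h>
    #include <stdbool.h>

    struct hangcheck { uint32_t last_dma_addr; };

    /*
     * Decide whether a timed-out job is really hung: if the front-end DMA
     * address moved since the last check, the GPU is still making progress,
     * so record the new position and give it more time.
     */
    static bool job_is_hung(struct hangcheck *hc, uint32_t dma_addr)
    {
        if (dma_addr != hc->last_dma_addr) {
            hc->last_dma_addr = dma_addr;
            return false;   /* progress made: re-arm the timeout */
        }
        return true;        /* stuck: go ahead and reset */
    }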
DRM_DEBUG_KMS("hsize[%d]vsize[%d]\n", buf->buf.width, buf->buf.height); + DRM_DEBUG_KMS("hsize[%d]vsize[%d]\n", real_width, buf->buf.height); /* original size */ - cfg = (EXYNOS_ORGISIZE_HORIZONTAL(buf->buf.width) | + cfg = (EXYNOS_ORGISIZE_HORIZONTAL(real_width) | EXYNOS_ORGISIZE_VERTICAL(buf->buf.height)); fimc_write(ctx, cfg, EXYNOS_ORGISIZE); @@ -529,7 +531,7 @@ static void fimc_src_set_size(struct fimc_context *ctx, * for now, we support only ITU601 8 bit mode */ cfg = (EXYNOS_CISRCFMT_ITU601_8BIT | - EXYNOS_CISRCFMT_SOURCEHSIZE(buf->buf.width) | + EXYNOS_CISRCFMT_SOURCEHSIZE(real_width) | EXYNOS_CISRCFMT_SOURCEVSIZE(buf->buf.height)); fimc_write(ctx, cfg, EXYNOS_CISRCFMT); @@ -842,12 +844,13 @@ static void fimc_set_scaler(struct fimc_context *ctx, struct fimc_scaler *sc) static void fimc_dst_set_size(struct fimc_context *ctx, struct exynos_drm_ipp_buffer *buf) { + unsigned int real_width = buf->buf.pitch[0] / buf->format->cpp[0]; u32 cfg, cfg_ext; - DRM_DEBUG_KMS("hsize[%d]vsize[%d]\n", buf->buf.width, buf->buf.height); + DRM_DEBUG_KMS("hsize[%d]vsize[%d]\n", real_width, buf->buf.height); /* original size */ - cfg = (EXYNOS_ORGOSIZE_HORIZONTAL(buf->buf.width) | + cfg = (EXYNOS_ORGOSIZE_HORIZONTAL(real_width) | EXYNOS_ORGOSIZE_VERTICAL(buf->buf.height)); fimc_write(ctx, cfg, EXYNOS_ORGOSIZE); diff --git a/drivers/gpu/drm/exynos/exynos_drm_gem.c b/drivers/gpu/drm/exynos/exynos_drm_gem.c index 6e1494fa71b4..bdf5a7655228 100644 --- a/drivers/gpu/drm/exynos/exynos_drm_gem.c +++ b/drivers/gpu/drm/exynos/exynos_drm_gem.c @@ -143,7 +143,7 @@ static int exynos_drm_gem_handle_create(struct drm_gem_object *obj, DRM_DEBUG_KMS("gem handle = 0x%x\n", *handle); /* drop reference from allocate - handle holds it now. */ - drm_gem_object_unreference_unlocked(obj); + drm_gem_object_put_unlocked(obj); return 0; } @@ -186,7 +186,7 @@ unsigned long exynos_drm_gem_get_size(struct drm_device *dev, exynos_gem = to_exynos_gem(obj); - drm_gem_object_unreference_unlocked(obj); + drm_gem_object_put_unlocked(obj); return exynos_gem->size; } @@ -329,13 +329,13 @@ void exynos_drm_gem_put_dma_addr(struct drm_device *dev, return; } - drm_gem_object_unreference_unlocked(obj); + drm_gem_object_put_unlocked(obj); /* * decrease obj->refcount one more time because we has already * increased it at exynos_drm_gem_get_dma_addr(). 
*/ - drm_gem_object_unreference_unlocked(obj); + drm_gem_object_put_unlocked(obj); } static int exynos_drm_gem_mmap_buffer(struct exynos_drm_gem *exynos_gem, @@ -383,7 +383,7 @@ int exynos_drm_gem_get_ioctl(struct drm_device *dev, void *data, args->flags = exynos_gem->flags; args->size = exynos_gem->size; - drm_gem_object_unreference_unlocked(obj); + drm_gem_object_put_unlocked(obj); return 0; } diff --git a/drivers/gpu/drm/exynos/exynos_drm_gsc.c b/drivers/gpu/drm/exynos/exynos_drm_gsc.c index 35ac66730563..7ba414b52faa 100644 --- a/drivers/gpu/drm/exynos/exynos_drm_gsc.c +++ b/drivers/gpu/drm/exynos/exynos_drm_gsc.c @@ -492,21 +492,25 @@ static void gsc_src_set_fmt(struct gsc_context *ctx, u32 fmt) GSC_IN_CHROMA_ORDER_CRCB); break; case DRM_FORMAT_NV21: + cfg |= (GSC_IN_CHROMA_ORDER_CRCB | GSC_IN_YUV420_2P); + break; case DRM_FORMAT_NV61: - cfg |= (GSC_IN_CHROMA_ORDER_CRCB | - GSC_IN_YUV420_2P); + cfg |= (GSC_IN_CHROMA_ORDER_CRCB | GSC_IN_YUV422_2P); break; case DRM_FORMAT_YUV422: cfg |= GSC_IN_YUV422_3P; break; case DRM_FORMAT_YUV420: + cfg |= (GSC_IN_CHROMA_ORDER_CBCR | GSC_IN_YUV420_3P); + break; case DRM_FORMAT_YVU420: - cfg |= GSC_IN_YUV420_3P; + cfg |= (GSC_IN_CHROMA_ORDER_CRCB | GSC_IN_YUV420_3P); break; case DRM_FORMAT_NV12: + cfg |= (GSC_IN_CHROMA_ORDER_CBCR | GSC_IN_YUV420_2P); + break; case DRM_FORMAT_NV16: - cfg |= (GSC_IN_CHROMA_ORDER_CBCR | - GSC_IN_YUV420_2P); + cfg |= (GSC_IN_CHROMA_ORDER_CBCR | GSC_IN_YUV422_2P); break; } @@ -523,30 +527,30 @@ static void gsc_src_set_transf(struct gsc_context *ctx, unsigned int rotation) switch (degree) { case DRM_MODE_ROTATE_0: - if (rotation & DRM_MODE_REFLECT_Y) - cfg |= GSC_IN_ROT_XFLIP; if (rotation & DRM_MODE_REFLECT_X) + cfg |= GSC_IN_ROT_XFLIP; + if (rotation & DRM_MODE_REFLECT_Y) cfg |= GSC_IN_ROT_YFLIP; break; case DRM_MODE_ROTATE_90: cfg |= GSC_IN_ROT_90; - if (rotation & DRM_MODE_REFLECT_Y) - cfg |= GSC_IN_ROT_XFLIP; if (rotation & DRM_MODE_REFLECT_X) + cfg |= GSC_IN_ROT_XFLIP; + if (rotation & DRM_MODE_REFLECT_Y) cfg |= GSC_IN_ROT_YFLIP; break; case DRM_MODE_ROTATE_180: cfg |= GSC_IN_ROT_180; - if (rotation & DRM_MODE_REFLECT_Y) - cfg &= ~GSC_IN_ROT_XFLIP; if (rotation & DRM_MODE_REFLECT_X) + cfg &= ~GSC_IN_ROT_XFLIP; + if (rotation & DRM_MODE_REFLECT_Y) cfg &= ~GSC_IN_ROT_YFLIP; break; case DRM_MODE_ROTATE_270: cfg |= GSC_IN_ROT_270; - if (rotation & DRM_MODE_REFLECT_Y) - cfg &= ~GSC_IN_ROT_XFLIP; if (rotation & DRM_MODE_REFLECT_X) + cfg &= ~GSC_IN_ROT_XFLIP; + if (rotation & DRM_MODE_REFLECT_Y) cfg &= ~GSC_IN_ROT_YFLIP; break; } @@ -577,7 +581,7 @@ static void gsc_src_set_size(struct gsc_context *ctx, cfg &= ~(GSC_SRCIMG_HEIGHT_MASK | GSC_SRCIMG_WIDTH_MASK); - cfg |= (GSC_SRCIMG_WIDTH(buf->buf.width) | + cfg |= (GSC_SRCIMG_WIDTH(buf->buf.pitch[0] / buf->format->cpp[0]) | GSC_SRCIMG_HEIGHT(buf->buf.height)); gsc_write(cfg, GSC_SRCIMG_SIZE); @@ -672,18 +676,25 @@ static void gsc_dst_set_fmt(struct gsc_context *ctx, u32 fmt) GSC_OUT_CHROMA_ORDER_CRCB); break; case DRM_FORMAT_NV21: - case DRM_FORMAT_NV61: cfg |= (GSC_OUT_CHROMA_ORDER_CRCB | GSC_OUT_YUV420_2P); break; + case DRM_FORMAT_NV61: + cfg |= (GSC_OUT_CHROMA_ORDER_CRCB | GSC_OUT_YUV422_2P); + break; case DRM_FORMAT_YUV422: + cfg |= GSC_OUT_YUV422_3P; + break; case DRM_FORMAT_YUV420: + cfg |= (GSC_OUT_CHROMA_ORDER_CBCR | GSC_OUT_YUV420_3P); + break; case DRM_FORMAT_YVU420: - cfg |= GSC_OUT_YUV420_3P; + cfg |= (GSC_OUT_CHROMA_ORDER_CRCB | GSC_OUT_YUV420_3P); break; case DRM_FORMAT_NV12: + cfg |= (GSC_OUT_CHROMA_ORDER_CBCR | GSC_OUT_YUV420_2P); + break; case 
DRM_FORMAT_NV16: - cfg |= (GSC_OUT_CHROMA_ORDER_CBCR | - GSC_OUT_YUV420_2P); + cfg |= (GSC_OUT_CHROMA_ORDER_CBCR | GSC_OUT_YUV422_2P); break; } @@ -868,7 +879,7 @@ static void gsc_dst_set_size(struct gsc_context *ctx, /* original size */ cfg = gsc_read(GSC_DSTIMG_SIZE); cfg &= ~(GSC_DSTIMG_HEIGHT_MASK | GSC_DSTIMG_WIDTH_MASK); - cfg |= GSC_DSTIMG_WIDTH(buf->buf.width) | + cfg |= GSC_DSTIMG_WIDTH(buf->buf.pitch[0] / buf->format->cpp[0]) | GSC_DSTIMG_HEIGHT(buf->buf.height); gsc_write(cfg, GSC_DSTIMG_SIZE); @@ -1341,7 +1352,7 @@ static const struct drm_exynos_ipp_limit gsc_5420_limits[] = { }; static const struct drm_exynos_ipp_limit gsc_5433_limits[] = { - { IPP_SIZE_LIMIT(BUFFER, .h = { 32, 8191, 2 }, .v = { 16, 8191, 2 }) }, + { IPP_SIZE_LIMIT(BUFFER, .h = { 32, 8191, 16 }, .v = { 16, 8191, 2 }) }, { IPP_SIZE_LIMIT(AREA, .h = { 16, 4800, 1 }, .v = { 8, 3344, 1 }) }, { IPP_SIZE_LIMIT(ROTATED, .h = { 32, 2047 }, .v = { 8, 8191 }) }, { IPP_SCALE_LIMIT(.h = { (1 << 16) / 16, (1 << 16) * 8 }, diff --git a/drivers/gpu/drm/exynos/exynos_drm_ipp.c b/drivers/gpu/drm/exynos/exynos_drm_ipp.c index 26374e58c557..b435db8fc916 100644 --- a/drivers/gpu/drm/exynos/exynos_drm_ipp.c +++ b/drivers/gpu/drm/exynos/exynos_drm_ipp.c @@ -345,27 +345,6 @@ static int exynos_drm_ipp_task_setup_buffer(struct exynos_drm_ipp_buffer *buf, int ret = 0; int i; - /* basic checks */ - if (buf->buf.width == 0 || buf->buf.height == 0) - return -EINVAL; - buf->format = drm_format_info(buf->buf.fourcc); - for (i = 0; i < buf->format->num_planes; i++) { - unsigned int width = (i == 0) ? buf->buf.width : - DIV_ROUND_UP(buf->buf.width, buf->format->hsub); - - if (buf->buf.pitch[i] == 0) - buf->buf.pitch[i] = width * buf->format->cpp[i]; - if (buf->buf.pitch[i] < width * buf->format->cpp[i]) - return -EINVAL; - if (!buf->buf.gem_id[i]) - return -ENOENT; - } - - /* pitch for additional planes must match */ - if (buf->format->num_planes > 2 && - buf->buf.pitch[1] != buf->buf.pitch[2]) - return -EINVAL; - /* get GEM buffers and check their size */ for (i = 0; i < buf->format->num_planes; i++) { unsigned int height = (i == 0) ? buf->buf.height : @@ -428,7 +407,7 @@ enum drm_ipp_size_id { IPP_LIMIT_BUFFER, IPP_LIMIT_AREA, IPP_LIMIT_ROTATED, IPP_LIMIT_MAX }; -static const enum drm_ipp_size_id limit_id_fallback[IPP_LIMIT_MAX][4] = { +static const enum drm_exynos_ipp_limit_type limit_id_fallback[IPP_LIMIT_MAX][4] = { [IPP_LIMIT_BUFFER] = { DRM_EXYNOS_IPP_LIMIT_SIZE_BUFFER }, [IPP_LIMIT_AREA] = { DRM_EXYNOS_IPP_LIMIT_SIZE_AREA, DRM_EXYNOS_IPP_LIMIT_SIZE_BUFFER }, @@ -495,12 +474,13 @@ static int exynos_drm_ipp_check_size_limits(struct exynos_drm_ipp_buffer *buf, enum drm_ipp_size_id id = rotate ? IPP_LIMIT_ROTATED : IPP_LIMIT_AREA; struct drm_ipp_limit l; struct drm_exynos_ipp_limit_val *lh = &l.h, *lv = &l.v; + int real_width = buf->buf.pitch[0] / buf->format->cpp[0]; if (!limits) return 0; __get_size_limit(limits, num_limits, IPP_LIMIT_BUFFER, &l); - if (!__size_limit_check(buf->buf.width, &l.h) || + if (!__size_limit_check(real_width, &l.h) || !__size_limit_check(buf->buf.height, &l.v)) return -EINVAL; @@ -560,10 +540,62 @@ static int exynos_drm_ipp_check_scale_limits( return 0; } +static int exynos_drm_ipp_check_format(struct exynos_drm_ipp_task *task, + struct exynos_drm_ipp_buffer *buf, + struct exynos_drm_ipp_buffer *src, + struct exynos_drm_ipp_buffer *dst, + bool rotate, bool swap) +{ + const struct exynos_drm_ipp_formats *fmt; + int ret, i; + + fmt = __ipp_format_get(task->ipp, buf->buf.fourcc, buf->buf.modifier, + buf == src ? 
DRM_EXYNOS_IPP_FORMAT_SOURCE : + DRM_EXYNOS_IPP_FORMAT_DESTINATION); + if (!fmt) { + DRM_DEBUG_DRIVER("Task %pK: %s format not supported\n", task, + buf == src ? "src" : "dst"); + return -EINVAL; + } + + /* basic checks */ + if (buf->buf.width == 0 || buf->buf.height == 0) + return -EINVAL; + + buf->format = drm_format_info(buf->buf.fourcc); + for (i = 0; i < buf->format->num_planes; i++) { + unsigned int width = (i == 0) ? buf->buf.width : + DIV_ROUND_UP(buf->buf.width, buf->format->hsub); + + if (buf->buf.pitch[i] == 0) + buf->buf.pitch[i] = width * buf->format->cpp[i]; + if (buf->buf.pitch[i] < width * buf->format->cpp[i]) + return -EINVAL; + if (!buf->buf.gem_id[i]) + return -ENOENT; + } + + /* pitch for additional planes must match */ + if (buf->format->num_planes > 2 && + buf->buf.pitch[1] != buf->buf.pitch[2]) + return -EINVAL; + + /* check driver limits */ + ret = exynos_drm_ipp_check_size_limits(buf, fmt->limits, + fmt->num_limits, + rotate, + buf == dst ? swap : false); + if (ret) + return ret; + ret = exynos_drm_ipp_check_scale_limits(&src->rect, &dst->rect, + fmt->limits, + fmt->num_limits, swap); + return ret; +} + static int exynos_drm_ipp_task_check(struct exynos_drm_ipp_task *task) { struct exynos_drm_ipp *ipp = task->ipp; - const struct exynos_drm_ipp_formats *src_fmt, *dst_fmt; struct exynos_drm_ipp_buffer *src = &task->src, *dst = &task->dst; unsigned int rotation = task->transform.rotation; int ret = 0; @@ -607,37 +639,11 @@ static int exynos_drm_ipp_task_check(struct exynos_drm_ipp_task *task) return -EINVAL; } - src_fmt = __ipp_format_get(ipp, src->buf.fourcc, src->buf.modifier, - DRM_EXYNOS_IPP_FORMAT_SOURCE); - if (!src_fmt) { - DRM_DEBUG_DRIVER("Task %pK: src format not supported\n", task); - return -EINVAL; - } - ret = exynos_drm_ipp_check_size_limits(src, src_fmt->limits, - src_fmt->num_limits, - rotate, false); - if (ret) - return ret; - ret = exynos_drm_ipp_check_scale_limits(&src->rect, &dst->rect, - src_fmt->limits, - src_fmt->num_limits, swap); + ret = exynos_drm_ipp_check_format(task, src, src, dst, rotate, swap); if (ret) return ret; - dst_fmt = __ipp_format_get(ipp, dst->buf.fourcc, dst->buf.modifier, - DRM_EXYNOS_IPP_FORMAT_DESTINATION); - if (!dst_fmt) { - DRM_DEBUG_DRIVER("Task %pK: dst format not supported\n", task); - return -EINVAL; - } - ret = exynos_drm_ipp_check_size_limits(dst, dst_fmt->limits, - dst_fmt->num_limits, - false, swap); - if (ret) - return ret; - ret = exynos_drm_ipp_check_scale_limits(&src->rect, &dst->rect, - dst_fmt->limits, - dst_fmt->num_limits, swap); + ret = exynos_drm_ipp_check_format(task, dst, src, dst, false, swap); if (ret) return ret; diff --git a/drivers/gpu/drm/exynos/exynos_drm_plane.c b/drivers/gpu/drm/exynos/exynos_drm_plane.c index 38a2a7f1204b..7098c6d35266 100644 --- a/drivers/gpu/drm/exynos/exynos_drm_plane.c +++ b/drivers/gpu/drm/exynos/exynos_drm_plane.c @@ -132,7 +132,7 @@ static void exynos_drm_plane_reset(struct drm_plane *plane) if (plane->state) { exynos_state = to_exynos_plane_state(plane->state); if (exynos_state->base.fb) - drm_framebuffer_unreference(exynos_state->base.fb); + drm_framebuffer_put(exynos_state->base.fb); kfree(exynos_state); plane->state = NULL; } diff --git a/drivers/gpu/drm/exynos/exynos_drm_rotator.c b/drivers/gpu/drm/exynos/exynos_drm_rotator.c index 1a76dd3d52e1..a820a68429b9 100644 --- a/drivers/gpu/drm/exynos/exynos_drm_rotator.c +++ b/drivers/gpu/drm/exynos/exynos_drm_rotator.c @@ -168,9 +168,9 @@ static void rotator_dst_set_transf(struct rot_context *rot, val &= 
~ROT_CONTROL_FLIP_MASK; if (rotation & DRM_MODE_REFLECT_X) - val |= ROT_CONTROL_FLIP_HORIZONTAL; - if (rotation & DRM_MODE_REFLECT_Y) val |= ROT_CONTROL_FLIP_VERTICAL; + if (rotation & DRM_MODE_REFLECT_Y) + val |= ROT_CONTROL_FLIP_HORIZONTAL; val &= ~ROT_CONTROL_ROT_MASK; diff --git a/drivers/gpu/drm/exynos/exynos_drm_scaler.c b/drivers/gpu/drm/exynos/exynos_drm_scaler.c index 91d4382343d0..0ddb6eec7b11 100644 --- a/drivers/gpu/drm/exynos/exynos_drm_scaler.c +++ b/drivers/gpu/drm/exynos/exynos_drm_scaler.c @@ -30,6 +30,7 @@ #define scaler_write(cfg, offset) writel(cfg, scaler->regs + (offset)) #define SCALER_MAX_CLK 4 #define SCALER_AUTOSUSPEND_DELAY 2000 +#define SCALER_RESET_WAIT_RETRIES 100 struct scaler_data { const char *clk_name[SCALER_MAX_CLK]; @@ -51,9 +52,9 @@ struct scaler_context { static u32 scaler_get_format(u32 drm_fmt) { switch (drm_fmt) { - case DRM_FORMAT_NV21: - return SCALER_YUV420_2P_UV; case DRM_FORMAT_NV12: + return SCALER_YUV420_2P_UV; + case DRM_FORMAT_NV21: return SCALER_YUV420_2P_VU; case DRM_FORMAT_YUV420: return SCALER_YUV420_3P; @@ -63,15 +64,15 @@ static u32 scaler_get_format(u32 drm_fmt) return SCALER_YUV422_1P_UYVY; case DRM_FORMAT_YVYU: return SCALER_YUV422_1P_YVYU; - case DRM_FORMAT_NV61: - return SCALER_YUV422_2P_UV; case DRM_FORMAT_NV16: + return SCALER_YUV422_2P_UV; + case DRM_FORMAT_NV61: return SCALER_YUV422_2P_VU; case DRM_FORMAT_YUV422: return SCALER_YUV422_3P; - case DRM_FORMAT_NV42: - return SCALER_YUV444_2P_UV; case DRM_FORMAT_NV24: + return SCALER_YUV444_2P_UV; + case DRM_FORMAT_NV42: return SCALER_YUV444_2P_VU; case DRM_FORMAT_YUV444: return SCALER_YUV444_3P; @@ -100,6 +101,23 @@ static u32 scaler_get_format(u32 drm_fmt) return 0; } +static inline int scaler_reset(struct scaler_context *scaler) +{ + int retry = SCALER_RESET_WAIT_RETRIES; + + scaler_write(SCALER_CFG_SOFT_RESET, SCALER_CFG); + do { + cpu_relax(); + } while (retry > 1 && + scaler_read(SCALER_CFG) & SCALER_CFG_SOFT_RESET); + do { + cpu_relax(); + scaler_write(1, SCALER_INT_EN); + } while (retry > 0 && scaler_read(SCALER_INT_EN) != 1); + + return retry ? 
0 : -EIO; +} + static inline void scaler_enable_int(struct scaler_context *scaler) { u32 val; @@ -354,9 +372,13 @@ static int scaler_commit(struct exynos_drm_ipp *ipp, u32 dst_fmt = scaler_get_format(task->dst.buf.fourcc); struct drm_exynos_ipp_task_rect *dst_pos = &task->dst.rect; - scaler->task = task; - pm_runtime_get_sync(scaler->dev); + if (scaler_reset(scaler)) { + pm_runtime_put(scaler->dev); + return -EIO; + } + + scaler->task = task; scaler_set_src_fmt(scaler, src_fmt); scaler_set_src_base(scaler, &task->src); @@ -394,7 +416,11 @@ static inline void scaler_disable_int(struct scaler_context *scaler) static inline u32 scaler_get_int_status(struct scaler_context *scaler) { - return scaler_read(SCALER_INT_STATUS); + u32 val = scaler_read(SCALER_INT_STATUS); + + scaler_write(val, SCALER_INT_STATUS); + + return val; } static inline int scaler_task_done(u32 val) diff --git a/drivers/gpu/drm/exynos/regs-gsc.h b/drivers/gpu/drm/exynos/regs-gsc.h index 4704a993cbb7..16b39734115c 100644 --- a/drivers/gpu/drm/exynos/regs-gsc.h +++ b/drivers/gpu/drm/exynos/regs-gsc.h @@ -138,6 +138,7 @@ #define GSC_OUT_YUV420_3P (3 << 4) #define GSC_OUT_YUV422_1P (4 << 4) #define GSC_OUT_YUV422_2P (5 << 4) +#define GSC_OUT_YUV422_3P (6 << 4) #define GSC_OUT_YUV444 (7 << 4) #define GSC_OUT_TILE_TYPE_MASK (1 << 2) #define GSC_OUT_TILE_C_16x8 (0 << 2) diff --git a/drivers/gpu/drm/i915/gvt/cmd_parser.c b/drivers/gpu/drm/i915/gvt/cmd_parser.c index b51c05d03f14..7f562410f9cf 100644 --- a/drivers/gpu/drm/i915/gvt/cmd_parser.c +++ b/drivers/gpu/drm/i915/gvt/cmd_parser.c @@ -862,6 +862,7 @@ static int cmd_reg_handler(struct parser_exec_state *s, { struct intel_vgpu *vgpu = s->vgpu; struct intel_gvt *gvt = vgpu->gvt; + u32 ctx_sr_ctl; if (offset + 4 > gvt->device_info.mmio_size) { gvt_vgpu_err("%s access to (%x) outside of MMIO range\n", @@ -894,6 +895,28 @@ static int cmd_reg_handler(struct parser_exec_state *s, patch_value(s, cmd_ptr(s, index), VGT_PVINFO_PAGE); } + /* TODO + * Right now only scan LRI command on KBL and in inhibit context. + * It's good enough to support initializing mmio by lri command in + * vgpu inhibit context on KBL. 
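The scaler_reset() helper added a few hunks up bounds its wait on the self-clearing SCALER_CFG_SOFT_RESET bit instead of spinning forever. A compilable sketch of the same bounded-poll idea against a stubbed register; reg_read()/reg_write(), the fake hardware model, and the retry count are stand-ins, not the driver's API:

#include <stdint.h>
#include <stdio.h>

#define CFG_SOFT_RESET     (1u << 31)
#define RESET_WAIT_RETRIES 100

static uint32_t fake_reg; /* stands in for the memory-mapped CFG register */

static void reg_write(uint32_t v) { fake_reg = v; }

static uint32_t reg_read(void)
{
	/* Model hardware that clears the reset bit after a few reads. */
	static int countdown = 3;

	if (countdown && --countdown == 0)
		fake_reg &= ~CFG_SOFT_RESET;
	return fake_reg;
}

/* Bounded poll: request the reset, then give the hardware a limited
 * number of chances to clear the bit before reporting failure. */
static int do_reset(void)
{
	int retry = RESET_WAIT_RETRIES;

	reg_write(CFG_SOFT_RESET);
	while (retry-- > 0 && (reg_read() & CFG_SOFT_RESET))
		;
	return retry >= 0 ? 0 : -1;
}

int main(void)
{
	printf("reset %s\n", do_reset() ? "timed out" : "done");
	return 0;
}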
+ */ + if (IS_KABYLAKE(s->vgpu->gvt->dev_priv) && + intel_gvt_mmio_is_in_ctx(gvt, offset) && + !strncmp(cmd, "lri", 3)) { + intel_gvt_hypervisor_read_gpa(s->vgpu, + s->workload->ring_context_gpa + 12, &ctx_sr_ctl, 4); + /* check inhibit context */ + if (ctx_sr_ctl & 1) { + u32 data = cmd_val(s, index + 1); + + if (intel_gvt_mmio_has_mode_mask(s->vgpu->gvt, offset)) + intel_vgpu_mask_mmio_write(vgpu, + offset, &data, 4); + else + vgpu_vreg(vgpu, offset) = data; + } + } + /* TODO: Update the global mask if this MMIO is a masked-MMIO */ intel_gvt_mmio_set_cmd_accessed(gvt, offset); return 0; diff --git a/drivers/gpu/drm/i915/gvt/display.c b/drivers/gpu/drm/i915/gvt/display.c index 6d8180e8d1e2..4b072ade8c38 100644 --- a/drivers/gpu/drm/i915/gvt/display.c +++ b/drivers/gpu/drm/i915/gvt/display.c @@ -196,7 +196,7 @@ static void emulate_monitor_status_change(struct intel_vgpu *vgpu) ~(TRANS_DDI_BPC_MASK | TRANS_DDI_MODE_SELECT_MASK | TRANS_DDI_PORT_MASK); vgpu_vreg_t(vgpu, TRANS_DDI_FUNC_CTL(TRANSCODER_A)) |= - (TRANS_DDI_BPC_8 | TRANS_DDI_MODE_SELECT_DP_SST | + (TRANS_DDI_BPC_8 | TRANS_DDI_MODE_SELECT_DVI | (PORT_B << TRANS_DDI_PORT_SHIFT) | TRANS_DDI_FUNC_ENABLE); if (IS_BROADWELL(dev_priv)) { @@ -216,7 +216,7 @@ static void emulate_monitor_status_change(struct intel_vgpu *vgpu) ~(TRANS_DDI_BPC_MASK | TRANS_DDI_MODE_SELECT_MASK | TRANS_DDI_PORT_MASK); vgpu_vreg_t(vgpu, TRANS_DDI_FUNC_CTL(TRANSCODER_A)) |= - (TRANS_DDI_BPC_8 | TRANS_DDI_MODE_SELECT_DP_SST | + (TRANS_DDI_BPC_8 | TRANS_DDI_MODE_SELECT_DVI | (PORT_C << TRANS_DDI_PORT_SHIFT) | TRANS_DDI_FUNC_ENABLE); if (IS_BROADWELL(dev_priv)) { @@ -236,7 +236,7 @@ static void emulate_monitor_status_change(struct intel_vgpu *vgpu) ~(TRANS_DDI_BPC_MASK | TRANS_DDI_MODE_SELECT_MASK | TRANS_DDI_PORT_MASK); vgpu_vreg_t(vgpu, TRANS_DDI_FUNC_CTL(TRANSCODER_A)) |= - (TRANS_DDI_BPC_8 | TRANS_DDI_MODE_SELECT_DP_SST | + (TRANS_DDI_BPC_8 | TRANS_DDI_MODE_SELECT_DVI | (PORT_D << TRANS_DDI_PORT_SHIFT) | TRANS_DDI_FUNC_ENABLE); if (IS_BROADWELL(dev_priv)) { diff --git a/drivers/gpu/drm/i915/gvt/gtt.c b/drivers/gpu/drm/i915/gvt/gtt.c index 23296547da95..4efec8fa6c1d 100644 --- a/drivers/gpu/drm/i915/gvt/gtt.c +++ b/drivers/gpu/drm/i915/gvt/gtt.c @@ -1592,6 +1592,7 @@ static struct intel_vgpu_mm *intel_vgpu_create_ggtt_mm(struct intel_vgpu *vgpu) vgpu_free_mm(mm); return ERR_PTR(-ENOMEM); } + mm->ggtt_mm.last_partial_off = -1UL; return mm; } @@ -1616,6 +1617,7 @@ void _intel_vgpu_mm_release(struct kref *mm_ref) invalidate_ppgtt_mm(mm); } else { vfree(mm->ggtt_mm.virtual_ggtt); + mm->ggtt_mm.last_partial_off = -1UL; } vgpu_free_mm(mm); @@ -1868,6 +1870,62 @@ static int emulate_ggtt_mmio_write(struct intel_vgpu *vgpu, unsigned int off, memcpy((void *)&e.val64 + (off & (info->gtt_entry_size - 1)), p_data, bytes); + /* If ggtt entry size is 8 bytes, and it's split into two 4 bytes + * write, we assume the two 4 bytes writes are consecutive. 
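The comment this lands in describes coalescing a 64-bit GGTT entry that the guest updates as two consecutive 32-bit writes: the first half is parked in last_partial_off/last_partial_data until its sibling arrives. A stand-alone model of that coalescing, simplified to a single entry with no abort path:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define ENTRY_SIZE 8
#define NO_PARTIAL ((uint64_t)-1)

static uint64_t last_partial_off = NO_PARTIAL;
static uint64_t last_partial_data;

/* Returns 1 when a full 64-bit entry has been assembled into *entry. */
static int ggtt_write(uint64_t *entry, uint64_t off, uint32_t data)
{
	uint64_t val = last_partial_off == NO_PARTIAL ? 0 : last_partial_data;

	memcpy((char *)&val + (off & (ENTRY_SIZE - 1)), &data, sizeof(data));

	if (last_partial_off == NO_PARTIAL) {
		/* first half: stash it and wait for the second write */
		last_partial_off = off;
		last_partial_data = val;
		return 0;
	}

	/* second half of the same entry: commit the combined value */
	last_partial_off = NO_PARTIAL;
	*entry = val;
	return 1;
}

int main(void)
{
	uint64_t entry = 0;

	ggtt_write(&entry, 0, 0xdeadbeef);      /* low dword, parked */
	if (ggtt_write(&entry, 4, 0x12345678))  /* high dword, commits */
		printf("entry = 0x%016llx\n", (unsigned long long)entry);
	return 0;
}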
+ * Otherwise, we abort and report an error + */ + if (bytes < info->gtt_entry_size) { + if (ggtt_mm->ggtt_mm.last_partial_off == -1UL) { + /* the first partial part */ + ggtt_mm->ggtt_mm.last_partial_off = off; + ggtt_mm->ggtt_mm.last_partial_data = e.val64; + return 0; + } else if ((g_gtt_index == + (ggtt_mm->ggtt_mm.last_partial_off >> + info->gtt_entry_size_shift)) && + (off != ggtt_mm->ggtt_mm.last_partial_off)) { + /* the second partial part */ + + int last_off = ggtt_mm->ggtt_mm.last_partial_off & + (info->gtt_entry_size - 1); + + memcpy((void *)&e.val64 + last_off, + (void *)&ggtt_mm->ggtt_mm.last_partial_data + + last_off, bytes); + + ggtt_mm->ggtt_mm.last_partial_off = -1UL; + } else { + int last_offset; + + gvt_vgpu_err("failed to populate guest ggtt entry: abnormal ggtt entry write sequence, last_partial_off=%lx, offset=%x, bytes=%d, ggtt entry size=%d\n", + ggtt_mm->ggtt_mm.last_partial_off, off, + bytes, info->gtt_entry_size); + + /* set host ggtt entry to scratch page and clear + * virtual ggtt entry as not present for the last + * partially written offset + */ + last_offset = ggtt_mm->ggtt_mm.last_partial_off & + (~(info->gtt_entry_size - 1)); + + ggtt_get_host_entry(ggtt_mm, &m, last_offset); + ggtt_invalidate_pte(vgpu, &m); + ops->set_pfn(&m, gvt->gtt.scratch_mfn); + ops->clear_present(&m); + ggtt_set_host_entry(ggtt_mm, &m, last_offset); + ggtt_invalidate(gvt->dev_priv); + + ggtt_get_guest_entry(ggtt_mm, &e, last_offset); + ops->clear_present(&e); + ggtt_set_guest_entry(ggtt_mm, &e, last_offset); + + ggtt_mm->ggtt_mm.last_partial_off = off; + ggtt_mm->ggtt_mm.last_partial_data = e.val64; + + return 0; + } + } + if (ops->test_present(&e)) { gfn = ops->get_pfn(&e); m = e; diff --git a/drivers/gpu/drm/i915/gvt/gtt.h b/drivers/gpu/drm/i915/gvt/gtt.h index 3792f2b7f4ff..97e62647418a 100644 --- a/drivers/gpu/drm/i915/gvt/gtt.h +++ b/drivers/gpu/drm/i915/gvt/gtt.h @@ -150,6 +150,8 @@ struct intel_vgpu_mm { } ppgtt_mm; struct { void *virtual_ggtt; + unsigned long last_partial_off; + u64 last_partial_data; } ggtt_mm; }; }; diff --git a/drivers/gpu/drm/i915/gvt/gvt.h b/drivers/gpu/drm/i915/gvt/gvt.h index 05d15a095310..858967daf04b 100644 --- a/drivers/gpu/drm/i915/gvt/gvt.h +++ b/drivers/gpu/drm/i915/gvt/gvt.h @@ -268,6 +268,8 @@ struct intel_gvt_mmio { #define F_CMD_ACCESSED (1 << 5) /* This reg could be accessed by unaligned address */ #define F_UNALIGN (1 << 6) +/* This reg is saved/restored in context */ +#define F_IN_CTX (1 << 7) struct gvt_mmio_block *mmio_block; unsigned int num_mmio_block; @@ -639,6 +641,33 @@ static inline bool intel_gvt_mmio_has_mode_mask( return gvt->mmio.mmio_attribute[offset >> 2] & F_MODE_MASK; } +/** + * intel_gvt_mmio_is_in_ctx - check if an MMIO has the in-ctx mask + * @gvt: a GVT device + * @offset: register offset + * + * Returns: + * True if the MMIO has an in-context mask, false otherwise.
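F_IN_CTX above is one bit in a per-register attribute byte indexed by offset >> 2, one slot per 4-byte register. A self-contained version of that test/set pattern; the register-space size and the example offsets are invented:

#include <stdint.h>
#include <stdio.h>

#define MMIO_SIZE   0x4000                /* toy register space */
#define F_MODE_MASK (1u << 3)
#define F_IN_CTX    (1u << 7)

static uint8_t mmio_attr[MMIO_SIZE >> 2]; /* one attribute byte per register */

static void set_in_ctx(unsigned int offset)
{
	mmio_attr[offset >> 2] |= F_IN_CTX;
}

static int is_in_ctx(unsigned int offset)
{
	return mmio_attr[offset >> 2] & F_IN_CTX;
}

int main(void)
{
	set_in_ctx(0x2088);                   /* hypothetical register offset */
	printf("0x2088 in-ctx: %d\n", !!is_in_ctx(0x2088));
	printf("0x2090 in-ctx: %d\n", !!is_in_ctx(0x2090));
	return 0;
}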
+ * + */ +static inline bool intel_gvt_mmio_is_in_ctx( + struct intel_gvt *gvt, unsigned int offset) +{ + return gvt->mmio.mmio_attribute[offset >> 2] & F_IN_CTX; +} + +/** + * intel_gvt_mmio_set_in_ctx - mask a MMIO in logical context + * @gvt: a GVT device + * @offset: register offset + * + */ +static inline void intel_gvt_mmio_set_in_ctx( + struct intel_gvt *gvt, unsigned int offset) +{ + gvt->mmio.mmio_attribute[offset >> 2] |= F_IN_CTX; +} + int intel_gvt_debugfs_add_vgpu(struct intel_vgpu *vgpu); void intel_gvt_debugfs_remove_vgpu(struct intel_vgpu *vgpu); int intel_gvt_debugfs_init(struct intel_gvt *gvt); diff --git a/drivers/gpu/drm/i915/gvt/handlers.c b/drivers/gpu/drm/i915/gvt/handlers.c index bcbc47a88a70..8f1caacdc78a 100644 --- a/drivers/gpu/drm/i915/gvt/handlers.c +++ b/drivers/gpu/drm/i915/gvt/handlers.c @@ -3045,6 +3045,30 @@ int intel_vgpu_default_mmio_write(struct intel_vgpu *vgpu, unsigned int offset, return 0; } +/** + * intel_vgpu_mask_mmio_write - write mask register + * @vgpu: a vGPU + * @offset: access offset + * @p_data: write data buffer + * @bytes: access data length + * + * Returns: + * Zero on success, negative error code if failed. + */ +int intel_vgpu_mask_mmio_write(struct intel_vgpu *vgpu, unsigned int offset, + void *p_data, unsigned int bytes) +{ + u32 mask, old_vreg; + + old_vreg = vgpu_vreg(vgpu, offset); + write_vreg(vgpu, offset, p_data, bytes); + mask = vgpu_vreg(vgpu, offset) >> 16; + vgpu_vreg(vgpu, offset) = (old_vreg & ~mask) | + (vgpu_vreg(vgpu, offset) & mask); + + return 0; +} + /** * intel_gvt_in_force_nonpriv_whitelist - if a mmio is in whitelist to be * force-nopriv register diff --git a/drivers/gpu/drm/i915/gvt/mmio.h b/drivers/gpu/drm/i915/gvt/mmio.h index 71b620875943..dac8c6401e26 100644 --- a/drivers/gpu/drm/i915/gvt/mmio.h +++ b/drivers/gpu/drm/i915/gvt/mmio.h @@ -98,4 +98,6 @@ bool intel_gvt_in_force_nonpriv_whitelist(struct intel_gvt *gvt, int intel_vgpu_mmio_reg_rw(struct intel_vgpu *vgpu, unsigned int offset, void *pdata, unsigned int bytes, bool is_read); +int intel_vgpu_mask_mmio_write(struct intel_vgpu *vgpu, unsigned int offset, + void *p_data, unsigned int bytes); #endif diff --git a/drivers/gpu/drm/i915/gvt/mmio_context.c b/drivers/gpu/drm/i915/gvt/mmio_context.c index 0f949554d118..5ca9caf7552a 100644 --- a/drivers/gpu/drm/i915/gvt/mmio_context.c +++ b/drivers/gpu/drm/i915/gvt/mmio_context.c @@ -581,7 +581,9 @@ void intel_gvt_init_engine_mmio_context(struct intel_gvt *gvt) for (mmio = gvt->engine_mmio_list.mmio; i915_mmio_reg_valid(mmio->reg); mmio++) { - if (mmio->in_context) + if (mmio->in_context) { gvt->engine_mmio_list.ctx_mmio_count[mmio->ring_id]++; + intel_gvt_mmio_set_in_ctx(gvt, mmio->reg.reg); + } } } diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h index 34c125e2d90c..71e1aa54f774 100644 --- a/drivers/gpu/drm/i915/i915_drv.h +++ b/drivers/gpu/drm/i915/i915_drv.h @@ -340,14 +340,21 @@ struct drm_i915_file_private { unsigned int bsd_engine; -/* Client can have a maximum of 3 contexts banned before - * it is denied of creating new contexts. As one context - * ban needs 4 consecutive hangs, and more if there is - * progress in between, this is a last resort stop gap measure - * to limit the badly behaving clients access to gpu. +/* + * Every context ban increments per client ban score. Also + * hangs in short succession increments ban score. If ban threshold + * is reached, client is considered banned and submitting more work + * will fail. 
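intel_vgpu_mask_mmio_write() above implements the usual masked-register convention: bits 31:16 of the written value select which of bits 15:0 actually change. A freestanding demonstration of that update rule, with arbitrary example values:

#include <stdint.h>
#include <stdio.h>

/* Masked write: bits 31:16 of the incoming value are a write-enable
 * mask for bits 15:0. Unmasked bits keep their old contents. */
static uint32_t masked_write(uint32_t old, uint32_t val)
{
	uint32_t mask = val >> 16;

	return (old & ~mask) | (val & mask);
}

int main(void)
{
	uint32_t reg = 0x00ff;

	/* enable only bit 1: mask = 0x0002, new bit 1 = 1 */
	reg = masked_write(reg, (0x0002u << 16) | 0x0002);
	printf("reg = 0x%04x\n", reg);  /* still 0x00ff: bit 1 was set */

	/* clear bit 0: mask = 0x0001, new bit 0 = 0 */
	reg = masked_write(reg, 0x0001u << 16);
	printf("reg = 0x%04x\n", reg);  /* 0x00fe */
	return 0;
}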
This is a stop gap measure to limit the badly behaving + * clients access to gpu. Note that unbannable contexts never increment + * the client ban score. */ -#define I915_MAX_CLIENT_CONTEXT_BANS 3 - atomic_t context_bans; +#define I915_CLIENT_SCORE_HANG_FAST 1 +#define I915_CLIENT_FAST_HANG_JIFFIES (60 * HZ) +#define I915_CLIENT_SCORE_CONTEXT_BAN 3 +#define I915_CLIENT_SCORE_BANNED 9 + /** ban_score: Accumulated score of all ctx bans and fast hangs. */ + atomic_t ban_score; + unsigned long hang_timestamp; }; /* Interface history: @@ -645,6 +652,7 @@ enum intel_sbi_destination { #define QUIRK_BACKLIGHT_PRESENT (1<<3) #define QUIRK_PIN_SWIZZLED_PAGES (1<<5) #define QUIRK_INCREASE_T12_DELAY (1<<6) +#define QUIRK_INCREASE_DDI_DISABLED_TIME (1<<7) struct intel_fbdev; struct intel_fbc_work; @@ -2238,9 +2246,6 @@ static inline struct scatterlist *____sg_next(struct scatterlist *sg) **/ static inline struct scatterlist *__sg_next(struct scatterlist *sg) { -#ifdef CONFIG_DEBUG_SG - BUG_ON(sg->sg_magic != SG_MAGIC); -#endif return sg_is_last(sg) ? NULL : ____sg_next(sg); } diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c index 3704f4c0c2c9..17c5097721e8 100644 --- a/drivers/gpu/drm/i915/i915_gem.c +++ b/drivers/gpu/drm/i915/i915_gem.c @@ -2002,7 +2002,6 @@ int i915_gem_fault(struct vm_fault *vmf) bool write = !!(vmf->flags & FAULT_FLAG_WRITE); struct i915_vma *vma; pgoff_t page_offset; - unsigned int flags; int ret; /* We don't use vmf->pgoff since that has the fake offset */ @@ -2038,27 +2037,34 @@ int i915_gem_fault(struct vm_fault *vmf) goto err_unlock; } - /* If the object is smaller than a couple of partial vma, it is - * not worth only creating a single partial vma - we may as well - * clear enough space for the full object. - */ - flags = PIN_MAPPABLE; - if (obj->base.size > 2 * MIN_CHUNK_PAGES << PAGE_SHIFT) - flags |= PIN_NONBLOCK | PIN_NONFAULT; /* Now pin it into the GTT as needed */ - vma = i915_gem_object_ggtt_pin(obj, NULL, 0, 0, flags); + vma = i915_gem_object_ggtt_pin(obj, NULL, 0, 0, + PIN_MAPPABLE | + PIN_NONBLOCK | + PIN_NONFAULT); if (IS_ERR(vma)) { /* Use a partial view if it is bigger than available space */ struct i915_ggtt_view view = compute_partial_view(obj, page_offset, MIN_CHUNK_PAGES); + unsigned int flags; - /* Userspace is now writing through an untracked VMA, abandon + flags = PIN_MAPPABLE; + if (view.type == I915_GGTT_VIEW_NORMAL) + flags |= PIN_NONBLOCK; /* avoid warnings for pinned */ + + /* + * Userspace is now writing through an untracked VMA, abandon * all hope that the hardware is able to track future writes. 
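The scoring scheme introduced above charges a client I915_CLIENT_SCORE_CONTEXT_BAN per banned context plus I915_CLIENT_SCORE_HANG_FAST when hangs land within a 60-second window, and cuts the client off at I915_CLIENT_SCORE_BANNED. A simplified model, with plain integers standing in for the driver's atomics and jiffies:

#include <stdio.h>

#define SCORE_HANG_FAST   1
#define FAST_HANG_WINDOW  60   /* seconds */
#define SCORE_CONTEXT_BAN 3
#define SCORE_BANNED      9

struct client {
	int ban_score;
	long last_hang;            /* timestamp of the previous hang */
};

static void mark_guilty(struct client *c, int ctx_banned, long now)
{
	int score = ctx_banned ? SCORE_CONTEXT_BAN : 0;

	/* hangs in quick succession also count against the client */
	if (now - c->last_hang < FAST_HANG_WINDOW)
		score += SCORE_HANG_FAST;
	c->last_hang = now;
	c->ban_score += score;
}

int main(void)
{
	struct client c = { 0, -FAST_HANG_WINDOW };
	long t = 0;

	while (c.ban_score < SCORE_BANNED)
		mark_guilty(&c, 1, t += 10); /* a banned context every 10s */
	printf("client banned, score=%d\n", c.ban_score);
	return 0;
}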
*/ obj->frontbuffer_ggtt_origin = ORIGIN_CPU; - vma = i915_gem_object_ggtt_pin(obj, &view, 0, 0, PIN_MAPPABLE); + vma = i915_gem_object_ggtt_pin(obj, &view, 0, 0, flags); + if (IS_ERR(vma) && !view.type) { + flags = PIN_MAPPABLE; + view.type = I915_GGTT_VIEW_PARTIAL; + vma = i915_gem_object_ggtt_pin(obj, &view, 0, 0, flags); + } } if (IS_ERR(vma)) { ret = PTR_ERR(vma); @@ -2933,32 +2939,54 @@ i915_gem_object_pwrite_gtt(struct drm_i915_gem_object *obj, return 0; } +static void i915_gem_client_mark_guilty(struct drm_i915_file_private *file_priv, + const struct i915_gem_context *ctx) +{ + unsigned int score; + unsigned long prev_hang; + + if (i915_gem_context_is_banned(ctx)) + score = I915_CLIENT_SCORE_CONTEXT_BAN; + else + score = 0; + + prev_hang = xchg(&file_priv->hang_timestamp, jiffies); + if (time_before(jiffies, prev_hang + I915_CLIENT_FAST_HANG_JIFFIES)) + score += I915_CLIENT_SCORE_HANG_FAST; + + if (score) { + atomic_add(score, &file_priv->ban_score); + + DRM_DEBUG_DRIVER("client %s: gained %u ban score, now %u\n", + ctx->name, score, + atomic_read(&file_priv->ban_score)); + } +} + static void i915_gem_context_mark_guilty(struct i915_gem_context *ctx) { - bool banned; + unsigned int score; + bool banned, bannable; atomic_inc(&ctx->guilty_count); - banned = false; - if (i915_gem_context_is_bannable(ctx)) { - unsigned int score; + bannable = i915_gem_context_is_bannable(ctx); + score = atomic_add_return(CONTEXT_SCORE_GUILTY, &ctx->ban_score); + banned = score >= CONTEXT_SCORE_BAN_THRESHOLD; - score = atomic_add_return(CONTEXT_SCORE_GUILTY, - &ctx->ban_score); - banned = score >= CONTEXT_SCORE_BAN_THRESHOLD; + DRM_DEBUG_DRIVER("context %s: guilty %d, score %u, ban %s\n", + ctx->name, atomic_read(&ctx->guilty_count), + score, yesno(banned && bannable)); - DRM_DEBUG_DRIVER("context %s marked guilty (score %d) banned? 
%s\n", - ctx->name, score, yesno(banned)); - } - if (!banned) + /* Cool contexts don't accumulate client ban score */ + if (!bannable) return; - i915_gem_context_set_banned(ctx); - if (!IS_ERR_OR_NULL(ctx->file_priv)) { - atomic_inc(&ctx->file_priv->context_bans); - DRM_DEBUG_DRIVER("client %s has had %d context banned\n", - ctx->name, atomic_read(&ctx->file_priv->context_bans)); - } + if (banned) + i915_gem_context_set_banned(ctx); + + if (!IS_ERR_OR_NULL(ctx->file_priv)) + i915_gem_client_mark_guilty(ctx->file_priv, ctx); } static void i915_gem_context_mark_innocent(struct i915_gem_context *ctx) @@ -5736,6 +5764,7 @@ int i915_gem_open(struct drm_i915_private *i915, struct drm_file *file) INIT_LIST_HEAD(&file_priv->mm.request_list); file_priv->bsd_engine = -1; + file_priv->hang_timestamp = jiffies; ret = i915_gem_context_open(i915, file); if (ret) diff --git a/drivers/gpu/drm/i915/i915_gem_context.c b/drivers/gpu/drm/i915/i915_gem_context.c index 33f8a4b3c981..060335d3d9e0 100644 --- a/drivers/gpu/drm/i915/i915_gem_context.c +++ b/drivers/gpu/drm/i915/i915_gem_context.c @@ -652,7 +652,7 @@ int i915_gem_switch_to_kernel_context(struct drm_i915_private *dev_priv) static bool client_is_banned(struct drm_i915_file_private *file_priv) { - return atomic_read(&file_priv->context_bans) > I915_MAX_CLIENT_CONTEXT_BANS; + return atomic_read(&file_priv->ban_score) >= I915_CLIENT_SCORE_BANNED; } int i915_gem_context_create_ioctl(struct drm_device *dev, void *data, diff --git a/drivers/gpu/drm/i915/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/i915_gem_execbuffer.c index f627a8c47c58..22df17c8ca9b 100644 --- a/drivers/gpu/drm/i915/i915_gem_execbuffer.c +++ b/drivers/gpu/drm/i915/i915_gem_execbuffer.c @@ -489,7 +489,9 @@ eb_validate_vma(struct i915_execbuffer *eb, } static int -eb_add_vma(struct i915_execbuffer *eb, unsigned int i, struct i915_vma *vma) +eb_add_vma(struct i915_execbuffer *eb, + unsigned int i, unsigned batch_idx, + struct i915_vma *vma) { struct drm_i915_gem_exec_object2 *entry = &eb->exec[i]; int err; @@ -522,6 +524,24 @@ eb_add_vma(struct i915_execbuffer *eb, unsigned int i, struct i915_vma *vma) eb->flags[i] = entry->flags; vma->exec_flags = &eb->flags[i]; + /* + * SNA is doing fancy tricks with compressing batch buffers, which leads + * to negative relocation deltas. Usually that works out ok since the + * relocate address is still positive, except when the batch is placed + * very low in the GTT. Ensure this doesn't happen. + * + * Note that actual hangs have only been observed on gen7, but for + * paranoia do it everywhere. 
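The relocated SNA comment explains the code it now sits next to: when userspace has not pinned the batch, __EXEC_OBJECT_NEEDS_BIAS keeps it away from the bottom of the GTT so a negative relocation delta cannot wrap. A toy illustration; the flag names are reused for readability, but the values and the bias are invented:

#include <stdio.h>

#define EXEC_OBJECT_PINNED       (1u << 0)
#define __EXEC_OBJECT_NEEDS_BIAS (1u << 1)
#define BATCH_BIAS               0x1000u /* keep batch above this offset */

static unsigned int place_batch(unsigned int flags, unsigned int wanted)
{
	/* honour an explicit pin; otherwise keep clear of offset 0 */
	if (!(flags & EXEC_OBJECT_PINNED))
		flags |= __EXEC_OBJECT_NEEDS_BIAS;
	if ((flags & __EXEC_OBJECT_NEEDS_BIAS) && wanted < BATCH_BIAS)
		wanted = BATCH_BIAS;
	return wanted;
}

int main(void)
{
	unsigned int addr = place_batch(0, 0); /* allocator proposed 0 */
	int delta = -16;                       /* negative reloc delta */

	/* with the bias the relocated address stays positive */
	printf("batch at 0x%x, reloc target 0x%x\n", addr, addr + delta);
	return 0;
}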
+ */ + if (i == batch_idx) { + if (!(eb->flags[i] & EXEC_OBJECT_PINNED)) + eb->flags[i] |= __EXEC_OBJECT_NEEDS_BIAS; + if (eb->reloc_cache.has_fence) + eb->flags[i] |= EXEC_OBJECT_NEEDS_FENCE; + + eb->batch = vma; + } + err = 0; if (eb_pin_vma(eb, entry, vma)) { if (entry->offset != vma->node.start) { @@ -716,7 +736,7 @@ static int eb_lookup_vmas(struct i915_execbuffer *eb) { struct radix_tree_root *handles_vma = &eb->ctx->handles_vma; struct drm_i915_gem_object *obj; - unsigned int i; + unsigned int i, batch; int err; if (unlikely(i915_gem_context_is_closed(eb->ctx))) @@ -728,6 +748,8 @@ static int eb_lookup_vmas(struct i915_execbuffer *eb) INIT_LIST_HEAD(&eb->relocs); INIT_LIST_HEAD(&eb->unbound); + batch = eb_batch_index(eb); + for (i = 0; i < eb->buffer_count; i++) { u32 handle = eb->exec[i].handle; struct i915_lut_handle *lut; @@ -770,33 +792,16 @@ static int eb_lookup_vmas(struct i915_execbuffer *eb) lut->handle = handle; add_vma: - err = eb_add_vma(eb, i, vma); + err = eb_add_vma(eb, i, batch, vma); if (unlikely(err)) goto err_vma; GEM_BUG_ON(vma != eb->vma[i]); GEM_BUG_ON(vma->exec_flags != &eb->flags[i]); + GEM_BUG_ON(drm_mm_node_allocated(&vma->node) && + eb_vma_misplaced(&eb->exec[i], vma, eb->flags[i])); } - /* take note of the batch buffer before we might reorder the lists */ - i = eb_batch_index(eb); - eb->batch = eb->vma[i]; - GEM_BUG_ON(eb->batch->exec_flags != &eb->flags[i]); - - /* - * SNA is doing fancy tricks with compressing batch buffers, which leads - * to negative relocation deltas. Usually that works out ok since the - * relocate address is still positive, except when the batch is placed - * very low in the GTT. Ensure this doesn't happen. - * - * Note that actual hangs have only been observed on gen7, but for - * paranoia do it everywhere. - */ - if (!(eb->flags[i] & EXEC_OBJECT_PINNED)) - eb->flags[i] |= __EXEC_OBJECT_NEEDS_BIAS; - if (eb->reloc_cache.has_fence) - eb->flags[i] |= EXEC_OBJECT_NEEDS_FENCE; - eb->args->flags |= __EXEC_VALIDATED; return eb_reserve(eb); diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c index f9bc3aaa90d0..c16cb025755e 100644 --- a/drivers/gpu/drm/i915/i915_irq.c +++ b/drivers/gpu/drm/i915/i915_irq.c @@ -1893,9 +1893,17 @@ static void i9xx_pipestat_irq_ack(struct drm_i915_private *dev_priv, /* * Clear the PIPE*STAT regs before the IIR + * + * Toggle the enable bits to make sure we get an + * edge in the ISR pipe event bit if we don't clear + * all the enabled status bits. Otherwise the edge + * triggered IIR on i965/g4x wouldn't notice that + * an interrupt is still pending. */ - if (pipe_stats[pipe]) - I915_WRITE(reg, enable_mask | pipe_stats[pipe]); + if (pipe_stats[pipe]) { + I915_WRITE(reg, pipe_stats[pipe]); + I915_WRITE(reg, enable_mask); + } } spin_unlock(&dev_priv->irq_lock); } @@ -1990,10 +1998,38 @@ static void valleyview_pipestat_irq_handler(struct drm_i915_private *dev_priv, static u32 i9xx_hpd_irq_ack(struct drm_i915_private *dev_priv) { - u32 hotplug_status = I915_READ(PORT_HOTPLUG_STAT); + u32 hotplug_status = 0, hotplug_status_mask; + int i; - if (hotplug_status) + if (IS_G4X(dev_priv) || + IS_VALLEYVIEW(dev_priv) || IS_CHERRYVIEW(dev_priv)) + hotplug_status_mask = HOTPLUG_INT_STATUS_G4X | + DP_AUX_CHANNEL_MASK_INT_STATUS_G4X; + else + hotplug_status_mask = HOTPLUG_INT_STATUS_I915; + + /* + * We absolutely have to clear all the pending interrupt + * bits in PORT_HOTPLUG_STAT. 
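Both interrupt hunks above serve the same edge-triggered IIR constraint: a status register must actually reach zero before the summary bit can produce a new edge, and the hotplug path therefore re-reads and re-acks PORT_HOTPLUG_STAT in a bounded loop because acking can race with fresh events. A self-contained model of that loop with a fake racy register:

#include <stdint.h>
#include <stdio.h>

static uint32_t fake_stat = 0x5; /* pending status bits */
static int racy_events = 2;      /* events that land while acking */

static uint32_t stat_read(void) { return fake_stat; }

static void stat_ack(uint32_t bits)
{
	fake_stat &= ~bits;
	if (racy_events-- > 0)
		fake_stat |= 0x8;    /* a new event sneaks in */
}

/* Keep clearing until the register reads back zero, so the
 * edge-triggered summary bit can fire again for the next event. */
static uint32_t ack_all(void)
{
	uint32_t seen = 0;
	int i;

	for (i = 0; i < 10; i++) {
		uint32_t tmp = stat_read();

		if (!tmp)
			return seen;
		seen |= tmp;
		stat_ack(tmp);
	}
	fprintf(stderr, "status did not clear\n");
	return seen;
}

int main(void)
{
	printf("handled status 0x%x\n", ack_all());
	return 0;
}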
Otherwise the ISR port + * interrupt bit won't have an edge, and the i965/g4x + * edge triggered IIR will not notice that an interrupt + * is still pending. We can't use PORT_HOTPLUG_EN to + * guarantee the edge as the act of toggling the enable + * bits can itself generate a new hotplug interrupt :( + */ + for (i = 0; i < 10; i++) { + u32 tmp = I915_READ(PORT_HOTPLUG_STAT) & hotplug_status_mask; + + if (tmp == 0) + return hotplug_status; + + hotplug_status |= tmp; I915_WRITE(PORT_HOTPLUG_STAT, hotplug_status); + } + + WARN_ONCE(1, + "PORT_HOTPLUG_STAT did not clear (0x%08x)\n", + I915_READ(PORT_HOTPLUG_STAT)); return hotplug_status; } diff --git a/drivers/gpu/drm/i915/i915_reg.h b/drivers/gpu/drm/i915/i915_reg.h index f11bb213ec07..7720569f2024 100644 --- a/drivers/gpu/drm/i915/i915_reg.h +++ b/drivers/gpu/drm/i915/i915_reg.h @@ -2425,12 +2425,17 @@ enum i915_power_well_id { #define _3D_CHICKEN _MMIO(0x2084) #define _3D_CHICKEN_HIZ_PLANE_DISABLE_MSAA_4X_SNB (1 << 10) #define _3D_CHICKEN2 _MMIO(0x208c) + +#define FF_SLICE_CHICKEN _MMIO(0x2088) +#define FF_SLICE_CHICKEN_CL_PROVOKING_VERTEX_FIX (1 << 1) + /* Disables pipelining of read flushes past the SF-WIZ interface. * Required on all Ironlake steppings according to the B-Spec, but the * particular danger of not doing so is not specified. */ # define _3D_CHICKEN2_WM_READ_PIPELINED (1 << 14) #define _3D_CHICKEN3 _MMIO(0x2090) +#define _3D_CHICKEN_SF_PROVOKING_VERTEX_FIX (1 << 12) #define _3D_CHICKEN_SF_DISABLE_OBJEND_CULL (1 << 10) #define _3D_CHICKEN3_AA_LINE_QUALITY_FIX_ENABLE (1 << 5) #define _3D_CHICKEN3_SF_DISABLE_FASTCLIP_CULL (1 << 5) diff --git a/drivers/gpu/drm/i915/i915_vma.c b/drivers/gpu/drm/i915/i915_vma.c index 9324d476e0a7..0531c01c3604 100644 --- a/drivers/gpu/drm/i915/i915_vma.c +++ b/drivers/gpu/drm/i915/i915_vma.c @@ -109,7 +109,7 @@ vma_create(struct drm_i915_gem_object *obj, obj->base.size >> PAGE_SHIFT)); vma->size = view->partial.size; vma->size <<= PAGE_SHIFT; - GEM_BUG_ON(vma->size >= obj->base.size); + GEM_BUG_ON(vma->size > obj->base.size); } else if (view->type == I915_GGTT_VIEW_ROTATED) { vma->size = intel_rotation_info_size(&view->rotated); vma->size <<= PAGE_SHIFT; diff --git a/drivers/gpu/drm/i915/intel_crt.c b/drivers/gpu/drm/i915/intel_crt.c index de0e22322c76..072b326d5ee0 100644 --- a/drivers/gpu/drm/i915/intel_crt.c +++ b/drivers/gpu/drm/i915/intel_crt.c @@ -304,6 +304,9 @@ intel_crt_mode_valid(struct drm_connector *connector, int max_dotclk = dev_priv->max_dotclk_freq; int max_clock; + if (mode->flags & DRM_MODE_FLAG_DBLSCAN) + return MODE_NO_DBLESCAN; + if (mode->clock < 25000) return MODE_CLOCK_LOW; @@ -337,6 +340,12 @@ static bool intel_crt_compute_config(struct intel_encoder *encoder, struct intel_crtc_state *pipe_config, struct drm_connector_state *conn_state) { + struct drm_display_mode *adjusted_mode = + &pipe_config->base.adjusted_mode; + + if (adjusted_mode->flags & DRM_MODE_FLAG_DBLSCAN) + return false; + return true; } @@ -344,6 +353,12 @@ static bool pch_crt_compute_config(struct intel_encoder *encoder, struct intel_crtc_state *pipe_config, struct drm_connector_state *conn_state) { + struct drm_display_mode *adjusted_mode = + &pipe_config->base.adjusted_mode; + + if (adjusted_mode->flags & DRM_MODE_FLAG_DBLSCAN) + return false; + pipe_config->has_pch_encoder = true; return true; @@ -354,6 +369,11 @@ static bool hsw_crt_compute_config(struct intel_encoder *encoder, struct drm_connector_state *conn_state) { struct drm_i915_private *dev_priv = to_i915(encoder->base.dev); + struct 
drm_display_mode *adjusted_mode = + &pipe_config->base.adjusted_mode; + + if (adjusted_mode->flags & DRM_MODE_FLAG_DBLSCAN) + return false; pipe_config->has_pch_encoder = true; diff --git a/drivers/gpu/drm/i915/intel_ddi.c b/drivers/gpu/drm/i915/intel_ddi.c index f4a8598a2d39..fed26d6e4e27 100644 --- a/drivers/gpu/drm/i915/intel_ddi.c +++ b/drivers/gpu/drm/i915/intel_ddi.c @@ -1782,15 +1782,24 @@ void intel_ddi_enable_transcoder_func(const struct intel_crtc_state *crtc_state) I915_WRITE(TRANS_DDI_FUNC_CTL(cpu_transcoder), temp); } -void intel_ddi_disable_transcoder_func(struct drm_i915_private *dev_priv, - enum transcoder cpu_transcoder) +void intel_ddi_disable_transcoder_func(const struct intel_crtc_state *crtc_state) { + struct intel_crtc *crtc = to_intel_crtc(crtc_state->base.crtc); + struct drm_i915_private *dev_priv = to_i915(crtc->base.dev); + enum transcoder cpu_transcoder = crtc_state->cpu_transcoder; i915_reg_t reg = TRANS_DDI_FUNC_CTL(cpu_transcoder); uint32_t val = I915_READ(reg); val &= ~(TRANS_DDI_FUNC_ENABLE | TRANS_DDI_PORT_MASK | TRANS_DDI_DP_VC_PAYLOAD_ALLOC); val |= TRANS_DDI_PORT_NONE; I915_WRITE(reg, val); + + if (dev_priv->quirks & QUIRK_INCREASE_DDI_DISABLED_TIME && + intel_crtc_has_type(crtc_state, INTEL_OUTPUT_HDMI)) { + DRM_DEBUG_KMS("Quirk Increase DDI disabled time\n"); + /* Quirk time at 100ms for reliable operation */ + msleep(100); + } } int intel_ddi_toggle_hdcp_signalling(struct intel_encoder *intel_encoder, diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c index dee3a8e659f1..dec0d60921bf 100644 --- a/drivers/gpu/drm/i915/intel_display.c +++ b/drivers/gpu/drm/i915/intel_display.c @@ -5809,7 +5809,7 @@ static void haswell_crtc_disable(struct intel_crtc_state *old_crtc_state, intel_ddi_set_vc_payload_alloc(intel_crtc->config, false); if (!transcoder_is_dsi(cpu_transcoder)) - intel_ddi_disable_transcoder_func(dev_priv, cpu_transcoder); + intel_ddi_disable_transcoder_func(old_crtc_state); if (INTEL_GEN(dev_priv) >= 9) skylake_scaler_disable(intel_crtc); @@ -14469,12 +14469,22 @@ static enum drm_mode_status intel_mode_valid(struct drm_device *dev, const struct drm_display_mode *mode) { + /* + * Can't reject DBLSCAN here because Xorg ddxen can add piles + * of DBLSCAN modes to the output's mode list when they detect + * the scaling mode property on the connector. And they don't + * ask the kernel to validate those modes in any way until + * modeset time at which point the client gets a protocol error. + * So in order to not upset those clients we silently ignore the + * DBLSCAN flag on such connectors. For other connectors we will + * reject modes with the DBLSCAN flag in encoder->compute_config(). + * And we always reject DBLSCAN modes in connector->mode_valid() + * as we never want such modes on the connector's mode list. 
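The policy spelled out in this comment is two-layered: connector->mode_valid() keeps DBLSCAN modes off the mode list, and every encoder's ->compute_config() rejects them again at modeset time, while the global intel_mode_valid() stays silent for compatibility. A compact sketch of the two layers; the enum values and struct are stand-ins for the DRM types:

#include <stdio.h>

#define MODE_FLAG_DBLSCAN (1u << 0)

enum mode_status { MODE_OK, MODE_NO_DBLESCAN };

struct display_mode { unsigned int flags; };

/* connector->mode_valid(): keep DBLSCAN off the mode list entirely */
static enum mode_status connector_mode_valid(const struct display_mode *m)
{
	if (m->flags & MODE_FLAG_DBLSCAN)
		return MODE_NO_DBLESCAN;
	return MODE_OK;
}

/* encoder->compute_config(): last line of defence at modeset time */
static int encoder_compute_config(const struct display_mode *adjusted)
{
	if (adjusted->flags & MODE_FLAG_DBLSCAN)
		return -1;
	return 0;
}

int main(void)
{
	struct display_mode m = { MODE_FLAG_DBLSCAN };

	printf("mode_valid: %s\n",
	       connector_mode_valid(&m) == MODE_OK ? "ok" : "rejected");
	printf("compute_config: %s\n",
	       encoder_compute_config(&m) ? "rejected" : "ok");
	return 0;
}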
+ */ + if (mode->vscan > 1) return MODE_NO_VSCAN; - if (mode->flags & DRM_MODE_FLAG_DBLSCAN) - return MODE_NO_DBLESCAN; - if (mode->flags & DRM_MODE_FLAG_HSKEW) return MODE_H_ILLEGAL; @@ -14636,6 +14646,18 @@ static void quirk_increase_t12_delay(struct drm_device *dev) DRM_INFO("Applying T12 delay quirk\n"); } +/* + * GeminiLake NUC HDMI outputs require additional off time + * this allows the onboard retimer to correctly sync to signal + */ +static void quirk_increase_ddi_disabled_time(struct drm_device *dev) +{ + struct drm_i915_private *dev_priv = to_i915(dev); + + dev_priv->quirks |= QUIRK_INCREASE_DDI_DISABLED_TIME; + DRM_INFO("Applying Increase DDI Disabled quirk\n"); +} + struct intel_quirk { int device; int subsystem_vendor; @@ -14722,6 +14744,13 @@ static struct intel_quirk intel_quirks[] = { /* Toshiba Satellite P50-C-18C */ { 0x191B, 0x1179, 0xF840, quirk_increase_t12_delay }, + + /* GeminiLake NUC */ + { 0x3185, 0x8086, 0x2072, quirk_increase_ddi_disabled_time }, + { 0x3184, 0x8086, 0x2072, quirk_increase_ddi_disabled_time }, + /* ASRock ITX*/ + { 0x3185, 0x1849, 0x2212, quirk_increase_ddi_disabled_time }, + { 0x3184, 0x1849, 0x2212, quirk_increase_ddi_disabled_time }, }; static void intel_init_quirks(struct drm_device *dev) diff --git a/drivers/gpu/drm/i915/intel_dp.c b/drivers/gpu/drm/i915/intel_dp.c index 8320f0e8e3be..16faea30114a 100644 --- a/drivers/gpu/drm/i915/intel_dp.c +++ b/drivers/gpu/drm/i915/intel_dp.c @@ -420,6 +420,9 @@ intel_dp_mode_valid(struct drm_connector *connector, int max_rate, mode_rate, max_lanes, max_link_clock; int max_dotclk; + if (mode->flags & DRM_MODE_FLAG_DBLSCAN) + return MODE_NO_DBLESCAN; + max_dotclk = intel_dp_downstream_max_dotclock(intel_dp); if (intel_dp_is_edp(intel_dp) && fixed_mode) { @@ -1862,7 +1865,10 @@ intel_dp_compute_config(struct intel_encoder *encoder, conn_state->scaling_mode); } - if ((IS_VALLEYVIEW(dev_priv) || IS_CHERRYVIEW(dev_priv)) && + if (adjusted_mode->flags & DRM_MODE_FLAG_DBLSCAN) + return false; + + if (HAS_GMCH_DISPLAY(dev_priv) && adjusted_mode->flags & DRM_MODE_FLAG_INTERLACE) return false; @@ -2782,16 +2788,6 @@ static void intel_disable_dp(struct intel_encoder *encoder, static void g4x_disable_dp(struct intel_encoder *encoder, const struct intel_crtc_state *old_crtc_state, const struct drm_connector_state *old_conn_state) -{ - intel_disable_dp(encoder, old_crtc_state, old_conn_state); - - /* disable the port before the pipe on g4x */ - intel_dp_link_down(encoder, old_crtc_state); -} - -static void ilk_disable_dp(struct intel_encoder *encoder, - const struct intel_crtc_state *old_crtc_state, - const struct drm_connector_state *old_conn_state) { intel_disable_dp(encoder, old_crtc_state, old_conn_state); } @@ -2807,13 +2803,19 @@ static void vlv_disable_dp(struct intel_encoder *encoder, intel_disable_dp(encoder, old_crtc_state, old_conn_state); } -static void ilk_post_disable_dp(struct intel_encoder *encoder, +static void g4x_post_disable_dp(struct intel_encoder *encoder, const struct intel_crtc_state *old_crtc_state, const struct drm_connector_state *old_conn_state) { struct intel_dp *intel_dp = enc_to_intel_dp(&encoder->base); enum port port = encoder->port; + /* + * Bspec does not list a specific disable sequence for g4x DP. + * Follow the ilk+ sequence (disable pipe before the port) for + * g4x DP as it does not suffer from underruns like the normal + * g4x modeset sequence (disable pipe after the port). 
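The new GeminiLake/ASRock entries above use the driver's quirk-table idiom: match the PCI device ID plus subsystem vendor/device, then run a setup hook. A freestanding version of the lookup, with the IDs copied from the hunk and the structures reduced to the essentials (the real table also supports wildcard matching, omitted here):

#include <stdio.h>

struct quirk {
	int device, sub_vendor, sub_device;
	void (*hook)(void);
};

static void increase_ddi_disabled_time(void)
{
	printf("applying DDI disabled-time quirk\n");
}

static const struct quirk quirks[] = {
	{ 0x3185, 0x8086, 0x2072, increase_ddi_disabled_time }, /* GLK NUC */
	{ 0x3184, 0x8086, 0x2072, increase_ddi_disabled_time },
	{ 0x3185, 0x1849, 0x2212, increase_ddi_disabled_time }, /* ASRock ITX */
};

static void init_quirks(int dev, int sub_ven, int sub_dev)
{
	unsigned int i;

	for (i = 0; i < sizeof(quirks) / sizeof(quirks[0]); i++) {
		const struct quirk *q = &quirks[i];

		if (q->device == dev &&
		    q->sub_vendor == sub_ven &&
		    q->sub_device == sub_dev)
			q->hook();
	}
}

int main(void)
{
	init_quirks(0x3185, 0x8086, 0x2072); /* matches the first entry */
	return 0;
}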
+ */ intel_dp_link_down(encoder, old_crtc_state); /* Only ilk+ has port A */ @@ -6337,7 +6339,7 @@ intel_dp_init_connector(struct intel_digital_port *intel_dig_port, drm_connector_init(dev, connector, &intel_dp_connector_funcs, type); drm_connector_helper_add(connector, &intel_dp_connector_helper_funcs); - if (!IS_VALLEYVIEW(dev_priv) && !IS_CHERRYVIEW(dev_priv)) + if (!HAS_GMCH_DISPLAY(dev_priv)) connector->interlace_allowed = true; connector->doublescan_allowed = 0; @@ -6436,15 +6438,11 @@ bool intel_dp_init(struct drm_i915_private *dev_priv, intel_encoder->enable = vlv_enable_dp; intel_encoder->disable = vlv_disable_dp; intel_encoder->post_disable = vlv_post_disable_dp; - } else if (INTEL_GEN(dev_priv) >= 5) { - intel_encoder->pre_enable = g4x_pre_enable_dp; - intel_encoder->enable = g4x_enable_dp; - intel_encoder->disable = ilk_disable_dp; - intel_encoder->post_disable = ilk_post_disable_dp; } else { intel_encoder->pre_enable = g4x_pre_enable_dp; intel_encoder->enable = g4x_enable_dp; intel_encoder->disable = g4x_disable_dp; + intel_encoder->post_disable = g4x_post_disable_dp; } intel_dig_port->dp.output_reg = output_reg; diff --git a/drivers/gpu/drm/i915/intel_dp_mst.c b/drivers/gpu/drm/i915/intel_dp_mst.c index 9e6956c08688..5890500a3a8b 100644 --- a/drivers/gpu/drm/i915/intel_dp_mst.c +++ b/drivers/gpu/drm/i915/intel_dp_mst.c @@ -48,6 +48,9 @@ static bool intel_dp_mst_compute_config(struct intel_encoder *encoder, bool reduce_m_n = drm_dp_has_quirk(&intel_dp->desc, DP_DPCD_QUIRK_LIMITED_M_N); + if (adjusted_mode->flags & DRM_MODE_FLAG_DBLSCAN) + return false; + pipe_config->has_pch_encoder = false; bpp = 24; if (intel_dp->compliance.test_data.bpc) { @@ -366,6 +369,9 @@ intel_dp_mst_mode_valid(struct drm_connector *connector, if (!intel_dp) return MODE_ERROR; + if (mode->flags & DRM_MODE_FLAG_DBLSCAN) + return MODE_NO_DBLESCAN; + max_link_clock = intel_dp_max_link_rate(intel_dp); max_lanes = intel_dp_max_lane_count(intel_dp); diff --git a/drivers/gpu/drm/i915/intel_drv.h b/drivers/gpu/drm/i915/intel_drv.h index 0361130500a6..b8eefbffc77d 100644 --- a/drivers/gpu/drm/i915/intel_drv.h +++ b/drivers/gpu/drm/i915/intel_drv.h @@ -1388,8 +1388,7 @@ void hsw_fdi_link_train(struct intel_crtc *crtc, void intel_ddi_init(struct drm_i915_private *dev_priv, enum port port); bool intel_ddi_get_hw_state(struct intel_encoder *encoder, enum pipe *pipe); void intel_ddi_enable_transcoder_func(const struct intel_crtc_state *crtc_state); -void intel_ddi_disable_transcoder_func(struct drm_i915_private *dev_priv, - enum transcoder cpu_transcoder); +void intel_ddi_disable_transcoder_func(const struct intel_crtc_state *crtc_state); void intel_ddi_enable_pipe_clock(const struct intel_crtc_state *crtc_state); void intel_ddi_disable_pipe_clock(const struct intel_crtc_state *crtc_state); struct intel_encoder * diff --git a/drivers/gpu/drm/i915/intel_dsi.c b/drivers/gpu/drm/i915/intel_dsi.c index cf39ca90d887..f349b3920199 100644 --- a/drivers/gpu/drm/i915/intel_dsi.c +++ b/drivers/gpu/drm/i915/intel_dsi.c @@ -326,6 +326,9 @@ static bool intel_dsi_compute_config(struct intel_encoder *encoder, conn_state->scaling_mode); } + if (adjusted_mode->flags & DRM_MODE_FLAG_DBLSCAN) + return false; + /* DSI uses short packets for sync events, so clear mode flags for DSI */ adjusted_mode->flags = 0; @@ -1266,6 +1269,9 @@ intel_dsi_mode_valid(struct drm_connector *connector, DRM_DEBUG_KMS("\n"); + if (mode->flags & DRM_MODE_FLAG_DBLSCAN) + return MODE_NO_DBLESCAN; + if (fixed_mode) { if (mode->hdisplay > fixed_mode->hdisplay) 
return MODE_PANEL; diff --git a/drivers/gpu/drm/i915/intel_dvo.c b/drivers/gpu/drm/i915/intel_dvo.c index a70d767313aa..61d908e0df0e 100644 --- a/drivers/gpu/drm/i915/intel_dvo.c +++ b/drivers/gpu/drm/i915/intel_dvo.c @@ -219,6 +219,9 @@ intel_dvo_mode_valid(struct drm_connector *connector, int max_dotclk = to_i915(connector->dev)->max_dotclk_freq; int target_clock = mode->clock; + if (mode->flags & DRM_MODE_FLAG_DBLSCAN) + return MODE_NO_DBLESCAN; + /* XXX: Validate clock range */ if (fixed_mode) { @@ -254,6 +257,9 @@ static bool intel_dvo_compute_config(struct intel_encoder *encoder, if (fixed_mode) intel_fixed_panel_mode(fixed_mode, adjusted_mode); + if (adjusted_mode->flags & DRM_MODE_FLAG_DBLSCAN) + return false; + return true; } diff --git a/drivers/gpu/drm/i915/intel_hdmi.c b/drivers/gpu/drm/i915/intel_hdmi.c index ee929f31f7db..d8cb53ef4351 100644 --- a/drivers/gpu/drm/i915/intel_hdmi.c +++ b/drivers/gpu/drm/i915/intel_hdmi.c @@ -1557,6 +1557,9 @@ intel_hdmi_mode_valid(struct drm_connector *connector, bool force_dvi = READ_ONCE(to_intel_digital_connector_state(connector->state)->force_audio) == HDMI_AUDIO_OFF_DVI; + if (mode->flags & DRM_MODE_FLAG_DBLSCAN) + return MODE_NO_DBLESCAN; + clock = mode->clock; if ((mode->flags & DRM_MODE_FLAG_3D_MASK) == DRM_MODE_FLAG_3D_FRAME_PACKING) @@ -1677,6 +1680,9 @@ bool intel_hdmi_compute_config(struct intel_encoder *encoder, int desired_bpp; bool force_dvi = intel_conn_state->force_audio == HDMI_AUDIO_OFF_DVI; + if (adjusted_mode->flags & DRM_MODE_FLAG_DBLSCAN) + return false; + pipe_config->has_hdmi_sink = !force_dvi && intel_hdmi->has_hdmi_sink; if (pipe_config->has_hdmi_sink) diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c index 15434cad5430..7c4c8fb1dae4 100644 --- a/drivers/gpu/drm/i915/intel_lrc.c +++ b/drivers/gpu/drm/i915/intel_lrc.c @@ -1545,11 +1545,21 @@ static u32 *gen9_init_indirectctx_bb(struct intel_engine_cs *engine, u32 *batch) /* WaFlushCoherentL3CacheLinesAtContextSwitch:skl,bxt,glk */ batch = gen8_emit_flush_coherentl3_wa(engine, batch); + *batch++ = MI_LOAD_REGISTER_IMM(3); + /* WaDisableGatherAtSetShaderCommonSlice:skl,bxt,kbl,glk */ - *batch++ = MI_LOAD_REGISTER_IMM(1); *batch++ = i915_mmio_reg_offset(COMMON_SLICE_CHICKEN2); *batch++ = _MASKED_BIT_DISABLE( GEN9_DISABLE_GATHER_AT_SET_SHADER_COMMON_SLICE); + + /* BSpec: 11391 */ + *batch++ = i915_mmio_reg_offset(FF_SLICE_CHICKEN); + *batch++ = _MASKED_BIT_ENABLE(FF_SLICE_CHICKEN_CL_PROVOKING_VERTEX_FIX); + + /* BSpec: 11299 */ + *batch++ = i915_mmio_reg_offset(_3D_CHICKEN3); + *batch++ = _MASKED_BIT_ENABLE(_3D_CHICKEN_SF_PROVOKING_VERTEX_FIX); + *batch++ = MI_NOOP; /* WaClearSlmSpaceAtContextSwitch:kbl */ @@ -2641,10 +2651,8 @@ static int execlists_context_deferred_alloc(struct i915_gem_context *ctx, context_size += LRC_HEADER_PAGES * PAGE_SIZE; ctx_obj = i915_gem_object_create(ctx->i915, context_size); - if (IS_ERR(ctx_obj)) { - ret = PTR_ERR(ctx_obj); - goto error_deref_obj; - } + if (IS_ERR(ctx_obj)) + return PTR_ERR(ctx_obj); vma = i915_vma_instance(ctx_obj, &ctx->i915->ggtt.base, NULL); if (IS_ERR(vma)) { diff --git a/drivers/gpu/drm/i915/intel_lvds.c b/drivers/gpu/drm/i915/intel_lvds.c index d278f24ba6ae..48f618dc9abb 100644 --- a/drivers/gpu/drm/i915/intel_lvds.c +++ b/drivers/gpu/drm/i915/intel_lvds.c @@ -380,6 +380,8 @@ intel_lvds_mode_valid(struct drm_connector *connector, struct drm_display_mode *fixed_mode = intel_connector->panel.fixed_mode; int max_pixclk = to_i915(connector->dev)->max_dotclk_freq; + if (mode->flags & 
DRM_MODE_FLAG_DBLSCAN) + return MODE_NO_DBLESCAN; if (mode->hdisplay > fixed_mode->hdisplay) return MODE_PANEL; if (mode->vdisplay > fixed_mode->vdisplay) @@ -429,6 +431,9 @@ static bool intel_lvds_compute_config(struct intel_encoder *intel_encoder, intel_fixed_panel_mode(intel_connector->panel.fixed_mode, adjusted_mode); + if (adjusted_mode->flags & DRM_MODE_FLAG_DBLSCAN) + return false; + if (HAS_PCH_SPLIT(dev_priv)) { pipe_config->has_pch_encoder = true; diff --git a/drivers/gpu/drm/i915/intel_sdvo.c b/drivers/gpu/drm/i915/intel_sdvo.c index 25005023c243..26975df4e593 100644 --- a/drivers/gpu/drm/i915/intel_sdvo.c +++ b/drivers/gpu/drm/i915/intel_sdvo.c @@ -1160,6 +1160,9 @@ static bool intel_sdvo_compute_config(struct intel_encoder *encoder, adjusted_mode); } + if (adjusted_mode->flags & DRM_MODE_FLAG_DBLSCAN) + return false; + /* * Make the CRTC code factor in the SDVO pixel multiplier. The * SDVO device will factor out the multiplier during mode_set. @@ -1621,6 +1624,9 @@ intel_sdvo_mode_valid(struct drm_connector *connector, struct intel_sdvo *intel_sdvo = intel_attached_sdvo(connector); int max_dotclk = to_i915(connector->dev)->max_dotclk_freq; + if (mode->flags & DRM_MODE_FLAG_DBLSCAN) + return MODE_NO_DBLESCAN; + if (intel_sdvo->pixel_clock_min > mode->clock) return MODE_CLOCK_LOW; diff --git a/drivers/gpu/drm/i915/intel_tv.c b/drivers/gpu/drm/i915/intel_tv.c index 885fc3809f7f..b55b5c157e38 100644 --- a/drivers/gpu/drm/i915/intel_tv.c +++ b/drivers/gpu/drm/i915/intel_tv.c @@ -850,6 +850,9 @@ intel_tv_mode_valid(struct drm_connector *connector, const struct tv_mode *tv_mode = intel_tv_mode_find(connector->state); int max_dotclk = to_i915(connector->dev)->max_dotclk_freq; + if (mode->flags & DRM_MODE_FLAG_DBLSCAN) + return MODE_NO_DBLESCAN; + if (mode->clock > max_dotclk) return MODE_CLOCK_HIGH; @@ -877,16 +880,21 @@ intel_tv_compute_config(struct intel_encoder *encoder, struct drm_connector_state *conn_state) { const struct tv_mode *tv_mode = intel_tv_mode_find(conn_state); + struct drm_display_mode *adjusted_mode = + &pipe_config->base.adjusted_mode; if (!tv_mode) return false; - pipe_config->base.adjusted_mode.crtc_clock = tv_mode->clock; + if (adjusted_mode->flags & DRM_MODE_FLAG_DBLSCAN) + return false; + + adjusted_mode->crtc_clock = tv_mode->clock; DRM_DEBUG_KMS("forcing bpc to 8 for TV\n"); pipe_config->pipe_bpp = 8*3; /* TV has its own notion of sync and other mode flags, so clear them.
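The LVDS/SDVO/TV hunks above all add the same early DBLSCAN test in front of the existing checks; for fixed-panel outputs those existing checks are the usual filter that rejects any mode larger than the panel's native one. A minimal stand-alone version of that filter, with invented types:

#include <stdio.h>

enum mode_status { MODE_OK, MODE_PANEL, MODE_NO_DBLESCAN };

struct display_mode {
	int hdisplay, vdisplay;
	unsigned int dblscan;
};

static enum mode_status panel_mode_valid(const struct display_mode *mode,
					 const struct display_mode *fixed)
{
	if (mode->dblscan)
		return MODE_NO_DBLESCAN;
	/* a fixed panel cannot scan out anything larger than itself */
	if (mode->hdisplay > fixed->hdisplay ||
	    mode->vdisplay > fixed->vdisplay)
		return MODE_PANEL;
	return MODE_OK;
}

int main(void)
{
	struct display_mode fixed = { 1920, 1080, 0 };
	struct display_mode big = { 2560, 1440, 0 };

	printf("%s\n", panel_mode_valid(&big, &fixed) == MODE_PANEL ?
	       "rejected: larger than panel" : "ok");
	return 0;
}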
*/ - pipe_config->base.adjusted_mode.flags = 0; + adjusted_mode->flags = 0; /* * FIXME: We don't check whether the input mode is actually what we want diff --git a/drivers/gpu/drm/imx/imx-ldb.c b/drivers/gpu/drm/imx/imx-ldb.c index 56dd7a9a8e25..dd5312b02a8d 100644 --- a/drivers/gpu/drm/imx/imx-ldb.c +++ b/drivers/gpu/drm/imx/imx-ldb.c @@ -612,6 +612,9 @@ static int imx_ldb_bind(struct device *dev, struct device *master, void *data) return PTR_ERR(imx_ldb->regmap); } + /* disable LDB by resetting the control register to POR default */ + regmap_write(imx_ldb->regmap, IOMUXC_GPR2, 0); + imx_ldb->dev = dev; if (of_id) @@ -652,14 +655,14 @@ static int imx_ldb_bind(struct device *dev, struct device *master, void *data) if (ret || i < 0 || i > 1) return -EINVAL; + if (!of_device_is_available(child)) + continue; + if (dual && i > 0) { dev_warn(dev, "dual-channel mode, ignoring second output\n"); continue; } - if (!of_device_is_available(child)) - continue; - channel = &imx_ldb->channel[i]; channel->ldb = imx_ldb; channel->chno = i; diff --git a/drivers/gpu/drm/meson/meson_drv.c b/drivers/gpu/drm/meson/meson_drv.c index 32b1a6cdecfc..d3443125e661 100644 --- a/drivers/gpu/drm/meson/meson_drv.c +++ b/drivers/gpu/drm/meson/meson_drv.c @@ -197,8 +197,10 @@ static int meson_drv_bind_master(struct device *dev, bool has_components) priv->io_base = regs; res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "hhi"); - if (!res) - return -EINVAL; + if (!res) { + ret = -EINVAL; + goto free_drm; + } /* Simply ioremap since it may be a shared register zone */ regs = devm_ioremap(dev, res->start, resource_size(res)); if (!regs) { @@ -215,8 +217,10 @@ static int meson_drv_bind_master(struct device *dev, bool has_components) } res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "dmc"); - if (!res) - return -EINVAL; + if (!res) { + ret = -EINVAL; + goto free_drm; + } /* Simply ioremap since it may be a shared register zone */ regs = devm_ioremap(dev, res->start, resource_size(res)); if (!regs) { diff --git a/drivers/gpu/drm/nouveau/dispnv04/disp.c b/drivers/gpu/drm/nouveau/dispnv04/disp.c index 501d2d290e9c..70dce544984e 100644 --- a/drivers/gpu/drm/nouveau/dispnv04/disp.c +++ b/drivers/gpu/drm/nouveau/dispnv04/disp.c @@ -55,6 +55,9 @@ nv04_display_create(struct drm_device *dev) nouveau_display(dev)->init = nv04_display_init; nouveau_display(dev)->fini = nv04_display_fini; + /* Pre-nv50 doesn't support atomic, so don't expose the ioctls */ + dev->driver->driver_features &= ~DRIVER_ATOMIC; + nouveau_hw_save_vga_fonts(dev, 1); nv04_crtc_create(dev, 0); diff --git a/drivers/gpu/drm/nouveau/dispnv50/curs507a.c b/drivers/gpu/drm/nouveau/dispnv50/curs507a.c index 291c08117ab6..397143b639c6 100644 --- a/drivers/gpu/drm/nouveau/dispnv50/curs507a.c +++ b/drivers/gpu/drm/nouveau/dispnv50/curs507a.c @@ -132,7 +132,7 @@ curs507a_new_(const struct nv50_wimm_func *func, struct nouveau_drm *drm, nvif_object_map(&wndw->wimm.base.user, NULL, 0); wndw->immd = func; - wndw->ctxdma.parent = &disp->core->chan.base.user; + wndw->ctxdma.parent = NULL; return 0; } diff --git a/drivers/gpu/drm/nouveau/dispnv50/disp.c b/drivers/gpu/drm/nouveau/dispnv50/disp.c index b83465ae7c1b..9bae4db84cfb 100644 --- a/drivers/gpu/drm/nouveau/dispnv50/disp.c +++ b/drivers/gpu/drm/nouveau/dispnv50/disp.c @@ -1585,8 +1585,9 @@ nv50_pior_create(struct drm_connector *connector, struct dcb_output *dcbe) *****************************************************************************/ static void -nv50_disp_atomic_commit_core(struct nouveau_drm *drm, u32 
*interlock) +nv50_disp_atomic_commit_core(struct drm_atomic_state *state, u32 *interlock) { + struct nouveau_drm *drm = nouveau_drm(state->dev); struct nv50_disp *disp = nv50_disp(drm->dev); struct nv50_core *core = disp->core; struct nv50_mstm *mstm; @@ -1617,6 +1618,22 @@ nv50_disp_atomic_commit_core(struct nouveau_drm *drm, u32 *interlock) } } +static void +nv50_disp_atomic_commit_wndw(struct drm_atomic_state *state, u32 *interlock) +{ + struct drm_plane_state *new_plane_state; + struct drm_plane *plane; + int i; + + for_each_new_plane_in_state(state, plane, new_plane_state, i) { + struct nv50_wndw *wndw = nv50_wndw(plane); + if (interlock[wndw->interlock.type] & wndw->interlock.data) { + if (wndw->func->update) + wndw->func->update(wndw, interlock); + } + } +} + static void nv50_disp_atomic_commit_tail(struct drm_atomic_state *state) { @@ -1684,7 +1701,8 @@ nv50_disp_atomic_commit_tail(struct drm_atomic_state *state) help->disable(encoder); interlock[NV50_DISP_INTERLOCK_CORE] |= 1; if (outp->flush_disable) { - nv50_disp_atomic_commit_core(drm, interlock); + nv50_disp_atomic_commit_wndw(state, interlock); + nv50_disp_atomic_commit_core(state, interlock); memset(interlock, 0x00, sizeof(interlock)); } } @@ -1693,15 +1711,8 @@ nv50_disp_atomic_commit_tail(struct drm_atomic_state *state) /* Flush disable. */ if (interlock[NV50_DISP_INTERLOCK_CORE]) { if (atom->flush_disable) { - for_each_new_plane_in_state(state, plane, new_plane_state, i) { - struct nv50_wndw *wndw = nv50_wndw(plane); - if (interlock[wndw->interlock.type] & wndw->interlock.data) { - if (wndw->func->update) - wndw->func->update(wndw, interlock); - } - } - - nv50_disp_atomic_commit_core(drm, interlock); + nv50_disp_atomic_commit_wndw(state, interlock); + nv50_disp_atomic_commit_core(state, interlock); memset(interlock, 0x00, sizeof(interlock)); } } @@ -1762,18 +1773,14 @@ nv50_disp_atomic_commit_tail(struct drm_atomic_state *state) } /* Flush update. 
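nv50_disp_atomic_commit_wndw() above folds three copies of the same plane-flush loop into one helper keyed on each window's interlock type/data pair. A reduced model of that dispatch; the structures and the interlock layout are invented to keep it self-contained:

#include <stdio.h>

#define NTYPES 4

struct wndw {
	int type;            /* which interlock word gates this window */
	unsigned int data;   /* bit(s) within that word */
	const char *name;
};

/* flush every window whose interlock bits were armed this commit */
static void commit_wndw(const struct wndw *w, int n,
			const unsigned int interlock[NTYPES])
{
	int i;

	for (i = 0; i < n; i++)
		if (interlock[w[i].type] & w[i].data)
			printf("update %s\n", w[i].name);
}

int main(void)
{
	const struct wndw windows[] = {
		{ 1, 0x1, "base-head0" },
		{ 2, 0x1, "ovly-head0" },
		{ 1, 0x2, "base-head1" },
	};
	unsigned int interlock[NTYPES] = { 0, 0x1, 0x1, 0 };

	commit_wndw(windows, 3, interlock); /* flushes the head0 windows */
	return 0;
}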
*/ - for_each_new_plane_in_state(state, plane, new_plane_state, i) { - struct nv50_wndw *wndw = nv50_wndw(plane); - if (interlock[wndw->interlock.type] & wndw->interlock.data) { - if (wndw->func->update) - wndw->func->update(wndw, interlock); - } - } + nv50_disp_atomic_commit_wndw(state, interlock); if (interlock[NV50_DISP_INTERLOCK_CORE]) { if (interlock[NV50_DISP_INTERLOCK_BASE] || + interlock[NV50_DISP_INTERLOCK_OVLY] || + interlock[NV50_DISP_INTERLOCK_WNDW] || !atom->state.legacy_cursor_update) - nv50_disp_atomic_commit_core(drm, interlock); + nv50_disp_atomic_commit_core(state, interlock); else disp->core->func->update(disp->core, interlock, false); } @@ -1871,7 +1878,7 @@ nv50_disp_atomic_commit(struct drm_device *dev, nv50_disp_atomic_commit_tail(state); drm_for_each_crtc(crtc, dev) { - if (crtc->state->enable) { + if (crtc->state->active) { if (!drm->have_disp_power_ref) { drm->have_disp_power_ref = true; return 0; @@ -2119,10 +2126,6 @@ nv50_display_destroy(struct drm_device *dev) kfree(disp); } -MODULE_PARM_DESC(atomic, "Expose atomic ioctl (default: disabled)"); -static int nouveau_atomic = 0; -module_param_named(atomic, nouveau_atomic, int, 0400); - int nv50_display_create(struct drm_device *dev) { @@ -2147,8 +2150,6 @@ nv50_display_create(struct drm_device *dev) disp->disp = &nouveau_display(dev)->disp; dev->mode_config.funcs = &nv50_disp_func; dev->driver->driver_features |= DRIVER_PREFER_XBGR_30BPP; - if (nouveau_atomic) - dev->driver->driver_features |= DRIVER_ATOMIC; /* small shared memory area we use for notifiers and semaphores */ ret = nouveau_bo_new(&drm->client, 4096, 0x1000, TTM_PL_FLAG_VRAM, diff --git a/drivers/gpu/drm/nouveau/dispnv50/wndw.c b/drivers/gpu/drm/nouveau/dispnv50/wndw.c index 224963b533a6..c5a9bc1af5af 100644 --- a/drivers/gpu/drm/nouveau/dispnv50/wndw.c +++ b/drivers/gpu/drm/nouveau/dispnv50/wndw.c @@ -444,14 +444,17 @@ nv50_wndw_prepare_fb(struct drm_plane *plane, struct drm_plane_state *state) if (ret) return ret; - ctxdma = nv50_wndw_ctxdma_new(wndw, fb); - if (IS_ERR(ctxdma)) { - nouveau_bo_unpin(fb->nvbo); - return PTR_ERR(ctxdma); + if (wndw->ctxdma.parent) { + ctxdma = nv50_wndw_ctxdma_new(wndw, fb); + if (IS_ERR(ctxdma)) { + nouveau_bo_unpin(fb->nvbo); + return PTR_ERR(ctxdma); + } + + asyw->image.handle[0] = ctxdma->object.handle; } asyw->state.fence = reservation_object_get_excl_rcu(fb->nvbo->bo.resv); - asyw->image.handle[0] = ctxdma->object.handle; asyw->image.offset[0] = fb->nvbo->bo.offset; if (wndw->func->prepare) { diff --git a/drivers/gpu/drm/nouveau/nouveau_backlight.c b/drivers/gpu/drm/nouveau/nouveau_backlight.c index debbbf0fd4bd..408b955e5c39 100644 --- a/drivers/gpu/drm/nouveau/nouveau_backlight.c +++ b/drivers/gpu/drm/nouveau/nouveau_backlight.c @@ -267,6 +267,7 @@ nouveau_backlight_init(struct drm_device *dev) struct nouveau_drm *drm = nouveau_drm(dev); struct nvif_device *device = &drm->client.device; struct drm_connector *connector; + struct drm_connector_list_iter conn_iter; INIT_LIST_HEAD(&drm->bl_connectors); @@ -275,7 +276,8 @@ nouveau_backlight_init(struct drm_device *dev) return 0; } - list_for_each_entry(connector, &dev->mode_config.connector_list, head) { + drm_connector_list_iter_begin(dev, &conn_iter); + drm_for_each_connector_iter(connector, &conn_iter) { if (connector->connector_type != DRM_MODE_CONNECTOR_LVDS && connector->connector_type != DRM_MODE_CONNECTOR_eDP) continue; @@ -292,7 +294,7 @@ nouveau_backlight_init(struct drm_device *dev) break; } } - + drm_connector_list_iter_end(&conn_iter); return 0; } diff 
diff --git a/drivers/gpu/drm/nouveau/nouveau_connector.c b/drivers/gpu/drm/nouveau/nouveau_connector.c
index 7b557c354307..af68eae4c626 100644
--- a/drivers/gpu/drm/nouveau/nouveau_connector.c
+++ b/drivers/gpu/drm/nouveau/nouveau_connector.c
@@ -1208,14 +1208,19 @@ nouveau_connector_create(struct drm_device *dev, int index)
 	struct nouveau_display *disp = nouveau_display(dev);
 	struct nouveau_connector *nv_connector = NULL;
 	struct drm_connector *connector;
+	struct drm_connector_list_iter conn_iter;
 	int type, ret = 0;
 	bool dummy;
 
-	list_for_each_entry(connector, &dev->mode_config.connector_list, head) {
+	drm_connector_list_iter_begin(dev, &conn_iter);
+	nouveau_for_each_non_mst_connector_iter(connector, &conn_iter) {
 		nv_connector = nouveau_connector(connector);
-		if (nv_connector->index == index)
+		if (nv_connector->index == index) {
+			drm_connector_list_iter_end(&conn_iter);
 			return connector;
+		}
 	}
+	drm_connector_list_iter_end(&conn_iter);
 
 	nv_connector = kzalloc(sizeof(*nv_connector), GFP_KERNEL);
 	if (!nv_connector)
diff --git a/drivers/gpu/drm/nouveau/nouveau_connector.h b/drivers/gpu/drm/nouveau/nouveau_connector.h
index a4d1a059bd3d..dc7454e7f19a 100644
--- a/drivers/gpu/drm/nouveau/nouveau_connector.h
+++ b/drivers/gpu/drm/nouveau/nouveau_connector.h
@@ -33,6 +33,7 @@
 #include
 #include
 #include "nouveau_crtc.h"
+#include "nouveau_encoder.h"
 
 struct nvkm_i2c_port;
 
@@ -60,19 +61,46 @@ static inline struct nouveau_connector *nouveau_connector(
 	return container_of(con, struct nouveau_connector, base);
 }
 
+static inline bool
+nouveau_connector_is_mst(struct drm_connector *connector)
+{
+	const struct nouveau_encoder *nv_encoder;
+	const struct drm_encoder *encoder;
+
+	if (connector->connector_type != DRM_MODE_CONNECTOR_DisplayPort)
+		return false;
+
+	nv_encoder = find_encoder(connector, DCB_OUTPUT_ANY);
+	if (!nv_encoder)
+		return false;
+
+	encoder = &nv_encoder->base.base;
+	return encoder->encoder_type == DRM_MODE_ENCODER_DPMST;
+}
+
+#define nouveau_for_each_non_mst_connector_iter(connector, iter) \
+	drm_for_each_connector_iter(connector, iter) \
+		for_each_if(!nouveau_connector_is_mst(connector))
+
 static inline struct nouveau_connector *
 nouveau_crtc_connector_get(struct nouveau_crtc *nv_crtc)
 {
 	struct drm_device *dev = nv_crtc->base.dev;
 	struct drm_connector *connector;
+	struct drm_connector_list_iter conn_iter;
+	struct nouveau_connector *nv_connector = NULL;
 	struct drm_crtc *crtc = to_drm_crtc(nv_crtc);
 
-	list_for_each_entry(connector, &dev->mode_config.connector_list, head) {
-		if (connector->encoder && connector->encoder->crtc == crtc)
-			return nouveau_connector(connector);
+	drm_connector_list_iter_begin(dev, &conn_iter);
+	nouveau_for_each_non_mst_connector_iter(connector, &conn_iter) {
+		if (connector->encoder && connector->encoder->crtc == crtc) {
+			nv_connector = nouveau_connector(connector);
+			break;
+		}
 	}
+	drm_connector_list_iter_end(&conn_iter);
 
-	return NULL;
+	return nv_connector;
 }
 
 struct drm_connector *
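The nouveau_connector_create() hunk shows the one subtlety of the iterator API: a "found it" return from inside the loop must still reach drm_connector_list_iter_end(), because _begin() has already taken the iterator's lock and reference. A sketch of that early-exit pattern, with a hypothetical index_of() callback standing in for nouveau's nv_connector->index lookup:

/*
 * Sketch of the early-return pitfall fixed above: every path out of
 * the function, including the match case, must release the iterator.
 * index_of() is a placeholder, not a real DRM or nouveau helper.
 */
#include <drm/drm_connector.h>
#include <drm/drm_device.h>

static struct drm_connector *
example_find_connector(struct drm_device *dev, int wanted_index,
		       int (*index_of)(struct drm_connector *))
{
	struct drm_connector *connector;
	struct drm_connector_list_iter conn_iter;

	drm_connector_list_iter_begin(dev, &conn_iter);
	drm_for_each_connector_iter(connector, &conn_iter) {
		if (index_of(connector) == wanted_index) {
			drm_connector_list_iter_end(&conn_iter);
			return connector;	/* iterator already released */
		}
	}
	drm_connector_list_iter_end(&conn_iter);
	return NULL;
}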
diff --git a/drivers/gpu/drm/nouveau/nouveau_display.c b/drivers/gpu/drm/nouveau/nouveau_display.c
index 774b429142bc..ec7861457b84 100644
--- a/drivers/gpu/drm/nouveau/nouveau_display.c
+++ b/drivers/gpu/drm/nouveau/nouveau_display.c
@@ -404,6 +404,7 @@ nouveau_display_init(struct drm_device *dev)
 	struct nouveau_display *disp = nouveau_display(dev);
 	struct nouveau_drm *drm = nouveau_drm(dev);
 	struct drm_connector *connector;
+	struct drm_connector_list_iter conn_iter;
 	int ret;
 
 	ret = disp->init(dev);
@@ -411,10 +412,12 @@
 		return ret;
 
 	/* enable hotplug interrupts */
-	list_for_each_entry(connector, &dev->mode_config.connector_list, head) {
+	drm_connector_list_iter_begin(dev, &conn_iter);
+	nouveau_for_each_non_mst_connector_iter(connector, &conn_iter) {
 		struct nouveau_connector *conn = nouveau_connector(connector);
 		nvif_notify_get(&conn->hpd);
 	}
+	drm_connector_list_iter_end(&conn_iter);
 
 	/* enable flip completion events */
 	nvif_notify_get(&drm->flip);
@@ -427,6 +430,7 @@ nouveau_display_fini(struct drm_device *dev, bool suspend)
 	struct nouveau_display *disp = nouveau_display(dev);
 	struct nouveau_drm *drm = nouveau_drm(dev);
 	struct drm_connector *connector;
+	struct drm_connector_list_iter conn_iter;
 
 	if (!suspend) {
 		if (drm_drv_uses_atomic_modeset(dev))
@@ -439,10 +443,12 @@ nouveau_display_fini(struct drm_device *dev, bool suspend)
 	nvif_notify_put(&drm->flip);
 
 	/* disable hotplug interrupts */
-	list_for_each_entry(connector, &dev->mode_config.connector_list, head) {
+	drm_connector_list_iter_begin(dev, &conn_iter);
+	nouveau_for_each_non_mst_connector_iter(connector, &conn_iter) {
 		struct nouveau_connector *conn = nouveau_connector(connector);
 		nvif_notify_put(&conn->hpd);
 	}
+	drm_connector_list_iter_end(&conn_iter);
 
 	drm_kms_helper_poll_disable(dev);
 	disp->fini(dev);
diff --git a/drivers/gpu/drm/nouveau/nouveau_drm.c b/drivers/gpu/drm/nouveau/nouveau_drm.c
index 775443c9af94..f5d3158f0378 100644
--- a/drivers/gpu/drm/nouveau/nouveau_drm.c
+++ b/drivers/gpu/drm/nouveau/nouveau_drm.c
@@ -81,6 +81,10 @@ MODULE_PARM_DESC(modeset, "enable driver (default: auto, "
 int nouveau_modeset = -1;
 module_param_named(modeset, nouveau_modeset, int, 0400);
 
+MODULE_PARM_DESC(atomic, "Expose atomic ioctl (default: disabled)");
+static int nouveau_atomic = 0;
+module_param_named(atomic, nouveau_atomic, int, 0400);
+
 MODULE_PARM_DESC(runpm, "disable (0), force enable (1), optimus only default (-1)");
 static int nouveau_runtime_pm = -1;
 module_param_named(runpm, nouveau_runtime_pm, int, 0400);
@@ -509,6 +513,9 @@ static int nouveau_drm_probe(struct pci_dev *pdev,
 
 	pci_set_master(pdev);
 
+	if (nouveau_atomic)
+		driver_pci.driver_features |= DRIVER_ATOMIC;
+
 	ret = drm_get_pci_dev(pdev, pent, &driver_pci);
 	if (ret) {
 		nvkm_device_del(&device);
@@ -874,22 +881,11 @@ nouveau_pmops_runtime_resume(struct device *dev)
 static int
 nouveau_pmops_runtime_idle(struct device *dev)
 {
-	struct pci_dev *pdev = to_pci_dev(dev);
-	struct drm_device *drm_dev = pci_get_drvdata(pdev);
-	struct nouveau_drm *drm = nouveau_drm(drm_dev);
-	struct drm_crtc *crtc;
-
 	if (!nouveau_pmops_runtime()) {
 		pm_runtime_forbid(dev);
 		return -EBUSY;
 	}
 
-	list_for_each_entry(crtc, &drm->dev->mode_config.crtc_list, head) {
-		if (crtc->enabled) {
-			DRM_DEBUG_DRIVER("failing to power off - crtc active\n");
-			return -EBUSY;
-		}
-	}
 	pm_runtime_mark_last_busy(dev);
 	pm_runtime_autosuspend(dev);
 	/* we don't want the main rpm_idle to call suspend - we want to autosuspend */
diff --git a/drivers/gpu/drm/nouveau/nouveau_gem.c b/drivers/gpu/drm/nouveau/nouveau_gem.c
index 300daee74209..e6ccafcb9c41 100644
--- a/drivers/gpu/drm/nouveau/nouveau_gem.c
+++ b/drivers/gpu/drm/nouveau/nouveau_gem.c
@@ -616,7 +616,7 @@ nouveau_gem_pushbuf_reloc_apply(struct nouveau_cli *cli,
 		struct nouveau_bo *nvbo;
 		uint32_t data;
 
-		if (unlikely(r->bo_index > req->nr_buffers)) {
+		if (unlikely(r->bo_index >= req->nr_buffers)) {
 			NV_PRINTK(err, cli, "reloc bo index invalid\n");
 			ret = -EINVAL;
 			break;
@@ -626,7 +626,7 @@ nouveau_gem_pushbuf_reloc_apply(struct nouveau_cli *cli,
 		if (b->presumed.valid)
 			continue;
 
-		if (unlikely(r->reloc_bo_index > req->nr_buffers)) {
+		if (unlikely(r->reloc_bo_index >= req->nr_buffers)) {
 			NV_PRINTK(err, cli, "reloc container bo index invalid\n");
 			ret = -EINVAL;
 			break;
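The nouveau_gem.c hunks are a plain off-by-one fix: an array of req->nr_buffers entries has valid indices 0 through nr_buffers - 1, so the old ">" test accepted index == nr_buffers and read one element past the end of the buffer array. A sketch of the corrected check (an illustrative helper, not a kernel function):

/*
 * Sketch of the bounds check tightened above.  With nr_buffers
 * entries, the last valid index is nr_buffers - 1, so validity is
 * "index < nr_buffers", equivalently rejecting index >= nr_buffers.
 */
static int index_is_valid(unsigned int index, unsigned int nr_buffers)
{
	return index < nr_buffers;
}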
diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/fb/base.c b/drivers/gpu/drm/nouveau/nvkm/subdev/fb/base.c
index 73b5d46104bd..434d2fc5bb1c 100644
--- a/drivers/gpu/drm/nouveau/nvkm/subdev/fb/base.c
+++ b/drivers/gpu/drm/nouveau/nvkm/subdev/fb/base.c
@@ -140,6 +140,9 @@ nvkm_fb_init(struct nvkm_subdev *subdev)
 	if (fb->func->init)
 		fb->func->init(fb);
 
+	if (fb->func->init_remapper)
+		fb->func->init_remapper(fb);
+
 	if (fb->func->init_page) {
 		ret = fb->func->init_page(fb);
 		if (WARN_ON(ret))
diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/fb/gp100.c b/drivers/gpu/drm/nouveau/nvkm/subdev/fb/gp100.c
index dffe1f5e1071..8205ce436b3e 100644
--- a/drivers/gpu/drm/nouveau/nvkm/subdev/fb/gp100.c
+++ b/drivers/gpu/drm/nouveau/nvkm/subdev/fb/gp100.c
@@ -36,6 +36,14 @@ gp100_fb_init_unkn(struct nvkm_fb *base)
 	nvkm_wr32(device, 0x1faccc, nvkm_rd32(device, 0x100ccc));
 }
 
+void
+gp100_fb_init_remapper(struct nvkm_fb *fb)
+{
+	struct nvkm_device *device = fb->subdev.device;
+	/* Disable address remapper. */
+	nvkm_mask(device, 0x100c14, 0x00040000, 0x00000000);
+}
+
 void
 gp100_fb_init(struct nvkm_fb *base)
 {
@@ -56,6 +64,7 @@ gp100_fb = {
 	.dtor = gf100_fb_dtor,
 	.oneinit = gf100_fb_oneinit,
 	.init = gp100_fb_init,
+	.init_remapper = gp100_fb_init_remapper,
 	.init_page = gm200_fb_init_page,
 	.init_unkn = gp100_fb_init_unkn,
 	.ram_new = gp100_ram_new,
diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/fb/gp102.c b/drivers/gpu/drm/nouveau/nvkm/subdev/fb/gp102.c
index b84b9861ef26..b4d74e815674 100644
--- a/drivers/gpu/drm/nouveau/nvkm/subdev/fb/gp102.c
+++ b/drivers/gpu/drm/nouveau/nvkm/subdev/fb/gp102.c
@@ -31,6 +31,7 @@ gp102_fb = {
 	.dtor = gf100_fb_dtor,
 	.oneinit = gf100_fb_oneinit,
 	.init = gp100_fb_init,
+	.init_remapper = gp100_fb_init_remapper,
 	.init_page = gm200_fb_init_page,
 	.ram_new = gp100_ram_new,
 };
diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/fb/priv.h b/drivers/gpu/drm/nouveau/nvkm/subdev/fb/priv.h
index 2857f31466bf..1e4ad61c19e1 100644
--- a/drivers/gpu/drm/nouveau/nvkm/subdev/fb/priv.h
+++ b/drivers/gpu/drm/nouveau/nvkm/subdev/fb/priv.h
@@ -11,6 +11,7 @@ struct nvkm_fb_func {
 	u32 (*tags)(struct nvkm_fb *);
 	int (*oneinit)(struct nvkm_fb *);
 	void (*init)(struct nvkm_fb *);
+	void (*init_remapper)(struct nvkm_fb *);
 	int (*init_page)(struct nvkm_fb *);
 	void (*init_unkn)(struct nvkm_fb *);
 	void (*intr)(struct nvkm_fb *);
@@ -69,5 +70,6 @@ int gf100_fb_init_page(struct nvkm_fb *);
 int gm200_fb_init_page(struct nvkm_fb *);
 
+void gp100_fb_init_remapper(struct nvkm_fb *);
 void gp100_fb_init_unkn(struct nvkm_fb *);
 #endif
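The fb hunks add init_remapper as an optional per-chipset hook: generic nvkm_fb_init() calls it only when the chipset's function table fills the slot in, so chips without a remapper simply leave it NULL. A sketch of this optional-hook pattern, with illustrative names rather than the real nvkm structures:

/*
 * Sketch of the optional-hook pattern used for ->init_remapper above:
 * the generic init path invokes a callback only if the per-chip ops
 * table provides one.  All names here are placeholders.
 */
struct example_fb;

struct example_fb_func {
	void (*init)(struct example_fb *);
	void (*init_remapper)(struct example_fb *);	/* optional, may be NULL */
};

struct example_fb {
	const struct example_fb_func *func;
};

static void example_fb_init(struct example_fb *fb)
{
	if (fb->func->init)
		fb->func->init(fb);
	if (fb->func->init_remapper)	/* only chips that set the slot */
		fb->func->init_remapper(fb);
}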
diff --git a/drivers/gpu/drm/qxl/qxl_display.c b/drivers/gpu/drm/qxl/qxl_display.c
index b8cda9449241..768207fbbae3 100644
--- a/drivers/gpu/drm/qxl/qxl_display.c
+++ b/drivers/gpu/drm/qxl/qxl_display.c
@@ -623,7 +623,7 @@ static void qxl_cursor_atomic_update(struct drm_plane *plane,
 	struct qxl_cursor_cmd *cmd;
 	struct qxl_cursor *cursor;
 	struct drm_gem_object *obj;
-	struct qxl_bo *cursor_bo = NULL, *user_bo = NULL;
+	struct qxl_bo *cursor_bo = NULL, *user_bo = NULL, *old_cursor_bo = NULL;
 	int ret;
 	void *user_ptr;
 	int size = 64*64*4;
@@ -677,7 +677,7 @@ static void qxl_cursor_atomic_update(struct drm_plane *plane,
 			     cursor_bo, 0);
 		cmd->type = QXL_CURSOR_SET;
 
-		qxl_bo_unref(&qcrtc->cursor_bo);
+		old_cursor_bo = qcrtc->cursor_bo;
 		qcrtc->cursor_bo = cursor_bo;
 		cursor_bo = NULL;
 	} else {
@@ -697,6 +697,9 @@ static void qxl_cursor_atomic_update(struct drm_plane *plane,
 	qxl_push_cursor_ring_release(qdev, release, QXL_CMD_CURSOR, false);
 	qxl_release_fence_buffer_objects(release);
 
+	if (old_cursor_bo)
+		qxl_bo_unref(&old_cursor_bo);
+
 	qxl_bo_unref(&cursor_bo);
 
 	return;
diff --git a/drivers/gpu/drm/scheduler/gpu_scheduler.c b/drivers/gpu/drm/scheduler/gpu_scheduler.c
index df1578d6f42e..44d480768dfe 100644
--- a/drivers/gpu/drm/scheduler/gpu_scheduler.c
+++ b/drivers/gpu/drm/scheduler/gpu_scheduler.c
@@ -349,8 +349,13 @@ static bool drm_sched_entity_add_dependency_cb(struct drm_sched_entity *entity)
 	struct dma_fence * fence = entity->dependency;
 	struct drm_sched_fence *s_fence;
 
-	if (fence->context == entity->fence_context) {
-		/* We can ignore fences from ourself */
+	if (fence->context == entity->fence_context ||
+	    fence->context == entity->fence_context + 1) {
+		/*
+		 * Fence is a scheduled/finished fence from a job
+		 * which belongs to the same entity, we can ignore
+		 * fences from ourself
+		 */
 		dma_fence_put(entity->dependency);
 		return false;
 	}
diff --git a/drivers/gpu/drm/shmobile/Kconfig b/drivers/gpu/drm/shmobile/Kconfig
index c987c826daa3..0426d66660d1 100644
--- a/drivers/gpu/drm/shmobile/Kconfig
+++ b/drivers/gpu/drm/shmobile/Kconfig
@@ -2,7 +2,6 @@ config DRM_SHMOBILE
 	tristate "DRM Support for SH Mobile"
 	depends on DRM && ARM
 	depends on ARCH_SHMOBILE || COMPILE_TEST
-	depends on FB_SH_MOBILE_MERAM || !FB_SH_MOBILE_MERAM
 	select BACKLIGHT_CLASS_DEVICE
 	select BACKLIGHT_LCD_SUPPORT
 	select DRM_KMS_HELPER
diff --git a/drivers/gpu/drm/shmobile/shmob_drm_crtc.c b/drivers/gpu/drm/shmobile/shmob_drm_crtc.c
index e7738939a86d..40df8887fc17 100644
--- a/drivers/gpu/drm/shmobile/shmob_drm_crtc.c
+++ b/drivers/gpu/drm/shmobile/shmob_drm_crtc.c
@@ -21,8 +21,6 @@
 #include
 #include
 
-#include
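The qxl_display.c change above fixes an ordering bug: the old cursor BO was unreferenced while the hardware cursor command could still point at it. The fix keeps the old reference alive until the new cursor command has been pushed and fenced, and only then drops it. A sketch of the resulting ordering, with placeholder types and helpers standing in for the qxl ones:

/*
 * Sketch of the deferred-unref ordering established above.  The old
 * buffer's reference is held across the command submission, so the
 * device can never observe a pointer to freed memory.  example_bo,
 * example_bo_unref() and example_push_command() are placeholders.
 */
struct example_bo;

extern void example_bo_unref(struct example_bo **bo);
extern void example_push_command(struct example_bo *new_bo);

static void example_swap_cursor(struct example_bo **current_bo,
				struct example_bo *new_bo)
{
	struct example_bo *old_bo = *current_bo;	/* keep reference alive */

	*current_bo = new_bo;
	example_push_command(new_bo);	/* device now points at new_bo */

	if (old_bo)
		example_bo_unref(&old_bo);	/* safe: command already pushed */
}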

 
 All this replication of the grace period numbers can only cause
 massive confusion.
-	Why not just keep a global pair of counters and be done with it???
+	Why not just keep a global sequence number and be done with it???
Answer:
-	Because if there was only a single global pair of grace-period
+	Because if there was only a single global sequence
 	numbers, there would need to be a single global lock to allow
-	safely accessing and updating them.
+	safely accessing and updating it.
 	And if we are not going to have a single global lock, we need
 	to carefully manage the numbers on a per-node basis.
 	Recall from the answer to a previous Quick Quiz that the consequences
@@ -1091,8 +1099,8 @@
 CPU has not yet passed through a quiescent state,
 while the ->core_needs_qs flag indicates that the
 RCU core needs a quiescent state from the corresponding CPU.
 The ->gpwrap field indicates that the corresponding
-CPU has remained idle for so long that the completed
-and gpnum counters are in danger of overflow, which
+CPU has remained idle for so long that the
+gp_seq counter is in danger of overflow, which
 will cause the CPU to disregard the values of its counters on
 its next exit from idle.
 Finally, the rcu_qs_ctr_snap field is used to detect
@@ -1130,10 +1138,10 @@
 The CPU advances the callbacks in its rcu_data structure
 whenever it notices that another RCU grace period has completed.
 The CPU detects the completion of an RCU grace period by noticing
 that the value of its rcu_data structure's
-->completed field differs from that of its leaf
+->gp_seq field differs from that of its leaf
 rcu_node structure.
 Recall that each rcu_node structure's
-->completed field is updated at the end of each
+->gp_seq field is updated at the beginnings and ends of each
 grace period.
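A compact way to see the gp_seq layout this documentation describes is to write out the decode: the low two bits carry the grace-period state and the rest of the word counts grace periods. The helpers below are an illustrative sketch of that layout, not the kernel's own accessors:

/*
 * Sketch of decoding a gp_seq-style combined counter, assuming the
 * layout described above: bottom two bits hold the grace-period state
 * (0 == idle/not yet started, 1 == in progress), the remaining bits
 * count grace periods.  Names and values are illustrative.
 */
#define EX_SEQ_STATE_BITS	2
#define EX_SEQ_STATE_MASK	((1UL << EX_SEQ_STATE_BITS) - 1)

static inline unsigned long ex_seq_count(unsigned long seq)
{
	return seq >> EX_SEQ_STATE_BITS;	/* completed grace periods */
}

static inline unsigned long ex_seq_state(unsigned long seq)
{
	return seq & EX_SEQ_STATE_MASK;		/* 0 == idle, per the text */
}

With this layout, a single word answers both "which grace period?" and "is one in progress?", which is why the old gpnum/completed pair collapses into one field.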

diff --git a/Documentation/RCU/Design/Memory-Ordering/Tree-RCU-Memory-Ordering.html b/Documentation/RCU/Design/Memory-Ordering/Tree-RCU-Memory-Ordering.html
index 8651b0b4fd79..a346ce0116eb 100644
--- a/Documentation/RCU/Design/Memory-Ordering/Tree-RCU-Memory-Ordering.html
+++ b/Documentation/RCU/Design/Memory-Ordering/Tree-RCU-Memory-Ordering.html
@@ -357,7 +357,7 @@ parts, starting in this section with the various phases of
 grace-period initialization.

 The first ordering-related grace-period initialization action is to
-increment the rcu_state structure's ->gpnum
+advance the rcu_state structure's ->gp_seq
 grace-period-number counter, as shown below:

TreeRCU-gp-init-1.svg
@@ -388,7 +388,7 @@ its last CPU and if the next rcu_node structure
 has no online CPUs).

 The final rcu_gp_init() pass through the rcu_node
 tree traverses breadth-first, setting each rcu_node structure's
-->gpnum field to the newly incremented value from the
+->gp_seq field to the newly advanced value from the
 rcu_state structure, as shown in the following diagram.

TreeRCU-gp-init-1.svg
@@ -398,9 +398,9 @@ tree traverses breadth-first, setting each rcu_node structure's
 to notice that a new grace period has started,
 as described in the next section.
 But because the grace-period kthread started the grace period at the
-root (with the increment of the rcu_state structure's
-->gpnum field) before setting each leaf rcu_node
-structure's ->gpnum field, each CPU's observation of
+root (with the advancing of the rcu_state structure's
+->gp_seq field) before setting each leaf rcu_node
+structure's ->gp_seq field, each CPU's observation of
 the start of the grace period will happen after the actual start
 of the grace period.
@@ -466,7 +466,7 @@ section that the grace period must wait on.
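The ordering argument in this hunk can be sketched in a few lines: the root copy of ->gp_seq is advanced first, then published to the leaves, and each CPU reads only its own leaf, so a CPU that sees the new value knows the actual start already happened. The release/acquire pairing below makes that ordering explicit; the real kernel code obtains the same guarantee from the rcu_node ->lock rather than from these primitives, and all names here are illustrative:

/*
 * Illustrative sketch only (not the kernel's rcu_gp_init()): advance
 * the root's sequence number, then publish it to each leaf.  A reader
 * that acquires the new leaf value is ordered after the root update,
 * i.e. after the actual start of the grace period.
 */
struct ex_node {
	unsigned long gp_seq;
};

static void ex_gp_start(struct ex_node *root, struct ex_node **leaves, int n)
{
	int i;

	/* actual start: bump the count held in the upper bits (2 state bits) */
	WRITE_ONCE(root->gp_seq, root->gp_seq + 4);
	for (i = 0; i < n; i++)		/* breadth-first publication */
		smp_store_release(&leaves[i]->gp_seq, root->gp_seq);
}

static unsigned long ex_cpu_observe(struct ex_node *my_leaf)
{
	/* pairs with the release above: seeing the new value implies the
	 * root update is also visible */
	return smp_load_acquire(&my_leaf->gp_seq);
}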

 But a RCU read-side critical section might have started after the
 beginning of the grace period
-(the ->gpnum++ from earlier), so why should
+(the advancing of ->gp_seq from earlier), so why should
 the grace period wait on such a critical section?
Answer: