linux/arch/mips/include/asm
Peter Zijlstra 42344113ba mips/atomic: Fix smp_mb__{before,after}_atomic()
Recent probing at the Linux Kernel Memory Model uncovered a
'surprise'. Strongly ordered architectures where the atomic RmW
primitive implies full memory ordering and
smp_mb__{before,after}_atomic() are a simple barrier() (such as MIPS
without WEAK_REORDERING_BEYOND_LLSC) fail for:

	*x = 1;
	atomic_inc(u);
	smp_mb__after_atomic();
	r0 = *y;

Because, while the atomic_inc() implies memory ordering, it
(surprisingly) does not provide a compiler barrier. This then allows
the compiler to re-order like so:

	atomic_inc(u);
	*x = 1;
	smp_mb__after_atomic();
	r0 = *y;

Which the CPU is then allowed to re-order (under TSO rules) like:

	atomic_inc(u);
	r0 = *y;
	*x = 1;

And this very much was not intended. Therefore strengthen the atomic
RmW ops to include a compiler barrier.

Reported-by: Andrea Parri <andrea.parri@amarulasolutions.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Paul Burton <paul.burton@mips.com>
2019-08-31 11:06:02 +01:00