From: Lad Prabhakar <prabhakar.mahadev-lad.rj@xxxxxxxxxxxxxx>

Ensure TX descriptor type fields are written in a safe order so the DMA
engine does not begin processing a chain before all descriptors are
fully initialised.

For multi-descriptor transmissions the driver writes DT_FEND into the
last descriptor and DT_FSTART into the first. The DMA engine starts
processing when it sees DT_FSTART. If the compiler or CPU reorders the
writes and publishes DT_FSTART before DT_FEND, the DMA engine can start
early and process an incomplete chain, leading to corrupted
transmissions or DMA errors.

Fix this by writing DT_FEND before the dma_wmb() barrier, executing
dma_wmb() immediately before DT_FSTART (or DT_FSINGLE in the
single-descriptor case), and then adding a wmb() after the type updates
to ensure CPU-side ordering before ringing the hardware doorbell.

On an RZ/G2L platform running an RT kernel, this reordering hazard was
observed as TX stalls and timeouts:

[ 372.968431] NETDEV WATCHDOG: end0 (ravb): transmit queue 0 timed out
[ 372.968494] WARNING: CPU: 0 PID: 10 at net/sched/sch_generic.c:467 dev_watchdog+0x4a4/0x4ac
[ 373.969291] ravb 11c20000.ethernet end0: transmit timed out, status 00000000, resetting...

This change enforces the required ordering and prevents the DMA engine
from observing DT_FSTART before the rest of the descriptor chain is
valid.

Fixes: 2f45d1902acf ("ravb: minimize TX data copying")
Cc: stable@xxxxxxxxxxxxxxx
Co-developed-by: Fabrizio Castro <fabrizio.castro.jz@xxxxxxxxxxx>
Signed-off-by: Fabrizio Castro <fabrizio.castro.jz@xxxxxxxxxxx>
Signed-off-by: Lad Prabhakar <prabhakar.mahadev-lad.rj@xxxxxxxxxxxxxx>
---
 drivers/net/ethernet/renesas/ravb_main.c | 14 +++++++++-----
 1 file changed, 9 insertions(+), 5 deletions(-)

diff --git a/drivers/net/ethernet/renesas/ravb_main.c b/drivers/net/ethernet/renesas/ravb_main.c
index a200e205825a..2a995fa9bfff 100644
--- a/drivers/net/ethernet/renesas/ravb_main.c
+++ b/drivers/net/ethernet/renesas/ravb_main.c
@@ -2211,15 +2211,19 @@ static netdev_tx_t ravb_start_xmit(struct sk_buff *skb, struct net_device *ndev)
 		skb_tx_timestamp(skb);
 	}
-	/* Descriptor type must be set after all the above writes */
-	dma_wmb();
+
+	/* For multi-descriptors set DT_FEND before calling dma_wmb() */
 	if (num_tx_desc > 1) {
 		desc->die_dt = DT_FEND;
 		desc--;
-		desc->die_dt = DT_FSTART;
-	} else {
-		desc->die_dt = DT_FSINGLE;
 	}
+
+	/* Descriptor type must be set after all the above writes */
+	dma_wmb();
+	desc->die_dt = (num_tx_desc > 1) ? DT_FSTART : DT_FSINGLE;
+
+	/* Ensure data is written to RAM before initiating DMA transfer */
+	wmb();
 	ravb_modify(ndev, TCCR, TCCR_TSRQ0 << q, TCCR_TSRQ0 << q);
 
 	priv->cur_tx[q] += num_tx_desc;
--
2.43.0