qemu-devel

Re: [PATCH v2] coverity: physmem: use simple assertions instead of modelling


From: Vladimir Sementsov-Ogievskiy
Subject: Re: [PATCH v2] coverity: physmem: use simple assertions instead of modelling
Date: Wed, 8 Nov 2023 12:50:20 +0300
User-agent: Mozilla Thunderbird

Ping. Is it queued?

On 06.10.23 01:53, Paolo Bonzini wrote:
On Thu, Oct 5, 2023 at 4:04 PM Vladimir Sementsov-Ogievskiy
<vsementsov@yandex-team.ru> wrote:
+            /*
+             * Assure Coverity (and ourselves) that we are not going to OVERRUN
+             * the buffer by following ldn_he_p().
+             */
+            assert((l == 1 && len >= 1) ||
+                   (l == 2 && len >= 2) ||
+                   (l == 4 && len >= 4) ||
+                   (l == 8 && len >= 8));

I'll queue it shortly, but perhaps you can check whether assert(l <= len) is enough?

Alternatively I can try applying the patch on top of the tree that we
test with, and see how things go.

Paolo

              val = ldn_he_p(buf, l);
              result |= memory_region_dispatch_write(mr, addr1, val,
                                                     size_memop(l), attrs);
@@ -2784,6 +2793,15 @@ MemTxResult flatview_read_continue(FlatView *fv, hwaddr addr,
              l = memory_access_size(mr, l, addr1);
              result |= memory_region_dispatch_read(mr, addr1, &val,
                                                    size_memop(l), attrs);
+
+            /*
+             * Assure Coverity (and ourselves) that we are not going to OVERRUN
+             * the buffer by following stn_he_p().
+             */
+            assert((l == 1 && len >= 1) ||
+                   (l == 2 && len >= 2) ||
+                   (l == 4 && len >= 4) ||
+                   (l == 8 && len >= 8));
              stn_he_p(buf, l, val);
          } else {
              /* RAM case */
--
2.34.1



--
Best regards,
Vladimir



