From: Yehuda Sadeh Weinraub
Subject: Re: [Qemu-devel] [PATCH] ceph/rbd block driver for qemu-kvm (v6)
Date: Thu, 14 Oct 2010 17:34:41 -0700

See my comments:

On Wed, Oct 13, 2010 at 1:41 AM, Stefan Hajnoczi <address@hidden> wrote:
>> +
>> +    cpu_to_le32s(&snap_count);
>> +    cpu_to_le64s(&snap_names_len);

I've redone all the endianness conversions: the header is now kept
little-endian, and the conversion to CPU order is done whenever the
header is read.

>
> It is clearer to do byteswapping immediately, rather than having the
> variable take on different endianness at different times:
> uint32_t snap_count = cpu_to_le32(header->snap_count);
> uint64_t snap_names_len = cpu_to_le64(header->snap_names_len);

Right.

>
>> +    if (snap_count) {
>> +        const char *header_snap = (const char *)&header->snaps[snap_count];
>> +        const char *end = header_snap + snap_names_len;
>
> snap_names_len is little-endian.  This won't work on big-endian hosts.
>  Did you mean le64_to_cpu() instead of cpu_to_le64()?
Yes, fixed that.
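
For reference, the corrected pattern looks roughly like this (a sketch
only, using the field names from the quoted struct; le*_to_cpu rather
than cpu_to_le*, because the on-disk header is little-endian):

    /* header is read straight off disk and stays little-endian in memory;
     * each field is converted to CPU byte order at the point it is used */
    uint32_t snap_count     = le32_to_cpu(header->snap_count);
    uint64_t snap_names_len = le64_to_cpu(header->snap_names_len);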

>
>> +        snaps = qemu_malloc(sizeof(rados_snap_t) * header->snap_count);
>
> snaps is allocated here...
>
>> +
>> +        for (i=0; i < snap_count; i++) {
>> +            snaps[i] = (uint64_t)header->snaps[i].id;
>> +            cpu_to_le64s(&snaps[i]);
>> +
>> +            if (snap && strcmp(snap, header_snap) == 0) {
>> +                snapid = snaps[i];
>> +            }
>> +
>> +            header_snap += strlen(header_snap) + 1;
>> +            if (header_snap > end) {
>> +                error_report("bad header, snapshot list broken");
>> +            }
>> +        }
>> +    }
>> +
>> +    if (snap && !snapid) {
>> +        error_report("snapshot not found");
>> +        return -ENOENT;
>
> ...but never freed here.

Freed now.
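
Roughly, the error path becomes (again just a sketch; it assumes snaps
is initialised to NULL when no snapshots exist, so qemu_free() is safe
on every path):

    if (snap && !snapid) {
        error_report("snapshot not found");
        qemu_free(snaps);    /* was leaked on this path before */
        return -ENOENT;
    }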



Thanks,
Yehuda


