From: Paolo Bonzini
Subject: Re: [Qemu-devel] travis-ci 'make check' timeouts (was Re: [PULL 00/11] x86 queue, 2017-02-27)
Date: Thu, 2 Mar 2017 17:22:51 +0100
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:45.0) Gecko/20100101 Thunderbird/45.7.0


On 02/03/2017 17:07, Eduardo Habkost wrote:
> On Thu, Mar 02, 2017 at 04:54:26PM +0100, Paolo Bonzini wrote:
>> On 02/03/2017 16:39, Eduardo Habkost wrote:
>>> On Tue, Feb 28, 2017 at 07:17:39PM +0000, Peter Maydell wrote:
>>>> On 28 February 2017 at 19:12, Eduardo Habkost <address@hidden> wrote:
>>>>> I saw a failure on x86-pull-request that seemed to be because of
>>>>> vhost-user-test[1]. However, after restarting the job, it
>>>>> passed[2].
>>>>
>>>> I'm currently processing a patch which (hopefully) fixes
>>>> vhost-user-test's intermittent failures:
>>>> http://patchwork.ozlabs.org/patch/732747/
>>>
>>> I'm not sure it will solve the issues on hosts without KVM. As
>>> far as I can see, if vhost-user-test is working without KVM, it
>>> is working by accident.
>>
>> Well, it had worked for a while before the patch.
> 
> Before which patch?

The one mentioned in the commit message by Marc-André:
b0a335e351103bf92f3f9d0bd5759311be8156ac.

Paolo

> 
>> As long as you don't overwrite code with vhost-user data and then try
>> to run that data, things will be fine. It's just not something you can
>> use in practice, but it works in tests.
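
To make the failure mode concrete: TCG caches translated code per guest
page and invalidates those translations when guest-CPU stores touch the
page, but a vhost backend writes guest memory directly through a shared
mapping, bypassing that invalidation. The toy C program below is not
QEMU code (tracked_write(), execute(), and translate() are made-up
names); it only models the resulting stale-cache effect:

    #include <stdio.h>
    #include <string.h>
    #include <stdint.h>

    /* Toy model of a translation cache.  translate() stands in for TB
     * generation; its result is cached.  Writes made through
     * tracked_write() invalidate the cache, the way guest-CPU stores
     * do under TCG.  A direct memset() -- the vhost backend writing
     * the shared mapping -- leaves a stale entry behind. */

    #define PAGE_SIZE 16

    static uint8_t page[PAGE_SIZE];
    static int cache_valid;
    static unsigned cached;

    static unsigned translate(void)
    {
        unsigned sum = 0;
        for (int i = 0; i < PAGE_SIZE; i++) {
            sum += page[i];
        }
        return sum;
    }

    static unsigned execute(void)
    {
        if (!cache_valid) {
            cached = translate();
            cache_valid = 1;
        }
        return cached;               /* may be stale */
    }

    static void tracked_write(int off, uint8_t v)
    {
        page[off] = v;
        cache_valid = 0;             /* TCG-style invalidation */
    }

    int main(void)
    {
        tracked_write(0, 1);
        printf("after tracked write: %u\n", execute());  /* prints 1 */

        memset(page, 7, PAGE_SIZE);  /* "vhost" writes directly */
        printf("after direct write:  %u\n", execute());  /* still 1 */
        return 0;
    }

The test "works by accident" in exactly this sense: the guest in
vhost-user-test never executes pages that the backend wrote, so the
stale translations are never reached.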
> 
> Earlier this week I saw the wait_for_fds assertion (mentioned in
> the thread above) on a travis-ci job again, and I suspected it was
> the same vhost_set_mem_table() + TCG error seen in that thread.
> 
> Unfortunately travis-ci overwrote the previous logs when I
> restarted the job, so I can't confirm whether it was really the
> same vhost_set_mem_table() error. I guess we'll just have to wait
> and see if it fails again.
> 
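
For reference, the wait_for_fds assertion Eduardo mentions is a timed
condition-wait in QEMU's vhost-user-test: the test blocks until the
embedded vhost-user server reports that it has received the backend's
file descriptors, and asserts if they do not arrive before a deadline.
A minimal GLib sketch of that pattern (the struct layout, field names,
and the five-second budget are illustrative, not copied from the test
source):

    #include <glib.h>

    /* Illustrative stand-in for the test's server state. */
    typedef struct {
        GMutex mutex;
        GCond  cond;
        int    fds_num;   /* set by the server thread when fds arrive */
    } TestServer;

    static void wait_for_fds(TestServer *s)
    {
        gint64 deadline = g_get_monotonic_time() + 5 * G_TIME_SPAN_SECOND;

        g_mutex_lock(&s->mutex);
        while (!s->fds_num) {
            if (!g_cond_wait_until(&s->cond, &s->mutex, deadline)) {
                break;    /* deadline passed without any fds */
            }
        }
        /* The assertion that fires on a slow or wedged worker. */
        g_assert(s->fds_num);
        g_mutex_unlock(&s->mutex);
    }

On a loaded travis-ci worker a fixed deadline like this can expire even
when nothing is functionally wrong, which is one reason such failures
look intermittent and disappear when the job is restarted.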


