coreutils
From: Jirka Hladky
Subject: Re: Enhancement request for tee - please add the option to not quit on SIGPIPE when some other files are still open
Date: Fri, 20 Nov 2015 00:09:03 +0100

> If you ignore SIGPIPE in tee in the above then what will terminate the
> tee process?  Since the input is not ever terminated.

That's why I would like to have an option to suppress writing to stdout. With it, tee would finish as soon as all files are closed: without needing a >/dev/null redirection, it would run as long as at least one pipe is open.

  while (n_outputs)
    {
      /* read a block of input into buffer, setting bytes_read */

      /* Write to all NFILES + 1 descriptors.
         Standard output is the first one.  */
      for (i = 0; i <= nfiles; i++)
        if (descriptors[i]
            && fwrite (buffer, bytes_read, 1, descriptors[i]) != 1)
          {
            /* On a write error (e.g. EPIPE), drop this output
               instead of exiting.  */
            descriptors[i] = NULL;
            n_outputs--;
          }
    }

> Also, a Useless-Use-Of-Cat in the above too.
Yes, it is. But it's not a real-world example anyway. My real problem is testing an RNG with multiple tests. I need to test a huge amount of data (hundreds of GB), so storing the data on disk is not feasible. Each test will consume a different amount of data - some tests will stop after an RNG failure has been detected or after some threshold for the maximum amount of processed data is reached, others will dynamically adjust the amount of data they need based on test results. The command I need to run is

rng_generator | tee >(test1) >(test2) >(test3)


> Already done in the previous v8.24 release:
I have tried it, but I'm not able to get the desired behavior. See these examples:

A)
tee --output-error=warn </dev/zero >(head -c100M | wc -c ) >(head -c1 | wc -c ) >/dev/null 
1
src/tee: /dev/fd/62: Broken pipe
104857600
src/tee: /dev/fd/63: Broken pipe

=> it's almost there, except that it runs forever because of the >/dev/null redirection
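A variant of example A that does terminate (my own sketch, assuming coreutils >= 8.24 and bash): route tee's stdout into one of the consumers with > >(...) instead of discarding it to /dev/null, so that stdout also raises EPIPE once its reader exits. Sizes are reduced here, and the counts are written to files because the shell does not wait for process substitutions:

```shell
#!/bin/bash
# Same idea as example A, but stdout goes to a consumer via > >(...)
# rather than >/dev/null, so tee eventually sees EPIPE on every output
# and exits on its own.  Requires coreutils >= 8.24 for --output-error.
# "|| true" because tee exits non-zero after any output errored.
tee --output-error=warn </dev/zero \
    >(head -c1 | wc -c >small.txt) \
    > >(head -c1000000 | wc -c >big.txt) 2>/dev/null || true
sleep 1   # let the process substitutions finish writing their files
cat small.txt big.txt
```

With this, tee stops as soon as both head processes have closed their pipes, and the two counts (1 and 1000000) survive in the files.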

B)
src/tee --output-error=warn </dev/zero >(head -c100M | wc -c ) | (head -c1 | wc -c )
1
src/tee: standard output: Broken pipe
src/tee: /dev/fd/63: Broken pipe

As you can see, the output from (head -c100M | wc -c) is missing.
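My guess at what happens here (an assumption, not verified): inside a pipeline, a >(...) process substitution inherits tee's stdout, which is the pipe itself - so the wc count from the 100M branch is written into the already-closed pipe and lost. A minimal demonstration that >(...) output travels through the pipe:

```shell
#!/bin/bash
# The process substitution's stdout is the pipe to tr, so its own
# output is uppercased along with the data that passed through it.
result=$(echo visible > >(cat; echo from-procsub) | tr a-z A-Z)
printf '%s\n' "$result"
```

Both lines come out uppercased, i.e. they went through tr; in example B the same mechanism would send the 100M count into a pipe whose reader has already exited.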

Conclusion:
Case A) above is close to what I want to achieve, but there is a problem with writing to stdout. --output-error=warn is part of the functionality I was looking for. However, to make it usable for the scenario described here, we would need to add an option not to write to stdout. What do you think?

Thanks
Jirka

On Thu, Nov 19, 2015 at 12:13 AM, Bob Proulx <address@hidden> wrote:
Jirka Hladky wrote:
> I have recently run into an issue that tee will finish as soon as the first
> pipe it's writing to is closed. Please consider this example:
>
> $cat /dev/zero | tee >(head -c1 | wc -c )  >(head -c100M | wc -c ) >/dev/null
> 1
> 65536
>
> The second wc command will receive only 64kB instead of the expected 100MB.

Expectations depend upon the beholder of the expectation.  :-)

> IMHO, tee should have a command line option to proceed as long as some file is
> open.
>
>  cat /dev/zero | mytee --skip_stdout_output --continue_on_sigpipe >(head -c1 | wc -c ) >(head -c100M | wc -c )

If you ignore SIGPIPE in tee in the above then what will terminate the
tee process?  Since the input is not ever terminated.

Also, a Useless-Use-Of-Cat in the above too.

> It should be accompanied by another switch which will suppress
> writing to STDOUT.

Isn't >/dev/null already a sufficient switch to suppress stdout?

Bob

