From: Dmitry Gutov
Subject: Re: Generation of tags for the current project on the fly
Date: Fri, 9 Feb 2018 03:22:40 +0300
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:58.0) Gecko/20100101 Thunderbird/58.0
On 2/8/18 23:31, John Yates wrote:
> Git seems to be able to compute new/modified/dropped files with quite tolerable efficiency, even for large projects. Are there lessons to be learned there?
One way to interpret that is that perhaps checking the presence of even a large number of files is a fast enough operation. Would you like to give it a test?
Alternatively, Git uses some smart caching somewhere.