
[gnuastro-commits] master 58706c6a: Release: necessary changes for version 0.23


From: Mohammad Akhlaghi
Subject: [gnuastro-commits] master 58706c6a: Release: necessary changes for version 0.23
Date: Sat, 13 Jul 2024 14:43:05 -0400 (EDT)

branch: master
commit 58706c6afbd3eebe7067395441f28412d2fb45e9
Author: Mohammad Akhlaghi <mohammad@akhlaghi.org>
Commit: Mohammad Akhlaghi <mohammad@akhlaghi.org>

    Release: necessary changes for version 0.23
    
    Until now, Gnuastro's latest official version was 0.22, but that was almost
    5 months ago and many things have been added and improved. So it is time
    for Gnuastro 0.23.
    
    With this commit, all the necessary changes have been made to make the
    official release of Gnuastro 0.23 from 'doc/release-checklist.txt'. This
    includes:
    
      - A spell-check in the newly added parts of the book.
      - Synchronizing the THANKS file and the acknowledgment section of the book.
      - Updating the versions in the webpage and NEWS file.
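
    As an editorial aside, the checklist steps above can be sketched as a
    short shell session. This is illustrative only: the real commands live in
    'doc/release-checklist.txt' (not shown here), and the file contents below
    are minimal stand-ins, not the actual Gnuastro sources.

    ```shell
    # Illustrative sketch of the version-bump steps; run in a scratch
    # directory with stand-in files (assumes GNU sed).
    set -eu
    tmp=$(mktemp -d)
    cd "$tmp"

    # Minimal stand-ins for the real files, just to show the substitutions:
    echo '* Noteworthy changes in release X.XX (library XX.X.X) (YYYY-MM-DD)' > NEWS
    echo 'href="https://ftp.gnu.org/gnu/gnuastro/gnuastro-0.22.tar.gz">Gnuastro' > gnuastro.en.html

    # Fill in the release and library versions in the NEWS template line:
    sed -i 's|release X\.XX (library XX\.X\.X)|release 0.23 (library 21.0.0)|' NEWS

    # Point the webpage at the new tarball:
    sed -i 's|gnuastro-0\.22\.tar\.gz|gnuastro-0.23.tar.gz|g' gnuastro.en.html

    grep 'release 0.23' NEWS
    grep '0\.23\.tar\.gz' gnuastro.en.html
    ```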
---
 NEWS                         |  2 +-
 THANKS                       |  8 ++--
 doc/announce-acknowledge.txt | 16 -------
 doc/gnuastro.en.html         |  8 ++--
 doc/gnuastro.fr.html         |  6 +--
 doc/gnuastro.texi            | 99 ++++++++++++++++++++++++--------------------
 6 files changed, 67 insertions(+), 72 deletions(-)

diff --git a/NEWS b/NEWS
index 10efd8a9..efc15207 100644
--- a/NEWS
+++ b/NEWS
@@ -3,7 +3,7 @@ GNU Astronomy Utilities NEWS                          -*- 
outline -*-
 Copyright (C) 2015-2024 Free Software Foundation, Inc.
 See the end of the file for license conditions.
 
-* Noteworthy changes in release X.XX (library XX.X.X) (YYYY-MM-DD)
+* Noteworthy changes in release 0.23 (library 21.0.0) (YYYY-MM-DD)
 ** New publications
 
   - https://ui.adsabs.harvard.edu/abs/2024RNAAS...8..168E by Eskandarlou
diff --git a/THANKS b/THANKS
index e318febe..0759d6b6 100644
--- a/THANKS
+++ b/THANKS
@@ -16,7 +16,7 @@ People
 
 The following people provided valuable feedback (suggestions, ideas) to the
 authors of Gnuastro. We hereby gratefully acknowledge their help and
-support in Gnuastro. The list is ordered alphabetically (by family name).
+support in Gnuastro. The list is ordered alphabetically (by first name).
 
     Aaron Watkins                        aaron.watkins@oulu.fi
     Adrian Bunk                          bunk@debian.org
@@ -49,6 +49,7 @@ support in Gnuastro. The list is ordered alphabetically (by 
family name).
     Craig Gordon                         craig.a.gordon@nasa.gov
     David Shupe                          shupe@ipac.caltech.edu
     David Valls-Gabaud                   david.valls-gabaud@obspm.fr
+    Dennis Williamson                    dennistwilliamson@gmail.com
     Dmitrii Oparin                       doparin2@gmail.com
     Elham Eftekhari                      elhamea@iac.es
     Elham Saremi                         saremi@ipm.ir
@@ -62,6 +63,7 @@ support in Gnuastro. The list is ordered alphabetically (by 
family name).
     Geoffry Krouchi                      geoffrey.krouchi@etu.univ-lyon1.fr
     Giacomo Lorenzetti                   glorenzetti@cefca.es
     Giulia Golini                        giulia.golini@gmail.com
+    Greg Wooledge                        greg@wooledge.org
     Guillaume Mahler                     guillaume.mahler@univ-lyon1.fr
     Hamed Altafi                         hamed.altafi2@gmail.com
     Helena Domínguez Sánchez             hdominguez@cefca.es
@@ -81,8 +83,8 @@ support in Gnuastro. The list is ordered alphabetically (by 
family name).
     Joseph Mazzarella                    mazz@ipac.caltech.edu
     Joseph Putko                         josephputko@gmail.com
     Juan Antonio Fernández Ontiveros     jafernandez@cefca.es
-    Juan Castillo Ramírez                jcastillo@cefca.es
     Juan C. Tello                        jtello@iaa.es
+    Juan Castillo Ramírez                jcastillo@cefca.es
     Juan Miro                            miro.juan@gmail.com
     Juan Molina Tobar                    juan.a.molina.t@gmail.com
     Karl Berry                           karl@gnu.org
@@ -140,8 +142,8 @@ support in Gnuastro. The list is ordered alphabetically (by 
family name).
     Tamara Civera Lorenzo                tcivera@cefca.es
     Teet Kuutma                          tkuutma@cefca.es
     Teymoor Saifollahi                   teymur.saif@gmail.com
-    Thérèse Godefroy                     godef.th@free.fr
     Thorsten Alteholz                    thorsten@alteholz.dev
+    Thérèse Godefroy                     godef.th@free.fr
     Valentina Abril-melgarejo            valentina.abril@lam.fr
     Vincenzo Testa                       vincenzo.testa@inaf.it
     William Pence                        william.pence@nasa.gov
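
As an aside, the two swaps above (Juan C. Tello before Juan Castillo Ramírez,
Thorsten Alteholz before Thérèse Godefroy) are exactly what byte-wise sorting
produces. A quick, illustrative way to check such ordering ('names.txt' is a
hypothetical extract of the list, not a real file in the repository):

```shell
# Check that a name list is in byte-wise alphabetical order, as the
# two swaps above imply (LC_ALL=C forces plain byte comparison).
printf '%s\n' 'Juan C. Tello' 'Juan Castillo Ramírez' \
              'Thorsten Alteholz' 'Thérèse Godefroy' > names.txt
LC_ALL=C sort -C names.txt && echo in-order
```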
diff --git a/doc/announce-acknowledge.txt b/doc/announce-acknowledge.txt
index 896791a8..bf7b9d63 100644
--- a/doc/announce-acknowledge.txt
+++ b/doc/announce-acknowledge.txt
@@ -1,21 +1,5 @@
 Alphabetically ordered list to acknowledge in the next release.
 
-Dennis Williamson
-Fernando Buitrago Alonso
-Greg Wooledge
-Hamed Altafi
-Jesús Vega
-Juan Castillo Ramírez
-Mathias Urbano
-Ooldooz Kabood
-Paola Dimauro
-Phil Wyett
-Rahna Payyasseri Thanduparackal
-Raul Infante-Sainz
-Sepideh Eskandarlou
-Takashi Ichikawa
-Zahra Sharbaf
-
 
 
 
diff --git a/doc/gnuastro.en.html b/doc/gnuastro.en.html
index 028171b2..b21660c5 100644
--- a/doc/gnuastro.en.html
+++ b/doc/gnuastro.en.html
@@ -93,9 +93,9 @@ for entertaining and easy to read real world examples of using
 
 <p>
   The current stable release
-  is <strong><a href="https://ftp.gnu.org/gnu/gnuastro/gnuastro-0.22.tar.gz">Gnuastro
-  0.22</a></strong> (released on February 3rd, 2024).
-  Use <a href="https://ftpmirror.gnu.org/gnuastro/gnuastro-0.22.tar.gz">a
+  is <strong><a href="https://ftp.gnu.org/gnu/gnuastro/gnuastro-0.23.tar.gz">Gnuastro
+  0.23</a></strong> (released on July 13th, 2024).
+  Use <a href="https://ftpmirror.gnu.org/gnuastro/gnuastro-0.23.tar.gz">a
   mirror</a> if possible.
 
   <!-- Comment the test release notice when the test release is not more
@@ -106,7 +106,7 @@ for entertaining and easy to read real world examples of 
using
   To stay up to date, please subscribe.</em></p>
 
 <p>For details of the significant changes in this release, please see the
-  <a href="https://git.savannah.gnu.org/cgit/gnuastro.git/plain/NEWS?id=gnuastro_v0.22">NEWS</a>
+  <a href="https://git.savannah.gnu.org/cgit/gnuastro.git/plain/NEWS?id=gnuastro_v0.23">NEWS</a>
   file.</p>
 
 <p>The
diff --git a/doc/gnuastro.fr.html b/doc/gnuastro.fr.html
index 7835d58c..bdb9b2b6 100644
--- a/doc/gnuastro.fr.html
+++ b/doc/gnuastro.fr.html
@@ -81,14 +81,14 @@
 <h3 id="download">Téléchargement</h3>
 
 <p>La version stable actuelle
-  est <strong><a href="https://ftp.gnu.org/gnu/gnuastro/gnuastro-0.22.tar.gz">Gnuastro
-  0.22</a></strong> (sortie le 3 février 2024). Utilisez <a href="https://ftpmirror.gnu.org/gnuastro/gnuastro-0.22.tar.gz">un
+  est <strong><a href="https://ftp.gnu.org/gnu/gnuastro/gnuastro-0.23.tar.gz">Gnuastro
+  0.23</a></strong> (sortie le 13 juillet 2024). Utilisez <a href="https://ftpmirror.gnu.org/gnuastro/gnuastro-0.23.tar.gz">un
   miroir</a> si possible.  <br /><em>Les nouvelles versions sont annoncées
   sur <a href="https://lists.gnu.org/mailman/listinfo/info-gnuastro">info-gnuastro</a>.
   Abonnez-vous pour rester au courant.</em></p>
 
 <p>Les changements importants sont décrits dans le
-  fichier <a href="https://git.savannah.gnu.org/cgit/gnuastro.git/plain/NEWS?id=gnuastro_v0.22">
+  fichier <a href="https://git.savannah.gnu.org/cgit/gnuastro.git/plain/NEWS?id=gnuastro_v0.23">
   NEWS</a>.</p>
 
 <p>Le lien
diff --git a/doc/gnuastro.texi b/doc/gnuastro.texi
index e12382c9..78e74885 100644
--- a/doc/gnuastro.texi
+++ b/doc/gnuastro.texi
@@ -151,17 +151,17 @@ A copy of the license is included in the section entitled 
``GNU Free Documentati
 @subtitle
 @subtitle
 @end iftex
-@subtitle @strong{Important note:}
-@subtitle This is an @strong{under-development} Gnuastro release 
(bleeding-edge!).
-@subtitle It is not yet officially released.
-@subtitle The source tarball corresponding to this version is (temporarily) 
available at this URL:
-@subtitle @url{http://akhlaghi.org/src/gnuastro-@value{VERSION}.tar.lz}
-@subtitle (the tarball link above will not be available after the next 
official release)
-@subtitle The most recent under-development source and its corresponding book 
are available at:
-@subtitle @url{http://akhlaghi.org/gnuastro.pdf}
-@subtitle @url{http://akhlaghi.org/gnuastro-latest.tar.lz}
-@subtitle To stay up to date with Gnuastro's official releases, please 
subscribe to this mailing list:
-@subtitle @url{https://lists.gnu.org/mailman/listinfo/info-gnuastro}
+@c @subtitle @strong{Important note:}
+@c @subtitle This is an @strong{under-development} Gnuastro release 
(bleeding-edge!).
+@c @subtitle It is not yet officially released.
+@c @subtitle The source tarball corresponding to this version is (temporarily) 
available at this URL:
+@c @subtitle @url{http://akhlaghi.org/src/gnuastro-@value{VERSION}.tar.lz}
+@c @subtitle (the tarball link above will not be available after the next 
official release)
+@c @subtitle The most recent under-development source and its corresponding 
book are available at:
+@c @subtitle @url{http://akhlaghi.org/gnuastro.pdf}
+@c @subtitle @url{http://akhlaghi.org/gnuastro-latest.tar.lz}
+@c @subtitle To stay up to date with Gnuastro's official releases, please 
subscribe to this mailing list:
+@c @subtitle @url{https://lists.gnu.org/mailman/listinfo/info-gnuastro}
 @author Mohammad Akhlaghi
 
 @page
@@ -1149,7 +1149,7 @@ It uses a technique to detect very faint and diffuse, 
irregularly shaped signal
 (@file{astsegment}, see @ref{Segment}) Segment detected regions based on the 
structure of signal and the input dataset's noise properties.
 
 @item Statistics
-(@file{aststatistics}, see @ref{Statistics}) Statistical calculations on the 
input dataset (column in a table, image or datacube).
+(@file{aststatistics}, see @ref{Statistics}) Statistical calculations on the 
input dataset (column in a table, image or data cube).
 This includes many operations such as generating histograms, sigma clipping, 
and least squares fitting.
 
 @item Table
@@ -1489,7 +1489,7 @@ Like all software, version 1.0 is a unique milestone: a 
point where the develope
 In Gnuastro, the goal to achieve for version 1.0 is to have all the necessary 
tools for optical imaging data reduction: starting from raw images of 
individual exposures to the final deep image ready for high-level science.
 
 Various software already existed and were commonly used when Gnuastro was 
first released in 2016.
-The existing software are mosly written without following any robust, or even 
common, coding and usage standards or up-to-date and well-maintained 
documentation.
+The existing software are mostly written without following any robust, or even 
common, coding and usage standards or up-to-date and well-maintained 
documentation.
 This makes it very hard to reduce astronomical data without learning those 
software's peculiarities through trial and error.
 
 
@@ -1873,10 +1873,11 @@ Faezeh Bidjarchian,
 Leindert Boogaard,
 Nicolas Bouch@'e,
 Stefan Br@"uns,
-Fernando Buitrago,
+Fernando Buitrago Alonso,
 Adrian Bunk,
 Rosa Calvi,
-Mark Calabretta
+Mark Calabretta,
+Juan Castillo Ramírez,
 Nushkia Chamba,
 Sergio Chueca Urzay,
 Tamara Civera Lorenzo,
@@ -1911,6 +1912,7 @@ Ra@'ul Infante Sainz,
 Brandon Invergo,
 Oryna Ivashtenko,
 Aur@'elien Jarno,
+Ooldooz Kabood,
 Lee Kelvin,
 Brandon Kelly,
 Mohammad-Reza Khellat,
@@ -1923,6 +1925,7 @@ Floriane Leclercq,
 Alan Lefor,
 Javier Licandro,
 Jeremy Lim,
+Giacomo Lorenzetti,
 Alejandro Lumbreras Calle,
 Sebasti@'an Luna Valero,
 Alberto Madrigal,
@@ -1938,6 +1941,7 @@ Sylvain Mottet,
 Dmitrii Oparin,
 Fran@,{c}ois Ochsenbein,
 Bertrand Pain,
+Rahna Payyasseri Thanduparackal,
 William Pence,
 Irene Pintos Castro,
 Mamta Pommier,
@@ -1969,12 +1973,17 @@ Vincenzo Testa,
 @'Eric Thi@'ebaut,
 Ignacio Trujillo,
 Peter Teuben,
+Mathias Urbano,
 David Valls-Gabaud,
 Jes@'us Varela,
+Jesús Vega,
 Aaron Watkins,
 Richard Wilbur,
+Phil Wyett,
+Dennis Williamson,
 Michael H.F. Wilkinson,
 Christopher Willmer,
+Greg Wooledge,
 Xiuqin Wu,
 Sara Yousefi Taemeh,
 Johannes Zabl.
@@ -2588,7 +2597,7 @@ $ for z in $(seq 0.1 0.1 5); do                           
       \
 @end example
 
 Have a look at the two printed columns.
-The first is the redshift, and the second is the area of this image at that 
redshift (in Megaparsecs squared).
+The first is the redshift, and the second is the area of this image at that 
redshift (in mega-parsecs squared).
 @url{https://en.wikipedia.org/wiki/Redshift, Redshift} (@mymath{z}) is often 
used as a proxy for distance in galaxy evolution and cosmology: a higher 
redshift corresponds to larger line-of-sight comoving distance.
 
 @cindex Turn over point (angular diameter distance)
@@ -3615,7 +3624,7 @@ You should use the upper-limit magnitude instead (with an 
arrow in your plots to
 
 But the main point (in relation to the magnitude limit) with the upper-limit, 
is the @code{UPPERLIMIT_SIGMA} column.
 You can think of this as a @emph{realistic} S/N for extremely 
faint/diffuse/small objects.
-The raw S/N column is simply calculated on a pixel-by-pixel basis; however, 
the upper-limit sigma is produced by actually taking the label's footprint, and 
randomly placing it thousands of times over un-detected parts of the image and 
measuring the brightness of the sky.
+The raw S/N column is simply calculated on a pixel-by-pixel basis; however, 
the upper-limit sigma is produced by actually taking the label's footprint, and 
randomly placing it thousands of times over undetected parts of the image and 
measuring the brightness of the sky.
 The clump's brightness is then divided by the standard deviation of the 
resulting distribution to give you exactly how significant it is (accounting 
for inter-pixel issues like correlated noise, which are strong in this dataset).
 You can actually compare the two values with the command below:
 
@@ -5739,7 +5748,7 @@ The @code{MEDSTD} value is very similar to the standard 
deviation derived above,
 @cartouche
 @noindent
 @strong{@code{MEDSTD} is more reliable than the standard deviation of masked 
pixels:} it may happen that differences between these two become more 
significant than the experiment above.
-In such cases, the @code{MEDSTD} is more reliable because NoiseChisel 
estimates it within the tiles and after several steps of outlier rejection (for 
example due to un-detected signal) and before interpolation.
+In such cases, the @code{MEDSTD} is more reliable because NoiseChisel 
estimates it within the tiles and after several steps of outlier rejection (for 
example due to undetected signal) and before interpolation.
 The standard deviation of the masked image, on the other hand, is calculated 
based on the final detection, does no higher-level outlier rejection, and is 
based on the interpolated image.
 Therefore, it can be easily biased by signal or artifacts in the image and 
besides being easier to measure, @code{MEDSTD} is also more statistically 
robust.
 @end cartouche
@@ -7921,7 +7930,7 @@ $ cd tutorial-3d
 $ wget http://akhlaghi.org/data/a370-crop.fits    # Downloads 287 MB
 @end example
 
-In the sections below, we will first review how you can visually inspect a 3D 
datacube in DS9 and interactively see the spectra of any region.
+In the sections below, we will first review how you can visually inspect a 3D 
data cube in DS9 and interactively see the spectra of any region.
 We will then subtract the continuum emission, detect the emission-lines within 
this cube and extract their spectra.
 We will finish with creating pseudo narrow-band images optimized for some of 
the emission lines.
 
@@ -7939,7 +7948,7 @@ We will finish with creating pseudo narrow-band images 
optimized for some of the
 @subsection Viewing spectra and redshifted lines
 
 In @ref{Detecting lines and extracting spectra in 3D data} we downloaded a 
small crop from the Pilot-WINGS survey of Abell 370 cluster; observed with MUSE.
-In this section, we will review how you can visualize/inspect a datacube using 
that example.
+In this section, we will review how you can visualize/inspect a data cube 
using that example.
 With the first command below, we'll open DS9 such that each 2D slice of the 
cube (at a fixed wavelength) is seen as a single image.
 If you move the slider in the ``Cube'' window (that also opens), you can view 
the same field at different wavelengths.
 We are ending the first command with a `@code{&}' so you can continue viewing 
DS9 while using the command-line (press one extra @code{ENTER} to see the 
prompt).
@@ -11071,7 +11080,7 @@ $ astscript-fits-view build/collapsed-*.fits
 After TOPCAT has opened, select @file{collapsed-1.fits} in the ``Table List'' 
side-bar.
 In the ``Graphics'' menu, select ``Plane plot'' and you will see all the 
values fluctuating around 10 (with a maximum/minimum around @mymath{\pm2}).
 Afterwards, click on the ``Layers'' menu of the new window (with a plot) and 
click on ``Add position control''.
-Tt the bottom of the window (where the scroll bar in front of ``Table'' is 
empty), select @file{collapsed-9.fits}.
+At the bottom of the window (where the scroll bar in front of ``Table'' is 
empty), select @file{collapsed-9.fits}.
 In the regions where there was no circle in any of the vertical axes, the two 
match nicely (the noise level is the same).
 However, you see that the regions that were partly covered by the outlying 
circle gradually get more affected as the width of the circle in that column 
increases (the full diameter of the circle was in the middle of the image).
 This shows how the median is biased by outliers as their number increases.
@@ -11223,7 +11232,7 @@ The example above provided a single statistic from a 
single dataset.
 Other scenarios where sigma-clipping becomes necessary are stacking and 
collapsing (that was the main goal of the script in @ref{Building inputs and 
analysis without clipping}).
 To generate @mymath{\sigma}-clipped stacks and collapsed tables, you just need 
to change the values of the three variables of the script (shown below).
 After making this change in your favorite text editor, have a look at the 
outputs.
-By the way, if you have still not read (and understood) the commands in that 
script, this is a good time to do it so the steps below do not appear as a 
black box to you (for more on writig shell scripts, see @ref{Writing scripts to 
automate the steps}).
+By the way, if you have still not read (and understood) the commands in that 
script, this is a good time to do it so the steps below do not appear as a 
black box to you (for more on writing shell scripts, see @ref{Writing scripts 
to automate the steps}).
 
 @example
 $ grep ^clip_ script.sh
@@ -11256,7 +11265,7 @@ The pixels in this image only have two values: 8 or 9.
 Over the footprint of the circle, most pixels have a value of 8: only 8 inputs 
were used for these (one of the inputs was clipped out).
 In the other regions of the image, you see that the pixels almost consistently 
have a value of 9 (except for some noisy pixels here and there).
 
-It is the ``holes'' (with value 9) within the footprint of the circle that 
keep the circle visible in the final stack of the ouput (as we saw previously 
in the 2-column DS9 command before).
+It is the ``holes'' (with value 9) within the footprint of the circle that 
keep the circle visible in the final stack of the output (as we saw previously 
in the 2-column DS9 command before).
 Spoiler alert: in a later section of this tutorial (@ref{Contiguous outliers}) 
you will see how we fix this problem.
 But please be patient and continue reading and running the commands for now.
 
@@ -11516,16 +11525,16 @@ MAD-clipping was the opposite: it masked many 
outliers (good completeness), but
 
 Fortunately there is a good way to benefit from the best of both worlds.
 Recall that in the numbers image of the MAD-clipping output, the wrongly 
clipped pixels were randomly distributed and barely connected.
-On the other hand, those that covered the circle were nicely connected, with 
un-clipped pixels scattered within it.
+On the other hand, those that covered the circle were nicely connected, with 
unclipped pixels scattered within it.
 Therefore, using their spatial distribution, we can improve the completeness 
(not have any ``holes'' within the masked circle) and purity (remove the false 
clips).
 This is done through the @code{madclip-maskfilled} operator:
 
 @enumerate
 @item
-MAD-clipping is applied (@mymath{\sigma}-clipping is also possible, but less 
affective).
+MAD-clipping is applied (@mymath{\sigma}-clipping is also possible, but less 
effective).
 @item
 A binary image is created for each input: any outlying pixel of each input is 
set to 1 (foreground); the rest are set to 0 (background).
-Mathematical morphology operators are then used in prepartion to filling the 
holes (to close the boudary of the contiguous outlier):
+Mathematical morphology operators are then used in preparation to filling the 
holes (to close the boundary of the contiguous outlier):
 @itemize
 @item
 For 2D images (where each pixel has 8 neighbors) the foreground pixels are 
dilated with a ``connectivity'' of 1 (only the nearest neighbors: 
4-connectivity in a 2D image).
@@ -15204,7 +15213,7 @@ The former is sufficient for data with less than 8 
significant decimal digits (m
 The representation of real numbers as bits is much more complex than integers.
 If you are interested to learn more about it, you can start with the 
@url{https://en.wikipedia.org/wiki/Floating_point, Wikipedia article}.
 
-Practically, you can use Gnuastro's Arithmetic program to convert/change the 
type of an image/datacube (see @ref{Arithmetic}), or Gnuastro Table program to 
convert a table column's data type (see @ref{Column arithmetic}).
+Practically, you can use Gnuastro's Arithmetic program to convert/change the 
type of an image/data-cube (see @ref{Arithmetic}), or Gnuastro Table program to 
convert a table column's data type (see @ref{Column arithmetic}).
 Conversion of a dataset's type is necessary in some contexts.
 For example, the program/library that you intend to feed the data into only 
accepts floating point values, but you have an integer image/column.
 Another situation where conversion can be helpful is when you know that your 
data only has values that fit within @code{int8} or @code{uint16}.
@@ -16224,7 +16233,7 @@ For lower-level information about the pixel scale in 
each dimension, see @option
 @item --skycoverage
 @cindex Image's sky coverage
 @cindex Coverage of image over sky
-Print the rectangular area (or 3D cube) covered by the given image/datacube 
HDU over the Sky in the WCS units.
+Print the rectangular area (or 3D cube) covered by the given image/data-cube 
HDU over the Sky in the WCS units.
 The covered area is reported in two ways:
 1) the center and full width in each dimension,
 2) the minimum and maximum sky coordinates in each dimension.
@@ -22046,7 +22055,7 @@ The first popped operand is the termination criteria of 
the clipping, the second
 If you are not yet familiar with @mymath{\sigma} or MAD clipping, it is 
recommended to read this tutorial: @ref{Clipping outliers}.
 
 When more than 95@mymath{\%} of the area of an operand is masked, the full 
operand will be masked.
-This is necesasry in cases like this: one of your inputs has many outliers (for 
example it is much more noisy than the rest or its sky level has not been 
subtracted properly).
+This is necessary in cases like this: one of your inputs has many outliers (for 
example it is much more noisy than the rest or its sky level has not been 
subtracted properly).
 Because this operator fills holes between outlying pixels, most of the area of 
the input will be masked, but the thin edges (where there are no ``holes'') 
will remain, causing different statistics in those thin edges of that input in 
your final stack.
 Through this mask coverage fraction (which is currently 
hard-coded@footnote{Please get in touch with us at @code{bug-gnuastro@@gnu.org} 
if you notice this problem and feel the fraction needs to be lowered (or 
generally to be set in each run).}), we ensure that such thin edges do not 
cause artifacts in the final stack.
 
@@ -22117,7 +22126,7 @@ For example, you have taken ten exposures of your 
scientific target, and you wou
 
 @cartouche
 @noindent
-@strong{Masking outliers (before stacking):} Outliers in one of the inputs 
(for example star ghosts, satellite trails, or cosmic rays) can leave their 
inprints in the final stack.
+@strong{Masking outliers (before stacking):} Outliers in one of the inputs 
(for example star ghosts, satellite trails, or cosmic rays) can leave their 
imprints in the final stack.
 One good way to remove them is the @code{madclip-maskfilled} operator that can 
be called before the operators here.
 It is described in @ref{Statistical operators}; and a full tutorial on 
understanding outliers and how best to remove them is available in 
@ref{Clipping outliers}.
 @end cartouche
@@ -22755,7 +22764,7 @@ For a more general tutorial on rejecting outliers, see 
@ref{Clipping outliers}.
 If you have not done this tutorial yet, we recommend you to take an hour or so 
and go through that tutorial for optimal understanding and results.
 
 When more than 95@mymath{\%} of the area of an operand is masked, the full 
operand will be masked.
-This is necesasry in cases like this: one of your inputs has many outliers (for 
example it is much more noisy than the rest or its sky level has not been 
subtracted properly).
+This is necessary in cases like this: one of your inputs has many outliers (for 
example it is much more noisy than the rest or its sky level has not been 
subtracted properly).
 Because this operator fills holes between outlying pixels, most of the area of 
the input will be masked, but the thin edges (where there are no ``holes'') 
will remain, causing different statistics in those thin edges of that input in 
your final stack.
 Through this mask coverage fraction (which is currently 
hard-coded@footnote{Please get in touch with us at @code{bug-gnuastro@@gnu.org} 
if you notice this problem and feel the fraction needs to be lowered (or 
generally to be set in each run).}), we ensure that such thin edges do not 
cause artifacts in the final stack.
 
@@ -23358,7 +23367,7 @@ Similar to @code{mknoise-sigma-from-mean}, it takes two 
operands:
 2. The second popped operand is the dataset that the noise should be added to.
 
 To demonstrate this noise pattern, let's use @code{mknoise-poisson} in the 
example of the description of @code{mknoise-sigma-from-mean} with the first 
command below.
-The second command below will show you the two images side-by-side; you will 
notice that the Poisson distribution's un-detected regions are slightly darker 
(this is because of the skewness of the Poisson distribution).
+The second command below will show you the two images side-by-side; you will 
notice that the Poisson distribution's undetected regions are slightly darker 
(this is because of the skewness of the Poisson distribution).
 Finally, with the last two commands, you can see the histograms of the two 
distributions:
 
 @example
@@ -24101,7 +24110,7 @@ $ astarithmetic image.fits index swap --writeall \
 Add N copies of the second popped operand to the stack of operands.
 N is the first popped operand.
 For example, let's assume @file{image.fits} is a @mymath{100\times100} image.
-The output of the command below will be a 3D datacube of size 
@mymath{100\times100\times20} voxels (volume-pixels):
+The output of the command below will be a 3D data cube of size 
@mymath{100\times100\times20} voxels (volume-pixels):
 
 @example
 $ astarithmetic image.fits 20 repeat 20 add-dimension-slow
@@ -27419,7 +27428,7 @@ However, if the distribution is concentrated around the 
median, the spacing betw
 Therefore, when we divide the width by the quantile difference, the value will 
be larger than one.
 
 The example commands below create two randomly distributed ``noisy'' images, 
one with a Gaussian distribution and one with a uniform distribution.
-We will then run this option on both to see the different 
concentrations@footnote{The values you get will be slightly different because 
of the different random seeeds.
+We will then run this option on both to see the different 
concentrations@footnote{The values you get will be slightly different because 
of the different random seeds.
 To get a reproducible result, see @ref{Generating random numbers}.}.
 See @ref{Generating histograms and cumulative frequency plots} on how you can 
generate the histogram of these two images on the command-line to visualize the 
distribution.
 
@@ -28116,9 +28125,9 @@ $ astnoisechisel --help | grep check
 For more, see @ref{Quantifying signal in a tile}.
 @end cartouche
 
-When working on 3D datacubes, the tessellation options need three values and 
updating them every time can be annoying/buggy.
+When working on 3D data cubes, the tessellation options need three values and 
updating them every time can be annoying/buggy.
 To simplify the job, NoiseChisel also installs a @file{astnoisechisel-3d.conf} 
configuration file (see @ref{Configuration files}).
-You can use this for default values on datacubes.
+You can use this for default values on data cubes.
 For example, if you installed Gnuastro with the prefix @file{/usr/local} (the 
default location, see @ref{Installation directory}), you can benefit from this 
configuration file by running NoiseChisel like the example below.
 
 @example
@@ -28426,12 +28435,12 @@ Therefore by expanding those holes, we are able to 
separate the regions harborin
 @item --erodengb=INT
 The type of neighborhood (structuring element) used in erosion, see 
@option{--erode} for an explanation on erosion.
 If the input is a 2D image, only two integer values are acceptable: 4 or 8.
-For a 3D input datacube, the acceptable values are: 6, 18 and 26.
+For a 3D input data cube, the acceptable values are: 6, 18 and 26.
 
 In 2D 4-connectivity, the neighbors of a pixel are defined as the four pixels 
on the top, bottom, right and left of a pixel that share an edge with it.
 The 8-connected neighbors on the other hand include the 4-connected neighbors 
along with the other 4 pixels that share a corner with this pixel.
 See Figure 6 (a) and (b) in Akhlaghi and Ichikawa (2015) for a demonstration.
-A similar argument applies to 3D datacubes.
+A similar argument applies to 3D data cubes.
 
 @item --noerodequant
 Pure erosion is going to carve off sharp and small objects completely out of 
the detected regions.
@@ -31160,7 +31169,7 @@ By default (when @option{--spectrum} or @option{--clumpscat} are not called) onl
 if @option{--clumpscat} is called, a secondary catalog/table will also be created for ``clumps'' (one of the outputs of the Segment program, for more on ``objects'' and ``clumps'', see @ref{Segment}).
 In short, if you only have one labeled image, you do not have to worry about clumps and just ignore this.
 @item
-When @option{--spectrum} is called, it is not mandatory to specify any single-valued measurement columns. In this case, the output will only be the spectra of each labeled region within a 3D datacube.
+When @option{--spectrum} is called, it is not mandatory to specify any single-valued measurement columns. In this case, the output will only be the spectra of each labeled region within a 3D data cube.
 For more, see the description of @option{--spectrum} in @ref{MakeCatalog measurements}.
 @end itemize
 
@@ -33306,7 +33315,7 @@ The angular diameter distance to an object at a given redshift in Megaparsecs (M
 
 @item -s
 @itemx --arcsectandist
-The tangential distance covered by 1 arc-second at a given redshift in physical (not comoving) kiloparsecs (kpc).
+The tangential distance covered by 1 arc-second at a given redshift in physical (not comoving) kilo-parsecs (kpc).
 This can be useful when trying to estimate the resolution or pixel scale of an instrument (usually in units of arc-seconds) required for a galaxy of a given physical size at a given redshift.
 
 For an arc subtending one degree at a high redshift @mymath{z}, multiplying the result by 3600 will, of course, give the (tangential) length of an arc subtending one degree, but it will still be in physical units.
@@ -33330,7 +33339,7 @@ Once the apparent magnitude and redshift of an object is known, this value may b
 
 @item -g
 @itemx --age
-Age of the universe at given redshift in Ga (Giga annum, or billion years).
+Age of the universe at given redshift in Ga (Giga-annum, or billion years).
 
 @item -b
 @itemx --lookbacktime
@@ -33980,7 +33989,7 @@ A polar plot is a projection of the original pixel grid into polar coordinates (
 By default it assumes the full azimuthal range (from 0 to 360 degrees); if a narrower azimuthal range is desired, use @option{--azimuth} (for example @option{--azimuth=30,50} to only generate the polar plot between 30 and 50 degrees of azimuth).
 
 The output image contains WCS information to map the pixel coordinates into the polar coordinates.
-This is especially useful when the azimuthal range is not the full range: the first pixel in the horizonal axis is not 0 degrees.
+This is especially useful when the azimuthal range is not the full range: the first pixel in the horizontal axis is not 0 degrees.
 
 Currently, the polar plot cannot be used with the @option{--oversample} and @option{--undersample} options (please get in touch with us if you need it).
 Until it is implemented, you can use the @option{--scale} option of @ref{Warp} to do the oversampling of the input image yourself and generate the polar plot from that.
@@ -36138,7 +36147,7 @@ But when you have hundreds/thousands of sub-sub-components, your computer may not hav
 In such cases, you want the sub-components to be built in series, but the sub-sub-components of each sub-component to be built in parallel.
 This function allows just this in an easy manner as below: the sub-sub-components of each sub-component depend on the previous sub-component.
 
-To see the effect of this function put the example below in a @file{Makefile} and run @code{make -j12} (to simultaneously exectute 12 jobs); then comment/remove this function (so there is no prerequisite in @code{$(subsubs)}) and re-run @code{make -j12}.
+To see the effect of this function put the example below in a @file{Makefile} and run @code{make -j12} (to simultaneously execute 12 jobs); then comment/remove this function (so there is no prerequisite in @code{$(subsubs)}) and re-run @code{make -j12}.
 
 @example
 # Basic settings
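[Editorial note: the Makefile example is truncated by the diff hunk above. The same scheduling pattern (sub-components built in series, the sub-sub-components of each built in parallel) can be sketched outside of Make; this Python illustration uses hypothetical target names and is not Gnuastro's implementation.]

```python
from concurrent.futures import ThreadPoolExecutor

def build(target):
    # Stand-in for the recipe that builds one sub-sub-component.
    return "built " + target

# Hypothetical layout: two sub-components, each with its own
# sub-sub-components (names are made up for the example).
subcomponents = {
    "sub1": ["sub1/a", "sub1/b", "sub1/c"],
    "sub2": ["sub2/a", "sub2/b"],
}

results = []
with ThreadPoolExecutor(max_workers=12) as pool:   # like 'make -j12'
    # Each loop iteration waits for its pool.map to finish before the
    # next sub-component starts: series across sub-components,
    # parallel within each one.
    for sub, subsubs in subcomponents.items():
        results.extend(pool.map(build, subsubs))

print(len(results))
```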
@@ -36238,7 +36247,7 @@ The number of files in each batch is calculated internally by reading the availa
 Therefore this function is more generalizable to different computers (with very different RAM and/or CPU threads).
 But to avoid overlapping with other rules that may consume a lot of RAM, it is better to design your Makefile such that other rules are only executed once all instances of this rule have been completed.
 
-For example, assume evey instance of one rule in your Makefile requires a maximum of 5.2 GB of RAM during its execution, and your computer has 32 GB of RAM and 2 threads.
+For example, assume every instance of one rule in your Makefile requires a maximum of 5.2 GB of RAM during its execution, and your computer has 32 GB of RAM and 2 threads.
 In this case, you do not need to manage the targets at all: at the worst moment your pipeline will consume 10.4 GB of RAM (much smaller than the 32 GB of RAM that you have).
 However, suppose you later run the same pipeline on another machine with identical RAM, but 12 threads!
 In this case, you will need @mymath{5.2\times12=62.4} GB of RAM; but the new system does not have that much RAM, causing your pipeline to crash.
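[Editorial note: the arithmetic in the paragraph above reduces to capping the number of simultaneous jobs by both the thread count and the available RAM. This is only a sketch of that calculation, not Gnuastro's internal implementation, which reads the available RAM at run time.]

```python
def max_concurrent_jobs(ram_gb, per_job_gb, threads):
    """Largest number of jobs that can run at once: limited by the
    number of threads and by how many jobs fit in RAM."""
    by_ram = int(ram_gb // per_job_gb)
    return max(1, min(threads, by_ram))

# Scenario from the text: each job needs at most 5.2 GB, 32 GB of RAM.
print(max_concurrent_jobs(32, 5.2, 2))    # thread-limited: 2 jobs, 10.4 GB
print(max_concurrent_jobs(32, 5.2, 12))   # RAM-limited: 6 jobs, not 12
```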
@@ -43233,7 +43242,7 @@ When a histogram is given and it is normalized, the CFP will also be normalized
 Return the concentration around the median for the input distribution.
 For more on the algorithm and @code{width}, see the description of @option{--concentration} in @ref{Single value measurements}.
 
-If @code{inplace!=0}, then this function will use the actual input data's allocated space and not internally allocate a new dataset (which can have memory and CPU benefits); but will alter (sort and remove blank elements from) your input dataset.
+If @code{inplace!=0}, then this function will use the actual allocated space of the input data and will not internally allocate a new dataset (which can have memory and CPU benefits); but will alter (sort and remove blank elements from) your input dataset.
 @end deftypefun
 
 


