[Gzz-commits] gzz/Documentation/Manuscripts/Paper paper.tex p...
From: Janne V. Kujala
Subject: [Gzz-commits] gzz/Documentation/Manuscripts/Paper paper.tex p...
Date: Mon, 25 Nov 2002 11:37:03 -0500
CVSROOT: /cvsroot/gzz
Module name: gzz
Changes by: Janne V. Kujala <address@hidden> 02/11/25 11:37:03
Modified files:
Documentation/Manuscripts/Paper: paper.tex perceptual-model.fig
Log message:
unique backgrounds
CVSWeb URLs:
http://savannah.gnu.org/cgi-bin/viewcvs/gzz/gzz/Documentation/Manuscripts/Paper/paper.tex.diff?tr1=1.67&tr2=1.68&r1=text&r2=text
http://savannah.gnu.org/cgi-bin/viewcvs/gzz/gzz/Documentation/Manuscripts/Paper/perceptual-model.fig.diff?tr1=1.2&tr2=1.3&r1=text&r2=text
Patches:
Index: gzz/Documentation/Manuscripts/Paper/paper.tex
diff -u gzz/Documentation/Manuscripts/Paper/paper.tex:1.67
gzz/Documentation/Manuscripts/Paper/paper.tex:1.68
--- gzz/Documentation/Manuscripts/Paper/paper.tex:1.67 Mon Nov 25 06:17:24 2002
+++ gzz/Documentation/Manuscripts/Paper/paper.tex Mon Nov 25 11:37:02 2002
@@ -241,11 +241,7 @@
(example: the adventure game Colossal Cave Adventure: ``you are in a maze
of twisty little passages, all alike'')
-Features orthogonal to human perception (e.g.~color, direction of fastest luminance change)
-should be independently random, and features not orthogonal (e.g. colors of neighbouring
-pixels.)...
-
-...
+... [PDF application] ... % XXX
When the view is focused on one part of a document,
context is provided by fragments of the documents that the
part in focus is connected to.
@@ -258,8 +254,7 @@
identity of the focused and connected documents and
a more prominent target for tracking movement between views.
-% XXX:
-Additionally, black text should have good contrast with the background.
+% XXX
The identity is used as a seed for randomly choosing
an easily distinguishable unique background from a
@@ -267,16 +262,17 @@
%providing an infinite source of unique backgrounds.
%generating textures based on seed numbers [identity]
The basic assumption of the model is that an image
-is perceived as a set of features.
+is perceived as a set of features (see Fig.~\ref{fig-perceptual}).
-Current knowledge of visual perception \cite{bruce96visualperception}
+Current knowledge of visual perception (see, e.g.~\cite{bruce96visualperception})
explains early visual processing very accurately.
In visual cortex, there are cells sensitive to different
frequencies, orientations, and locations in the visual field.
-A good mathematical model for the sensitivity of the receptive fields is
+A good mathematical model for the excitatory and inhibitory
+sensitivities of the receptive fields is
the Gabor function, i.e., a two-dimensional Gaussian-modulated sinusoid.
-On a higher level, correlating local feature are combined
+On a higher level, correlated local features are combined
into a global perception.
For example, contours are formed from consistent directions
of adjacent receptive fields and different objects are
@@ -285,22 +281,31 @@
and not thoroughly understood.
Theories of structural object perception (e.g. \cite{biederman87})
propose certain primitive shapes whose structure facilitates recognition.
-
We simply assume that the intensities of different features,
such as local and global shapes and colors, form a \emph{feature vector},
which facilitates recognition and memorization of images.
+% The structure of the features is assumed to be irrelevant.
For the backgrounds to be distinguishable, they should produce
-distinct, random feature vectors in brain.
+distinct feature vectors in the brain.
+To achieve this, the model should maximize the entropy of the feature vector.
+%We call this the principle of saving bits.
+Features orthogonal to human perception
+(e.g.~color, direction of fastest luminance change)
+should be independently random, and features not orthogonal
+(e.g. colors of neighbouring pixels)
+should be correlated so as to maximize the entropy of the set
+of the connected features
+(e.g. pixels on a small area should correlate enough to
+facilitate perception of contours).
In a sense, the perception model should invert the
visual processing to produce a unique background from
-a random vector seeded by the identity (see Fig.~\ref{fig-perceptual}).
-%We call this the principle of saving bits.
+a random vector seeded by the identity.
%distinguishability: should produce random vector in brain
% (perception model in Fig.~\ref{fig-perceptual}) -- saving of bits
-\begin{figure}
+\begin{figure}[h]
\centering
%\fbox{\vbox{\vskip 3in}}
\includegraphics{perceptual-model.eps}
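The pipeline described above — a document identity seeding a random vector, which is then mapped to perceptual texture features such as Gabor orientations and frequencies — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the parameter names, ranges, and the use of SHA-256 as the identity hash are all assumptions; only the Gabor function itself follows the two-dimensional Gaussian-modulated-sinusoid form stated in the text.

```python
import hashlib
import math
import random

def background_params(identity: str) -> dict:
    """Hypothetical sketch: derive unique-background texture parameters
    from a document identity.  Parameter names and ranges are invented
    for illustration."""
    # Hash the identity so that similar identities yield unrelated seeds,
    # then use the seed deterministically: same document, same background.
    seed = int.from_bytes(hashlib.sha256(identity.encode()).digest()[:8], "big")
    rng = random.Random(seed)
    # Perceptually orthogonal features are drawn independently at random.
    return {
        "orientation": rng.uniform(0.0, math.pi),  # Gabor orientation (radians)
        "frequency": rng.uniform(2.0, 8.0),        # cycles per texture tile
        "hue": rng.uniform(0.0, 1.0),              # base color of the texture
    }

def gabor(x: float, y: float, theta: float, freq: float, sigma: float = 1.0) -> float:
    """Gabor function: a two-dimensional Gaussian-modulated sinusoid."""
    # Coordinate rotated to the filter's preferred orientation.
    xr = x * math.cos(theta) + y * math.sin(theta)
    envelope = math.exp(-(x * x + y * y) / (2.0 * sigma * sigma))
    return envelope * math.cos(2.0 * math.pi * freq * xr)
```

Seeding a deterministic generator from the identity gives the property the paper relies on: the background is random-looking across documents yet stable for any one document.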
@@ -323,6 +328,8 @@
if all circles were green and all squares yellow, a considerable number of
bits would be wasted.
+--- XXX: % XXX
+
To understand why it is possible to learn to discriminate particular
backgrounds easily, consider the task of learning {\em one} background texture.
This is a two-class problem.
@@ -368,6 +375,9 @@
Of course, methods such as the ones presented in \cite{furnas00infinity}
could be used to allow the unique background to look similar at different
scales; however, this would remove the use of the texture as a cue of scale.
+
+% XXX:
+Additionally, black text should have good contrast with the background.
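One concrete way to quantify the added requirement that black text have good contrast with the background is the WCAG relative-luminance and contrast-ratio formulas. Using WCAG here is an assumption for illustration — the paper does not name a contrast criterion.

```python
def relative_luminance(rgb):
    """WCAG 2.x relative luminance of an sRGB color with channels in [0, 1]."""
    def lin(c):
        # Undo the sRGB gamma so luminance is computed in linear light.
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (lin(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """WCAG contrast ratio between two colors, ranging from 1:1 to 21:1."""
    lighter, darker = sorted(
        (relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)
```

A background generator could, for example, reject or lighten candidate textures whose contrast ratio against black falls below the WCAG body-text threshold of 4.5:1.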
\section{Hardware-accelerated implementation}
Index: gzz/Documentation/Manuscripts/Paper/perceptual-model.fig
diff -u gzz/Documentation/Manuscripts/Paper/perceptual-model.fig:1.2
gzz/Documentation/Manuscripts/Paper/perceptual-model.fig:1.3
--- gzz/Documentation/Manuscripts/Paper/perceptual-model.fig:1.2 Mon Nov 25 08:11:26 2002
+++ gzz/Documentation/Manuscripts/Paper/perceptual-model.fig Mon Nov 25 11:37:03 2002
@@ -21,7 +21,7 @@
6 360 1800 1890 2160
2 2 0 1 0 0 51 0 -1 0.000 0 0 -1 0 0 5
360 1800 1890 1800 1890 2160 360 2160 360 1800
-4 0 0 51 0 0 12 0.0000 4 135 1335 450 2025 Feature detection\001
+4 0 0 51 0 0 12 0.0000 4 135 1335 450 2025 Feature extraction\001
-6
6 225 2430 2025 5670
6 495 4185 1800 4500