
From: Joshua N Pritikin
Subject: [Aleader-dev] Re: direction
Date: Tue, 5 Aug 2003 15:51:04 +0530
User-agent: Mutt/1.4i

[I am aware of your limited time.  I don't expect you to acknowledge
everything I wrote in this email.  I am keeping track of any issues
which I feel are unresolved so I can feed them back to you later.

For example, over the last few days I have made an effort to better
separate the science & philosophy portions of Aleader.  Eventually
(not now!) I want to revisit whether the new organization hides
the philosophy sufficiently well.]

On Mon, Aug 04, 2003 at 08:49:53PM -0500, William L. Jarrold wrote:
> I am finally attaching my comments on the tutorial.  The way
> they are written is somewhat haphazard.  At first I only
> marked new stuff by commenting it out with % and putting
> WLJ at the start of it.  Then I did a big comment blurb with
> WLJ.  Might be best to respond to that via email discussion
> rather than attachment.  Then I went back and changed a few
> lines of the original text without adding any WLJ indicators, figuring
> you can use a diff-like tool to find 'em easily.

The best way is just to change stuff without putting in any %WLJ
marker.  Diff (or emacs's emerge) works great.

I see a lot of uncontroversial edits, such as:

----
@@ -119,6 +139,8 @@
 
 @item An @code{Emotion Search} window appears.
 
+%WLJ put, "In that window" immediately before "Type" below.
+
 @item Type the letters @code{cele} into the @code{Sample} entry box.
 
 @item As you type, the list will change to show only those emotions
----

It is best if you simply commit these changes to CVS, but it's no big
deal.  I can also do the commit.

> My main belief is that
> there may be some robustness to your categories, but not much.  Perhaps
> the following will make the point..."Arto Chirps a Reply" is slightly
> different.  Different from what?  I forget exactly; my written notes
> (made a few days ago) are unclear.  But it is probably different from
> all or most of the other "Celebrate Presence" clips.

Frankly, I'm surprised to read this because I clearly stated in
the tutorial:

  The second row which says ``Artoo chirps a reply'' is dim gray
  (this indicates that the example may be weak or incorrectly classified).

In other words, you should have ignored the examples which are
shown in dim gray.  (Oops, now that's a bug.  Why do I show
these situations at all?  I'll fix that.  Sorry for the confusion.)
OK, let's move on ...

> Both "Master Yupas" are distinct from one another...One involves
> a set of collegial regard and the other is more like, yay, papa is back.
> The former emotions depicted are more adult in tone, the latter more
> childlike in tone.

Agreed.

In any classification scheme, some variation is unavoidable.
The challenge is to see how well we can minimize the variation.

In this particular case, I don't have any idea how to do better.
Subjectively, I find both "Master Yupa"s similar enough.  I do not
feel urgent motivation to further distinguish them.

> o I did not really get past a thorough checkout of "celebrate
> presence", i.e. Item #31.  However, I did a random leap and ended up
> looking at "You think I'm afraid of you, big fuck?" from Good Will
> Hunting.  This appeared to be classified as admiration (there was a box
> checked next to "[+] admires [0].").

Perhaps it seems strange to classify this situation as "admires"?
"Admires" is actually a _general_ category.

This situation also classifies to the _specific_ category
"haughty / arrogant" (which is a sub-type of admires).

One more comment: I have not put much effort into translating
Aleader's affective assessments into idiomatic, common-sense
English.  This is one of the many reasons why I request people
to stop reading my explanation and start watching film clips.

> okay, well, basically I think I get it.   ...
> ... there is some similarity.  Sure, there is a happy feeling
> felt by certain characters in all of those scenes.
> ...
> So far, the chief value of your system seems to me to be its ability
> to provide a vivid and reliable depiction of emotions.

Great!  Now I won't have to run around in circles trying to verbally
explain what you have now seen & quickly absorbed first-hand.

If we publish an article, I guess we should "strongly recommend"
that readers try out the CD?

For those readers who don't try the CD, how much of an attempt
should we make to verbally explain what is on the CD?  Perhaps
such a description should go in an appendix or something?

> Maybe, if you replace your categories with OCC categories and/or Roseman
> categories you will get more attention from the academic community.  

I certainly want to compare/contrast with OCC & Roseman.  In fact, I
got a copy of Roseman96 today.  Sorry it took so long.  I am looking
forward to reading it.  However, I am not willing to dump the Aleader
model.  Perhaps I am stubborn or irrational about this, but I continue
to believe that Aleader offers a more precise affective model than any
other existing model.  Obviously, this belief does not rest on being
well-read.  It rests on a long inner struggle and deep introspection.
I will offer some comments on Roseman soon though, in any case.

In your first email, you suggested that I narrow the scope of my
research.  I agree that I am trying to do something "too big."
However, I don't see a problem with that.  I don't have a deadline.
We can proceed in small, manageable steps.  Be involved as much as
you want.  I appreciate your feedback very much.

> One important
> question is: is there more heterogeneity between categories than within
> categories?  One way you could test this is to ask people to view pairs
> of scenes and rate their affective similarity on a scale of 1-5.

That's a great idea!  Why didn't I think of that?  Here is an instance
where your broad awareness of the field of cognitive psychology really
helps me out.

I have already done most of the work to automate this type of test.
I will get busy and make a few more preparatory changes to the software.
Perhaps within a week, we'll be ready (software-wise) to get started
testing human subjects.

> o It would be interesting to apply different types of text understanding
> and/or statistical NLP to the scripts you have provided.  If its
> inferences could be sync'd up to different film clips and compared to
> the inferences of humans watching (reading?) the same film (script),
> now we are getting something quite interesting.  Specifically, by
> comparing affective judgements in each of these three conditions.

That does sound very exciting.  It also sounds like something that
requires big databases and more computing power than I have at
present.

On the other hand, I do want to collect these more ambitious research
ideas.  Who knows, semi-automated emotional analysis of complete films
may be of practical value to Hollywood studios.

Personally I am happy to begin with an investigation of the question
you raise above, "is there more heterogeneity between categories than
within categories?".  The test will require little effort to
administer.  Statistical analysis is straightforward.  There is lots
of precedent in the published literature (can you suggest a
particularly good article which I can use as a model?).  We could even
limit the scope of the test to the 10 easy categories in the "getting
started" guide (maybe, this choice has pros & cons).  It should be
relatively easy.  What do you think?
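To make the analysis concrete, here is a minimal sketch of the comparison I have in mind.  This is illustrative code only, not part of Aleader: the ratings are made-up placeholders on the 1-5 scale, and the choice of Welch's t-test is my assumption about what "straightforward statistical analysis" would look like here.

```python
# Sketch: compare mean pairwise similarity ratings for clip pairs drawn
# from the SAME emotion category against pairs drawn from DIFFERENT
# categories.  If between-category heterogeneity exceeds within-category
# heterogeneity, within-pair ratings should be reliably higher.
from statistics import mean, stdev
from math import sqrt

# Hypothetical data: each number is one subject's 1-5 similarity rating
# for one pair of film clips.
within_category = [4, 5, 4, 3, 5, 4, 4, 5, 3, 4]   # both clips, same category
between_category = [2, 3, 1, 2, 3, 2, 1, 2, 3, 2]  # clips from different categories

def welch_t(a, b):
    """Welch's two-sample t statistic (does not assume equal variances)."""
    var_a, var_b = stdev(a) ** 2, stdev(b) ** 2
    return (mean(a) - mean(b)) / sqrt(var_a / len(a) + var_b / len(b))

print(f"mean within-category similarity:  {mean(within_category):.2f}")
print(f"mean between-category similarity: {mean(between_category):.2f}")
print(f"Welch t = {welch_t(within_category, between_category):.2f}")
```

A large positive t would support the claim that the categories are coherent; a real study would of course need counterbalanced pair selection and enough subjects per cell.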

A simple article like this may be a necessary precursor to a big
NLP & inferencing project anyway.  Isn't it?

> I am still quite unfamiliar with your theory.

Even so, I want to decide a course of action.  Unless you come up with
something better, I am going to plan my time according to the aim of
publishing an article addressing the question: "is there more
heterogeneity between categories than within categories?"

I guess I need to write a research proposal now?  Perhaps a one-page
outline giving an overview of what we have, what we want to do with
it, and what resources we need?

I hope you can guide me through the process and perhaps help with
corralling human subjects.  I _may_ be able to find English-speaking
human subjects here in Nashik too.  Perhaps it would add something to
use human subjects from two different countries?

I hope you find time to explore Aleader in a bit more depth.  Here are
my predictions about what you will find:

+ The categories are more robust than you thought initially.

+ You will appreciate the elegance and computability of the model.

+ You will see how the model can be extended or scaled up.

+ Simultaneous with your growing appreciation of the model, you will
begin to feel that most people will need some training to go beyond
the ten easy categories.  I think this is a rather sticky problem.
I look forward to hearing your opinion about it.

-- 
.. Sensual .. Perceptual .. Cognitive .. Emotional .. Oh My!



