
[Aleader-dev] Re: busy busy


From: William L. Jarrold
Subject: [Aleader-dev] Re: busy busy
Date: Sat, 16 Aug 2003 18:22:13 -0500 (CDT)

Shoot, pine tells me I responded to this but I don't think I did.
So I'm going to respond again...

On Fri, 25 Jul 2003, Joshua N Pritikin wrote:

> [I am CC'ing to aleader-dev because other folks might like to
> read this and I want a permanent record.  You'll have to tell
> me explicitly if you don't want something broadcast through
> a mailing list.]

Okay.

>
> On Fri, Jul 25, 2003 at 12:20:54AM -0500, William L. Jarrold wrote:
> > My life has been and probably always will be frantic and behind.
> > ... I am
> > hoping beyond all hope to get serious about responding to your emails
> > over the weekend.  I took on a neuropsych assessment today that
> > will spill over into tomorrow (Friday).
> > .. I'll give you some random thoughts that have been
> > on my mind...
> >
> > Here's one question...You seem to say there is only one objective
> > emotional response to a given situation.  You don't seriously
> > believe this do you?
>
> No, but ...

Okay, it is very important to make this clear in your writings.

>
> > For starters, some situations make one feel happy and sad.  There
> > is the idea of mixed emotions.
>
> Yah, yah.  With this idea of "one correct objective emotional
> response," I am aiming to make the analysis more tractable and
> postpone generativity.  Here's how:
>
> I ask people to narrow down on the emotion of shortest duration
> ("duration" with respect to time) and ignore emotions which have
> longer duration.  This actually works pretty well.  I'd estimate
> that about 70-80% of film simplifies nicely this way.  However,
> there are cases when there is still more than one emotion going
> on at the same time even after narrowing down on the immediate
> emotion.  For example:
>
>   What if I am struggling against a pick-pocket?  Both
>   participants are trying to take initiative.  This must
>   produce two different emotions (one for each person)
>   because initiative is one of the key-questions of the
>   classification scheme.
>
> Actually there are quite a few different kinds of exceptional
> circumstances.  The pick-pocket is easy to explain.  I have
> developed some rules describing how to classify exceptional
> circumstances.  Using these rules, I estimate that 95-98% of a
> film can be classified unambiguously.  This is how Aleader is
> used to build a library of narrow, unambiguous emotion
> classifications.

The above is a little unclear.  In your writings, you should list a few
(2-3) of the exceptional cases.  One trivial case of multiple emotions is
when there is more than one emoter in a given film clip.  List a few other
exceptional cases delineated by your theory and provide examples; spell it
all out in text.  Make up a story if you need to, but the best examples
are "real", i.e. come from someone other than yourself -- e.g. the script
of some film.
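For what it's worth, the narrowing rule Joshua describes above (keep only
the emotion of shortest duration, one per emoter) can be sketched in a few
lines of Python.  The names here (`Emotion`, `narrow`) are my own
illustration, not anything from the Aleader codebase:

```python
# Hypothetical sketch of the "shortest duration wins" narrowing rule.
# All names and data are illustrative, not from Aleader.
from dataclasses import dataclass

@dataclass
class Emotion:
    emoter: str       # who is feeling it
    label: str        # e.g. "fear", "joy"
    duration: float   # seconds the feeling lasts

def narrow(emotions):
    """Keep only the shortest-duration emotion for each emoter.

    This models the rule of ignoring longer-lived background
    emotions and classifying the immediate one.  Note that a
    pick-pocket scene still yields two results, one per emoter.
    """
    shortest = {}
    for e in emotions:
        cur = shortest.get(e.emoter)
        if cur is None or e.duration < cur.duration:
            shortest[e.emoter] = e
    return list(shortest.values())

scene = [
    Emotion("victim", "contentment", 600.0),     # background mood
    Emotion("victim", "alarm", 2.0),             # immediate emotion
    Emotion("pickpocket", "determination", 5.0),
]
print(narrow(scene))  # one emotion per emoter
```

The pick-pocket exception falls out naturally: narrowing by duration never
merges two emoters, so that scene still produces two classifications.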

>
> Now we can try to scale the model up to see how well it works
> in the general case.  I have made some steps in this direction
> by introducing an explicit notation for tracking conversation.
> However, I am not sure how much more the model can expand
> without being forced to admit multiple correct interpretations
> (a.k.a. generativity).  At some point, generativity
> comes back again because there just isn't a single correct
> way to understand a complex layered emotional story.

Hrmmm...You are making me think about generativity in a new way, a little
more deeply....I think when a person emotes (i.e. experiences an affective
state) they are experiencing a single thing.  Generativity comes in when
we are building a theory and when we humans are mindreading.

First let me talk about theories and AIs....Theories and AIs are
generally oversimplified.  The stuff they create when they attempt to
predict another's emotion is not as rich, complex, varied, creative
and nuanced as what real humans do...We desire our models (i.e. our
theories and our AI systems) to be generative because that makes them
more believable, more lifelike, more flexible and deeper.

Okay, now let me talk about humans...So, I suppose that the wisest
affective mindreader would come up with a very precise unitary
description of the feelings of the mindreadee...However, humans of
non-extraordinary emotional intelligence need to not jump to conclusions
too quickly.  We must be open to the fundamental unknowability of the
Other's experience.  We must resist the hubris of believing we know their
state exactly when we do not.  To fall prey to this error is to commit
the cognitive distortion that Beck refers to as Mindreading.  Yes, only
God knows with absolute certainty what a given emoter is feeling.  On the
other hand, a mentally retarded person with autism probably lacks any kind
of decent ability to mindread.  So there is the type of mindreading needed
to be a normal functioning human and there is Beck's pathological
mindreading.  When does normal mindreading become pathological?  Hrmmm,
good question.  A nice messy and difficult issue needing sussing out!  One
of many such issues that plagues the vague and touchy-feely field of
counseling psychology.  An issue over which I throw up my hands and which
I want to avoid.  Too hard!  Too many prickly counseling psychology
personalities who go non-linear when you engage them in debate on the way
to the truth.  (Don't give in to my cowardice.)

Finally there is the issue of training software and emotional
intelligence.  I suppose that some of us have a very keen sense of
cognitive empathy yet are not overly prideful, not overly at risk of being
Beck's pathological mindreader.  Although we must avoid the hubris of
Beck's mindreader, we must also avoid mental retardation!  We must help
affectively challenged Asperger's individuals and the like.  I suspect
that if the appropriate caveats against pathological mindreading are
given, then we can move forward with the idea that it is
useful to come up with a right and a wrong regarding cognitive empathy.
So, deep down, I think I am on your side RE training software that INSISTS
on the ONE CORRECT OBJECTIVE reading of film clips.  But I want to
carefully delimit this as not applying to all circumstances or contexts.
In the context of training, of deepening our cognitive empathy, our
ability to read facial expressions etc., THEN it DOES make sense to
talk about such OBJECTIVE, unitary or repeatable judgments.  But this
skill must not be used in the service of making us more rigid, inflexible
or unidimensional.

Phew, it was good to get that out.  I hope it makes sense.  It seems
important to me.  I hope you and I both try to grok it fully.  I suggest
stuffing it into the appropriate part of your writings, documentation,
whatever.  In time, it can be cleaned up and also fleshed out with real
examples.  This year, while I am on internship dealing with the trials and
tribulations of poor little special ed kiddos (some of whom are socially
challenged) at Bridgepoint Elementary, I will have excellent real-world
grist for this mill.  Please keep pressing me for details.  Such
real-world grist is make or break for the real success of this project.

>
> On the other hand, please recognize that if we stick with
> Aleader's simple, narrow view of emotion then we still
> get useful classifications ("useful" meaning that the
> classifications subjectively feel correct) without
> resorting to generativity.

Well, I guess I disagree.  I think that while building a useful,
believable, scalable AI model we will go through a phase of more and more
generativity.  This will be progress.  Over time, the plethora of
constructions produced will winnow down as the system gets wiser....Oh
shut up Bill, how can you predict the future of how the model will
develop over time!  (Sorry, hope you don't mind me yelling at meself
while I write.)

This is not to say that a useful educational tool that forces the student
to pick the ONE BEST TRUE interpretation of a given scene cannot be
built.  However, if there is any AI inside such a tool, it will be
trivial, a pale shadow of a real model.

Alas, given that you and I have finite time, the AI goal and the
educational-software goal seem to be in tension....But fear not, there may
be sweet spots where both goals combine....To brainstorm: scenario
generation is one such area where AI and educational software may meet.

>
> I am curious to know how much of Ortony works without
> generativity, and how much of a role generativity plays
> in the full version of Ortony.

Ah.  I can email you a paper in which Ortony mentions generativity.  It
was a 2000 or 2001 paper.  No mention, as I recall, is made of G in OCC.
Make sure I track that down and send it to you.  Nonetheless, I think I
make a much bigger deal about G than he does....Now to your question: I
think emotional intelligence requires generativity.  Okay, fine, say we've
got Joe the Affective Sage here.  Sure, Joe will watch "Good Will Hunting"
and like totally peg the single emotion being depicted by each character
in every single frame.  BUT, Joe will also be able to generate
scaddzillions of alternatives.  He will mull over a few difficult scenes
in his mind..."hmm, those tears could be joy but it could be anguish
too...or maybe it is just the physical pain...or the tiredness
or..."...Thus, to answer your question, in my opinion, OCC will not work
w/o G.  G is essential to a robust, deep, believable implementation of
OCC.

>
> Perhaps the Ortony model can be stacked on top of the Aleader
> model to produce a fully general theory?

Stacked on top?  More likely mashed together...To quote Clark Elliott,
"OCC is a thing of beauty."  Although I still am not familiar with
Aleader, my intuition is that it too is a thing of beauty.  My own belief
is that a real working theory will be a Rube Goldberg-like hairball.
About as ugly as a neurofibrillary tangle.  I believe what Marvin Minsky,
author of Society of Mind, believes.  I.e. that the Mind is a haphazard
assemblage of agents, much like New York City at rush hour.  Or to quote
Marvin, "like California on fire."

Btw, there is a *huge* amount of "background common sense" knowledge
required in order to make such a theory work.

>
> > Here's another one...You really need to have a controlled list of terms.
> > How many terms are emotion terms?  50, 5, 100, 500, 1000?
>
> Hrm .. I thought I explained in the prototype research paper
> that emotion _terms_ are secondary to the collection of
> examples of a given emotion.  In other words, emotion
> categories are defined by example and not by _terms_.

I disagree.  To be sure, note: a term is not a word.  Look at it this way.
You've got a collection or bundle or set of 57 examples that "define"
happiness or whatever.  Well, I need a pointer to that bundle.  That
pointer is a term.  Maybe I'll be like a C program and call that bundle
59092928098092.  Or maybe I'll be like a human, albeit a silly one, and
call it "snicklefritz".  Or maybe I'll be like Ortony and call it
"Fears Confirmed" (note capitalization).  Note that given the vagueness,
ambiguity and complexity inherent in natural language, the English term
"fears confirmed" probably refers to a different set of experiences than
the official OCC term "Fears Confirmed"...People at Cycorp (www.cyc.com)
would reify "Fears Confirmed", and to them it would be referred to as
#$FearsConfirmed.

Does this make sense?  Put it this way; a term is a label or name or
pointer for a category.

Btw, there is a big debate in cog sci RE whether concepts are defined by
classical category theory type stuff or are more fuzzily defined by
prototypes.  My belief is that when the theoreticians shut up and get to
work on building ontologies that work, they will see that what they create
will be so complex that it can either be viewed as a category or as a
prototype.

Well, whatever, I'm digressing a little.  The main idea is that this issue
is basically academic.  A working model will silence all debates on
whether the categories are example-based or rule-based.  Btw, you are
writing in KM, right?  I believe you are writing a rule-based definition.
Maybe if it gets complicated enough it will act like a nice fuzzy,
human-like, example-based system.

>
> So far, the Aleader classification scheme has partitioned
> examples into about 50 categories.  I expect another 10-20
> categories as our survey expands, but not much more than that.

Personally, I'd shy away from predictions about how big and complex
your set of categories will be when the thing is done.  But, heck, maybe
all my flaming about complexity will be shown to be wrong.

>
> If we increase the complexity of the model by tracking
> conversations then we may find quite a lot (10s or 100s)
> of recognizable emotion sequences

Sure.

>
> Even so, I deny that I have identified any "emotion terms."
>
> Does that address your question?

I'm not sure.  How does this aspect of the debate seem to you now?

Also, in a previous message, quoted above, I said you should control
the number of terms...Or "How many terms are emotion terms?  50, 5, 100,
500, 1000?"....Hmm, I wonder what I was driving at?  I think what I was
driving at was this:  I don't have a clear sense of what is inside and
outside of the definition of what is an emotion.  My sense is that it will
help publishability to define what is and what is not an emotion.  It
would be useful to give examples....

- what is clearly inside the category of "emotion"
- what is just barely inside the category of "emotion"
- what is just barely outside the category of "emotion"
- what is clearly outside the category of "emotion"

...but as I have said earlier, I kind of prefer the word "affective
state".

>
> > You should buy
> > _The Cognitive Structure of Emotions_ by Ortony, Clore and Collins.
>
> Yah, I should .. OK, I spoke with a friend about a good book shop.
> I'll have a copy within a few days.
>
> > My friend Dan probably has a CD burner.  Maybe I can get him interested
> > in this project.
>
> --
> Victory to the Divine Mother!!         after all,
>   http://sahajayoga.org                  http://why-compete.org
>



