heartlogic-dev

Re: [Heartlogic-dev] help boxes


From: William L. Jarrold
Subject: Re: [Heartlogic-dev] help boxes
Date: Tue, 29 Mar 2005 00:18:38 -0600 (CST)

On Sun, 27 Mar 2005, Joshua N Pritikin wrote:

On Sat, 2005-03-26 at 14:23 -0600, William L. Jarrold wrote:
On Wed, 23 Mar 2005, Joshua N Pritikin wrote:
As I see useful explanations go by on the mailing list, I am copying
them into context help boxes on the OHL web site.  The help boxes appear
as a [?].  If you click them then the full text appears.  This seems
like a better approach than writing a manual which nobody will
read.  ;-)
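
A minimal sketch of that mechanism, in TypeScript against plain DOM
markup; the class names, markup, and function here are illustrative
assumptions, not the actual OHL implementation:

    // Each help box is assumed to be markup like:
    //   <span class="help"><span class="help-text">full text...</span></span>
    function attachHelpBoxes(): void {
      document.querySelectorAll<HTMLElement>(".help").forEach((box) => {
        const text = box.querySelector<HTMLElement>(".help-text");
        if (text === null) return;
        text.style.display = "none"; // full text hidden until requested
        const toggle = document.createElement("a");
        toggle.textContent = "[?]";
        toggle.href = "#";
        toggle.addEventListener("click", (e) => {
          e.preventDefault(); // don't follow the dummy link
          // a first click reveals the explanation, a second hides it again
          text.style.display =
            text.style.display === "none" ? "inline" : "none";
        });
        box.prepend(toggle);
      });
    }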

Sounds like an excellent idea for user interfaces in general.

But we lose experimental validity if we vary how items are presented to
subjects.  If some people click on the help boxes for a given item and
some do not, this is an added variable that adds noise to the ratings.

Ah, OK.

Parity check: This is similar to the problem of some people reading the
instructions and some people skipping them, but at least we shouldn't
make the help seem optional by using a popup help button.

Right... If we are doing an experiment, I just think we shouldn't have
help buttons... As for your comment about people skipping the
instructions at the beginning: right, people can flip past them, but if
the instructions are short and sweet this will be less likely.


Parity check: On the other hand, a popup help box explaining the meaning
of the statistics doesn't seem to introduce any problem.

Well, as I think I said elsewhere, we should just not show them any stats.
We don't want them to know what other people said.  This will influence
their ratings.

Now, if we want to do a cool demo instead of a scientific experiment, then
maybe, like hotornot.com, we would want to show them how others voted.  My
belief is that our system behavior is not yet interesting, rich, or
organic enough for a demo like that, e.g. one where seeing others' rating
stats would be fun and interesting.

Nonetheless, displaying the stats of what others rated is a neat thing to
have in our back pocket.


If these issues seem obvious then feel free to reply with a single word.

Me, reply with a single word?  Impossible!!! (-:


Also, I do not think that the average research participant should see how
other people are making ratings.  I think that a given
participant's judgements should be based strictly on what they think.  They
should be afforded the ability to see what the average other person was
thinking.

Uhm, I'm not sure I understand you here.

Are you saying that it is OK to show the participant an _average_ of
other ratings but it's not OK to show them the other ratings in a more
detailed way?  Is it too much to show the standard deviation?

I am saying that showing them ANY stats is too much.  If I just gave a
rating of 1 and then I see, "Oh my, the average rating for this item is
a 5.  Hrm, I must be demanding too much.  Let me calibrate my thinking
so that I mimic everyone else."  We don't want people to calibrate their
thinking.  We want people's judgements to be completely independent.
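
For concreteness, the stats at issue are just the per-item mean and
standard deviation.  A minimal sketch in TypeScript (the function name
and example numbers are illustrative, not code from the OHL site); in an
experiment these would be computed only after data collection, never
shown to a participant mid-study:

    function ratingStats(ratings: number[]): { mean: number; stddev: number } {
      const n = ratings.length;
      const mean = ratings.reduce((sum, r) => sum + r, 0) / n;
      const variance = ratings.reduce((sum, r) => sum + (r - mean) ** 2, 0) / n;
      return { mean, stddev: Math.sqrt(variance) };
    }

    // e.g. ratingStats([1, 5, 5, 4, 5]) -> { mean: 4, stddev: ~1.55 }
    // A lone rating of 1 next to a mean of 4 is exactly the feedback
    // that would tempt a participant to recalibrate.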


I think our main goal is to collect data to provide feedback for the
improvement of an AI-based model of social cognition.  Therefore we
should focus on experimental usage.

Yes, certainly.  The devil is in the details!

Good. Glad you agree with that. And the real devil will be in the KR
(knowledge representation). It is a pain in the a** to make these things
work.

Bill