When to Deploy


From: Neal H. Walfield
Subject: When to Deploy
Date: Wed, 30 Aug 2006 05:59:04 -0400
User-agent: Wanderlust/2.10.1 (Watching The Wheels) SEMI/1.14.6 (Maruoka) FLIM/1.14.6 (Marutamachi) APEL/10.6 Emacs/21.4 (i386-pc-linux-gnu) MULE/5.0 (SAKAKI)

At Tue, 29 Aug 2006 10:41:22 +0200,
Christian Stüble wrote:
> > If the technology is fundamentally flawed, then the correct answer is
> > "nobody", and instead it should be rejected outright.  
> IMO not. Maybe this is an influence of my PhD-advisor(s), but I would try to 
> _prove_ that the technology is fundamentally flawed. BTW, the abstract 
> security properties [it] provides are IMO useful.

The policy that you are suggesting is, in my opinion, quite dangerous.
Before a technology is deployed, we should try to prove that the
technology is not fundamentally flawed.  I do not believe that proof
that a technology is fundamentally flawed should be the requirement
for preventing deployment; reasonable suspicion is sufficient.

Let me provide two examples.  The cane toad was introduced to eastern
Australia in the 1930s to eliminate cane beetles.  Today it is
destroying the native wildlife: "They carry a venom so powerful it
can kill crocodiles, snakes and other predators in minutes."  Western
Australia has petitioned the government to allow it to use the army
to help prevent their spread [1].

In Kenya, in the 1980s, the mathenge plant was introduced to stop the
advance of deserts.  It turns out that "the plant is not only
poisonous but also hazardous to [the locals'] livestock.  Residents say
the mathenge seeds of the plant stick in the gums of their animals,
eventually causing their teeth to fall out."  "Can you imagine goats
unable to graze? Eventually they die."  But that's not all: "Some have
even had to move home, as the mathenge roots have destroyed their
houses." And "The plant is also blamed for making the soil loose and
unable to sustain water" [2].

These examples are not isolated cases.  Further examples can be found
in "Late lessons from early warnings: the precautionary principle
1896-2000" [3], published by the European Environment Agency in 2001.

The reason that I have chosen environmental examples is that they are
so simple to understand: social implications are orders of magnitude
more difficult to grok.  The advocates of these "solutions" were not
likely looking to cause trouble.  They saw that certain changes could
effect other, positive changes.  In both cases, they were right: the
cane toad stopped the cane beetle and the plant helped curb
desertification.  It was the other, insufficiently explored effects
that caused the most trouble.

DRM and "trusted computing" is similar.  On the surface, they appear
to be solutions to some socially desirable properties
(i.e. limitations explicitly condoned by the law which I assume for
the sake of argument reflect social attitudes).  They, for instance,
help companies make a profit and protect privacy.  But maybe their
impact is broader.  Perhaps, "copy protection" will stifle creativity
as its impact corrodes fair use and, had a different solution been
used, companies could have made a profit in a different less
disruptive way.  Perhaps it is better to let these companies die and
experience a local minimum in creative output rather than allow
ourselves to enter a creative dark age.  Perhaps, as we use this
technology to protect our medical history, as we agree that it is
private, and we refuse to allow our doctor to not transfer our medical
data to others without explicit consent, the result will prevent us
from getting care that we required when abroad on vacation.  Perhaps
such barriers could have been avoided if the system was designed to
respect intent.  I don't know how such copy protection" mechanisms can
be designed to respect intent without necessarily reverting to a
system which compromises their stated goal of privacy through the
introduction of some big brother entity.

In these cases, I do not think that *proving* a fatal flaw should be
the metric we use to prevent such deployment.  If we have reasonable
grounds to think that the introduction of some solution puts social
values at stake, I am convinced we must take the conservative
approach and reject that solution.  I think we are a long way from
that point regarding DRM and "trusted computing".

Thanks,
Neal


[1] http://news.bbc.co.uk/2/hi/asia-pacific/5092226.stm
[2] http://news.bbc.co.uk/2/hi/africa/5252256.stm
[3] http://reports.eea.eu.int/environmental_issue_report_2001_22/en/Issue_Report_No_22.pdf





