discuss-gnustep

Re: Base enforcing permissions policy on Config file


From: Sheldon Gill
Subject: Re: Base enforcing permissions policy on Config file
Date: Sat, 25 Feb 2006 10:05:02 +0800
User-agent: Thunderbird 1.5 (Windows/20051201)

Richard Frith-Macdonald wrote:

On 23 Feb 2006, at 04:55, Sheldon Gill wrote:
{from Re: Creating NSImage in a tool}

I hope you don't mind if I cut a lot out ... this is getting far too long ... I'll try not to miss anything important.

Don't mind at all. In starting a new subject I thought I should include enough to bring new comers up to date.

Your inference is extreme ... I neither state nor imply that the test is 'thorough' ... however I do suggest that it is 'reasonable' ... in the sense that it is a reasonable policy to make things harder for hackers.

Well then we disagree on this point. In my view it fails to make things harder for crackers to any significant degree. The cost outweighs the benefits.

If we are talking about windows rather than unix here ... the current code checking permissions is completely non functional (actually I recently added ownership checking for personal config files, so it's no longer *completely* non functional) ... so technically you are correct ... but as it's non functional, it's not enforcing any permission on the file.

No, I'm not talking about windows here. IMO it fails to make things significantly harder for crackers on any OS.

I listed in a previous mail a number of specific trojan-execution threats and how this permissions check fails to mitigate them at all.

My point is that we *should* be making sure it's hard for people to make us execute trojans, so we ought to make the code do that job.

Not necessarily in the base library code, no. We definitely should be security conscious and be good OS citizens. That doesn't immediately mean we should be adding code left right and center.

What we really *should* do is this:
(a) create a threat matrix analysis
(b) review and evaluate the analysis
(c) develop threat mitigation plans for each identified threat

It is perfectly reasonable for a threat mitigation plan to be one line:
"No additional action"

It is also perfectly reasonable for a mitigation plan to compromise on the best choice for security in favour of other concerns such as convenience and practicality.

Why then is it not done widely in the unix world?
It is.

On configuration files? I don't agree. For example, none of these major applications do it:
kdm, gdm, apache, sshd, samba, inetd, svnserve, sendmail, postfix, bash

I concede ... I haven't time to check lots of stuff, and probably the vast majority don't check, and the term 'widely' is vague, and the point is not central.

Why is it not noted in any of the security programming guides?
Show me.

Your logic here is inverted. Show you that it's *not* noted? Surely you should be showing me where it *is*?

Sorry ... bad comment on my part ... I just don't have time to go through a huge number of documents etc for this. I don't think you are really trying to say that *no* security programming guide ever suggests trying to discourage trojans, so I think I assumed some pedantic point.

Let me be clear. I'm all for improved security in ways that are sensible. By sensible I mean the benefits outweigh the drawbacks.

I was the one to point out glaring holes in NSTemp() and how to properly cover them, if you recall. Separate issue but I think the act demonstrates I am security conscious.

Seeing that this is essential, I take it you've modified sshd on all your machines? And taken this 'security flaw' to OpenBSD?
Ssh performs precisely this sort of check ... try changing the protections on your encryption keys and see what it does.

I said "sshd", (ie the daemon). It makes no check on the permissions of its configuration file.

You are being pedantic about what you consider a config file here ... both ssh and sshd check security on some of their config files ... more below

I am mapping sshd's use of /etc/ssh/ssh_config to GNUstep's /etc/GNUstep/GNUstep.conf which I think is a reasonable example and not pedantic.

You are positing that the check is mitigating a trojan attack like:
  GNUSTEP_SYSTEM_ROOT=/home/cracker/system

I put forward that you could equivalently 'trojan' sshd by adding:
  IdentityFile /home/cracker/unsecured_key

Your specific example of ssh checking security on identity encryption keys doesn't apply at all here. The identity file contains highly sensitive material which should not be readable by anyone other than the owner in all identified use-case scenarios. Thus, it is an appropriate permissions policy to enforce.

Of course it should (which is an example supporting my point that packages should be allowed to enforce their own security policies). However, I take your point that private keys are sensitive information and therefore not exactly equivalent to letting programs be run. Why not consider sshd's use of the authorized_keys file then? Those are public keys which aren't sensitive ... yet it checks the accessibility of that file too.

sshd's use of authorized_keys is more like the private keys than ssh_config or ~/.ssh/config.

It lists the public keys which are permitted for authentication and it looks for this in the directory for each and every user on the machine.

Quite different to a file which can be locked down on install, read once at application startup and won't change often.

Anyway please note ... I don't want to get into example/counter-example as that can go on forever.

Agreed.

So, are you now going to answer the questions I posed? Specifically:
I consider the files in an sshd installation most analogous to the GNUstep config file to be the authorized_keys files, so ...
Have you modified sshd on all your machines to enforce permissions on its configuration file?
No ... because sshd already enforces permissions.
Have you reported this as a security flaw to OpenBSD or OpenSSH?
No ... because sshd already enforces permissions.

You are saying it enforces sufficient permissions checks for you to be happy with its security?

I note that it specifically omits the equivalent check that you are advocating in -base.

Of course, the above is irrelevant ... I don't go around hacking various libraries and reporting security holes in general, but that has no bearing on how secure I try to make a project I'm actively involved with. Perhaps we can try to avoid this sort of diversion in future?

Sure. I thought a well known and widely trusted example could serve as a good comparison to convince you that the check can be omitted without compromising security.

This amounts to saying that the 'correct' advice is to ignore whatever security policy was in place and instead implement the one espoused within the code in -base.
No ... you use the term 'security policy' without defining any one. It is perfectly reasonable for a framework to impose a security policy and for other security policies to work in conjunction with that policy.

Actually, I use the term 'security policy' as per [JTC 1 SC 27]. You might prefer that we stick to a more colloquial definition as per RFC2504: A security policy is a formal statement of the rules by which users who are given access to a site's technology and information assets must abide. I hold that security policy should be defined by the site/organisation and that frameworks and applications should conform to that policy, not impose upon it.

My point was to talk about something concrete (a specific policy), not about definition of the abstract term.

We are talking about a specific security policy element; the one currently imposed by -base.

Your implication seems to be that other good security policies would conflict with what the base library tries to do, but if so I think you should provide examples of such policies and reasons why you think they are good.

Okay. Counter examples to current base policy:

NB. I have not argued that 'current' policy is ideal ... My argument has been that enforcing a security policy is the right thing to do and that we should improve (mostly tighten) security rather than slacken it. So I was asking for counter examples to what we are 'trying' to do. In fairness, it occurs to me that that's not a good request since we don't actually document what we are trying to do, and while I guess it's fairly clear that the aim is to help prevent crackers using config files to get people to execute trojans, that's not a precisely defined objective.


1) -rw-rw-r-- system admins

Here 'admins' is a group of personnel with rights granted under the security policy to administer the system. All operations on files with owner or group 'admins' are logged to the audit trail.

There are two config files ... system wide and personal. IMO the system-wide one would ideally be aware of an admin group (though having admin users use sudo makes this a minor issue). For the personal config files, the system provides a mechanism for sysadmins to disable the personal files, but even if it didn't, the sysadmin can likely change permissions, modify the file, and change them back if they want.

Not all systems have sudo. Besides which, in many cases that isn't what you want anyway. In this situation 'admins' could be those who are authorised to administer GNUstep components but are not granted any additional rights. (Principle of minimal authorization)

Anyway, it demonstrates one reasonable use-case not permitted with the current code. (ie: a security policy in conflict with that imposed by -base)

2) -rw-rw-r-- developer testers

Testers are allowed to make changes and see what happens. The developer is responsible for publishing but testers need to make changes in order to test. This example can be advanced further with an enhanced permissions model where testers don't have delete rights.

I can't really see this as a real-life scenario ... what sort of testers need to modify the system-wide config file rather than modifying their own personal config file?

Those testing what happens when you make changes to the system wide file?

3) Also consider the case where GNUstep.conf is actually a compound file which is accessible as GNUstep.conf/GNUSTEP_SYSTEM_ROOT etc. Permissions can then be set per key.

But it isn't.

Maybe not on your system. Install the latest ReiserFS4...

If you have a system where there is some kernel/filesystem module which can do that, it can presumably set the permissions on the pseudo-file too.

Yes, but fundamental POSIX law says you can't add or delete files in a directory without write permission on it, so GNUstep.conf has to be -rwxrwx--- in order for the group to add/delete keys.

4) -rw-rw-rw-

Machine is in a lab and anyone who can log on can go for it. Machines are re-imaged on startup or nightly. Whatever.

Realistically, either they can use their own personal config files and/or probably have system manager access to change the system-wide one.

The lab system/os files are locked down and you don't want lab users having system manager access. You are giving them free rein over GNUstep and the applications using it, since no sensitive applications/daemons depend on it.

So, have I given examples of other security policies which are reasonable?

The suggestion that "hacking" the source would make the code inherently more insecure is wrong.
My suggestion was that a particular hack (to disable checking the permissions on the config file) would make gnustep apps more insecure. The use of the term 'insecure' here is the plain-english sense ... meaning that if the system is easier to hack it is more insecure ... rather than some absolute sense in which the system is either secure or not secure.

Doesn't there need to be an evaluation of the cost of the hack against its benefits?
In my assessment the security 'benefit' is insignificant.

Um ... I sloppily used 'hack' just above to refer to two different things. I think you have used it for a third, but I'm not sure. I think you are referring to the cost/benefit of trying to stop trojans being executed ... about which I've snipped a lot of quibbling.
To my mind by far the major cost has been email arguments.

Then shouldn't -base also perform an MD5 check on itself to ensure it hasn't been tampered with? Shouldn't it perform the same check on all libraries it depends upon?
Well, that would probably be good, but not great, since anyone able to modify it has by definition the ability and know-how to fool that check.

Exactly. The cost of implementing the feature outweighs the benefits to be gained by it.

Not to mention, the whole system can be compromised by LD_PRELOAD so shouldn't we institute checks for that?

Those are all *much* more significant security threats.

But ones we *can't* really do much about.

No, I'm not implying that it's not worth doing anything. I'm saying it's not worth doing *this* thing.

I'm not sure what 'this' is ... I would like the library to check that the config files setting its paths are protected so that only the current user and/or system manager(s) as appropriate can modify them, so that a cracker cannot use them to get you to execute trojans. I hope we are both still talking about the same thing.

Yes we are: the config file permissions check.

If you have positive suggestions of other things we can do to improve security, please let us know.

I have. NSTemp(). Completely separate issue. Let's leave that out here.

Do you drive your car while wearing a crash helmet? It'd improve your chances of surviving a crash...

No, but I do wear a safety belt.

Right. Wearing a safety belt is worth it (cost/benefit) while wearing a crash helmet isn't.

* code fails to deal with ACLs, trustees and other enhanced security models
True ... needs enhancing.

No, needs deleting. Those issues are better dealt with (from a system security perspective) elsewhere.

This is the fundamental disagreement. Not whether this is the best place, but whether security should be addressed in more than one place. Think about it from the analogy of network security. Your viewpoint that 'issues are better dealt with elsewhere' can be seen as advocating a 'firewall' ... which is a very good thing of course, but what I'm advocating is 'security in depth' ... the notion that there may be holes in the firewall, it may not have been set up correctly, etc, so security should be addressed everywhere.

Well, then. Do you have a deadbolt installed on your bedroom door? If your bedroom door was of hollow construction have you replaced it with a solid one?

Security should be assessed everywhere, it should not be implemented everywhere. Appropriate consideration to the specific case in hand.

* it enforces just *one* policy, imposing it on all users and denying them flexibility in administering their systems.
You need to provide some evidence of that ... as far as I can see, people can administer their systems with any policy they want ... what we are talking about here is policy internal to the base library.

No, they can't use any security policy they like. Their security policy *has* to have the config file with the enforced permissions. Counter examples provided.

My point was that this has *NO* influence on their general security policy ... the rest of their system. It's purely internal to GNUstep and won't break anything for them. Also, if they object to GNUstep's internal policy, the configure script allows them to easily build a version where there are no config files.

So the choice is comply with -base enforcement or do without config files.

My first counter-example is one where this isn't "purely internal" to base but rather forces changes on the way a system is administered and rights delegated.


Regards,
Sheldon



