discuss-gnustep

Re: Base enforcing permissions policy on Config file


From: Richard Frith-Macdonald
Subject: Re: Base enforcing permissions policy on Config file
Date: Thu, 23 Feb 2006 10:29:15 +0000


On 23 Feb 2006, at 04:55, Sheldon Gill wrote:

{from Re: Creating NSImage in a tool}

I hope you don't mind if I cut a lot out ... this is getting far too long ... I'll try not to miss anything important.

Your inference is extreme ... I neither state nor imply that the test is 'thorough' ... however I do suggest that it is 'reasonable' ... in the sense that it is a reasonable policy to make things harder for hackers.

Well then we disagree on this point. In my view it fails to make things harder for crackers to any significant degree. The cost outweighs the benefits.

If we are talking about windows rather than unix here ... the current code checking permissions is completely non-functional (actually I recently added ownership checking for personal config files, so it's no longer *completely* non-functional) ... so technically you are correct ... but as it's non-functional, it's not enforcing any permissions on the file. My point is that we *should* be making sure it's hard for people to make us execute trojans, so we ought to make the code do that job.
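For concreteness, the sort of check being argued for can be sketched in a few lines of C using stat(2). The helper name and the exact policy below are illustrative only, not the actual gnustep-base code:

```c
#include <sys/stat.h>
#include <unistd.h>

/* Hypothetical helper, not the real gnustep-base implementation:
 * trust a config file only if it is a regular file, owned by root
 * or by the invoking user, and not writable by group or others. */
static int
config_file_is_trusted(const char *path)
{
  struct stat sb;

  if (lstat(path, &sb) != 0)
    return 0;                         /* missing or unreadable */
  if (!S_ISREG(sb.st_mode))
    return 0;                         /* refuse symlinks, fifos, etc. */
  if (sb.st_uid != 0 && sb.st_uid != getuid())
    return 0;                         /* owner must be root or us */
  if (sb.st_mode & (S_IWGRP | S_IWOTH))
    return 0;                         /* no group/world write bits */
  return 1;
}
```

The idea is simply that a cracker who can rewrite the file telling the library where to find its resources can redirect it at a trojan, so a file that anyone else could have rewritten is refused.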

Why then is it not done widely in the unix world?
It is.

On configuration files? I don't agree. For example, none of these major applications do it: kdm, gdm, apache, sshd, samba, inetd, svnserve, sendmail, postfix, bash

I concede ... I haven't time to check lots of stuff, and probably the vast majority don't check, and the term 'widely' is vague, and the point is not central.

Why is it not noted in any of the security programming guides?
Show me.

Your logic here is inverted. Show you that it's *not* noted? Surely you should be showing me where it *is*?

Sorry ... bad comment on my part ... I just don't have time to go through a huge number of documents etc. for this. I don't think you are really trying to say that *no* security programming guide ever suggests trying to discourage trojans, so I think I assumed some pedantic point.

Seeing that this is essential, I take it you've modified sshd on all your machines? And taken this 'security flaw' to OpenBSD?
Ssh performs precisely this sort of check ... try changing the protections on your encryption keys and see what it does.

I said "sshd" (i.e. the daemon). It makes no check on the permissions of its configuration file.

You are being pedantic about what you consider a config file here ... both ssh and sshd check security on some of their config files ... more below

Your specific example of ssh checking security on identity encryption keys doesn't apply at all here. The identity file contains highly sensitive material which should not be readable by anyone other than the owner in any identified use-case scenario. Thus, it is an appropriate permissions policy to be enforced.

Of course it should (which is an example supporting my point that packages should be allowed to enforce their own security policies). However, I take your point that private keys are sensitive information and therefore not exactly equivalent to letting programs be run. Why not consider sshd's use of the authorized_keys file then? Those are public keys which aren't sensitive ... yet it checks the accessibility of that file too.

Anyway please note ... I don't want to get into example/counter-example as that can go on forever.

So, are you now going to answer the questions I posed? Specifically:
I consider the files in an sshd installation most analogous to the GNUstep config file to be the authorized_keys files, so ...
Have you modified sshd on all your machines to enforce permissions on its configuration file?
No ... because sshd already enforces permissions.
Have you reported this a security flaw to OpenBSD or OpenSSH?
No ... because sshd already enforces permissions.

Of course, the above is irrelevant ... I don't go around hacking various libraries and reporting security holes in general, but that has no bearing on how secure I try to make a project I'm actively involved with. Perhaps we can try to avoid this sort of diversion in future?



This amounts to saying that the 'correct' advice is to ignore whatever security policy was in place and instead implement the one espoused within the code in -base.
No ... you use the term 'security policy' without defining any one. It is perfectly reasonable for a framework to impose a security policy and for other security policies to work in conjunction with that policy.

Actually, I use the term 'security policy' as per [JTC 1 SC 27]. You might prefer that we stick to a more colloquial definition as per RFC 2504: a security policy is a formal statement of the rules by which users who are given access to a site's technology and information assets must abide. I hold that security policy should be defined by the site/organisation and that frameworks and applications should conform to that policy, not impose upon it.

My point was to talk about something concrete (a specific policy), not about definition of the abstract term.

Your implication seems to be that other good security policies would conflict with what the base library tries to do, but if so I think you should provide examples of such policies and reasons why you think they are good.

Okay. Counter examples to current base policy:

NB. I have not argued that 'current' policy is ideal ... My argument has been that enforcing a security policy is the right thing to do and that we should improve (mostly tighten) security rather than slacken it. So I was asking for counter examples to what we are 'trying' to do. In fairness, it occurs to me that that's not a good request since we don't actually document what we are trying to do, and while I guess it's fairly clear that the aim is to help prevent crackers using config files to get people to execute trojans, that's not a precisely defined objective.


1) -rw-rw-r-- system admins

Here 'admins' is a group of personnel with rights granted under the security policy to administer the system. All operations on files with owner or group 'admins' are logged to the audit trail.

There are two config files ... system wide and personal. IMO the system-wide one would ideally be aware of an admin group (though having admin users use sudo makes this a minor issue). For the personal config files, the system provides a mechanism for sysadmins to disable the personal files, but even if it didn't, the sysadmin can likely change the permissions, modify the file, and change them back if they want.

2) -rw-rw-r-- developer testers

Testers are allowed to make changes and see what happens. The developer is responsible for publishing but testers need to make changes in order to test. This example can be advanced further with an enhanced permissions model where testers don't have delete rights.

I can't really see this as a real-life scenario ... what sort of testers need to modify the system-wide config file rather than modifying their own personal config file?

3) Also consider the case where GNUstep.conf is actually a compound file which is accessible as GNUstep.conf/GNUSTEP_SYSTEM_ROOT etc. Permissions can then be set per key.

But it isn't.
If you have a system where there is some kernel/filesystem module which can do that, it can presumably set the permissions on the pseudo-file too.

4) -rw-rw-rw-

Machine is in a lab and anyone who can log on can go for it. Machines are re-imaged on startup or nightly. Whatever.

Realistically, either they can use their own personal config files and/or probably have system manager access to change the system-wide one.

The suggestion that "hacking" the source would make the code inherently more insecure is wrong.
My suggestion was that a particular hack (to disable checking the permissions on the config file) would make gnustep apps more insecure. The use of the term 'insecure' here is in the plain-English sense ... meaning that if the system is easier to hack it is more insecure ... rather than some absolute sense in which the system is either secure or not secure.

Doesn't there need to be an evaluation of the cost of the hack against its benefits?
In my assessment the security 'benefit' is insignificant.

Um ... I sloppily used 'hack' just above to refer to two different things. I think you have used it for a third, but I'm not sure. I think you are referring to the cost/benefit of trying to stop trojans being executed ... about which I've snipped a lot of quibbling.
To my mind by far the major cost has been email arguments.

Then shouldn't -base also perform an MD5 check on itself to ensure it hasn't been tampered with? Shouldn't it perform the same check on all libraries it depends upon?
Well, that would probably be good, but not great since anyone able to modify it has by definition the ability and know-how to fool that check.

Not to mention, the whole system can be compromised by LD_PRELOAD so shouldn't we institute checks for that?

Those are all *much* more significant security threats.

But ones we *can't* really do much about.

No, I'm not implying that it's not worth doing anything. I'm saying it's not worth doing *this* thing.

I'm not sure what 'this' is ... I would like the library to check that the config files setting its paths are protected so that only the current user and/or system manager(s) as appropriate can modify them, so that a cracker cannot use them to get you to execute trojans. I hope we are both still talking about the same thing. If you have positive suggestions of other things we can do to improve security, please let us know.

Do you drive your car while wearing a crash helmet? It'd improve your chances of surviving a crash...

No, but I do wear a safety belt.

* code fails to deal with acl, trustees and other enhanced security models
True ... needs enhancing.

No, needs deleting. Those issues are better dealt with (from a system security perspective) elsewhere.

This is the fundamental disagreement. Not whether this is the best place, but whether security should be addressed in more than one place. Think about it from the analogy of network security. Your viewpoint that 'issues are better dealt with elsewhere' can be seen as advocating a 'firewall' ... which is a very good thing of course, but what I'm advocating is 'security in depth' ... the notion that there may be holes in the firewall, it may not have been set up correctly etc., so security should be addressed everywhere.

* it enforces just *one* policy, imposing it on all users and denying them flexibility in administering their systems.
You need to provide some evidence of that ... as far as I can see, people can administer their systems with any policy they want ... what we are talking about here is policy internal to the base library.

No, they can't use any security policy they like. Their security policy *has* to have the config file with the enforced permissions. Counter examples provided.

My point was that this has *NO* influence on their general security policy ... the rest of their system. It's purely internal to GNUstep and won't break anything for them. Also, if they object to GNUstep's internal policy, the configure script allows them to easily build a version where there are no config files.





