
Base enforcing permissions policy on Config file


From: Sheldon Gill
Subject: Base enforcing permissions policy on Config file
Date: Thu, 23 Feb 2006 12:55:19 +0800
User-agent: Thunderbird 1.5 (Windows/20051201)

{from Re: Creating NSImage in a tool}
Richard Frith-Macdonald wrote:
On 16 Feb 2006, at 09:50, Sheldon Gill wrote:

Richard Frith-Macdonald wrote:
On 16 Feb 2006, at 03:37, Alex Perez wrote:
Sheldon Gill wrote:
Stefan Urbanek wrote:

[snip]

There is code in base which enforces a particular security policy.
It seems the security policy on your system violates those expectations. The specific issue is that your configuration file 'c' has permissions on it that base doesn't like. My recommendation is removing the offending code: gnustep-base NSPathUtilities.m ~line 660

If this causes problems *under windows*, shouldn't it be ifdefed out *under windows*?

In plain English, I think what Sheldon is saying is that he doesn't think the base library should check whether the config file it is reading in could have been hacked by someone else or not.

Interesting choice of emotive words here. The implication is that the check can determine if the config file is hacked or not. And that failing to check somehow would 'open the doors' to hackers.

Unfortunately, the test is *far* from conclusive and actually does very little to improve security.

There are many ways to "crack" a system, and a test that a config file's POSIX permissions grant no write access to 'group' and 'others' hardly constitutes a thorough test, nor a reasonable barrier.
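
For concreteness, the test we're arguing about amounts to something like the following sketch (my own illustration, assuming plain POSIX modes; it is not the actual code near NSPathUtilities.m ~line 660):

  #include <sys/types.h>
  #include <sys/stat.h>
  #include <stdio.h>

  /* Reject the config file if 'group' or 'others' have write permission. */
  static int config_perms_acceptable(const char *path)
  {
    struct stat st;

    if (stat(path, &st) != 0)
      return 0;                             /* can't stat: reject */
    if (st.st_mode & (S_IWGRP | S_IWOTH))
      {
        fprintf(stderr, "%s is writable by group or others\n", path);
        return 0;                           /* fails the built-in policy */
      }
    return 1;
  }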

Your inference is extreme ... I neither state nor imply that the test is 'thorough' ... however I do suggest that it is 'reasonable' ... in the sense that it is a reasonable policy to make things harder for hackers.

Well then we disagree on this point. In my view it fails to make things harder for crackers to any significant degree. The cost outweighs the benefits.

If so, I strongly disagree with him on this ... if you read in a config file which tells you where programs are located, and someone has hacked that to point to a trojan, you have a big security issue, so this sort of checking is essential. It is not sufficient to say that developers should add other checks of their own devising (of course that would be good) ... you have to try to minimise security problems at every point.

I'm afraid your security analysis here is deeply flawed.

* You're suggesting that the check is sufficient to detect that someone has edited the file to point to a trojan.
It's not.

I neither suggest nor imply that the test is sufficient ... indeed I explicitly mention one reason why it is not sufficient below. And of course it can't prevent other unrelated hacks.

* You note that "this sort of checking" is essential:
Why then is it not done widely in the unix world?

It is.

On configuration files? I don't agree. For example, none of these major applications do it:

kdm, gdm, apache, sshd, samba, inetd, svnserve, sendmail, postfix, bash

Why is it not noted in any of the security programming guides?

Show me.

Your logic here is inverted. Show you that it's *not* noted? Surely you should be showing me where it *is*?

Seeing that this is essential, I take it you've modified sshd on all your machines? And taken this 'security flaw' to OpenBSD?

Ssh performs precisely this sort of check ... try changing the protections on your encryption keys and see what it does.

I said "sshd" (i.e. the daemon). It makes no check on the permissions of its configuration file.

ssh does *not* do precisely this check. Changing:
  chmod g+w ~/.ssh/config

will not cause it to ignore the configuration file.

Your specific example of ssh checking security on identity encryption keys doesn't apply at all here. The identity file contains highly sensitive material which should not be readable by anyone other than the owner in any identified use-case scenario. Thus, it is an appropriate permissions policy to enforce.

So, are you now going to answer the questions I posed? Specifically:

Have you modified sshd on all your machines to enforce permissions on its configuration file?

Have you reported this as a security flaw to OpenBSD or OpenSSH?

So the 'correct' advice would have been to locate and protect this config file such that only the owner could modify it, so that the check would not fail. Hacking the source as suggested would be a last resort IF Stefan wanted insecure code.

This amounts to saying that the 'correct' advice is to ignore whatever security policy was in place and instead implement the one espoused within the code in -base.

No ... you use the term 'security policy' without defining any one. It is perfectly reasonable for a framework to impose a security policy and for other security policies to work in conjunction with that policy.

Actually, I use the term 'security policy' as per [JTC 1 SC 27]. You might prefer that we stick to a more colloquial definition as per RFC2504:

A security policy is a formal statement of the rules by which users who are given access to a site's technology and information assets must abide.

I hold that security policy should be defined by the site/organisation and that frameworks and applications should conform to that policy, not impose upon it.

Your implication seems to be that other good security policies would conflict with what the base library tries to do, but if so I think you should provide examples of such policies and reasons why you think they are good.

Okay. Counter examples to current base policy:

1) -rw-rw-r-- system admins

Here 'admins' is a group of personnel with rights granted under the security policy to administer the system. All operations on files with owner or group 'admins' are logged to the audit trail.

2) -rw-rw-r-- developer testers

Testers are allowed to make changes and see what happens. The developer is responsible for publishing but testers need to make changes in order to test.

This example can be advanced further with an enhanced permissions model where testers don't have delete rights.

3) Also consider the case where GNUstep.conf is actually a compound file which is accessible as GNUstep.conf/GNUSTEP_SYSTEM_ROOT etc. Permissions can then be set per key.

4) -rw-rw-rw-

Machine is in a lab and anyone who can log on can go for it. Machines are re-imaged on startup or nightly. Whatever.

The suggestion that "hacking" the source would make the code inherently more insecure is wrong.

My suggestion was that a particular hack (to disable checking the permissions on the config file) would make gnustep apps more insecure. The use of the term 'insecure' here is the plain-english sense ... meaning that if the system is easier to hack it is more insecure ... rather than some absolute sense in which the system is either secure or not secure.

Doesn't there need to be an evaluation of the cost of the hack against its benefits?

In my assessment the security 'benefit' is insignificant.

Of course it's never quite that simple ... you have filesystems which by design cannot be made secure ... do we just say config files are not supported on such filesystems? It's a reasonable answer, but debatable.

I think we say "this file system doesn't support <x> security features".

Just because the file system is insecure doesn't mean that the machine is insecure, just as a file system offering enhanced security doesn't by itself mean the machine is secure.

This is an important point in an abstract discussion of overall system security, but in this context I think it's just playing with words. What we are talking about here is putting an obstacle in the way of an attacker.

An obstacle which is tiny and easily side-stepped. Enough so that it won't even slow an attacker down.

On windows you also have the problem that the security model is not the same as the posix/unix one, and it's possible to have things secured by access control lists even though they appear insecure when you look at posix file modes ... it's a bug that the base library does not check this (and I would love someone to contribute the code to do so) ... but that's a reason to fix the bug, not a reason to make the code less secure.
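
Such a contributed check might look roughly like this (a hypothetical sketch using the Win32 security APIs, assuming "DACL grants write to Everyone" is the condition to reject; it is not existing gnustep-base code):

  #include <windows.h>
  #include <aclapi.h>

  /* Return 1 if the file's DACL grants write access to the Everyone SID,
     0 if not, -1 on error.  A NULL DACL means everyone has full access. */
  static int writable_by_everyone(const char *path)
  {
    PACL dacl = NULL;
    PSECURITY_DESCRIPTOR sd = NULL;
    BYTE world[SECURITY_MAX_SID_SIZE];
    DWORD size = sizeof(world);
    int writable = 0;
    DWORD i;

    if (GetNamedSecurityInfoA(path, SE_FILE_OBJECT,
        DACL_SECURITY_INFORMATION, NULL, NULL, &dacl, NULL, &sd)
        != ERROR_SUCCESS)
      return -1;
    if (!CreateWellKnownSid(WinWorldSid, NULL, world, &size))
      {
        LocalFree(sd);
        return -1;
      }
    if (dacl == NULL)
      writable = 1;
    else
      for (i = 0; i < dacl->AceCount; i++)
        {
          ACCESS_ALLOWED_ACE *ace;

          if (GetAce(dacl, i, (LPVOID*)&ace)
              && ace->Header.AceType == ACCESS_ALLOWED_ACE_TYPE
              && EqualSid((PSID)&ace->SidStart, (PSID)world)
              && (ace->Mask & (FILE_WRITE_DATA | GENERIC_WRITE)))
            {
              writable = 1;
              break;
            }
        }
    LocalFree(sd);
    return writable;
  }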

The trouble is, what ACL constitutes secure and what doesn't?
I guarantee your answer is one of policy, and so not universally applicable.

Just as the current base implementation enforces just one policy, where others are just as valid depending on the circumstances.

Of course ... but the base library is entitled to enforce its policy on its own files, and more importantly it's a reasonable user expectation that it *should* enforce a fairly sane policy. To take this to an extreme for the purposes of illustration...

Assume that a function in the library allows reading of data in a manner permitting buffer overrun such that a program can, by presenting it with particular input, be caused to execute arbitrary code.

Okay. This constitutes a potential security hole and is regarded as a defect.

Note that *all* such buffer overruns also expose the library to crashing. This is a bug by everyone's definition.

In order to execute arbitrary code there must be two requisites:
 a) the overrun which, by definition, overwrites another area
 b) a method of code injection into said other area

If only (a) is possible you have a crashing bug. If (b) is possible you have an exploitable hole. Then, of course, you have to actually write the exploit...
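
To make (a) concrete, a minimal hypothetical illustration (nothing to do with any actual base code):

  #include <string.h>

  /* (a) alone: input longer than 64 bytes overwrites adjacent memory.
     Whether that is merely a crash or an exploitable hole depends on (b). */
  void parse_entry_unsafe(const char *input)
  {
    char buf[64];

    strcpy(buf, input);                     /* no length check: overrun */
  }

  /* The fix: bound the copy, so neither the crash nor the exploit exists. */
  void parse_entry_safe(const char *input)
  {
    char buf[64];

    strncpy(buf, input, sizeof(buf) - 1);
    buf[sizeof(buf) - 1] = '\0';
  }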

Base library policy is that this is a security issue and the code should be changed so that the buffer overrun cannot occur. Isn't it legitimate to enforce that?

It is entirely reasonable that defects and bugs get rectified.

This is defect management and coding policy.

While it is a "security issue", it isn't "security policy".

Note that this ACL complication also applies to *nix with acl support. I take it you consider it a bug that the current code does nothing to detect acl support and check attributes on *nix?

Yes ... the code is less secure than it should be, and we should be fixing these issues as and when we can.
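
On Linux with libacl, such a check might look roughly like this (a hypothetical sketch; note that even this bakes in one particular notion of what counts as "too permissive"):

  #include <sys/types.h>
  #include <sys/acl.h>
  #include <acl/libacl.h>   /* acl_get_perm() is a Linux extension */

  /* Return 1 if any non-owner entry in the access ACL grants write,
     0 if none does, -1 if the file has no ACL or an error occurred. */
  static int acl_grants_extra_write(const char *path)
  {
    acl_t acl = acl_get_file(path, ACL_TYPE_ACCESS);
    acl_entry_t entry;
    int found = 0;
    int ret;

    if (acl == NULL)
      return -1;
    for (ret = acl_get_entry(acl, ACL_FIRST_ENTRY, &entry);
         ret == 1;
         ret = acl_get_entry(acl, ACL_NEXT_ENTRY, &entry))
      {
        acl_tag_t tag;
        acl_permset_t perms;

        if (acl_get_tag_type(entry, &tag) != 0
            || acl_get_permset(entry, &perms) != 0)
          continue;
        if (tag != ACL_USER_OBJ && acl_get_perm(perms, ACL_WRITE) == 1)
          {
            found = 1;
            break;
          }
      }
    acl_free(acl);
    return found;
  }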

In this case we are making things less flexible than they should be as well. Which is more important? How much more 'security' is this giving us?

We start with the most well-known flaws, and block them ... each hole fixed means that hackers need to work a bit harder and we block off some of the dumber ones.

The way to block dumbness for this case is to set appropriate install 
permissions.

It should be up to the site administrators to set and enforce security policy.

For this particular case of base security policy enforcement:

* base will refuse to run if the POSIX permissions are not ?????-??-?
No it won't ... it will refuse to load config files which would override its built-in defaults.

If the built-in defaults are not suitable it will refuse to run. Otherwise yes, it will operate per compile-time settings which may or may not be suitable for the users.

this opens the way for a denial of service if you can change those permissions

Only denial of service in the sense that particular features (overriding default values) would be prevented ... and if someone can change permissions, they can probably change file contents ... and loss of the ability to override default values is better than executing a trojan.

No, denial of service in the sense that the services provided are being denied to the user.

You've also hit on reasons this "feature" adds an insignificant amount to 
security.

this opens the way for a user to deny themselves if they change those permissions

And they can change back again.

* a cracker can change ownership of the file and still point it to a trojan

The current check is incomplete ... we know that ... but if they can change ownership they probably have too much access to the system for us to stop them screwing everything anyway.

Then shouldn't -base also perform an MD5 check on itself to ensure it hasn't been tampered with? Shouldn't it perform the same check on all libraries it depends upon?

Not to mention, the whole system can be compromised by LD_PRELOAD so shouldn't we institute checks for that?

Those are all *much* more significant security threats.

* a cracker can edit the file and point it to a trojan

Not if they can't edit it ... and the check is to catch cases where they might have edited it.... so the more cases we can catch, the better.

* a cracker can replace gnustep-base with a trojan version and *not* touch the config file at all

Sure, and the cracker can have root access already, and the cracker can have replaced your disk drive. You seem to be implying that it's not worth doing anything to improve security because there are always other points of attack. That makes no sense to me.

No, I'm not implying that it's not worth doing anything. I'm saying it's not worth doing *this* thing.

Do you drive your car while wearing a crash helmet? It'd improve your chances of surviving a crash...

* code is on the critical startup path for just about *every* application and tool, so it's slowing down everything

Insignificant.

By itself, perhaps. However, allowing a multitude of "insignificant" costs does add up to something significant.

* more code = more complexity and more maintenance

Firstly this is localised/specific and therefore easily encapsulated/maintained and adds no significant complexity to the overall system. Secondly, even if that was not the case, you don't omit important features just because they need to be coded.

We're disagreeing that this is an important feature.

* code fails to deal with acl, trustees and other enhanced security models

True ... needs enhancing.

No, needs deleting. Those issues are better dealt with (from a system security perspective) elsewhere.

Do the right thing in the right places.

* it enforces just *one* policy, imposing it on all users and denying them flexibility in administering their systems.

You need to provide some evidence of that ... as far as I can see, people can administer their systems with any policy they want ... what we are talking about here is policy internal to the base library.

No, they can't use any security policy they like. Their security policy *has* to have the config file with the enforced permissions. Counter examples provided.

In summary, anything you do is inflicting a policy on people... not checking config file permissions is inflicting a policy just as much as checking them.

No, not checking the file permissions is not inflicting a security policy at 
all.

I guess we all agree that users should be security conscious/aware.

Yes. Administrators more so.

Where we differ is that I believe that the base library should be as secure as reasonably possible because users won't necessarily do a good job, and there is a lot of room for improvement (possibly even adding features to make it easier for people to create secure applications).

Yes, it should not open security holes. It shouldn't increase the attack 
surface.

It shouldn't be trying to babysit users nor should it be trying to tackle things outside its scope which are better handled elsewhere.


Regards,
Sheldon



