discuss-gnustep

Re: Creating NSImage in a tool


From: Richard Frith-Macdonald
Subject: Re: Creating NSImage in a tool
Date: Thu, 16 Feb 2006 12:40:08 +0000


On 16 Feb 2006, at 09:50, Sheldon Gill wrote:

Richard Frith-Macdonald wrote:
On 16 Feb 2006, at 03:37, Alex Perez wrote:
Sheldon Gill wrote:
Stefan Urbanek wrote:

[snip]

There is code in base which enforces a particular security policy.
It seems the security policy on your system violates those expectations. The specific issue is that your configuration file 'c' has permissions on it that base doesn't like. My recommendation is to remove the offending code: gnustep-base NSPathUtilities.m ~line 660

If this causes problems *under windows*, shouldn't it be ifdefed out *under windows*?
In plain English, I think what Sheldon is saying is that he doesn't think the base library should check whether the config file it is reading in could have been hacked by someone else.

Interesting choice of emotive words here. The implication is that the check can determine if the config file is hacked or not. And that failing to check somehow would 'open the doors' to hackers.

Unfortunately, the test is *far* from conclusive and actually does very little to improve security.

There are many ways to "crack", and the test for a config file having POSIX permissions set to no write for 'group' and 'others' hardly constitutes a thorough test or a reasonable barrier.

Your inference is extreme ... I neither state nor imply that the test is 'thorough' ... however I do suggest that it is 'reasonable' ... in the sense that it is a reasonable policy to make things harder for hackers.
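
To make the test being debated concrete, here is a rough sketch of a group/other-write check of the kind described above. It is illustrative only and is not the code in NSPathUtilities.m; the function name and error handling are invented for the example.

/* Illustrative sketch only, not the gnustep-base implementation:
 * refuse to trust a config file that is writable by group or others. */
#include <sys/stat.h>
#include <stdio.h>

static int config_file_is_trusted(const char *path)
{
  struct stat st;

  if (stat(path, &st) != 0)
    {
      return 0;   /* cannot examine the file ... do not trust it */
    }
  if (st.st_mode & (S_IWGRP | S_IWOTH))
    {
      fprintf(stderr, "%s is writable by group or others ... ignoring it\n",
        path);
      return 0;   /* someone other than the owner could have edited it */
    }
  return 1;
}

A fuller check would also need to consider ownership and any ACLs on the file, which is exactly the gap discussed later in this thread.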

If so, I strongly disagree with him on this ... if you read in a config file which tells you where programs are located, and someone has hacked that to point to a trojan, you have a big security issue, so this sort of checking is essential. It is not sufficient to say that developers should add other checks of their own devising (of course that would be good) ... you have to try to minimise security problems at every point.

I'm afraid your security analysis here is deeply flawed.

* You're suggesting that the check is sufficient to detect that someone has edited the file to point to a trojan.
It's not.

I neither suggest nor imply that the test is sufficient ... indeed I explicitly mention one reason why it is not sufficient below. And of course it can't prevent other unrelated hacks.

* You note that "this sort of checking" is essential:
Why then is it not done widely in the unix world?

It is.

Why is it not noted in any of the security programming guides?

Show me.

Seeing that this is essential, I take it you've modified sshd on all your machines? And taken this 'security flaw' to OpenBSD?

Ssh performs precisely this sort of check ... try changing the protections on your encryption keys and see what it does.

So the 'correct' advice would have been to locate and protect this config file such that only the owner could modify it, so that the check would not fail. Hacking the source as suggested would be a last resort IF Stefan wanted insecure code.

This amounts to saying that the 'correct' advice is to ignore whatever security policy was in place and instead implement the one espoused within the code in -base.

No ... you use the term 'security policy' without defining any one. It is perfectly reasonable for a framework to impose a security policy and for other security policies to work in conjunction with that policy. Your implication seems to be that other good security policies would conflict with what the base library tries to do, but if so I think you should provide examples of such policies and reasons why you think they are good.
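
Incidentally, the 'protect this config file' advice quoted earlier amounts to nothing more than clearing two permission bits. A minimal sketch (a hypothetical helper, not part of gnustep-base):

/* Hypothetical helper, not gnustep-base code: clear the group and other
 * write bits on a config file while leaving the rest of its mode intact. */
#include <sys/stat.h>
#include <stdio.h>

int protect_config(const char *path)
{
  struct stat st;
  mode_t perms;

  if (stat(path, &st) != 0)
    {
      perror(path);
      return -1;
    }
  perms = st.st_mode & (S_IRWXU | S_IRWXG | S_IRWXO | S_ISUID | S_ISGID | S_ISVTX);
  if (chmod(path, perms & ~(mode_t)(S_IWGRP | S_IWOTH)) != 0)
    {
      perror(path);
      return -1;
    }
  return 0;
}

This is the same effect as running 'chmod go-w' on the file from the shell.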

The suggestion that "hacking" the source would make the code inherently more insecure is wrong.

My suggestion was that a particular hack (to disable checking the permissions on the config file) would make gnustep apps more insecure. The use of the term 'insecure' here is the plain-english sense ... meaning that if the system is easier to hack it is more insecure ... rather than some absolute sense in which the system is either secure or not secure.

Of course it's never quite that simple ... you have filesystems which by design cannot be made secure ... do we just say config files are not supported on such filesystems? It's a reasonable answer, but debatable.

I think we say "this file system doesn't support <x> security features"

Just because the file system is insecure doesn't mean that the machine is insecure, just as file systems offering enhanced security doesn't by itself mean the machine is secure.

This is an important point in an abstract discussion of overall system security, but in this context I think it's just playing with words. What we are talking about here is putting an obstacle in the way of an attacker.

On Windows you also have the problem that the security model is not the same as the POSIX/Unix one, and it's possible to have things secured by access control lists even though they appear insecure when you look at POSIX file modes ... it's a bug that the base library does not check this (and I would love someone to contribute the code to do so) ... but that's a reason to fix the bug, not a reason to make the code less secure.
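
To give an idea of the kind of contribution being asked for, a rough sketch (not existing gnustep-base code; it only looks for write access granted to the Everyone group, and a real check would need to examine other SIDs too) might walk the file's DACL like this:

/* Rough sketch, not gnustep-base code: flag a file whose DACL grants
 * write access to the Everyone group, roughly analogous to the POSIX
 * group/other-write test.  Link with advapi32. */
#include <windows.h>
#include <aclapi.h>

static BOOL file_is_world_writable(const char *path)
{
  PSECURITY_DESCRIPTOR sd = NULL;
  PACL dacl = NULL;
  BOOL writable = FALSE;

  if (GetNamedSecurityInfoA((LPSTR)path, SE_FILE_OBJECT,
    DACL_SECURITY_INFORMATION, NULL, NULL, &dacl, NULL, &sd) != ERROR_SUCCESS)
    {
      return TRUE;      /* can't read the DACL ... treat as unsafe */
    }
  if (dacl == NULL)
    {
      writable = TRUE;  /* a NULL DACL grants everyone full access */
    }
  else
    {
      SID_IDENTIFIER_AUTHORITY world = SECURITY_WORLD_SID_AUTHORITY;
      PSID everyone = NULL;
      DWORD i;

      AllocateAndInitializeSid(&world, 1, SECURITY_WORLD_RID,
        0, 0, 0, 0, 0, 0, 0, &everyone);
      for (i = 0; i < dacl->AceCount; i++)
        {
          ACCESS_ALLOWED_ACE *ace;

          if (GetAce(dacl, i, (LPVOID*)&ace)
            && ace->Header.AceType == ACCESS_ALLOWED_ACE_TYPE
            && (ace->Mask & (FILE_WRITE_DATA | FILE_APPEND_DATA | GENERIC_WRITE))
            && EqualSid((PSID)&ace->SidStart, everyone))
            {
              writable = TRUE;  /* Everyone can modify the file's contents */
              break;
            }
        }
      if (everyone != NULL)
        {
          FreeSid(everyone);
        }
    }
  LocalFree(sd);
  return writable;
}

Mapping a full DACL onto a simple trusted/untrusted decision is, of course, exactly the policy question raised below.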

The trouble is, what ACL constitutes secure and what doesn't?
I guarantee your answer is one of policy, and so not universally applicable.

Just as the current base implementation enforces just one policy, where others are just as valid depending on the circumstances.

Of course ... but the base library is entitled to enforce its policy on its own files, and more importantly it's a reasonable user expectation that it *should* enforce a fairly sane policy. To take this to an extreme for the purposes of illustration...

Assume that a function in the library allows reading of data in a manner permitting buffer overrun such that a program can, by presenting it with particular input, be caused to execute arbitrary code. Base library policy is that this is a security issue and the code should be changed so that the buffer overrun cannot occur. Isn't it legitimate to enforce that?
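
For the sake of the illustration (this is not code from the base library), the difference can be as small as this:

/* Deliberately simple illustration of the hypothetical flaw above,
 * not code from the base library. */
#include <stdio.h>
#include <string.h>

void read_name_unsafely(void)
{
  char buf[16];

  scanf("%s", buf);                    /* no field width: longer input overruns buf */
}

void read_name_safely(void)
{
  char buf[16];

  if (fgets(buf, sizeof buf, stdin) != NULL)
    {
      buf[strcspn(buf, "\n")] = '\0';  /* input is truncated, never overruns buf */
    }
}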

Note that this ACL complication also applies to *nix with acl support. I take it you consider it a bug that the current code does nothing to detect acl support and check attributes on *nix?

Yes ... the code is less secure than it should be, and we should be fixing these issues as and when we can. We start with the most well-known flaws, and block them ... each hole fixed means that hackers need to work a bit harder and we block off some of the dumber ones.
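
For the *nix ACL point, detecting that a file carries an extended ACL at all is straightforward. A rough sketch (not gnustep-base code; it uses the Linux libacl extension acl_extended_file(), so it assumes libacl is available):

/* Rough sketch, not gnustep-base code: report whether a file has an
 * extended ACL beyond its basic mode bits, so a plain POSIX permission
 * test is known to be incomplete.  Uses libacl; link with -lacl. */
#include <sys/acl.h>
#include <acl/libacl.h>
#include <stdio.h>

static int has_extended_acl(const char *path)
{
  int r = acl_extended_file(path);   /* 1: extended ACL, 0: none, -1: error */

  if (r < 0)
    {
      perror(path);
      return 0;    /* cannot tell ... the caller must decide how to treat this */
    }
  return r;
}

Deciding whether the ACL itself is acceptable is then the policy question raised above; detecting that one is present is the easy part.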

For this particular case of base security policy enforcement:

* base will refuse to run if the POSIX permissions are not ?????-??-?

No it won't ... it will refuse to load config files which would override its built-in defaults.

this opens the way for a denial of service if you can change those permissions

Only denial of service in the sense that particular features (overriding default values) would be prevented ... and if someone can change permissions, they can probably change file contents ... and loss of the ability to override default values is better than executing a trojan.

this opens the way for a user to deny themselves if they change those permissions

And they can change back again.

* a cracker can change ownership of the file and still point it to a trojan

The current check is incomplete ... we know that ... but if they can change ownership they probably have too much access to the system for us to stop them screwing everything anyway.

* a cracker can edit the file and point it to a trojan

Not if they can't edit it ... and the check is to catch cases where they might have edited it ... so the more cases we can catch, the better.

* a cracker can replace gnustep-base with a trojan version and *not* touch the config file at all

Sure, and the cracker can have root access already, and the cracker can have replaced your disk drive. You seem to be implying that it's not worth doing anything to improve security because there are always other points of attack. That makes no sense to me.

* code is on the critical startup path for just about *every* application and tool so it's slowing down everything

Insignificant.

* more code = more complexity and more maintenance

Firstly, this is localised/specific and therefore easily encapsulated/maintained and adds no significant complexity to the overall system. Secondly, even if that was not the case, you don't omit important features just because they need to be coded.

* code fails to deal with acl, trustees and other enhanced security models

True ... needs enhancing.

* it enforces just *one* policy, imposing it on all users and denying them flexibility in administering their systems.

You need to provide some evidence of that ... as far as I can see, people can administer their systems with any policy they want ... what we are talking about here is policy internal to the base library.

In summary, anything you do is inflicting a policy on people... not checking config file permissions is inflicting a policy just as much as checking them.

I guess we all agree that users should be security conscious/aware. Where we differ is that I believe that the base library should be as secure as reasonably possible because users won't necessarily do a good job, and there is a lot of room for improvement (possibly even adding features to make it easier for people to create secure applications).





