IBM Data Science Experience: Whole-Cluster Privilege Escalation Disclosure (wycd.net)
103 points by wyc on Feb 21, 2017 | 10 comments


I am very familiar with the product, as we are developing a peer-to-peer alternative for sharing large datasets for machine learning. It seems there are now security reasons for preferring the p2p approach.

If you're interested in our p2p approach, see: https://fosdem.org/2017/schedule/event/democratizing_deep_le... and www.hops.io


Surprised Docker's iptables rules don't block this already. I do see rules disallowing traffic to and from docker0 (172.17.0.1).
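As a sketch of what the commenter is describing (assumptions: the daemon listens on TCP 2376 on the host, and the default docker0 bridge sits at 172.17.0.1), an operator could inspect and tighten those rules like this:

```shell
# Container-to-host traffic arrives on the INPUT chain with docker0 as the
# inbound interface, so a rule like this would stop guest containers from
# reaching the daemon's TLS port on the bridge gateway:
iptables -I INPUT -i docker0 -p tcp --dport 2376 -j DROP

# For container-to-elsewhere traffic, Docker consults the DOCKER-USER chain
# before its own FORWARD rules; inspect what is currently installed with:
iptables -L DOCKER-USER -n -v
```

These are firewall-configuration fragments, not something Docker installs by default, which is presumably why the escalation in the article worked.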


Docker has explicit recommendations for enabling control of the daemon from a privileged container, for example to run Jenkins in a container adjacent to the containers it manipulates. This is mentioned in the [run command docs](https://docs.docker.com/engine/reference/commandline/run/#/m...) and [discussed at length](https://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-d...) by senior engineer Jérôme Petazzoni.
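A minimal sketch of that "sibling containers" pattern (the image name here is an assumption, not from the thread):

```shell
# Bind-mount the daemon's Unix socket so a CI container can start
# containers *next to* itself rather than nesting Docker-in-Docker:
docker run -d \
  -v /var/run/docker.sock:/var/run/docker.sock \
  jenkins-with-docker-cli

# The trade-off: anyone who can reach that socket (or, as in this
# disclosure, who holds the daemon's TLS client certs) effectively has
# root on the host, e.g. by mounting the host filesystem:
docker -H unix:///var/run/docker.sock run --rm -v /:/host alpine cat /host/etc/shadow
```

That second command is exactly why handing every guest container the daemon's credentials on a shared host is so dangerous.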

From the description, this particular vulnerability (TLS certs mounted into each guest container) seems to have come from a well-intentioned alternative implementation of the same pattern, without considering why it is inappropriate on a shared host.


I am not surprised at all. Security is mostly an afterthought in the Docker universe. Sensible defaults are not really a thing; instead of granting access to resources only when needed, everything is allowed by default. Also, unprivileged containers when?


This is not really true IMHO, unless you're just starting out and don't know what you're doing. For example, if you run a container with the --privileged flag, you have almost certainly granted the processes inside the container more access than they needed.
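To illustrate the point about --privileged (a sketch, not from the thread; the ping example is an assumption about what the workload needs):

```shell
# Grant only the capability a process actually needs: a ping-like tool
# needs raw sockets (NET_RAW) and nothing else, so drop everything first.
docker run --rm --cap-drop ALL --cap-add NET_RAW alpine ping -c 1 172.17.0.1

# By contrast, --privileged hands the container every capability plus
# access to all host devices -- almost always far more than was needed:
docker run --rm --privileged alpine ls /dev
```

The default capability set Docker grants is already a restricted subset; --privileged throws that restriction away entirely.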

I get what you're saying about unprivileged containers: even if the processes in the container are not running as root, the container itself (and Docker itself) is effectively root, so the person running the container gets root. Setting up a Docker host as multi-tenant is something you do at your own risk.

If your users only have network access to processes running inside a container, that is the scenario where containers can protect you and your users from each other. If, on the other hand, your users are allowed to execute code outside the container (or launch containers themselves) because that's how you've set up authorization and access control on your multi-tenant system, that's not the container's fault.


And to tack on a specific anecdote from my own use of containers showing why what you're saying isn't true: I once found, while trying to run Chrome in a container, that it failed for a reason related to sandboxing. I disabled sandboxing with --no-sandbox, saw that it worked, then went back and googled the implications of what I had done.

The first advice I found was from core Docker maintainers saying, clear as day, "don't run Chrome with --no-sandbox".

The problem was a missing kernel flag for USERNS support. This kernel feature is the piece that allows creating a virtual root user who has root access only inside his own namespace (that is, only inside the container).
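For concreteness, this is roughly how that kernel feature is checked and enabled on the Docker side (a sketch; "default" tells Docker to create and use a dedicated dockremap user):

```shell
# Verify the kernel was built with user-namespace support:
grep CONFIG_USER_NS "/boot/config-$(uname -r)"

# Enable remapping in the daemon config, so UID 0 inside containers maps
# to an unprivileged UID range on the host:
cat /etc/docker/daemon.json
# {
#   "userns-remap": "default"
# }
```

These are configuration fragments; the daemon must be restarted for the remapping to take effect, and existing images/volumes get re-owned under the remapped UID range.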

Just to rebut your position: in my experience there is no "throw up your hands" attitude toward security in the Docker dev team or the wider container ecosystem.

Now certainly the thread parent shows this is not the case everywhere, but I can tell you, from admittedly limited experience, that I am not AT ALL surprised this happened at IBM. I was exposed to their BlueMix platform at a hackathon in Buffalo, and to put it gently, I was not impressed. More directly: important things like authentication and continuous deployment were obviously broken as soon as you scratched the surface.

The judge from IBM did not respond well when we told him we found their platform severely broken, and that we had decided at 1am to switch our efforts to deploying on Heroku instead. (We did not win the prize, if you're still wondering.)


Hackathons seem to be where companies can dump an untested implementation of a platform without ruining the experience of actual customers. They can either find out what's wrong with their platform or be pleasantly surprised when someone manages to use it.

A few years ago I ended up at a Hadoop-themed hackathon. It took most of the day for the organizers to provision us servers running Hadoop that could compute anything non-trivial. The only reason my team ended up with something instead of nothing is that I SSHed to my desktop computer to do the actual computation.


WTF!


Please don't post unsubstantively like this here.


Yeah, I might have overreacted .. I will be more careful ;) "Scary" would be a better word. Anyway, given that the issue was so simple to exploit, should we assume that private data may already have been compromised?



