A Brand New Mindcraft Moment?


Posted Nov 6, 2015 20:50 UTC (Fri) by PaXTeam (guest, #24616) [Link]


1. this WP article was the fifth in a series of articles following the security of the internet from its beginnings to related topics of today. discussing the security of linux (or lack thereof) fits nicely in there. it was also a well-researched article with over two months of research and interviews, something you can't quite claim for your own recent pieces on the subject. you don't like the facts? then say so. or even better, do something constructive about them like Kees and others have been trying. however silly comparisons to old crap like the Mindcraft studies and fueling conspiracies don't exactly help your case.
2. "We do a reasonable job of finding and fixing bugs."
let's start here. is this assertion based on wishful thinking or cold hard facts you're going to share in your response? according to Kees, the lifetime of security bugs is measured in years. that's more than the lifetime of many devices people buy, use and ditch in that period.
3. "Problems, whether they are security-related or not, are patched quickly,"
some are, some aren't: let's not forget the recent NMI fixes that took over 2 months to trickle down to stable kernels and we also have a user who has been waiting for over 2 weeks now: http://thread.gmane.org/gmane.comp.file-systems.btrfs/49500 (FYI, the overflow plugin is the first one Kees is trying to upstream, imagine the shitstorm if bugreports will be treated with this attitude, let's hope the btrfs guys are an exception, not the rule). anyway, two examples are not statistics, so once again, do you have numbers or is it all wishful thinking? (it's partly a trick question because you'll also have to explain how something gets decided to be security related, which as we all know is a messy business in the linux world)
4. "and the stable-update mechanism makes those patches available to kernel users."
except when it doesn't. and yes, i have numbers: grsec carries 200+ backported patches in our 3.14 stable tree.
5. "Specifically, the few developers who are working in this area have never made a serious attempt to get that work integrated upstream."
you don't need to be shy about naming us, after all you did so elsewhere already. and we also explained the reasons why we haven't pursued upstreaming our code: https://lwn.net/Articles/538600/ . since i don't expect you and your readers to read any of it, here's the tl;dr: if you want us to spend thousands of hours of our time to upstream our code, you will have to pay for it. no ifs no buts, that's how the world works, that's how >90% of linux code gets in too. i personally find it quite hypocritical that well-paid kernel developers are bitching about our unwillingness and inability to serve them our code on a silver platter for free. and before someone brings up the CII, go check their mail archives: after some initial exploratory discussions i explicitly asked them about supporting this long drawn out upstreaming work and got no answers.


Posted Nov 6, 2015 21:39 UTC (Fri) by patrick_g (subscriber, #44470) [Link]


Money (aha) quote:
> I suggest you spend none of your free time on this. Zero. I suggest you get paid to do this. And well.
Nobody expects you to serve your code on a silver platter for free. The Linux Foundation and the large companies using Linux (Google, Red Hat, Oracle, Samsung, etc.) should pay security specialists like you to upstream your patches.


Posted Nov 6, 2015 21:57 UTC (Fri) by nirbheek (subscriber, #54111) [Link]


I'd just like to point out that the way you have phrased this makes your comment a tone argument[1][2]; you've (probably unintentionally) dismissed all of the parent's arguments by pointing at their presentation. The tone of PAXTeam's comment shows the frustration built up over the years with the way things work, which I think should be taken at face value, empathized with, and understood rather than simply dismissed.
1. http://rationalwiki.org/wiki/Tone_argument
2. http://geekfeminism.wikia.com/wiki/Tone_argument
Cheers,


Posted Nov 7, 2015 0:55 UTC (Sat) by josh (subscriber, #17465) [Link]


Posted Nov 7, 2015 1:21 UTC (Sat) by PaXTeam (guest, #24616) [Link]


why, is upstream known for its fundamental civility and decency? have you even read the WP post under discussion, never mind past lkml traffic?


Posted Nov 7, 2015 5:37 UTC (Sat) by josh (subscriber, #17465) [Link]


Posted Nov 7, 2015 5:34 UTC (Sat) by gmatht (guest, #58961) [Link]


No Argument


Posted Nov 7, 2015 6:09 UTC (Sat) by josh (subscriber, #17465) [Link]


Please don't; it doesn't belong there either, and it especially doesn't need the kind of cheering section that the tech press (LWN generally excepted) tends to provide.


Posted Nov 8, 2015 8:36 UTC (Sun) by gmatht (guest, #58961) [Link]


OK, but I was thinking of Linus Torvalds


Posted Nov 8, 2015 16:11 UTC (Sun) by pbonzini (subscriber, #60935) [Link]


Posted Nov 6, 2015 22:43 UTC (Fri) by PaXTeam (guest, #24616) [Link]


Posted Nov 6, 2015 23:00 UTC (Fri) by pr1268 (subscriber, #24648) [Link]


Why do you assume only money will fix this problem? Sure, I agree more resources should be spent on fixing Linux kernel security issues, but don't assume that someone giving an organization (ahem, PAXTeam) money is the only answer. (Not meaning to impugn PAXTeam's security efforts.)


The Linux development community may have had the wool pulled over its collective eyes with respect to security issues (whether real or perceived), but merely throwing money at the problem won't fix it.


And yes, I do realize that the commercial Linux distros do much (most?) of the kernel development these days, and that implies indirect monetary transactions, but it's much more involved than just that.


Posted Nov 7, 2015 0:36 UTC (Sat) by PaXTeam (guest, #24616) [Link]


Posted Nov 7, 2015 7:34 UTC (Sat) by nix (subscriber, #2304) [Link]


Posted Nov 7, 2015 9:49 UTC (Sat) by PaXTeam (guest, #24616) [Link]


Posted Nov 6, 2015 23:13 UTC (Fri) by dowdle (subscriber, #659) [Link]


I think you definitely agree with the gist of Jon's argument... not enough focus has been given to security in the Linux kernel... the article gets that part right... money hasn't been going towards security... and now it must. Aren't you glad?


Posted Nov 7, 2015 1:37 UTC (Sat) by PaXTeam (guest, #24616) [Link]


they talked to spender, not me personally, but yes, this side of the coin is well represented by us and others who were interviewed. the same way Linus is a good representative of, well, his own pet project called linux.
> And if Jon had only talked to you, his would have been too.
given that i'm the author of PaX (a part of grsec) yes, talking to me about grsec matters makes it one of the best ways to research it. but if you know of someone else, be my guest and name them, i'm pretty sure the recently formed kernel self-protection folks would be dying to engage them (or not, i don't think there's a sucker out there with thousands of hours of free time on their hands).
> [...]it also contained quite a few groan-worthy statements.
nothing is perfect but considering the audience of the WP, that's one of the better journalistic pieces on the topic, regardless of how you and others dislike the sorry state of linux security exposed in there. if you want to discuss more technical details, nothing stops you from talking to us ;).
speaking of your complaints about journalistic qualities: since a previous LWN article saw fit to include several typical dismissive claims by Linus about the quality of unspecified grsec features, with no evidence of what experience he had with the code and how recent it was, how come we didn't see you or anyone else complaining about the quality of that article?
> Aren't you glad?
no, or not yet anyway. i've heard plenty of empty words over the years and nothing ever manifested, or worse, all the money has gone to the pointless exercise of fixing individual bugs and the related circus (which Linus rightfully despises FWIW).


Posted Nov 7, 2015 0:18 UTC (Sat) by bojan (subscriber, #14302) [Link]


Posted Nov 8, 2015 13:06 UTC (Sun) by k3ninho (subscriber, #50375) [Link]


Right now we've got developers from big names saying that doing all that the Linux ecosystem does *safely* is an itch that they have. Unfortunately, the surrounding cultural attitude of developers is to hit functional targets, and occasionally performance targets. Security goals are often ignored. Ideally, the culture would shift so that we make it difficult to follow insecure habits, patterns or paradigms -- that's a task that will take a sustained effort, not merely the upstreaming of patches.
Whatever the culture, these patches will go upstream eventually anyway because the ideas they embody are now timely. I can see one way to make it happen: Linus will accept them when a big end-user (say, Intel, Google, Facebook or Amazon) delivers stuff with notes like 'here's a set of improvements, we're already using them to solve this kind of problem, here's how everything will keep working because $proof, and note carefully that you're staring down the barrels of a fork because your tree is now evolutionarily disadvantaged'. It's a game and can be gamed; I'd prefer that the community shepherd users toward the pattern of stating problem + solution + functional test evidence + performance test evidence + security test evidence.
K3n.


Posted Nov 9, 2015 6:49 UTC (Mon) by jospoortvliet (guest, #33164) [Link]


And about that fork barrel: I'd argue it's the other way around. Google forked and lost already.


Posted Nov 12, 2015 6:25 UTC (Thu) by Garak (guest, #99377) [Link]


Posted Nov 23, 2015 6:33 UTC (Mon) by jospoortvliet (guest, #33164) [Link]


Posted Nov 7, 2015 3:20 UTC (Sat) by corbet (editor, #1) [Link]


So I have to confess to a certain amount of confusion. I could swear that the article I wrote said exactly that, but you've put a fair amount of effort into flaming it...?


Posted Nov 8, 2015 1:34 UTC (Sun) by PaXTeam (guest, #24616) [Link]


Posted Nov 6, 2015 22:52 UTC (Fri) by flussence (subscriber, #85566) [Link]


I personally think you and Nick Krause share opposite sides of the same coin. Programming ability and basic civility.


Posted Nov 6, 2015 22:59 UTC (Fri) by dowdle (subscriber, #659) [Link]


Posted Nov 7, 2015 0:16 UTC (Sat) by rahvin (guest, #16953) [Link]


I hope I'm wrong, but a hostile attitude isn't going to help anybody get paid. A time like this, when something you appear to be an "expert" at is in demand, is exactly when you show cooperation and willingness to participate, because it's an opportunity. I'm rather surprised that somebody wouldn't get that, but I'm older and have seen a few of these opportunities in my career and exploited the hell out of them. You only get a few of them in the average career, and a handful at the most.
Sometimes you have to invest in proving your skills, and this is one of those moments. It seems the kernel community may finally take this security lesson to heart and embrace it, as the article put it, a "mindcraft moment". This is an opportunity for developers who may want to work on Linux security. Some will exploit the opportunity and others will thumb their noses at it. In the end the developers who exploit the opportunity will prosper from it.
I feel old even having to write that.


Posted Nov 7, 2015 1:00 UTC (Sat) by josh (subscriber, #17465) [Link]


Maybe there's a chicken-and-egg problem here, but when seeking out and funding people to get code upstream, it helps to pick people and teams with a history of actually being able to get code upstream.
It's entirely reasonable to want to stay out of tree, preserving the ability to develop impressive and important security advances unconstrained by upstream requirements. That's work someone might also want to fund, if it meets their needs.


Posted Nov 7, 2015 1:28 UTC (Sat) by PaXTeam (guest, #24616) [Link]


Posted Nov 7, 2015 19:12 UTC (Sat) by jejb (subscriber, #6654) [Link]


You make this argument (implying you do research and Josh doesn't) and then fail to support it with any citation. It would be far more convincing if you gave up on the Onus probandi rhetorical fallacy and actually cited facts.
> case in point, it was *them* who suggested that they wouldn't fund out-of-tree work but would consider funding upstreaming work, except when pressed for the details, all i got was silence.
For those following along at home, this is the relevant set of threads:
http://lists.coreinfrastructure.org/pipermail/cii-discuss...
A quick precis is that they told you your project was bad because the code was never going upstream. You told them it was because of the kernel developers' attitude, so they should fund you anyway. They told you to submit a grant proposal, you whined more about the kernel attitudes, and eventually even your apologist told you that submitting a proposal might be the best thing to do. At that point you went silent, not vice versa as you suggest above.
> obviously i won't spend time writing up a begging proposal just to be told 'no sorry, we don't fund multi-year projects at all'. that's something one should be told in advance (or heck, it should be part of some public guidelines so that others will know the rules too).
You seem to have a fatally flawed grasp of how public funding works. If you don't tell people why you want the money and how you will spend it, they're unlikely to disburse. Saying "I'm good and I know the problem, now hand over the money" doesn't even work for most academics who have a solid reputation in the field; which is why most of them spend >30% of their time writing grant proposals.
> as for getting code upstream, how about you check the kernel git logs (minus the stuff that was not properly credited)?
jejb@jarvis> git log | grep -i 'Author: pax.*team' | wc -l
1
Stellar, I must say. And before you light off on those who have misappropriated your credit, please remember that getting code upstream on behalf of reluctant or incapable actors is a hugely valuable and time-consuming skill, and one of the reasons groups like Linaro exist and are well funded. If more of your stuff does go upstream, it will be because of the not inconsiderable efforts of other people in this area.
You now have a business model selling non-upstream security patches to customers. There's nothing wrong with that, it's a fairly standard first-stage business model, but it does rather depend on the patches not being upstream in the first place, calling into question the earnestness of your attempt to put them there.
Now here's some free advice in my field, which is helping companies align their businesses in open source: the selling-out-of-tree-patches route is always an eventual failure, particularly with the kernel, because if the functionality is that useful, it gets upstreamed or reinvented in spite of you, leaving you with nothing to sell. If your business plan B is selling expertise, you have to bear in mind that it will be a hard sell when you have no out-of-tree differentiator left and git history denies that you had anything to do with the in-tree patches. Indeed "crazy security person" will become a self-fulfilling prophecy. The advice? It was obvious to everyone else who read this, but for you: do the upstreaming yourself before it gets done for you. That way you have a legitimate historical claim to plan B, and you might actually have a plan A selling a rollup of upstream-track patches integrated and delivered before the distributions get around to it. Even your application to the CII couldn't then be dismissed on the grounds that your work wasn't going anywhere. Your alternative is to continue playing the role of Cassandra and probably suffer her eventual fate.


Posted Nov 7, 2015 23:20 UTC (Sat) by PaXTeam (guest, #24616) [Link]


> Second, for the potentially viable pieces this would be a multi-year
> full time job. Is the CII willing to fund projects at that level? If not
> we'd all end up with lots of unfinished and partially broken features.
please show me the answer to that question. without a definitive 'yes' there's no point in submitting a proposal, because that's the time frame that in my opinion the job will take, and any proposal with that requirement would be shot down immediately and be a waste of my time. and i stand by my claim that such simple basic requirements should be public information.
> Stellar, I must say.
"Lies, damned lies, and statistics". you do realize there's more than one way to get code into the kernel? how about you use your git-fu to find all the bugreports/suggested fixes that went in because of us? as for me specifically, Greg explicitly banned me from future contributions via af45f32d25cc1, so it's no wonder i don't send patches in directly (and that one commit you found that went in despite said ban is actually a very bad example because it's also the one that Linus censored for no good reason, which made me decide to never send security fixes upstream until that practice changes).
> You now have a business model selling non-upstream security patches to customers.
now? we've had paid sponsorship for our various stable kernel series for 7 years. i wouldn't call it a business model though, as it hasn't paid anyone's bills.
> [...]calling into question the earnestness of your attempt to put them there.
i must be missing something here, but what attempt? i've never in my life tried to submit PaX upstream (for all the reasons discussed already). the CII mails were exploratory, to see how serious that whole organization is about actually securing core infrastructure. in a sense i got my answers, there's nothing more to the story.
as for your free advice, let me reciprocate: complex problems don't solve themselves. code solving complex problems doesn't write itself. people writing code that solves complex problems are few and far between, as one can find out in short order. such people (domain experts) don't work for free, with few exceptions like ourselves. biting the hand that feeds you will only end you up in starvation.
PS: since you're so sure about kernel developers' ability to reimplement our code, maybe look at what parallel features i still maintain in PaX despite vanilla having a 'totally-not-reinvented-here' implementation, and try to understand the reason. or just look at all the CVEs that affected, say, vanilla's ASLR but didn't affect mine.
PPS: Cassandra never wrote code, i do. criticizing the sorry state of kernel security is a side project for when i'm bored or just waiting for the next kernel to compile (i wish LTO were more efficient).
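[The dispute above turns on the difference between an author-field count and commits that merely credit someone in the message body. A minimal sketch of that difference on a throwaway repo — the names, the "PaX Team" pattern, and the Reported-by trailer are purely illustrative, not an actual kernel-tree search:]

```shell
# Author counts (`--author`) miss commits that only credit a person or
# project in the commit message; `--grep` searches the message instead.
tmp=$(mktemp -d) && cd "$tmp" && git -c init.defaultBranch=main init -q .
git -c user.name=dev -c user.email=dev@example.com commit -q --allow-empty \
    -m "fix: harden copy_to_user bounds check" \
    -m "Reported-by: PaX Team <pageexec@example.com>"
git -c user.name=dev -c user.email=dev@example.com commit -q --allow-empty \
    -m "unrelated cleanup"
git log --oneline | wc -l                          # 2 commits total
git log -i --author='pax.*team' --oneline | wc -l  # 0: never the author
git log -i --grep='PaX Team' --oneline | wc -l     # 1: credited in the message
```

[On a real kernel tree one would point the same `--grep` at the actual credit strings; neither count settles who wrote what, which is rather the point of the argument.]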


Posted Nov 8, 2015 2:28 UTC (Sun) by jejb (subscriber, #6654) [Link]


In other words, you tried to define their process for them ... I can't think why that wouldn't work.
> "Lies, damned lies, and statistics".
The problem with ad hominem attacks is that they're singularly ineffective against a transparently factual argument. I posted a one-line command anyone can run to get the number of patches you've authored in the kernel. Why don't you post an equivalent that gives figures you like more?
> i've never in my life tried to submit PaX upstream (for all the reasons discussed already).
So the master plan is to demonstrate your expertise by the number of patches you haven't submitted? Great plan, world domination beckons, sorry that one got away from you, but I'm sure you won't let it happen again.


Posted Nov 8, 2015 2:56 UTC (Sun) by PaXTeam (guest, #24616) [Link]


what? since when does asking a question define anything? isn't that how we find out what someone else thinks? isn't that what *they* have that webform (never mind the mailing lists) for as well? in other words, you admit that my question was not actually answered.
> The problem with ad hominem attacks is that they're singularly ineffective against a transparently factual argument.
you didn't have an argument to begin with, that's what i explained in the part you carefully chose not to quote. i'm not here to defend myself against your clearly idiotic attempts at proving whatever you're trying to prove; as they say even in kernel circles, code speaks, bullshit walks. you can look at mine and decide what i can or cannot do (not that you have the knowledge to understand most of it, mind you). that said, there are clearly other, more capable people who have done so and decided that my/our work was worth something, else nobody would have been feeding off of it for the past 15 years and still counting. and as incredible as it may seem to you, life doesn't revolve around the vanilla kernel; not everybody's dying to get their code in there, especially when it means putting up with the kind of silly hostility on lkml that you have now also demonstrated here (it's ironic how you came to the defense of josh, who specifically asked people not to bring that infamous lkml style here. good job there James.). as for world domination, there are many ways to achieve it, and something tells me that you're clearly out of your league here since PaX has already achieved it. you're running code that implements PaX features as we speak.


Posted Nov 8, 2015 16:52 UTC (Sun) by jejb (subscriber, #6654) [Link]


I posted the one-line git script giving your authored patches in response to this original request by you (this one, just in case you've forgotten: http://lwn.net/Articles/663591/):
> as for getting code upstream, how about you check the kernel git logs (minus the stuff that was not properly credited)?
I take it, by the way you've shifted ground in the previous threads, that you wish to withdraw that request?


Posted Nov 8, 2015 19:31 UTC (Sun) by PaXTeam (guest, #24616) [Link]


Posted Nov 8, 2015 22:31 UTC (Sun) by pizza (subscriber, #46) [Link]


Please provide one that is not wrong, or at least less wrong. It will take less time than you've already wasted here.


Posted Nov 8, 2015 22:49 UTC (Sun) by PaXTeam (guest, #24616) [Link]


anyway, since it's you guys who have a bee in your bonnet, let's test your level of intelligence too. first figure out my email address and project name, then try to find the commits that say they come from there (it brought back some memories from 2004 already, how time flies! i'm surprised i actually managed to accomplish this much while explicitly not trying, imagine if i did :). it's an incredibly complex task, so by accomplishing it you'll prove yourself to be the top dog here on lwn, whatever that's worth ;).


Posted Nov 8, 2015 23:25 UTC (Sun) by pizza (subscriber, #46) [Link]


*shrug* Or don't; you're only sullying your own reputation.


Posted Nov 9, 2015 7:08 UTC (Mon) by jospoortvliet (guest, #33164) [Link]


Posted Nov 9, 2015 11:38 UTC (Mon) by hkario (subscriber, #94864) [Link]


I wouldn't either


Posted Nov 12, 2015 2:09 UTC (Thu) by jschrod (subscriber, #1646) [Link]


Posted Nov 12, 2015 8:50 UTC (Thu) by nwmcsween (guest, #62367) [Link]


Posted Nov 8, 2015 3:38 UTC (Sun) by PaXTeam (guest, #24616) [Link]


Posted Nov 12, 2015 13:47 UTC (Thu) by nix (subscriber, #2304) [Link]


Ah. I thought my memory wasn't failing me. Compare to PaXTeam's response to .
PaXTeam is not averse to outright lying if it means he gets to look right, I see. Maybe PaXTeam's memory is failing, and this apparent contradiction isn't a brazen lie, but given that the two posts were made within a day of each other I doubt it. (PaXTeam's complete unwillingness to assume good faith in others deserves some reflection. Yes, I *do* think he is lying by implication here, and doing so when there's almost nothing at stake. God alone knows what he's willing to stoop to when something *is* at stake. Gosh, I wonder why his fixes aren't going upstream very fast.)


Posted Nov 12, 2015 14:11 UTC (Thu) by PaXTeam (guest, #24616) [Link]


> and that one commit you found that went in despite said ban
also, somebody's ban doesn't mean it will translate into someone else's enforcement of that ban, as is clear from the commit in question. it's somewhat sad that it takes a security fix to expose the fallacy of this policy though. the rest of your pithy ad hominem speaks for itself better than i ever could ;).


Posted Nov 12, 2015 15:58 UTC (Thu) by andreashappe (subscriber, #4810) [Link]


Posted Nov 7, 2015 19:01 UTC (Sat) by cwillu (guest, #67268) [Link]


I don't see this message in my mailbox, so presumably it got swallowed.


Posted Nov 7, 2015 22:33 UTC (Sat) by ssmith32 (subscriber, #72404) [Link]


You're aware that it's entirely possible that everyone is wrong here, right?
That the kernel maintainers need to focus more on security, that the article was biased, that you're irresponsible to decry the state of security while doing nothing to help, and that your patchsets wouldn't help that much and are the wrong direction for the kernel? That just because the kernel maintainers aren't 100% right doesn't mean you are?


Posted Nov 9, 2015 9:50 UTC (Mon) by njd27 (guest, #5770) [Link]


I think you've got him backwards there. Jon is comparing this to Mindcraft because he thinks that despite being unpalatable to much of the community, the article might in fact contain quite a bit of truth.


Posted Nov 9, 2015 14:03 UTC (Mon) by corbet (editor, #1) [Link]


Posted Nov 9, 2015 15:13 UTC (Mon) by spender (guest, #23067) [Link]


"There are rumors of dark forces that drove the article within the hopes of taking Linux down a notch. All of this could properly be true"
Simply as you criticized the article for mentioning Ashley Madison despite the fact that in the very first sentence of the following paragraph it mentions it did not contain the Linux kernel, you cannot give credence to conspiracy theories without incurring the same criticism (in different words, you cannot play the Glenn Beck "I am simply asking the questions right here!" whose "questions" gas the conspiracy theories of others). Very similar to mentioning Ashley Madison for instance for non-technical readers about the prevalence of Linux on the planet, if you're criticizing the mention then mustn't likening a non-FUD article to a FUD article also deserve criticism, particularly given the rosy, self-congratulatory picture you painted of upstream Linux safety?
As the PaX Staff pointed out within the initial publish, the motivations aren't hard to know -- you made no mention at all about it being the fifth in an extended-running collection following a reasonably predictable time trajectory.
No, we didn't miss the general analogy you were trying to make, we simply do not assume you can have your cake and eat it too.
-Brad


Posted Nov 9, 2015 15:18 UTC (Mon) by karath (subscriber, #19025) [Link]


Posted Nov 9, 2015 17:06 UTC (Mon) by k3ninho (subscriber, #50375) [Link]


It is gracious of you not to blame your readers. I figure they're a fair target: there's that line about those ignorant of history being condemned to re-implement Unix -- as your readers are! :-)
K3n.


Posted Nov 9, 2015 18:43 UTC (Mon) by bojan (subscriber, #14302) [Link]


Unfortunately, I understand neither the "security" people (PaXTeam/spender) nor the mainstream kernel people when it comes to their attitude. I confess I have absolutely no technical capabilities on any of these matters, but if they had all decided to work together, instead of having endless and pointless flame wars and blame-game exchanges, a lot of this stuff would have been done already. And all the while everyone involved could have made another big pile of money on it. They all seem to want a better Linux kernel, so I've got no idea what the problem is. It seems that nobody is willing to yield any of their positions even a little bit. Instead, both sides seem bent on trying to insult their way into forcing the other side to give up. Which, of course, never works - it just causes more pushback.
Perplexing stuff...


Posted Nov 9, 2015 19:00 UTC (Mon) by sfeam (subscriber, #2841) [Link]


Posted Nov 9, 2015 19:44 UTC (Mon) by bojan (subscriber, #14302) [Link]


Take a scientific computational cluster with an "air gap", for instance. You'd probably want most of the security stuff turned off on it to gain maximum performance, because you can trust all the users. Now take a few billion mobile phones that may be difficult or slow to patch. You'd probably want to kill most of the exploit classes there, if those devices can still run reasonably well with most security features turned on.
So, it's not either/or. It's probably "it depends". But if the stuff isn't there for everyone to compile/use in the vanilla kernel, it will be harder to make it part of everyday decisions for distributors and users.


Posted Nov 6, 2015 22:20 UTC (Fri) by artem (subscriber, #51262) [Link]


How sad. This Dijkstra quote comes to mind immediately:
Software engineering, of course, presents itself as another worthy cause, but that is eyewash: if you carefully read its literature and analyse what its devotees actually do, you will discover that software engineering has accepted as its charter "How to program if you cannot."


Posted Nov 7, 2015 0:35 UTC (Sat) by roc (subscriber, #30627) [Link]


I guess that fact was too unpleasant to fit into Dijkstra's world view.


Posted Nov 7, 2015 10:52 UTC (Sat) by ms (subscriber, #41272) [Link]


Indeed. And the interesting thing to me is that once I reach that point, tests are no longer enough - model checking at a minimum, and really proofs are the only way forward. I'm no security expert, my field is distributed systems. I understand and have implemented Paxos, and I believe I can explain how and why it works to anybody. But I'm currently working on some algorithms combining Paxos with a bunch of variations on VectorClocks, reasoning about causality and consensus. No test is enough because there are infinite interleavings of events, and my head just couldn't cope with working on this either at the computer or on paper - I found I couldn't intuitively reason about this stuff at all. So I started defining the properties I needed and, step by step, proving why each of them holds. Without my notes and proofs I can't even explain to myself, let alone anybody else, why this thing works. I find it both entirely obvious that this could happen and utterly terrifying - the maintenance cost of these algorithms is now an order of magnitude higher.


Posted Nov 19, 2015 12:24 UTC (Thu) by Wol (subscriber, #4433) [Link]


> Indeed. And the interesting thing to me is that once I reach that point, tests are not enough - model checking at a minimum, and really proofs are the only way forwards.
Or are you just using the wrong maths? Hobbyhorse time again :-) but to quote a fellow Pick developer ... "I often walk into a SQL development shop and see that wall - you know, the one with the huge SQL schema that no-one fully understands on it - and wonder how I can easily hold the entire schema for a Pick database of the same or greater complexity in my head".
But it's easy - by education I'm a Chemist, by interest a Physical Chemist (and by career an unemployed programmer :-). And when I'm thinking about chemistry, I can ask myself "what is an atom made of" and think about things like the strong nuclear force. Next level up, how do atoms stick together and make molecules, and think about the electroweak force and electron orbitals, and how do chemical reactions happen. Then I think about how molecules stick together to make materials, and think about metals, and/or van der Waals, and stuff.
Point is, you need to *layer* stuff, and look at things, and say "how can I split things off into 'black boxes' so at any one level I can assume the other levels 'just work'". For example, with Pick, a FILE (table to you) stores a class - a set of similar objects. One object per RECORD (row). And, same as relational, one attribute per FIELD (column). Can you map your relational tables to reality so easily? :-)
Going back THIRTY years, I remember a story about a guy who built little computer crabs that could quite happily scuttle around in the surf zone. Because he didn't try to work out how to solve all the problems at once - each of his (incredibly puny by today's standards - this is the 8080/Z80 era!) processors was set to just process a little bit of the problem and there was no central "brain". But it worked ... Maybe you should just write a bunch of small modules to solve each individual problem, and let the final answer "just happen".
Cheers,
Wol


Posted Nov 19, 2015 19:28 UTC (Thu) by ksandstr (guest, #60862) [Link]


To my understanding, this is exactly what a mathematical abstraction does. For example in Z notation we might construct schemas for the various modifying ("delta") operations on the base schema, and then argue about preservation of formal invariants, properties of the result, and transitivity of the operation when chained with itself, or with the preceding aggregate schema composed of schemas A through O (for which these have already been argued).
The end result is a set of operations that, executed in arbitrary order, result in a set of properties holding for the result and outputs. Thus proving the formal design correct (with caveat lector regarding scope, correspondence with its implementation [though that can be proven as well], and read-only ["xi"] operations).
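As an illustrative sketch of the kind of proof obligation being described (the state, invariant, and operation here are invented for the example; real Z schemas carry more structure):

```latex
% Illustrative invariant-preservation obligation, Z-flavoured but simplified.
% State: a counter n bounded by m; invariant I: 0 <= n <= m.
% Delta operation Inc: precondition n < m; effect n' = n + 1, m' = m.
\[
  \forall\, n, m, n', m' \,.\;
    (0 \le n \le m)
    \;\land\; (n < m \;\land\; n' = n + 1 \;\land\; m' = m)
    \;\Longrightarrow\; (0 \le n' \le m')
\]
% Discharging this for each delta operation, plus the composition arguments,
% yields the "properties hold after execution in arbitrary order" result;
% Xi (read-only) operations preserve I trivially, since the state is unchanged.
```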


Posted Nov 20, 2015 11:23 UTC (Fri) by Wol (subscriber, #4433) [Link]


Looking through the history of computing (and probably plenty of other fields too), you'll probably find that people "can't see the wood for the trees" more often than not. They dive into the detail and completely miss the big picture.
(Medicine, an interest of mine, suffers from that too - I remember someone talking about the consultant wanting to amputate a gangrenous leg to save someone's life - oblivious to the fact that the patient was dying of cancer.)
Cheers,
Wol


Posted Nov 7, 2015 6:35 UTC (Sat) by dgc (subscriber, #6611) [Link]


https://www.youtube.com/watch?v=VpuVDfSXs-g
(LCA 2015 - "Programming Considered Harmful")
FWIW, I think that this talk is very relevant to why writing secure software is so hard..
-Dave.


Posted Nov 7, 2015 5:49 UTC (Sat) by kunitz (subscriber, #3965) [Link]


While we are spending millions on a multitude of security problems, kernel issues are not on our top-priority list. Honestly, I remember only once discussing a kernel vulnerability. The result of the analysis was that all our systems were running kernels older than the kernel that had the vulnerability.
But "patch management" is a real issue for us. Software must continue to work if we install security patches or update to new releases because of the end-of-life policy of a vendor. The revenue of the company depends on the IT systems running. So "not breaking user space" is a security feature for us, because a breakage of one component of our several tens of thousands of Linux systems will stop the roll-out of the security update.
Another problem is embedded software or firmware. These days almost all hardware systems include an operating system, often some Linux version, providing a full embedded network stack to support remote management. Regularly these systems don't survive our mandatory security scan, because vendors still haven't updated the embedded openssl.
The real challenge is to provide a software stack that can be operated in the hostile environment of the Internet, maintaining full system integrity for ten years or even longer, without any customer maintenance. The current state of software engineering will require support for an automated update process, but vendors must understand that their business model has to be able to finance the resources providing the updates.
Overall I'm optimistic: networked software is not the first technology used by mankind that caused problems which were addressed later. Steam engine use could lead to boiler explosions, but the "engineers" were able to reduce this risk significantly over a few decades.


Posted Nov 7, 2015 10:29 UTC (Sat) by ms (subscriber, #41272) [Link]


The following is all guesswork; I'd be keen to know if others have evidence one way or another on this: The people who learn how to hack into these systems via kernel vulnerabilities know that the skills they've learnt have a market. Thus they don't tend to hack in order to wreak havoc - indeed, on the whole, where data has been stolen in order to release and embarrass people, it _appears_ as if those hacks are through much simpler vectors. I.e. lesser-skilled hackers find there's a whole load of low-hanging fruit which they can get at. They're not being paid ahead of time for the data, so they turn to extortion instead. They don't cover their tracks, and they can often be found and charged with criminal offences.
So if your security meets a certain basic level of proficiency and/or your company isn't doing anything that puts it near the top of "companies we'd like to embarrass" (I think the latter is far more effective at keeping systems "safe" than the former), then the hackers that get into your system are likely to be skilled, paid, and probably not going to do much damage - they're stealing data for a competitor / state. So that doesn't hurt your bottom line - at least not in a way your shareholders will be aware of. So why fund security?


Posted Nov 7, 2015 17:02 UTC (Sat) by citypw (guest, #82661) [Link]


On the other hand, some effective mitigation at the kernel level would be very helpful in crushing a cybercriminal's/skiddie's attempt. If one of your customers running a futures trading platform exposes some open API to their clients, and the server has some memory corruption bugs that can be exploited remotely, then you know there are known attack techniques (such as offset2lib) that can help the attacker make the weaponized exploit a lot easier. Will you explain the failosophy "A bug is a bug" to your customer and tell them it would be OK? Btw, offset2lib is ineffective against PaX/Grsecurity's ASLR implementation.
For most commercial uses, more security mitigation within the software won't cost you more budget. You still have to do the regression test for each upgrade anyway.


Posted Nov 12, 2015 16:14 UTC (Thu) by andreashappe (subscriber, #4810) [Link]


Bear in mind that I specialise in external web-based penetration tests and that in-house assessments (local LAN) will probably yield different results.


Posted Nov 7, 2015 20:33 UTC (Sat) by mattdm (subscriber, #18) [Link]


I keep reading this headline as "a new Minecraft moment", and thinking that maybe they've decided to follow up the .NET thing by open-sourcing Minecraft. Oh well. I mean, security is good too, I guess.


Posted Nov 7, 2015 22:24 UTC (Sat) by ssmith32 (subscriber, #72404) [Link]


Posted Nov 12, 2015 17:29 UTC (Thu) by smitty_one_each (subscriber, #28989) [Link]


Posted Nov 8, 2015 10:34 UTC (Sun) by jcm (subscriber, #18262) [Link]


Posted Nov 9, 2015 7:15 UTC (Mon) by jospoortvliet (guest, #33164) [Link]


Posted Nov 9, 2015 15:53 UTC (Mon) by neiljerram (subscriber, #12005) [Link]


(Oh, and I was also still wondering how Minecraft had taught us about Linux performance - so thanks to the other comment thread that pointed out the 'd', not 'e'.)


Posted Nov 9, 2015 11:31 UTC (Mon) by ortalo (guest, #4654) [Link]


I would just like to add that in my view, there is a fundamental problem with the economics of computer security, which is especially visible currently. Two problems, even, possibly.
First, the money spent on computer security is usually diverted towards the so-called security "circus": quick, easy solutions that are mainly chosen just in order to "do something" and get better press. It took me a long time - maybe decades - to claim that no security mechanism at all is better than a bad mechanism. But now I firmly believe in this attitude and would rather take the risk knowingly (provided that I can save money/resources for myself) than take a bad approach at solving it (and have no money/resources left when I realize I should have done something else). And I find there are lots of bad or incomplete approaches currently available in the computer security field.
Those spilling our rare money/resources on ready-made useless tools should get the bad press they deserve. And we really need to enlighten the press on that, because it is not so easy to appreciate the efficiency of protection mechanisms (which, by definition, should prevent things from happening).
Second, and this may be newer and more worrying: the flow of money/resources is oriented in the direction of attack tools and vulnerability discovery much more than towards new security mechanisms.
This is especially worrying as cyber "defense" initiatives look more and more like the usual industrial projects aimed at producing weapons or intelligence systems. Moreover, bad ineffective weapons, because they are only working against our very vulnerable current systems; and bad intelligence systems as even basic school-level encryption scares them down to uselessness.
However, all the resources are for those adult teenagers playing the white hat hackers with not-so-difficult programming tricks or network monitoring or WWI-level cryptanalysis. And now also for the cyberwarriors and cyberspies that have yet to prove their usefulness entirely (especially for peace protection...).
Personally, I would happily leave them all the hype; but I will forcefully claim that they have no right whatsoever to any of the budget allocation decisions. Only those working on protection should. And yes, it means we should decide where to put those resources. We have to claim the exclusive lock for ourselves this time. (And I guess the PaX team could be among the first to benefit from such a change.)
While thinking about it, I would not even leave white-hat or cyber-guys any hype in the end. That is more publicity than they deserve.
I crave for the day I will read in the newspaper that: "Another of these ill-advised debutant programmer hooligans that pretend to be cyber-pirates/warriors modified some well-known virus program code exploiting a programmer mistake and managed nevertheless to bring one of those unfinished and bad-quality programs, X, that we are all obliged to use, to its knees, annoying millions of regular users with his unfortunate cyber-vandalism. All the security experts unanimously recommend that, once again, the budget of the cyber-command be retargeted, or at least leveled off, in order to bring more security engineer positions to the academic domain or civilian industry. And that X's producer, XY Inc., be liable for the eventual losses if proved to be unprofessional in this affair."


Hmmm - cyber-hooligans - I like the label. Although it doesn't apply well to the battlefield-oriented variant.


Posted Nov 9, 2015 14:28 UTC (Mon) by drag (guest, #31333) [Link]


The state of the 'software security industry' is a f-ng disaster. Failure of the highest order. There are massive amounts of money going into 'cyber security', but it's usually spent on government compliance and audit efforts. This means that instead of actually putting effort into correcting issues and mitigating future problems, the majority of the effort goes into taking existing applications and making them conform to committee-driven guidelines with the minimal amount of effort and changes.
Some level of regulation and standardization is absolutely needed, but lay people are clueless and are totally unable to discern the difference between somebody who has valuable experience and some company that has spent millions on slick marketing and 'native advertising' on large websites and computer magazines. The people with the money unfortunately have only their own judgment to rely on when buying into 'cyber security'.
> Those spilling our rare money/resources on ready-made useless tools should get the bad press they deserve.
There is no such thing as 'our rare money/resources'. You have your money, I have mine. Money being spent by some company like Redhat is their money. Money being spent by governments is the government's money. (You, actually, have far more control over how Walmart spends its money than over what your government does with theirs.)
> This is especially worrying as cyber "defense" initiatives look more and more like the usual industrial projects aimed at producing weapons or intelligence systems. Moreover, bad ineffective weapons, because they are only working against our very vulnerable current systems; and bad intelligence systems as even basic school-level encryption scares them down to uselessness.
Having secure software with strong encryption mechanisms in the hands of the public runs counter to the interests of most major governments. Governments, like any other for-profit organization, are primarily interested in self-preservation. Money spent on drone initiatives or banking auditing/oversight regulation compliance is FAR more valuable to them than trying to help the public have a secure mechanism for making phone calls. Especially when those secure mechanisms interfere with data collection efforts.
Unfortunately you/I/we cannot depend on some magical benefactor with deep pockets to sweep in and make Linux better. It's just not going to happen.
Companies like Redhat have been hugely helpful in spending resources to make the Linux kernel more capable.. however they are driven by the need to turn a profit, which means they need to cater directly to the sort of requirements established by their customer base. Customers for EL tend to be far more focused on reducing costs associated with management and software development than on security at the low-level OS.
Enterprise Linux customers tend to rely on physical, human-policy, and network security to protect their 'soft' interiors from being exposed to external threats.. assuming (rightly) that there is very little they can do to actually harden their systems. In fact, when the choice comes down to security vs convenience, I'm sure most customers will happily defeat or strip out any security mechanisms introduced into Linux.
On top of that, most Enterprise software is extremely bad. So much so that 10 hours spent on improving a web front-end will yield more real-world security benefits than 1000 hours spent on Linux kernel bugs for most businesses.
Even for 'normal' Linux users, a security bug in their Firefox NPAPI flash plugin is far more devastating and poses a massively larger threat than an obscure Linux kernel buffer overflow problem. It's just not really important for attackers to get 'root' to get access to the important data... usually all of which is contained in a single user account.
In the end it's up to people like you and myself to put the effort and money into improving Linux security. For both ourselves and other people.


Posted Nov 10, 2015 11:05 UTC (Tue) by ortalo (guest, #4654) [Link]


Spilling has always been the case, but now, to me and in computer security, most of the money appears spilled due to bad faith. And this is mostly your money or mine: either tax-funded governmental resources or corporate costs that are directly reimputed on the price of the goods/software we are told we are *obliged* to buy. (Look at corporate firewalls, home alarms or antivirus software marketing discourse.)
I think it is time to point out that there are several "malicious malefactors" around and that there is a real need to identify and sanction them and confiscate the resources they have somehow managed to monopolize. And I do *not* think Linus is among such culprits, by the way. But I think he may be among the ones hiding their heads in the sand about the aforementioned evil actors, while he probably has more leverage to counteract them, or oblige them to reveal themselves, than many of us.
I find that to be of brown-paper-bag level (though head-in-the-sand is somehow a new interpretation).
Ultimately, I think you are right to say that currently it is only up to us individuals to try honestly to do something to improve Linux or computer security. But I still think that I am right to say that this is not normal; especially while some very serious people get very serious salaries to distribute, randomly, some difficult-to-evaluate budgets.
[1] A paradoxical situation when you think about it: in a domain where you are first and foremost preoccupied by malicious people, everyone should have factual, transparent and honest behavior as the first priority in their minds.


Posted Nov 9, 2015 15:47 UTC (Mon) by MarcB (subscriber, #101804) [Link]


It even has a nice, seven-line BASIC-pseudo-code that describes the current situation and clearly shows that we are stuck in an endless loop. It does not answer the big question, though: how to write better software.
The sad thing is that this is from 2005, and all the things that were obviously stupid ideas 10 years ago have proliferated even more.


Posted Nov 10, 2015 11:20 UTC (Tue) by ortalo (guest, #4654) [Link]


Note: IMHO, we should investigate further why these dumb things proliferate and get so much support.
If it's only human psychology, well, let's fight it: e.g. Mozilla has shown us that they can do wonderful things given the right message.
If we are dealing with active people exploiting public credulity: let's identify and fight them.
But, more importantly, let's capitalize on this knowledge and secure *our* systems, to show off at a minimum (and more, later on, of course).
Your reference's conclusion is especially nice to me. "Challenge [...] the conventional wisdom and the status quo": that job I would gladly accept.


Posted Nov 30, 2015 9:39 UTC (Mon) by paulj (subscriber, #341) [Link]


That rant is itself a bunch of "empty calories". The converse of the items it rants about, which it's suggesting at some level, would be as bad or worse, and indicative of the worst kind of security thinking that has put a lot of people off. Alternatively, it's just a rant that adds little of value.
Personally, I think there's no magic bullet. Security is, and always has been in human history, an arms race between defenders and attackers, and one that is inherently a trade-off between usability, risks and costs. If there are mistakes being made, it's that we should probably spend more resources on defences that could block entire classes of attacks. E.g., why is the GRSec kernel hardening stuff so hard to apply to regular distros (e.g. there's no reliable source of a GRSec kernel for Fedora or RHEL, is there?)? Why does the entire Linux kernel run in one security context? Why are we still writing lots of software in C/C++, often without any basic security-checking abstractions (e.g. basic bounds-checking layers in between I/O and parsing layers, say)? Can hardware do more to provide security with speed?
No doubt there are lots of people working on "block classes of attacks" stuff; the question is, why aren't there more resources directed there?
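The "bounds-checking layer between I/O and parsing" idea mentioned above can be sketched as follows (hypothetical code, not from the comment; the `Cursor` class and record format are invented for illustration):

```python
# A tiny bounds-checked cursor: the parser only reads through this layer,
# so every access is range-checked before touching the raw buffer.

class Cursor:
    def __init__(self, data: bytes):
        self.data = data
        self.pos = 0

    def take(self, n: int) -> bytes:
        """Return the next n bytes, raising instead of over-reading."""
        if n < 0 or self.pos + n > len(self.data):
            raise ValueError("read past end of buffer")
        chunk = self.data[self.pos:self.pos + n]
        self.pos += n
        return chunk

    def u16_be(self) -> int:
        """Parse a big-endian 16-bit length field."""
        b = self.take(2)
        return (b[0] << 8) | b[1]

# A length-prefixed record parser never indexes the raw buffer itself,
# so a lying length field becomes a clean error, not an overflow.
def parse_record(data: bytes) -> bytes:
    c = Cursor(data)
    return c.take(c.u16_be())

assert parse_record(b"\x00\x03abc") == b"abc"
try:
    parse_record(b"\x00\xffabc")  # claims 255 bytes, only 3 present
except ValueError:
    pass  # the bounds layer rejects it before any out-of-range read
```

The point is architectural rather than language-specific: the same discipline can be applied in C/C++ by routing all parser reads through one checked accessor.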


Posted Nov 10, 2015 2:06 UTC (Tue) by timrichardson (subscriber, #72836) [Link]


> There are many reasons why Linux lags behind in defensive security technologies, but one of the key ones is that the companies making money on Linux have not prioritized the development and integration of those technologies.
This seems like a reason which is really worth exploring. Why is it so?
I think it's not obvious why this doesn't get more attention. Is it possible that the people with the money are right not to prioritise this more highly? After all, what interest do they have in an insecure, exploitable kernel? Where there's common cause, Linux development gets resourced. It has been this way for many years. If filesystems qualify for common interest, surely security does. So there doesn't seem to be any obvious reason why this issue doesn't get more mainstream attention, except that it really already gets enough. You could say that disaster has not struck yet, that the iceberg has not been hit. But it seems that the Linux development process is not overly reactive elsewhere.


Posted Nov 10, 2015 15:53 UTC (Tue) by raven667 (subscriber, #5198) [Link]


That is an interesting question: certainly that's what they really believe, regardless of what they publicly say about their commitment to security technologies. What is the actually demonstrated downside for kernel developers and the organizations that pay them? As far as I can tell, there isn't sufficient consequence for the lack of security to drive more investment, so we're left begging and cajoling unconvincingly.


Posted Nov 12, 2015 14:37 UTC (Thu) by ortalo (guest, #4654) [Link]


The key issue with this domain is that it pertains to malicious faults. So, when consequences manifest themselves, it is too late to act. And if the current commitment to a lack of voluntary strategy persists, we will oscillate between phases of relaxed inconscience and anxious paranoia.
Admittedly, kernel developers seem pretty resistant to paranoia. That is a good thing. But I am waiting for the day when armed land-drones patrol US streets in the vicinity of their kids' schools, for them to discover the feeling. The days are not so distant when innocent lives will unconsciously depend on the security of (Linux-based) computer systems; under water, that's already the case if I remember my last dive correctly, as well as in several current cars, according to some reports.


Posted Nov 12, 2015 14:32 UTC (Thu) by MarcB (subscriber, #101804) [Link]


Classic hosting companies that use Linux as an exposed front-end system are retreating from development, while HPC, mobile and "generic enterprise", i.e. RHEL/SLES, are pushing the kernel in their directions.
This is actually not that surprising: for hosting needs, the kernel has been "done" for quite some time now. Apart from support for current hardware, there is not much use for newer kernels. Linux 3.2, or even older, works just fine.
Hosting doesn't need scalability to hundreds or thousands of CPU cores (one uses commodity hardware), complex instrumentation like perf or tracing (systems are locked down as much as possible) or advanced power management (if the system doesn't have constant high load, it isn't making enough money). So why should hosting companies still make strong investments in kernel development? Even if they had something to contribute, the hurdles for contribution have become higher and higher.
For their security needs, hosting companies already use Grsecurity. I have no numbers, but some experience suggests that Grsecurity is basically a fixed requirement for shared hosting.
On the other hand, kernel security is almost irrelevant on the nodes of a supercomputer or on a system running large business databases that are wrapped in layers of middleware. And mobile vendors simply don't care.


Posted Nov 10, 2015 4:18 UTC (Tue) by bronson (subscriber, #4806) [Link]


Linking


Posted Nov 10, 2015 13:15 UTC (Tue) by corbet (editor, #1) [Link]


Posted Nov 11, 2015 22:38 UTC (Wed) by rickmoen (subscriber, #6943) [Link]


The assembled likely recall that in August 2011, kernel.org was root-compromised. I'm sure the system's hard drives were sent off for forensic examination, and we have all been waiting patiently for the answer to the most important question: what was the compromise vector? From shortly after the compromise was discovered on August 28, 2011, right through April 1st, 2013, kernel.org included this note at the top of the site News: 'Thanks to all for your patience and understanding during our outage and please bear with us as we bring up the different kernel.org systems over the next few weeks. We will be writing up a report on the incident in the future.' (Emphasis added.) That comment was removed (along with the rest of the site News) in a May 2013 edit, and there hasn't been -- to my knowledge -- a peep about any report on the incident since then. This has been disappointing. When the Debian Project found unexpected compromise of several of its servers in 2007, Wichert Akkerman wrote and posted an excellent public report on exactly what happened. Likewise, the Apache Foundation did the right thing with good public post-mortems of the 2010 Web site breaches. Ars Technica's Dan Goodin was still trying to follow up on the lack of a post-mortem on the kernel.org meltdown -- in 2013. Two years ago. He wrote: Linux developer and maintainer Greg Kroah-Hartman told Ars that the investigation has yet to be completed and gave no timetable for when a report might be released. [...] Kroah-Hartman also told Ars kernel.org systems were rebuilt from scratch following the attack. Officials have developed new tools and procedures since then, but he declined to say what they are. "There will be a report later this year about how the site [sic] has been engineered, but don't quote me on when it will be released as I am not responsible for it," he wrote.
Who's responsible, then? Is anybody? Anyone? Bueller? Or is it a state secret, or what? Two years since Greg K-H said there would be a report 'later this year', and four years since the meltdown: nothing yet. How about some information? Rick Moen
[email protected]


Posted Nov 12, 2015 14:19 UTC (Thu) by ortalo (guest, #4654) [Link]


Less seriously, note that if even the Linux mafia doesn't know, it must be the Venusians; they are notoriously stealthy in their invasions.


Posted Nov 14, 2015 12:46 UTC (Sat) by error27 (subscriber, #8346) [Link]


I know the kernel.org admins have given talks about some of the new protections that have been put into place. There are no more shell logins; instead everything uses gitolite. The different services are on different hosts. There are more kernel.org staff now. People are using two-factor authentication. Some other stuff. Do a search for Konstantin Ryabitsev.


Posted Nov 14, 2015 15:58 UTC (Sat) by rickmoen (subscriber, #6943) [Link]


I beg your pardon if I was somehow unclear: that was said to have been the path of entry to the machine (and I can readily believe that, as it was also the exact path of entry into shells.sourceforge.net, a few years prior, around 2002, and into many other shared Internet hosts for many years). But that's not what is of primary interest, and not what the long-promised forensic study would primarily concern: how did intruders escalate to root? To quote the kernel.org administrator in the August 2011 Dan Goodin article you cited: 'How they managed to exploit that to gain root access is currently unknown and is being investigated'. OK, folks, you have now had four years of investigation. What was the path of escalation to root? (Also, other details that would logically be covered by a forensic study, such as: whose key was stolen? Who stole the key?) That is the kind of post-mortem that was promised prominently on the front page of kernel.org, to reporters, and elsewhere for a long time (and then summarily removed as a promise from the front page of kernel.org, without comment, along with the rest of the site News section, and apparently dropped). It still would be appropriate to know and share that information. Especially the datum of whether the path to root privilege was or was not a kernel bug (and, if not, what it was). Rick Moen
[email protected]


Posted Nov 22, 2015 12:42 UTC (Sun) by rickmoen (subscriber, #6943) [Link]


I've completed a better overview of revelations that got here out quickly after the break-in, and suppose I've found the reply, through a leaked copy of kernel.org chief sysadmin John H. 'Warthog9' Hawley's Aug. 29, 2011 e-mail to shell customers (two days earlier than the general public was informed), plus Aug. Thirty first comments to The Register's Dan Goodin by 'two safety researchers who have been briefed on the breach': Root escalation was through exploit of a Linux kernel safety hole: Per the two security researchers, it was one both extremely embarrassing (vast-open access to /dev/mem contents including the running kernel's picture in RAM, in 2.6 kernels of that day) and known-exploitable for the prior six years by canned 'sploits, one of which (Phalanx) was run by some script kiddie after entry using stolen dev credentials. Different tidbits: - Site admins left the basis-compromised Web servers working with all companies nonetheless lit up, for multiple days. - Site admins and Linux Foundation sat on the data and failed to inform the public for those same a number of days. - Site admins and Linux Foundation have by no means revealed whether or not trojaned Linux source tarballs have been posted within the http/ftp tree for the 19+ days earlier than they took the location down. (Yes, git checkout was fantastic, but what about the thousands of tarball downloads?) - After promising a report for a number of years and then quietly removing that promise from the front web page of kernel.org, Linux Foundation now stonewalls press queries.
I posted my best attempt at reconstructing the story, absent a real report from insiders, to SVLUG's main mailing list yesterday. (Necessarily, there are surmises. If the people with the facts were more forthcoming, we would know what happened for certain.) I do have to wonder: If there's another embarrassing screwup, will we even be told about it at all? Rick Moen
[email protected]
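The /dev/mem exposure described above can be illustrated with a minimal probe (my sketch, not from the thread): on pre-STRICT_DEVMEM 2.6 kernels, a root process could simply open /dev/mem and read arbitrary physical memory, including the running kernel image, which is exactly the class of hole a canned exploit like Phalanx relied on. The function name `devmem_exposure` is mine, for illustration only.

```python
import errno

def devmem_exposure():
    """Report whether /dev/mem is readable from this process.

    On old kernels without STRICT_DEVMEM, a root process could read
    all physical RAM here; modern kernels restrict or remove it.
    """
    try:
        with open("/dev/mem", "rb") as f:
            f.read(4096)  # attempt to read the first page of physical memory
        return "readable"     # the historical wide-open case
    except OSError as e:
        if e.errno in (errno.EACCES, errno.EPERM):
            return "restricted"   # permission denied (non-root, or LSM policy)
        return "unavailable"      # e.g. ENOENT/EPERM-on-read under STRICT_DEVMEM

print(devmem_exposure())
```

On any reasonably modern system this prints "restricted" or "unavailable"; a "readable" result would indicate the 2011-era exposure.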


Posted Nov 22, 2015 14:25 UTC (Sun) by spender (guest, #23067) [Link]


Also, it's preferable to use live memory acquisition prior to powering off the system, otherwise you lose out on memory-resident artifacts that you can perform forensics on.
-Brad


How about the long-overdue post-mortem on the August 2011 kernel.org compromise?


Posted Nov 22, 2015 16:28 UTC (Sun) by rickmoen (subscriber, #6943) [Link]


Thanks for your comments, Brad. I'd been relying on Dan Goodin's claim of Phalanx being what was used to gain root, in the bit where he cited 'two security researchers who had been briefed on the breach' to that effect. Goodin also elaborated: 'Fellow security researcher Dan Rosenberg said he was also briefed that the attackers used Phalanx to compromise the kernel.org machines.' This was the first time I'd heard of a rootkit being claimed to be bundled with an attack tool, and I noted that oddity in my posting to SVLUG. That having been said, yeah, the Phalanx README doesn't specifically claim this, so maybe Goodin and his several 'security researcher' sources blew that detail, and nobody but kernel.org insiders yet knows the escalation path used to gain root. Also, it's preferable to use live memory acquisition prior to powering off the system, otherwise you lose out on memory-resident artifacts that you can perform forensics on.
Arguable, but it's a tradeoff; you can poke the compromised live system for state information, but with the disadvantage of leaving your system running under hostile control. I was always taught that, on balance, it's better to pull power to end the intrusion. Rick Moen
[email protected]


Posted Nov 20, 2015 8:23 UTC (Fri) by toyotabedzrock (guest, #88005) [Link]


Posted Nov 20, 2015 9:31 UTC (Fri) by gioele (subscriber, #61675) [Link]


With "something" you imply those who produce these closed supply drivers, right?
If the "client product corporations" just stuck to utilizing parts with mainlined open source drivers, then updating their merchandise could be a lot easier.


A new Mindcraft moment?


Posted Nov 20, 2015 11:29 UTC (Fri) by Wol (subscriber, #4433) [Link]


They have ring-zero privilege, can access protected memory directly, and cannot be audited. Trick a kernel into loading a compromised module and it's game over.
Even tickle a bug in a "good" module, and it's probably game over - in this case quite literally, as such modules are typically video drivers optimised for games ...