
Stupid question, how does a security audit work? Do the consultants just read through the code? Do they try to find security bugs like they do on bug bounty programs?


I'm not an expert in this field, but we recently did a security audit. The auditors get access to the code in order to evaluate it for vulnerabilities. In our Ruby application, they also check the gems we are using (albeit through open-source tools).
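A minimal sketch of what that kind of dependency check does conceptually: compare locked gem versions against known-vulnerable version ranges. The advisory data, gem names, and CVE identifier here are made up for illustration; real open-source tools such as bundler-audit pull advisories from a curated database.

```ruby
require "rubygems" # for Gem::Version / Gem::Requirement

# Hypothetical advisory data, keyed by gem name. A real tool would
# fetch this from an advisory database, not hard-code it.
ADVISORIES = {
  "examplegem" => { vulnerable: "< 2.1.4", id: "CVE-XXXX-YYYY" },
}

# Return the subset of locked gems whose version falls inside a
# known-vulnerable range.
def audit(locked_gems)
  locked_gems.filter_map do |name, version|
    adv = ADVISORIES[name] or next
    req = Gem::Requirement.new(adv[:vulnerable])
    { gem: name, advisory: adv[:id] } if req.satisfied_by?(Gem::Version.new(version))
  end
end

audit({ "examplegem" => "2.0.0", "othergem" => "1.0.0" })
#=> [{ gem: "examplegem", advisory: "CVE-XXXX-YYYY" }]
```

The interesting part is that this is purely mechanical, which is why it can be automated; the in-app audit described next is the part that needs a human.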

They also did an in-app audit where they tried to break the application however they could. Having access to the code helps with this.

When you get audited by a potential customer, it usually involves not having code access and trying to penetrate the app without it.


> When you get audited by a potential customer, it usually involves not having code access and trying to penetrate the app without it.

Is this in reference to on-prem / enterprise software and is this typical? I haven't heard of customers doing this but it certainly makes sense (might as well invest thousands to test before spending magnitudes more on the product itself only to find it having a huge security hole). Then again I'm not sure I've worked with potential customers who have access to do something like that.


We just signed a big deal with a Google subsidiary, and part of that deal required us to go through a third party penetration test (no code access).


Well today I learned. Thanks!


The first few chapters of the book, "The Art of Software Security Assessment: Identifying and Preventing Software Vulnerabilities" outlines a very meticulous process of reviewing source code for vulnerabilities in a professional manner.


Yes. The way it works is that two smart hackers go into the office each day and spend eight hours trying to think of as many creative ways of attacking the target application as possible. Nothing is off-limits except what is agreed up front, but you're obviously expected not to interfere with production operations. The client is generally expected to set up a testing environment substantially similar to production, but usually consultants just have to muddle through with whatever the client gives, which may not be (and usually isn't) populated with production data. As long as consultants are given the ability to enter data themselves, e.g. via admin accounts, this is fine, because data entry is their job. Hacking is basically large-scale data entry, and it's as boring as it sounds: very tedious, interspersed with excitement when you see an XSS popup window or figure out a clever way to get a reverse shell.

If, after two weeks of this, you found no medium-or-higher-severity vulnerabilities, you were generally considered not to be doing a very good job.

The secret of the industry is that at the end of this process, you are deemed secure. That's the point of the security audit. But if it's not a repeated process, it doesn't work. It may work for that particular version of the application, and it may substantially improve security, in that old vulnerabilities are found and fixed. Let me abandon this train of thought and put it another way:

This post is a press release saying that phpMyAdmin is secure. But that's not how this works. High-severity vulnerabilities are often found near the end of an audit. This is because the consultants have had time to become intimately familiar with the application. But the late stages of an audit are exactly when the consultant's time is mostly spent writing reports for the existing findings, and not doing pentesting. This means that two weeks is often just long enough to start finding serious vulns, since week one can be devoted to pentesting and week two is mostly reporting from Tuesday onward. But that "mostly reporting" process gets the consultant thinking about the application as they're doing the writeups, which -- you guessed it -- leads to realizing that there's something clever they could try. And when they try that clever thing, sometimes it yields a high-severity vuln. It's the opposite of a mechanical, thoughtless process.

That means your results will vary depending on who, specifically, is doing the auditing. If you run your application through the consulting process twice -- same version, same staging data, same everything -- it's likely that you'll get wildly different results, because the pentesters are different people.

It has to be an ongoing process in order to be effective. And it can be highly effective. It just costs so much that only the most massive companies can afford it.

That's not to say this audit wasn't effective. It's possible that whoever did the audit found substantially everything. But it was interesting to discover how often this was not the case, in a "How'd they miss this last time?" sort of way.


They're a very worthwhile process for SMEs and companies that haven't had one before. At one of my previous employers (I won't say which), we marvelled at the ways in which the contractor was able to escalate privileges by editing a form to change their user level from the "3" or "4" in the drop-down to "1".

The fact that they got full rights so quickly really drove home the need for security to be a feature and for code reviews.
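The flaw described above is the classic pattern of trusting a client-supplied privilege field. A minimal sketch of what the contractor likely found, and the server-side fix; the handler names and data shapes are hypothetical, not from the audited application:

```ruby
# Vulnerable: the handler reads the privilege level from the request,
# so editing the drop-down's submitted value from "3" to "1" grants
# admin rights to anyone who can tamper with the form.
def update_profile_vulnerable(params)
  { user_level: Integer(params["user_level"]) } # attacker-controlled
end

# Fixed: the privilege level comes from the authenticated user's
# server-side record; the client-supplied value is ignored outright.
def update_profile_fixed(params, current_user)
  { user_level: current_user[:user_level] } # server-side source of truth
end

forged = { "user_level" => "1" }    # tampered form submission
session_user = { user_level: 3 }    # what the server actually knows

update_profile_vulnerable(forged)          #=> { user_level: 1 } — escalated
update_profile_fixed(forged, session_user) #=> { user_level: 3 } — unchanged
```

The general rule the fix illustrates: never derive authorization state from anything the client can edit, drop-downs and hidden fields included.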

Now the cynic would point out that standard pen testers would have found that, and maybe they would, but the speed at which the contractor could find these issues and then survey the full breadth of the attack surface compared favourably with pen testers. They could also explain the problem back in terms of code and how it should be rewritten, rather than just reporting "found rights escalation in form x" and leaving the client to deal with it, perhaps improperly.

Overall, I was far more impressed watching an auditor do a few days' work than any of the regular pen-testing companies I've seen since, who mostly seem to point fuzzers at whatever endpoints they find.


> Now the cynical would point out that standard pen testers would have found that, and maybe they would, but the speed at which a contractor could find these issues and then see the full breadth of the surface compared to pen testers was great.

What's the difference between a contractor and a pen tester?

I consider one a function of how you are employed and the other a function of role, i.e. the two overlap and are not directly comparable.


I think eterm is comparing security auditors (with code access) to pen-testers (no code access).


Good question. It can be all or none of the above. Here's what happens at a high level:

Once a company decides it needs a security assessment performed on an application, it engages with a consulting firm. Consulting firms generally offer a variety of services, from web and mobile application penetration tests, to cryptanalysis (implementation and design), to reverse engineering and binary penetration testing, with source code audits sprinkled throughout (or as standalone assessments). Let's assume they move forward with a web application assessment.

The company decides if it wants a source code audit, a penetration test or both. The most comprehensive assessments will include source code and unrestricted access to a staging environment that the consultants do not have to worry about destroying. However, the company could also decide it doesn't want to hand over the code (common in things like sensitive financial applications or in applications with protective developers). I've worked on many assessments where I had no source code - this is called a "black-box" assessment.

Conversely, an assessment might consist of a source code audit with no penetration test! This is less common, but it's particularly suited for engagements where the developers are fairly sure they've eliminated the most common issues and they are really focused on obscure errors, logic flaws and race conditions.

It really depends on the type of security audit. You can have more exotic ones, like black-box cryptanalysis where a company hands Riscure a proprietary payment mechanism and there is heavy reverse engineering and side channel analysis. It can also be very vanilla, like the web application penetration tests that bug bounty programs attempt to simulate. Companies decide what they are going to do based on their application's profile and their goals.

Putting this all together, these are the stages of a traditional security audit from a high-quality firm:

Step 1: A company receives several proposals and decides which company to move forward with based on which statement of work most closely matches their security goals, timing, budget and desired expertise. Then they decide on a start date.

Step 2: Representatives from the company (generally a technical manager, a security engineer or manager if the company has one, and at least one developer) have a conference call with representatives from the security firm (generally, the security consultants performing the assessment, an account executive and a technical manager) to "kick off" the assessment with technical and logistical engagement planning. Things like "How will we access the staging environment?" and "Is there anything off-limits?" are fleshed out here, along with reminders about scope and scheduling.

Step 3: Things like source code, infrastructure/application/API documentation, PGP keys, etc. are securely exchanged and verified. This comes out of a list of mutual action items from the kick-off call.

Step 4: The actual assessment happens, generally in a period of one to three weeks. I've never been involved in an assessment less than one week long, and assessments longer than four weeks usually need to re-scope or they become monolithic and difficult to coordinate. Progress reports with findings and testing data are securely sent to the company from the security firm.

Step 5: The assessment is finished and a final deliverable is securely sent to the company from the security firm. An optional re-test assessment might happen a few weeks or months later to confirm if the findings have been satisfactorily resolved.

This is based on my knowledge of having worked in security consultancies, engaging with them as an in-house security engineer and running my own consulting firm.


Thanks for your response, great to see how it works from a business side. I'm going to use this opportunity and ask you another question.

What happens if after 2-3 weeks of consulting you don't find any "high impact" issue? Are your customer angry, happy?


That almost never happens. I can count on one hand the number of times it has happened in ~100 past assessments. Generally speaking, the maxim "There is no such thing as a secure system" is valid. Competent security consultants should be capable of finding something actionable in all but the most exceptional circumstances if you throw them into a room to search for vulnerabilities for a few weeks.

That said, I have had assessments with no findings. Generally that's because there were only informational observations that couldn't be escalated to vulnerabilities in the given assessment time, or because the application had a very security-conscious development team. If it happens, it might be a sign that the application is not yet mature enough to warrant an assessment, or that it's simply too trivial to analyze meaningfully. It can also mean that the consultant was not sufficiently competent to perform the assessment.

To give an example, I worked at a large consultancy where we had a giant public company hold us on retainer to perform assessments on "brochure websites" - they were not interactive at all. There wasn't even a login interface. The company wanted to check off that it had security assessments performed on all webpages it hosted, but realistically there were never any actionable findings. (This is about as much detail as I can give because it's NDA'd, but it's not the sort of thing I'd take on in my own practice).

A more recent example is a YC company I worked with a few weeks ago. Their development team is very well educated on security matters. While I found security vulnerabilities, there were no high severity findings because the quality of peer review and paranoid development was very high there. They were very familiar with every Ruby/Rails gotcha and pretty thoroughly avoided them.

To answer your question, I've never had anyone "angry" at me for not finding anything. They're not "happy", but as long as they can verify that the work they paid for was done, they aren't angry. It doesn't happen often, and when it does happen, the consultant should provide enough information to demonstrate that competent work was done.

However, I personally don't feel very good about it. My understanding is that competent security engineers in general are not happy about it. It is much more likely that the assessment either shouldn't have happened (because the application is not mature or complex enough) or that the consultant was simply insufficiently competent than that the application is really completely secure.


The smart clients are usually unhappy, unless they've set expectations in advance that you're not expected to find anything (which is rare).

As consultants, you are always very unhappy when your project ends with no sev:hi findings. That, too, is rare.


I can't upvote this comment enough. That's an excellent answer to the question.



