Good question. It can be all or none of the above. Here's what happens at a high level:
Once a company decides it needs a security assessment performed on an application, it engages with a consulting firm. Consulting firms generally offer a variety of services, from web and mobile application penetration tests, to cryptanalysis (implementation and design), to reverse engineering and binary penetration testing, with source code audits sprinkled throughout (or as standalone assessments). Let's assume they move forward with a web application assessment.
The company decides if it wants a source code audit, a penetration test or both. The most comprehensive assessments will include source code and unrestricted access to a staging environment that the consultants do not have to worry about destroying. However, the company could also decide it doesn't want to hand over the code (common in things like sensitive financial applications or in applications with protective developers). I've worked on many assessments where I had no source code - this is called a "black-box" assessment.
Conversely, an assessment might consist of a source code audit with no penetration test! This is less common, but it's particularly suited for engagements where the developers are fairly sure they've eliminated the most common issues and they are really focused on obscure errors, logic flaws and race conditions.
It really depends on the type of security audit. You can have more exotic ones, like black-box cryptanalysis where a company hands Riscure a proprietary payment mechanism and there is heavy reverse engineering and side channel analysis. It can also be very vanilla, like the web application penetration tests that bug bounty programs attempt to simulate. Companies decide what they are going to do based on their application's profile and their goals.
Putting this all together, these are the stages of a traditional security audit from a high-quality firm:
Step 1: A company receives several proposals and decides which company to move forward with based on which statement of work most closely matches their security goals, timing, budget and desired expertise. Then they decide on a start date.
Step 2: Representatives from the company (generally a technical manager, a security engineer or manager if the company has one, and at least one developer) have a conference call with representatives from the security firm (generally the security consultants performing the assessment, an account executive and a technical manager) to "kick off" the assessment with technical and logistical engagement planning. Things like "How will we access the staging environment?" and "Is there anything off-limits?" are fleshed out here, along with reminders about scope and scheduling.
Step 3: Things like source code, infrastructure/application/API documentation, PGP keys, etc. are securely exchanged and verified. This comes out of a list of mutual action items from the kick-off call.
Step 4: The actual assessment happens, generally in a period of one to three weeks. I've never been involved in an assessment less than one week long, and assessments longer than four weeks usually need to re-scope or they become monolithic and difficult to coordinate. Progress reports with findings and testing data are securely sent to the company from the security firm.
Step 5: The assessment is finished and a final deliverable is securely sent to the company from the security firm. An optional re-test assessment might happen a few weeks or months later to confirm if the findings have been satisfactorily resolved.
This is based on my knowledge of having worked in security consultancies, engaging with them as an in-house security engineer and running my own consulting firm.
That almost never happens. I can count on one hand the number of times it has happened in ~100 past assessments. Generally speaking, the maxim "There is no such thing as a secure system" holds. Competent security consultants should be capable of finding something actionable in all but the most exceptional circumstances if you throw them into a room to search for vulnerabilities for a few weeks.
That said, I have had assessments with no findings. This is generally because the only observations were informational ones that couldn't be escalated to vulnerabilities in the given assessment time, or because the application has a very security-conscious development team. If it happens, it might be a sign that the application is not yet mature enough to warrant an assessment, or that it's simply too small to meaningfully analyze. It can also mean that the consultant was not sufficiently competent to perform the assessment.
To give an example, I worked at a large consultancy where a giant public company held us on retainer to perform assessments on "brochure websites" - they were not interactive at all. There wasn't even a login interface. The company wanted to check off that it had security assessments performed on all webpages it hosted, but realistically there were never any actionable findings. (This is about as much detail as I can give because it's NDA'd, but it's not the sort of thing I'd take on in my own practice.)
A more recent example is a YC company I worked with a few weeks ago. Their development team is very well educated on security matters. While I found security vulnerabilities, there were no high severity findings because the quality of peer review and paranoid development was very high there. They were very familiar with every Ruby/Rails gotcha and pretty thoroughly avoided them.
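To illustrate the kind of gotcha a team like that has internalized (a generic Ruby example, not anything from that engagement): Ruby's `^` and `$` regex anchors match line boundaries rather than string boundaries, so an input-format validation written with them can be bypassed by embedding a newline.

```ruby
# Classic Ruby/Rails gotcha: ^ and $ anchor to *lines*, not the whole
# string. A "letters and digits only" check written with ^/$ passes
# input that merely contains one conforming line.
# (Illustrative sketch; the payload value is made up.)

LOOSE  = /^[a-z0-9]+$/    # looks safe, but is line-anchored
STRICT = /\A[a-z0-9]+\z/  # anchors to the entire string

payload = "safe\n<script>alert(1)</script>"

puts payload.match?(LOOSE)   # true  - the "safe" line satisfies ^...$
puts payload.match?(STRICT)  # false - the whole string must conform
```

Rails even flags multiline anchors in format validations for exactly this reason; well-reviewed codebases habitually reach for `\A`/`\z`.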
To answer your question, I've never had anyone "angry" at me for not finding anything. They're not "happy", but as long as they can verify that the work they paid for was done, they aren't angry. It doesn't happen often, and when it does happen, the consultant should provide enough information to demonstrate that competent work was done.
However, I personally don't feel very good about it. My understanding is that competent security engineers in general are not happy about it. It is much more likely that the assessment either shouldn't have happened (because the application is not mature or complex enough) or that the consultant was simply insufficiently competent than that the application is really completely secure.