I'm on a password kick lately. I've been reading about hash tables in PowerShell, and they are so cool. So, I wrote a script that counts the number of times each unique password appears in a text file containing usernames and passwords. It also sorts the passwords from the most used to the least used, using the GetEnumerator method and Sort-Object. Keep in mind, everyone, I'm still new to PowerShell. Someone may have already done this, or there may be more efficient methods; this is just what I've learned to this point in time.
#Make a hash table to hold the passwords and count.
$passwordscount = @{}
#$password will hold each password line as the ForEach loop below runs; it doesn't need to be declared ahead of time.
#Get the content of the passwords file and add it to a list
$passwordslist = Get-Content plain.txt
#Tell powershell the column number of the plain.txt file that you want in the hash table.
#The computer counts from 0, so it will be one less than the column you want.
#For example, plain.txt has a username and password separated by a colon.
#You want column 1. Column 0 is the username.
$passwordscolumnnumber = 1
#For each password in the passwords list, do the following
ForEach($password in $passwordslist){
#Split each line of the plain.txt file into an array, splitting at the colon
#The username is element 0, the password is element 1
$passwordsarray = $password.split(':')
#Separate the password from the username so it can be counted
$passwordfield = $passwordsarray[$passwordscolumnnumber]
#If this is the first occurrence of a password, add it to the hash table and put 1 in the count
If ($passwordscount[$passwordfield] -eq $null){
$passwordscount[$passwordfield] = 1
}
Else{
#If a password has already been seen, add 1 to its count
$passwordscount[$passwordfield]++
}
}
$passwordscount.GetEnumerator() | Sort-Object -Property Value -Descending | Out-File passwordscount.txt
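If you want to try the script out, here's a quick way to make a sample plain.txt (the usernames and passwords here are invented):
#Create a sample plain.txt to test with:
Set-Content plain.txt @'
alice:Password1
bob:Summer2018
carol:Password1
'@
#After running the script, passwordscount.txt should list Password1 with a count of 2
#and Summer2018 with a count of 1.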
Thursday, February 22, 2018
PowerShell: Convert JtR Formatted Text File to Hashcat LM or NTLM
In my recent post about cracking domain passwords with hashcat, I said that you could probably convert from JtR format using PowerShell. By JtR format, I mean username:uid:lm hash:ntlm hash on each line of a text file. Someone corrected me and pointed out that this is actually pwdump format. I learn new things every day. They also cleared up a misunderstanding I had about how the LM hashes work. Thanks again, loyal reader!
Update: Someone stated that there is a switch/flag so that JtR/pwdump formatted hashes can be used in hashcat. Does anyone happen to know what that switch/flag is? I haven't had luck finding it.
I think that I've written a script that may convert JtR/pwdump formatted files to hashcat LM or NTLM format.
#The following hash means that the lm hash is blank. This happens when LM storage is
#disabled or the password is longer than 14 characters.
$blanklmhash = "aad3b435b51404eeaad3b435b51404ee"
#Create arrays to hold the lm hashes, ntlm hashes, and ntlm hashes with lm hashes.
$lmhashes = @()
$ntlmhashes = @()
$ntlmhasheswlm = @()
#Get the JtR formatted hashes from a text file.
$hasheslist = Get-Content hashes.txt
#For each JtR formatted hash in the hashes list, do the following
ForEach($JtRhash in $hasheslist){
#Split the JtRhash into an array of four pieces. Element [0] of the array is the username.
#Element[1] of the array is the uid. Element [2] of the array is the lm hash. Element [3] of
#the array is the ntlm hash.
$JtRhashArray = $JtRhash.split(':')
#If the LM hash is that blank hash in the hashes file, it means that LM is either disabled or
#the password is longer than 14 characters - LM can't handle more than 14 characters. So, add
#the LM hashes that are not that blank hash to the $lmhashes array, and add their ntlm
#counterparts to the $ntlmhasheswlm array. I'm doing this because lm-cracked passwords
#come out uppercase because of how lm works. The cracked lm passwords can be used in
#a dictionary/rules attack against their ntlm counterparts - making it faster to crack the
#ntlm passwords associated with them.
If ($JtRhashArray[2] -ne $blanklmhash){
$lmhashes += $JtRhashArray[0] + ":" + $JtRhashArray[2]
$ntlmhasheswlm += $JtRhashArray[0] + ":" + $JtRhashArray[3]
}
#Otherwise, add the ntlm hash to the $ntlmhashes array.
Else{
$ntlmhashes += $JtRhashArray[0] + ":" + JtRhashArray[3]
}
}
#output the lm hashes, ntlm hashes, and ntlm hashes with lm hashes to files.
$lmhashes | Out-File lmhashes.txt
$ntlmhashes | Out-File ntlmhashes.txt
$ntlmhasheswlm | Out-File ntlmhasheswlm.txt
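To sanity-check the script, you could create a sample hashes.txt first (the account and hashes below are made up):
#Create a one-line sample hashes.txt in pwdump format:
Set-Content hashes.txt 'jsmith:1001:aabbccddeeff00112233445566778899:99887766554433221100ffeeddccbbaa'
#After running the script, lmhashes.txt should contain jsmith:aabbccddeeff00112233445566778899
#and ntlmhasheswlm.txt should contain jsmith:99887766554433221100ffeeddccbbaa.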
Hashcat: Cracking Windows Domain Hashes
I'm learning how to use hashcat and sharing some of my experience with it. I've used JtR and Cain and Abel; hashcat I've used maybe once or twice. I'm not going to go into depth about how to dump the hashes - that's not the purpose of this post. If you're interested in that, Rapid7, the creator of Metasploit, has some good tutorials on using their modules to dump password hashes from Domain Controllers.
First, I had to manipulate the data I had gathered so that hashcat could understand it. Many of the modules in Metasploit dump the hashes in JtR (John the Ripper) format. I've seen some that dump hashes in hashcat format, but not many. Also note that I may be missing some settings in Metasploit, because I'm still new to using it. This may still be useful for other purposes.
For Windows domain hashes, JtR format looks like the following:
username:uid:lm hash:ntlm hash
Note: There is a blank hash for lm hashes. That blank hash is aad3b435b51404eeaad3b435b51404ee. LM passwords are really easy to crack.
Someone was kind enough to explain the LM hash being that blank value: it means the password is greater than 14 characters (or LM hash storage is disabled). Thanks, loyal reader!
Sometimes it's useful to crack the LM passwords first - if they are available - and then crack the NTLM passwords using a dictionary consisting of the cracked LM passwords plus what are known as mangling rules in JtR.
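Here's a rough sketch of that two-step idea as hashcat commands - a sketch only, the file names are placeholders, and I'm assuming one of the toggle rules files that ships in hashcat's rules folder for flipping letter case:
hashcat64.exe -a 0 -m 3000 --username lm_hashes.csv dictionary.txt
hashcat64.exe -m 3000 --show --username --outfile-format 2 -o lm_cracked.txt lm_hashes.csv
hashcat64.exe -a 0 -m 1000 --username ntlm_hashes.csv lm_cracked.txt -r rules\toggles1.rule
The first command cracks the LM hashes, the second dumps the cracked plains to a wordlist, and the third tries those words (with case toggled by the rules) against the NTLM hashes.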
The format that hashcat understands is "username:lm hash" or "username:ntlm hash". Note: this works as long as the --username switch is used on the hashcat command line; otherwise, you'll get an error about the hash length.
I went about converting it the long way. There are much easier ways. I imagine I could use PowerShell to remove the uid and one or the other of the password hash types, or I could have simply used officetohashcat.py.
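For example, a PowerShell one-liner along those lines might look like this - an untested sketch, assuming the pwdump formatted hashes are in hashes.txt:
#Split each line on colons and keep only username (field 0) and ntlm hash (field 3):
Get-Content hashes.txt | ForEach-Object { $fields = $_.Split(':'); $fields[0] + ":" + $fields[3] } | Out-File ntlm_hashes.txt
Swap $fields[3] for $fields[2] to keep the LM hashes instead of the NTLM hashes.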
Use CSV with Hashcat (use at your own risk - I haven't thoroughly tested this, but it seems to work fine so far)
I changed the List Separator in the Region settings in the Control Panel to a colon (:) instead of a comma. Now when I save files as CSV, they come out as colon separated lists instead of comma separated lists.
I used Microsoft Excel 2016 to separate the data for me - I like the sorting and filtering options in Excel. To bring in a delimited text file (in my case, colon delimited), go to the Data tab > Get External Data > From Text File. Select the text file that contains the hashes and follow the directions in the wizard. At one point it will ask how the file is delimited: choose Other, then type a colon.
Once the data was imported into Excel, I sorted out the LM passwords. (I could tell which ones were real LM hashes because they weren't the blank LM hash - the hashes were different.) I deleted the uid and NTLM columns and saved that as lm_hashes.csv.
Then I separated out the NTLM hashes, deleted the uid and LM columns, and saved that as ntlm_hashes.csv.
If either of these files is opened in Notepad, it should be colon delimited. It's worth checking before trying to crack them.
Now the fun begins. :)
Hashcat takes some getting used to. It is picky about the order of arguments, the attack mode, the format of the hashes, the type of attack, etc.
Hashcat Dictionary Attack
-a 0 : straight mode - this takes candidate passwords from a dictionary (wordlist) and tries each one against the hashes
-m : the type of password hash. 1000 is NTLM, 3000 is LM, 900 is MD4
-o : an output file for the cracked hashes. If -o is not specified, the cracked hashes/passwords will be in hashcat.potfile. Note: if you want to save the hashes in a certain format, you can do that after cracking them with --show and other options.
This assumes hashcat is in the PATH. Otherwise, specify a full path.
hashcat64.exe -a 0 -m 1000 ntlm_hashes.csv dictionary.txt -o ntlm_cracked.txt
Note: You can specify more than one dictionary - just add the path/file after the first one, as in the example below.
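For example (dictionary2.txt here is just a placeholder for a second wordlist):
hashcat64.exe -a 0 -m 1000 ntlm_hashes.csv dictionary.txt dictionary2.txt -o ntlm_cracked.txt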
Hashcat Brute-Force (Mask Attack)
-a 3 : brute-force (mask) attack
-1 : user-defined character set 1. ?u is uppercase letters, ?d is digits, ?s is symbols, so ?1 in the mask matches any of those.
--increment : don't just try candidates at the full length of the mask. Try 1 character, 2 characters, 3 characters, and so on with the same user-defined character set. If you set a large mask - more than about 6 characters - you may get an error about an integer overflow detected, which means hashcat can't handle that mask. It may be wise not to use a large mask anyway, because those hashes may not be cracked in your lifetime. I always use --increment. If it ever gets to a point where it estimates a long time - weeks or months - to crack, I don't do it. There are better ways, like using rules to manipulate dictionary words.
hashcat64.exe -a 3 -m 1000 -1 ?u?d?s ntlm_hashes.csv -o ntlm_cracked.txt ?1?1?1?1?1?1?1 --increment
There is a -p option which specifies a different delimiter for the hash file/output file, but I've not had good luck with it. I recommend having your data the way it needs to be before putting it into hashcat.
Show Loot (i.e., the Cracked Passwords)
hashcat64.exe -m 1000 --username --show ntlm_hashes.csv
Note: -m is the hash type. It must match the type of hashes that were cracked.
--show takes the original hash file and looks up the cracked results in the potfile (hashcat.potfile by default, usually stored in the same directory as the hashcat binary). If you added an output file with -o when you were cracking, the passwords are already in that path/filename, but hashcat still saves them to the potfile as well, so --show still works. If your potfile is somewhere else, point at it with --potfile-path.
Show the Cracked Hashes in a Certain Format
hashcat64.exe -m 1000 --show --potfile-path hashcat.potfile --username -o ntlm_cracked.txt --outfile-format 2 C:\Users\user\ntlm_hashes.csv
--potfile-path : specifies where the loot is.
--username : specifies to ignore usernames. This must be added if there are usernames in the original file.
-o : specifies an output file.
--outfile-format 2 : in this case, shows the cracked hashes as plain text passwords only. If the original file has usernames (and --username is used), the output file will have user:password.
C:\Users\user\ntlm_hashes.csv : specifies the original file that contains the hashes.
I will add how to do cracking with rules later. I haven't experimented with that functionality just yet.
Thursday, February 15, 2018
K For Troubleshooting
People think of Kibana as this awesome data visualization and exploration tool. What does that even mean? Considering the breadth of logs that can be fed into Kibana, that can mean many things.
Today, I'm going to explore a real use that may not normally be considered: troubleshooting.
Fortigate VPN tunnels, for example, have fairly explicit error logs, but if you aren't used to reading them, they can be annoying to understand. For example, "vpn SA peer proposal does not match local policy" means, in other words, "Hey, your firewall rules may be blocking this traffic." At least some are easily understood, like "probable preshared key mismatch".
If you have these logs going into the ELK stack, you can use Kibana to find these errors for you. All you would have to do is look at a Visualization or Dashboard when you arrive at work and periodically throughout the day, fix the VPNs before anyone even knows there is a problem, and have an awesome day - no fighting those fires when some random person finally mentions them.
In order to show only the down VPNs, on the Discover page I filtered to just the firewall logs and searched for "probable preshared key mismatch". I saved that search.
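The saved search itself was just a query-string search along these lines - note that the field names here are my guesses, since yours will depend on how your Logstash filters name things:
type:fortigate AND message:"probable preshared key mismatch"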
When I created the visualization, I chose the option to create the visualization from a saved search, and selected the "Probable Preshared Key Mismatch" saved search.
I used a data table because if you're working in a large environment, there might not just be a couple of VPN tunnels down, there could be a lot of them.
For the metric, I used count - this tells the number of times that this error was seen per bucket.
For the bucket, I used a Terms aggregation on VPNDeviceName. For the sub-bucket, I used a Terms aggregation on VpnTunnelName, so that we know which specific tunnels were down. (No sense in fixing every tunnel on the device if only one is down.) These make up the columns in the data table.
I tested the visualization by changing the time frame from the last fifteen minutes to the last day. (If none had been down in the last day, I would have extended the time frame to the last few days - trust me, they go down quite a bit, so you will eventually see at least one.) Sure enough, it showed VPN tunnels that had been down in the last day because of a "preshared key mismatch".
Then I did the same steps for the other common errors that happen when VPN tunnels go down.
If they ever change the error messages, I will have to change these, so if there is a better way to do it, please let me know.
Once I saved the visualizations, I saved them to a Dashboard so that I could easily see what was down and why. This saved a lot of troubleshooting time.
Another awesome thing about this: if you change the visualization a small amount, it can be used as a metric showing how often VPN tunnels go down and how often each error occurs. It can help you find out whether the VPNs going down are a symptom of an even larger problem.
What other ways have people found to use Kibana?