Only two team members were actually present at DEF CON; the rest of us participated from home. Coordination between members was done through a dedicated contest server with an ssh account for each member, shared pot, wordlist and results directories, and an IRC channel. Member ch3root wrote a script that went through the pot files every other minute, calculated statistics and put cracked and uncracked hashes under results. A separate script uploaded the results to the contest server. The contest featured the following hash types, roughly ordered by cracking speed:
- nt – very fast
- nsldap – very fast
- raw-sha512 – fast
- salted-sha1 – medium
- descrypt – medium
- md5crypt – medium
- sha512crypt – slow
- bcrypt – very slow
- scrypt – very slow
In the first phase I ran the usual dictionaries (rockyou etc.). Member csec uploaded some dictionaries from aspell. On day 2, Eternal found the following dicts on GitHub:
Some words in the .dic files had /xx appended. Command line to clean them up and put the results in dicts:

mkdir dicts
cd Dictionaries
for d in *.dic; do
    sed 's/\/.*$//' "$d" | ../unique "../dicts/$d"
done
(unique comes with JtR; unlike the Unix uniq tool it removes duplicates without requiring sorted input)
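To illustrate what the sed expression does, here is a tiny made-up sample in the aspell .dic style (the entries and flags are invented for the example):

```shell
# sample.dic: made-up entries with aspell-style /flags appended
printf 'abandon/DGS\nzebra\ncafé/M\n' > sample.dic

# strip everything from the first slash onward
sed 's/\/.*$//' sample.dic
# prints:
# abandon
# zebra
# café
```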
I ran the dictionaries with the JtR rule sets Single, Jumbo, o and i for the fast hashes: Single for large dictionaries; Jumbo, o and i for small dictionaries. See john.conf for the rule definitions. You can add your own rules in john.local; see doc/RULES for the syntax.
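As a sketch of what a custom rule set in john.local can look like (the section name and the rules here are made up, not the ones used in the contest):

```
# john.local: a small hypothetical rule set
[List.Rules:Contest]
# 'c' capitalizes the first letter, '$X' appends the character X
c $1 $2 $3
# 'Az"..."' appends a string to each candidate
Az"2015"
```

It would then be selected with --rules=Contest on the john command line.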
Then I ran some brute-force masks, like ?a, ?a?a, ?a?a?a etc. I concentrated on the first six hash types in the list above (nt through md5crypt). The last ones were too slow; there you needed a very specific pattern to make any progress.
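To get a sense of why mask length matters so much on slow hashes: ?a covers the 95 printable ASCII characters, so each added position multiplies the keyspace by 95. A quick check of the candidate counts:

```shell
# candidate counts for ?a, ?a?a, ... (95 printable ASCII chars per position)
k=1
for n in 1 2 3 4 5; do
    k=$((k * 95))
    echo "?a x $n: $k candidates"
done
# e.g. three positions give 857375 candidates, five give 7737809375
```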
After a while, members began uploading pot files to the server. I made a script to extract all words from all pot files:
rm -f tmp.txt
cat share/*.pot | sed 's/^[^:]*://' | unique tmp.txt
iconv -c -t UTF-8 < tmp.txt > allwords.txt
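For illustration: pot files store hash:plaintext pairs, and the sed step keeps only what follows the first colon. With a couple of made-up pot lines:

```shell
# made-up pot lines in hash:plaintext form
printf '$1$ab$0123456789abcdef:letmein\n$NT$aaaa:Password1\n' | sed 's/^[^:]*://'
# prints:
# letmein
# Password1
```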
Then I repeated the dictionary phase, but now using allwords.txt as dictionary.
When my pot file had grown large enough I created a custom charset from it:
./john --make-charset=custom.chr --pot=cmiyc2015.pot
and copied custom.chr to four different computers, running incremental mode against different hash types:
./john --incremental=custom <arguments>
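For --incremental=custom to work, john.conf (or john.local) needs a matching section pointing at the .chr file; a minimal sketch along these lines, where the length limits and character count are illustrative, not the contest settings:

```
# john.local: hypothetical section wiring custom.chr into incremental mode
[Incremental:custom]
File = $JOHN/custom.chr
MinLen = 1
MaxLen = 13
CharCount = 95
```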
After a while it became obvious that random UTF-8 characters had been inserted into the cracked passwords, like ¹, ³, ã, á, ë, é, è, ê, Э, э, ٤, ó, Я, л, ¤ etc.
Since I am not very good at writing custom rules, I wrote a C program to insert each of these characters at every possible position of every password in a dictionary:
./permute allwords.txt > all_utf.txt
For instance, if one of the words in the wordlist is Password and one of the characters in the UTF-8 list is Я, then that particular "insert combination" becomes:
ЯPassword
PЯassword
PaЯssword
PasЯsword
PassЯword
PasswЯord
PasswoЯrd
PassworЯd
PasswordЯ
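The insertion idea can be sketched in a few lines of awk for a single character (just a stand-in to show the logic; the author's actual permute program is written in C and loops over the whole character list):

```shell
# insert the character Я at every possible position of each input word
printf 'Password\n' | awk -v ch='Я' '{
    for (i = 0; i <= length($0); i++)
        print substr($0, 1, i) ch substr($0, i + 1)
}'
# prints the nine variants shown above, ЯPassword through PasswordЯ
```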
Note: I plan to improve the permute program to take the list of UTF-8 characters from a separate file instead of hardcoding it. (Also, permute is not a very good name for the program, since permuting means shuffling letters around, but I was in a hurry.)
After creating the all_utf.txt file I repeated the dictionary process (with rules), using all_utf.txt as the dictionary. This last step, running permute on allwords.txt, was particularly successful.
See the separate writeups. I didn't keep track of what everybody was doing all the time, but I noticed, for instance, that solardiz (the original author of JtR) was running the hard (slow) hashes on the last day, and that aleksey did a good job keeping the team together. Other members were Kai, lei, royce, csec, nugget, jfoug, fd, ukasz, wucpi, ch3root, dhiru, bghote, math07, sergey, neriah7, frank and Eternal. I probably forgot someone too. Frank, neriah7 and csec contributed the most cracks by number, but slower hashes, like those solardiz cracked, gave higher points.
Congratulations to Team Hashcat, who won the contest. john-users, which spent the first day in third place, finished fourth. See all results.