Just rolled out WPA2-Enterprise (also known as WPA2-EAP or 802.1X) on my home network; it’s arguably the closest thing to an “unhackable” wireless setup. The encryption keys are negotiated between the access point (ultimately the RADIUS server) and the client device, so there is no shared passphrase to be compromised or changed regularly. Authentication is handled by the user’s credentials on the RADIUS (read: Active Directory) server. Want to lock out a user? No problem; just disable their AD account.
Since my wireless network sits at the same physical address for years on end, this protects me from being brute-forced by a tech-savvy neighbor.
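For reference, here is roughly what the client side of a WPA2-Enterprise connection looks like in wpa_supplicant terms. This is a sketch: the SSID, identity, and password are placeholders, and PEAP/MSCHAPv2 is the usual combination when the RADIUS server is validating Active Directory credentials.

```
network={
    ssid="HomeWiFi"            # placeholder SSID
    key_mgmt=WPA-EAP           # WPA2-Enterprise instead of a pre-shared key
    eap=PEAP                   # outer TLS tunnel to the RADIUS server
    identity="DOMAIN\\jdoe"    # AD username (placeholder)
    password="hunter2"         # AD password (placeholder)
    phase2="auth=MSCHAPV2"     # inner auth that AD understands
}
```

Note there is no `psk=` line anywhere; that is the whole point.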
Yesterday I went through literally (yes, literally) every host on my IIS server and rebuilt every SSL certificate, using Let’s Encrypt as the certificate authority. On first attempt, this was no easy feat with Microsoft IIS; on Linux it would be practically effortless, but Windows is a different story. Fortunately there is a freeware command-line application called letsencrypt-win-simple that assists with the process. For the most part it does the job well, and it also creates a scheduled task that automatically handles certificate renewals.
Unfortunately, last month I pressed “create all certificates for all sites” in the program. This made at least twice the number of certs I needed, the extras covering my redirects. That caused a few issues and spammed my inbox with warnings that certs were expiring.
So yesterday, as I said, I started from a blank slate. The Let’s Encrypt program keeps its own folder of renewal information; I made a copy and wiped it. Then I cleared out all the unnecessary certs in IIS. Finally I went through the sites one by one and issued new, valid certs that should renew automatically.
One site wouldn’t accept SSL, but I suspect that is due to the manual HTML encoding used. I’ll be speaking with that webmaster soon.
Free SSL for the win!
It’s sad that I have to uninstall Facebook and Messenger on my phone just to get better battery life. No, I don’t check Facebook during the day.
I must remember this one. Scenario: you have set up SonicWALL’s SSL-VPN to accept external NetExtender client connections, and you have configured the clients in “Tunnel All Mode”, which means the external device browses the Internet from the IP of the SonicWALL (useful at a public hotspot or other connection-inhibiting location). Everything connects properly, yet you cannot browse the Internet. The fix is simple.
Go to Local Groups, edit the SSLVPN Services group. Go to the VPN Access tab. Add the entry WAN RemoteAccess Networks.
It’s been a rough day for SIP. Out of nowhere, my Asterisk server stopped working properly. I suspected the SonicWALL and began a two-hour process of rebuilding the configuration from factory defaults. I did this because a SonicWALL technician chastised me for loading beta firmware without good backups, which he blamed for a malfunctioning CFS policy. Anyway, I loaded the new configuration and it had no effect on the symptoms. Specifically, there was one-way or no audio, and calls disconnected right at about the 30-second mark.
Every Asterisk forum and support post attributes this issue to bad NAT-ing. However, nothing had changed; I loaded the same configuration into the SonicWALL as before the wipe.
Ultimately, after much searching, I arrived at a working solution. I opened RTP ports UDP 10000-20000 on the firewall, and I also opened SIP to all incoming connections instead of just my SIP trunk provider’s IP address. Possibly they changed the IP address of their media gateway, but only a call to tech support will confirm that; I’ll do that tomorrow.
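The firewall range isn’t arbitrary: UDP 10000-20000 is Asterisk’s default RTP port range, set in rtp.conf, and the firewall rule has to match whatever is configured there.

```
; /etc/asterisk/rtp.conf — Asterisk's default RTP port range.
; The firewall must pass this same UDP range for call audio.
[general]
rtpstart=10000
rtpend=20000
```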
Side note: I also burned through a couple of hours’ worth of free SMTP quota in about three seconds. I had turned on email alerts in the default SonicWALL configuration, and I also had the Geo-IP filter engaged for a dozen of the worst countries in the malware world. Let’s just say it’s a dangerous Internet out there: my SonicWALL sent an email every time someone tried to connect and was blocked by the Geo-IP filter.
This problem took me over a month to figure out. However, with the help of a fellow tech guy (shout out to Michael Groff, thank you bro), it’s finally put to rest.
Symptoms: VMware ESXi server will not connect to a FreeNAS NFS share no matter what. When trying to add it, VMware immediately displays a “failed” error.
Cause: About a month ago, I had an existing datastore named “BACKUP” that was an iSCSI share from a Synology NAS. The single drive behind it finally failed and needed to be replaced. Because the drive failed, I never explicitly deleted the datastore from VMware, even though it no longer showed up.
That was ultimately the problem. While VMware didn’t show the datastore anymore, I was trying to add a new datastore also called “BACKUP” (trying to stay consistent), but somewhere in VMware the old name still existed. Unfortunately I’ve lost the link to the site where I found the fix, but it’s so simple that I still remember it.
Resolution: Connect to VMware ESXi using SSH and run the command esxcfg-nas -d <datastore name>
It will generate an error such as “Datastore not found; but we deleted it anyway”. After that, you should be able to add your NFS datastore again.
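From the same SSH session, the whole sequence looks roughly like this. It’s a sketch, not a transcript: the datastore name, NFS host, and export path are placeholders from my setup.

```
# List the NFS datastores ESXi currently knows about
esxcfg-nas -l

# Delete the stale entry, even though the GUI no longer shows it
esxcfg-nas -d BACKUP

# Re-add the share under the same name (host and path are placeholders)
esxcfg-nas -a -o freenas.local -s /mnt/tank/backup BACKUP
```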
Just a simple note that might save you hours of troubleshooting: if you update the firmware of an Advidia A-14 or A-15 camera (possibly other models as well), you need to perform a Factory Default afterwards, otherwise the unit will reboot every 3-5 minutes.
For what it’s worth, this is mentioned on the Advidia website.
Navigating ASUS’s support site is difficult. Skip all the hassle and install missing Windows 10 drivers from the site below.
Latest ASUS drivers for Windows 10
Lots to write, because lots has happened. In the past week:
- The media server, Valhalla, was mysteriously infected by ransomware, but no ransom note was found; only encrypted files.
- The main NAS for my network is reporting hundreds of “File System Errors” but can’t tell me anything more, and all the data is still accessible.
- There was a storage failure (and subsequent automatic recovery) on the VMware server in the middle of the night, which caused four (4) virtual machines to fail; the cause is completely unknown.
I really hate things that break. Tune in for more as this story develops.