
Tuesday, January 28, 2014

Data Privacy Day: 2014 Edition



It's once again January 28th, which means it's once again Data Privacy Day! This is a great day to read up on privacy matters, check your privacy settings on various websites, see what web and mobile apps are doing, etc. Below are various relevant links, along with my own articles from last year's Data Privacy Day.
  • MyPermissions has shortcuts to the permission and privacy settings for many popular web apps. They also have a mobile app that alerts you when new apps start getting info from your account.
  • It's a great day to visit the Electronic Frontier Foundation and read up on the latest privacy concerns and what you can do to protect yourself. The last year saw huge violations of privacy by various government and non-government organizations. The EFF is a friend to all of us in the fight for privacy.
  • Android 4.3 users had the option to more easily review app permissions and selectively allow them via App Ops. Unfortunately, if you're running 4.4.2, you can't any longer (hopefully the feature gets re-added in the near future).
  • Google released an official Device Manager for Android that allows you to locate, lock, and wipe your lost phone. Fewer third-party apps is always a good thing, and it's free. Losing your phone is a huge potential privacy problem.
  • Facebook privacy settings are always changing, and Facebook is still a big part of many people's online presence. Lifehacker has a great writeup on Facebook Privacy.
  • A comprehensive guide to passwords by me.
  • Most of us have a smartphone, and it usually holds a lot of personal info. Keep it safe.
  • Keeping things private on the web is no easy feat, but here's my guide on it.
  • Keep private information secure by encrypting it.
  • If your network is compromised, everything you do on it is potentially compromised too. Here's how to keep safe on networks.
Remember to spread the word about Data Privacy Day. Awareness is step #1, and the more people aware of Data Privacy Day, the bigger a success it can be and the better off everyone will be.

Friday, January 10, 2014

What's So Good About A Tablet?

Aka: I Got A Nexus 10

At the beginning of this year, I bought myself a Nexus 10, specifically the 32 GB model. I have long been an advocate of the tablet, but had yet to get one. Why? It wasn't that I didn't want one, but rather that I was waiting on the wrong company. I kept waiting for Asus to get their act together in terms of quality assurance, but with the botched release of the Transformer TF701, I finally realized they weren't going to. As such, after the holiday season, I went out and got the Nexus 10, and have been quite happy with it since then.

Tablets Are Better Than Laptops

This has basically been the foundation of my argument since tablets first started getting popular. For the vast majority of tasks, tablets outperform laptops by a significant margin. Tablets, generally speaking, have better battery life. They are also lighter and more portable, and for your standard daily tasks, I'd even argue they are faster than your average laptop. The Nexus 10 has the unique feature of front-facing speakers, meaning that sound clarity and quality on it outshine not only most other tablets, but most laptops too.

The Internet Has Made Everything Equal

XKCD covered this a little over two years ago, and it's pretty true, especially for most of us most of the time. Email, communication, even productivity and office suites all exist in our web browsers these days. Needing a specific application from a specific platform is getting rarer and rarer. Obviously this isn't the case for everyone, but it is for many people. And many of us technology people have another option: remote desktop and administration.

It's actually because of remote access that I think I am able to enjoy my Nexus 10 as much as I do. Just this last October, Microsoft released an official RDP app for both Android and iOS. There have been many RDP apps before it (I personally have used Jump Desktop, which I still like and which offers some advantages over the official one), but what makes the official app unique is that it supports RD Gateways and RD Web Access. On the Linux front, NoMachine is readying a release of NX for Android and iOS; there is currently an alpha build available for Android users to download, and I'm looking forward to trying it out soon. Of course, there are the standard-fare remote desktop apps too, but I'm a DIYer and generally untrusting of the cloud. Through any of these, you can access not only the programs on your desktop but also its raw power, from wherever you need it.

10 Inches is Big Enough

OK, bigger is better, but not when it comes to portability. This is actually why I don't think I could ever use a laptop as my main computing device: 15'' or 17'' isn't comfortable to me. For Android, though, combined with tablets' full-screen-only app style and the app switcher, 10 inches works. On the go it works quite well, in fact. I have relatively small hands, and with auto-correct I have been typing this entire blog post on my tablet quite comfortably. I plan on getting a Bluetooth keyboard in the long run, mainly to make sshing a bit more comfortable, but the on-screen keyboard works quite well once I enable the PC-style layout. I still do my heavy lifting on my desktop, but when I want to be lazy, I can just use my tablet and read or surf the web without getting up.

Obviously this isn't for everyone. The hardcore on-the-go gamer won't be able to do it, but most others can. A designer type can remote in and access much more powerful equipment to do multimedia editing on the go, and those who need Windows software everywhere can get a Surface Pro. That's what's so good about tablets: they're better than laptops for most people's needs, and I am quite enjoying mine. If you have been on the fence about a tablet, or are still lugging a laptop everywhere you go, I say give a tablet a shot as your go-to mobile device and set up some sort of remote access to your desktop.

Thursday, June 20, 2013

Proper Computer Infection Triage

It's been a looong time since I last cleaned up an infection. I do my best to keep it from happening on my computers that run Windows, as well as those I manage. Of course, eventually something gets through. Yesterday (my day off, since I work Saturdays), I got a call from the office about a computer acting weird. It turned out to be the System Care Antivirus rogue software. According to file timestamps, infection occurred around 1:34 PM. I was notified approximately 10 minutes later.

As with a stroke, I believe fast action is important, and the mnemonic FAST still applies, albeit a bit differently, as here it relates to triage instead of identification:
  • F[ull stop]: Once you think you're infected, don't do anything with the computer.
  • A[lert (me)]: I need to know ASAP.
  • S[hutdown]: I'll do it if I'm there. Otherwise, get that computer turned off ASAP (usually a hard power-off; I'd rather have a single PC damaged than risk malware spreading over the LAN).
  • T[ime]: This one stays the same. Time is important. The longer a virus has to act, the more extensive the damage, and the less likely infection removal will be the right course of action. Files may be irreversibly damaged without a reinstall and you just never know.
As hinted at in that last one: I'm a strong believer in disk imaging. Clonezilla is awesome; RedoBackup is great for those who want a GUI. I haven't gotten around to messing with FOG, but it's definitely a project that interests me (and of course Windows 7 and 8 can create an image natively, as can Windows Server through its own tools). I don't usually deal with cleaning viruses, because it takes more time to clean up the mess than it does to restore an image. This time was an exception. The EHR software we use at work had been upgraded, along with some other programs on that particular computer, and I hadn't imaged it since those upgrades (my bad). It would have taken longer to reinstall those programs (with some of them, all other clients need to be exited before a new client can be added) than to clean the infection, since triage had been followed. Had triage not been followed, I probably wouldn't have tried. Triage really makes that big of a difference in my opinion.

So now to explain FAST.

F -- Full Stop

Many infections start out as a simple file that was able to execute itself in the %AppData% area of your computer. It doesn't have many permissions yet, and the damage is usually not that bad. It will then try to trick you into giving it more power by getting you to click on something. By stopping everything and not touching anything, you can in many cases stop the virus in its tracks. That wasn't the case this time: it looks like the virus used an exploit in Adobe Flash Player to infect a bit deeper -- more on this later.

A -- Alert (me)

If you aren't computer savvy, now is the time to get help. In the case of my office, I'm the one alerted. The sooner the problem is brought to the attention of others, the more easily it will be resolved. If you are at work, please note this: you will not be able to keep an infection a secret. Eventually it will come out. All you are doing is putting your coworkers' computers, and the business, at risk. Tell someone and tell them fast. I am quite proud of how well this was handled: the person whose computer was infected told the office manager, who promptly instructed her to call me, just like it should be done. It was beautifully handled.

S -- Shutdown

This one goes along with Full Stop. If the computer has been truly compromised (which the alerted person should be able to tell), then it's time to power that bad boy off. Some malware will try to stop this. Solution: hit the switch on the power cord. A hard power-off is much better than other computers getting infected. I instructed the coworker to turn off the PC and it stayed off until I got there. She was given a laptop to work on in the meantime.

T -- Time

Time is of the essence in an infection. Just like in a real medical emergency, triage is designed to quickly ascertain the severity of the problem. The longer triage takes, the more at risk the bad cases are. In the case of a computer infection, the longer it takes to get a PC squared away, the worse things will be and the less likely cleanup will restore the PC to its former glory.

I have two time counters. The first one is time from infection until the end of triage. I give this 30 minutes. If more than 30 minutes have passed and the PC is still being actively used, most likely that infection is going to be in every nook and cranny of the PC, maybe even jumping across the network. The second time is cleanup time. This one is 60 minutes. If no progress on cleaning up the infection has been made in one hour, it's probably time to wipe and start from scratch. Infection cleaning is a race against the clock in every aspect. The longer you spend cleaning, the more appealing the wipe-and-reinstall method will be. I find one hour to be a good compromise. If I've made good progress and everything seems in order by then, I'll continue cleaning up the infection. If I haven't even come close to getting it under control, then it's time to wipe and reinstall.

My Case

As mentioned, this was my first cleanup in a long time. My first in years, in fact. It was quite pleasant, or at least as pleasant as a cleanup can be. I attribute most of that to the triage method described above. None of the network shares were infected and the PC is back up and running.

The infection appears to have been due to an outdated Flash Player install. I don't know how that happened; it should have been updated, but wasn't. The user had visited a website (the website in question seems to have been compromised; it doesn't appear to have been a malware website) and then, wham, the System Care popup started screaming its alerts at her. She made the best next move by telling the office manager, who then told her to call me. I got the call, told her to shut down, and she got a laptop. I arrived the next day and got cracking.

First, the PC was taken back to my office room, disconnected from the network, and booted into safe mode. I had done my research beforehand and quickly deleted the files and removed the registry entry related to the malware via the command prompt. Total time? A few minutes (I spent more time getting the PC to my room than deleting those files). All was looking good so far, so I booted into Windows normally, expecting the worst. It wasn't bad: I was able to launch things without a problem. First I launched the antivirus software (Vipre Business). Lookie there! It had caught two files. It may not have been a full success, but it did catch part of the malware, which is probably why it was so easy for me to delete the rest manually (along with the blitzkrieg tactics in my removal methods). So now it was time to get some better malware scanning software and get the AV up to date.
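(For the curious, the manual removal amounted to a couple of commands along these lines, run from the safe-mode command prompt. The names below are stand-ins: rogues like System Care typically drop a randomly-named executable under %AppData% and add a Run key to relaunch at boot, and your own research will give you the exact names for your variant.)

del /f /q "%AppData%\kdnqtvye.exe"
reg delete "HKCU\Software\Microsoft\Windows\CurrentVersion\Run" /v kdnqtvye /f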

Before getting to that, though, I noticed there was an Action Center alert. Apparently the malware had disabled the Windows Security Center Service. I went into services to try and re-enable it, only to see it didn't exist. Uh-oh. Looks like the malware did more than I initially thought. Most likely it deleted a few registry values, causing the service to disappear. Knowing that, I decided to gamble on a System Restore. I consider System Restores a gamble because many malware programs will infect the restore points, so when you restore, you just end up restoring the malware too. I figured that if it didn't work, I wouldn't have wasted much time and I'd just wipe and reinstall. I chose a slightly older restore point in hopes of lowering the chances of reinfection, since the malware had had all of about 20 minutes before I deleted its core files and registry entries.

It was successful! Security Center and firewall and everything were all back after the system restore, and still no traces of the malware! So now I needed Internet access.

Even though I had done some cleanup and everything was looking good, I'm far too paranoid to just plug this computer into the LAN after it's been infected. It won't get LAN access until I've given it a clean bill of health. So what to do? I don't have a secondary Internet connection to use. This is where my Quarantine LAN comes into play. Using a DD-WRT router and some iptables rules, I made it so the desktop could connect to the Internet, but not to any computer on the LAN. By putting the quarantine router on a different subnet and blocking its DHCP pool from the servers outside of it, I guaranteed that this computer couldn't infect my LAN even if it was filled with the nastiest of nasty malware (which it wasn't at this point).

So then I went on and installed some more antimalware tools and updated all the cleanup tools to the latest definitions and versions. CCleaner took care of any temporary files, with me manually cleaning up some it missed. The antimalware software hummed along, removing traces in cached and temporary files every now and then. It was the log in Vipre that informed me the infection seemed to be Adobe Flash Player related. The whole time, the computer wasn't exhibiting any signs of infection: everything was running fine and nothing weird was going on. After a few runs with the various scanners, things were coming up clean. HijackThis logs were clean of anything worrying too. I put the PC back and that was it.
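(For those wanting to replicate the quarantine LAN: the heart of it is just a pair of iptables rules on the DD-WRT router. A minimal sketch, assuming the quarantine router hands out 192.168.2.x and the main LAN lives on 192.168.1.x; your subnets will differ:

iptables -I FORWARD -s 192.168.2.0/24 -d 192.168.1.0/24 -j DROP
iptables -I FORWARD -s 192.168.1.0/24 -d 192.168.2.0/24 -j DROP

Traffic bound for the Internet is still forwarded normally; anything trying to cross between the two subnets gets dropped.)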

I ended up spending a few hours on it because I was being cautious: rebooting, rescanning, scanning with all sorts of things. It eats a lot of time, but I wanted to be confident in that clean bill of health before I put the PC back in place. During that time I was also running scans on the network shares and other computers, just to be safe.

In the end, it was a pretty successful cleanup story. And for the future? Well, I might implement Click-to-Play for Flash content now.

Monday, April 1, 2013

World Backup Day

No April Fooling here.

World Backup Day was yesterday. It's a little-known holiday; in fact, it only came into being in 2011. It's got a pretty catchy slogan: "Don't be an April Fool. Backup your files. Check your restores." I'm a firm believer in backups. Backing up is practically an art, and not one most people understand. There are many things people consider backups that aren't backups at all. A backup has to satisfy certain qualifiers to be considered a true-blue backup:
  • A backup is not a mirror. Mirrors copy file corruption and don't version files. Human error is always the biggest source of data loss in my experience, and a mirror doesn't protect from the human factor.
  • RAID is not a backup. This goes with #1. Things like RAID1 (which is mirroring) and parity-based RAID levels are sometimes thought of as backups. They are not. RAID was designed to increase performance or availability, and it doesn't protect your files beyond what's needed to do one of those two things.
  • A backup is versioned. That is to say, the data in a backup represents files as they were at some time in the past. A good backup has many versions of the same file for you to choose from when restoring. A system that keeps only one version of a file is the weakest kind, but it still has one important distinction over mirroring: changes to original files don't instantaneously replicate to the backup. This allows for a restoration window to correct human error (a concrete sketch follows this list).
  • A backup is verified. An unverified backup is nothing more than a hope. You hope the data in the backup is good. You hope the data in the backup is what you need. Until you verify those facts (which can only truly be done by doing a test restore), you never know if your backup is actually working as you intend it to. This point is so important that it's in the slogan for World Backup Day. It's also the most overlooked aspect of a backup.
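To make the versioning point concrete: here's a minimal sketch of versioning using rsync (purely an illustration of the concept, not the tool I use; the paths are made up):

rsync -a --delete --backup --backup-dir=/mnt/backup/archive/$(date +%F) /home/user/docs/ /mnt/backup/current/

Instead of silently overwriting, every changed or deleted file gets moved into a dated archive directory, giving you a restoration window to recover from human error.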
Those are the key aspects of a true backup. Beyond that, a backup should be automated (human error is the greatest problem, and it's so easy to postpone backing up), and a backup should generate feedback/reports. Exception reporting, where the backup software tells you that something went wrong, is the most common type of feedback. Software may also come with statistical/informative reporting that tells you everything is going fine. What's good about that is if you don't get the report, then you know something is wrong (so wrong that it's affecting the reporting feature).

Going further down the path to backup enlightenment, backups should be stored on different mediums and securely stored in different locations. When you have this, you have backup nirvana. Power surge fries your hard drives? You've got tape/cold-storage backups to restore from. Your house burns down? Your data's still safe. The more distant the better. Cloud backup services have made off-site backups an achievable possibility for the average person. Tape's not really a viable solution for your average person, but flash storage, optical media (DVDs, BDs), and cold storage (an unplugged hard drive) all are. The problem with the latter two is that you can't automate them to the same level, and flash storage is still expensive. I also consider cloud storage a storage medium.

Security is the most important concern when dealing with off-site backups. Encryption is the most obvious way to address it. Any good cloud backup solution will encrypt your files; the best will give you options on how to encrypt. Of course, good ol' sneakernet of an encrypted hard drive (maybe with TrueCrypt) is also a viable off-site secure backup strategy.

Of course you don't need to do all this at once. Backups can be improved over time. The whole idea behind World Backup Day is to get people started and moving in the right direction.


Backup Software

I consider backup software to be like operating systems: they all suck, so it's a matter of picking the one that sucks least for your purposes. Try out different programs and see which ones work best for you. IMO, the best backup software I've used has had a client-server relationship, like BackupPC (I've been meaning to try out Bacula). It works well for my purposes, but it's obviously not ideal for everyone. Below are some free backup programs to try out and get started with:

Windows:

Windows is what most people use. Unfortunately it's the one whose free backup software I've found most lacking. There are some gems I've enjoyed from time to time though:
  • File History (Windows 8): File History is the one great feature of Windows 8 IMO, and I truly think it is amazing. Unfortunately, I'm not crazy about the rest of Windows 8's desktop paradigm, and I doubt many Windows 8 users take advantage of File History.
  • Windows Backup and Restore (Windows Vista/7): The predecessor of File History. It offers a lot of nice features, including the ability to do a full system image. It's pretty efficient and I've never had it goof up, but setup is a bit clunky (which is where File History really improved), and depending on the version you have, you can be quite limited in backup destinations. Windows 7 Home Premium lacks network backups, for example.
  • Cobian Backup: Cobian Backup offers a lot of neat features and a pretty good scheduler. The main problem is that the lack of an integrated restore feature makes continuous incremental backups quite painful. Instead, it's best to do sets of incremental backups, say keeping 9 sets of 10-day incrementals (90 days' worth).
  • FreeFileSync: Yes, mirrors aren't backups, but FreeFileSync is unique among file synchronization tools in that it can keep deleted/changed files by moving them to a timestamped directory, creating a basic file versioning system. It's no-frills, which will undoubtedly appeal to those after a simple (feature-wise) solution.

Mac OS X:

Anyone reading this use Mac OS X? I personally don't, so the only Mac OS X-only backup program I'm aware of is Time Machine, which is an included utility. It's quite nice for local backups. If you don't want to buy an expensive Time Capsule, you can use FreeNAS for networked backups.

Linux:

Linux has backup software galore. rsnapshot, rdiff-backup, a million others, and many different GUIs. Here are some GUI backup programs for Linux:
  • Back In Time: Based on some no-longer-maintained backup programs and inspired by Time Machine, it is a very nice backup program.
  • LuckyBackup: A very feature-rich backup program offering GUI options for many advanced features not commonly found in a GUI. NOTE: technically it's cross-platform, but I don't consider the non-Linux builds stable enough to trust with data, so I don't list it as cross-platform.
  • fwbackups: Designed to be simple, and it largely succeeds. It doesn't have the bells and whistles others offer, but it gets the job done. NOTE: technically it's cross-platform, but it doesn't play nice with UAC on Windows.
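If you'd rather skip the GUI entirely, the command-line tools mentioned above are a one-liner to get started with (the paths here are examples):

rdiff-backup /home/user /mnt/backup/home

rdiff-backup keeps the newest version as a plain mirror plus reverse increments, so you get versioning without giving up easy access to the latest copy.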

Cross-Platform:

There are some great cross-platform backup programs out there. I'll note that many require Java to run. Here are my top three:
  • Areca Backup: A very advanced backup program with enough settings to satisfy anyone, though that comes with a steep learning curve and a need for experimentation to uncover them all.
  • CrashPlan: A friendly backup program. The free version is ad-supported. It offers the unique ability to back up to a friend's computer over the Internet without an involved setup, which is a very nice feature for people untrusting of the cloud. The subscription version also offers a cloud backup option. The Windows version doesn't require Java; the Linux and Mac OS X versions do.
  • JBackPack: A Java-based GUI for rdiff-backup that incorporates EncFS functionality for file encryption.
Finally, there are cloud-based backup programs. I won't go into detail here since I don't have experience with any of them besides CrashPlan. I chose CrashPlan because it's cross-platform, and so far I've enjoyed it. I don't trust my super-private files to it (it has nice regex filtering for exclusions) and it gives a weekly report of the backup. I've heard good things about BackBlaze, but it doesn't support Linux, which is a no-go for me. (I list these two in particular because 1. I use CrashPlan and 2. both are supporters of World Backup Day.) Others are probably good too. I will say: DO NOT USE CARBONITE. They seriously throttle your speed. That kind of practice is deceitful, hurts customers, and should not be acceptable. Also note that other online storage and sync platforms like Dropbox and Google Drive aren't really backup solutions, but they may suit your purposes fine so long as you realize they aren't catering to the backup market.

Thursday, March 7, 2013

Oracle iSQL*Plus Overview

This post is a bit different from my usual ones. It's aimed specifically at classmates in one of the classes I'm taking right now, where we are learning Oracle Database. I found out the school has an iSQL*Plus web access license and that it's publicly accessible, so anyone can use it any time, including from home.

iSQL*Plus is a web interface to SQL*Plus. Pretty much everything will work in iSQL*Plus that will work in SQL*Plus, but not quite everything.

Login and Use

Logging in to iSQL*Plus is similar to logging in to SQL*Plus, except you use your web browser. You'll navigate to the iSQL*Plus web server and enter your username, password, and connect identifier (if applicable; for the school's iSQL*Plus server it is). It works with any browser: I've used it in the latest releases of Chrome, Firefox, and Opera with no problem. The login screen looks like this:


Once you've logged in, you'll be presented with a nice web interface. Here's me executing a simple command:
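A simple command here means something along these lines (user_tables is a data dictionary view every Oracle account can query):

SELECT table_name FROM user_tables;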



As you can see, by default it displays the output of your commands as standard HTML below the entry field. Also by default, it paginates the output so you only see a number of lines at a time, 24 by default. This can be changed in preferences, but first, the main workspace UI:

Your SQL commands stay in the box until you clear them, either by manually deleting them or by using the "Clear" button.

The "Execute" button executes all the commands in your textbox, by default showing the output below.

The "Load Script" button will take you to a page where you can browse for a *.sql file to upload. After you've located it, press the "load" button and it'll put the contents of the file in the text box. It's kinda pointless since you can just copy and paste the contents faster most of the time.

The "Save Script" button will auto-generate a file called "myscript.sql" and present a download prompt, which is pretty nifty.

The "Cancel" button will terminate any running commands, useful if your commands are being unresponsive.

You may notice a "History" tab on the top-right corner. This will store a history of your last few executions of commands. You can then reload them, which is useful if you accidentally hit the "clear" button. Do note: the History is cleared upon logout.

Preferences

Preferences are where you can change various default settings. Most notable are:

In Interface Configuration:
  1. The number of scripts to save in "History" for the current session (by default 10).
  2. The default size of the script input text box (pointless in modern browsers that let you expand it yourself)
  3. Whether to display the output below the text box or generate a downloadable html file (this is the closest equivalent to spooling, which isn't available in the iSQL*Plus interface)
  4. Option to have everything display on one page or multiple pages (and set the number of lines per page)
In Script Formatting:
  1. Option to have line numbers (kinda nifty)
  2. Option to display commands in the output by default (equivalent to running SET ECHO ON at the top of every execution)
There are many other options too. Here's Oracle's page on iSQL*Plus preferences and the equivalent SET commands.

Differences Between iSQL*Plus and SQL*Plus

Most things in SQL*Plus will also work in iSQL*Plus, as I've noted. One notable exception is spooling. If you try to spool you'll see this:
spool on 
SP2-0850: Command "spool" is not available in iSQL*Plus 
Any other command that can't be run will similarly kick up an error like that, and the rest of your script will execute properly. Spooling doesn't work because, obviously, you cannot set a save location on the server from your web browser. That'd be a security nightmare! The closest equivalent is to have the output generate an isqlplus.html file instead of displaying the command results below the text box.

By far, my favorite feature is that you can edit multiple lines very easily in iSQL*Plus and recall history. It's much nicer in that respect than SQL*Plus is. Likewise, the commands don't disappear after execution, so if you made a small mistake, it's easy to fix and re-execute. Very nice.

The other nice thing is that any user can set preferences for many SET commands, which can save you time.

The main downside is that there are no TNS aliases, so your connect identifier has to be the full, proper connect identifier normally contained in the tnsnames.ora file.
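If you've never seen one, a full connect identifier is the whole connect descriptor spelled out; the host and service name below are made up:

(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dbhost.school.edu)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=orcl)))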

You should also be careful of one thing: iSQL*Plus doesn't care if you don't end SQL commands with a semicolon. If you plan on using your SQL elsewhere or submitting it, you'll need to make sure you include them, as otherwise it may not work for someone else.

One thing to note is that iSQL*Plus has a pretty aggressive timeout/auto-logoff setting, so if you just leave it open for a while, you'll probably be forcibly logged out. Do make note of that.

Wrap-Up

Well, that's pretty much all there is to iSQL*Plus. Just make sure to log out. It's a great way to practice your SQL off-campus.


Saturday, March 2, 2013

Expanding C: Partition on Win2k3 and Remote GParted

As I mentioned in my last post, the C: partition on the Windows server at work had become completely full. I immediately applied some temporary stopgaps to hold her over until I could properly repartition her. Today was that day.

Repartitioning modern Windows (Server 2008+, Vista+) is no problem, as you can do it with the included disk management utility. XP and Server 2003(R2) are different, as their disk management utility isn't nearly as capable. This server is a Windows Server 2003 R2 box, so I had to use a third-party utility. For home users, the EaseUS Partition Manager family is pretty good. For a corporate server, though, it's $160. I wasn't approved for that spending (for good reason: the server is slated for replacement in a year or so, so the money would have been wasted in the long run), so I had to go with free options.

I ended up going with two different tools: one for shrinking the data partition and one for expanding the system partition. I used GParted to shrink the data partition and ExtPart to expand the C: drive. The reason being that Windows doesn't really like GParted, and it'll sometimes require a repair action from the Windows disc after messing with the C: partition. I didn't want to deal with that, and ExtPart is a small, simple, free utility for extending a partition (hence the name) from within Windows.

The day started at 8:40 am. I fired up Clonezilla and cloned the hard drive. If there was a power outage or some other freak accident during the repartitioning, I could then simply restore the image in short order. I always recommend imaging your system before repartitioning for this reason. There are lots of disk cloning tools; I like Clonezilla. I had tried doing this the day before, but the version of Clonezilla I had didn't work with my server's RAID card (a SAS 6/iR). I brought a freshly burned copy of the latest Clonezilla release and it recognized my drive just fine. This ran until 12:35 pm.

Next it was partitioning time. I inserted the GParted Live CD and got busy. Unfortunately, Dell thought it was a good idea to make the data partition an extended partition. This meant an extra step: first shrink the data partition, then shrink the extended partition it resides in, which meant more time. I had originally figured it'd be done by 2:30 pm; now I figured 5:30 pm (it ended up beating my expectations by finishing just shy of 4:50 pm). I didn't want to stay at work until 5:30 pm, so it was time to get remote access to the GParted Live CD.

This actually proved pretty easy. First I configured the network, which GParted Live includes a nice desktop shortcut for. Next I opened up a terminal. GParted Live is based on Debian, so I did a sudo apt-get update && sudo apt-get install x11vnc... This didn't work. Turns out GParted Live comments out the repos in sources.list. So I did a sudo nano /etc/apt/sources.list and un-commented the repos.

So now to install and run x11vnc:
user@debian:~$ sudo apt-get update && sudo apt-get install x11vnc
user@debian:~$ x11vnc -forever -display :0
The installation pulled in a few packages besides x11vnc: libvncserver0, libxss1, openssl, tcl, tcl8.5, tk, tk8.5, and x11vnc-data. It only ended up taking 11.2 MB more space, so no big deal; since this is a live CD environment, that install lives in RAM, and my server has plenty of RAM.

The -forever flag tells the x11vnc server to keep running after a client disconnects. Without it, as soon as you disconnect the first time, x11vnc stops running, and I planned on connecting a few times for periodic check-ins on GParted's progress. The -display :0 flag tells x11vnc to show the current session instead of creating a new one; it would be useless to VNC in to check on GParted's progress if I were given a new session. I also didn't want to risk x11vnc disconnecting and me being SOL, so I decided to enable ssh on GParted as well. This is simple.

First, we need to set a password and configure hosts.allow so I can ssh in. Setting the password is done with sudo passwd user, which creates a password for the user 'user' (without one, you'd have to allow passwordless login for ssh, which would require more configuration; it's easier to create a simple password). Next you need to edit hosts.allow with sudo nano /etc/hosts.allow, adding sshd: subnetblock. to the end (don't forget the trailing period!). In my case it was sshd: 192.168.1. that I needed to add.
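Put together (substitute your own subnet):

user@debian:~$ sudo passwd user
user@debian:~$ sudo nano /etc/hosts.allow

And the line to append at the end of hosts.allow:

sshd: 192.168.1.

With that done, I just restarted networking and started ssh: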

sudo /etc/init.d/networking restart && sudo /etc/init.d/ssh start

End result? I drove home at 3:00 pm and started checking on it. I used LogMeIn to remote into the office and then fired up TightVNC to check on GParted.

 And then a bit later I saw:

Success! Done! Well, with the GParted part anyway. I drove to the office to finish up, since GParted will hang after ejecting the CD, and the server itself hangs in the preboot environment due to an alert I cannot disable.

Upon reboot, Windows did a consistency check on the data partition, no biggie. I then rebooted again and was almost done. Now I needed to expand the C: drive with ExtPart:
C:\Documents and Settings\Administrator>cd C:\Dell\ExtPart
C:\Dell\ExtPart>extpart c: 15325
That's the default directory ExtPart creates when "installed" (it's a self-extracting archive when you download it). As for the extpart syntax, it's simple: you specify the drive letter and then the amount you want to expand it by, in megabytes. In my case, that's c: and 15325. The end result? A 26.9 GiB C: partition with 16.4 GiB of free space. I call that a rousing success for an honest day's work.

Tuesday, February 26, 2013

Inconsiderate Behavior

I thought today was going to be a good day: completely over the flu I had caught, night class canceled, and my workload not large (despite taking the majority of last week off)... I was mistaken.

When I checked my work email I saw the usual day-to-day emails, but one caught my eye. It was from my Spiceworks install, and the title read "C: has less than 5% remaining on [one of the work servers]"... My heart sank. Goodbye carefree day, hello hellish day full of trying to figure out what's going on.

I turned to the server (running Windows Server 2003 R2). I've always been well aware of the storage problems of the C:\ partition on it. This wasn't even the first time C:\ had filled up: back when I first started working here, it filled up due to an out-of-control program producing a five-mile-long error log. The system, like most in the office, predates me. It was bought through a "value-added" retailer, and the VAR decided it was a good idea to give the C:\ partition only 12 GiB. I probably should have done something about it back then, but this being a Windows Server 2003 machine, there's no integrated option to shrink one partition and expand the other. The budget was, as always, $0, and uptime was considered critical, so I freed up 4 GiB of space and called it a day. As of last week (some 3 years later), 2 GiB of that free space remained. The server never really got any new software beyond security updates, and I ran her lean and mean to reduce the chances of some log file going crazy. As I saw it, the remaining 2 GiB of free space would last until this summer, when I planned to finally shrink the D:\ partition and expand the C:\ partition. It didn't make it, due to the inconsiderate behavior of an outsider.

A company, which will remain nameless for now, decided it was OK for them to do an automatic update of their software without telling me. Not only did they not tell me, they didn't announce it at all ahead of time, even on their website. The software also doesn't have an option to disable automatic updates, and the fact that it can auto-update isn't even a listed feature. The software in question uses Microsoft SQL Server for the backend. Why? I dunno. I guess they thought that was a good idea (I disagree with that conclusion); it didn't use a SQL backend two revisions ago. Part of the upgrade included a forced upgrade to MS SQL Server 2008 (we were on MS SQL Server 2005)... That might be acceptable if I lived in an ideal world where each server did only a single role, but working for a small business, I don't have that kind of budget. The server in question was an archive server for patient records, and that functionality also used MS SQL Server. The update also required .NET Framework 4.0, which I'd had no need for until then, so I didn't have it installed (free space being at a premium, after all).

None of this would have been a problem had I been given prior notice of the update. If I had been told ahead of time that this update was coming, I could have done something about it, and there would be no issue. Instead, due to the company's inconsiderateness, I found myself with... 85 MB of free space on the C:\ partition. The .NET Framework update was also still running and complaining about a lack of free space (obviously); I had to cancel that. Next step: get some breathing room and call the company up. They gave me the usual company blah about it and didn't even apologize for not telling me about the upgrade beforehand. They told me they couldn't revert the upgrade, so my only option was to clear up the space myself or uninstall the software.

Uninstalling it is very tempting, but I'll need to get my boss's approval before I can do that. In the meantime, I have a good feeling the software is hosed and useless anyway. The services related to it wouldn't start up, so I disabled them, crippling the software to dead status; at least it's not a further threat. I scavenged for free space and was able to get back to a bit over 800 MB. At least now it won't fall over from a single hiccup. That'll buy me the time to defrag the other partition (which is running smoothly so far, but since it's an archive server, it still has a ways to go), shrink it, and expand the C:\ partition. If my boss doesn't approve some spending, that'll mean downtime as I boot off GParted to shrink. I'll expand the C:\ partition with extpart from Dell so Windows doesn't get too grouchy.

A number of factors led to the current situation, but the one thing that definitely shouldn't have been the case, and that would have made a world of difference, is that I should have been told about this major upgrade beforehand so I could prepare for it.

At least Spiceworks is doing its job properly.

Thursday, February 14, 2013

Blog Status Update

It's been about a week and a half since my last blog post, so I thought I'd fill everyone in on what's going on. I knew this would happen eventually, but I was hoping I'd have been able to post a bit more before it did.

I'm juggling school, work, this blog, and a few other projects right now. At the beginning of the semester it wasn't an issue, as school work wasn't very demanding. Now, though, school work is requiring more and more of my time. I'm a straight-A student and would like to stay that way, so I need to focus on my studying and had to cut back on my blogging. On top of that, I got sick and am going back and forth to the dentist, taking up even more of my free time. Oh, and my first midterms are in the next two weeks.

I definitely won't be able to continue a post a day like I did for that little bit, but I am going to try to do one or two posts a week. I have plenty I'd like to post about, but it's too time-consuming to do so right now. In the immediate future, this blog is going to be running at a lower priority than I'd otherwise put it at: it's school, work, one of my other projects, and then this blog. Hopefully in two weeks' time I will be able to pick up the pace again. Until then, I might manage one a week; we'll see.

I'm still getting my feel for blogging, too, so it definitely isn't fully worked into my schedule yet. In the meantime we'll just see how this all goes.

Sunday, February 3, 2013

Wake-on-LAN

Wake-on-LAN is one of those technologies that I love, and one I think doesn't get enough attention. I guess it's a bit geeky still.

The actual technology is a bit hard to understand if you've never done any networking, but basically it works on layer 2 (MAC addresses) only. It sends the magic packet to everyone (broadcast), but only the intended device says "Oh, that's for me" and turns on the PC it's attached to. I always found it funny that it's called a magic packet. The "magic" part was fitting before I had a better grasp on networking but, incidentally, now that I do understand networking better, the "packet" part makes less sense (since it uses layer-2 Ethernet frames, not IP packets). You can read up more about the technical side over at Wikipedia.

Wake-on-LAN Setup

In order to implement Wake-on-LAN you need to meet a few requirements:
  1. You need to use a wired (Ethernet) connection. There is a wireless implementation known as WoWLAN, but it doesn't have much market penetration and has even more requirements than WoL.
  2. Your BIOS/UEFI needs to support Wake-on-LAN (not all do)
  3. Your NIC needs to support Wake-on-LAN (not all do)
  4. Your OS needs to support Wake-on-LAN so you can manage it (AFAIK, all modern ones do)
To this day I regret not considering WoL support when building my current PC. I will never again build a PC that doesn't support Wake-on-LAN. MeetGadget allows you to sort motherboards by support for this feature. If you love WoL, don't make my mistake of buying a motherboard that doesn't support it, as you will regret it.

BIOS implementation varies from one system to another. It's usually under power settings, named something along the lines of "LAN wake-up" or "Power on LAN". Sometimes "Wake on PCI" and the like can be used, but those are usually for when a separate PCI device (like a PCI NIC) sends the wake-up command.

If you don't find one of those options in your BIOS, your BIOS probably doesn't support the feature. It sometimes becomes available in a later version of your BIOS, but not usually.

Your NIC either will or will not support it; there's not much you can do here. Really all you can do is verify it supports Wake-on-LAN, which is done most easily by checking the documentation for your NIC.

On the software/OS side, you'll need to tell the device it's OK to respond to Wake-on-LAN (and thereby allow your PC to turn on). I once beat my head for hours because I thought I had configured this, but hadn't, and the PC was refusing to turn on.

On Windows this is done by launching

devmgmt.msc

Then select "Network adapters" and right-click on the NIC you are using. Select Properties


Click on the Advanced tab. The options may be different depending on your NIC, but for Realtek NICs the relevant option is usually called "Shutdown Wake-On-Lan"; make sure it is enabled. You should also make sure "Wake on Magic Packet" is enabled. Other names I've seen are "Network Wake-up" and other variations along those lines.


Now head over to the Power Management tab and make sure "Allow this device to wake the computer" is checked. Optionally, check the box below it to allow only magic packets to wake it up (otherwise the device may respond to any Ethernet frame directed at it, rather than just magic packets).
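A quick way to double-check your work here, by the way, is powercfg, which can list every device currently allowed to wake the machine:

C:\windows\system32>powercfg -devicequery wake_armed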


On Linux, you'll use a tool called ethtool; here's Debian's official documentation of it.
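The short version looks something like this, assuming your interface is eth0 (a "g" in the first command's "Supports Wake-on" output means magic packet support):

user@linuxbox:~$ sudo ethtool eth0 | grep Wake-on
user@linuxbox:~$ sudo ethtool -s eth0 wol g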

On Mac OS X, at least on Snow Leopard, it was:

System Preferences -> Energy Saver panel, where you make sure "Wake for network access" is selected.

Sending WoL Packets

Now that the system is all set up, you'll probably want to do all sorts of cool stuff with it. While WoL itself is layer 2, most tools that send the packet operate at layers 3 and 4 (usually using UDP packets to encapsulate the magic packet).

wakeonlan is a command-line Linux tool that I use (it's also available for Mac OS X via MacPorts). You should be able to pull it from your repos. The majority of wired computers at my work support Wake-on-LAN due to my concentrated efforts in making sure they do. I often do remote work on them at night, doing this and that. I just ssh in, turn on all the PCs with wakeonlan, and then control them through various methods, primarily ssh-tunneled RDP (as most are Windows 7/XP Pro computers). I like to imagine the look on someone's face if they were in the office and all of a sudden all the computers around them started powering up.
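Usage is about as simple as it gets; the MAC address below is made up, so substitute your target machine's (and you can optionally point it at a specific broadcast address with -i):

user@linuxbox:~$ wakeonlan 00:11:22:33:44:55
user@linuxbox:~$ wakeonlan -i 192.168.1.255 00:11:22:33:44:55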

For Windows, there's MC-WOL, a command-line tool. I like to script WoL sends, so command-line tools like it and wakeonlan for Linux are useful to me. If you want a GUI, though, there is WOL - Magic Packet Sender.

You can send WoL packets from DD-WRT/Tomato and the like too, from both the web GUI and the command line. More importantly, you can set it up so incoming packets automatically cause the router to send a WoL packet to your device. Very useful for sending WoL from outside your LAN: you can then turn your PC on from anywhere with an Internet connection.

iPhones can send Wake-on-LAN via Mocha WOL. Unfortunately, Apple in their infinite "wisdom" doesn't allow for this to be automated on external events.

Android has two big options: PcAutoWaker and Wol Wake on Lan Wan.

PcAutoWaker will have your phone automatically send a magic packet on connecting to a wireless network. Imagine this if you would: you just pulled into your driveway, and by the time you get in the house your PC is already fully booted. Now that is a beautiful thing to me.

Wol Wake on Lan Wan isn't as cool out of the box, but it has some useful features: you can set up widgets for your devices to make sending magic packets easier, and even better, it can be incorporated into Tasker/Locale very easily. This allows for one very interesting thing: sending WoL packets when your alarm goes off in the morning (note: I don't know if Locale has a similar event trigger). Imagine if you will: your alarm goes off. You're groggy and either hit snooze or turn it off and start getting up. In either case, by the time you reach your PC, it's already booted up. Ah, how wonderful.

That's why I love Wake-on-LAN: it allows for two things I love, saving power (leave PCs off and just turn them on remotely when you need them) and automation (I don't turn on my PCs; they're automatically turned on based on my actions). It's a beautiful thing, it's a simple thing, and it makes my life easier and more environmentally friendly. What's not to like?

Saturday, February 2, 2013

Locking Down wifi on Windows without Active Directory

This is a cool trick I've learned recently, and it doesn't seem easily found through Google (but if you know of netsh, you may be able to discover it).

Windows management is best done through group policy, or at least most easily done through it. In fact, you can blacklist/whitelist wifi networks via group policy on Windows Vista+. The problem is that it's only available via AD group policy, not local group policy. At work I don't have Active Directory (though I'm hoping to by the end of the year), so I can't use this. Still, I'd like to block wifi networks on our wifi-enabled Windows computers. My desire for this came from someone in the office thinking it was all right to take a laptop without permission for the purpose of working on public wifi during lunch. As a rule, laptops shouldn't just be taken without being properly checked out, but sometimes people think something not-OK is OK. Luckily the person didn't end up using the laptop on who-knows-what public wifi network, but it was a close call and it made me look into this.

I found out it was possible with a couple of ye olde netsh commands. I'll show them off on my crappy laptop with a dead battery that I never use, because I hate laptops (maybe I'll go into that another time). Before firing off the commands, Windows saw these wireless networks:


Donnerschlag is my wireless network, so let's make it so that's the only option for this laptop to connect to. Open up the command prompt as administrator:
Microsoft Windows [Version 6.1.7601]
Copyright (c) 2009 Microsoft Corporation.  All rights reserved.

C:\windows\system32>netsh wlan add filter permission=denyall networktype=adhoc
Followed by:
C:\windows\system32>netsh wlan add filter permission=denyall networktype=infrastructure
That will block all wireless connections, let's see what Windows says:


Looking good, but now I need to add my whitelisted connections:
C:\windows\system32>netsh wlan add filter permission=allow ssid=Donnerschlag networktype=infrastructure
Aaaaand now:


Success! Here are some other useful netsh commands for wireless networks:

Show current filters:
netsh wlan show filters
Which returns something like:
Allow list on the system (group policy)
---------------------------------------
    

Allow list on the system (user)
-------------------------------
    SSID: "Donnerschlag", Type: Infrastructure

Block list on the system (group policy)
---------------------------------------
    

Block list on the system (user)
-------------------------------
    SSID: "", Type: Adhoc
    SSID: "", Type: Infrastructure
You may want to blacklist just certain wireless networks. This is done by setting the SSID as appropriate and the permission to block:
netsh wlan add filter permission=block ssid=somewifinetwork networktype=infrastructure
There's also the ever-important delete filter command. The syntax after netsh wlan delete filter needs to match the syntax you used to add that filter.
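So, for example, undoing the whitelist entry from earlier would be:

netsh wlan delete filter permission=allow ssid=Donnerschlag networktype=infrastructure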

TechNet Library for Netsh wlan