Archive for the ‘Sys Admin’ Category

Make Turning Windows Off and On Again Great Again

Tuesday, December 5th, 2017

There’s a joke about all Windows-related problems being solvable by turning the computer off and on again. It’s one of the things that works most reliably about the OS. Well, even this has been broken by Microsoft in the Windows 10 Fall Creators Update.

When you get to the point where you have more than a dozen virtual desktops running with many different programs, your computer (not to mention your brain) will start to get overloaded. At times like these, I just want to hit refresh (to coin a phrase…) and get back to an unmussed system. However, a new “feature” in the Fall Creators Update of Windows 10 will try to reopen as many of the programs that you had running beforehand as possible. Not all programs will get restarted. Tiny programs like Notepad++ stay unloaded, but bloated behemoths that hang for 30 seconds to a minute while loading (like Excel and Visual Studio) get restarted, maybe with the files that you had opened, maybe not.

If you were using virtual desktops beforehand and had carefully separated your programs by task, all the programs will be in the first one, but you will still have several empty and useless virtual desktops open. Instead of getting a fresh start on a clear desktop, you are left waiting for programs to load and needing to close all the virtual desktops. Unfortunately, there’s no setting to just turn off this rather half-baked behaviour.

However, a little PowerShell can reduce the pain and shut down the computer more thoroughly.

I don’t want to accidentally trigger this, so the script takes the path to VirtualDesktop.exe as a parameter and then asks for confirmation:

Write-Output "This will close all your virtual desktops and shutdown the computer."
$Confirm = Read-Host "Hit Y to confirm."
if ($Confirm.ToLower() -ne 'y')
{
    Write-Output "Doing nothing"
    exit
}
 
Write-Output "Shutting down..."

Closing all the other virtual desktops can be achieved by wrapping MScholtes’ VirtualDesktop:

while ($true)
{
    $CountCommand = "$($VirtualDesktopExe) /C"
    $CountOutput = iex $CountCommand
    if ($CountOutput -match 'Count of desktops: (\d+)')
    {
        $Count = [int]$Matches[1]  # cast so the -gt comparison below is numeric
 
        if ($Count -gt 1)
        {
            Write-Output "You still have $($Count) virtual desktops."
            iex "$($VirtualDesktopExe) /R"
        }
        else
        {
            Write-Output "Only one virtual desktop at this point"
            break
        }
    }
    else
    {
        Write-Error "Unable to parse result of command '$($CountCommand)': '$($CountOutput)'"
        exit
    }
}

This simply invokes the program with the “/C” (count) flag and then calls it with “/R” (remove) until only one virtual desktop remains.

After that, the script just invokes the old-fashioned shutdown command from Ramesh Srinivasan’s article about this new feature.

shutdown.exe /s /t 0

On my desktop, I have a link to a script that invokes the PowerShell script with the location of the VirtualDesktop.exe file on my machine:

PowerShell -command "F:\Robert\Dropbox\local-scripts\_Common\CloseVirtualDesktopsAndShutdown.ps1 -VirtualDesktopExe F:\Robert\Dropbox\executables\Windows\x64\VirtualDesktop.exe"

Now I know that I will get a fresh start when I turn on my computer again. I’m not really sure why this isn’t the default behaviour.

One-liner to show logs without Rotated Backup files

Friday, May 25th, 2012

I’ve been looking at the Apache log files on a web server this morning. There are many virtual hosts on the machine and the log rotation scripts have created many numbered backup files of the logs. To make the directory listing more readable, I have been using the following one-liner:

ls -l /var/log/apache2 | grep -Ev -e '[[:digit:]]+(\.gz)?$'

This will only display the current log files, assuming that /var/log/apache2 is the directory in which you store your Apache logs and that you do not store any other files there.

I hope it helps.

New Domain for the Blog

Monday, February 6th, 2012

This blog will now be hosted at

http://www.reversing-entropy.com/

to reflect the title. Any requests to the old address:

http://impey.info/blog/

will be redirected to the new address.

Percidae

Saturday, March 6th, 2010

For the last few weeks I have been looking for a job as a programmer. One of the jobs that interests me involves writing VB.net. Many of my friends who are also programmers have howled with anguish at the mere mention of this technology.

In order to get a better idea of how the language works, I’ve written a small program for editing the PATH environment variable on Windows machines:

http://code.google.com/p/percidae/

I find myself editing this variable on various Windows machines pretty regularly. Every time, I am also annoyed by the small text box in the Control Panel dialog.

So far I’ve enjoyed the experience of writing with Visual Basic and can’t see what all the fuss is about. Having written PHP for years, I’m used to ignoring the comments of language purists. I’m much more interested in getting something working than in any imagined superiority of one language over another.

One-liner to SVN delete files that you have already deleted.

Saturday, January 10th, 2009

The following one-line shell command lets you delete files from your file system and then remove them from Subversion.

svn stat | grep '^!' | sed 's/^! *//g' | xargs svn delete
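
One caveat: xargs splits its input on whitespace, so any paths with spaces in them will get mangled. A Perl variant of the same idea that hands each path to svn as a single argument is:

svn stat | perl -ne 'system("svn", "delete", $1) if /^!\s+(.+)/'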

One-liner to make sure svn updates work

Sunday, January 4th, 2009

I’ve been having problems getting Subversion updates to complete reliably of late. I have an enormous working directory called ‘programming-projects’ that is basically a large list of svn:externals. It’s useful to be able to go to the root of that working directory and update everything at once. This is especially useful for checking the development versions of projects that are using the latest versions of Haddock CMS.

However, normally before all the directories have been updated, one of the external servers will return a 502 error (or similar) and the process will die. The best solution that I’ve found so far is to run the following command:

perl -e '$r = 1; while($r) { $r = system("svn up") } '

It simply keeps calling svn up until it is successful. It’s not very elegant, but it works. Is there an argument that you can give to Subversion that will achieve something similar?
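
If not, a slightly politer variant of the same trick gives up after ten attempts and pauses between them (both numbers are arbitrary):

perl -e 'for (1..10) { exit 0 if system("svn", "up") == 0; sleep 5 } exit 1'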

CLI Script to allow access to a directory in Haddock CMS projects

Monday, July 7th, 2008

I’ve added a little script that lets developers allow access to a directory relative to the project root directory of a Haddock CMS project:

PublicHTML_AllowAccessToDirectoryOnTheServerCLIScript

I refactored a bit of existing code from:

PublicHTML_RestrictAccessToDirectoryOnTheServerCLIScript

into an abstract CLI script class:

FileSystem_ExistingDirectoryRelativeToProjectRootCLIScript

If you ever need to write a script that does something with a directory that exists and is relative to the project root, then extending this class would probably be a good place to start.

How did Microsoft get Vista so wrong?

Tuesday, May 6th, 2008

Before anyone accuses me of being a Linux bigot, I would like to say that I’ve been frustrated by Ubuntu lots of times. Wireless networks on laptops have always been a bit of a bugger and my latest install on a partition of my laptop has been no exception. Getting an EEE PC has shown me how good Linux on a laptop can be, if it’s set up right by the manufacturers. Ubuntu does quite a good job at this but it’s certainly not for the impatient. Dell, Samsung, anyone, please start selling more Ubuntu laptops with all that boring driver nonsense sorted out!

Working with Debian servers at the command-line has never been anything but an unalloyed pleasure. I have an extremely complicated set of tasks that I want to achieve and the stable version of Debian has always done them quickly and painlessly. Some stuff takes research. I’ve no idea how much of my career has been taken up with reading tutorials on the syntax of UNIX config files, probably more time than I’m going to get back. But once you know something and it works, it works well. On servers (which, at a glance, are indistinguishable from their counterparts from the 1970s), the bottlenecks have always been my intellect, knowledge and imagination.

And then there’s Vista.

At first I thought that it was brilliant: good looks, nice fonts, WinKey+Tab 3D funkiness and so on. But then you use it and before long you need a shot of whiskey just to calm your nerves.

If I access an FTP server (even on a cheap shared host) or SSH daemon, logging on and moving from directory to directory is quick. Most programs, including Nautilus out of the box on Ubuntu, allow you to store previous connections. XP used to remember the SMB shares that I had accessed. However, in Vista, every time I go to the network window in the start menu, the list has to be refreshed. Why? And why does it take so long? Does the computer ping the whole of 192.168.*.* or something?

Eventually, you get a list of computers on the LAN. You start to move about, but just going from one folder to another can take up to a minute. Eventually you get to a folder that locks up the computer for a few minutes, Explorer tells you that access is denied and then restarts itself.

You get a link to

http://support.microsoft.com/?kbid=937097

which tells you that an error occurred and explains how to load up the event viewer, which also tells you that an error occurred. Great! I guess I had better contact my system administrator.

I had hoped that the Vista service pack would sort this sort of nonsense out but it hasn’t.

I’m loath to spend an evening hacking away at config files on the Ubuntu partition of my laptop just to get the sodding wifi adapter to work, but anything’s gotta be better than the soul destruction that is using Vista all day, every day.

People talk about Cognitive Surplus:

http://jeremy.zawodny.com/blog/archives/010218.html

I guess that with any system where the bottleneck isn’t your intellect, like Vista (and Ubuntu some of the time), the thoughts that should be going into your work end up getting clogged. Hence the need for hard liquor…

YAWAF?

Thursday, April 10th, 2008

I run this blog using WordPress on a shared server. I could use one of a number of free blogging services like blogger.com and save myself the effort of having to update the software every now and again and deal with the other admin tasks that come with running a blog this way. I don’t because I’m just the tiniest bit paranoid. I don’t want to sign everything over to Google; although I do already use Gmail, Google Apps, Picasa, Google Code and Google Analytics, so it’s not like I could be much more dependent on them. For a second after looking at the list that I just wrote, I got really paranoid. And then I felt resigned – maybe the blog should go on Google as well.

I just took a look at:

http://code.google.com/appengine/

Part of me worries about this and part of me is resigned to it. Even a little seduced by it. The scalability, free hosting and (I assume) increased reliability are very attractive.

I guess that it won’t take over the enterprise web app market for a while, but that has to be a target. But how much does one want to lock oneself into a company for this sort of thing? Can we deploy apps on other servers with other companies?

Stochastic Selection in Back up script?

Monday, February 18th, 2008

I maintain a number of web servers and have a number of scripts running on the crontabs of various machines that take care of backing up our data. For example, the output of running mysqldump for each of our databases is saved in dump files and copied to several backup machines. Once data has been copied to one of our backup machines, it is copied to a different directory and named according to the time that the data was copied. This way, we have lots of copies of our data from different times.
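
The copying itself is nothing clever; on a backup machine it amounts to something like this sketch (the directory layout, the .sql suffix and putting the timestamp on the directory name are all assumptions for illustration, not the real scripts):

#!/usr/bin/perl
# Sketch of the copy-and-timestamp step on a backup machine.
# The directory layout, the .sql suffix and the date format are assumptions,
# not the real scripts.
use strict;
use warnings;
use File::Copy;
use File::Path qw(mkpath);
use POSIX qw(strftime);

my $incoming = '/var/backups/incoming';    # where the fresh dump files arrive
my $stamp    = strftime('%Y-%m-%d-%H%M%S', localtime);
my $archive  = "/var/backups/archive/$stamp";

mkpath($archive);

opendir(my $dh, $incoming) or die "Cannot open $incoming: $!";
for my $file (grep { /\.sql$/ } readdir $dh) {
    copy("$incoming/$file", "$archive/$file")
        or die "Copying $file failed: $!";
}
closedir $dh;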

So that the hard drives don’t get filled with old copies of data, we also run scripts to delete old files:

http://haddock-cms.googlecode.com/svn/tools/server-admin/trunk/bin/delete-old-dump-files.pl

This isn’t very satisfactory as it simply deletes files that are more than a set number of days old. Also, different databases are different sizes. Some are only a few MB, so I might as well have hundreds of copies of the dump files going back months. Some are hundreds of megabytes, so we can’t afford to keep everything for ever.

I’ve been thinking about updating this script so that I can achieve the following:

  • I want as many recent copies as possible.
  • I want a few really old copies.
  • I want the size of the backup dir to never go above a fixed level.

The current script doesn’t do this; it simply deletes files that are older than a set number of days.

The algorithm that I’ve come up with for achieving this is as follows:

  1. If the size of the backup dir is less than the set maximum, exit.
  2. For each file in the backup dir older than a given age, assign a score equal to the age in seconds times the size of the dump file in bytes.
  3. Pick a file non-deterministically and delete it. The probability that a file will be chosen is proportional to the score in step 2. Go to step 1.

I’ll probably want to play around with the score function a bit, e.g.

if the age is A and the size is S, f(A,S) could be

A + S
A * S
A^k + S^l

and so on.
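
To make that concrete, here is a rough Perl sketch of the algorithm using A * S as the score (the directory path, size cap and minimum age are placeholder values, not anything from the real set-up):

#!/usr/bin/perl
# Rough sketch of the stochastic deletion idea, scoring each file by age * size.
# The directory, size cap and minimum age below are placeholders, not real values.
use strict;
use warnings;
use File::stat;

my $backup_dir   = '/var/backups/archive';   # placeholder path
my $max_bytes    = 10 * 1024 ** 3;           # keep the directory under ~10 GB
my $min_age_secs = 7 * 24 * 60 * 60;         # never delete anything under a week old

while (1) {
    my @files = grep { -f $_ } glob("$backup_dir/*");

    # Step 1: if the directory is already under the limit, we are done.
    my $total = 0;
    $total += -s $_ for @files;
    last if $total <= $max_bytes;

    # Step 2: score each sufficiently old file by its age in seconds times its size in bytes.
    my %score;
    for my $file (@files) {
        my $st  = stat($file) or next;
        my $age = time() - $st->mtime;
        next if $age < $min_age_secs;
        $score{$file} = $age * $st->size;
    }
    last unless %score;    # nothing is old enough to delete

    # Step 3: pick a file with probability proportional to its score and delete it.
    my $sum = 0;
    $sum += $_ for values %score;
    my $pick   = rand($sum);
    my @names  = sort keys %score;
    my $chosen = $names[-1];
    for my $file (@names) {
        $pick -= $score{$file};
        if ($pick <= 0) {
            $chosen = $file;
            last;
        }
    }
    unlink $chosen or die "Could not delete $chosen: $!";
}

Because the score grows with both age and size, big old dumps are the most likely to go, but the randomness means that a few very old files can survive for a long time, which is roughly the mix of recent and ancient copies that I am after.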

Luckily, I’ve got more than one Debian box backing up our server data so I can play around with the script without putting the data at risk.