What’s with this anti-directory structure movement?

Posted by Thomas Sat, 28 Jul 2012 00:08:47 +0000

His fundamentally flawed premise is that people can remember where they put things. They can’t.

People do fundamentally understand the concept of folders, but they:

  1. don’t know which folder something is in, and
  2. (assuming they can remember which folder it’s in) have a hard time navigating to that folder (how do I get to the “My Photos” directory from whichever directory I’m currently in)

For him, he can easily create, manipulate, and navigate a tailor-made structure of folders. Most users cannot, and I’d argue the average user just puts stuff on their Desktop (which I’d guess isn’t too far off from what Apple is offering).

He brings up a good point, though, which is that just as most people don’t like naming directories, they don’t like naming files either.

But I don’t buy his premise that people can’t be bothered to remember “parts of its content”. If you can’t remember some part of what you’re looking for, how can you ever hope to find it? Do you open every file in the directory until you find the one you’re looking for?

If so, then this is a fundamental problem going forward. If people aren’t inclined to search in general (or can’t remember to) and can’t remember distinguishing features to look for, then I have no idea how they will ever find anything.

What do people remember about what they’re looking for in this case if they can’t remember part of its content? Colors? Fonts? Dates?

Posted in Technology | Comments Off

Netflix queue hacking

Posted by Thomas Sun, 20 May 2012 20:00:58 +0000

If you ever want to filter your queue by some min/max rating, and possibly look only at a specific genre, paste this into your browser’s JavaScript console. YMMV. Class names subject to change. Void where prohibited.

function getstbrMaskFgSpan(element) {
  var spans = element.getElementsByTagName("span");
  for (var span, k = 0; span = spans[k]; k++) {
    if (span.className.match(/stbrMaskFg/)) {
      return span;
    }
  }
  return null;
}

var minRating = 3.7;
var maxRating = 5.0;
var tables = document.getElementsByTagName('table');
for (var table, i = 0; table = tables[i]; i++) {
  // only touch the queue table
  if (table.className != "qtbl") {
    continue;
  }

  var rows = table.getElementsByTagName('tr');
  for (var row, j = 0; row = rows[j]; j++) {
    // filter out tv shows
    if (row.innerHTML.match(/series_only/)) {
      row.style.display = "none";
      continue;
    }

    // possibly filter for just action movies
    if (!row.innerHTML.match(/Action/)) {
      row.style.display = "none";
      continue;
    }

    var span = getstbrMaskFgSpan(row);
    if (span != null) {
      // ratings are encoded in the class name, e.g. "sbmf-37" => 3.7
      var rating = span.className.replace(/.*sbmf-([0-9]+).*/g, "$1");
      rating = parseFloat(rating) / 10;
      if (minRating < rating && rating <= maxRating) {
        row.style.display = "";
      } else {
        row.style.display = "none";
      }
    }
  }
}
Posted in Technology | Comments Off

Still got it

Posted by Thomas Sun, 22 Apr 2012 17:19:53 +0000

It’s a good feeling knowing that I can still bend ldap, kerberos, and radius to my will. :P Squid gave me some trouble; since it logs to a strange place, it took me longer than it should have to find a configuration problem. By and large the important pieces are working again, and all running shiny new 3.2 kernels (the first time I’ve used a 3+ kernel). I’ll consider this my late contribution to World Backup Day, since really I should have moved off of the hard drive in my domU ages ago (since it always seems to return errors whenever I try to back up xen images). I really should go ahead and migrate my domU to a new fanless box that doesn’t take 10 minutes to boot up. I’ve been eyeing some mini-itx fanless boxes, which would allow me to migrate the important things over to new hardware before the old hardware fails… But all in all, if I only have to spend one weekend every 2+ years doing (non-filer) admin work on the network, I think I have to consider that a super win.

I should have been taking better notes, but here are some things I took away from the weekend:

  • squid3: Setting “error_directory /usr/share/squid-langpack/en”, since it was looking in /usr/share/squid3/errors/templates/ for a reason I still haven’t figured out
  • installing testing debian: requires stupid ethernet firmware. Really?!? I understand the whole free software thing, but this was a little too pure. Reminds me of having to put ethernet drivers/modules on a floppy during an install. Made me think, “man, I really shouldn’t have to do this”. Ditto for the netboot initrd not having the hard disk controller drivers I needed, forcing me to connect up a cdrom. I gave up on getting serial working, as it’s never worked out as well as I’d have liked and always seems to bite me when I need to debug (e.g. no vga console output when I need it most).
  • dist-upgrade: I’d like to think I could have upgraded the domU all the way from like 4.0 or whatever it was to current testing, but I just didn’t have the heart. I figured it’d take me less time to just install from scratch than to fight with xen, udev, libc, grub, grub2, device naming, etc, etc. I used to think that you could always dist-upgrade a box and it would usually work out ok. Now I’m not so sure, since it seems like there have been a huge number of changes, which make automagic dist-upgrading very painful.
  • backups: I really should do a better job. Maybe one day it’ll bite me hard enough I’ll do better — that or it’ll teach me that the data wasn’t that important in the first place. :)
  • ldap: It still seems like the current version of ldap has ssl/tls issues, since it’s compiled against gnutls (http://bugs.debian.org/645810). I’m too lazy to fight with this one at this point, so I just disabled tls on both the client and server side. I’d like to have tls working, but it’s not the end of the world. I did have to fight with the upgrade process to get it to complete. Though I think the data should be cleaner now, since there was some cruft in there that I was able to remove. It did take quite a bit of finagling, though…
  • xen: Installing the latest version of xen directly from a fresh install seemed pretty painless (though I don’t recall it being super painful before, either). After a while I figured out the steps required to upgrade the dom0s, so eventually it went smoothly (but it did take some time to upgrade many++ packages to be able to install the correct kernel).
  • kerberos: Since I had to restore my auth xen image from backups, halfway into all of this I realized that the kerberos database was the one thing whose current data I really needed (current passwords, machine credentials). A chroot + kerberos dump/restore fixed it up nicely. Dodged a bullet on that one.
  • freeradius: I finally got this working this morning. I thought it was fixed last night, but this morning my phone wasn’t connected to wifi. After a while I realized that the config was borked. Since my config was pretty old, it looks like there might be more standard ways to configure ldap, but after some time trying what appeared to be their way, I just reverted back to my old configs which did work with the current version of freeradius. I wish it was a little less hacky (dummy certs and not the “recommended” way to setup ldap), but hey whatever works. :)
  • homogeneity: While flipping back and forth between vms, I realized that several common configs (ldap, apt.conf, sources.list) varied some. I think my plan was to get them all synced via puppet, but since that never got off the ground I really should sync them up now…
  • ipv6: It continues to work out of the box, but I realized that sometimes I was doing apt-get downloads through my ipv6 tunnel, which means that it’s traversing the US — which explains why sometimes the downloads were slower than expected. One of these days I’ll migrate to a closer endpoint. :)
  • linux 3.2: I was a little apprehensive that everything (joe random userland tool) would work with a 3.2 kernel. Haven’t yet found anything broken by it.

My actual notes for upgrading dom0 vms:

# update console to listen on hvc0 instead of tty0 (before
# starting the vm)
mount /path/to/image /mnt/loop0
vim /mnt/loop0/etc/inittab

# start the vm
xm create -c image.cfg

# add hvc0 to the list of allowable local consoles
# (so root can login locally);
# h/t to http://docs.quantact.com/xen-fs-changes
# (even though I didn’t wind up updating fstab or mknod’ing anything)
echo "hvc0" >> /etc/securetty

# update sources.list to use testing
vim /etc/apt/sources.list

# sync to testing
apt-get update

# these seem broken w/ the new kernel, so upgrade them first so
# they don’t break other apt installs
apt-get install findutils debconf

# ditto my older version of cpio caused trouble during the
# kernel install
apt-get install cpio

# install the kernel so when things depmod it won’t spew a warning
apt-get install linux-image-3.2.0-2-686-pae
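For reference, the inittab edit in the first step boils down to giving hvc0 a getty line in place of the tty1 one. The exact getty flags below are representative of a Debian-era inittab, not copied from this particular system:

```
# /etc/inittab inside the mounted image: spawn a login getty on the Xen console
hvc0:2345:respawn:/sbin/getty 38400 hvc0
```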

Posted in Technology | Comments Off

528 days

Posted by Thomas Sat, 21 Apr 2012 14:46:15 +0000

[root@xen0 ~]# uptime
17:42:35 up 528 days, 14:29, 1 user, load average: 0.08, 0.11, 0.05

I’m probably going to regret this later this afternoon, but I think I’ve sort of painted myself into a corner and will need to upgrade my Xen setup. This wasn’t quite what I was planning on messing with today…

Posted in Technology | Comments Off


Posted by Thomas Sat, 21 Apr 2012 10:23:30 +0000

It’s been quite a while since I spent an evening fighting to get Debian installed on a new machine. But my Realtek RTL2832 DVB-T arrived yesterday. So I wanted to try it out, which required a specific kernel + modules, etc, etc. So I pulled out an old machine and installed it. I didn’t quite get the tuner working by the end of the night, but I did get it recognized properly by the kernel. Since I didn’t really want to play with it too much anyway, I’ll probably call that good enough and pack it away to play with at some later date.

In other news, I did receive the tablet. It doesn’t seem to have bluetooth and wouldn’t allow me to install some things like Maps, Youtube, and Chrome from Market directly. But then I remembered I could side load them, so I was able to install them. It seems a bit sluggish sometimes, like during rendering or touch. Sometimes I have to toggle the screen off and on again for it to recognize touches. So I guess there are some subtle bugs around. But for the price I paid, I guess I can’t complain too much. So all in all, not too bad of a purchase.

Posted in Technology | Comments Off

Software Radio

Posted by Thomas Mon, 09 Apr 2012 10:48:51 +0000

Since it became known that some software radio hardware was on the market pretty cheap, I figured I’d pick one up. It’s on a slow boat from China, so I expect it in a month. :) Not sure if I’ll do anything with it, but it’ll be nice to have around, just in case.

Posted in Technology | Tags , , , , | Comments Off

Another 7″ Tablet

Posted by Thomas Mon, 02 Apr 2012 21:19:02 +0000

I ordered another 7″ tablet over the weekend. I wouldn’t be surprised if it’s shipped direct from China. I had been thinking about getting a Spark/Vivaldi tablet. But then I realized that I didn’t really want to deal with less-than-polished software, and that since this was really just going to be something to tinker with, something running Android would work perfectly. So I got one of these. It has the same processor as the Spark/Vivaldi. It doesn’t have the full 1G of ram or as high a resolution as some of the other tablets do, but supposedly it has built-in bluetooth (only time will tell if it does or not). I figure trading a bit less RAM/resolution for a cheaper tablet with built-in bluetooth is worth it. Another prime criterion was being able to get it on my network. In theory it ships w/ Android 4.0, so it shouldn’t be a problem (again, crossing my fingers that the description is accurate).

Posted in Technology | Tags , , , | 3 Comments


Posted by Thomas Fri, 18 Jun 2010 21:15:01 +0000

ext3 is dead to me. Long time readers will know I normally go with xfs (yes, I’m an xfs fanboy), but sometimes it’s easier to just go with the default. No modern filesystem should run out of inodes. This is simply inexcusable. :/

Posted in Technology | Comments Off

Hot Card

Posted by Thomas Fri, 21 May 2010 20:01:47 +0000

A fanless graphics card isn’t much use if it locks up without a fan blowing on it…

(This is really just a poor excuse for a post so that I have one for May…)

Posted in Technology | 1 Comment


Posted by Thomas Wed, 28 Apr 2010 21:26:53 +0000

This is probably the reason why I’ll never use Mercurial as a version control system: its complete inability to revert to a known, sane state without deleting everything and starting all over. No thank you; I’d prefer not to have to re-download tens of MB of data onto my NFS home dir when I have a perfectly good, already almost perfect copy on disk.


Posted in Technology | Comments Off

Gnome’s Empathy

Posted by Thomas Thu, 22 Apr 2010 01:37:33 +0000

Now I remember why I had a bad feeling about Gnome’s Empathy (http://live.gnome.org/Empathy). It’s because after having to install deps that aren’t bundled by default (what good is a program that doesn’t come with any backends by default?), and starting 5+ daemons that I didn’t have to have running before, it still doesn’t open a window.

Posted in Technology | Comments Off

Buzz props

Posted by Thomas Mon, 29 Mar 2010 16:37:10 +0000

I’m not one for too much buzz love, but I gotta give Buzz props for 2 things:

1) allowing me to comment on things directly in the ui that notified me of the new content (I suppose facebook does this with their new reply-to feature, but it seems icky)
2) allowing me to cc people directly in my comment and have it show up in their inbox

Both features I’ve apparently gotten used to and miss when I’m using software that doesn’t have them.

Posted in Technology | Comments Off

More Drives!

Posted by Thomas Mon, 22 Mar 2010 23:48:41 +0000

It’s only taken me 4+ weeks, but the filer is finally upgraded. Here’s hoping my random iops double (along with significant gains in streaming writes). The real reason I did this was that the first raidz stripe was ~completely full (30GB free, down to 9GB free after several resilverings), which I think effectively reduced my io to the single new stripe. Once this gets baked in for a bit (and I’ve had time to make some more backups), I’ll throw my 5x400G drives back into the array for even more space. :)


Before:

[root@filer0 tgarner]# zfs list tank
tank 3.96T 1.05T 52.7K /tank
[root@filer0 tgarner]# zpool list tank
tank 6.37T 4.95T 1.41T 77% ONLINE -

After:

[root@filer0 ~]# zfs list tank
tank 3.98T 3.17T 51.1K /tank
[root@filer0 ~]# zpool list tank
tank 9.09T 4.97T 4.12T 54% ONLINE -
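When the time comes, throwing the old drives back in is a single (and permanent) command against the pool. The device names here are placeholders, so this is a sketch rather than something to paste:

```
# hypothetical: add the 5x400G drives back as another raidz vdev
# (zpool add is irreversible -- check `zpool status tank` and the
# device names carefully before running anything like this)
zpool add tank raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0
```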

Posted in Technology | Comments Off

why zfs and I aren’t on speaking terms

Posted by Thomas Sat, 20 Mar 2010 13:07:48 +0000

Per Cole’s request:

I’m not exactly sure what happened (’cause it happened just last night), but I ~lost a whole bunch of data. Granted it was a scratch disk, and granted I have a backup from the middle of January; that’s not the point. So zfs has snapshots, right? Well, that’s what I have backups of: the snapshots. There were like 4 snapshots. I have perfect replicas of my first two snapshots. I even have perfect incremental snapshots between each. But I can’t seem to get my backup to take the incrementals. That was part of the issue. Now yesterday, likely due to this scratch drive going bad (not superbad, just retrying sector reads a lot), the box kernel panicked. I saw it just before I went to work. It rebooted and came up fine. I went to work. I’m pretty fed up with the filer not being in a perfect state of being, so I sit down after work to beat it into submission. First item on the agenda is getting a perfect copy of the scratch disk, or at least figuring out what my good backups are missing. Well, I can’t even read the darn thing. It says something about corrupt metadata. At this point I have very few options. Basically they amount to a `zpool export` and `zpool import`. And I’m pretty sure that once I export it, I won’t be able to get it back, ever again. So, feeling that I have no other recourse, I export, and guess what happens: I can’t re-import it. Well, fooey. I basically take my lumps, have a decent backup from January, recover from elsewhere the data I know I am missing, and curse zfs for being so smart that I have like zero recourse to debug without begging Sun engineers on a mailing list for how to recover this drive. Since the o/s that I’m running is quite old, it is also possible that newer versions of OpenSolaris could be better about being able to import this disk. I tried to upgrade my install of Nexenta last night, to no avail. Neither dist-upgrade nor a fresh install worked for me. So I’m stuck for a bit.
Maybe over the course of the next few months, I might be able to recover it. I’m going to try not to worry about it too much. It’s just a super pita that it’s such a huge black box. Even if I, say, wanted to ddrescue the whole disk, I have no idea how to import the result into the os. And there are tools (like zdb, etc) that in theory you might be able to use to recover it, but that’s all voodoo to me. Anyhoo, that’s the gist of it. I lost another 1TB disk the other week. I still haven’t put in all of the 5 1TB disks that I bought some time ago, which is the main goal of this weekend. I got another disk at Microcenter. I did a replacement last night, so I have 3 more to go. Given that resilvers take 6 or 8 hours (depending on how much data is on the stripe), and that i/o basically comes to a crawl when this is going on, it’s going to be a long weekend.

Posted in Technology | 1 Comment


Posted by Thomas Sat, 20 Mar 2010 01:05:57 +0000

zfs and I are not on speaking terms right now…

Posted in Technology, Tweets | 2 Comments

Handle with care

Posted by Thomas Tue, 09 Feb 2010 01:10:54 +0000

!@%$($)@_+!! it. If ever in the future I try again to move myself, please stop me. I’ve gone through 2 power supplies and now my video card doesn’t seem to want to cooperate. At this rate I’ll have a whole new computer within the month. :( All of these broken components were not in the budget. Thank goodness I had the (flippant) desire to put the filer and the tv in the truck. In hindsight I should have put my workstation in there as well. Putting it at the very back of the trailer, where it would get the most sway, have the most room to move, and get the most scratched up, was a bone-headed move. I suppose I should be thankful I didn’t lose any data, though I’m wondering now about the hard drive in the workstation, if it’s going to hold out or not. It’s had a good run, and SMART checks out, but it has some SMART checksum errors, which is troubling, along with some less than desirable sounds emanating from it. And of course both Fry’s and Microcenter were closed by the time I discovered the video problems. Thanks a lot, UPS, for delivering my package at like 8pm (12 hours after it was put on the truck).


Posted in Technology | Comments Off


Posted by Thomas Sun, 07 Feb 2010 17:33:39 +0000

Having virtually complete silence in the apartment is definitely a welcome change. For almost a whole week, I didn’t have any computers on. With the rack squirreled away, the living room was completely silent. I powered up my workstation, but it was too noisy compared to the silence. So I bought some new, quieter fans to swap in, and now I can hear my hard drive chatter, which I can’t say I’ve been able to hear in longer than I can remember. :)

Posted in General, Technology | Comments Off

Chrome gripes

Posted by Thomas Wed, 23 Sep 2009 20:14:15 +0000

Chrome doesn’t:

  • Expose proxy settings through the ui (they’re set via environment variables). So if I want to change them, I have to go to the cli to set env vars, then restart chrome.
  • Display link urls in the status bar when I hover over links. Apparently it does this. Don’t know what I was smoking.
  • Let me middle click in the body of the page to load a new url (this is fixed upstream, but not in the versions I’m running).
  • Allow css ad blocking (http://www.floppymoose.com/).
  • I have to add a trailing slash to the end of everything so it doesn’t try to do a web search for whatever I type.
  • Update: Doesn’t let you hit ‘/’ to start searching.
  • Update: When searching doesn’t let you hit enter when matching a link to go to that link.
  • Update: It seems to take its sweet time sometimes doing what appears to be a dns lookup on the proxy. TTLs anyone?
  • Update: Selected text isn’t automatically the desired search text when opening the search box.
  • Update: Can’t shift delete mistyped urls in the omnibox.
  • Update: Chrome automatically selects text when the omnibox gains focus, clobbering anything I had in my clipboard. Something similar happens with the find field, except there whatever I had in the clipboard goes away completely.
  • Update: Can’t control click and drag to select multiple rows of a single column in an html table.
  • Update: When selecting space delimited text, pressing delete/backspace will actually delete more than is actually selected.
  • Update: Backspace no longer goes to the previous page.

Posted in Technology | Tags , , , , , | 4 Comments


Posted by Thomas Wed, 23 Sep 2009 00:19:29 +0000

The place where people can’t be bothered to figure out where their friends’ blogs are, call, email, or write their friends, nor be bothered to read or write more than 140 characters. Freeloaders welcome.

Posted in Technology, Tweets | 1 Comment

Labor Day

Posted by Thomas Mon, 07 Sep 2009 20:34:49 +0000

…has spent more time than I’d care to admit this weekend fighting with python’s complete and utter lack of timezone support. Please add this to the list of reasons why I hate python. And please don’t waste my time with “you should just use <insert third party library here>”. They all suck too.

If I have a timestamp with a f*ing -0400 or EDT in it, you should know perfectly how to convert it into epoch! There is no excuse for that shit. If I have to tell you that EDT == UTC-4, you fail. Doubly fail if you silently throw away timezone information, or well you can just read this and this. At least I’m not the only one…


Posted in Technology | Tags | 1 Comment

Adventures with puppet

Posted by Thomas Sun, 02 Aug 2009 17:09:26 +0000

Last Sunday I finally had sufficient desire to get puppet working. But I ran into issues and didn’t get it working, sort of like If You Give a Mouse a Cookie.

  1. Install puppet => realize the version in etch is a little old, my config doesn’t work out of the box, the documentation online doesn’t jibe with the behavior I’m seeing with this version => debugging is difficult
  2. Decide to upgrade to puppet in testing => make a backup of the machine => disk has bad sectors in the image => machine doesn’t have ddrescue => machine doesn’t have internet access (for reasons still tbd) => download ddrescue deb => copy to machine => install ddrescue => finally make a backup of the image w/ ddrescue
  3. Try to verify the new image is good by firing it up in the vm => vm doesn’t like the image (fsck barfs due to physical # of sectors not matching the ext3 superblock) => finally figure out I can try to resize the ext3 disk image => image boots!
  4. I get tired of this, and don’t get around to upgrading to testing

Posted in Technology | Comments Off

Housing Maps + Android

Posted by Thomas Mon, 20 Jul 2009 21:46:11 +0000

Would be nice if someone would write an Android app that combines street view and housing rental listings. Sort of like housingmaps.com, but with a real time display of your surroundings, overlaid with the listing. Also like this but showing the real buildings, etc on the screen. That way, I could be standing in a park, and look all around me and see which buildings have rentals available, and would be able to see what the outside looks like without having to actually go to the building.

Posted in Technology | 1 Comment

typedef struct

Posted by Thomas Mon, 22 Jun 2009 23:44:23 +0000

…is in circular typedef struct reference hell.

It’s like I’m that kid on America’s Funniest Home Videos picking up one ice cube, only to drop another. Rinse and repeat, except I’m moving #includes around…

Posted in Technology, Tweets | Comments Off

Anchor tags to the rescue

Posted by Thomas Sat, 02 May 2009 16:42:07 +0000

It dawned on me late this last week that twitter has created an artificial regression in the web. A little known tag in the HTML spec (the so-called ‘anchor‘ tag) allows one to make a link to a web page, but instead of showing the entire URI, one specifies the constant ‘click here’. For example, click here for more information about ‘click here’. Had twitter been on the cutting edge of the HTML spec, they might have not started this abominable trend of hurting the internet. Click here for the antidote. :)

Posted in Technology | Comments Off

Home network crawl/index/search

Posted by Thomas Fri, 17 Apr 2009 09:54:19 +0000

Why is there no componentized, modular, gpl’d crawl/index/search service for home networks that would search your smb, nfs, ftp, upnp, and daap shares, index them, make them searchable, and add value to the results (like cover art, imdb info, etc), and then serve it up for other media renderers? It’d be like the locate cron job, but distributed, and with more value add.

Posted in Technology | 1 Comment

Re: My name is not a URL

Posted by Thomas Tue, 31 Mar 2009 14:53:42 +0000

Chris Messina wrote a little blurb over at his blog. I read it shortly after he posted it, and thought to myself, “self, I disagree”. So here we go. :)

Vanity urls don’t seem terribly harmful at first glance, but they definitely do seem a bit silly. I can understand that having a global namespace like that quickly leads to collisions, so people are forced to constantly modify whichever handle they prefer. There are also certain things about people on the internet I simply can’t understand (like why they’d clamor over vanity urls, and why most people on myspace choose the absolute worst, ugliest web design principles possible — yet people love them for it).

The omission of a memorable url for my “home” is definitely a good design pattern, as demonstrated over and over by such intelligent folks as Papa Goog and Flickr. Having what is basically a permalink, a static url that forever points to a particular document, photo, or whatever, is a good idea, especially when compared to urls that can change (like WordPress’s, which change depending on edits I make to the title of the post). This is a no-brainer. Check.

He makes some more decent points up until:

That everyone on Facebook has to use their real name (and Facebook will root out and disable accounts with pseudonyms), there’s a higher degree of accountability because legitimate users are forced to reveal who they are offline. No more “funnybunny345” or “daveman692” creeping around and leaving harassing wall posts on your profile; you know exactly who left the comment because their name is attached to their account.

This is where I really start “not buying it”. First and foremost, I don’t think this is a case of correlation equaling causation. Just because names are unobfuscated doesn’t mean that the quality of the comment/content is automatically driven up. I would argue that there are several reasons why the quality is so much better, completely outside of what I call myself. 1) You can’t leave messages on people’s walls you aren’t friends with. You can’t even see most people’s profiles. This is effectively whitelisting, and it works like a charm. If I don’t know you, or I change my mind and don’t like you anymore, I can block you. Everyone who’s ever read youtube, slashdot, or digg comments can relate. Which raises the question: why doesn’t flickr have this sort of watered-down spam problem? 2) Everyone I’m friends with, I actually know (or like 99%) in the real world. The people I’m friends with are people I have at least some interest in having some sort of conversation with (marginal as that conversation may be). That model builds in un-spammy-ness. Which kind of leads me to… 3) Facebook started out in colleges. And while I don’t know the demographics, I’d imagine that the majority remains in that original demographic, if now only a bit older and gradumicated. I think this also builds in high-quality content, due to the fact that the majority went to college, and it’s not some 12-year-old from New Jersey commenting like an idiot on Youtube.

Anyway, I’ve tried to read his post a couple of times, and maybe I’m missing the point. I agree that narrowing search scope can be useful in certain circumstances. But I still don’t quite grok how showing funnybunny345’s real name in a chat list or in my email or on a blog post or on twitter significantly increases the value of the content or relationship, given that either way I know who that person is. Shouldn’t it be a simple feature of the software to let me give the contact in my list an alias, or simply rename them to something more memorable?

Unless his whole point is that there are so many sites out there and people are forced to keep evolving their handles so much so that you can’t really remember who funnybunny345 is in real life. And that distinction probably does have value. But gmail and facebook are my primary means of communication, and everyone there has a first name, a last name, and maybe a picture, so perhaps I’ve just not hit that wall yet; that use case of not being able to recall who that person is who just commented on my [whatever]…

Posted in Technology | 1 Comment

The filer

Posted by Thomas Mon, 30 Mar 2009 22:52:39 +0000

is feeling much better after getting a much needed ram swap (it was panicking all the time). And I am contemplating whether or not updating the l7 filtering on my firewall is worth the new kernel + wrecking its 163 day uptime.

Posted in Technology, Tweets | Comments Off


Posted by Thomas Sun, 25 Jan 2009 16:28:30 +0000

A couple of weeks ago, I had an awesome idea (well of course I think it’s awesome, it’s my idea). Laurie was having some egregious virus/spyware problems. Problems almost to the point where I thought she’d either have to reinstall or take it somewhere and spend $$$. In the end, though, she was able to get it righted, and last I heard, all was mostly right with the world.

Several years ago I ran across Netsquid. Basically it “takes an Intrusion Detection System like Snort and transform it into an Intrusion Prevention System”. It sits between the internet and your computer, and tries its best to keep you from getting viruses, alerts you when you have one, and will shut you off from the network if you do have one.

I’d sort of forgotten about my idea, but recently there has been some very widespread viral activity. And since Microsoft seems neither able to find a decent security model nor likely to find itself with dwindling consumer market share, I ask myself: why haven’t all of the broadband router companies put this sort of functionality into their routers?

Wouldn’t it be nice to have Linux looking after you, even if your machine is running Windows? The virus protection is super easy and simple, as there are multiple anti-virus products for linux that are well maintained. Spyware gets more tricky, as I don’t think there are any decent spyware detectors for linux. Basically this device would act as your router, inspecting all of your traffic, disallowing viruses and spyware to enter your network, watching out for suspicious outgoing network activity (like noticing c&c traffic or secondary payloads), alerting you to the fact that there’s a problem, cutting you off from the internet (and possibly segregating you from the rest of the local network), and keeping you from visiting phishing sites (via Google’s Safe Browsing initiative). It’s like a trifecta of virus, spyware, and phishing protection.

Technically it’s probably pretty simple to whip up, but nothing I would feel like supporting for 100 million people. Anyone who reads this, implements it, and becomes rich off of it, please grant me some slice of the pie. :) kthxbye

Posted in Technology | Comments Off

The world is my rss feed

Posted by Thomas Sun, 18 Jan 2009 00:01:05 +0000

I don’t really get some of the current movement to open the borders between the glut of the web’s social apps. From what I’ve seen, most federation revolves around me being able to use, say, my Facebook profile to interact with my friends on Myspace. I don’t get that — I don’t grok it. Standards have, of course, been the reason why the internet works, and why you can send email from a yahoo account to a gmail account. And that’s all well and good, and if you can get that to work, then I guess I’ll applaud your achievement.

But at the end of the day, you still have all of your data in commercial silos, where they want to own the data you’ve inputted, all of your relationships to your friends, your photos, your videos, basically any content you create on their site. I take great offense at that.

One thing that I’ve learned over the past several years is that creating change is hard, and usually virtually impossible. Getting these different commercial entities to agree on common interoperability standards, and then to maintain that interoperability in light of a quickly changing web 2.0 landscape, must be analogous to an act of congress.

Which is why I’ve grown to understand why papa Goog used rss and atom as the basis for their gdata api. I think that more often than not, black swans will leap out at you and dominate the landscape. I would classify the successes of http and rss as examples of this. I doubt that, if they were introduced again today, they would receive the same amount of adoption. Take ipv6 as an example of a technology that’s good, but hasn’t yet received widespread adoption, even in light of the dire consequences of non-adoption.

Which is why, if I had my way, I’d just stand on the shoulders of the http and rss giants to achieve my open, social web (not to be confused with opensocial). As I’ve mentioned before, it seems to me that if sites were to publish rss feeds of everything, that an aggregator app would be able to scrape these feeds to get a complete view of the world. Why do I need to build this complex interoperability, this ability from my facebook account to post on your myspace, when honestly all we want are common data apis?

Given: Everyone has an openid uri, and associates it with every social web app they use.
Given: I am able to get comments and entity feeds of everything that’s basically “posted” online.
Given: I publish a list of all of my friends by their openid (read: foaf).

Use case:
Marc posts a new photo on Flickr. I log into flickr with my openid, and post a comment on this new pic. My personal aggregator crawls Flickr, getting all of the new pics that all of my friends have posted, and all of the comments that they’ve made recently. Given these two sets of data, you can then go back to flickr to find additional supplemental data surrounding the original data. Given that rss/atom feeds are standard, and that additional xml namespaces are standards as well, it should be reasonable, given the “type” of feed, to present all of this data in an attractive way, analogous to Friendfeed. You have, then, all of the comments that your friends have made on flickr recently (in context with the actual picture), and vice versa for your friends’ photos (and subsequently those photos’ comments). This extends to anything you post, because everything is an “entry”. The only thing that’s left is how you’d like to present your data. Digg has already started a de facto standard for how to tell a third party what the thumbnail should be for an arbitrary web page, and I think rss and atom have had support for this for quite some time.
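As a sketch of what the personal aggregator boils down to, assuming plain RSS 2.0 feeds: pull every entry out of each feed and merge them into one reverse-chronological stream. The feed documents here would be fetched over http in a real version; everything else is stdlib.

```python
# Minimal sketch of the personal aggregator: parse a few RSS 2.0
# documents and merge every <item> into one newest-first stream.
import xml.etree.ElementTree as ET
from email.utils import parsedate_to_datetime

def entries(rss_xml):
    """Yield (date, title, link) for each <item> in an RSS 2.0 document."""
    for item in ET.fromstring(rss_xml).iter("item"):
        yield (parsedate_to_datetime(item.findtext("pubDate")),
               item.findtext("title"),
               item.findtext("link"))

def aggregate(feed_docs):
    """Merge several feeds' items into one stream, newest entry first."""
    merged = [entry for doc in feed_docs for entry in entries(doc)]
    return sorted(merged, key=lambda e: e[0], reverse=True)
```

Presenting each entry according to the “type” of its feed (photo, bookmark, tweet) is then a rendering problem layered on top of this merge.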

If the presentation is left as an exercise to the user, and the original feeds expose enough information for third parties to then, say, post new comments to an entry, then you are able to get away from Friendfeed’s current model of having comments on Friendfeed, instead of where they probably should be, which is the site of the original entry’s content. Why on earth would you want two entirely separate sets of comments about a single photo on flickr? In allowing people to comment on or favorite Flickr entries on Friendfeed, Friendfeed itself becomes a walled garden. What do I use to aggregate what amounts to original content on Friendfeed?

Anyway, I guess the actionstream project does this to some degree, and they are thinking a lot about it, and they say that 2009 will be a great year, and while I hope they’re right, I won’t be holding my breath.

Posted in Technology | 1 Comment


Posted by Thomas Fri, 16 Jan 2009 23:59:54 +0000

I had an itch to play with sip, so I built a new xen host and installed asterisk. I think to myself, self, asterisk has been around for quite some time, and while everyone says it’s really hard to configure, surely by now there are some very straightforward guides that should get me up and running, making my first call within minutes. Now, 1, don’t call me Shirley, and, 2, apparently not. A quickstart guide should be something like:

* apt-get install asterisk asterisk-sounds-extra
* configure a barebones config
* apt-get install linphone (or your sip client of choice) on your client
* make a call (when I call, who am I, who am I calling, what’s the format of the @ syntax, which server settings do I use?)

Since I did eventually figure it out, the answers to the above questions would be:
* apt-get install asterisk asterisk-sounds-extra (same as above — don’t know if you need the -sounds-extra package, but it probably won’t hurt)
* configure a barebones config (Debian’s default config is sufficient)
* apt-get install linphone (same as above)
* make a proof of concept call to sip:1000@asterisk (1000@ is the demo, which will talk to you and walk you through a demo; after the @ sign, you use the ip or dns name of your server — mine’s in dns as ‘asterisk’)
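For completeness, the sip.conf side of a barebones config might look something like the sketch below if you want a named client rather than anonymous calling. The peer name and secret are placeholders, not anything Debian ships:

```ini
; /etc/asterisk/sip.conf -- minimal local-only sketch.
; 'linphone' and the secret are placeholders, not Debian defaults.
[general]
context=default        ; incoming calls land in [default] in extensions.conf

[linphone]
type=friend            ; this peer can both place and receive calls
secret=changeme
host=dynamic           ; the client registers from wherever it is
context=default
```

With something like that in place, point your sip client at the server with that user/secret and dial sip:1000@asterisk as above.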

Posted in Technology | Comments Off


Posted by Thomas Wed, 17 Dec 2008 10:48:56 +0000

Dear Internets,

Please someone write ztop, an ncurses interface like iftop, but for zfs.


  1. show loadavg type averages (2s, 10s, and 40s) of read and write bandwidth
  2. show loadavg type averages (2s, 10s, and 40s) of read and write iops
  3. show cumulative read, write, and sum data transferred since program open
  4. show peak read, write, and sum bandwidth since program open
  5. show peak read, write, and sum iops since program open
  6. use explicit units
  7. extra points if you break it down by the process doing the reading and writing, and to where
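For items 1 and 2, the loadavg-style smoothing is just an exponential moving average. A sketch in Python, where the 2s/10s/40s periods come from the list above and the 1-second sample interval is my own assumption:

```python
# Sketch of the loadavg-style 2s/10s/40s averages: an exponential
# moving average in which history decays with the chosen period.
import math

class LoadAvg:
    def __init__(self, period, interval=1.0):
        # Fraction of the old average kept each sample.
        self.decay = math.exp(-interval / period)
        self.avg = 0.0

    def sample(self, value):
        self.avg = self.avg * self.decay + value * (1.0 - self.decay)
        return self.avg

# One tracker per window, fed e.g. once a second with read bandwidth
# numbers scraped from something like `zpool iostat 1`.
read_bw = {period: LoadAvg(period) for period in (2.0, 10.0, 40.0)}
```

ztop would keep a set of these per metric (read/write, bandwidth/iops), alongside simple running totals and maxima for items 3 through 5.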


Posted in Technology | Comments Off

Taco Bell

Posted by Thomas Sat, 13 Dec 2008 20:15:38 +0000

I was dumb and, when I finally went to eat lunch at like 4pm, forgot to take my wallet! Good thing I keep all that change in the console. The lady didn’t bat an eye when I paid her in exact change: $5 worth of quarters.

I’ve spent most of my day packaging up various l7-filter debs. I’ve been meaning to play with qos for forever, and was finally in the mood to fight with it today.

Oh, and distcc has to be the worst packaged thing ever. It absolutely requires you to specify acls (the --allow flag) restricting access, but only allows you to listen on one ip. How’s that supposed to work? Apparently I have to listen on all interfaces, or else I restrict myself to only one subnet. Say my hostname has an AAAA record: distcc pukes. I can’t add my /64 to --allow, so I have to listen on my internal ip. That’s all well and good, but then what if I want to connect to localhost? Then I’m sol… It makes no sense.

Posted in Technology, Tweets | Comments Off

Link Post!

Posted by Thomas Mon, 08 Dec 2008 23:01:41 +0000

I’m not one to usually do link posts, but here you go:

Apture sort of reminds me of pop-up-news, but without the automagic, pithy commentary.
Sweetcron is perhaps a step in the right direction. The next cool thing might be a foaf app that organizes your friends. You’ll need some decentralized way to request that someone be your friend.

I absolutely cannot deny that there might be something in this. But I call bs on the whole semantic web thing. I can imagine people finding information for a small subset of questions, and that even within that subset twitter should be able to find a revenue model. I doubt, though, that this approach would work in the larger search context. And, I still don’t get twitter (or ff for that matter). :( How has no one made the new hotness that competes with these?

And while I’m semi-ranting, why are there no standards for blog themes? Have I mentioned this before? Anyway, wouldn’t it be so nice if there were another xml feed that you could just xsl into a theme? Then any theme created for any blogging software might be able to skin your favorite blogging software…

Posted in Technology | Comments Off

Cobalt Qube 2

Posted by Thomas Sun, 03 Aug 2008 12:12:53 +0000

I’ve wanted one for quite some time, and now apparently I own one. On a whim, I decided to bid on one on eBay, fully expecting to be outbid. I put in the max that I was willing to pay (+ shipping). It was at $9.99 at the time, and I put in $15, figuring I wouldn’t get it. The fact that I won I consider to be mostly an accident. I’ll have to procure a non-standard power supply for it and only then will I see if it actually works, as it’s an as-is auction, and untested. It will take some work to get it working, but $25 isn’t too bad, I don’t guess…

Posted in Technology | Comments Off


Posted by Thomas Tue, 22 Jul 2008 23:58:04 +0000

I meant to note that we open sourced Protobufs, but somehow I forgot. They are crazy awesome and you should check them out.

Posted in Technology | Comments Off

Nexenta 2.0

Posted by Thomas Tue, 24 Jun 2008 00:02:52 +0000

is reinstalling his filer yet again. This is like the second time it’s locked up and then refused to boot back up after I’ve kicked it. Does not bode well for long term Nexenta stability. Granted, I have like zero low level Solaris troubleshooting skills, but I can’t recall the last time I had a linux system totally buy the farm. I’ve managed to get myself into some sticky situations where my linux skills have saved me, but this is ridiculous. Let’s just say I haven’t been impressed with Solaris. Honestly I have no idea why everyone buys their expensive stuff.

Posted in Technology, Tweets | Comments Off

5TB goodness

Posted by Thomas Sat, 21 Jun 2008 13:45:37 +0000

# before
root@filer0:~# zpool list tank
tank     1.82T  1.56T   266G    85%  ONLINE  -

root@filer0:~# zpool add tank raidz c2t5d0 c3t2d0 c3t6d0 c2t2d0 c2t6d0

# after
root@filer0:~# zpool list tank
tank     6.37T  1.56T  4.81T    24%  ONLINE  -

Posted in Technology | Comments Off

How are twitter and friendfeed not over glorified rss aggregators

Posted by Thomas Sat, 07 Jun 2008 02:24:06 +0000

How is twitter not just an rss aggregator that limits the blog posts to 140 characters? So maybe you can post from a text message via your phone. That’s novel, but not particularly earth shattering. Surely in 2008 it should be fairly easy to text from your phone to your blog if you so desire. Maybe it’s the social aspect of it? But I already follow my friends’ blogs, and have the ability to comment on their entries (though I may not see all of the comments on all of their entries…). And doing an exceptionally simple search to find my friends solely based on email is neither novel nor nonobvious. The ability for me to log in with openid and for you to use my foaf to find my friends would be much more robust. I guess it also has favoriting and replying to specific posts, but again, replying to a post would be like making a comment on another’s blog, and favoriting I guess I just don’t really ‘get’. There also might be the soft real-time nature of it, whereas an rss aggregator has a pretty relaxed update guarantee. Surely that could be solved by liberal use of trackbacks or a push architecture.

Friendfeed seems equally unoriginal. Honestly it seems more like an aggregator than twitter. Actually, looking at it right now, most of what I’m reading is an aggregation of twitter. Now don’t get me wrong; I love the notion of having the tiniest scrap of update from all of my friends in one place (like facebook’s news feed on steroids). And while I do think that is totally awesome, it is very much a natural and logical extension of what already was on the web. twitter is to blogs as blogs are to … geocities. It’s like we’re repeating ourselves: we couldn’t find enough content to publish in the 90s, so we eventually turned to blogging about the mundane details of our lives, and now we’ve regressed to the point where we’re micro-publishing the inane blatherings direct from within our skulls. I mean, I blog, but I don’t expect it to be terribly interesting. I digress from bashing friendfeed… How are rooms not just mailing lists of people with common interests, or newsgroups for that matter? Again you have the very heavy social aspect to it. And I totally applaud their functionality to make it hella simple for me to tell them about my flickr, netflix, delicious, linkedin, twitter, etc. But honestly, shouldn’t those services already be exporting my feeds (and most probably they already do)? And again we come back to this notion that I should be publishing all of “my” sites (flickr, netflix, delicious, linkedin, twitter, etc) from a single source, where you can then crawl them and do what you like with the feeds. Granted, you can’t comment on certain things, say on the last movie I got from Netflix. And maybe that is worth the lock-in, but I’m having a hard time biting.

It all comes back to my desire for a single authoritative profile: no particular vendor lock-in based on any company’s implementation, information based on open standards, and decentralized. I should own all of the data and relationships contained within, instead of some company tos’ing me to death. If I spent all that time building up my profile and finding my friends, why shouldn’t I be allowed to take that with me?


Posted in Technology | Comments Off

Social Networking

Posted by Thomas Sat, 26 Apr 2008 17:10:50 +0000

I am seriously unhappy with Social Networking as implemented today. There are too many sites and not enough information sharing. Each site hoards the information you put into it (countless hours no doubt). I’m pretty sure that most social networking sites would argue that their terms of service explain how they own that data and those relationships. Every new site requires me to re-enter my information and re-find my friends. Sure some applications try to help you find your existing contacts, but who knows what email address my friends used, or what user name they picked?

Why can’t I have a single global profile, where I list all of my identities on all of the sites with social networking (flickr, facebook, linkedin, etc), and have links on this global profile to all of my friends’ global profiles? Get all of the social networking sites to use this global format, and voila. Whenever I sign up for a new fad social networking site, I tell it where my global profile is, and then poof, it knows exactly who all of my friends are, and it can easily find all of the ones that have signed up for their new hotness. No work for me! Yay!

I guess this is what XFN and FOAF are for, but it’s a long way off…
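For the curious, a global profile along these lines in FOAF might look roughly like the following. The names, URIs, and account details are invented for illustration:

```xml
<!-- Hypothetical global profile: who I am, one account I hold, and a
     pointer to a friend's own FOAF profile. All URIs are made up. -->
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#"
         xmlns:foaf="http://xmlns.com/foaf/0.1/">
  <foaf:Person rdf:about="http://example.com/thomas#me">
    <foaf:name>Thomas</foaf:name>
    <foaf:holdsAccount>
      <foaf:OnlineAccount>
        <foaf:accountServiceHomepage rdf:resource="http://www.flickr.com/"/>
        <foaf:accountName>thomas-example</foaf:accountName>
      </foaf:OnlineAccount>
    </foaf:holdsAccount>
    <foaf:knows>
      <foaf:Person rdf:about="http://example.com/friend#me">
        <rdfs:seeAlso rdf:resource="http://example.com/friend/foaf.rdf"/>
      </foaf:Person>
    </foaf:knows>
  </foaf:Person>
</rdf:RDF>
```

A new site could then crawl the rdfs:seeAlso links to discover which of my friends are already members.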

Posted in Technology | 2 Comments

TheLucid WP Theme Release

Posted by Thomas Sat, 12 Apr 2008 11:34:32 +0000

So this is my first post (likely the first of many) about the release of my port of the TheLucid Typo theme to WordPress. Please note that this is what I recall being the original release of the TheLucid theme from the SonicIQ guys, and not their 1.1 release of the theme. There were a couple of comments on my original post requesting it. I have no excuses for not posting this sooner, and I really, really, really, really feel badly that it’s taken me this long to get around to it. Half of me was hoping to update it to the newer upstream version, but that never happened. And I was hoping to maybe add gravatar support, and that never happened. And I couldn’t really remember exactly what you’d need to get it installed, and so weeks and months went by. :(

Anyway, I think all that you need to do is:

* untar lucid into your themes directory
* untar my patched version of addicted to live search into your plugins directory
* for non-2.5 WP installs, install the Gravatar plugin
* enable them all

In addition (as was pointed out in the comments), you’ll have to edit “javascript/lucid.js” to point it to the right location. I’ve tried to add some versioning into the names, as inevitably there will be bugs that need to be addressed. And, as a disclaimer, this is only for WordPress 2.2, so I make no guarantees about it working with later WordPress versions (or even working at all :) ).

Please let me know if you guys have any issues with it.

Update: I edited this post some, updating the link to the current version, mentioning the Gravatar plugin, and that you’ll have to edit lucid.js to get search to work properly for you.

Posted in General, Technology | 8 Comments

Google App Engine

Posted by Thomas Tue, 08 Apr 2008 19:34:22 +0000

I totally know this guy.

Posted in Technology | Comments Off

Google Movies Inline IMDB ratings

Posted by Thomas Thu, 22 Nov 2007 21:09:45 +0000

I have for quite some time been in want of a Greasemonkey script to display IMDB movie ratings on Google Movies. I finally ran across the Inline IMDB ratings script. I seasoned it to taste, and the result was Google Movies Inline IMDB ratings. I tried to optimize it so that it would just query the IMDB page once per movie, but it could probably still use a little optimization… Perhaps someone else on the intarwebs will find it useful. :)


Posted in Technology | 1 Comment

If I were ever to build a last mile ISP

Posted by Thomas Fri, 28 Sep 2007 01:02:24 +0000

Another post that’s been lingering in the drafts for too long…

If I were ever to build a last mile ISP, this is what I’d do:

  • QOS things like I would QOS my own network, relegating bulk services to the bottom of the heap and giving sensible priorities to everything else. No odd net-neutrality biases, just being fair or unfair equally to everyone, to keep base services snappy so you don’t notice that your neighbor is hogging all the bandwidth. Buy a decent size pipe, but set expectations that everybody will be sharing it and should be good stewards of a communal resource.
  • Allow multicast. I don’t know how this would work, but nobody else is doing it, and I would imagine it would help a bunch. Ideally this would cut down on transit bandwidth, with the added plus of just being really cool.
  • Allow/maybe even facilitate file sharing across the local network. Intra-network bandwidth should be cheap. Let people share all they want on the local lan. Give some guidance so that people know what they’re sharing, how that will affect their bandwidth and their personal computer’s performance, and what liabilities they might incur by sharing.
  • Pursue aggressively cached content. Start with http, but cache as much as possible, even videos or whatever. Maybe even try to work with the big content providers to see what we can work out to be mutually beneficial. But don’t sacrifice latency for bandwidth savings: always make sure that the cached content is as snappy as (if not snappier than) the original.
  • Give little to no professional technical support. Other cheap ISPs have done this. I would too.
  • Use a good system to mitigate virus traffic and segment offenders from the network. I think that there are a couple of implementations in the wild doing this. Re-checks must be quick and efficient, so as to not punish people unduly.
  • Don’t provide any email or web hosting. Less complexity, less to support, and less to break. Point them to nice email hosting like Gmail or Google Apps.

Posted in Technology | Comments Off

Net Neutrality Part II

Posted by Thomas Wed, 26 Sep 2007 00:54:32 +0000

This post probably isn’t quite baked all the way, but you get the drift and I get it out of my draft blog posts…

I wrote Part I quite a long time ago, but recently the Justice Department released an opinion coming out in favor of allowing market forces to determine whether or not an ISP can offer non-net-neutral tiered services. And the I’d-rather-see-less-legislation-than-more-legislation side of me can see their argument. But I’m not really sure that free market forces would truly be able to outweigh the telcos’ greed and desire to get in on a piece of the proverbial pie. We’re talking about a service that every day comes closer to being less like a luxury and more like a necessity (almost as much as electricity, water, etc.).

So, they cite the Post Office, charging differently for different types and sizes of parcels, expediency requirements, and safety requirements. That’s fine. That’s the market at work responding to people’s willingness to pay more for more services, while a perfectly reasonable form of sending a package will always exist at an acceptably low cost. But this does not directly correspond to what I think has been proposed. What has been proposed is that someone would pay not based on the size or type of parcel, but on what the parcel contains. I suppose the Postal Service analogy does make its way through, in that the sender (the website you’re trying to reach) would have to cough up the extra change. But there the parallel breaks down, because you don’t really have any control over whether the website you want to go to has paid extortion money to your ISP to actually be allowed onto their network.

So, in this parcel analogy, let’s say there is no option but FedEx Express Shipping. Shipping is cheap, everybody’s parcel is equal, and things get around the country pretty quickly. Let’s say you just bought a book from Amazon. They ship it, it goes into shipping first come, first served, and you get your book. In this new world, though, Amazon has to pay extra just for your package to eventually arrive at your house. I suppose they could even do tiers: you have to pay us to even allow your package through FedEx shipping, pay more to get the old level of service, and pay even more for Extra-Express Shipping, which will bump you to the front of the line in the FedEx shipping world. Now that may be fine if I can choose exactly how quickly I want my book to get here, but it’s not ok if the shipper extorts money from the big shippers like Amazon and plays favorites with some other book seller.

Shouldn’t market forces drive the cost down and the quality of service (no pun intended) up? Doesn’t Amazon sign a deal with FedEx that’s mutually beneficial to both of them, and to the consumer?

I think there’s also a lot to be said about how people don’t really have a choice when buying high speed internet. I ran across a comparison of provider choice in the US versus the UK and it was mind boggling. They had something like 50 providers to choose from where we have 2. Doubtful that the market can work itself out with odds like that.

Posted in Technology | Comments Off

Net Neutrality Part I

Posted by Thomas Thu, 06 Sep 2007 18:21:05 +0000

There has been a whole lot of buzz around net neutrality, so I’m going to take a crack at it from my perspective. There are a lot of people out there commenting on this, such as here and here and even at Ask A Ninja. I’m in the middle of reading some of the commentary over here at the moment: a 65 page position paper that I’m not sure quite gets the gist of the real network neutrality debate. Which is especially hard to do now, because none of the telcos have actually disturbed network neutrality yet. So, that means that everyone is commenting on pure speculation as to what the telcos might do in the future. And, unless you have some inside information as to what that might really turn out to be, you’re sort of tilting at windmills. From what I can gather, the telcos would extort money from website owners, penalizing those who didn’t pay up by making their websites slower or even unreachable. And in today’s “Web 2.0″ atmosphere, latency is king, which is why I can imagine many websites would pony up to gain yet another advantage. Some say that it would be akin to freeways:

If you think of this in terms of freeways, what if the rich people were allowed to go faster than the poor people simply because they paid more taxes?[1]

Which I don’t think is quite right, because I don’t think that the telcos would make us pay directly; rather, I think they would find more money in getting websites to pay. This document (that I’m not done reading yet) speaks to congestion economics, which I can only imagine is really talking about users of streaming video, peer to peer (P2P), and BitTorrent traffic squeezing out other users. Which I also don’t think is quite right. I would imagine that most people would be happy to have those general file transfer protocols QOS’ed heavily to make room for latency sensitive traffic such as http, voip, ssh, streaming video, etc. (Streaming video is both high bandwidth and latency sensitive…) I think he misses the point and the likely way that this will be turned against internet-goers. As always, companies will pass the buck. If websites have to pay extra to get better latency to their customers, ultimately the cost will simply be passed on to those customers as higher priced goods and services. So the telcos might squeeze the website owners in the beginning, but we’ll get squeezed in the end. So not only do I think that he missed the way that net neutrality will be used against us, I imagine that he also missed many of the technical aspects of how hard it really would be to reduce peer to peer traffic. Not only will people turn to obfuscation, encryption, and anonymization, but QOS’ing bittorrent traffic might actually have the exact opposite effect, being more detrimental to an ISP’s bandwidth[2]. Now whether or not this is true needs more study, but it is interesting nonetheless. Also, if telcos were to implement what I have outlined here, I am curious whether they then become liable for the content being transmitted over their network. Up until now, I don’t think that the telcos are in any way liable or responsible for anything illegal done via their phones or via their backbones. I think they’ve been immune to such lawsuits, but I’m curious, if they start filtering at the application level, whether they will then be sued so that they have to filter for illegal music or movie downloads, child pornography, spam, viruses, etc.

Posted in Technology | 1 Comment

Re-encoding video

Posted by Thomas Sun, 24 Jun 2007 19:48:18 +0000

For my own future reference when trying to re-encode video (to a dumber mpeg4 implementation for my linkplayer2) with multiple audio tracks all at the same time:

ffmpeg -i 1x01\ Space\ Pilot\ 3000.avi -f avi -vcodec divx -b 1133k -acodec mp3 -ab 128 output.avi -acodec mp3 -ab 80 -newaudio

Be sure to verify video playability, quality and audio sync.


Posted in Technology | Comments Off

Makes mouths happy :)

Posted by Thomas Sun, 10 Jun 2007 00:56:21 +0000

I have been supremely pissed at my home network. After having to get it back up and running twice now since I’ve gotten back (due to power events), I’ve decided two things. One, that my current ups hasn’t been pulling its weight around here and could use a little more exercise. And, two, that I need to put my critical infrastructure onto ups. Those fit together nicely, don’t they? My ups is now taking on more load and getting more of a workout. I think I’ve only had one day where the power was out long enough to be on battery for longer than a couple of seconds, so my reduced run time really shouldn’t be an issue, even if it does worry me some. I didn’t put my filer on it, but making sure my box, the router, the switch, and another core infrastructure box weather the storm well should let me sleep a little better at night.

All of this frustration has caused me to kick it into high gear and fix many things that have lingered half-broken, the laundry list of which I’m not going to go into. Let’s just say that I think NFS over tcp is a godsend, and that you should always disable fam for Samba on Solaris. Makes mouths happy. :) Getting those two things working smoothly really put my mind at ease. I was really worrying there for a little while that I’d invested all of this time and money and in the end it wasn’t going to perform well at all. Hopefully, now, though, it will chug along nicely.

Posted in Technology | Comments Off

I really don’t know what to title this one

Posted by Thomas Mon, 26 Feb 2007 00:08:53 +0000

The weather has been nice enough the past two days that I’ve gone riding in the afternoon. Yesterday, some dude in a car as it was passing yelled out to me, “Hey, Lance, get off the road!”. Odd.

I have been worrying somewhat about the root partition of the new server I’ve been working on for Wesley. It started out as ~250MB, which is fine, but if you have a couple of kernels installed, /lib/modules starts eating up the space quickly. So today, I set out to alleviate the space problem. All of the partitions, except for /boot, are on LVM, so it was simply a matter of resizing. The partitions are all ext3 for safety reasons. Since I tried once to shrink /home and failed, I just removed that logical volume, increased the root partition, and resized the filesystem. It was easier than I ever imagined: like three commands and it was done. I could simply remove /home because there was almost no data on it, so I just backed it up and restored it. The hardest part of the whole process was waiting on the ext3 format of a 218GB drive (it’s rather slow, and the format itself uses up 188MB, but that’s for another post/rant). Chalk one up for a good decision to endure the extra overhead of LVM.

In other technological news, I installed Sun’s java instead of the crappy gcj on the box that runs my Azureus now. It had been flaky as of late, going for a while and then dying. I didn’t even realize it was using the gnu java, so I installed Sun’s, and hopefully it’ll be happier now.

I was going to rant about OpenID and how I don’t get it and how it’s the latest meme and the fashionable fad. And I was going to cite this guy’s post. But then he reneged on his stance and wrote this. There remains something that I don’t like about it, but I can’t quite put my finger on it.

I also finally watched “The Da Vinci Code” this afternoon. I had put it off forever, but after I heard a roundabout endorsement as a good movie, I decided to watch it. As a whole, I’d say that it was better than I expected. Even given the risque content, I can’t deny that the story really was decent.

Posted in General, Technology | Comments Off

It has begun

Posted by Thomas Mon, 22 Jan 2007 02:12:00 +0000

I’ve felt of late that over the weekend I tend to post several short, disjointed posts, so I figured I’d save up and just post once this time.

Hillary and Barack and who knows who else have formed presidential exploratory committees.

“You pick the smartest, most capable, most honorable individual you can think of…”
— Leo McGarry

I think that sentiment will be driving my decision. A person who is honorable, trustworthy, dare I say patriotic, who would adhere to a stricter interpretation of the Constitution. Actually, I don’t think I can use patriot as a criterion. It has been twisted. I don’t mean it in its current connotation, but the connotation from the Colonial era. A statesman, a patriot, a federalist, a constitutionalist.

I finally just put 2 and 2 together. For the past day or so, I’ve noticed a severe slowness in the responsiveness of one of my shells in a screen session. I had also noticed in a “ps axf” that there was an ssh session open to wesley. Neither of these things were adding up. I just realized that most likely I had ssh’ed to wesley, then back again to argento. The reason for the slowness wasn’t due to high load or low memory, but simply network lag and overhead of going to Texas and back again. Oops…

I have spent a good deal of this weekend again working on the home network. I installed a new Xen image for my database machine, figured out that Samba can’t do straight Kerberos authentication (only with real AD :(), packaged Resin for Debian for real this time (yay! finally!), watched a bunch of Scrubs, watched a bunch of movies over again, got Azureus working headless on my new shell server (compute0), did some laundry, stayed up too late, got up too late, found out my internet connection can push 15Mbps+, setup cricket for snmp monitoring of all of my new machines, hmmm, that’s all I can think of right now…

I haven’t fixed the car door and it’s been too cold to ride or finish the table.

I’m out of photos now. Must take more.

Posted in General, Politics, Technology | Comments Off

Late Night Hackery

Posted by Thomas Mon, 15 Jan 2007 14:33:18 +0000

I really love those late night/early morning hacking sessions when things are finally clicking. It’s a good feeling when you look at the clock and can’t believe that it’s 2:30 in the am and you don’t know where the last two hours have gone. And I love it when you get done what you set out to accomplish before the sun comes up. Don’t get me wrong, I’ve had my fair share of all-nighters. Getting the Crunchtime video out the door comes distinctly to mind.

This weekend, I cut over my network from the mundane, basic WRT54G doing everything, to having a shiny new dual wan, internal dns, dhcp, qos, Debian based router. It’s my old dual Celeron with two of the dual port e100 cards. The primary reason for this was internal dns, so that I can get Kerberos running. But that’s somewhat on hold, as I’m not 100% sure that moving everything to kerberized nfs would be the simplest thing, maintenance wise for my clients. I guess that CIFS has authentication built in, so maybe it’s not too big of a deal. I just don’t know right now how hard it will be to get kerberized nfs clients. The main goal last night was to get WPA2 Enterprise working on the WRT54G, authenticating to a FreeRadius server, authenticating to my LDAP server. And it all works now (802.1x is so cool)! I should have documented it better, but I was more concerned with getting the concepts down and getting it working. I still have the wiki up and running with nothing in it. I really should be posting my notes up there… Also during this process, I figured out how to get wpa2 working on Debian, as I’d never figured it out before, or really ever had any need to… I will say this, though, that NetworkManager really is quite slick.
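For reference, the client side of WPA2 Enterprise on Debian boils down to a wpa_supplicant stanza along these lines. This is a sketch assuming PEAP with MSCHAPv2 as the inner auth against FreeRADIUS (other EAP methods are possible); the ssid, identity, and password are placeholders:

```
network={
    ssid="home-wlan"           # placeholder SSID
    key_mgmt=WPA-EAP           # WPA2 Enterprise (802.1x)
    eap=PEAP
    identity="thomas"          # RADIUS username -- placeholder
    password="secret"          # placeholder
    phase2="auth=MSCHAPV2"     # inner auth handled by FreeRADIUS
}
```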

Posted in Technology | Comments Off

New Coffee Table

Posted by Thomas Wed, 10 Jan 2007 00:02:44 +0000

I am now the new, proud owner of my very own, real coffee table. It was finally dropped off today and I put it together when I got home. I think I’ve already got a little grease/vegetable oil on it, so I can’t eat on it again until it’s finished. :(

Steve Jobs had his keynote today at MacWorld. I have two questions about AppleTV. One, why doesn’t it have a firewire port so it can stream TV from a cable box, and two, does it support UPnP? They also announced the iPhone. It looks cool, but I’m not one much for that kind of stuff. I still have really old, plain cell phones, and I still don’t have much need for an iPod…

I wore my Greece shirt to work today. Only one guy noticed and mentioned something to me about it. That tells you something about the guys I work with. Or maybe they’re all afraid of me. Or maybe they noticed and didn’t say anything.

Posted in General, Technology | 1 Comment

Monday Update

Posted by Thomas Tue, 09 Jan 2007 01:31:54 +0000

I’ve finally gone through my photos from the past several months and the photoblog will be on autopilot for the next (almost) 2 weeks, just as Marc’s is at the moment. Hopefully I’ll get around to posting the raw pics into original soon.

I heard a song from Kutless on Scrubs during my binge that caught my attention. I downloaded some songs and they seem like a cool band.

I bought 4 2-port 10/100 e100s on ebay (a buy it now) for $30. I probably would have spent $20 on one single port from Best Buy or Fry’s. I need them for a router, so I can finish building the home network (I need real dns before I can setup kerberos…). Also on the tech front, I realized that even though my webhost symlinks uptime to /bin/true, I can still read /proc/loadavg (and top will show it as well). But the bad part about this is that I’ve seen the load be 30. Now I understand why it’s unbearably sluggish sometimes. :( This is almost unacceptable and I’m half inclined to setup a cron job to monitor it to see what it’s like over time…
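That cron job is only a couple of lines. A minimal sketch (the script and log paths are arbitrary):

```shell
#!/bin/sh
# Append a timestamped 1-minute load average to a log.
# Run it from cron, e.g.:  */5 * * * * $HOME/bin/loadlog.sh
LOAD=$(awk '{print $1}' /proc/loadavg)
echo "$(date '+%Y-%m-%d %H:%M') $LOAD" >> "$HOME/loadavg.log"
```

A few days of that and you can see whether the load spikes line up with the sluggish stretches.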

I’ve also solved a problem with my Nexenta system that’s been driving me up the wall. I had gotten Nexenta to work with OpenLDAP, but it wasn’t seeing the groups. I finally posted to sparks-discuss and they solved it for me pretty quickly. It was a known bug, but they had a decent work around, so I was pretty happy with the result.

Posted in General, Technology | Comments Off

Xen and AFS

Posted by Thomas Thu, 28 Dec 2006 00:37:00 +0000

I really hate it when you dig down into some cool possibility, only to realize it isn’t possible. I’ve been playing a lot with Xen as of late. It’s really nice to be able to have another machine in a virtual sandbox. A while back I was talking with some guys from work about the possibility of using Xen in conjunction with AFS to create a very highly available compute cluster. Xen has the ability to migrate entire virtual machines from one physical machine’s memory to another physical machine’s memory, while the virtual machine continues to run and process stuff. It does not account for the “disk” associated with the virtual machine, which is where AFS comes in. AFS has the concept of cells, and the ability to move data on the server transparently to the clients. So it would seem that you could fuse these two technologies to create virtual machines that would migrate around on various physical machines all transparently and automatically. But, here’s the snag. You can migrate Xen instances on the fly, but I don’t think there is currently any way to automatically fail over a virtual instance if a physical machine dies. And the same with AFS. You can migrate a R/W volume, and you can automatically fail over a R/O replica, but you cannot fail over a R/W volume. So, basically you can avoid downtime through scheduled maintenance, but can’t gain high availability through these technologies currently… :(
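The scheduled-maintenance half of this really is a one-liner. A sketch, with made-up domain and host names; both dom0s need xend’s relocation server enabled and shared access to the domU’s disk:

```shell
# Live-migrate the running domU "web1" to another dom0.
# Assumes (xend-relocation-server yes) in xend-config.sxp on the target,
# and that both hosts can see the same backing storage.
xm migrate --live web1 otherhost.example.com
```

What’s missing is exactly the automatic version of this when a host dies, which is the snag above.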

Posted in Technology | Comments Off

Tech Support

Posted by Thomas Thu, 28 Dec 2006 00:36:27 +0000

I have more than a few artsy friends in the graphics/photography/video/animation field. Several have been in the real world for a few years, while others are just starting. It really makes me wish that we could all start a little consulting business. There are at the very least Amy, Tycen, Adan, Marc, and Nancy. They could all do the artsy thing, and I could do all the tech support/IT for them…

Posted in General, Technology | Comments Off


Posted by Thomas Sun, 03 Dec 2006 02:28:28 +0000

I really could kill the developers over at Pixelpost. Hulk gets angry when he reads the forums. Hulk gets angry when he looks at the code. Hulk gets angry when things that should be in the upstream codebase aren’t. Hulk gets angry when he spends most of the day patching the new code (not that he minds that much, and not that it’s the worst way to spend a Saturday, but I really shouldn’t have to). Hulk gets angry when he gets the impression that the developers just want everything to be made into an “addon” when “addons” won’t really get you very far for some feature requests. Things that Pixelpost really needs to focus on and put in their roadmap/milestones (if they have any, which I doubt):

exif data in rss/atom feeds
thumb/full/no image in rss/atom feed
consistency in rss/atom feed content
rss/atom comment feeds
individual photo comment feeds
code formatting/styling/standardization
code reuse/breaking into functions
out of the box anti-spam and captcha
header/footer in templates
prettier urls: simple: /photo/1 and more complex slugs: /2006/12/01/kids
database independence
remove “no intrusion”, 404 crap
standardize checking of possible x= parameters

And probably many more. It’s really not that bad of a product. And of course I use it, and I used it for Marc’s and Ben’s photoblogs, because when it comes down to it, it does its job, and it does it in a pretty short amount of code (because then it’s easy for me to hack on), but it really could use some polishing. I could probably easily get a dozen patches in to the development team, but I hesitate because if I did a whole lot of work to break all of my code updates into patches and then submit them to the developers, only to have them thumb their noses at me, well, I don’t want that to happen…


Posted in Technology | Comments Off

Python/dynamically typed languages redux

Posted by Thomas Tue, 17 Oct 2006 01:41:47 +0000

I still don’t understand Python and its zealots. I find myself all too often in little altercations over the choice of languages and platforms with guys at work, specifically with my great disdain for Python.

I think that if you are an advocate of a dynamically typed language, then you are lazy. You don’t need to be programming if you can’t plan ahead for what variable types you are going to need. If you have problems thinking that far ahead, then you really do have problems. I think it’s the lazy man’s way out. You can’t be encumbered by declaring the type of your variables. It hinders your process and bogs you down. You can’t be saddled with such things. If you can’t figure it out, then you shouldn’t be programming. I add a type to a variable without thinking, just like I add a semi-colon to the end of every line. Just like I add a period at the end of every sentence.

I cut my adult programming teeth on C and Java, which are both strongly typed, and probably have permanently influenced the way that I approach solving problems in code and the length of code I consider acceptable for even the simplest of programs. I don’t even see the “boilerplate” any more and I don’t see people’s problems. Importing classes in java is a necessity to me, and I know and understand that, yet when I see imports in python, I always perceive them as the author’s attempt to be cooler than he really is. I don’t know why that is… But what I try to make myself realize (and may or may not really reach a decent level of zen in) is that neither I nor you, Mr. Zealot, have the right answer. If there really was one best language then we would all be using it, and no one would be writing new ones, as I am sure someone came up with one just today. Don’t be a zealot and say that your language is better, and force it down my throat. Because it is not the one true language for all of the world and for every application.

I try not to be a zealot for Java, but it’s especially hard for me when other zealots rear their heads and plead their case. I try my best not to force my opinions down other people’s throats, and I appreciate it when others do the same. Don’t waste your breath trying to convince me that your way is better, show me your way is better and I will immediately fall in line. But, if you show me, and you don’t convince me, then you’d better go back to the drawing board and try again.

In my quest for answers, I ran across this.

Let’s face it: your average commercial application isn’t burning CPU cycles solving NP-complete problems. We typically write code that moves chunks of data about and adds up a couple of numbers. In these scenarios, is it worth worrying about the relative performance of the language used to do the moving and adding? Not in my book.

Most of the time the computer waits on you, just like in Mother Russia, and not being stupid and choosing a decent algorithm is key. And I, too, do not care what language it was written in as long as it does its job correctly and in a timely manner. I, in fact, like many other things, do not care for a very, very long time, but then care immensely about how well it will do its job. But then he quotes this:

Justin Ghetland experienced this recently on a Rails project. Having coded the same application twice, once in Java and once using Ruby on Rails, he was surprised to discover that the Rails application outperformed the Java one. Why? Justin believes it’s because Rails does smarter caching.

He compares Java to Ruby on Rails. How can you do that? How can you compare a language to a platform? Of course Java will lose if you’re running it against some other platform that caches the result. Are you stupid? A 2 year old could tell you that. I digress as he does into why Ruby is cool because you don’t have to write sql or some such blather.

Python was derived from ABC and I have yet to hear the true reason why he chose as he did. Are the perceived benefits of implicit declaration, statement nesting by indentation, and smaller size === more readable in fact true? Is the benefit perceivable or even quantifiable? Isn’t readability in the eye of the beholder, or more precisely in the eye of the maintainer?

I have not been one for trying to fix someone else’s code in quite some time. Ever since that first time or two, I realized that the probability of me finding your error in your non-trivial code was very slim. And in the several minutes that I would be trying to orient myself with the code, the author would figure it out.

Plus all of my other gripes:

the language should not enforce style guidelines
the correctness of a program should not be dependent on indentation
how easy it is to comment out a code block and not affect the surrounding code
how easy it is to temporarily copy and paste new code into a block
how easy it is to determine the end of a code block
“it forces correct coding style” — indentation is only one of many factors of proper coding style, which of itself is debatable; would you want to enforce CamelCase or Hungarian notation at the language level?
“I dislike using braces because I have to indicate my intentions twice: once for the compiler and once for humans.” — couldn’t repetition be considered good for readability?

“When you get to the bottom of it, however, I write programs in Lisp for the same reason I write prose in English—not because it’s the best language, but because it’s the language I know best.”

Well, that last statement really is true and the crux of the whole thing. I know C and Java, and Python isn’t like them in more than a few ways. I ignore certain things about my preferred languages, and the other zealots do the same. I really don’t care what language something is written in. Do I care what language Firefox, Gaim, or xterm is written in? Surely not. I only care that they do their job and they do their job well. If they don’t then I find something else. This is how it should be. Survival of the fittest; a capitalist choosing of software.

I guess what irks me the most is when the zealots proclaim that Python is the best, one, and only, and then their apps suck. Don’t come to me proclaiming the wonders of a language, the ease with which this allows one to code, the brevity, the veritable snake oil-wonder language, and then your apps still suck. I shudder to think how much they would suck if they chose a “harder” and more verbose language.

And why does WordPress have a stupid little draggable ui, yet no autosave or type-as-you-go spell check. Get your priorities straight! Features first, eye-candy later. Function before form.

Posted in Technology | Comments Off

Some links for the Xen/ZFS post

Posted by Thomas Sat, 07 Oct 2006 22:42:50 +0000



Posted in Technology | Comments Off

Xen, Solaris and ZFS

Posted by Thomas Wed, 27 Sep 2006 01:21:47 +0000

So, I’ve been wanting to play with ZFS for a really long time. Finally tonight I got my ducks in a row. I should have written some notes on the process, but I now have a functional OpenSolaris server running as a domU in a Debian dom0 in a little Xen cluster on my 2U.

Following (basically) the instructions on the web, I got the xen packages installed on top of my pre-existing Debian Etch install. I bought a WRT54G over the weekend, so that I could free up the 2U for more ambitious endeavors. I uninstalled Shorewall so that it wouldn’t get in the way, and disabled dhcp3-server.

I installed

xen-tools xen-utils-3.0.2-1

I don’t know if I need both of the xen-utils packages… I was using Lilo (yay lilo!), but apparently Xen needs some functionality only available with Grub. So, I tried to install that, but it continually locked up during grub-install (that fix deserves its own post…). I got that fixed, and it booted fine. After configuring it for serial access, I was finally good to go. The next hurdle involved figuring out that not one of those xen packages added the necessary bridge interface to /etc/network/interfaces.

auto xenbr0
iface xenbr0 inet static
bridge_ports eth0
bridge_maxwait 0

Everybody seems to use a different name for that interface. There was at least xen-br0, but the Debian scripts that I installed are looking for xenbr0. You also have to manually bring up the bridge. Note that this will clobber the settings of eth0. So, if you were ssh’ed into the box, the ssh session will die, and, if the bridge came up properly, you can now ssh back into the box. And if eth0’s ip was dynamically assigned, it will now be static. That took me a bit to figure out. So, once that’s done, you should be able to edit /etc/xen/xend-config.sxp, and finally do a xen-create-image and xm create that won’t error out.
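Bringing the bridge up by hand (and checking that eth0 actually got enslaved) looks something like this, assuming the interfaces stanza above and the bridge-utils package:

```shell
ifup xenbr0         # bring the bridge up from /etc/network/interfaces
brctl show          # verify that eth0 is now a port of xenbr0
ifconfig xenbr0     # confirm the bridge took over the address
```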

So, I created a Debian image to verify that everything was working, and then I downloaded the ready to use Solaris domU image (twice, because apparently I didn’t download it all the first time). I got it running, added some pseudo disks to the xen config, and tinkered with ZFS. I added myself as a user, made a home directory, got the permissions all setup and even was able to mount it remotely via NFS. It’s all far from automated, but it’s getting somewhere. I realized in all of this that I probably could export my fibre channel drives via xen to the Solaris domU. I could then put ZFS on top of them and re-export them to the entire network. It’s convoluted, I know, but it’s the best solution until Linux gets some form of ZFS. The overhead of xen isn’t supposed to be that bad, but it still seems to make for an overly complex, yet working solution. I also had thought that there wouldn’t be any reason not to be able to run apache, ftp, etc. directly on top of OpenSolaris, but from dealing with OpenSolaris the little that I have, I would pull my hair out. So, having just nfs running, and running all of the other value added services from nfs mounts, seems like a possible plan. You get clean snapshotting from ZFS for everybody. I guess the next hurdle is moving everything over to LDAP and Kerberos. Looking forward to it… :)
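The ZFS tinkering itself is only a handful of commands. A sketch, not my exact session: the pool name, the disk device names, and the username are all placeholders:

```shell
# Placeholders throughout: "tank" pool, two pseudo disks, user "thomas".
zpool create tank mirror c0d1 c0d2     # mirrored pool over the xen disks
zfs create tank/home
zfs create tank/home/thomas            # one filesystem per user
zfs set sharenfs=on tank/home          # NFS-export it (children inherit)
chown thomas /tank/home/thomas
```

Snapshots then come for free with `zfs snapshot tank/home/thomas@today`.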

Don’t worry if you didn’t follow any of that. I guarantee you are not alone.

Posted in Technology | 2 Comments

Bash Auto-completion

Posted by Thomas Fri, 15 Sep 2006 19:20:21 +0000

http://kasparov.skife.org/blog/tech/ssh_completion.html has a pretty cool implementation of being lazy for your ~/.bash_completion file:

SSH_COMPLETE=( $(cat ~/.ssh/known_hosts | \
cut -f 1 -d ' ' | \
sed -e s/,.*//g | \
uniq | \
egrep -v [0123456789]) )
complete -o default -W "${SSH_COMPLETE[*]}" ssh

Because I am meddlesome, and since his was excluding some hosts, I seasoned to taste:

SSH_COMPLETE=( $(cat ~/.ssh/known_hosts | \
cut -f 1 -d ' ' | \
sed -e s/,.*//g | \
sort | \
uniq | \
egrep -v "^\|[0123456789]") )
complete -o default -W "${SSH_COMPLETE[*]}" ssh

It’s rather cool, because it will use hosts out of your known_hosts file for input, which is nice because it will practically always be up to date. Be sure to add completion to your shell by editing your .bash_profile or whatever on your distro. Yes, I know that I can do sort -u vs sort | uniq. Old habits die hard. Potato/Potatoe.

Keywords: bash auto completion ssh

Posted in Technology | 1 Comment

On the lighter side of things…

Posted by Thomas Mon, 11 Sep 2006 02:29:55 +0000

I procured a PA-RISC system on Friday. For free! Yeah, baby. Since Debian is awesome and has a port, I’ve been working on it for a good part of the weekend (see documentation here, here, here, and here). It had an old Debian install on it, testing even, but I was not privy to any of the passwords. And, since udev and 2.6 and all, I figured that it would be a good idea to have a nice clean install, instead of a dist-upgraded one. So, instead of doing a simple dist-upgrade on Friday, I’ve been working most of the weekend to get Debian reinstalled on the thing. I wasn’t able to get XFS on it, due to some odd kernel module bug. I posted to the mailing list, but the buggers haven’t answered back. I’m not sure if I would reinstall now, anyway, since it was a pain, and is really slow, and my internet sucks. BTW, it’s a smoking 100MHz RISC chip, an HP 712/100, with 64MB of RAM, a 2G HD, and 10 whole Megabits of network. It’s an HPPA and my first non-x86 architecture, so I’m pretty happy. I don’t know what I’m even going to do with it, but it’s sitting under my feet at the moment… Maybe it’s time to plop down some $$$ for that O2 I’ve always been lusting for…

Posted in General, Technology | 2 Comments

DVD audio commentaries

Posted by Thomas Sat, 04 Feb 2006 21:54:45 +0000

I don’t know about you guys, but I love dvd commentaries. I’m not entirely sure why, but I do. And just about as much as I love audio commentaries, I love linux, transcoding, and my avel linkplayer2. Why you may ask? Because the synergy of those things allows me to be lazy and just watch movies without having to deal with the physical dvds. BUT, it also means that the movies that I watch don’t have the commentaries on them. So, when I do own a great movie, such as Garden State, I figure, hey, I’m a reasonably bright individual, why don’t I re-encode the movie to include the audio commentaries? Well, that’s just what I did, and I’m going to tell you how I did it.

Rip & transcode w/ dvdrip. This gives you a project directory in ~/.dvdrip, the vob files, and a shiny, newly-ripped avi. For me this took about 2.5 hours and made my system virtually unusable. Load avg was over 10 for the duration. :( Time for new ram. 2GB anyone? :)

Rip the audio from the vobs like so (the vobs are in the directory vob/001/):

tccat -i "vob/001/" -t vob -d 0 -S 0 | tcdemux -a 2 -x ac3 -S 0 -M 2 \
 -d 0 | tcextract -t vob -a 2 -x ac3 -d 0 > commentary-1.ac3
tccat -i "vob/001/" -t vob -d 0 -S 0 | tcdemux -a 3 -x ac3 -S 0 -M 2 \
 -d 0 | tcextract -t vob -a 3 -x ac3 -d 0 > commentary-2.ac3

Garden State has 4 ac3 audio tracks: 0 is english, 1 is spanish, 2 is the first commentary, and 3 is the second commentary.

Add the commentaries to the original avi:

avimerge -o tmp.avi -i garden_state_xvid_dvdrip.avi -p commentary-1.ac3
avimerge -o garden_state_xvid_dvdrip_with_commentary.avi -i tmp.avi \
 -p commentary-2.ac3

Voila! Now you have an even shinier and newer avi with 3 total audio tracks and 2 new audio commentaries. Looks like each new audio track cost us roughly 141MB in additional storage space, but, hey, it’s worth it.

Tune in next time for another installment of Thomas’s Cool and Overly Complex Technology, when we’ll discuss the possible advantages of the new h264 codec for improving quality while decreasing filesize.



Posted in Movies, Technology | 2 Comments


Posted by Thomas Fri, 04 Nov 2005 00:18:29 +0000

I broke down and put up a captcha for comments. I keep getting spam from party poker and I WANT TO WRING THEIR NECKS! Get a f*cking real job and a real business model instead of trying to get a higher pagerank with links from my page to yours. WordPress uses nofollow you morons. And if you’re not trying to up your score on the search engines, then stop annoying people and stop trying to get people to stupidly stumble onto your site and give you their money.

Posted in Technology | 2 Comments

Linkplayer2 & NSLU2

Posted by Thomas Wed, 03 Aug 2005 01:16:25 +0000

Since I’ve been looking into buying an Avel Linkplayer2, I have found out that you can get the Linksys NSLU2 to act as a media server for the Linkplayer. That way I could have my external drive connected to it, to serve as a normal fileserver as well as serve up content for the linkplayer, without having to run their software on a linux box or connect the drive directly to the linkplayer. Fun stuff.

The weird thing is that in thinking about this I have a craving to fab essentially the same thing as the NSLU2, but with 1394b and gigE. How nice would that be? You could have raid 5 over 800Mbit firewire, with hotswapability due to CHEAP ata drives in firewire enclosures. ATA drives are freaking dirt cheap nowadays after rebates. You would even gain some reliability if you used different drive vendors as they would be less prone to crash at the same time. Everything would have its own dc power supply. No noisy fans or high current draw.

Maybe I should take up pcb fab as a hobby?

Posted in Technology | 2 Comments

xmms & rhythmbox

Posted by Thomas Sat, 09 Jul 2005 12:48:46 +0000

Since I can’t post comments anonymously to this guy’s blog, I guess I’ll have to settle for just posting to my own and hope that he sees it somehow… Anyway, yeah, I agree that Rhythmbox probably sucks, but if I’ve ever used it, then I never used it again and have no recollection of ever having used it. I use xmms under linux and it’s ok. But my real beef is, WHY ISN’T THERE A WINAMP-CLASS MP3 PLAYER FOR LINUX!?! Nothing I have found even comes close.


Posted in Technology | 1 Comment

Apple and Intel

Posted by Thomas Sat, 04 Jun 2005 14:44:09 +0000

Ok, I can’t resist throwing my hat and my two cents into the ring on this one.

Let me list the big players, so that I don’t forget to mention anyone: Microsoft, Apple, Intel, AMD, and Dell. Oh, and Best Buy.

There has been an extremely large amount of buzz going on about the possibility of Apple ditching IBM for Intel. And it always seems to me that nobody ever can really figure out why the big dogs do what they do. When in reality, it shouldn’t be all that hard to figure out because they are operating by some really simple ground rules. Make more money than the other guy. Make better products than the other guy. Make them cheaper than the other guy. Make your margins bigger than the other guy. Sell more units than the other guy. Pretty simple, right? :)

Apple’s market share for Q4 2004 was about 2.88%. And their Mac mini seems to have helped their iMac/eMac sales numbers for Q1 2005, jumping from 217,000 for Q1 2004 to 467,000 for Q1 2005. Which doesn’t sound too shabby, at least to me, anyway. Jobs has been looking to double their market share for quite some time. But since that article was published in 2002, their share has actually dropped. What they’ve been doing obviously isn’t increasing their market share, and when they have radical departures from the norm, they shine (e.g. the mini and the ipod).

So, if I was Apple, what would I be trying to do? Gain market share in markets that I’m not currently in. Like the cheap, primary desktop market. Which is why I mentioned Dell earlier. More and more people every day are fed up with Windows. As the masses become more technology savvy, they understand better that Microsoft does a crappy job of security and most other things, with no real change in sight. Apple can deliver features right now that Microsoft probably has planned for Longhorn, which won’t be released for several years. I dare say that the time is finally ripe for the masses to move away from the Wintel architecture, and I’d bet that Apple wants in on that. Apple doesn’t want the Lintel platform to be the next big thing. They want the Mactel platform to be the top choice of ma and pa consumer. Apple should be marketing a device that’s stable, the “world’s most advanced operating system”. Finally, an elegant, secure, and stable OS. Run on the latest innovations. It just works. Surf the future — safely. Enjoy an elegant, uncluttered workspace. This is what people want. Or at least that’s what Apple is going to try to get everyone to believe.

When suggesting a new computer for my friends or family, I usually recommend to them a Dell. Why? Because they can get a $399 computer that is far more computer than they should ever need. Why then do they usually wind up buying some white box from Best Buy? Because they don’t want to have to ship the damn thing all the way to Austin to have it worked on. They want to be able to take it back to where they bought it and have it worked on in a timely manner. Now, whether or not Best Buy actually does work on it is something completely different. What works here is the perception that all someone has to do is bring it back to where they bought it, and it will be worked on there, with no shipping involved. I don’t know if this is true or not, since I don’t particularly need Best Buy to work on any of my machines, but the premise is what sells.

So, if Apple is trying to become the new Dell, how would they accomplish that? Well, first they have to compete on the price. $399. The $500 mac mini was a great start, since their normally beastly priced G5’s sell for $3000. I can’t imagine them getting away from the market that they have loved for so long; they are simply trying to sell a larger audience what they want, at the price that they expect (cringely article). Hence the change to Intel. Dell has been wildly successful using Intel in their computers and has flirted with the possibility of adding AMD chips to their line, but that marriage has never come to fruition. So, if Apple has based a lot of their marketing on the fact that their computers are in fact super computers, worthy of export restriction, then why would they make the decision to lower their standards to those of the generally lower performance of Intel? Because Intel has market share. Because Intel has the ability to pump out the chips like AMD apparently can only dream of. Because Intel can undercut AMD on the next generation of processors, the dual core guys. Apple shouldn’t be concerning themselves with the penultimate performance rating. They are concerned with perceived value and cost (Source: ExtremeTech). Which is a battle that Intel is currently winning.

So, then Apple already started selling Mac minis at Best Buy. All they have to do is start selling a cheap, stable, elegant, fast, and secure platform at Best Buy, and the world is their footstool.

Maybe the new alliance between Microsoft and IBM for their new PowerPC chips has chased Apple away from IBM. As Dan Knight says, “IBM can’t produce 3.0 GHz G5 processors for Apple – but for Microsoft they can reach 3.2 GHz? It just doesn’t make sense.” And I agree with him. That doesn’t make sense. Maybe Apple didn’t really care too much about upping the MHz because they weren’t interested in continuing the line in its current form.

Ok, I just ran across this article from Cringely. It says a bunch of what I am saying, but he said it A YEAR AGO! I guess that we differ in that he thought that Apple would only sell their macs in Apple retail locations, but I’m guessing that they’ll let some other companies do that, since I don’t believe that their Apple stores did all that well. Look for Apple to get out of the hardware business, in both ipods and macs and get into selling software and music and movies.

We’ll see how close I am to the mark at the end of the week.

Posted in Technology | 1 Comment


Posted by Thomas Thu, 19 May 2005 00:49:49 +0000

Did you know… that listening to Pat Green makes me crave Taco C?

I don’t think that I can write code without having dual monitors. I guess that I’ve just gotten too used to them. Which is quite unfortunate, because they are a luxury I fear I will only have at home.

CSS is a damned lot of work…

Posted in Technology | 4 Comments

My nerd score

Posted by Thomas Fri, 13 May 2005 01:01:50 +0000

I am nerdier than 81% of all people. Are you nerdier? Click here to find out!

Posted in Technology | 4 Comments

Media PC

Posted by Thomas Sun, 01 May 2005 00:27:58 +0000

To all of the geeks who read this: I gave up on it. Too loud. But the point of this post was to mention that I couldn’t get the ISA sound card to work… I finally realized that I didn’t have ISA support compiled into the kernel… DOH! Even when I recompiled with ISA support this time, it still didn’t work. But by that time, I had pretty much given up… A word of advice: use PCI sound cards instead…

Posted in Technology | Comments Off

Hard drives

Posted by Thomas Sun, 24 Apr 2005 00:09:18 +0000

AAAHHH! Not again. At first I thought that the power outages had fried my firewire enclosure (since every time I plugged it in, it froze my computer, DOH!). So, I simply plugged the drive in directly. And that worked, but I started to see some errors, and checked the speed. It was excruciatingly slow. So, I’m in another mad dash to back up my data. And some of it I’m really going to need here pretty soon. Looks like I’ll be in the market for some more disk space sooner than I had planned. Especially since I just bought that new drive… I really need to look at some long term solutions, but I’m afraid that they aren’t going to be cheap. But I’m tired of this crap! Man, so little money, and so much stuff to buy…

Posted in Technology | 2 Comments

A While

Posted by Thomas Fri, 25 Mar 2005 12:49:28 +0000

It’s been a while since I’ve posted, so here we go.

First, I wanted to say that the reason that I was up so late a while ago (this post) was because I was working on a little pet project. I call it Firewire Debian. It’s a little bootable cd that essentially boots Linux on an Intel machine off of an external firewire or usb hard drive. It’s not so big a deal for usb booting, but more so for booting off of firewire. Apple has had it for a while, as I understand, but not so much for Intel-type products. You can check out my preliminary documentation here. It could use some polishing, and it may be quite wordy, but, hey, all the info is there. If you really wanted to try and replicate what I did, then I believe that you could. I haven’t posted the ISO, but if anybody wants it, then I can upload it and post a link to it.

I worked a little bit in Houston yesterday (Monday) doing some simple tech support stuff for MUD71. The best part of that was probably getting to eat lunch and hang out with Josh Masterson. I had to drive down to Sugarland to get there, but it was worth it. (It’s weird to realize halfway through a thought that the person that you are talking about is going to read this later on today…) Then I drove back to CS, which was quite an adventure. I usually don’t have issues driving in traffic, but between not having a good deal of quarters to pay for tolls and not really knowing the best way to get back, I had issues for a while. I don’t know how many times I had to turn around, but it was WAY TOO MANY! I usually knew approximately where I needed to be going but was having a hard time figuring out how to get from point A to point B, even though I could see point B. Traffic was heavy and fast, but that just meant that I got to have some fun while driving fast.

Well, after what has seemed like forever, I finally have gotten a few leads on jobs. And they all have come at once… (When it rains it pours.) So, I had a phone interview with Perot Systems on Wednesday. Just before the interview was supposed to happen, I got an email from Google, saying that they wanted to set up a phone interview, too. So, I have that one a little later on today. After the interview with Perot, they wanted a face-to-face interview, so I have a team interview with them on Monday in Plano. Man, I guess it was a good thing that I wasn’t planning on going home for Easter. I also had a lead on a Java job, but I think that I’m going to pursue these system admin positions, as that’s really where I want to be heading, job-wise.

Posted in General, Technology | 2 Comments

The top ten reasons why you should hire me

Posted by Thomas Mon, 07 Mar 2005 16:44:43 +0000

Here are the Top Ten reasons why you should hire me as your next entry-level Linux System Administrator:

10. I have eaten breakfast from a vending machine.
9. I have used Debian GNU/Linux for over five years, starting my freshman year of college. I liked Debian before it was cool to like Debian. I’ve even made a few Debian converts. Yes, I may be a Debian zealot, but don’t let that scare you.
8. I have administered Debian GNU/Linux for that same timeframe, including setting up my own DNS, NFS, NIS, Samba, FTP, and Web services, just to name a few.
7. I work everyday on my own Debian GNU/Linux workstation. I am comfortable with, dare I say love, VIM, the command line, and editing flat config files.
6. I graduated in 4 years with a degree in Computer Engineering from Texas A&M University.
5. I am a quick learner, a fast problem solver, and have a high attention to detail. I think that sys admin skills are sys admin skills. As such, it should be trivial to switch from one distro to another.
4. I can find just about anything on the internet, including finding solutions to problems other people can’t find and finding solutions to problems faster than most.
3. I eat, sleep, drink, and breathe Linux. I have literally found bugs in my code in my sleep and have dreamed about kernel messages.
2. I make computers do what I tell them to do. They cower before my very presence. Many a friend has called on me to fix a problem that goes away as soon as I enter the room.
1. I am passionate about Linux and have an aptitude for its administration.

What I mean is that I think that I am a smart and hard worker and can do an awesome job. Somebody just needs to give me a chance. I know that I won’t let them down.

Posted in Technology | 3 Comments


Posted by Thomas Sat, 05 Mar 2005 23:46:09 +0000

Men’s Retreat was glorious. We stayed up last night ’til something like 3am debating about science and Christianity. I finally figured out that I’m not so sure that everybody really can think for themselves. Yeah, I’ll agree that in an ideal world, everybody should have a thorough thought process and reasoning behind their choices, actions, and decisions… but we’re not in an ideal world. So, I would say that there are a bunch of people who don’t care and even more that really can’t grasp all of the subtleties of certain concepts. Anyway, if you have some free time, I’m sure that Jeremy would love to argu^H^H^H^Hdiscuss it with you.

So, really what I was going to post were my accomplishments for the day. Primarily, I captured the second tape of video from Crunchtime, since I finally have space to work with now that I’ve set up the 250GB drive. I also figured out some stuff about aspect ratios et al. of the video, since the video that we took is in a 16:9 format. I had exported the slideshow in 4:3 without knowing it and now know how to export in 16:9 as it should be. I also cleaned up the Crunchtime Productions video, since the backwards part was generally crappy and choppy.

The job hunt still continues, but with no feedback. Although I did finally pick a business card layout that I liked and went to Kinkos to have them print them up. Tomorrow (after 7pm) we’ll see how that comes out.

Enough for now… I’m going to watch some West Wing.

Posted in General, Technology | 1 Comment

xfsdump and xfsrestore

Posted by Thomas Fri, 04 Mar 2005 15:05:16 +0000

xfsdump and xfsrestore to the rescue! I tried xfs_copy just before I left last night, but when I got back today and tried to mount it, the mount failed. So, I dug around some and found a couple of programs called xfsdump and xfsrestore. One apt-get command later, I had them both, as I didn’t have them before. I used xfsdump -J - / | xfsrestore -J - /new, as found on the xfsdump man page… 51 minutes later I have the data copied. Yay! And it mounts properly! And the best part is shown below:

[tlg1466@argento tlg1466]$ df
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/sda2             19998656   4253776  15744880  22% /
tmpfs                   257364         0    257364   0% /dev/shm
/dev/sda3            223074000  80541848 142532152  37% /home
/dev/sdb4            108998244  80562312  28435932  74% /mnt/sdb
[tlg1466@argento tlg1466]$ df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda2              20G  4.1G   16G  22% /
tmpfs                 252M     0  252M   0% /dev/shm
/dev/sda3             213G   77G  136G  37% /home
/dev/sdb4             104G   77G   28G  74% /mnt/sdb4

I got my 12GB back! I’m so happy.

Posted in Technology | Comments Off

New Hard Drive

Posted by Thomas Fri, 04 Mar 2005 02:14:00 +0000

My new Seagate 7200.8 250GB SATA drive came today! It took a trip home to grab a Molex to SATA power connector and four screws to get it properly installed. It took something like eight minutes to transfer the four Gigs of data from the old / to the new / partition, and then another hour to transfer my data from the old /home to the new /home. It then took like four hours to figure out what exactly I needed to do to get lilo on the new drive properly. So, it’s working now (duh! from where do you think I am posting?), but I have one pretty big problem. I used the command time tar cflp - . | (cd /mnt/new_partition; tar xflp -) to copy the files. I found it some time ago on Usenet, before the bastardization of the interface by Google… :(. Anyway, check out the df:

Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/sda2             19998656   4254016  15744640  22% /
tmpfs                   257364         0    257364   0% /dev/shm
/dev/sda3            223074000  93303252 129770748  42% /home
/dev/sdb4            108998244  80562312  28435932  74% /mnt/sdb4
[tlg1466@argento /]$ df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda2              20G  4.1G   16G  22% /
tmpfs                 252M     0  252M   0% /dev/shm
/dev/sda3             213G   89G  124G  42% /home
/dev/sdb4             104G   77G   28G  74% /mnt/sdb4

Where the crap did those extra 12GB of data come from? This greatly disturbs me…
Just for kicks, I’ll give everybody the rest of the info about my drives:

[tlg1466@argento /]$ fdisk -l
Disk /dev/sda: 250.0 GB, 250059350016 bytes
255 heads, 63 sectors/track, 30401 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1         125     1004031   82  Linux swap
/dev/sda2             126        2616    20008957+  83  Linux
/dev/sda3            2617       30401   223183012+  83  Linux

Disk /dev/sdb: 122.9 GB, 122942324736 bytes
16 heads, 63 sectors/track, 238216 cylinders
Units = cylinders of 1008 * 512 = 516096 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1        2003     1009480+  82  Linux swap
/dev/sdb3            2004       21844     9999864   83  Linux
/dev/sdb4           21845      238216   109051488   83  Linux

[tlg1466@argento /]$ sudo xfs_info /dev/sda3
meta-data=/home                  isize=256    agcount=16, agsize=3487234 blks
         =                       sectsz=512
data     =                       bsize=4096   blocks=55795744, imaxpct=25
         =                       sunit=0      swidth=0 blks, unwritten=1
naming   =version 2              bsize=4096
log      =internal               bsize=4096   blocks=27244, version=1
         =                       sectsz=512   sunit=0 blks
realtime =none                   extsz=65536  blocks=0, rtextents=0
[tlg1466@argento /]$ sudo xfs_info /dev/sdb4
meta-data=/mnt/sdb4              isize=256    agcount=26, agsize=1048576 blks
         =                       sectsz=512
data     =                       bsize=4096   blocks=27262872, imaxpct=25
         =                       sunit=0      swidth=0 blks, unwritten=1
naming   =version 2              bsize=4096
log      =internal               bsize=4096   blocks=13311, version=1
         =                       sectsz=512   sunit=0 blks
realtime =none                   extsz=65536  blocks=0, rtextents=0

Posted in Technology | 1 Comment

Google and RSS

Posted by Thomas Thu, 03 Mar 2005 20:56:37 +0000

I wrote an email to a cool guy named Robert Cringely. Who knows if I’ll hear anything back from him. I bet that he receives a HUGE amount of email, and mine was pretty long and wordy… oh well, at least I tried to sound cool… :) Here it is:

Dear Mr. Cringely,

I am an avid reader of your weekly column and appreciate your insight into the bigger picture of the technology market. While I can only imagine the mountain of email that you must receive every day, you are the only person that I thought might answer a weighty question that I have about that bigger picture: Why do you think that Google has yet to enter the RSS reader market?

They already possess similar technology (their Groups and Gmail interfaces) that would allow them easy entry into this market, as well as their targeted advertising and their already large collection of RSS material. It seems to me that Google is missing out on this new way for people to interact with the internet. I cannot even speculate what Ask Jeeves will do next with Bloglines (my RSS reader of choice) or why Google might not have been interested in acquiring them. I have never seen advertising from Bloglines but might guess that it is inevitable. I know that Google has a proven record of providing relevant text ads and the ability to relate web sites to one another. This would not only provide a service to the user by providing good ads, but also by suggesting similar content for the user to read.

I honestly apologize for the length of this email, as it seems to have run away from me a bit. But, I hope that the length and wordiness have not undermined my little attempt at insight and relevance.

Thank you for your time,
Thomas Garner

Posted in Technology | 2 Comments

MPEG2 vs DIVX DVD Burning Cost Analysis

Posted by Thomas Tue, 01 Mar 2005 23:35:58 +0000

Let’s see how huge of a nerd I am…

Here is a graph of the cost of burning X number of episodes.

Right around the 924th episode, it becomes cheaper to have bought a nice, new Philips DVP642 DivX-Certified Progressive-Scan DVD Player from Amazon.com for $69.99 than to have used your old DVD player and transcoded all of the episodes to MPEG2.

Say, for instance, you like The West Wing. They are currently in the middle of their sixth season. You can either have 12 discs of awesome DIVX-ness or 48 discs of MPEG2-ness that amount to the same six seasons. Personally, I would prefer the 12 to the 48. Plus, I would rather spend the half hour per season burning the DIVX than the 13.5 hours transcoding and then burning the MPEG2.
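For anybody who wants to check my math, here’s a quick Python sketch of the break-even calculation. The $69.99 player price and the 12-vs-48 discs per six seasons come from this post; the roughly 22 episodes per season and the blank-disc price are my own guesses, so tweak them for your own situation.

```python
import math

# Inputs -- player price and disc counts are from the post;
# episodes per season and blank-disc price are assumptions.
PLAYER_COST = 69.99               # Philips DVP642 from Amazon
DISC_COST = 0.28                  # assumed price of one blank DVD-R
EPISODES = 6 * 22                 # six seasons of The West Wing, ~22 eps each
EPS_PER_DIVX_DISC = EPISODES / 12    # 12 discs hold six seasons as DIVX
EPS_PER_MPEG2_DISC = EPISODES / 48   # 48 discs hold the same as MPEG2

# Per-episode disc cost for each format
divx_cost = DISC_COST / EPS_PER_DIVX_DISC
mpeg2_cost = DISC_COST / EPS_PER_MPEG2_DISC

# Break-even: the episode count at which the per-episode savings
# of DIVX have paid for the new player
break_even = math.ceil(PLAYER_COST / (mpeg2_cost - divx_cost))
print(break_even)
```

With these assumptions the crossover lands a bit over 900 episodes, in the same neighborhood as the 924 in the graph; a few cents’ difference in the blank-disc price moves it noticeably.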

Plus, you get the “just plain cool factor” of having a really sweet DVD player that plays DIVX.

Now, if only I had an awesome HD projector, Mac Mini, and 7.1 sound system for all my audio/visual needs…

Posted in Technology | 3 Comments