2011-02-02

Crowd-Sourced Relevancy Ranking in Bing, or why Google is wrong

Let me start by saying I’m a Google fanboy and a half. I love that company and use all their products. My life revolves around my Nexus One and its deep and tight integration with Google’s stack, which is my extremely large clustered brain-annex, constantly available with a few taps of my touchscreen.

Google is awesome and great and I love them to death. That said, they are wrong about Bing copying their results.

See these first-party posts for details:

Thoughts On Search Quality (Bing Blog)

Bing uses Google’s Results — and denies it (Google Blog)

Why Microsoft is not in the wrong: Crowd Sourcing is not stealing

What Microsoft has done is create a genuinely useful process for improving the relevancy of the Bing search engine. It’s not a groundbreaking technique. Like most web-based applications in the world, they just monitor your behaviour. You type ‘foo’ into their search box, and are presented with 10 results on your first screen. You click the #2 result. This is recorded. The same thing happens for 10,000 other users. Eventually Bing figures out that the #2 result really should be the #1 result, and ups its rank.

This is no different than Amazon’s suggestion engine: ‘Users who viewed this product ultimately bought this other product’.

This process doesn’t depend on Google. This process would work purely as a way of improving rank within the Bing system, with no outside influences. It also doesn’t really require a special toolbar to make it happen. You could collect that data through their normal web interface just as easily.
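The mechanics of that feedback loop can be sketched in a few lines. To be clear, this is purely my own illustration of the general technique, not anything Bing actually runs:

```csharp
using System.Collections.Generic;
using System.Linq;

// Toy illustration of click-through re-ranking: record which result
// users click for each query, then re-order the engine's baseline
// results by observed click counts.
public static class ClickRanker
{
    // clicks[query][url] = number of times users clicked that result.
    private static readonly Dictionary<string, Dictionary<string, int>> clicks =
        new Dictionary<string, Dictionary<string, int>>();

    public static void RecordClick(string query, string url)
    {
        if (!clicks.ContainsKey(query))
            clicks[query] = new Dictionary<string, int>();
        if (!clicks[query].ContainsKey(url))
            clicks[query][url] = 0;
        clicks[query][url]++;
    }

    // Re-order the engine's baseline results by click count; ties keep
    // their baseline order (OrderByDescending is a stable sort).
    public static List<string> Rerank(string query, List<string> baseline)
    {
        Dictionary<string, int> counts;
        if (!clicks.TryGetValue(query, out counts))
            return baseline;
        return baseline
            .OrderByDescending(url => counts.ContainsKey(url) ? counts[url] : 0)
            .ToList();
    }
}
```

A real engine would blend click counts into a larger scoring function, decay old clicks, and guard against click spam, but the feedback principle is the same.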

This is not an issue of spyware, cheating, or copying. It’s just Bing using crowd-sourced data to determine relevancy. It’s very smart. It’s not dishonest. Get over it Google.

How to improve search result relevancy?

I find this interesting because Google currently faces a huge challenge: reducing the amount of spam that is polluting their results. Bing clearly has a tactic for that, though it may or may not be completely effective. The value of the click-through data is only as good as the user who clicked on the result. If the majority of people searching for “Foo” clicked on, say, result #3 instead of #2 in our previous example, and #3 was a spam site, then #3 would get ranked up. Bad news!

Looks like Bing is trusting our judgement as a user community. We know what’s relevant and show it by clicking through. If we choose a spam site as our main result, c’est la vie!

But how can we improve search relevancy and reduce false positives in our result set? The answer so far seems to be “curate the web”, or, like Bing, use a “mechanical turk”, aka click-stream, crowd-sourced relevancy, trusting that people expressing their preferences in aggregate will let the correct answer emerge over time.

Our solution: Contextual Search

My company has a different solution: Working at a higher level of abstraction than words and documents. We have built a novel search tool that allows users to search for contexts, not documents, and make decisions on contexts.

Find me “license” where the document has “drivers license” in it, but not where it has “fishing license”, unless it also has something else I want in it. Traditional Boolean search, which operates against an inverted index of terms to documents (which is what both Bing and Google offer), does not provide for this kind of decision making. It’s impossible without changing how the data is indexed, and that’s not something these guys are going to be doing anytime soon. They have too much invested in their current methodology to change.

We’re hoping to launch a public search site sometime this year that presents our novel approach to improving relevancy in search. I look forward to seeing how it performs compared to Google and Bing.

More on that later, when it’s closer to reality.

What do you think, world at large?

Does anyone have any other ideas about search relevancy? What are some other tactics one might employ, beyond Curating or Crowdsourcing? How else to make the spam go away?

2010-08-02

My new favourite tool

During the course of my work, I use a hex editor a lot.

Specifically, I use a hex editor most for reverse engineering binary file formats that have no documentation, or for fixing corrupted files, and the like. One thing I've always wanted is some way to view the binary contents as structured data. Like: "Starting at this byte offset, consider the next four bytes to be an integer, and show me that integer; then, using that integer, take that many bytes immediately following it, consider them to be string data, and decode as UTF8 or EBCDIC..." etc.

All that is fairly complex, and generally well beyond the facilities of anything short of a fairly low level and full-featured programming language.
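To make that concrete, the kind of structured read I'm describing looks roughly like this in code. The record layout here (a four-byte little-endian length followed by that many UTF-8 bytes) is an invented example, not any particular file format:

```csharp
using System.IO;
using System.Text;

public static class BinaryRecordReader
{
    // Reads a hypothetical record: a 4-byte little-endian integer
    // length, followed by that many bytes of UTF-8 string data.
    public static string ReadLengthPrefixedString(Stream stream)
    {
        BinaryReader reader = new BinaryReader(stream);
        int length = reader.ReadInt32();        // next four bytes as an integer
        byte[] data = reader.ReadBytes(length); // take that many bytes
        return Encoding.UTF8.GetString(data);   // decode as UTF-8
    }
}
```

Hand-rolling that for every field of every format gets old fast, which is exactly the pain point here.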

Well, it looks like the folks over at SweetScape realized this is a workflow that at least SOME people need to have... and so they built the perfect tool for doing it. It's called the 010 Editor. It has this great feature called Binary Templates, as well as scripts. It's the bomb, and it's accelerated my reverse engineering work by at least an order of magnitude.

I'm buying a license, and anyone who knows me will realize that's a pretty big deal. I'm a big fan of FOSS and generally try to use as much of it as possible, avoiding commercial apps... but this is a big exception. SweetScape is a small company run by a father and son team. Hardly "The Man". Lowell and Graeme Sweet -- you rock the block and treat the bytes right. ;)



2010-02-12

Big Data

When I was a kid -- 7 years old -- I got my first computer. At that time, using a computer meant programming a computer. My Commodore VIC20 didn't have a storage device when I got it (I got a tape drive later on). So, if you turned it off, the in-memory program was gone. You had to type it in again the next time you started up the computer.

I remember learning about programming. One of my first programs was just:

10 PRINT "TROY"

I was delighted to see my name show up on the screen. The computer knew me now. I was being reflected back from the screen in glowing pixels. I'd taken a little bit of me, and put it into the computer, and the computer was letting me look at it, and learn about myself, and learn about the computer, at the same time. I was hooked.

10 FOR I = 1 TO 10
20 PRINT "TROY"
30 NEXT


This blew my mind. While there was something magical about seeing my name on the screen the first time, seeing it ten times was just really, really exciting. Something about the quantity was just really cool... and how FAST it happened. It put my name up there ten times as fast as it put it up there one time!! Thrilling!!

10 PRINT "TROY"
20 GOTO 10

The screen was filled with my name... and it didn't stop. I got up and I ran to tell my mom. I dragged her downstairs into the basement and babbled on about the computer and how it was just going and going, and how I did it. GOTO mom... GOTO! My excitement was more than I could contain. I was officially a rocket scientist now. My excitement, like the program, was INFINITE!!! I created something that was endless, infinitely long. There was no way to count how many times it put my name on the screen. My computer, at that moment, knew only one thing -- my name, my program -- and it would run it forever until I told it to stop. Happily. This was a kind of love and dedication that was far beyond what any person could ever give. There was something really deep here, between me and the computer.

Fast forward now, 24 years later. I'm a software engineer for a living now, and have been for a while. At my company, I recently got promoted to a fancier title "Director of Software Development", and all the responsibility for success lies on my shoulders. People answer my job advertisements with the salutation "Mr. Howard". That part freaks me out.

I'm still excited by big data. That infinite loop of TROYs on my screen was just the start. Now I design systems that process terabytes of data at a time on hundreds of servers. One of the most fascinating parts of my job these days is still the same as when I was a kid. I love hitting "Run" on a unit test, and seeing what happens. I feel good when it's successful once. My next step, almost without fail, is to see what happens when it runs ten times in a row.. Then 100... Then 1000... 10000... 100000... I just keep adding zeros until the thing breaks down, or until I get bored with it.

Big Data is still exciting, still fascinating. I've now given my computer programs more interesting sample material to work from, and so their world view has expanded. Now instead of only knowing me and my name, my programs know all the details of the personal and business lives of thousands of people whose email is processed by the programs. I think my computer still loves me more than any of them though. Secretly, somewhere in there, I know there's an infinite loop on a background thread that's just cycling over the string "TROY"... forever.


2009-10-22

An Infinite Stream Of Bytes

No, I'm not about to wax poetic about the deep ontological issues raised in The Matrix, or speak meaningfully about how transient the modern world of communication is and how the artifacts of our lifetime have become ephemeral such that our posterity will not be able to remember us, even if they wanted to.

Instead I'm going to post a code snippet that solves an annoying little scenario that comes up every now and again when writing parsers.

Basically, it goes like this:

You're writing a parser, and you need to check every byte in a stream of bytes coming from a file/network/etc. You might need to read forward or backward a little, to match a multi-byte pattern or a value within n bytes of another value. You figure that instead of "peeking and seeking" against the stream (what, it's read-only!?!?), your parser can just store the state, and still only look at a single byte at a time. That's great and all, and you do a quick implementation using stream.ReadByte, which seems to work...

Except it's slow. You know from experience that block reads are way faster, and you want to read a block of data that's say 1k or 4k from your stream, then parse that, fetch another block, parse that, etc... But what if your pattern straddles two blocks? What if the first byte of a two-byte sequence is the last byte in a block, and the next block's first byte is the second byte? Now your parser needs to stop what it's doing, exit the loop, go grab some more data, then restart its iteration over that... You could build all that behaviour into your parser (for every parser that you write)... but it's non-trivial to deal with. In fact, it's a real pain in the butt to refactor a parser to work that way.

Also, you think to yourself, "Man... It would be SOOOOooooo much nicer if I could just write a foreach loop, and get every byte in the stream in one big long iteration... Why doesn't System.IO.Stream implement IEnumerable?!?" It totally makes sense that it should...

Anyhow, story's over. Here's the code to solve it:


public static IEnumerable<byte> GetBytesFromStream(Stream stream)
{
const int blockSize = 1024;

byte[] buffer = new byte[blockSize];
int bytesRead;

while ((bytesRead = stream.Read(buffer, 0, buffer.Length)) > 0)
{
for (int i = 0; i < bytesRead; i++)
{
yield return buffer[i];
}
}
}


And in case it's not obvious, I'll explain what this little guy does. It does a block read from the stream (adjust your block size to suit, or make it a parameter), iterates over the block, and uses the yield keyword to return bytes via the IEnumerable<T> interface. The while loop checks the return value of stream.Read() to see if it returns zero, which means, basically, the stream is done (EOF). If there was a partial read (i.e. fewer bytes than your block-size buffer), bytesRead will be the amount that DID successfully read, and so the for loop that iterates over the block uses bytesRead to ensure we only return valid data (if we had used buffer.Length or blockSize and had a partial read, the stuff after the new data would be leftover data from the last read. NOT COOL!).
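And here's a trivial usage example, in case you want to see it in action. The iterator is reproduced so the snippet stands alone, and the zero-byte-counting task is just something I made up for illustration:

```csharp
using System.Collections.Generic;
using System.IO;

public static class StreamEnumeration
{
    // Same iterator as above, reproduced so this snippet compiles alone.
    public static IEnumerable<byte> GetBytesFromStream(Stream stream)
    {
        const int blockSize = 1024;
        byte[] buffer = new byte[blockSize];
        int bytesRead;
        while ((bytesRead = stream.Read(buffer, 0, buffer.Length)) > 0)
            for (int i = 0; i < bytesRead; i++)
                yield return buffer[i];
    }

    // Walk a stream one byte at a time (block reads happen behind the
    // scenes) and count the zero bytes.
    public static int CountZeroBytes(Stream stream)
    {
        int count = 0;
        foreach (byte b in GetBytesFromStream(stream))
            if (b == 0)
                count++;
        return count;
    }
}
```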

You could stick this method in your utility class if you'd like, or make a wrapper class that wraps Stream and implements IEnumerable<byte>... whatever you want. Maybe you want to be all modern and cool and make it an extension method for Stream.

Here's an example wrapper class:


public class EnumerableStream : Stream, IEnumerable<byte>
{
private readonly Stream _baseStream;

public EnumerableStream(Stream stream)
{
_baseStream = stream;
}

public IEnumerator<byte> GetEnumerator()
{
var bytes = GetBytesFromStream(_baseStream);
return bytes.GetEnumerator();
}

IEnumerator IEnumerable.GetEnumerator()
{
return GetEnumerator();
}

private static IEnumerable<byte> GetBytesFromStream(Stream stream)
{
const int blockSize = 1024;

byte[] buffer = new byte[blockSize];
int bytesRead;

while ((bytesRead = stream.Read(buffer, 0, buffer.Length)) > 0)
{
for (int i = 0; i < bytesRead; i++)
{
yield return buffer[i];
}
}
}

public override bool CanRead
{
get { return _baseStream.CanRead; }
}

public override bool CanSeek
{
get { return _baseStream.CanSeek; }
}

public override bool CanWrite
{
get { return _baseStream.CanWrite; }
}

public override void Flush()
{
_baseStream.Flush();
}

public override long Length
{
get { return _baseStream.Length; }
}

public override long Position
{
get
{
return _baseStream.Position;
}
set
{
_baseStream.Position = value;
}
}

public override int Read(byte[] buffer, int offset, int count)
{
return _baseStream.Read(buffer, offset, count);
}

public override long Seek(long offset, SeekOrigin origin)
{
return _baseStream.Seek(offset, origin);
}

public override void SetLength(long value)
{
_baseStream.SetLength(value);
}

public override void Write(byte[] buffer, int offset, int count)
{
_baseStream.Write(buffer, offset, count);
}
}


And an example of the extension method way...


public static class StreamExtensions
{
public static IEnumerable<byte> GetBytes(this Stream stream)
{
const int blockSize = 1024;

byte[] buffer = new byte[blockSize];
int bytesRead;

while ((bytesRead = stream.Read(buffer, 0, buffer.Length)) > 0)
{
for (int i = 0; i < bytesRead; i++)
{
yield return buffer[i];
}
}
}
}


Enjoy!

2009-07-22

Unicode string detection

I had the need to detect whether or not a given string (in .Net/C#) was Unicode. Specifically, filenames. I had a situation where a filename might be passed to me that could possibly contain Unicode. If it DID contain Unicode characters, I needed to run GetShortPathName and get the 8.3 filename for the file, before passing it into a legacy component that couldn't handle Unicode names...

Well, a "big hammer" approach might just call GetShortPathName on every filename, just to be sure... But that's a costly API call if you're having to do this a million times a second.

So, long story short, I wrote this little function to detect Unicode in a C# .Net string:


public static bool IsUnicode(string s)
{
return s != Marshal.PtrToStringAnsi(Marshal.StringToHGlobalAnsi(s));
}


Now homework for all you kiddies out there... Is this code a memory leak? If so, what should you do to fix it? If not, why not?
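As an aside: if your legacy component only chokes on characters outside the ASCII range, a marshalling-free check is also possible. Note this is a simplification I'm adding here, not equivalent to the round-trip above -- the ANSI code page can represent some characters above 127, so this test is stricter:

```csharp
using System.Linq;

public static class StringChecks
{
    // Rough approximation: treat any character above U+007F as "Unicode".
    // Stricter than the Marshal round-trip, which also accepts characters
    // representable in the current ANSI code page.
    public static bool ContainsNonAscii(string s)
    {
        return s.Any(c => c > 127);
    }
}
```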

2009-01-06

Switch To..., Retry, Cancel

Well, as much as I hate to admit it, I was recently working on VB6 application that uses Office COM automation. It needed to have Word and Excel do a few things while the main application waited.

Every time a user clicked on the UI, you'd get that beautiful dialog: "This action cannot be completed because the other application is busy. Choose 'Switch To' to activate the busy application and correct the problem."

This is particularly onerous because if the user does not click 'Switch To', basically both programs, the VB6 app AND the Office app, sit around waiting for the user to do something. Yay!

Luckily I found the quick and dirty answer to this. Somewhere... Anywhere before the point that you make the COM call, just insert these two lines:


' 2147483647 is the maximum value of a signed 32-bit Long:
' effectively "wait forever" instead of showing the busy dialog.
App.OleRequestPendingTimeout = 2147483647
App.OleServerBusyTimeout = 2147483647



... and you never have to see that dialog again.

Enjoy!

2008-05-15

Refactoring a big if block into a simple command processor using attributes

Recently someone had a problem where they had a massive control block full of if statements looking at a string and dispatching one of a variety of functions. The if block was massive: hundreds of if statements, hundreds of magic strings.

Interestingly, all the functions had the same signature... So I gave him this example of how to use attributes on the methods to specify the corresponding token. We then use Reflection to scan the assembly for all the methods with that attribute, and create a function table keyed by their token, to provide fast lookup. This example shows how to create an object instance and then invoke the method via reflection, but this could be made much simpler if the methods were all static and the function prototype were part of an interface instead of just an unspoken convention.

Here's the "Before" example from the original question...


string tag;
string cmdLine;
State state;
string outData;

...

if (token == "ABCSearch") {
ABC abc = new ABC();
abc.SearchFor(tag, state, cmdLine, ref outData);
}
else if (token == "JklmDoSomething") {
JKLM jklm = new JKLM();
jklm.Dowork1(tag, state, cmdLine, ref outData);
}


A couple of notes:

  • There is no correlation between the token and the class name (ABC, JKLM, ...) or the method (SearchFor, Dowork1).
  • The methods do have the same signature:
    void func(string tag, State state, string cmdLine, ref string outData)
  • The if ()... block is 500+ lines and growing



And here is my example command processor (as a console app):


using System;
using System.Collections.Generic;
using System.Reflection;

namespace ConsoleApplication2
{
public class Program
{
static void Main(string[] args)
{
while(true)
{
Console.Write("[e(x)ecute, (t)okens, (q)uit] -> ");
string s = Console.ReadKey().KeyChar.ToString().ToLower();
Console.WriteLine();

switch (s)
{
case "q":
Console.WriteLine("Finished.");
return;

case "t":
Console.WriteLine("Known tokens:");
foreach (string tokenName in CommandProcessor.GetTokens())
{
Console.WriteLine(tokenName);
}
break;

case "x":
string token = string.Empty;
string tag = string.Empty;
string cmdLine = string.Empty;
string state = string.Empty;

Console.Write("token: ");
token = Console.ReadLine();
Console.Write("tag: ");
tag = Console.ReadLine();
Console.Write("cmdLine: ");
cmdLine = Console.ReadLine();
Console.Write("state: ");
state = Console.ReadLine();

try
{
string output = CommandProcessor.DoCommand(token, tag, cmdLine, State.GetStateFromString(state));
Console.WriteLine("Output:");
Console.WriteLine(output);
}
catch (TokenNotFoundException ex)
{
Console.WriteLine(ex.Message);
}
catch (Exception ex)
{
Console.WriteLine("Unknown error occurred during execution. Exception was: " + ex.Message);
}
break;

default:
Console.WriteLine("Unknown command: {0}", s);
break;
}
}
}
}

public class CommandProcessor
{
// our dictionary of method calls.
internal static Dictionary<string, MethodInfo> availableFunctions = new Dictionary<string, MethodInfo>();

static CommandProcessor()
{
SetupMethodCallDictionary();
}

private static void SetupMethodCallDictionary()
{
// get the current assembly.
Assembly assembly = Assembly.GetExecutingAssembly();

// cycle through the types in the assembly
foreach (Type type in assembly.GetTypes())
{
// cycle through the methods on each type
foreach (MethodInfo method in type.GetMethods())
{
// look for Token attributes on the methods.
object[] tokens = method.GetCustomAttributes(typeof(TokenAttribute), true);

if (tokens.Length > 0)
{
// cycle through the token attributes (allowing multiple attributes
// leaves room for backwards compatibility if you change your tokens
// or consolidate the functionality of the methods, etc.)
foreach (TokenAttribute token in tokens)
{
// look for the token in the dictionary, if it's not there add it..
MethodInfo foundMethod = default(MethodInfo);
if (availableFunctions.TryGetValue(token.TokenName, out foundMethod))
{
// if there is more than one function registered for the same
// token, just keep the last one found.
availableFunctions[token.TokenName] = method;
}
else
{
// add to the table.
availableFunctions.Add(token.TokenName, method);
}
}
}
}
}
}

public static string DoCommand(string token, string tag, string cmdLine, State state)
{
// the data returned from the command
string outData = string.Empty;
MethodInfo method = default(MethodInfo);

// see if we have a method for that token
if (availableFunctions.TryGetValue(token, out method))
{
// if so, create an instance of the object, and then execute the method,
// unless it's static.. in which case just execute the method.
object instance = null;
if (!method.IsStatic)
{
// this just invokes the default constructor... if you need to pass
// parameters use one of the other overloads.
instance = Activator.CreateInstance(method.ReflectedType);
}

object[] args = new object[] { tag, state, cmdLine, outData };

method.Invoke(instance, args);
outData = (string)args[3];
}
else
{
throw new TokenNotFoundException(string.Format("Token {0} not found. Cannot execute.", token));
}
return outData;
}

public static IEnumerable<string> GetTokens()
{
foreach (KeyValuePair<string, MethodInfo> entry in availableFunctions)
{
yield return entry.Key;
}
}
}

public class State
{
public State(string text)
{
_text = text;
}

private string _text;

public string Text
{
get { return _text; }
set { _text = value; }
}

public static State GetStateFromString(string state)
{
// implement parsing of string to build State object here.
return new State(state);
}
}

[AttributeUsage(AttributeTargets.Method)]
public class TokenAttribute : Attribute
{
public TokenAttribute(string tokenName)
{
_tokenName = tokenName;
}

private string _tokenName;

public string TokenName
{
get { return _tokenName; }
set { _tokenName = value; }
}
}

[global::System.Serializable]
public class TokenNotFoundException : Exception
{
//
// For guidelines regarding the creation of new exception types, see
// http://msdn.microsoft.com/library/default.asp?url=/library/en-us/cpgenref/html/cpconerrorraisinghandlingguidelines.asp
// and
// http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dncscol/html/csharp07192001.asp
//
public TokenNotFoundException() { }
public TokenNotFoundException(string message) : base(message) { }
public TokenNotFoundException(string message, Exception inner) : base(message, inner) { }
protected TokenNotFoundException(
System.Runtime.Serialization.SerializationInfo info,
System.Runtime.Serialization.StreamingContext context)
: base(info, context) { }
}

public class ABC
{
[Token("ABCSearch")]
public void SearchFor(string tag, State state, string cmdLine, ref string outData)
{
// do some stuff.
outData =
string.Format("You called ABC.SearchFor. Parameters were [tag: {0}, state: {1}, cmdLine: {2}]", tag, state.Text, cmdLine);

}
}

public class JKLM
{
[Token("JklmDoSomething")]
public void Dowork1(string tag, State state, string cmdLine, ref string outData)
{
// do some other stuff.
outData =
string.Format("You called JKLM.Dowork1. Parameters were [tag: {0}, state: {1}, cmdLine: {2}]", tag, state.Text, cmdLine);
}
}
}

How to get information about your current culture.

Instead of doing a college survey and asking a bunch of probing questions about the lives of twenty-somethings, there's an easier way to get information about your current culture. Just look at CultureInfo.CurrentCulture.

Here's a quick program that shows how to do that. This can be very useful in debugging and troubleshooting how your program behaves on machines that are set up for other languages or regions.


using System;
using System.Collections.Generic;
using System.Text;
using System.Globalization;

namespace ConsoleApplication1
{
class Program
{
static void Main(string[] args)
{
CultureInfo currentCulture = CultureInfo.CurrentCulture;

Console.WriteLine("CultureInfo");
Console.WriteLine("-----------");
Console.WriteLine("DisplayName: {0}", currentCulture.DisplayName);
Console.WriteLine("Name: {0}", currentCulture.Name);
Console.WriteLine("LCID: {0}", currentCulture.LCID);
Console.WriteLine();

Console.WriteLine("NumberFormatInfo");
Console.WriteLine("----------------");
Console.WriteLine("Decimal Separator: {0}", currentCulture.NumberFormat.NumberDecimalSeparator);
Console.Write("Digits: ");

foreach (string s in currentCulture.NumberFormat.NativeDigits)
{
Console.Write(s + " ");
}

Console.WriteLine();
}
}
}




Base output should look like:


CultureInfo
-----------
DisplayName: English (United States)
Name: en-US
LCID: 1033



NumberFormatInfo
----------------
Decimal Separator: .
Digits: 0 1 2 3 4 5 6 7 8 9

Filtering a network stream using a wrapper

So, not that long ago, someone posted a question asking how to deal with a certain situation. The situation is such that there is a network stream coming from somewhere that has certain data you want to keep, and certain data you don't. Control blocks, extra header information, a weirdo protocol, too much data coming back from an API, etc.

My suggestion was to create a simple container object (aka wrapper) to the existing network stream, that operates the same as the network stream, but does the necessary filtering.

Here's an example of how you'd use it, and an example base class implementation for the filters follows. In the actual problem case, he was dealing with a NetworkStream that contained XML data in irregular chunks, with control blocks as fixed headers. Each header indicates how much XML data follows. The filter removes the headers as needed, presenting a clean stream of XML data for the XmlReader to parse.

I've left out the concrete implementation that actually parses the stream, and here you just have the FilteredNetworkStream base class and an idea of how to use it once you implement it. All that's left for the implementer is to override the abstract method FilterBeforeRead, which contains the customized filtering logic for the particular situation.



using (NetworkStream inputStream = GetNetworkStreamFromSomewhere())
using (StreamWriter outputStream = new StreamWriter(@"C:\Path\To\File.xml", false))
{

XmlReader reader = XmlReader.Create(new FilteredNetworkStream(inputStream));
while (reader.Read())
{

// method returns empty string if current data is discardable
string outputData = GetDesiredDataFromReader(reader);

if (!string.IsNullOrEmpty(outputData))
{

// save desired data to local file
outputStream.Write(outputData);
}
}
}


Here's the base class:


public abstract class FilteredNetworkStream : Stream
{
public FilteredNetworkStream(NetworkStream baseStream)
{
_baseStream = baseStream;
}

protected NetworkStream _baseStream;
public abstract void FilterBeforeRead();

#region Stream Implementation

public override bool CanRead
{
get { return _baseStream.CanRead; }
}

public override bool CanSeek
{
get { return _baseStream.CanSeek; }
}

public override bool CanWrite
{
get { return _baseStream.CanWrite; }
}

public override void Flush()
{
_baseStream.Flush();
}

public override long Length
{
get { return _baseStream.Length; }
}

public override long Position
{
get
{
return _baseStream.Position;
}
set
{
_baseStream.Position = value;
}
}

public override int Read(byte[] buffer, int offset, int count)
{
this.FilterBeforeRead();
return _baseStream.Read(buffer, offset, count);
}

public override long Seek(long offset, SeekOrigin origin)
{
return _baseStream.Seek(offset, origin);
}

public override void SetLength(long value)
{
_baseStream.SetLength(value);
}

public override void Write(byte[] buffer, int offset, int count)
{
_baseStream.Write(buffer, offset, count);
}

#endregion
}
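For a flavour of what a concrete implementation might look like, here's a hypothetical subclass of the FilteredNetworkStream base class above. The four-byte length header is invented for illustration; it is NOT the protocol from the original question. Note that it also overrides Read, so that a single read never crosses a chunk boundary:

```csharp
// Hypothetical concrete filter: assumes each chunk of XML on the wire
// is preceded by a 4-byte little-endian length header. Relies on the
// FilteredNetworkStream base class defined above.
public class HeaderStrippingStream : FilteredNetworkStream
{
    private int _remainingInChunk;

    public HeaderStrippingStream(NetworkStream baseStream)
        : base(baseStream)
    {
    }

    public override void FilterBeforeRead()
    {
        // At a chunk boundary, consume the header so the caller only
        // ever sees payload bytes.
        if (_remainingInChunk == 0)
        {
            byte[] header = new byte[4];
            int read = 0;
            while (read < header.Length)
            {
                int n = _baseStream.Read(header, read, header.Length - read);
                if (n == 0) return; // clean end of stream
                read += n;
            }
            _remainingInChunk = BitConverter.ToInt32(header, 0);
        }
    }

    public override int Read(byte[] buffer, int offset, int count)
    {
        FilterBeforeRead();
        if (_remainingInChunk == 0) return 0;

        // Cap the read at the chunk boundary so header bytes never
        // leak into the XML the caller is parsing.
        int bytesRead = _baseStream.Read(buffer, offset, Math.Min(count, _remainingInChunk));
        _remainingInChunk -= bytesRead;
        return bytesRead;
    }
}
```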

2008-04-29

Getting the field list returned from an ad-hoc Sql query

So, recently I needed to make an application that allowed a user to enter an arbitrary Sql query, and elsewhere in the UI I needed to display a drop-down with the fields that this arbitrary query returned.

This poses a small problem. It's very simple if the user is doing simple queries that don't take long to execute: you could just run the query, take the first result, and get the list of fields. Well... this works for simple queries that return small result sets, but we needed to support queries that potentially return as many as 48 million results, using complex queries including joins between multi-million-row tables, aggregates, and that sort of thing.

In other words, the queries are slow. Really slow. They create a lot of UI lag when I go to get the field names for the drop down box.

My first attempt was to take the query and wrap it up like this:

SELECT TOP(1) * FROM ( /* original query here */ ) fieldNamesTable

My thinking was that if I specified that I only wanted the first record, it'd be really quick, even with a complex query. This is true. It's much faster, but it's still slow. Too slow. A lot of UI lag still remained.

So, my second attempt worked much better. I wrapped the query again, but now it looks like:

SELECT * FROM ( /* original query here */ ) fieldNamesTable WHERE 1 = 0

Instead of specifying that I wanted the first record, I put a condition in the WHERE clause that will always be false. SQL Server's query execution engine realizes that, and so it knows that the query can never return data. So it immediately returns with 0 results. But I get the field names!! This is SUPER fast!
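On the C# side, the whole thing boils down to wrapping the user's query and reading the column names off the reader. A sketch (connection string and query are placeholders; even with zero rows, SqlDataReader still exposes the column metadata):

```csharp
using System.Collections.Generic;
using System.Data.SqlClient;

public static class QueryFieldLister
{
    // Wrap an arbitrary user query so it returns zero rows but still
    // exposes its column metadata.
    public static string WrapForSchemaOnly(string userQuery)
    {
        return "SELECT * FROM (" + userQuery + ") fieldNamesTable WHERE 1 = 0";
    }

    public static List<string> GetFieldNames(string connectionString, string userQuery)
    {
        var fields = new List<string>();
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(WrapForSchemaOnly(userQuery), connection))
        {
            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                // No rows will come back, but the schema is available.
                for (int i = 0; i < reader.FieldCount; i++)
                    fields.Add(reader.GetName(i));
            }
        }
        return fields;
    }
}
```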

Enjoy,
Troy

2008-03-24

An amusing bout with the DataGridView control

As usual, WinForms GUI programming is a terrible PIA. Even worse is the flagship of all controls the great beast known as the DataGridView. Working with the DataGridView bends your mind like offensive cutlery at Uri Geller's dinner table.

During my most recent encounter with this control of the third kind, I needed to use a ComboBox column, and that column needed to have an effect on the contents of the other controls. Normally you could just hook up a CellValueChanged event or something of that nature, but that doesn't work out for a ComboBox control in a DataGridView column... There's no event that fires when a user selects an item in the dropdown; only after the user selected the item, then refocused on some other control.

That was annoying. Too many clicks for the user! When I select the item in the dropdown, the row should react, I shouldn't have to click elsewhere.

So here's a quick example of what I did to get that working. In the example, we handle the DataGridView's EditingControlShowing event, grab a reference to the ComboBox, unwire any previous handler we may have hooked up to SelectionChangeCommitted, then wire the event. In the SelectionChangeCommitted handler we call _dataGridView.EndEdit() to commit the edit immediately so the rest of the row can react.

Enjoy!



public class Example
{
/// <summary>
/// Constructor
/// </summary>

public Example()
{
_dataGridView = new DataGridView();

// setup the datagridview here.
DataGridViewComboBoxColumn fooColumn = new DataGridViewComboBoxColumn();
fooColumn.Name = "Foo";
fooColumn.ValueType = typeof(String);
fooColumn.HeaderText = "Foo";
fooColumn.Items.Add("Bar");
fooColumn.Items.Add("Baz");
fooColumn.Items.Add("Fizz");
fooColumn.Items.Add("Buzz");
fooColumn.Items.Add("FizzBuzz");
fooColumn.DefaultCellStyle.NullValue = "Bar";

_dataGridView.Columns.Add(fooColumn);

// hook up editing control showing event
_dataGridView.EditingControlShowing += new DataGridViewEditingControlShowingEventHandler(_dataGridView_EditingControlShowing);

// create a delegate for the method that will handle the event
_comboBoxSelectDelegate = new EventHandler(combo_SelectionChangeCommitted);
}

private DataGridView _dataGridView;
private EventHandler _comboBoxSelectDelegate;

void _dataGridView_EditingControlShowing(object sender, DataGridViewEditingControlShowingEventArgs e)
{
// get the control from the event args.

ComboBox combo = e.Control as ComboBox;


if (combo != null)
{
// remove the event subscription if it exists.
combo.SelectionChangeCommitted -= comboSelectDelegate;

// add a subscription to the event
combo.SelectionChangeCommitted += comboSelectDelegate;
}
}

void combo_SelectionChangeCommitted(object sender, EventArgs e)
{
// handle the event, and end edit mode
_dataGridView.EndEdit();
}
}

2008-03-07

Black Box OMG it's addictive.

BlackBox - A simple puzzle game where you shoot rays of light into a black box, and determine the location of atoms inside the box based on the entry and exit points of the ray.

Seriously addicting.

Here is a Wiki article about it and an online-playable Flash version.

2008-02-13

Sharing Menu Items between ToolStrips on a Windows Form

So, you may have found yourself building a nice, user-friendly, somewhat complicated Windows Forms application that has lots of drop-down menus, right-click context menus, and whatnot. You may have naively assumed that you could *share* your menu items, so that you have a consistent set of options, icons, and (more importantly) event handlers for a particular menu item or set of menu items.


Well, I did... I had a Tools drop-down menu with some basic functions that I wanted to also be accessible from a right-click context menu on a TreeView. Redundant? Sure... Convenient? Definitely.

There's a little annoying detail that says that a ToolStripMenuItem can't be "owned" by more than one ToolStrip. In other words, it can only be in one place at a time. So, when you do something like this:

toolsToolStripMenuItem.DropDownItems.Add(myMenuItem);
treeNodeContextToolStrip.Items.Add(myMenuItem);

The menu item in question suddenly disappears from the Tools menu, and appears in the context menu... Hmm...

So, in order to share the menu item, I came up with this hackish solution... I handled the Opening event for the two menu strips, and in each one "took ownership" of the menu item. Through sleight-of-hand, the item appears to live in both menus, even though it's only ever owned by one at a time. We can only get away with this because ToolStrips, being modal, only show one at a time.

So, here's a simple sample:

Hot-Swap Menu Item Sample

private void treeNodeContextMenuStrip_Opening(object sender, CancelEventArgs e)
{
    treeNodeContextMenuStrip.Items.Insert(3, myToolStripMenuItem);
}

private void toolsToolStripMenuItem_DropDownOpening(object sender, EventArgs e)
{
    this.toolsToolStripMenuItem.DropDownItems.Insert(5, this.myToolStripMenuItem);
}

2007-11-15

ASP.NET GridView SelectedIndexChanged event not firing

If you find yourself in the unfortunate position of having a dynamically created GridView control in your ASP.NET page AND needing to handle the SelectedIndexChanged event... as I was, you may find yourself banging your head against the monitor trying to understand why the event isn't getting fired.


Well, dear reader, that's because you didn't set the GridView.ID property, and when the postback for the event is fired, the control is looked up by ID and parameters... The parameters are there, but the ID isn't... So it can't find your control, so it can't fire the event.

So, always assign the ID of dynamically generated ASP.NET controls!
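As a sketch (the control and handler names here are illustrative, not from the post), wiring up a dynamic GridView with its ID assigned might look like:

```csharp
// Illustrative names only; create this on EVERY request (e.g. in Page_Init),
// since dynamically created controls must be re-created for postbacks.
GridView gv = new GridView();
gv.ID = "myGridView"; // the crucial line: a stable, unique ID
gv.AutoGenerateSelectButton = true;
gv.SelectedIndexChanged += new EventHandler(gv_SelectedIndexChanged);
placeHolder1.Controls.Add(gv); // placeHolder1: a PlaceHolder on the page
```

The key line is the ID assignment; per the explanation above, without it the postback can't be routed back to your control.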

2007-10-11

Changing Visual Studio Item Templates

So, Visual Studio is pretty great, right? It makes a lot of things really easy, really automated... saves a lot of typing, etc... However, there are still some areas where I find myself repetitively doing the same thing in certain scenarios.

For example -- Making a new class.

Now, I usually just right-click on my project list, Add..., New Item, then I'm presented with an array of the available Item Templates. So, I'll select Class, rename it, and then I'm presented with this:


using System;
using System.Collections.Generic;
using System.Text;

namespace MyNamespace
{
    class Class1
    {
    }
}


And of course, the first thing I will do is change the class name, set it to public, create an empty constructor, and define code regions to keep things organized... and end up with something like this:


using System;
using System.Collections.Generic;
using System.Text;

namespace MyNamespace
{
    public class Class1
    {
        #region Constructors
        public Class1()
        {
            Init();
        }

        /// <summary>
        /// Initializes field values.
        /// </summary>
        private void Init()
        {
        }
        #endregion Constructors

        #region Fields
        #endregion Fields

        #region Properties
        #endregion Properties

        #region Public Methods
        #endregion Public Methods

        #region Private Methods
        #endregion Private Methods
    }
}


WHOA! That takes quite a few minutes of typing... FOR EVERY CLASS I WRITE!!

So, I decided to go figure out how those templates work.

Where are those files at?

The templates are stored in your Visual Studio installation directory. If you're like me, and running a fairly recent version of Visual Studio 2005, installed with default configuration, your install directory will probably be:

C:\Program Files\Microsoft Visual Studio 8\

and the templates are stored in:


C:\Program Files\Microsoft Visual Studio 8\Common7\IDE\ItemTemplates


Under that directory you'll find subdirectories for each classification of template (CSharp, JSharp, VisualBasic, etc.. ), and in each of those subdirectories, you'll find a zip file for each template.

For what I want to fix, I am looking for this file:


C:\Program Files\Microsoft Visual Studio 8\Common7\IDE\ItemTemplates\CSharp\1033\Class.zip

*Sidenote: Notice that the last subdirectory in the path is the language code 1033 (English); if you've installed Windows/VS with a different language, this will be different. This is one reason you may not see your default templates when working across languages (i.e., Windows is in Spanish, and Visual Studio is installed from the English-language version but is configured to be in Spanish... templates will be missing!)


In the Class.zip file, you will find two files: Class.cs and Class.vstemplate.

Class.vstemplate
Class.vstemplate is just an XML file. This contains information about the template, such as what GUID to look up for the icon to display in the wizard screen, what the sorting order is, what assemblies it references, etc. For the most part, unless you're doing something more complicated than what I want to do, you won't need to edit this. One tag to pay attention to is:

<ProjectItem ReplaceParameters="true">Class.cs</ProjectItem>

This tag says that VS should make a new project item based on Class.cs, and it should parse the file and replace the parameter tokens in it with the appropriate values... So let's look at Class.cs:


Class.cs
Class.cs is the templated code file. The default file looks like this:


using System;
using System.Collections.Generic;
using System.Text;

namespace $rootnamespace$
{
    class $safeitemrootname$
    {
    }
}


So this is the normal new empty class, and the tokens $rootnamespace$ and $safeitemrootname$ are what will get replaced when VS parses the file and passes in the parameters.


Well, I don't know anything about those parameters... so I'm not going to mess with them. However, I did go make a list of the parameters I found in the default templates (I could not find a list of these parameters on the net anywhere...):


$rootnamespace$
$safeitemrootname$
$registeredorganization$
$year$
$guid1$
$ContentTags$
$MasterPage$
$fileinputname$
$classname$
$safeitemname$


So I modified Class.cs to be:


using System;
using System.Collections.Generic;
using System.Text;

namespace $rootnamespace$
{
    public class $safeitemrootname$
    {
        #region Constructors
        public $safeitemrootname$()
        {
            Init();
        }

        /// <summary>
        /// Initializes field values.
        /// </summary>
        private void Init()
        {
        }
        #endregion Constructors

        #region Fields
        #endregion Fields

        #region Properties
        #endregion Properties

        #region Public Methods
        #endregion Public Methods

        #region Private Methods
        #endregion Private Methods
    }
}

Yay! My fingers are saved... but what's that you say? All your base are NOT belong to us? True. There is a cache.


Refreshing the Visual Studio Template Cache


The template files are cached in a folder called ItemTemplatesCache in the same location as the template folder. So..


  1. Close all Visual Studio Windows
  2. Clear the contents of the ItemTemplatesCache folder
  3. Open a DOS prompt (normal one, or Visual Studio 2005 Command Prompt)
  4. Run devenv /installvstemplates (if that doesn't work, run devenv /setup)


Now, when you restart Visual Studio, your new templates will be installed...


BUT! the fun doesn't stop here! The same templating structure for items also applies to projects! But that's next post...

2007-07-10

CodeSnippet: PrintObject

Following up on the same idea as the ListEnum code snippet, this is a method I often use to print the properties of an object to the console.

I write a number of small command-line utilities, and typically, I create a "property bag" type of object that contains all of the possible command-line options. Just before execution, I like to display the options to the user, so that they know that the program knows what they meant by the command-line options.

Here's the code that does that:


// Requires: using System; using System.Reflection;
private static void printObject(object obj)
{
    PropertyInfo[] pia = obj.GetType().GetProperties();
    foreach (PropertyInfo pi in pia)
    {
        // Guard against null property values before printing.
        object value = pi.GetValue(obj, null);
        Console.WriteLine(
            pi.Name.PadRight(16, ' ') +
            ": " +
            (value == null ? "" : value.ToString()));
    }
}



This could, of course, be used for any scenario where you want to inspect the values of the properties on an object. If you were so inclined, this could easily be expanded to have a lot more detail, handle arrays, print the type name of the object, etc.. I have various permutations of this method that do some or all of that as needed. I have considered turning this into a class with all those fiddly bits configurable, but haven't gotten around to it yet.. If I ever do, I'll post it here!
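As a sketch of one such permutation (my own variation, with an invented name, not code from one of those utilities): print the type name first, and expand collection-valued properties item by item.

```csharp
// Requires: using System; using System.Collections; using System.Reflection;
private static void printObjectDetailed(object obj)
{
    // Print the type name of the object first.
    Console.WriteLine(obj.GetType().FullName);

    foreach (PropertyInfo pi in obj.GetType().GetProperties())
    {
        object value = pi.GetValue(obj, null);
        IEnumerable items = value as IEnumerable;
        if (items != null && !(value is string))
        {
            // Expand array/collection properties one item per line.
            Console.WriteLine(pi.Name.PadRight(16, ' ') + ":");
            foreach (object item in items)
            {
                Console.WriteLine(new string(' ', 18) + item);
            }
        }
        else
        {
            Console.WriteLine(
                pi.Name.PadRight(16, ' ') + ": " +
                (value == null ? "" : value.ToString()));
        }
    }
}
```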

2007-06-08

IComparable and Egocentrism

Today, on the ride home from work on the MAX train (local light-rail here in Portland, OR), I overheard a girl talking to some young Hispanic men. She was babbling on in a typically "White American" way, about cultural differences, and how "We're really more alike than we are different." and that popular media tries to force differences down our cultural throats through advertisements and TV (evil incarnate).

While her stance is in many ways similar to my own thinking, I still felt compelled to consider how I would respond if I were having the conversation with her... It would go something like this...

Why do we put such a fine point on our differences? Why do we go to war over skin colours, eating habits, clothing choices, and other such nonsense? Because human beings are intrinsically scared shitless of sameness. Internally, we must compare everything. We are so bound up in the process of comparison logic, that it permeates our every action. Is this bad or good? Better or worse? Bigger or smaller? Subordinate or superordinate? Our base class is IComparable.

These thoughts consume our lowest level drives.. To be a good person.. To get ahead in life... To be comfortable (as opposed to NOT comfortable, and any degree of comfortable is better than any degree of uncomfortable). To be powerful.. not just powerful, but specifically more powerful than you were before, or more powerful than the other guy.

So we focus on our differences, because through our differences we can find something, ANYTHING to make us special, better, to return 1 on our .CompareTo() call for at least one property.

This got me to thinking about the implementation of IComparable in .NET/C#. Isn't it quite egocentric? To presume that the scope of knowledge within a single object type is sufficient to allow it to be compared to any other type? To consider that I know how to compare myself to any other thing, even if I don't know what that thing is? That notion is quite absurd. What I find interesting about the implementation is that .CompareTo() takes an untyped object as a parameter. Doesn't it follow that an object of a given type should only be able to compare itself to something else of the same type? That in order to be compared to an object of some other type, it must at least be convertible to that type first, so that it can be compared on equal terms?

There's a lot of discussion about that implementation. It could be argued that it's valid, but nonetheless, it's completely egocentric. How do you resolve a scenario where foo.CompareTo(bar) and bar.CompareTo(foo) both return 1? Which one sorts higher in a call to Sort()? Or do they simply never change position relative to one another? First come, first served?

What if IComparable worked differently? I envision it this way... Imagine a static object called System.Judge. System.Judge has a method .Compare which takes any two objects that implement IComparable. The interface requires each object to maintain a property .CompareValues, containing a list of all the values it is willing to offer up during comparison, organized by Type, Name, and Value. The Judge accesses foo.CompareValues.Types to get a list of types it is willing to be compared to. The Judge calls that on both objects until it finds a list of compatible types to start comparison with. For every matching type, a comparison result is computed, then an average of those comparisons is evaluated, and the object with the highest average of comparison success is considered the victor. The .CompareTo calls would naturally nest across the various IComparable types presented, until finally a value type with a fixed, built-in comparison method is found and stops the nesting.
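Purely as a thought experiment, here is a rough C# sketch of that hypothetical Judge. Every name in it (Judge, ICrossComparable, CompareValues) is invented for illustration; nothing like this exists in .NET, and the nesting into deeper comparable types is elided.

```csharp
// Speculative sketch only; these types are invented, not part of .NET.
using System;
using System.Collections.Generic;

public interface ICrossComparable
{
    // The values this object is willing to offer up for comparison,
    // organized by the type each value should be compared as.
    IDictionary<Type, IComparable> CompareValues { get; }
}

public static class Judge
{
    public static int Compare(ICrossComparable foo, ICrossComparable bar)
    {
        int total = 0, matches = 0;
        foreach (KeyValuePair<Type, IComparable> pair in foo.CompareValues)
        {
            // Only compare on types both parties agreed to expose.
            IComparable other;
            if (!bar.CompareValues.TryGetValue(pair.Key, out other))
                continue;
            total += Math.Sign(pair.Value.CompareTo(other));
            matches++;
        }
        // The side winning the majority of matched comparisons is the victor,
        // so Compare(a, b) == -Compare(b, a) by construction.
        return matches == 0 ? 0 : Math.Sign(total);
    }
}
```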

This system would of course be more complicated, and require a lot more processing for each call, resulting in much slower performance... Ah, but the logic would be sound, and that, my friends, is much more valuable than processing time.

Good night.

2007-05-31

Disable Design-Time Support in Visual Studio

I recently wrote a class in c# that inherits from System.Diagnostics.Process. This class abstracts a shelling-to-disk process that I need to do. Something like this:

public class MyShellTask : Process
{
...
}

One thing that bugged me to no end is that, in Visual Studio, when you double-click the file in the Solution Explorer, it considered it "designable" even though there was no designer. That means I got an empty page every time, telling me that it was not designable, with a link to "View Code". Well, "View Code" is what I wanted when I double-clicked, not "View Designer"!

So after getting very frustrated, I did the natural thing.. I googled looking for an answer. I had a notion that I could control this behaviour through Attribute tags on the class if only I knew the right one. Having made designable components before I was familiar with the attributes used for that. I tried fiddling about with Intellisense, Googling, all to no avail.. Nothing worked! Nothing showed up in my Google searches! Good God! What to do now?

Fiddle some more... until finally I found the right attribute:

[System.ComponentModel.DesignerCategory("")]
public class MyShellTask : Process
{
...
}


Note that you must call this with an empty string (don't believe the IntelliSense comment; an empty constructor call will NOT do the same as calling the constructor with an empty string). This sets the category to one Visual Studio doesn't know how to deal with, and so it doesn't offer designer support to you!


This also helps with custom Installer classes for use with your Visual Studio Setup projects, which exhibit the same annoying VS UI problem... i.e.:

/// <summary>
/// Custom Installer actions for this project.
/// </summary>

[RunInstaller(true)]
[System.ComponentModel.DesignerCategory("")]
public partial class MyInstaller : Installer
{
...
}


Hope that helps someone! Now there will be at least ONE hit if someone googles up "disable design-time support" or "disable designer support" like I did!

2007-05-09

Java, NetBeans, and Templates, OH MY!

Well, having recently sparked an interest in moving towards an Open Source, cross-platform, but still as cool as c#/VS2005 development platform, I of course landed in the middle of NetBeans 5.5 and Java.

Having never programmed in Java before, but understanding it's really similar to c# (or I should say c# is really similar to Java), I immediately started fiddling about as if I were writing c# code. It's easy to get past typing uppercase String instead of lowercase, and also not too hard to grok "extends" instead of ":" for inheritance. The one-class-per-file thing, well, I guess it will just make me a more organized programmer, however annoying it is. But the thing that really irked me was properties.

In c# I can do this:

...
private string _name;

public string Name
{
    get
    {
        return this._name;
    }
    set
    {
        this._name = value;
    }
}
...

but in Java, that looks like:

...
private String _name;

public String getName()
{
    return this._name;
}

public void setName(String value)
{
    this._name = value;
}
...


Wow. Extremely obnoxious. Furthermore, I have finally gotten myself broken in with the VS2005 IDE to type "prop" + TAB to get a nice template for my properties. Well, since there is no such thing in Java, this macro also does not exist. So, I proceeded to make a NetBeans code template called "prop", which functions the same way the VS2005 "prop" code snippet does.

So, for all you c# coders who are venturing into the foreign lands of Java, here's a little tutorial on how to add this little cultural comfort into NetBeans.


Property Code Template Installation Instructions:

1. Select menu item "Tools->Options".
2. Click on "Editor" sidebar button.
3. Click on "Code Templates" tab.
4. Select "Java" from languages combo-box.
5. Click "New", and then enter "prop" as the Abbreviation in the dialog.
6. Click "Ok".
7. Make sure "prop" is the selected template, and in the text box below the list, enter these lines:

private ${int} ${_prop};

public ${int} get${Property}()
{
    return this.${_prop};
}

public void set${Property}(${int} value)
{
    this.${_prop} = value;
}

8. Select "Tab" from "Expand On" combo box.
9. Click "OK".


Now you've got it installed.. Feel free to go to the code and give it a whirl! Have a look at the other macros in the list to see what's built in, and once you figure out the syntax of the template notation, make your own templates!

2007-02-16

Code Snippet: ListEnum

Sometimes you want to list an Enum and see what its actual numeric values are... Well, sometimes I do anyway, and when I do, I use:

private static void ListEnum(Type _enum)
{
    Console.WriteLine("enum " + _enum.Name);
    Console.WriteLine("{");
    string[] foo = Enum.GetNames(_enum);
    Array bar = Enum.GetValues(_enum);
    for (int i = 0; i < foo.Length; i++)
    {
        Console.WriteLine(
            foo[i] + " = " +
            ((int)bar.GetValue(i)).ToString() + ",");
    }
    Console.WriteLine("}");
}
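For example, feeding it a small enum (this Color enum is just a stand-in for illustration) prints out the declaration with its numeric values:

```csharp
enum Color { Red = 1, Green = 2, Blue = 4 }

// elsewhere:
ListEnum(typeof(Color));
// prints:
// enum Color
// {
// Red = 1,
// Green = 2,
// Blue = 4,
// }
```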




Enjoy!

2007-02-14

Code Snippet: SQL FileExists

Today in the course of my work, I came across a situation where some of the files referred to in our SQL database were not actually on disk where we thought they were. This was a largeish database of files (over 10,000), and we thought there might be as many as 1600 files missing, so I didn't want to go through each one manually to find the missing files. That led me to this solution: creating a function in SQL to check if the files exist.

The first method I tried for doing this used an undocumented system stored procedure in MSSQL, called xp_fileexist. The code for that looks like this:



-- using MSSQL built-in stored proc xp_fileexist

CREATE FUNCTION FileExists(@File varchar(255)) RETURNS BIT AS
BEGIN
DECLARE @i int
EXEC master..xp_fileexist @File, @i out
RETURN @i
END


It's a pretty simple wrapper around the stored procedure. Implementing it as a function provides a more versatile tool for querying, however, as shown in this example usage:

--- usage

SELECT *
FROM tbl_FileInformation
WHERE (dbo.FileExists(PathAndFile) = 'True')


Unfortunately, this didn't do the trick for us at that time. MS SQL Server apparently cannot, under any circumstances, see mapped drives. All of our data was on a drive called 'P:', which was mapped to a network accessible storage device that our whole company uses. Not to be discouraged, I thought to myself, "Well, perhaps it's just a limitation of the xp_cmdshell options, not SQL Server as a whole. Maybe there's another way of finding this out...".

So that led me to write this next function, which uses Scripting.FileSystemObject via the OLE Automation Options. First things first, I needed to run the following commands to enable OLE Automation, to make it possible:


-- configuring for use of scripting object

sp_configure 'show advanced options', 1;
GO
RECONFIGURE;
GO
sp_configure 'Ole Automation Procedures', 1;
GO
RECONFIGURE;
GO


That's the SQL-native way; the other option is to use Surface Area Configuration and enable it via the check-box. Once that was out of the way, I could try out my function...

-- Using the scripting object

CREATE FUNCTION FileExists(@File varchar(255)) RETURNS BIT AS
BEGIN
declare @objFSys int
declare @i int

exec sp_OACreate 'Scripting.FileSystemObject', @objFSys out
exec sp_OAMethod @objFSys, 'FileExists', @i out, @File
exec sp_OADestroy @objFSys

return @i
END


But... unfortunately, this gave the same results.

So, the moral of the story? Kids, MS SQL just can't see mapped drives. Give it up now!

If you're lucky enough to have all your data on a drive that's local to the SQL server, and find yourself needing to know if a file you've got referenced still exists, then give these methods a try!

YMMV.

2007-02-08

vista sidebar on XP 2

Well, I was very excited about getting the Vista sidebar to work on XP, until I started playing with the Gadgets.

It turns out the patched version of the Sidebar executable is from an early beta version of Vista. The unfortunate point is that the Gadgets that work with the XP version are vastly different from the Gadgets for the released version of Vista. That means all the gadgets you can download don't work with the XP version. It also means that all the information about developing Gadgets for the Sidebar is relevant to the Vista version, not the XP version.

To detail some of the differences:

In the early release, Gadgets are not even called Gadgets, they are called Parts. Parts and Gadgets are very similar, in that they both are a zip file containing what amounts to a mini-webpage which is loaded and interpreted by the Sidebar.

In the XP version, a Part is a zip file with the extension changed to ".part", in Vista the extension is ".gadget". In XP, the directory where those files are stored is %userdir%\Parts, in Vista it's %userdir%\AppData\Local\Microsoft\Windows Sidebar\Gadgets.

Parts require a Manifest.XML file, while Gadgets require a Gadget.xml file. The contents of those files, while containing nearly the same data, use different names for all the tags, making them not compatible.

Beyond that there are a number of other subtle differences, as well as a generally limited set of functionality in the XP version, as compared to the released Vista version.

So, that poses a question. Considering the large number of people who aren't interested in upgrading to Vista, due to performance or cost issues, or just not wanting to uproot and start again, is it worth my time to develop for this patched version of the Vista sidebar? Is there a substantial user-base that would benefit from having more cool Gadgets, I mean Parts, to run in their XP-patched Vista Beta Sidebar?

Perhaps that's a bigger niche than one would initially imagine.

Well, until I have an install of Vista to run the release version, providing a development environment for generating good Gadgets, I may just amuse myself by playing with potentially pointless Parts.

vista sidebar on XP

So, I was perusing codeproject.com and I came across their current Vista Gadgets Competition. Well, this sparked my interest, not because I am interested in prizes, but because until now, I hadn't heard of Gadgets, or the Vista Sidebar, or really much about Vista at all.

The reason for this is that, I, being slightly conservative regarding willy-nilly-ly installing new OSes as soon as they are available, choose to upgrade by force, only when absolutely unable to do otherwise. That means, I'm running Windows XP. So what is a developer to do now that his interest is piqued? Install Vista so I can play with Gadgets and the mystical sidebar? No! I, of course, choose to google up a nice patched version of sidebar.exe that can run on XP!


Woohoo! I'm excited to say, that this not only works, but works without a hitch. I was able to install and run the Vista Sidebar on XP in mere moments. And that also means I can fiddle with widgets, oops I mean Gadgets. ;)


See my next few posts for more information about Gadgets... But before you do, download the XP version of Windows Sidebar so you can join in the fun too!


Links and knowledge courtesy of MSDN and their article about this, which I found out about by reading this blog post on My Digital Life.

2007-02-07

self-stabilization and dijkstra

So, upon creating this blog, the first thing I felt obliged to do was to show it off to my roommate and idea-racquetball partner Max Strini (his blog).


His first reaction was "that's sort of militant". Probably in reference to the term "vanguard" which is generally used to refer to an aggressive front-line force of some sort. I concurred, but still felt I had made a good choice.



Max then immediately hijacked my computer, launched a new firefox window, and began googling up and mumbling unrecognizable names. Dijkstra, The Humble Programmer, and How do we tell truths that might hurt? suddenly appeared and were read out-loud to me by my excited friend and companion.


He was right!



How fascinating was Dijkstra? Fascinating enough that, even though I had worked 13 hours straight, eaten almost nothing all day, and had a beautiful wife and 1.5-week-old child waiting for me, I felt compelled to click about and read more.



I came across self-stabilization and it occurred to me that the impetus behind creating this blog is a form of self-stabilization. In one sense, it's my own self-stabilization, in which I will divest and store the processes by which I created order from confusion in my daily life, providing a resource I can use to reduce the overhead needed to repeat these feats. But secondly, it is self-stabilization as an unconscious process of the online tech-blogging community. I rely heavily on the blogged accounts of problem-solving that others so diligently post for all the world to see. The first thing I do when I encounter a new and challenging problem is to check and see who has dealt with this problem before, and what did they do? What solutions are already available in the vast spray of information available from public search engines? Most of the valuable information I find is not the official formal documentation provided by the institutions that created the technologies, detailing every facet of the system with excruciating thoroughness, but rather the anecdotal, code-snippeted, hyper-linked, semi-stable accounts posted by my unknown peers battling the same dragons. From these bits, I learn how to use the vast and morbid technology described in the aforementioned chronicles of specificata.


So this blog is self-stabilization for the blogging culture on which I so heavily rely. If there were no blogs to read, how would I solve many of those problems? Isn't it my duty to give back to that system? Shouldn't I also serve to stabilize this information vortex and let others reap from my trouble-shooting bounty?



Indeed. I should.