Moved My Group to Yahoo

March 11, 2006

Just a quick note to alert everyone that I’ve moved my group from MSN to Yahoo!

MSN was pretty slow. Also, I ran into problems uploading 'large' files (my presentation PowerPoints and PDFs). I've had really good experiences with Yahoo! and was able to set up a new group there quickly and easily. I've uploaded my content from the Implementation Misfortunes talk and will continue to add stuff throughout the year.

Also, to cut down on ‘trolls’ and spammer-types, the group has moderated membership. If you want to join in, you’ll need to ‘apply.’ It’s no big deal, but should keep out the ‘riff-raff.’

See you at http://groups.yahoo.com/group/mikeamundsen/

A Configuration File Utility for .NET Apps

March 11, 2006

The new .NET 2.0 has some nice classes to support custom configuration and settings files. I’ve been using a utility class to handle this since .NET 1.0. I still have lots of 1.0 and 1.1 apps out there and will continue to support them for quite a while. I suspect many people are in the same boat. To that end, I offer up my ConfigurationFile class for others to use, if they wish.

I should point out that my code is based on a very nice solution posted by Mike Woodring (http://staff.develop.com/woodring). He has a number of great code examples on his site.

Why a custom configuration file?

While it's easy to use the existing *.config file support built into .NET apps, relying on it can be a drawback. First, I'm kind of a 'neat freak' when it comes to populating the standard config files. IMHO, these 'belong' to the .NET runtime and should only contain .NET runtime sections and settings.

Second, and more important, changes to the *.config files can wreak havoc on your running application. In the case of ASP.NET apps, any modification to the config file will cause the app to unload and reload. This means modifications to the config files in a critical app can really hamper performance.

So, how hard is this?

Actually, the basic functionality is pretty simple. You need to be able to read an XML file with separate sections like the "appSettings" section in the standard .NET config files. Ideally, you should be able to select any section in the file, select a single item from a section, and iterate through all the items in a section.

The basic functions of a class to support the above would look like this:

namespace amundsen.ConfigReader
{
    // interface for config implementations
    public interface IConfigurationFile
    {
        // get an item by name
        string this[string key] { get;}

        // get the default section 
        IDictionary Section { get;}

        // get a named section
        IDictionary GetSection(string sectionName);
    }
}

Implementing the above interface is pretty straightforward. Below is one way to do it.

using System;
using System.Xml;
using System.Collections;
using System.Configuration;
using System.Web;

namespace amundsen.ConfigReader
{
    public class ConfigurationFile : IConfigurationFile
    {
        private IDictionary mSection;
        private XmlDocument mFile;

        public ConfigurationFile(string fileName)
        {
            Initialize(fileName, "appSettings");
        }

        public ConfigurationFile(string fileName, string sectionName)
        {
            Initialize(fileName, sectionName);
        }

        // indexer
        public string this[string key]
        {
            get
            {
                string value = null;
                if (mSection != null)
                    value = mSection[key] as string;
                return (value == null ? "" : value);
            }
        }

        // returns collection of items in the default section
        public IDictionary Section
        {
            get { return (mSection); }
        }

        // returns collection of items in a named section
        public IDictionary GetSection(string sectionName)
        {
            try
            {
                XmlNodeList nodes = mFile.GetElementsByTagName(sectionName);
                foreach (XmlNode node in nodes)
                {
                    if (node.LocalName == sectionName)
                    {
                        DictionarySectionHandler handler = new DictionarySectionHandler();
                        return (IDictionary)handler.Create(null, null, node);
                    }
                }
            }
            catch { }

            return (null);
        }

        // handles the work of initialization
        private void Initialize(string fileName, string sectionName)
        {
            mFile = new XmlDocument();
            XmlTextReader reader = new XmlTextReader(fileName);
            mFile.Load(reader);
            reader.Close();

            mSection = GetSection(sectionName);
        }
    }
}

Here’s an example config file for testing:

<?xml version="1.0" encoding="utf-8" ?>
<!-- filename: special.config -->
<configuration>
	<appSettings>
		<add key="item1" value="this is item one"/>
		<add key="item2" value="this is item two"/>
		<add key="item3" value="this is item three"/>
	</appSettings>

	<mySettings>
		<add key="my1" value="this is my one" />
		<add key="my2" value="this is my two" />
		<add key="my3" value="this is my three" />
		<add key="my4" value="this is my four" />
	</mySettings>
</configuration>

And here’s a simple console app to test the above class:

using System;
using System.Collections;
using amundsen.ConfigReader;

namespace ConfigReaderConsole
{
    class Program
    {
        static void Main(string[] args)
        {
            ConfigurationFile cfile = new ConfigurationFile("special.config");
            
            // iterate through appSettings
            foreach(DictionaryEntry entry in cfile.Section)
                Console.WriteLine("{0}={1}",entry.Key,entry.Value);

            Console.WriteLine(cfile.Section["item2"]);

            IDictionary  mySettings = cfile.GetSection("mySettings");
            foreach (DictionaryEntry entry in mySettings)
                Console.WriteLine("{0}={1}", entry.Key, entry.Value);

            Console.WriteLine(mySettings["my3"]);
        }
    }
}

But what about caching the file for ASP.NET?

The problem with the above implementation is that the file is read every time you create an instance of the class. This works fine for stateful apps such as WinForm applications, but is very inefficient for stateless applications such as ASP.NET WebForms solutions.

What we need is an implementation that supports in-memory caching. And here it is:

using System;
using System.Xml;
using System.Collections;
using System.Configuration;
using System.Web;

namespace amundsen.ConfigReader
{
    public class CachedConfigurationFile : IConfigurationFile
    {
        private IDictionary mSection;
        private XmlDocument mFile;

        // constructors
        public CachedConfigurationFile(string fileName)
        {
            Initialize(fileName, "appSettings", false);
        }
        public CachedConfigurationFile(string fileName, bool reload)
        {
            Initialize(fileName, "appSettings", reload);
        }

        public CachedConfigurationFile(string fileName, string sectionName)
        {
            Initialize(fileName, sectionName, false);
        }

        public CachedConfigurationFile(string fileName, string sectionName, bool reload)
        {
            Initialize(fileName, sectionName, reload);
        }

        // indexer
        public string this[string key]
        {
            get
            {
                string value = null;
                if (mSection != null)
                    value = mSection[key] as string;
                return (value == null ? "" : value);
            }
        }

        // returns collection of items in the default section
        public IDictionary Section
        {
            get { return (mSection); }
        }

        // returns collection of items in a named section
        public IDictionary GetSection(string sectionName)
        {
            try
            {
                XmlNodeList nodes = mFile.GetElementsByTagName(sectionName);
                foreach (XmlNode node in nodes)
                {
                    if (node.LocalName == sectionName)
                    {
                        DictionarySectionHandler handler = new DictionarySectionHandler();
                        return (IDictionary)handler.Create(null, null, node);
                    }
                }
            }
            catch { }

            return (null);
        }

        // handles the work of initialization
        private void Initialize(string fileName, string sectionName, bool reload)
        {
            // if we can, load it from the web cache
            try
            {
                if (reload)
                    mFile = null;
                else
                {
                    if (System.Web.HttpContext.Current.Cache.Get(fileName) != null)
                    {
                        mFile = new XmlDocument();
                        mFile.LoadXml(System.Web.HttpContext.Current.Cache.Get(fileName).ToString());
                    }
                }
            }
            catch
            {
                mFile = null;
            }

            // if we have nothing yet, go get it from the disk
            if (mFile == null)
            {
                // load xml document
                mFile = new XmlDocument();
                XmlTextReader reader = new XmlTextReader(fileName);
                mFile.Load(reader);
                reader.Close();

                // set up file dependency and load into the web cache
                System.Web.Caching.CacheDependency cd = new System.Web.Caching.CacheDependency(fileName);
                System.Web.HttpContext.Current.Cache.Add
                    (
                    fileName, 
                    mFile.OuterXml, 
                    cd, 
                    DateTime.MaxValue, 
                    new TimeSpan(1, 0, 0), 
                    System.Web.Caching.CacheItemPriority.Normal, 
                    null
                    );
            }

            // set the collection for use
            mSection = GetSection(sectionName);
        }
    }
}

And a simple ASPX page to test it:

<%@ Page language="C#"%>
<%@ Import Namespace="amundsen.ConfigReader" %>

<script runat="server">

    protected void Page_Load(object sender, EventArgs args)
    {
        CachedConfigurationFile cfile = new CachedConfigurationFile(Server.MapPath("special.config"));
        lbItem.Text = cfile.Section["item1"].ToString();
    }
</script>
<html>
    <body>
        <asp:Label ID="lbItem" runat="server"/>
    </body>
</html>

If you run the above page in debug mode, you'll find that the file is only read from disk on the first pass. After that, all reads come from the in-memory copy of the file. Even better, as soon as you update the file, it falls out of the cache (due to the dependency setting) and will be loaded from disk again the next time it is accessed.

There you have it

Now you have a simple set of classes that support creating and using any number of custom configuration files for both your WinForm and WebForm solutions.

NOTE: You can download the source code for this article from http://groups.yahoo.com/group/mikeamundsen. The downloadable version was built with VS2005, but you can import the raw class files into VS2003 and recompile without any problem.



Check out my photos from my Murray, KY trip

March 9, 2006


I had a chance to snap a few pics while on my trip to Murray, KY for my February INETA talk.
You can check out the images at flickr.



Supporting Content-Negotiation for IIS Webs

March 9, 2006

As part of my current project to implement XML-driven Web solutions, I am re-reading Tim Berners-Lee's Style Guide for online hypertext for inspiration. One of the topics covered is called "Cool URIs don't change." Most of it relates to planning and implementing hackable URIs (more from me on that soon). But, in a footnote called "How can I remove the file extensions…" the topic of content negotiation comes up. I was reminded of how nice it would be to support c-neg for my IIS-hosted projects.

What is content negotiation?

Content negotiation (c-neg for short) is the process where servers and clients negotiate with each other to decide exactly which file or file format will be sent from the server to the client. Typically, c-neg focuses on selecting the right language for a browser, identifying a client's form factor (hand-held device), or determining the extent of its graphics capabilities (only supports black and white images, etc.).

However, c-neg has lots of other possible uses. For my purposes, I want to be able to know when a client is asking for stylesheets, images, or standard markup content.

Why use content negotiation anyway?

One of the big reasons for using c-neg is to hide some of the internal details of the web server tech from users. For example, if you could drop the tail from all file requests, users would not need to know whether your site is using HTML, ASP, ASPX, JSP, CFM, etc. in order to find a page at your site. Theoretically, they would just need to know the title of the document or the topic.

For example, typing http://cool.server.com/shopping_list might return the document shopping_list.html, or shopping_list.asp, etc., depending on what the server has available. Users don't need to worry about the tail at all.

Even more to the point, hiding the tails can protect users when the hosting server switches technologies. For example, when I switched my servers from ASP to ASPX, I basically nullified all my URIs from the past since all my links included the .ASP at the end of URIs. Had I been using c-neg for all documents, changing from ASP to ASPX would not have affected users at all and all my links would still work.

How does content negotiation really work?

The process of c-neg is pretty straightforward. When a client (Web browser) makes a request to a server for some resource (document, image, etc.), that client sends some additional information in the form of headers that help detail the type of document requested and the preferred or supported formats for that client. For example, when a Web browser asks for an image, it can tell the server it prefers PNG over GIF. Or, if it is a hand-held cell phone, it might tell the server that it can only accept the black and white BMP image format.

On each request, the server inspects these headers and, if allowed, can return the best format for the client. Note that I used the phrase 'if allowed.' More on that below.
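
To make that idea concrete, here is a minimal sketch of how an ASP.NET handler can inspect the Accept header a client sends. This is just my illustration; the class name and sample header values are hypothetical.

using System.Web;

// minimal sketch: echo back the content types the client says it accepts
public class AcceptEchoHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        // the raw header, e.g. "text/html,image/png,*/*"
        string rawAccept = context.Request.Headers["Accept"];

        // ASP.NET also exposes the same information as a parsed array
        string[] accepts = context.Request.AcceptTypes;

        context.Response.ContentType = "text/plain";
        context.Response.Write("Accept: " + rawAccept + "\r\n");

        if (accepts != null)
        {
            foreach (string accept in accepts)
                context.Response.Write("client accepts: " + accept + "\r\n");
        }
    }

    public bool IsReusable
    {
        get { return true; }
    }
}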

Some ugly truths about content negotiation

When it comes to negotiating document types and formats, the Accept header is the string of information sent by clients to tell servers what the client prefers. The Mozilla family of browsers (Netscape and Firefox) do an excellent job of sending detailed format information with each request. For example, text/css, image/png, and text/html are all examples of Accept header information sent by Firefox when negotiating with a Web server.

But MSIE is pretty awful at this. In fact, for as far back as I can document (at least MSIE 4), MSIE has sent the same inadequate Accept header for *every single request* – no matter what the resource type (CSS, JavaScript, HTML document, image, etc.). Without going into the really nasty details, MSIE makes it very difficult for servers to make decisions on what to send to MSIE clients.

So folks who want to create hackable URIs that support c-neg *and* work with MSIE have to resort to some compromises and a few server hacks [sigh].

How do I support MSIE and still use server-side content negotiation?

Even though you cannot count on MSIE to give adequate information on content-types when requesting documents, you can modify your URIs slightly to give your Web server strong hints. The method I settled on is the same one used by the W3C.org site and many others. I decided to place certain document types in similar folders.

For example, all stylesheets (*.CSS) will go in a folder named /stylesheets/. All image files (*.PNG, *.GIF, *.BMP, etc.) will go in a folder named /images/. Client scripts (*.JS,*.VB) will go in /scripts/, etc. Now, when a client asks for a document, the server can use part of the name as a hint. For example, a request for http://cool.server.com/stylesheets/default will allow the server to return default.css (if it is available).

An even better example is the case of images. If the server gets a request for http://cool.server.com/images/logo, the server might look for logo.png and, if it exists, return that. If logo.png does not exist, the server might look for logo.jpg or logo.gif instead. Finally, if the current site uses only JPG files, but next year converts to all PNG format, all the URIs will still work just fine.

OK, so how do you implement c-neg for IIS?

My example of c-neg implementation for IIS is (admittedly) basic, but you should get the idea. Specifically, the implementation outlined here focuses only on standard Web browsers and ignores the details of supporting hand-held devices, etc.

First, I whip out my trusty ISAPI Rewrite tool to establish some rules for supporting stylesheets and image files. If you don't already use ISAPI Rewrite or some other rewriting utility, you can get a free version of ISAPI Rewrite from Helicon's web site.

Below are two rules I added to my httpd.ini file:

# route any css requests
RewriteRule (.*)/styles/(.*) $1/stylesheets/$2.css [I,CL,L]

# reroute any image requests
RewriteRule (.*)/images/(.*) /imagehandler.ashx?_file=$1/images/$2 [I,CL,L]

Note that the first rule simply adds .css to the end of any stylesheet request, mapping the /styles/ path in the URI to the /stylesheets/ folder on disk. Examples are:

http://cool.server.com/styles/default
http://cool.server.com/myapp/styles/main

The second rule is a bit trickier. Any request that has /images/ in the URI will be rerouted to a special handler that will look for the proper file and send that to the browser. I wrote the handler in C#.

Below is a snapshot of the main code loop for my imageHandler:

public void ProcessRequest(HttpContext context)
{
	string file = string.Empty;
	string mimefile = string.Empty;
	string rtnfile = string.Empty;
	string[] tails;
	string[] accepts;
	ContentTypeCollection ctcoll = new ContentTypeCollection();

	// get the file to find
	file = GetQueryItem(context,"_file");
	if (file.Length == 0)
		return;

	// get the list of tails and accept-types
	tails = GetConfigList("imagetails");
	accepts = context.Request.AcceptTypes;
			
	// get assoc tails for accept-types
	mimefile = GetConfigItem("mimefile");
	if (mimefile.Length != 0)
	{
		mimefile = context.Server.MapPath(mimefile);
		if (File.Exists(mimefile) == false)
			CreateMimeTypesFile(mimefile);

		FileStream fs = new FileStream(mimefile, FileMode.OpenOrCreate, FileAccess.Read);
		ctcoll = (ContentTypeCollection)Serialization.Deserialize(ctcoll, fs);
		fs.Close();
		fs = null;
	}

	// go through accept-types first
	for (int i = 0; i < accepts.Length; i++)
	{
		if (accepts[i].IndexOf("image")!=-1 && accepts[i].IndexOf("*") == -1)
		{
			string tail = ctcoll.GetExtension(accepts[i]);
			if (tail.Length != 0)
			{
				rtnfile = string.Format("{0}.{1}", file, tail);
				rtnfile = context.Server.MapPath(rtnfile);
				if (File.Exists(rtnfile))
				{
					SendFile(context, rtnfile, accepts[i]);
					return;
				}
			}
		}
	}

	// now go through the preferred tails
	for (int i = 0; i < tails.Length; i++)
	{
		rtnfile = string.Format("{0}.{1}", file, tails[i]);
		rtnfile = context.Server.MapPath(rtnfile);
		if (File.Exists(rtnfile))
		{
			SendFile(context, rtnfile, accepts[0]);
			return;
		}
	}

	// we failed!
	return;
}

There are a lot of loose ends in this code snippet, but you probably get the idea. By installing this handler at the root of my IIS Webs, I now have basic c-neg support for most standard browsers. And there is quite a bit more that can be done to improve the flexibility and power of this routine – it just takes a bit more coding [grin].
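
For reference, two of those loose ends (GetQueryItem and SendFile) can be as simple as the sketch below; a production handler would add error handling and proper cache headers.

// sketch only: pull a single value from the rewritten query string
private string GetQueryItem(HttpContext context, string key)
{
    string value = context.Request.QueryString[key];
    return (value == null ? string.Empty : value);
}

// sketch only: stream the selected file back with the negotiated content-type
private void SendFile(HttpContext context, string fileName, string contentType)
{
    context.Response.Clear();
    context.Response.ContentType = contentType;
    context.Response.WriteFile(fileName);
    context.Response.End();
}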

Summary

So, to build cool, long-lived URIs, you should use links that hide the technologies on the server. In the case of text documents, this can easily be done using a utility like ISAPI Rewrite. For images and other format-driven URIs, you will need to implement a server-side HTTP handler to work out the details of which format to send to the browser.

Implementing basic content negotiation is the first step toward creating solid URIs. My next step is to create a rational URI scheme that can live over a long period of time. More on that later.



Murray, KY INETA talk was fun

March 9, 2006

I had a great time traveling to Murray, KY to speak for WKDNUG on the campus of Murray State University last week. I delivered the Implementation Misfortunes talk. I really enjoy this talk since it touches on several aspects of implementing software solutions. Not just how things can go wrong (although that is the hook for the talk), but also on how just a bit of planning and creativity can forestall many common misfortunes that befall long-lived coding projects.

Along with the talk itself, the trip was very nice. It involved a one-hour flight to Nashville, TN followed by a two-hour drive through the country to Murray, KY. For some that might not sound like fun, but it really was! I got to drive past the Grand Ole Opry complex outside of Nashville. I also got a chance to drive through the Land Between the Lakes National Recreation Area. And the weather was excellent – Sunny and mild.

All-in-all, a great start to my 2006 speaking season. I’m looking forward to visiting CHADNUG in Chattanooga, TN on April 11th.



ISAPI Rewrite to the Rescue

February 27, 2006

I've been using ISAPI Rewrite for the last three years to improve the quality of web site URLs. It's become such an important part of my web work that it's one of the first things I tell my clients to add to their server toolkit. In fact, I continue to be amazed (and frustrated) that Microsoft has not included a powerful URL rewriting tool with every copy of Internet Information Server.

Now, I've seen lots of examples of using URL rewriters to hide query string arguments from search engines. I've also seen a few examples of how to use URL rewriters to route users to special sites based on language or browser type. But there are several other important reasons to be using URL rewriters in your web deployments.

Here are a couple of rewriter rules that I add to almost every public web site I am involved in.

Putting the WWW back in the World Wide Web

Most public web sites respond to both http://www.mydomain.com and mydomain.com. I've gotten into the habit of *not* typing the www. since I know the plain address will work just fine. It means the same to me. However, it turns out that most search engines (Google, Yahoo!, MSN, etc.) do not treat these two addresses the same. Most engines will track links and indexes to both locations. If you're focusing on your web site's ranking with these search indexes, having both addresses can be a problem. To fix this, I use a simple rewriter rule to automatically re-route browsers that present mydomain.com to http://www.mydomain.com. This allows lazy users (like me) to keep typing the short name, but still get the full name in reply.

Here's the rule I use with ISAPI Rewrite:


# force proper subdomain on all requests
RewriteCond %HTTP_HOST ^mydomain.com
RewriteRule ^/(.*) http://www.mydomain.com/$1 [RP,L]

Now, users will always see the full name. And keep in mind that some of the most important users of your web site are search engine spider bots!

The ZIPmouse Internet Directory is one of the companies I've worked with that has this rule in their rewriter set. Try typing http://zipmouse.com in your browser and see what happens.

Fixing the Missing Slash

One annoying problem with web sites is how they handle missing slashes in URLs. For example, look at this address:


http://www.mydomain.com/members

Usually, what users want is the default page in the members folder of the site. They just forgot about the trailing slash. I use the following rewrite rule to automatically add the trailing slash:


# fix missing slash on folders
RewriteCond Host: (.*)
RewriteRule ([^.?]+[^.?/]) http://$1$2/ [I,R]

Here's another example from the ZIPmouse site: http://www.zipmouse.com/city/seattle

Dropping the Default

Finally, here's a rule I really like to use for public sites. I'll present it first to give you a chance to think about it.


RewriteRule (.*)/default.html $1/ [I,RP,L]

We all know that typing just a folder will force the web server to return the default document. This rule does the opposite. It checks the URL for the registered default page for the site and strips the URL down to only the folder. A good rewriter file would probably do this for all the typical defaults registered on the server:


RewriteRule (.*)/default.htm $1/ [I,RP,L]
RewriteRule (.*)/default.asp $1/ [I,RP,L]
RewriteRule (.*)/default.aspx $1/ [I,RP,L]
RewriteRule (.*)/index.htm $1/ [I,RP,L]

The ZIPmouse directory uses home.html as the default page for their site. You can use the following URL to test the above rule.

http://www.zipmouse.com/shop/computers-and-internet/home.html

Summary

So there are three handy URL rewrite rules that can improve the look and feel of your web site's URLs. If you are not using a URL rewriter yet, I encourage you to start. You can download a free-for-non-commercial-use version of ISAPI Rewrite for Microsoft servers from Helicon's website. There are other rewriters out there, too.



Hiking the Golden Gate Bridge

February 25, 2006


This past fall, I got a chance to spend several days relaxing in one of my favorite US cities – San Francisco. One of the fun things was to hike across the Golden Gate Bridge. I took public transportation from my hotel to the foot of the bridge and was able to hike across to a visitor center, enjoy the view, and return. Couple hours, lots of fun.

Next time you’re in SanFran, take an afternoon to enjoy the view from the bridge.

my flickr profile


Military Misfortunes Author Interview

February 24, 2006
I'm in the final prep for my INETA talk for WKDNUG at Murray, KY next week. While trolling the Web for references, I found an NPR interview with Eliot Cohen, one of the authors of Military Misfortunes. This book is the 'jumping off point' for my talk titled 'Implementation Misfortunes, or Why Some Well-Designed IT Projects Fail.'
 
Even though the book was published in 1991, the material is timeless. Also, like so much that comes from historical research at military colleges, the key points are quite applicable to business.
 

My New Formula for Web 2.0

February 20, 2006

I've been working on several fronts to get a better handle on Web 2.0 and related items. As a result, I've developed a 'formula' – a kind of shorthand mission statement – that describes what I think Web 2.0 means for the 'geeks' among us who need to implement Web 2.0 solutions. And that formula is:

(XHTML+CSS2) * JS
------------------ = Web 2.0
XML+XSLT+RDBMS

Now for the explanation…

Like any web solution approach, there are two perspectives: Server and Client. In my formula, the client perspective focuses on markup, layout, and scripting. The server perspective focuses on XML (marking up data), XSLT (transforming that data into a usable form), and RDBMS (storing the data that is fed into XML documents for XSLT transformation). More about my thinking follows.

XHTML

Any serious attempt to build Web 2.0 solutions should start with fully validated XHTML. No slacking on this one. We need to start from a clean slate. Along with XML-validated HTML markup, we need to drop the habit of using tables to control layout. We also need to stop adding font/color and other style information directly to the markup. That leads to the next item in the formula.

CSS2

CSS2 achieved Recommendation level at W3C in 1998. Yep, 1998. Yet some browsers still don’t fully support CSS2 features. In addition, many high traffic web sites still haven’t adopted CSS2 as the default standard for controlling layout and style for (x)HTML documents. The really depressing news is that the W3C is already working on CSS3! It’s time to bite the bullet and commit to using CSS2 as the default layout and styling service for online documents on the web.

JS

JS means JavaScript. Ecma International (the group formerly known as ECMA) is responsible for maintaining and advancing the JavaScript language. The current version (1.5), also known as ECMAScript, was approved in 1999. I must admit, I thought JavaScript was a fading dinosaur. But, with the rise of Firefox and the XUL engine that drives it, JavaScript has continued to flourish. Now that Ajax is becoming a key component of leading-edge web solutions, there seems little reason to consider JavaScript to be on a downward slide.

Lots can be said on the subject of making good use of JavaScript, but for now, I will point out that only recently have I seen good examples of object-oriented approaches to building JavaScript solutions. And most of those have come from folks already drinking the Web 2.0 punch. More emphasis needs to be placed on building clean, powerful JS objects and using them to animate the user interface.

BTW – Ecma International started work on the next version of JavaScript (referred to as ECMAScript for XML) in 2004. No telling how long it will take before we see that as a common scripting option in browsers.

XML

Not much needs to be said here except that, to my mind, all data should be presented in XML form. Regardless of where it is used or how it is stored, data shipped around the web should be annotated – marked up. Most XML tutorials focus on XML data stored as physical files. This makes for easy tutorials, but not-so-good solution implementations. In fact, valuable data will almost always be stored in databases of some kind, usually relational. But that's another element (see below).

Again, Ajax solutions are already taking advantage of this idea by using the XMLHttpRequest object to pull XML data from servers into client browsers for manipulation. More of this needs to be done – including on the servers themselves.
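
On the server side, the simplest move in this direction is an HTTP handler that emits XML directly, ready for an XMLHttpRequest call to consume. Below is a rough sketch; the handler name and element names are made up for illustration.

using System.Web;
using System.Xml;

// rough sketch of an endpoint (e.g. data.ashx) that returns marked-up data
public class XmlDataHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        context.Response.ContentType = "text/xml";

        XmlTextWriter writer = new XmlTextWriter(context.Response.Output);
        writer.WriteStartDocument();
        writer.WriteStartElement("items");

        // in a real solution this data would come from the RDBMS (see below)
        writer.WriteStartElement("item");
        writer.WriteAttributeString("key", "item1");
        writer.WriteString("this is item one");
        writer.WriteEndElement();

        writer.WriteEndElement();
        writer.WriteEndDocument();
        writer.Flush();
    }

    public bool IsReusable
    {
        get { return true; }
    }
}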

XSLT

I've used XSLT as part of the formula, but that's a bit misleading. In fact, XSLT is just one of the related technologies I consider crucial for dealing with XML data. XSLT is needed to transform XML documents into usable forms. XPath is needed as a way to filter and modify the XML data. Finally, XSL-FO can be used to help format the output. For now, I want to concentrate on standard XSLT to produce XHTML; although XSL-FO was originally conceived as a way to produce XHTML, up to now it has become synonymous with creating PDF documents from XML.

The point here is that XML data requires transformation, and XSLT should be the key to solving that problem. Even though XSL 1.0 reached Recommendation status in 2001, I still see way too many examples of XML DOM-grepping to pull out needed data and present it to users. Some of this is due to limitations (i.e., client browsers with poor or nonexistent XPath support), but much of it is also due to just plain not getting on board with the technology. It is important to commit to using declarative tools like XSLT and XPath to transform data effectively and efficiently, both on the server and the client.
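
As a small, concrete example of committing to that approach, here is a sketch that uses XslCompiledTransform (new in .NET 2.0; on 1.x the older XslTransform class plays the same role) to turn an XML document into XHTML. The file names are hypothetical.

using System.Xml;
using System.Xml.XPath;
using System.Xml.Xsl;

class TransformSketch
{
    static void Main()
    {
        // hypothetical files: any source document plus a stylesheet that emits XHTML
        XslCompiledTransform xslt = new XslCompiledTransform();
        xslt.Load("catalog-to-xhtml.xslt");

        XPathDocument input = new XPathDocument("catalog.xml");

        XmlTextWriter output = new XmlTextWriter("catalog.html", null);
        xslt.Transform(input, output);
        output.Close();
    }
}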

RDBMS

Maybe some are surprised to see an 'ancient' term like RDBMS in a document about building Web 2.0 solutions. But the truth is, most important data is, and will be for the near future, stored in relational database systems. Sure, there are some object-oriented, even hierarchically-oriented, data storage systems in use today. The disk file system is probably the best-known hierarchical model. However, we can't deny that businesses and even individuals understand and use relational models to store information. And this is a good thing.

At the same time, we need to start requiring the RDBMS model to ‘step it up a notch’ and start supporting the XML+XSLT approach to shipping and presenting data. Most of the big RDBMS tools today support presenting queries as XML output. And some have decent tools for accepting XML data as input for inserts, updates, and other RDBMS tasks. It’s time we all started taking advantage of these features and began demanding more of our RDBMS vendors. For now, we need to commit to always getting our data requests in XML form and always sending XML documents as part of our data update tasks.
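
For example, SQL Server can hand query results back already marked up. The sketch below (connection string and table are hypothetical) uses a FOR XML AUTO query and SqlCommand.ExecuteXmlReader to read each row as an XML element:

using System;
using System.Data.SqlClient;
using System.Xml;

class XmlQuerySketch
{
    static void Main()
    {
        // hypothetical connection string and table; adjust for your own server
        string connString = "server=(local);database=Northwind;Integrated Security=SSPI";

        using (SqlConnection conn = new SqlConnection(connString))
        {
            // FOR XML AUTO asks SQL Server to return each row as an XML element
            SqlCommand cmd = new SqlCommand(
                "SELECT CustomerID, CompanyName FROM Customers FOR XML AUTO", conn);

            conn.Open();
            XmlReader reader = cmd.ExecuteXmlReader();

            // each row arrives as <Customers CustomerID="..." CompanyName="..." />
            while (reader.Read())
            {
                if (reader.NodeType == XmlNodeType.Element && reader.Name == "Customers")
                    Console.WriteLine("{0} = {1}",
                        reader.GetAttribute("CustomerID"),
                        reader.GetAttribute("CompanyName"));
            }
            reader.Close();
        }
    }
}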

So What?

So, what happens when you start using XSLT to transform XML data stored in an RDBMS, and then use XHTML and CSS2 to build solid user interfaces to access that data, *and* use JavaScript 1.5 to animate those interfaces? You have Web 2.0! This 'formula' works no matter what technology or platform you are working with. All these standards are open. None of them assumes an OS or proprietary service layer. Of course, none of this is new, right? The technologies and standards have been around for many years. There are already lots of folks doing some parts of this – a few doing it all.

But – to be blunt – *I’m* not doing all this yet. And I should be. I would suspect there are many more out there not yet committed to this kind of formula on a fundamental level. And I would guess some, if not most, of them would like to be doing it, too. That’s what this article is all about.

Over the next several weeks and months, I'll be working to build a basic infrastructure to support this formula. This will include a server-side runtime built to present XML data from an RDBMS, transformed via XSLT. It will also include XHTML markup documents modified via CSS2 and animated by JavaScript. In the process, I hope to show how small, but meaningful, changes in the way we think about and implement solutions can have a big impact on the final results.



WKDNUG Selects Implementation Misfortunes Talk

February 20, 2006

I received notice this past week that the folks at WKDNUG in Murray, KY have selected my new "Implementation Misfortunes" talk for my visit on February 28th, 2006. This talk is loosely based on the book "Military Misfortunes" by Cohen and Gooch. It should be a lively talk.

I am looking forward to visiting Murray, KY. While I've been in the area a few times for vacations, this will be the first time I will spend time 'working' in Murray. My travel schedule will be interesting, too. It turns out Murray is about two hours from any sizeable airport. Kentucky's own geographical oddity, I guess. Anyway, I'll fly into Nashville, TN, then rent a car and drive two hours northwest to Murray. Should be a real 'trip.'
