Aztec IT Services: IT blog
Useful information for IT people.
written by Gerry Loiacono

This blog has moved

Wednesday, April 28, 2010



This blog is now located at http://aztec-it.blogspot.com/.

For feed subscribers, please update your feed subscriptions to
http://aztec-it.blogspot.com/feeds/posts/default.

Drive Image Backup Software: An Evaluation

Monday, November 23, 2009


In the last few months I have searched for and evaluated a number of hard drive backup software programs. My goal was to find a dependable, low-priced system for backing up an entire drive, so that I could either browse the image (to extract a couple of files as needed) or restore it to fix a drive problem or transfer a drive’s data. I never intended to evaluate a whole series of programs, but problems and limitations encountered with various products forced me to try others. Eventually, I discovered a program (Macrium Reflect Free) that is free, does everything I need it to do, and has an easy-to-use interface. But the path to Macrium led me across some alternatives. Below are my evaluations of Acronis True Image, DriveImage XML, and Macrium Reflect Free.

Acronis TrueImage

Initially, I chose Acronis True Image (version 9) based on favourable web reviews. Acronis (http://www.acronis.com/) is very speedy. It can image 80 GB of data in about 40 minutes, and performs compression that shrinks the image to around 50 GB. Acronis creates a single-file drive image that can be browsed from within Windows Explorer (provided you have Acronis installed on the computer). Acronis worked well for me across three computers running Windows XP Professional and Windows Vista Home. I never performed a full drive restore with Acronis, but I did restore single files and folders as needed with no problems.

Acronis had one annoying bug (which should have warned me that more was to come): although I would specify an external drive as the backup location, when I ran the backup, Acronis would often report that the backup location was invalid (even though it wasn’t), forcing me to input the location again. When software behaves this way, your confidence in it starts to erode (not a good feeling when you are talking about software you are trusting to back up, and possibly restore, every file on your drive).

Acronis was also running two services (TrueImageMonitor and TimeOutMonitor) that were slowing my computer down and stopping me from unhooking external drives. I disabled them, and this had no effect on what I was using Acronis for (full image backups), so I wish I had been given the choice about running them.

Still, I stuck with Acronis, and when a client also needed backup software, I purchased and downloaded Acronis online (it had now moved up to version 10). I installed it on the client’s computer, rebooted as required, and then… they could no longer access their network drives! The message "not enough server storage is available to process this command" displayed when they tried to connect to their required network drive. A quick web search for this message linked it to Acronis version 10; when I uninstalled it, network access worked again. Needless to say, this was a very serious bug. Still, needing to get the client’s drive backed up, I persevered and installed Acronis version 9. This installed and ran, but failed as it neared the end of writing the backup file to an external drive. I tried again and it failed again. I gave up, used Norton 360’s backup to back up files (it doesn’t do a full drive backup, which is what I wanted) and vowed to find a better solution.

I would have loved to stick with Acronis and save myself some time, but the bugs and anomalies finally made switching necessary.

DriveImage XML

DriveImage XML (http://www.runtime.org/) was my second test. I used the handy free version that they make available for home/personal use. Like Acronis, DriveImage XML creates a browsable image, with the added benefit that the drive file structure is saved as XML, making it accessible to other programs. However, as you will see, I lost patience before I got a chance to test out this feature.

I ran DriveImage XML on my Vista system with 60 GB of data. The start was not good. Immediately, the program told me I had to weaken security by disabling UAC (User Account Control). This required a reboot. The program then ran, but Vista was not happy, sporting an ominous red icon in the Task Bar and prompting me to enable UAC again. This did not give me the greatest of confidence.

Finally, I started backing up. To put this in perspective, Acronis can back up this 60 GB of data in around 30 minutes. DriveImage XML ran its quaint, old-fashioned interface and chugged away, reporting wildly varying estimates of how long it would take. For a while, it reported it could back up the drive in one hour. Although this was slower than Acronis, I could have lived with it. But as it continued, it adjusted the estimate to two hours. And it kept chugging away. After two hours, it finally reported that it was finished – and then started the slow task of writing its XML file (with no progress bar to even give a clue how long this was going to take). Frustrated, I realized I could not work with a program that backs up this slowly. I cancelled the program and started searching for alternatives. But first I took a look at the size of the drive image that DriveImage XML had created. Not only was it slow, it was also big: it had written the image with no compression at all. Still, there are those who love this program, so your experience may vary – especially if you’re not in a big hurry to get a backup done.

Macrium Reflect Free (Aztec Editor’s Choice)

Macrium Reflect Free (http://www.macrium.com/) has a free version (hence the name) with some limitations (it only does a full drive backup, while the paid version can also do folder and incremental backups). I only need to do full images, so the free version was fine for me. It installed and ran on Vista without requesting that I disable any security features. Its slick, modern interface has a simple wizard that led me through a full backup and saved the settings I chose for future use. And it is lightning fast: Macrium backed up that 60 GB of data in 22 minutes! Afterward, I successfully used Windows Explorer to browse the image it created and restore a couple of files as a test. Finally, I installed and ran Macrium on my two other computers – again, it ran quickly and perfectly.

I can fully endorse Macrium Reflect Free as the best of the tested programs if you need to do a fast, browsable backup of an entire hard drive.


Monday, August 18, 2008


UltraWebTree and DataSet Relations: Building a tree on the fly

We're heavily into .NET now, using Visual Studio 2008 along with the Infragistics NetAdvantage controls. The NetAdvantage controls are slick when used in their basic configuration, but if you want to do anything tricky, there is a paucity of examples. I needed to create an UltraWebTree on the fly, to take full advantage of the ability to format each node as required. The source is a set of database tables. I pieced together this subroutine from various bits of examples I found. In this example, we are using DataSet Relations to set up the hierarchical structure within the DataSet. This makes it easy to then write the UltraWebTree from the DataSet:


' Requires: Imports System.Data, Imports System.Data.SqlClient,
' and Imports Infragistics.WebUI.UltraWebNavigator (for the Node class).
Private Sub BuildTree()
    Dim rootNode As Node
    Dim childNode As Node
    Dim babyNode As Node

    ' Connection string is defined in web.config.
    Dim ConnectionString As String = System.Configuration.ConfigurationManager.ConnectionStrings.Item("someWebConfigDefinedConnectionString").ConnectionString
    Dim obConn As New SqlConnection(ConnectionString)

    ' One query per level of the tree.
    Dim SQL1 As String = "SELECT Level1ID, Level1Description FROM tblLevel1"
    Dim SQL2 As String = "SELECT Level1ID, Level2ID, Level2Description FROM tblLevel2"
    Dim SQL3 As String = "SELECT Level1ID, Level2ID, Level3ID, Level3Description FROM tblLevel3"
    Dim obCmd1 As New SqlCommand(SQL1, obConn)
    Dim obCmd2 As New SqlCommand(SQL2, obConn)
    Dim obCmd3 As New SqlCommand(SQL3, obConn)
    Dim DataAdapterElement As New SqlDataAdapter(obCmd1)
    Dim DataAdapterLevel As New SqlDataAdapter(obCmd2)
    Dim DataAdapterCriteria As New SqlDataAdapter(obCmd3)
    Dim DataSetAuditTree As New DataSet

    Try
        ' Fill one DataSet table per query.
        DataAdapterElement.Fill(DataSetAuditTree, "Level1")
        DataAdapterLevel.Fill(DataSetAuditTree, "Level2")
        DataAdapterCriteria.Fill(DataSetAuditTree, "Level3")

        ' Relate Level1 to Level2, and Level2 to Level3, to create the hierarchy.
        Try
            DataSetAuditTree.Relations.Add("Level1Relation", _
                DataSetAuditTree.Tables("Level1").Columns("Level1ID"), _
                DataSetAuditTree.Tables("Level2").Columns("Level1ID"))
        Catch x As Exception
            Me.ErrorLabel.Text = "UltraWebTree is unable to display data due to the following exception message: " + x.Message
        End Try
        Try
            DataSetAuditTree.Relations.Add("Level2Relation", _
                DataSetAuditTree.Tables("Level2").Columns("Level2ID"), _
                DataSetAuditTree.Tables("Level3").Columns("Level2ID"))
        Catch x As Exception
            Me.ErrorLabel.Text = "UltraWebTree is unable to display data due to the following exception message: " + x.Message
        End Try
    Catch x As Exception
        Me.ErrorLabel.Text = "UltraWebTree is unable to display data due to the following exception message: " + x.Message
    End Try

    ' Walk the DataSet, following the relations, and build the tree node by node.
    For Each r As DataRow In DataSetAuditTree.Tables("Level1").Rows
        rootNode = New Node
        rootNode.Text = r("Level1ID").ToString() + " - " + r("Level1Description").ToString()
        rootNode.ToolTip = "ToolTip: Root Node " + r("Level1ID").ToString()
        For Each l As DataRow In r.GetChildRows("Level1Relation")
            childNode = New Node
            childNode.Text = l("Level2ID").ToString() + " - " + l("Level2Description").ToString()
            childNode.ToolTip = "ToolTip: " + l("Level2ID").ToString() + " - " + l("Level2Description").ToString()
            rootNode.Nodes.Add(childNode)
            For Each c As DataRow In l.GetChildRows("Level2Relation")
                babyNode = New Node
                babyNode.Text = c("Level3ID").ToString() + " - " + c("Level3Description").ToString()
                babyNode.ToolTip = "ToolTip: " + c("Level3ID").ToString() + " - " + c("Level3Description").ToString()
                childNode.Nodes.Add(babyNode)
            Next
        Next
        Me.UltraWebTree1.Nodes.Add(rootNode)
    Next
End Sub


Friday, March 21, 2008


Exclamation Points (!) in CDO Email Messages: Solved!


Gerry Loiacono posts information on how to solve the seemingly random insertion of exclamation points when using CDO to send email

While sending automated emails using ASP and CDO, I discovered that an exclamation point and a space were being inserted, seemingly at random, into the body of the message. I searched around the web and found quite a few snippets of information that led to dead ends. What did solve it was a reference to line length: apparently, somewhere along the way, a mail server was inserting breaks into long lines. By inserting carriage return/line feeds (vbCrLf) after each paragraph and line break just before sending the body to a stored procedure, the problem was solved.
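For illustration, here is a minimal classic ASP/VBScript sketch of the fix. The variable and procedure names (strBody, SendEmailBody) are placeholders, not the actual code from the application; the point is simply that appending vbCrLf after each paragraph and line break keeps any single physical line short enough that the mail server no longer wraps it (and inserts the stray "! "):

<%
Dim strBody
strBody = "<p>First paragraph of the message...</p>" & _
          "<p>Second paragraph of the message...</p><br>"

' Insert a carriage return/line feed after each paragraph and line break
' so that no single line is long enough to be wrapped by the mail server.
strBody = Replace(strBody, "</p>", "</p>" & vbCrLf)
strBody = Replace(strBody, "<br>", "<br>" & vbCrLf)

' SendEmailBody is a placeholder for however the body is actually handed off
' (in this case, to the stored procedure that sends the email via CDO).
Call SendEmailBody(strBody)
%>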


Friday, September 28, 2007


Internet Explorer's Helping Hands (and why we don't like them)


Gerry Loiacono talks about why Internet Explorer will probably never be able to conform to the W3C specifications

Recently I encountered another incident of what I like to call, "Thanks but no thanks."


I was programming in Javascript, CSS, and HTML, and for a time, was testing my web site solely in Internet Explorer 7. Everything looked good. But when I tried the same site in Mozilla Firefox, virtually every element was sitting in the wrong place and using the wrong font. Having seen this result before, I knew where to look straightaway: my Cascading Style Sheet (.CSS) file.


Sure enough, on the second line of this file, I had used a left parenthesis '(' instead of the required left curly bracket '{'. Internet Explorer 7 (and earlier versions of Internet Explorer) is smart enough to assume that I really meant to use a curly bracket: it fixes the code automatically and processes the remainder of the CSS file. Firefox, however, treats this as an error and does not process the rule in error or anything below it, which is why the page looked so wrong.


Now, at first glance, you might think that IE's behavior is so much nicer. True, the page looked fine in IE. But the problem is that IE's behavior is outside the World Wide Web Consortium (W3C) specification. That means that although the page will work in IE, it will not work in virtually any other browser. It would be so much better if IE simply conformed to the spec, rather than trying to create a new one. Developers would rather know straight off how their page will look everywhere, rather than getting a false positive.


At one time, when IE was so dominant that it was possible to ignore other browsers, this wasn't a big issue. But nowadays, Firefox and others make up a hefty chunk (34% as of July 2007) of the browser market.


It was interesting to see that this situation was still occurring in IE7, even though Microsoft said they were going to conform IE7 to the W3C spec. In reality, they cannot - and here's why. If they took out their 'clever parenthesis fixing' (and the myriad of other special-case corrections like it), millions of web pages that currently work in IE 6 and earlier would stop working. That could be disastrous for developers who were targeting IE only (e.g., in-house applications where they were sure that IE was the only browser used).


So until some radical change happens, developers would be wise to test in both IE and Firefox - and to double-check their parentheses!


Saturday, March 03, 2007


Image Maps and Event Handlers

When tackling a new online injury reporting system, my first challenge was to present a generic representation of the human body, then allow a user to click anywhere on the image to place one or more markers indicating the injured areas. Clicking on a previously placed marker would also have to remove it. At first, I toyed with the idea of making the body image a different, solid color, and then detecting the pixel color below the cursor when the mouse was clicked. But I soon found out that there is nothing in the DOM to detect the pixel color. I also considered programming this functionality in Macromedia Flash - I had already done something similar, and I had no doubt that Flash could do it. But I didn't want to require the client to load a plug-in.

Then I remembered image maps. Image maps are built into HTML and provide a mechanism for defining clickable areas of a graphic. Because the area can be any irregular shape (a polygon), this solved my first problem. I created a clickable image map using the outline of the body as the polygon.

Creating the polygon could have been tedious or impossible without a handy little freeware program called Map This. Written in 1995 (!) by Todd C. Wilson, it still runs just fine on Windows XP. Map This allows you to load a graphic and trace a line around any shape to create a polygon area for an image map. (By the way, the great Todd C. Wilson is still around, and you can visit his website at www.NOPcode.com).
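For reference, the wiring of the image map looks something like the sketch below. The file names and coordinates are placeholders (the real traced outline has far more points); the key pieces are the usemap attribute on the image and the onclick handler on the polygon area:

<img src="body.gif" alt="Body diagram" usemap="#bodymap" />
<map name="bodymap">
  <area shape="poly"
        coords="120,10, 160,40, 170,200, 130,380, 110,380, 70,200, 80,40"
        href="#" onclick="getPosition(event); return false;"
        alt="Clickable body outline" />
</map>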

A polygonal image map solved my first challenge. Now I needed a way to find out the X and Y positions where the click occurred. First, I used onclick="getPosition(event);" on the polygon area to call my function. Then, I got the position with some cross-browser event handler code:

function getPosition(e) {
  // Cross-browser event handling: IE exposes window.event, other browsers pass the event in.
  var e = (window.event) ? window.event : e;
  var cursor = {x: 0, y: 0};
  if (e.pageX || e.pageY) {
    // Firefox and other W3C browsers report page coordinates directly.
    cursor.x = e.pageX;
    cursor.y = e.pageY;
  }
  else {
    // IE reports client coordinates; add the scroll offset to get page coordinates.
    var de = document.documentElement;
    var b = document.body;
    cursor.x = e.clientX + (de.scrollLeft || b.scrollLeft) - (de.clientLeft || 0);
    cursor.y = e.clientY + (de.scrollTop || b.scrollTop) - (de.clientTop || 0);
  }
  CreateMarker(cursor.x, cursor.y);
}

Now...how to place a little red dot where the user clicked? For this, I created a hidden <img> tag on the form:

<img id="cloneMarker" src="clonemarker.gif" style="position:absolute; visibility:hidden" onclick="RemoveMarker(this);">

The CreateMarker function clones the hidden <img> and places it where the mouse was clicked:

function CreateMarker(xpos, ypos) {
  // Clone the hidden marker image and give the copy a unique id based on its position.
  var clone_image = document.getElementById('cloneMarker');
  var new_image = document.getElementsByTagName("body")[0].appendChild(clone_image.cloneNode(true));
  new_image.id = "m" + xpos + "-" + ypos;
  // Offset by a few pixels so the dot is centred on the click point.
  new_image.style.left = (parseFloat(xpos) - 4) + "px";
  new_image.style.top = (parseFloat(ypos) - 4) + "px";
  new_image.style.visibility = 'visible';
}

You might have noticed that an onclick="RemoveMarker(this);" is placed within each <img>. This function simply sets the clicked marker back to hidden.
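The post doesn't include the body of RemoveMarker, but a minimal sketch of it would look something like this:

function RemoveMarker(marker) {
  // Hide the clicked marker again; it stays in the DOM but is no longer visible.
  marker.style.visibility = 'hidden';
}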

And that's all! A series of interesting challenges, all solved through the elegant power of the DOM!


Friday, March 02, 2007


Keep Your Passwords Safe

If you are an average Internet user, you probably have around 10 to 20 user names and passwords for various internet sites and services. If you’re an IT professional, you could have hundreds. Your method for storing and remembering these passwords could very well be insecure and inefficient. What you need is a secure database where you can store all your user name and password information, access it easily, and lock it away from prying eyes and fingers. Enter KeePass, a free Open Source program available from SourceForge.

KeePass calls itself a password safe. It's a database designed specifically for storing entries with a user name, password, and URL. A notes field and the ability to attach documents add flexibility. A master password or master key disk prevents unauthorized access to your passwords.

KeePass is easy to use and has a polished look, but unlike many other professionally produced Open Source applications, it still feels very much like a work in progress. That is not fully acceptable for a program entrusted with passwords. One example is the strangely limiting implementation of the Import from CSV feature. Since all of my current sign-on information was stored in one big, somewhat unstructured text document, it made sense for me to format this document and import it, rather than painstakingly copy and paste each line of information for over 70 different items. With no documentation attached to the product, I poked around for a fair while to get any idea of what kind of picky CSV format KeePass would accept. Even then, my attempts to import were either met with no success (but no error message either), or with an error message that was not specific enough to identify the problem. Eventually, an article in the forum helped me get the precise CSV format to work. Needless to say, until I gain some confidence in KeePass's ability to safely store my password database, I will continue to use my old system in tandem.

My uneasy feelings almost completely vanished when I started using KeePass while visiting websites requiring user names and passwords. KeePass groups entries in user-created categories, and also has a full database search, so it's easy to find the entry you need. Then you just use Control-key combos to quickly copy and paste the user name and password from KeePass to the form. Voila! No more typing of user names and passwords, ever! In fact, the password always remains hidden, so no one looking over your shoulder can see it. And since you don't have to remember - or type - the password, there is no reason why your passwords cannot be longer, more complex, and therefore more secure.

KeePass is available for free download from SourceForge.



Spyware Protection: Our Recommendations

What program or set of programs do you need to safely protect your computer from spyware? I thought it a little suspicious that Spyware Stormer, a program recommended to me as a reputable spyware/adware remover, tried to convince me to buy it by popping up browser windows made to look like Windows system dialogs, and by presenting fake scans of my system 'already in progress' when I had not yet downloaded or installed it. Why would a program that proposes to remove spyware and adware use the very techniques it is supposed to prevent?

Further research revealed that Spyware Stormer is considered a suspect spyware removal tool. It appears to be a knockoff of the reputable program Adaware. Some screens and internal file names are identical, so it’s likely that the 'programmer' of Stormer reverse-engineered Adaware’s code and now sells it as his own. So I ruled out Stormer, and set out to load my toolbox with some reputable programs I could use to safely remove spyware.

Two types of programs are needed for spyware removal and patrol. The first type scans your system and removes any existing spyware. The second type is for prevention. It stops spyware from getting on your computer in the first place.

For the first type, I chose Spybot Search and Destroy. This is a free program written by Patrick Kolla, who started it as a personal project and has expanded it as needs have grown. The program has been recommended by PC World.

For the second type, I use two free programs, SpywareBlaster and SpywareGuard, both published by Javacool software. They provide two different areas of protection. One runs in real-time and the other makes security changes to your browser to provide a safer internet environment.



The Big Brush Strokes of Successful Search Engine Optimization

There's little point in building a beautiful, functional website if no one ever sees it. Some websites are self-promoting; traffic will arrive. Other sites will never be visited unless a pro-active effort is made to establish links from other sites and to get ranked near the top in the SERPs (Search Engine Results Pages). As of April 2005, the leading search engine is Google, with Yahoo in second place and MSN third. These three search engines should be specifically targeted and catered to in all areas of SEO (Search Engine Optimisation).

SEO is all about doing the little things in the big areas. This is merely a quick introduction to some of the major areas of focus, to help you get started with SEO.

Here's a look at the main areas that need to be addressed to achieve a high search engine ranking:

1. Content is King
Content is still the most important factor in SE rankings. Ensure your content is focused on one major topic, add new content often, have heaps of content, and make it keyword-rich (without overstuffing it with keywords, which could be considered spam).

2. Use <TITLE> and <META> tags
The <TITLE> tag is used extensively to help determine page rank, and it is usually what displays as the headline of your listing in SERPs. The title should be an accurate, keyword-rich phrase of about 7 to 10 words. The <META> keyword and description tags are no longer used by many search engines, but they are still a good way to keep an on-page record of keywords and description.

3. Clean Up That HTML
Ensure that your DOCTYPE statement is valid. Validate your HTML using one of the free online services. Ensure there are no broken links. Use standard heading tags (H1 through H6) for headings, and make them keyword-rich. Use keyword-rich ALT attributes for images. Use descriptive TITLE attributes for text links.

4. Don't Use Tricks
Don't hide keywords. Don't purchase two domains to host related content and then cross-link them to improve SE ranking. Search engines keep getting better and better at weeding out attempts to boost SE ranking with anything other than actual good content. Don't use anything that could be considered a trick, and keep up to date on the always-expanding list of what the major search engines consider to be 'tricks'.

Finally... search engines are constantly changing the algorithms they use to evaluate and rank websites, and to weed out spam. As of Feb 2007, Google is still the industry leader in providing the most discerning algorithms and therefore the highest quality SERPs. Everyone wants a high ranking in Google, but patience is needed, even if the new website you designed has been fully optimized for your chosen keywords. Google takes 2 months to crawl a submitted site, and up to 8 months to fully weight any incoming links to the site. Initially, it is vital that you get some powerful, real incoming links, to further increase the chance that Google will crawl the site as a result of one of those links.



RSS for Publishers, Readers, and Developers

RSS stands for Rich Site Summary... or Remote Site Syndication... or Really Simple Syndication. The confusion points to conflict, and stems from an early divergence in a specification developed by different groups. We are left with a range of standards. While this poses some challenges for those writing tools to read RSS feeds, it does not detract from the excitement surrounding this new development. RSS is a set of formatting rules used to encode web information. Originally designed for summaries from news sites, it is now used for a wide variety of information types: website updates, book addenda, blog feeds, forums, and so on. Once publishers have converted their information to one of the half-dozen or so RSS formats (all of them XML-based), they can then syndicate their content.

Here's how RSS content gets from publishers to readers: website publishers use one or more tools to write or convert their content to RSS format. They then make people aware of the URL where their RSS feed is published. Readers use news aggregator programs to search for and assemble a list of RSS feeds in their areas of interest. The aggregator automatically deciphers the feeds and updates them for the reader whenever the publisher posts an update. The third party is the website developer, who can also add RSS feeds to their website.

There are benefits for each player. Publishers get an easy way to increase the distribution of their content, and they are assured that the readers want to be reading it. Readers get an easy way to keep up to date on the items that interest them. Website developers who incorporate RSS feeds get instant relevant content, updated frequently. This is just what the search engines like to see when awarding high rankings.

Publishers looking to feed RSS content from sites that already have a high volume of visitors can quickly buy into this new technology and increase the distribution of their content. Publishers from marginal sites are once again faced with a problem similar to the one posed by trying to gain search engine rankings: they need to find a way to let people know their RSS feed is available. In some ways, the vertical nature of RSS means that it should be a little easier to get included in a directory of related items. We'll talk more about this in future articles.

Readers wanting to browse their RSS feeds of choice can use one of the many RSS Reader programs available. Many are free, including the aptly named RSS Reader.

Website developers will want to add relevant RSS feeds to their site. Ideally, a server-side script (for example, ASP or PHP) should be used so that the RSS feed is delivered to the page as straight HTML. This will gain the most benefit for search engine ranking; the RSS content becomes your content. This approach also benefits the RSS publisher, as the links back to the publisher's site will be 'live' links on the developer's site, again much valued by the search engines. There are a number of free scripts available, including the KattenWeb ASP script and the CARP PHP script.

For a live RSS example (which uses the CARP PHP script behind the scenes) check out Aztec IT's implementation of the Eldis Gender Newsfeed on the Gender Agenda website.

Website developers who do not have server-side technologies available for their website can use JavaScript to integrate RSS feeds onto the page. To visitors, a JavaScript-assembled page will look identical to a server-side-assembled page. There is a caveat, however. Because of the way JavaScript assembles pages in memory, search engine crawlers will not see the RSS feed on the page. This means that your page will not gain any ranking points. So whenever possible, use a server-side solution and present the RSS feed as plain HTML on the page.

RSS is a fledgling development and the impact it will have is unknown. Imagine the implications and possibilities if all of the information on the internet were one virtual database. RSS is a small step in that direction. Will it evolve into something powerful and of great benefit to web users? Or will it be weakened by competing standards or polluted by marketers? We'll just have to keep reading those feeds until we find out the answer.



XHTML: The Better Choice For Web Page Development

HTML (Hypertext Markup Language) and its various versions have long been the standard language for creating web pages for a great number of browsers. But the anarchy that helped evolve the power behind HTML also created a very loose system for interpreting and validating it. For the most part, browsers that read your HTML page try to be forgiving and will ignore a missing end tag for a paragraph or a list, mixed-case tag names (like <SELECT>, <Select>, and <select>), and attribute values enclosed in double quotes, single quotes, or no quotes at all. The advantage of this loose approach is that your page displays without errors. The disadvantages are many:

  • Your page may not display the way you intended at all, since the browser has to make arbitrary decisions about your coding intentions
  • The 'loose' approach can lead to very sloppy, inconsistent HTML writing, since it may not affect the output
  • Stricter browsers will throw errors when encountering incorrectly formatted HTML. This is even more likely to occur on mobile devices, where the small browser footprint does not include the overhead to correct poorly formatted code

The other big negative about poorly formatted HTML is that search engine spiders may turn away, or report back inaccurate information, if they encounter a formatting error. Anything that can affect a site's search engine ranking is bound to make web site developers sit up and take notice, and so it should.

So how can you ensure that your site is properly formatted, with no errors to cause problems? You can convert to using XHTML.

XHTML (eXtensible Hypertext Markup Language) takes HTML version 4.01 and closes up some of the weak formatting rules allowed in HTML. XHTML can be described as a 'stricter' version of HTML. It has been in existence since 1999 and has been a stable standard since 2000, so you can convert with no worries about it being a fleeting fad. It's here for good, and it is supported in virtually all current browsers. And by combining the best features of XML and HTML, it has another major advantage: It can be read by any device that can read XML.

Conversion of an existing web site is straightforward but not trivial. Due to the nature of some of the formatting changes, not all of them can be made by using a global find/replace. Some hand coding will need to be done. At Aztec IT Services, we have ensured that this hand coding would only have to be done once, by updating the source templates that we use to create all web pages to generate XHTML code.

The major changes from HTML to XHTML are:

All tags and attributes must be in lower case.
  Loose HTML: <P>Here is some text</P>
              <P CLASS="headline">Here is more text</P>
  XHTML:      <p>Here is some text</p>
              <p class="headline">Here is more text</p>

Attribute values must be enclosed in double quotes.
  Loose HTML: <p class=headline>Here is some text</p>
              <p class='headline'>Here is more text</p>
  XHTML:      <p class="headline">Here is more text</p>

All elements must be properly nested.
  Loose HTML: <p><b>Here is some text</p></b>
  XHTML:      <p><b>Here is some text</b></p>

All paired elements must have a closing tag.
  Loose HTML: <p>Here is one paragraph
              <p>Here is a second paragraph
  XHTML:      <p>Here is one paragraph</p>
              <p>Here is a second paragraph</p>

All stand-alone elements (like <input> and <img>) must have a closing delimiter.
  Loose HTML: <br>
              <img alt="sun" src="sun.gif">
  XHTML:      <br />
              <img alt="sun" src="sun.gif" />

Attribute minimization is not allowed (checked, readonly, disabled, selected, and others).
  Loose HTML: <option selected id="m3" value="3">Corn flakes</option>
  XHTML:      <option selected="selected" id="m3" value="3">Corn flakes</option>

The id attribute replaces the name attribute in the elements <a>, <applet>, <frame>, <iframe>, <img>, and <map>.
  Loose HTML: <img alt="sun" name="sun" src="sun.gif">
  XHTML:      <img alt="sun" id="sun" src="sun.gif" />

A mandatory DOCTYPE element must be included at the beginning of the document. More information on DOCTYPE elements is included below.

There are a number of smaller changes not detailed above. These will be discovered when you validate your page. The World Wide Web Consortium web site includes an excellent XHTML validator.

As noted earlier, a mandatory DOCTYPE element must be included as the first line of the document. Choose from one of the following three elements:

XHTML 1.0 Strict
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
Use this when you want really clean markup, free of presentational clutter. Use this together with Cascading Style Sheets.

XHTML 1.0 Transitional
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
Use this when you need to take advantage of HTML's presentational features and when you want to support browsers that don't understand Cascading Style Sheets.

XHTML 1.0 Frameset
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Frameset//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-frameset.dtd">
Use this when you want to use HTML Frames to partition the browser window into two or more frames.

The DTD specified in the DOCTYPE declaration will be used to validate the document.

When your work is done, and your web site is fully XHTML-compatible, you will have cleaner, easier to read code that can be read without error by a wide range of browsers, devices, and search engine spiders/crawlers. It's definitely worth the time investment to make this change.



Cleaning Up Your Web Application With AJAX

Wouldn't it be nice if you could develop a web application that can use JavaScript - a client-side programming language - to get information from a web server? Imagine the possibilities of combining those two powerful and complementary technologies! No more reloading the entire page every time you need to query a database or read an XML file. Well, now you can do just that, and the way to do it is with AJAX.

AJAX is short for Asynchronous JavaScript And XML. AJAX combines existing technologies to create a dynamic way to make your application faster, smaller, and more robust.

In the World Without AJAX, you query a server by loading your page with form input variables, or by tacking the variables onto the URL, and then submitting (and reloading) the entire page. This can mean a slow application that is not user friendly.

With AJAX, you work behind the scenes, using HTTP requests in Javascript to send and retrieve server data. Then you update only those bits of the page that need updating. When used properly, this creates a leaner, meaner application.
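Here is a minimal sketch of the core technique. The function name, the URL ("getdata.asp") and the element id ("results") are placeholders for this example; the pattern is simply to create the request object (native XMLHttpRequest where available, or the ActiveX equivalent in older versions of Internet Explorer), fetch a resource asynchronously, and update just the one element that needs new data:

function loadData() {
  // Create the request object: native XMLHttpRequest where available,
  // or the ActiveX equivalent for older versions of Internet Explorer.
  var xhr;
  if (window.XMLHttpRequest) {
    xhr = new XMLHttpRequest();
  } else {
    xhr = new ActiveXObject("Microsoft.XMLHTTP");
  }
  xhr.onreadystatechange = function () {
    // readyState 4 = response complete; status 200 = OK.
    if (xhr.readyState == 4 && xhr.status == 200) {
      // Update only the element that needs new data; no full page reload.
      document.getElementById("results").innerHTML = xhr.responseText;
    }
  };
  xhr.open("GET", "getdata.asp?customerID=42", true);
  xhr.send(null);
}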

And best of all, AJAX is a cross-browser solution, supported in both Internet Explorer 5.5 and higher, and Mozilla Firefox. Note that problems in XML support for Opera and Safari exclude the use of AJAX in these browsers, but even so, AJAX is accessible to all but a tiny segment of web users.

How to do it? Refer to the simple and excellent tutorial at W3Schools: The W3Schools AJAX Tutorial.

The Implications: Generally speaking, web developers tend to simplify the design and limit the functionality of web pages to match the (former) limits of client-server data transfer. A large page with multiple calls to database tables can take ages to run, and the page size can cause speed issues if the data needs to be updated using JavaScript (especially in Internet Explorer, where JavaScript runs much slower than in Firefox). With AJAX, there are virtually no limits to the number of distinct elements that can be included on one page. With clever programming, each element can load separately and be updated on its own, rather than reloading the entire page. Older applications can be fully rewritten with AJAX, or just the problem areas of speed and design can be updated as required.

There's no doubt about it: AJAX is changing the way web developers think about developing, and that means improvements across the board for web users.

