Chapter 2. Web Basics


An HTTP Transaction
Fetching Documents Without LWP::Simple
Example: AltaVista
Example: Babelfish

Three things made the Web possible: HTML for encoding documents, HTTP for transferring them, and URLs for identifying them. To fetch and extract information from web pages, you must know all three—you construct a URL for the page you wish to fetch, make an HTTP request for it and decode the HTTP response, then parse the HTML to extract information. This chapter covers the construction of URLs and the concepts behind HTTP. HTML parsing is tricky and gets its own chapters later, as does the module that lets you manipulate URLs.

You'll also learn how to automate the most basic web tasks with the LWP::Simple module. As its name suggests, this module has a very simple interface. You'll learn the limitations of that interface and see how other LWP modules get around them.

2.1. URLs

A Uniform Resource Locator (URL) is the address of something on the Web. For example:

    http://www.oreilly.com/news/bikeweek_day1.html

URLs have a structure, given in RFC 2396. That RFC runs to 40 pages, largely because of the wide variety of things for which you can construct URLs. Because we are interested only in HTTP and FTP URLs, the components of a URL, with the delimiters that separate them, are:

    scheme://server/path?query


In the case of our example URL, the scheme is http, the server is www.oreilly.com, and the path is /news/bikeweek_day1.html.
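As a rough illustration of those components, the pieces of an HTTP URL can be pulled apart with a core-Perl regular expression. This is only a sketch with made-up variable names; the URI module, covered later in the book, is the right tool for real code:

```perl
# Sketch: split an HTTP URL into scheme, server, and path with a
# regular expression.  The URL and variable names are illustrative.
my $url = "http://www.example.com/news/bikeweek_day1.html";
my ($scheme, $server, $path) = $url =~ m{^(\w+)://([^/]+)(/\S*)?$};
print "scheme=$scheme server=$server path=$path\n";
```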

This is an FTP URL:

    ftp://ftp.is.co.za/rfc/rfc1808.txt

The scheme is ftp, the host is ftp.is.co.za, and the path is /rfc/rfc1808.txt. The scheme and the hostname are not case sensitive, but the rest is. That is, ftp://ftp.is.co.za/rfc/rfc1808.txt and fTp://ftp.Is.cO.ZA/rfc/rfc1808.txt are the same, but ftp://ftp.is.co.za/rfc/rfc1808.txt and ftp://ftp.is.co.za/rfc/RFC1808.txt are not, unless that server happens to forgive case differences in requests.

We're ignoring the URLs that don't designate things that a web client can retrieve. For example, a telnet: URL designates a host with which you can start a Telnet session, and a mailto: URL designates an email address to which you can send mail.

The only characters allowed in the path portion of a URL are the US-ASCII characters A through Z, a through z, and 0 through 9 (excluding extended ASCII characters such as ü and Unicode characters such as Ω or ⊆), plus these permitted punctuation characters:

-     _     .     !     ~     *     '     ,
:     @     &     +     $     (     )     /

For a query component, the same rule holds, except that the only punctuation characters allowed are these:

-     _     .     !     ~     *     '     (     )

Any other characters must be URL encoded, i.e., expressed as a percent sign followed by the two hexadecimal digits for that character. So if you wanted to use a space in a URL, it would have to be expressed as %20, because space is character 32 in ASCII, and the number 32 expressed in hexadecimal is 20.
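The %XX transformation just described can be sketched in a couple of lines of core Perl. This is only an illustration (using the smaller query-string character set from above as the "safe" list), not the library routine introduced later in this section:

```perl
# Sketch: percent-encode every character outside a "safe" set.
# The safe set here is the query-string list given above:
# letters, digits, and - _ . ! ~ * ' ( )
my $raw = "this & that";
(my $encoded = $raw) =~
    s/([^A-Za-z0-9\-_.!~*'()])/sprintf("%%%02X", ord($1))/ge;
print "$encoded\n";   # the spaces become %20, the & becomes %26
```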

Incidentally, sometimes you might also see some of these characters in a URL:

{     }    |    \    ^    [    ]    `

But the document that defines URLs, RFC 2396, refers to the use of these as unreliable and "unwise." When in doubt, encode it!

The query portion of a URL assigns values to parameters:

    name=Hiram%20Veeblefeetzer&age=35&country=Madagascar

There are three parameters in that query string: name, with the value "Hiram Veeblefeetzer" (the space has been encoded); age, with the value 35; and country, with the value "Madagascar".

The URI::Escape module provides the uri_escape( ) function to help you build URLs:

use URI::Escape;
$encoded_string = uri_escape($raw_string);

For example, to build the name, age, and country query string:

$n = uri_escape("Hiram Veeblefeetzer");
$a = uri_escape(35);
$c = uri_escape("Madagascar");
$query = "name=$n&age=$a&country=$c";
print $query;
name=Hiram%20Veeblefeetzer&age=35&country=Madagascar
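Decoding goes the other way. URI::Escape provides uri_unescape( ) for this; the transformation itself fits in a single substitution, sketched here in core Perl:

```perl
# Sketch: reverse the %XX encoding by hand, as uri_unescape() would.
my $encoded = "Hiram%20Veeblefeetzer";
(my $decoded = $encoded) =~ s/%([0-9A-Fa-f]{2})/chr(hex($1))/ge;
print "$decoded\n";   # prints "Hiram Veeblefeetzer"
```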