Namespace Crawler

An implementation of the Apache HTTP Client that can be used to make server-side HTTP requests.

Method Summary
<static> Crawler.get(url)
Perform an HTTP GET request on the specified URL and return the entire body of the response as a string.
<static> Crawler.getRSS(url)
Retrieves an RSS feed from a URL and converts it into a JavaScript XML object.
<static> Crawler.getURL(url)
Deprecated: use Crawler.get instead.
<static> Crawler.getXML(url)
Retrieves an XML document from a URL and converts it into a JavaScript XML object.
<static> Crawler.setCredentials(url, port, username, password)
Uses the HttpClient header authentication mechanism.
<static> Crawler.setHeader(name, value)
Sets a header in the Crawler request.
Namespace Detail
Crawler
Method Detail
<static> {String} Crawler.get(url)
Perform an HTTP GET request on the specified URL and return the entire body of the response as a string.
<?
	var response = Crawler.get('http://google.com');
	// On failure the response begins with 'Error getting url'.
	if (response.indexOf('Error getting url') != 0) {
		print(response);
	}
?>
Parameters:
{String} url
A fully qualified URL to retrieve.
Returns:
A String containing the body of the requested document, or a message describing an error that was encountered. The error string will begin with "Error getting url" followed by the name and message of the wrapped Java exception.

<static> {XML} Crawler.getRSS(url)
Retrieves an RSS feed from a URL and converts it into a JavaScript XML object. This differs from getXML in that it can handle multiple different types and versions of RSS feeds while returning a consistent format.
Parameters:
{String} url
The URL of the RSS feed
Returns:
A JavaScript XML object containing the results of the translated feed.
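A sketch of typical usage, assuming the Crawler runtime and an E4X-capable JavaScript engine; the feed URL and the item/title element names in the translated format are illustrative assumptions:
<?
	// Illustrative feed URL; replace with a real RSS feed.
	var feed = Crawler.getRSS('http://example.com/feed.rss');
	// Assumes the translated format exposes item elements with title children.
	for each (var item in feed..item) {
		print(item.title);
	}
?>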

<static> Crawler.getURL(url)
Parameters:
url
Deprecated:
Use Crawler.get instead.
See:
Crawler.get

<static> {XML} Crawler.getXML(url)
Retrieves an XML document from a URL and converts it into a JavaScript XML object.
Parameters:
{String} url
The URL of a well-formed XML document
Returns:
A JavaScript XML object containing the results of the translated document.
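A sketch of typical usage, assuming the Crawler runtime and an E4X-capable JavaScript engine; the URL and the document's element names are illustrative assumptions:
<?
	// Illustrative URL; the book/title structure below is assumed.
	var doc = Crawler.getXML('http://example.com/catalog.xml');
	// E4X property access navigates the translated document.
	for each (var book in doc.book) {
		print(book.title);
	}
?>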

<static> Crawler.setCredentials(url, port, username, password)
Uses the HttpClient header authentication mechanism. Pass the URL, port, and authentication credentials to obtain access to the secured realm. The realm is set to ANY_REALM.
	// 'http://example.com/protected' is an illustrative URL.
	var domain = 'http://example.com/protected';
	var port = '80';
	var username = 'open';
	var password = 'QWxhZGRpbjpvcGV';
	Crawler.setCredentials(domain, port, username, password);
Parameters:
{String} url
Path to the page on the server that you want to access.
{String} port
The port you want to access: 80 for non-secure and 443 for secure connections.
{String} username
The username for the secured realm.
{String} password
The password for the secured realm.

<static> Crawler.setHeader(name, value)
Sets a header in the Crawler request.
Parameters:
{String} name
The name of the header, without the trailing ":".
{String} value
The value of the header.
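A sketch of typical usage, assuming headers set this way apply to subsequent Crawler requests; the header values and URL are illustrative:
<?
	// Illustrative header values.
	Crawler.setHeader('User-Agent', 'MyCrawler/1.0');
	Crawler.setHeader('Accept', 'application/xml');
	var response = Crawler.get('http://example.com/');
?>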

Documentation generated by JsDoc Toolkit 2.3.0 on Tue Jun 14 2011 05:41:50 GMT-0400 (EDT)