Class Scraper::Base
In: lib/scraper/base.rb
Parent: Object

Methods

array   collect   document   element   extractor   new   option   options   parser   parser_options   prepare   process   process_first   request   result   result   root_element   rules   scrape   scrape   selector   skip   stop   text  

Constants

PageInfo = Struct.new(:url, :original_url, :encoding, :last_modified, :etag)   Information about the HTML page scraped. A structure with the following attributes:
  • url — The URL of the document being scraped. Passed in the constructor but may have changed if the page was redirected.
  • original_url — The original URL of the document being scraped as passed in the constructor.
  • encoding — The encoding of the document.
  • last_modified — Value of the Last-Modified header returned from the server.
  • etag — Value of the Etag header returned from the server.
READER_OPTIONS = [:last_modified, :etag, :redirect_limit, :user_agent, :timeout]
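
Since PageInfo is a plain Ruby Struct, its fields read like ordinary struct members. A minimal stand-alone illustration (the values below are invented, not produced by a real scrape):

```ruby
# PageInfo is a plain Ruby Struct; fields behave like ordinary
# struct members. The values here are made up for illustration.
PageInfo = Struct.new(:url, :original_url, :encoding, :last_modified, :etag)

info = PageInfo.new("http://example.com/", "http://example.org/", "utf-8", nil, nil)
puts info.url       # the final URL, after any redirects
puts info.encoding  # the document encoding
```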

Attributes

extracted  [RW]  Set to true when the first extractor returns true.
options  [RW]  Returns the options for this object.
page_info  [RW]  Information about the HTML page scraped. See PageInfo.

Public Class methods

array

Declares which accessors are arrays. You can declare the accessor here, or use "symbol[]" as the target.

For example:

  array :urls
  process "a[href]", :urls=>"@href"

Is equivalent to:

  process "a[href]", "urls[]"=>"@href"

element

Returns the element itself.

You can use this method from an extractor, e.g.:

  process "h1", :header=>:element

extractor

Creates an extractor that will extract values from the selected element and place them in instance variables of the scraper. You can pass the result to process.

Example

This example processes a document looking for an element with the class name article. It extracts the attribute id and stores it in the instance variable @id. It extracts the article node itself and puts it in the instance variable @article.

  class ArticleScraper < Scraper::Base
    process ".article", extractor(:id=>"@id", :article=>:element)
    attr_reader :id, :article
  end
  result = ArticleScraper.scrape(html)
  puts result.id
  puts result.article

Sources

Extractors operate on the selected element, and can extract the following values:

  • "elem_name" — Extracts the element itself if it matches the element name (e.g. "h2" will extract only level 2 header elements).
  • "attr_name" — Extracts the attribute value from the element if specified (e.g. "@id" will extract the id attribute).
  • "elem_name@attr_name" — Extracts the attribute value from the element if specified, but only if the element has the specified name (e.g. "h2@id").
  • :element — Extracts the element itself.
  • :text — Extracts the text value of the node.
  • Scraper — Using this class creates a scraper to process the current element and extract the result. This can be used for handling complex structure.

If you use an array of sources, the first source that matches anything is used. For example, ["abbr@title", :text] extracts the value of the title attribute if the element is abbr, otherwise the text value of the element.

If you use a hash, you can extract multiple values at the same time. For example, {:id=>"@id", :class=>"@class"} extracts the id and class attribute values.

:element and :text are special cases of symbols. You can pass any symbol that matches a class method and that class method will be called to extract a value from the selected element. You can also pass a Proc or Method directly.

And it's always possible to pass a static value, which is quite useful when processing an element with more than one rule (:skip=>false).

Targets

Extractors assign the extracted value to an instance variable of the scraper. The instance variable contains the last value extracted.

Also creates an accessor for that instance variable. An accessor is created if no such method exists. For example, :title=>:text creates an accessor for title. However, :id=>"@id" does not create an accessor since each object already has a method called id.

If you want to extract multiple values into the same variables, use array to declare that accessor as an array.

Alternatively, you can append [] to the variable name. For example:

  process "*", "ids[]"=>"@id"
  result :ids
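
Conceptually, an array target appends each extracted value instead of overwriting the instance variable. A rough pure-Ruby sketch of that behavior (an illustration only, not scrAPI's actual implementation):

```ruby
# Rough sketch of array-target semantics: each new value is appended
# to the instance variable rather than replacing it.
# This is an illustration, not the library's real code.
class ArrayTarget
  def store(name, value)
    ivar = :"@#{name}"
    list = instance_variable_get(ivar) || []
    instance_variable_set(ivar, list << value)
  end

  def [](name)
    instance_variable_get(:"@#{name}")
  end
end

t = ArrayTarget.new
t.store(:ids, "nav")
t.store(:ids, "footer")
t[:ids]  # => ["nav", "footer"]
```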

The special target :skip allows you to control whether other rules can apply to the same element. By default a processing rule without a block (or a block that returns true) will skip that element so no other processing rule sees it.

You can change this with :skip=>false.

new

Creates a new scraper instance.

The argument source is a URL, a string containing HTML, or an HTML::Node. The optional argument options contains options passed to the scraper. See Base#scrape for more details.

For example:

  # The page we want to scrape
  url = URI.parse("http://example.com")
  # Skip the header
  scraper = MyScraper.new(url, :root_element=>"body")
  result = scraper.scrape

options

Returns the options for this class.

parser

Specifies which parser to use. The default is :tidy.

parser_options

Options to pass to the parser.

For example, when using Tidy, you can use these options to tell Tidy how to clean up the HTML.

This method sets the option for the class. Classes inherit options from their parents. You can also pass options to the scraper object itself using the :parser_options option.

process

Defines a processing rule. A processing rule consists of a selector that matches elements, and an extractor that does something interesting with their values.

Symbol

Rules are processed in the order in which they are defined. Use rules if you need to change the order of processing.

Rules can be named or anonymous. If the first argument is a symbol, it is used as the rule name. You can use the rule name to position, remove or replace it.

Selector

The first argument is a selector. It selects elements from the document that are potential candidates for extraction. Each selected element is passed to the extractor.

The selector argument may be a string, an HTML::Selector object or any object that responds to the select method. Passing an Array (responds to select) will not do anything useful.

String selectors support value substitution, replacing question marks (?) in the selector expression with values from the method arguments. See HTML::Selector for more information.
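
The substitution idea is easy to picture with a toy stand-in (this only mimics the idea; HTML::Selector performs the real substitution, including proper quoting):

```ruby
# Toy stand-in for question-mark substitution in a selector
# expression: each ? placeholder is filled from the arguments in
# order. HTML::Selector does the real work; this is illustration.
def fill_selector(expression, *args)
  args = args.dup
  expression.gsub("?") { args.shift.to_s }
end

fill_selector("div.?", "article")    # => "div.article"
fill_selector("a[href=?]", "/home")  # => "a[href=/home]"
```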

Extractor

The last argument or block is the extractor. The extractor does something interesting with the selected element, typically assigning it to an instance variable of the scraper.

Since the extractor is called on the scraper, it can also use the scraper to maintain state, e.g. this extractor counts how many div elements appear in the document:

  process("div") { |element| @count += 1 }

The extractor returns true if the element was processed and should not be passed to any other extractor (including any child elements).

The default implementation of result returns self only if at least one extractor returned true. However, you can override result and use extractors that return false.

A block extractor is called with a single element.

You can also use the extractor method to create extractors that assign elements, attributes and text values to instance variables, or pass a Hash as the last argument to process. See extractor for more information.

When using a block, the last statement is the response. Do not use return, use next if you want to return a value before the last statement. return does not do what you expect it to.
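
The next-versus-return behavior is plain Ruby block semantics and can be checked outside the scraper. A small self-contained sketch (the names here are made up):

```ruby
# Plain-Ruby illustration of block semantics: the last statement is
# the block's value, and `next` exits the block early with a value.
# A bare `return` inside such a block would try to return from the
# surrounding method instead.
def process_each(elements, &block)
  elements.map { |e| block.call(e) }
end

results = process_each(["<h1>", nil, "<p>"]) do |element|
  next false if element.nil?   # early exit with a value
  true                         # last statement is the block's value
end
results  # => [true, false, true]
```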

Example

  class ScrapePosts < Scraper::Base
    # Select the title of a post
    selector :select_title, "h2"

    # Select the body of a post
    selector :select_body, ".body"

    # All elements with class name post.
    process ".post" do |element|
      title = select_title(element)
      body = select_body(element)
      @posts << Post.new(title, body)
      true
    end

    attr_reader :posts
  end

  posts = ScrapePosts.scrape(html).posts

To process only a single element:

  class ScrapeTitle < Scraper::Base
    process "html>head>title", :title=>:text
    result :title
  end

  puts ScrapeTitle.scrape(html)

process_first

Similar to process, but only extracts from the first selected element. Faster if you know the document contains only one applicable element, or if you are only interested in processing the first one.

result

Modifies this scraper to return a single value or a structure. Use in combination with accessors.

When called with one symbol, scraping returns the result of calling that method (typically an accessor). When called with two or more symbols, scraping returns a structure of values, one for each symbol.

For example:

  class ScrapeTitle < Scraper::Base
    process_first "html>head>title", :title=>:text
    result :title
  end

  puts "Title: " + ScrapeTitle.scrape(html)

  class ScrapeDts < Scraper::Base
    process ".dtstart", :dtstart=>["abbr@title", :text]
    process ".dtend", :dtend=>["abbr@title", :text]
    result :dtstart, :dtend
  end

  dts = ScrapeDts.scrape(html)
  puts "Starts: #{dts.dtstart}"
  puts "Ends: #{dts.dtend}"

root_element

The root element to scrape.

The root element for an HTML document is html. However, if you want to scrape only the header or body, you can set the root_element to head or body.

This method sets the root element for the class. Classes inherit this option from their parents. You can also pass a root element to the scraper object itself using the :root_element option.

rules

Returns an array of rules defined for this class. You can use this array to change the order of rules.

scrape

Scrapes the document and returns the result.

The first argument provides the input document. It can be one of:

  • URI — Retrieve an HTML page from this URL and scrape it.
  • String — The HTML page as a string.
  • HTML::Node — An HTML node, can be a document or element.

You can specify options for the scraper class, or override these by passing options in the second argument. Some options only make sense in the constructor.

The following options are supported for reading HTML pages:

  • :last_modified — Last-Modified header used for caching.
  • :etag — ETag header used for caching.
  • :redirect_limit — Limits number of redirects to follow.
  • :user_agent — Value for User-Agent header.
  • :timeout — HTTP open connection and read timeouts (in seconds).

The following options are supported for parsing the HTML:

  • :parser_options — Options to pass to the parser.
  • :root_element — The root element to scrape (e.g. head or body).

The result is returned by calling the result method. The default implementation returns self if any extractor returned true, nil otherwise.

For example:

  result = MyScraper.scrape(url, :root_element=>"body")

The method may raise any number of exceptions. HTTPError indicates it failed to retrieve the HTML page, and HTMLParseError that it failed to parse the page. Other exceptions come from extractors and the result method.

selector

Creates a selector method. You can call a selector method directly to select elements.

For example, define a selector:

  selector(:five_divs, "div") { |elems| elems[0..4] }

And call it to retrieve the first five div elements:

  divs = five_divs(element)

Call a selector method with an element and it returns an array of elements that match the selector, beginning with the element argument itself. It returns an empty array if nothing matches.

If the selector is defined with a block, all selected elements are passed to the block and the result of the block is returned.

For convenience, a first_ method is also created that returns (and yields) only the first selected element. For example:

  selector :post, "#post"
  @post = first_post(element)

If the selector is defined with a block, both methods call that block with an array of elements.

The selector argument may be a string, an HTML::Selector object or any object that responds to the select method. Passing an Array (responds to select) will not do anything useful.

String selectors support value substitution, replacing question marks (?) in the selector expression with values from the method arguments. See HTML::Selector for more information.

When using a block, the last statement is the response. Do not use return, use next if you want to return a value before the last statement. return does not do what you expect it to.

text

Returns the text of the element.

You can use this method from an extractor, e.g.:

  process "title", :title=>:text

Public Instance methods

collect

Called by scrape after processing the document, and before calling result. Typically used to run any validation, post-processing steps, resolving referenced elements, etc.

document

Returns the document being processed.

If the scraper was created with a URL, this method will attempt to retrieve the page and parse it.

If the scraper was created with a string, this method will attempt to parse the page.

Be advised that calling this method may raise an exception (HTTPError or HTMLParseError).

The document is parsed only the first time this method is called.

option

Returns the value of an option.

Returns the value of an option passed to the scraper on creation. If not specified, returns the value of the option set for this scraper class. Options are inherited from the parent class.

prepare

Called by scrape after creating the document, but before running any processing rules.

You can override this method to do any preparation work.

result

Returns the result of a successful scrape.

This method is called by scrape after running all the rules on the document. You can also call it directly.

Override this method to return a specific object, perform post-scraping processing, validation, etc.

The default implementation returns self if any extractor returned true, nil otherwise.

If you override this method, implement your own logic to determine if anything was extracted and return nil otherwise. Also, make sure calling this method multiple times returns the same result.
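
The contract described above can be pictured with a tiny pure-Ruby stand-in (not the library's actual code): return self when anything was extracted, nil otherwise, and do so consistently across repeated calls.

```ruby
# Hypothetical stand-in illustrating the default result contract:
# self if any extractor succeeded, nil otherwise. Repeated calls
# return the same answer. Not scrAPI's real implementation.
class MiniScraper
  attr_accessor :extracted

  def result
    extracted ? self : nil
  end
end

s = MiniScraper.new
s.result            # => nil (nothing extracted yet)
s.extracted = true
s.result.equal?(s)  # => true
```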

scrape

Scrapes the document and returns the result.

If the scraper was created with a URL, retrieve the page and parse it. If the scraper was created with a string, parse the page.

The result is returned by calling the result method. The default implementation returns self if any extractor returned true, nil otherwise.

The method may raise any number of exceptions. HTTPError indicates it failed to retrieve the HTML page, and HTMLParseError that it failed to parse the page. Other exceptions come from extractors and the result method.

See also Base#scrape.

skip

Skips processing the specified element(s).

If called with a single element, that element will not be processed.

If called with an array of elements, all the elements in the array are skipped.

If called with no element, skips processing the current element. This has the same effect as returning true.

For convenience this method always returns true. For example:

  process "h1" do |element|
    @header = element
    skip
  end

stop

Stops processing this page. You can call this early on if you discover there is no interesting information on the page, or once you have extracted all the useful information.
