libwww-perl / html-parser

The HTML-Parser distribution is a collection of modules that parse and extract information from HTML documents.

License: Other

html-parser's Introduction

NAME

LWP::UserAgent - Web user agent class

SYNOPSIS

use strict;
use warnings;

use LWP::UserAgent ();

my $ua = LWP::UserAgent->new(timeout => 10);
$ua->env_proxy;

my $response = $ua->get('http://example.com');

if ($response->is_success) {
    print $response->decoded_content;
}
else {
    die $response->status_line;
}

Extra layers of security (note the cookie_jar and protocols_allowed):

use strict;
use warnings;

use HTTP::CookieJar::LWP ();
use LWP::UserAgent       ();

my $jar = HTTP::CookieJar::LWP->new;
my $ua  = LWP::UserAgent->new(
    cookie_jar        => $jar,
    protocols_allowed => ['http', 'https'],
    timeout           => 10,
);

$ua->env_proxy;

my $response = $ua->get('http://example.com');

if ($response->is_success) {
    print $response->decoded_content;
}
else {
    die $response->status_line;
}

DESCRIPTION

The LWP::UserAgent is a class implementing a web user agent. LWP::UserAgent objects can be used to dispatch web requests.

In normal use the application creates an LWP::UserAgent object, and then configures it with values for timeouts, proxies, name, etc. It then creates an instance of HTTP::Request for the request that needs to be performed. This request is then passed to one of the request methods of the UserAgent, which dispatches it using the relevant protocol, and returns an HTTP::Response object. There are convenience methods for sending the most common request types: "get" in LWP::UserAgent, "head" in LWP::UserAgent, "post" in LWP::UserAgent, "put" in LWP::UserAgent and "delete" in LWP::UserAgent. When using these methods, the creation of the request object is hidden as shown in the synopsis above.

The basic approach of the library is to use HTTP-style communication for all protocol schemes. This means that you will construct HTTP::Request objects and receive HTTP::Response objects even for non-HTTP resources like gopher and ftp. In order to achieve even more similarity to HTTP-style communications, gopher menus and file directories are converted to HTML documents.

CONSTRUCTOR METHODS

The following constructor methods are available:

clone

my $ua2 = $ua->clone;

Returns a copy of the LWP::UserAgent object.

CAVEAT: Please be aware that the clone method does not copy or clone your cookie_jar attribute. Because there are so few restrictions on what can be used as a cookie jar, there is no reliable way to clone the attribute. The cookie_jar attribute will be undef in the new object instance.

new

my $ua = LWP::UserAgent->new( %options )

This method constructs a new LWP::UserAgent object and returns it. Key/value pair arguments may be provided to set up the initial state. The following options correspond to attribute methods described below:

KEY                     DEFAULT
-----------             --------------------
agent                   "libwww-perl/#.###"
conn_cache              undef
cookie_jar              undef
cookie_jar_class        HTTP::Cookies
default_headers         HTTP::Headers->new
from                    undef
local_address           undef
max_redirect            7
max_size                undef
no_proxy                []
parse_head              1
protocols_allowed       undef
protocols_forbidden     undef
proxy                   {}
requests_redirectable   ['GET', 'HEAD']
send_te                 1
show_progress           undef
ssl_opts                { verify_hostname => 1 }
timeout                 180

The following additional options are also accepted:

  • env_proxy => $bool

    If the env_proxy option is passed in with a true value, then proxy settings are read from environment variables (see "env_proxy" in LWP::UserAgent). If env_proxy isn't provided, the PERL_LWP_ENV_PROXY environment variable controls if "env_proxy" in LWP::UserAgent is called during initialization.

  • keep_alive => $n

    If the keep_alive option value is defined and non-zero, then an LWP::ConnCache is set up (see "conn_cache" in LWP::UserAgent). The keep_alive value is passed on as the total_capacity for the connection cache.

proxy must be set as an arrayref of key/value pairs. no_proxy takes an arrayref of domains.

ATTRIBUTES

The settings of the configuration attributes modify the behaviour of the LWP::UserAgent when it dispatches requests. Most of these can also be initialized by options passed to the constructor method.

The following attribute methods are provided. The attribute value is left unchanged if no argument is given. The return value from each method is the old attribute value.

agent

my $agent = $ua->agent;
$ua->agent('Checkbot/0.4 ');    # append the default to the end
$ua->agent('Mozilla/5.0');
$ua->agent("");                 # don't identify

Get/set the product token that is used to identify the user agent on the network. The agent value is sent as the User-Agent header in the requests.

The default is a string of the form libwww-perl/#.###, where #.### is substituted with the version number of this library.

If the provided string ends with space, the default libwww-perl/#.### string is appended to it.

The user agent string should be one or more simple product identifiers with an optional version number separated by the / character.

conn_cache

my $cache_obj = $ua->conn_cache;
$ua->conn_cache( $cache_obj );

Get/set the LWP::ConnCache object to use. See LWP::ConnCache for details.

cookie_jar

my $jar = $ua->cookie_jar;
$ua->cookie_jar( $cookie_jar_obj );

Get/set the cookie jar object to use. The only requirement is that the cookie jar object must implement the extract_cookies($response) and add_cookie_header($request) methods. These methods will then be invoked by the user agent as requests are sent and responses are received. Normally this will be a HTTP::Cookies object or some subclass. You are, however, encouraged to use HTTP::CookieJar::LWP instead. See "BEST PRACTICES" for more information.

use HTTP::CookieJar::LWP ();

my $jar = HTTP::CookieJar::LWP->new;
my $ua = LWP::UserAgent->new( cookie_jar => $jar );

# or after object creation
$ua->cookie_jar( $cookie_jar );

The default is to have no cookie jar, i.e. never automatically add Cookie headers to the requests.

If $jar contains an unblessed hash reference, a new cookie jar object is created for you automatically. The object is of the class set with the cookie_jar_class constructor argument, which defaults to HTTP::Cookies.

$ua->cookie_jar({ file => "$ENV{HOME}/.cookies.txt" });

is really just a shortcut for:

require HTTP::Cookies;
$ua->cookie_jar(HTTP::Cookies->new(file => "$ENV{HOME}/.cookies.txt"));

As described above and in "BEST PRACTICES", you should set cookie_jar_class to "HTTP::CookieJar::LWP" to get a safer cookie jar.

my $ua = LWP::UserAgent->new( cookie_jar_class => 'HTTP::CookieJar::LWP' );
$ua->cookie_jar({}); # HTTP::CookieJar::LWP takes no args

These can also be combined into the constructor, so a jar is created at instantiation.

my $ua = LWP::UserAgent->new(
  cookie_jar_class => 'HTTP::CookieJar::LWP',
  cookie_jar       =>  {},
);

credentials

my $creds = $ua->credentials();
$ua->credentials( $netloc, $realm );
$ua->credentials( $netloc, $realm, $uname, $pass );
$ua->credentials("www.example.com:80", "Some Realm", "foo", "secret");

Get/set the user name and password to be used for a realm.

The $netloc is a string of the form <host>:<port>. The username and password will only be passed to this server.

default_header

$ua->default_header( $field );
$ua->default_header( $field => $value );
$ua->default_header('Accept-Encoding' => scalar HTTP::Message::decodable());
$ua->default_header('Accept-Language' => "no, en");

This is just a shortcut for $ua->default_headers->header( $field => $value ).

default_headers

my $headers = $ua->default_headers;
$ua->default_headers( $headers_obj );

Get/set the headers object that will provide default header values for any requests sent. By default this will be an empty HTTP::Headers object.

from

my $from = $ua->from;
$ua->from('[email protected]');

Get/set the email address for the human user who controls the requesting user agent. The address should be machine-usable, as defined in RFC2822. The from value is sent as the From header in the requests.

The default is to not send a From header. See "default_headers" in LWP::UserAgent for the more general interface that allows any header to be defaulted.

local_address

my $address = $ua->local_address;
$ua->local_address( $address );

Get/set the local interface to bind to for network connections. The interface can be specified as a hostname or an IP address. This value is passed as the LocalAddr argument to IO::Socket::INET.

max_redirect

my $max = $ua->max_redirect;
$ua->max_redirect( $n );

This reads or sets the object's limit of how many times it will obey redirection responses in a given request cycle.

By default, the value is 7. This means that if you call "request" in LWP::UserAgent and the response is a redirect elsewhere which is in turn a redirect, and so on seven times, then LWP gives up after that seventh request.

max_size

my $size = $ua->max_size;
$ua->max_size( $bytes );

Get/set the size limit for response content. The default is undef, which means that there is no limit. If the returned response content is only partial, because the size limit was exceeded, then a Client-Aborted header will be added to the response. The content might end up longer than max_size as we abort once appending a chunk of data makes the length exceed the limit. The Content-Length header, if present, will indicate the length of the full content and will normally not be the same as length($res->content).
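
For instance (a minimal sketch; the URL is a placeholder and the 16 KiB limit is arbitrary), a truncated body can be detected by checking for the Client-Aborted header:

use strict;
use warnings;

use LWP::UserAgent ();

# Keep at most 16 KiB of response content in memory.
my $ua = LWP::UserAgent->new( max_size => 16 * 1024 );

my $response = $ua->get('http://example.com/');

if ( $response->header('Client-Aborted') ) {
    # The body was cut short once the size limit was exceeded.
    warn 'Partial content only: ', length( $response->content ), " bytes kept\n";
}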

parse_head

my $bool = $ua->parse_head;
$ua->parse_head( $boolean );

Get/set a value indicating whether we should initialize response headers from the <head> section of HTML documents. The default is true. Do not turn this off unless you know what you are doing.

protocols_allowed

my $aref = $ua->protocols_allowed;      # get allowed protocols
$ua->protocols_allowed( \@protocols );  # allow ONLY these
$ua->protocols_allowed(undef);          # delete the list
$ua->protocols_allowed(['http',]);      # ONLY allow http

By default, an object has neither a protocols_allowed list, nor a "protocols_forbidden" in LWP::UserAgent list.

This reads (or sets) this user agent's list of protocols that the request methods will exclusively allow. The protocol names are case insensitive.

For example: $ua->protocols_allowed( [ 'http', 'https'] ); means that this user agent will allow only those protocols, and attempts to use this user agent to access URLs with any other schemes (like ftp://...) will result in a 500 error.

Note that having a protocols_allowed list causes any "protocols_forbidden" in LWP::UserAgent list to be ignored.

protocols_forbidden

my $aref = $ua->protocols_forbidden;    # get the forbidden list
$ua->protocols_forbidden(\@protocols);  # do not allow these
$ua->protocols_forbidden(['http',]);    # All http reqs get a 500
$ua->protocols_forbidden(undef);        # delete the list

This reads (or sets) this user agent's list of protocols that the request method will not allow. The protocol names are case insensitive.

For example: $ua->protocols_forbidden( [ 'file', 'mailto'] ); means that this user agent will not allow those protocols, and attempts to use this user agent to access URLs with those schemes will result in a 500 error.

requests_redirectable

my $aref = $ua->requests_redirectable;
$ua->requests_redirectable( \@requests );
$ua->requests_redirectable(['GET', 'HEAD',]); # the default

This reads or sets the object's list of request names that "redirect_ok" in LWP::UserAgent will allow redirection for. By default, this is ['GET', 'HEAD'], as per RFC 2616. To change to include POST, consider:

push @{ $ua->requests_redirectable }, 'POST';

send_te

my $bool = $ua->send_te;
$ua->send_te( $boolean );

If true, will send a TE header along with the request. The default is true. Set it to false to disable the TE header for systems that can't handle it.

show_progress

my $bool = $ua->show_progress;
$ua->show_progress( $boolean );

Get/set a value indicating whether a progress bar should be displayed on the terminal as requests are processed. The default is false.

ssl_opts

my @keys = $ua->ssl_opts;
my $val = $ua->ssl_opts( $key );
$ua->ssl_opts( $key => $value );

Get/set the options for SSL connections. Without an argument, returns the list of option keys currently set. With a single argument, returns the current value for the given option. With two arguments, sets the option value and returns the old value. Setting an option to the value undef removes this option.

The options that LWP relates to are:

  • verify_hostname => $bool

    When TRUE, LWP will, for secure protocol schemes, ensure that it connects to servers that have a valid certificate matching the expected hostname. If FALSE, no checks are made and you can't be sure that you communicate with the expected peer. The no-checks behaviour was the default for libwww-perl-5.837 and earlier releases.

    This option is initialized from the PERL_LWP_SSL_VERIFY_HOSTNAME environment variable. If this environment variable isn't set, then verify_hostname defaults to 1.

    Please note that recently the overall effect of this option with regards to SSL handling has changed. As of version 6.11 of LWP::Protocol::https, which is an external module, SSL certificate verification was harmonized to behave in sync with IO::Socket::SSL. With this change, setting this option no longer disables all SSL certificate verification, only the hostname checks. To disable all verification, use the SSL_verify_mode option in the ssl_opts attribute. For example: $ua->ssl_opts(SSL_verify_mode => IO::Socket::SSL::SSL_VERIFY_NONE);

  • SSL_ca_file => $path

    The path to a file containing Certificate Authority certificates. A default setting for this option is provided by checking the environment variables PERL_LWP_SSL_CA_FILE and HTTPS_CA_FILE in order.

  • SSL_ca_path => $path

    The path to a directory containing files containing Certificate Authority certificates. A default setting for this option is provided by checking the environment variables PERL_LWP_SSL_CA_PATH and HTTPS_CA_DIR in order.

Other options can be set and are processed directly by the SSL Socket implementation in use. See IO::Socket::SSL or Net::SSL for details.

The libwww-perl core no longer bundles protocol plugins for SSL. You will need to install LWP::Protocol::https separately to enable support for processing https-URLs.

timeout

my $secs = $ua->timeout;
$ua->timeout( $secs );

Get/set the timeout value in seconds. The default value is 180 seconds, i.e. 3 minutes.

The request is aborted if no activity on the connection to the server is observed for timeout seconds. This means that the time it takes for the complete transaction and the "request" in LWP::UserAgent method to actually return might be longer.

When a request times out, a response object is still returned. The response will have a standard HTTP Status Code (500). This response will have the "Client-Warning" header set to the value of "Internal response". See the "get" in LWP::UserAgent method description below for further details.

PROXY ATTRIBUTES

The following methods set up when requests should be passed via a proxy server.

env_proxy

$ua->env_proxy;

Load proxy settings from *_proxy environment variables. You might specify proxies like this (sh-syntax):

gopher_proxy=http://proxy.my.place/
wais_proxy=http://proxy.my.place/
no_proxy="localhost,example.com"
export gopher_proxy wais_proxy no_proxy

csh or tcsh users should use the setenv command to define these environment variables.

On systems with case insensitive environment variables there exists a name clash between the CGI environment variables and the HTTP_PROXY environment variable normally picked up by env_proxy. Because of this HTTP_PROXY is not honored for CGI scripts. The CGI_HTTP_PROXY environment variable can be used instead.
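
For example (a minimal sketch; the proxy URL is a placeholder and would normally be exported by the web server setup rather than assigned in the script), a CGI script can rely on CGI_HTTP_PROXY instead of HTTP_PROXY:

use strict;
use warnings;

use LWP::UserAgent ();

# In a CGI environment HTTP_PROXY is ignored by env_proxy, so set
# CGI_HTTP_PROXY instead (assigned here only for illustration).
$ENV{CGI_HTTP_PROXY} = 'http://proxy.example.com:8080/';

my $ua = LWP::UserAgent->new;
$ua->env_proxy;    # picks up CGI_HTTP_PROXY for the http scheme when running under CGI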

no_proxy

$ua->no_proxy( @domains );
$ua->no_proxy('localhost', 'example.com');
$ua->no_proxy(); # clear the list

Do not proxy requests to the given domains, including subdomains. Calling no_proxy without any domains clears the list of domains.

proxy

$ua->proxy(\@schemes, $proxy_url)
$ua->proxy(['http', 'ftp'], 'http://proxy.sn.no:8001/');

# For a single scheme:
$ua->proxy($scheme, $proxy_url)
$ua->proxy('gopher', 'http://proxy.sn.no:8001/');

# To set multiple proxies at once:
$ua->proxy([
    ftp => 'http://ftp.example.com:8001/',
    [ 'http', 'https' ] => 'http://http.example.com:8001/',
]);

Set/retrieve proxy URL for a scheme.

The first form specifies that the URL is to be used as a proxy for the access schemes listed in the first method argument, i.e. http and ftp.

The second form shows a shorthand form for specifying proxy URL for a single access scheme.

The third form demonstrates setting multiple proxies at once. This is also the only form accepted by the constructor.

HANDLERS

Handlers are code that is injected at various phases during the processing of requests. The following methods are provided to manage the active handlers:

add_handler

$ua->add_handler( $phase => \&cb, %matchspec )

Add handler to be invoked in the given processing phase. For how to specify %matchspec see "Matching" in HTTP::Config.

The possible values of $phase and the corresponding callback signatures are as follows. Note that the handlers are documented in the order in which they will be run, which is:

request_preprepare
request_prepare
request_send
response_header
response_data
response_done
response_redirect
  • request_preprepare => sub { my($request, $ua, $handler) = @_; ... }

    The handler is called before the request_prepare and other standard initialization of the request. This can be used to set up headers and attributes that the request_prepare handler depends on. Proxy initialization should take place here; but in general don't register handlers for this phase.

  • request_prepare => sub { my($request, $ua, $handler) = @_; ... }

    The handler is called before the request is sent and can modify the request any way it sees fit. This can for instance be used to add certain headers to specific requests.

    The handler can assign a new request object to $_[0] to completely replace the request that will be sent.

    The return value from the callback is ignored. If an exception is raised it will abort the request and make the request method return a "400 Bad request" response.

  • request_send => sub { my($request, $ua, $handler) = @_; ... }

    This handler gets a chance of handling requests before they're sent to the protocol handlers. It should return an HTTP::Response object if it wishes to terminate the processing; otherwise it should return nothing.

    The response_header and response_data handlers will not be invoked for this response, but the response_done will be.

  • response_header => sub { my($response, $ua, $handler) = @_; ... }

    This handler is called right after the response headers have been received, but before any content data. The handler might set up handlers for data and might croak to abort the request.

    The handler might set the $response->{default_add_content} value to control if any received data should be added to the response object directly. This will initially be false if the $ua->request() method was called with a $content_file or $content_cb argument; otherwise true.

  • response_data => sub { my($response, $ua, $handler, $data) = @_; ... }

    This handler is called for each chunk of data received for the response. The handler might croak to abort the request.

    This handler needs to return a TRUE value to be called again for subsequent chunks for the same request.

  • response_done => sub { my($response, $ua, $handler) = @_; ... }

    The handler is called after the response has been fully received, but before any redirect handling is attempted. The handler can be used to extract information or modify the response.

  • response_redirect => sub { my($response, $ua, $handler) = @_; ... }

    The handler is called in $ua->request after response_done. If the handler returns an HTTP::Request object we'll start over with processing this request instead.

For all of these, $handler is a code reference to the handler that is currently being run.
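
As a hedged illustration (the header name, host and logging are arbitrary, not taken from the documentation above), a request_prepare handler can decorate matching requests while a response_done handler logs every completed response:

use strict;
use warnings;

use LWP::UserAgent ();

my $ua = LWP::UserAgent->new;

# Add a custom header, but only for requests to one host
# (m_host is one of the match fields documented in HTTP::Config).
$ua->add_handler(
    request_prepare => sub {
        my ( $request, $ua, $handler ) = @_;
        $request->header( 'X-Example' => 'demo' );    # arbitrary header
        return;                                       # return value is ignored
    },
    m_host => 'www.example.com',
);

# Log every completed response, before any redirect handling.
$ua->add_handler(
    response_done => sub {
        my ( $response, $ua, $handler ) = @_;
        warn $response->request->uri, ' => ', $response->status_line, "\n";
        return;
    },
);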

get_my_handler

$ua->get_my_handler( $phase, %matchspec );
$ua->get_my_handler( $phase, %matchspec, $init );

Retrieves the matching handler as a hash reference.

If $init is passed as a true value, create and add the handler if it's not found. If $init is a subroutine reference, then it's called with the created handler hash as argument. This sub might populate the hash with extra fields; especially the callback. If $init is a hash reference, merge the hashes.

handlers

$ua->handlers( $phase, $request )
$ua->handlers( $phase, $response )

Returns the handlers that apply to the given request or response at the given processing phase.

remove_handler

$ua->remove_handler( undef, %matchspec );
$ua->remove_handler( $phase, %matchspec );
$ua->remove_handler(); # REMOVE ALL HANDLERS IN ALL PHASES

Remove handlers that match the given %matchspec. If $phase is not provided, remove handlers from all phases.

Be careful as calling this function with %matchspec that is not specific enough can remove handlers not owned by you. It's probably better to use the "set_my_handler" in LWP::UserAgent method instead.

The removed handlers are returned.

set_my_handler

$ua->set_my_handler( $phase, $cb, %matchspec );
$ua->set_my_handler($phase, undef); # remove handler for phase

Set handlers private to the executing subroutine. Works by defaulting an owner field to the %matchspec that holds the name of the called subroutine. You might pass an explicit owner to override this.

If $cb is passed as undef, remove the handler.
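
A brief sketch (the header value and match field are arbitrary):

use strict;
use warnings;

use LWP::UserAgent ();

my $ua = LWP::UserAgent->new;

# The owner defaults to the name of the calling subroutine, so this handler
# will not collide with handlers registered by other code.
$ua->set_my_handler(
    request_prepare => sub {
        my ( $request, $ua, $handler ) = @_;
        $request->header( 'Accept-Language' => 'en' );
    },
    m_host => 'www.example.com',
);

# Later: remove just this handler again.
$ua->set_my_handler( request_prepare => undef );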

REQUEST METHODS

The methods described in this section are used to dispatch requests via the user agent. The following request methods are provided:

delete

my $res = $ua->delete( $url );
my $res = $ua->delete( $url, $field_name => $value, ... );

This method will dispatch a DELETE request on the given URL. Additional headers and content options are the same as for the "get" in LWP::UserAgent method.

This method will use the DELETE() function from HTTP::Request::Common to build the request. See HTTP::Request::Common for details on how to pass form content and other advanced features.

get

my $res = $ua->get( $url );
my $res = $ua->get( $url , $field_name => $value, ... );

This method will dispatch a GET request on the given URL. Further arguments can be given to initialize the headers of the request. These are given as separate name/value pairs. The return value is a response object. See HTTP::Response for a description of the interface it provides.

There will still be a response object returned when LWP can't connect to the server specified in the URL or when other failures in protocol handlers occur. These internal responses use the standard HTTP status codes, so the responses can't be differentiated by testing the response status code alone. Error responses that LWP generates internally will have the "Client-Warning" header set to the value "Internal response". If you need to differentiate these internal responses from responses that a remote server actually generates, you need to test this header value.
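
A minimal sketch of that check (the unreachable URL is a placeholder):

use strict;
use warnings;

use LWP::UserAgent ();

my $ua = LWP::UserAgent->new( timeout => 10 );

my $response = $ua->get('http://no-such-host.example/');

if ( !$response->is_success ) {
    if ( ( $response->header('Client-Warning') || '' ) eq 'Internal response' ) {
        # LWP generated this response itself (connection failure, timeout, ...).
        warn 'LWP internal error: ', $response->status_line, "\n";
    }
    else {
        # The remote server actually answered with an error status.
        warn 'Server error: ', $response->status_line, "\n";
    }
}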

Field names that start with ":" are special. These will not initialize headers of the request but will determine how the response content is treated. The following special field names are recognized:

':content_file'   => $filename # or $filehandle
':content_cb'     => \&callback
':read_size_hint' => $bytes

If a $filename or $filehandle is provided with the :content_file option, then the response content will be saved here instead of in the response object. The $filehandle may also be an object with an open file descriptor, such as a File::Temp object. If a callback is provided with the :content_cb option then this function will be called for each chunk of the response content as it is received from the server. If neither of these options are given, then the response content will accumulate in the response object itself. This might not be suitable for very large response bodies. Only one of :content_file or :content_cb can be specified. The content of unsuccessful responses will always accumulate in the response object itself, regardless of the :content_file or :content_cb options passed in. Note that errors writing to the content file (for example due to permission denied or the filesystem being full) will be reported via the Client-Aborted or X-Died response headers, and not the is_success method.

The :read_size_hint option is passed to the protocol module which will try to read data from the server in chunks of this size. A smaller value for the :read_size_hint will result in a higher number of callback invocations.

The callback function is called with 3 arguments: a chunk of data, a reference to the response object, and a reference to the protocol object. The callback can abort the request by invoking die(). The exception message will show up as the "X-Died" header field in the response returned by the $ua->get() method.
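
For example (URL, file name and chunk size are placeholders), a large download can be streamed to a file or handled chunk by chunk:

use strict;
use warnings;

use LWP::UserAgent ();

my $ua = LWP::UserAgent->new;

# Save the body straight to a file instead of keeping it in memory.
my $res = $ua->get(
    'http://example.com/big-file.bin',
    ':content_file' => '/tmp/big-file.bin',
);

# Or process the body as it arrives, asking for chunks of about 64 KiB.
my $bytes = 0;
$res = $ua->get(
    'http://example.com/big-file.bin',
    ':content_cb' => sub {
        my ( $chunk, $response, $protocol ) = @_;
        $bytes += length $chunk;
    },
    ':read_size_hint' => 65536,
);
print "received $bytes bytes\n";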

head

my $res = $ua->head( $url );
my $res = $ua->head( $url , $field_name => $value, ... );

This method will dispatch a HEAD request on the given URL. Otherwise it works like the "get" in LWP::UserAgent method described above.

is_protocol_supported

my $bool = $ua->is_protocol_supported( $scheme );

You can use this method to test whether this user agent object supports the specified scheme. (The scheme might be a string (like http or ftp) or it might be a URI object reference.)

Whether a scheme is supported is determined by the user agent's protocols_allowed or protocols_forbidden lists (if any), and by the capabilities of LWP. I.e., this will return true only if LWP supports this protocol and it's permitted for this particular object.

is_online

my $bool = $ua->is_online;

Tries to determine if you have access to the Internet. Returns 1 (true) if the built-in heuristics determine that the user agent is able to access the Internet (over HTTP), and 0 (false) otherwise.

See also LWP::Online.

mirror

my $res = $ua->mirror( $url, $filename );

This method will get the document identified by $url and store it in a file called $filename. If the file already exists, then the request will contain an If-Modified-Since header matching the modification time of the file. If the document on the server has not changed since this time, then nothing happens. If the document has been updated, it will be downloaded again. The modification time of the file will be forced to match that of the server.

Uses "move" in File::Copy to attempt to atomically replace the $filename.

The return value is an HTTP::Response object.

patch

# Any version of HTTP::Message works with this form:
my $res = $ua->patch( $url, $field_name => $value, Content => $content );

# Using hash or array references requires HTTP::Message >= 6.12
use HTTP::Request 6.12;
my $res = $ua->patch( $url, \%form );
my $res = $ua->patch( $url, \@form );
my $res = $ua->patch( $url, \%form, $field_name => $value, ... );
my $res = $ua->patch( $url, $field_name => $value, Content => \%form );
my $res = $ua->patch( $url, $field_name => $value, Content => \@form );

This method will dispatch a PATCH request on the given URL, with %form or @form providing the key/value pairs for the fill-in form content. Additional headers and content options are the same as for the "get" in LWP::UserAgent method.

CAVEAT:

This method can only accept content that is in key-value pairs when using HTTP::Request::Common prior to version 6.12. Any use of hash or array references will result in an error prior to version 6.12.

This method will use the PATCH function from HTTP::Request::Common to build the request. See HTTP::Request::Common for details on how to pass form content and other advanced features.

post

my $res = $ua->post( $url, \%form );
my $res = $ua->post( $url, \@form );
my $res = $ua->post( $url, \%form, $field_name => $value, ... );
my $res = $ua->post( $url, $field_name => $value, Content => \%form );
my $res = $ua->post( $url, $field_name => $value, Content => \@form );
my $res = $ua->post( $url, $field_name => $value, Content => $content );

This method will dispatch a POST request on the given URL, with %form or @form providing the key/value pairs for the fill-in form content. Additional headers and content options are the same as for the "get" in LWP::UserAgent method.

This method will use the POST function from HTTP::Request::Common to build the request. See HTTP::Request::Common for details on how to pass form content and other advanced features.

put

# Any version of HTTP::Message works with this form:
my $res = $ua->put( $url, $field_name => $value, Content => $content );

# Using hash or array references requires HTTP::Message >= 6.07
use HTTP::Request 6.07;
my $res = $ua->put( $url, \%form );
my $res = $ua->put( $url, \@form );
my $res = $ua->put( $url, \%form, $field_name => $value, ... );
my $res = $ua->put( $url, $field_name => $value, Content => \%form );
my $res = $ua->put( $url, $field_name => $value, Content => \@form );

This method will dispatch a PUT request on the given URL, with %form or @form providing the key/value pairs for the fill-in form content. Additional headers and content options are the same as for the "get" in LWP::UserAgent method.

CAVEAT:

This method can only accept content that is in key-value pairs when using HTTP::Request::Common prior to version 6.07. Any use of hash or array references will result in an error prior to version 6.07.

This method will use the PUT function from HTTP::Request::Common to build the request. See HTTP::Request::Common for details on how to pass form content and other advanced features.

request

my $res = $ua->request( $request );
my $res = $ua->request( $request, $content_file );
my $res = $ua->request( $request, $content_cb );
my $res = $ua->request( $request, $content_cb, $read_size_hint );

This method will dispatch the given $request object. Normally this will be an instance of the HTTP::Request class, but any object with a similar interface will do. The return value is an HTTP::Response object.

The request method will process redirects and authentication responses transparently. This means that it may actually send several simple requests via the "simple_request" in LWP::UserAgent method described below.

The request methods described above, "get" in LWP::UserAgent, "head" in LWP::UserAgent, "post" in LWP::UserAgent and "mirror" in LWP::UserAgent, will all dispatch the request they build via this method. They are convenience methods that simply hide the creation of the request object for you.

The $content_file, $content_cb and $read_size_hint all correspond to options described with the "get" in LWP::UserAgent method above. Note that errors writing to the content file (for example due to permission denied or the filesystem being full) will be reported via the Client-Aborted or X-Died response headers, and not the is_success method.

You are allowed to use a CODE reference as content in the request object passed in. The content function should return the content when called. The content can be returned in chunks. The content function will be invoked repeatedly until it returns an empty string to signal that there is no more content.
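
A hedged sketch of that technique (the file path and upload URL are placeholders):

use strict;
use warnings;

use HTTP::Request  ();
use LWP::UserAgent ();

my $ua = LWP::UserAgent->new;

open my $fh, '<:raw', '/tmp/upload.bin' or die "open: $!";

my $request = HTTP::Request->new( PUT => 'http://example.com/upload' );
$request->header( 'Content-Type' => 'application/octet-stream' );

# The content is produced on demand; returning an empty string ends the body.
$request->content(
    sub {
        my $read = read( $fh, my $chunk, 4096 );
        return $read ? $chunk : '';
    }
);

my $response = $ua->request($request);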

simple_request

my $request = HTTP::Request->new( ... );
my $res = $ua->simple_request( $request );
my $res = $ua->simple_request( $request, $content_file );
my $res = $ua->simple_request( $request, $content_cb );
my $res = $ua->simple_request( $request, $content_cb, $read_size_hint );

This method dispatches a single request and returns the response received. Arguments are the same as for the "request" in LWP::UserAgent described above.

The difference from "request" in LWP::UserAgent is that simple_request will not try to handle redirects or authentication responses. The "request" in LWP::UserAgent method will, in fact, invoke this method for each simple request it sends.

CALLBACK METHODS

The following methods will be invoked as requests are processed. These methods are documented here because subclasses of LWP::UserAgent might want to override their behaviour.

get_basic_credentials

# This checks wantarray and can either return an array:
my ($user, $pass) = $ua->get_basic_credentials( $realm, $uri, $isproxy );
# or a string that looks like "user:pass"
my $creds = $ua->get_basic_credentials($realm, $uri, $isproxy);

This is called by "request" in LWP::UserAgent to retrieve credentials for documents protected by Basic or Digest Authentication. The arguments passed in is the $realm provided by the server, the $uri requested and a boolean flag to indicate if this is authentication against a proxy server.

The method should return a username and password. It should return an empty list to abort the authentication resolution attempt. Subclasses can override this method to prompt the user for the information. An example of this can be found in the lwp-request program distributed with this library.

The base implementation simply checks a set of pre-stored member variables, set up with the "credentials" in LWP::UserAgent method.
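
A minimal subclass sketch (the class name and credentials are placeholders):

package My::UserAgent;

use strict;
use warnings;

use parent 'LWP::UserAgent';

# Called by request() whenever a 401/407 challenge needs credentials.
sub get_basic_credentials {
    my ( $self, $realm, $uri, $isproxy ) = @_;

    return ( 'proxyuser', 'proxypass' ) if $isproxy;          # placeholder
    return ( 'someuser', 'somepass' ) if $realm eq 'Some Realm';

    return;    # empty list: give up on the authentication attempt
}

1;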

prepare_request

$request = $ua->prepare_request( $request );

This method is invoked by "simple_request" in LWP::UserAgent. Its task is to modify the given $request object by setting up various headers based on the attributes of the user agent. The return value should normally be the $request object passed in. If a different request object is returned it will be the one actually processed.

The headers affected by the base implementation are: User-Agent, From, Range and Cookie.

progress

my $prog = $ua->progress( $status, $request_or_response );

This is called frequently as the response is received regardless of how the content is processed. The method is called with $status "begin" at the start of processing the request and with $status "end" before the request method returns. In between these, $status will be the fraction of the response currently received or the string "tick" if the fraction can't be calculated.

When $status is "begin" the second argument is the HTTP::Request object, otherwise it is the HTTP::Response object.

redirect_ok

my $bool = $ua->redirect_ok( $prospective_request, $response );

This method is called by "request" in LWP::UserAgent before it tries to follow a redirection to the request in $response. This should return a true value if this redirection is permissible. The $prospective_request will be the request to be sent if this method returns true.

The base implementation will return false unless the method is in the object's requests_redirectable list, false if the proposed redirection is to a file://... URL, and true otherwise.

BEST PRACTICES

The default settings can get you up and running quickly, but there are settings you can change in order to make your life easier.

Handling Cookies

You are encouraged to install Mozilla::PublicSuffix and use HTTP::CookieJar::LWP as your cookie jar. HTTP::CookieJar::LWP provides a better security model matching that of current Web browsers when Mozilla::PublicSuffix is installed.

use HTTP::CookieJar::LWP ();

my $jar = HTTP::CookieJar::LWP->new;
my $ua = LWP::UserAgent->new( cookie_jar => $jar );

See "cookie_jar" for more information.

Managing Protocols

protocols_allowed gives you the ability to allow arbitrary protocols.

my $ua = LWP::UserAgent->new(
    protocols_allowed => [ 'http', 'https' ]
);

This will prevent you from inadvertently following URLs like file:///etc/passwd. See "protocols_allowed".

protocols_forbidden gives you the ability to deny arbitrary protocols.

my $ua = LWP::UserAgent->new(
    protocols_forbidden => [ 'file', 'mailto', 'ssh', ]
);

This can also prevent you from inadvertently following URLs like file:///etc/passwd. See "protocols_forbidden".

SEE ALSO

See LWP for a complete overview of libwww-perl. See lwpcook and the scripts lwp-request and lwp-download for examples of usage.

See HTTP::Request and HTTP::Response for a description of the message objects dispatched and received. See HTTP::Request::Common and HTML::Form for other ways to build request objects.

See WWW::Mechanize and WWW::Search for examples of more specialized user agents based on LWP::UserAgent.

COPYRIGHT AND LICENSE

Copyright 1995-2009 Gisle Aas.

This library is free software; you can redistribute it and/or modify it under the same terms as Perl itself.

html-parser's People

Contributors

aradici, atoomic, barbie, bulk88, castaway, demerphq, dependabot[bot], dsteinbrunner, fperrad, gisle, haarg, jacquesg, jonjensen, jraspass, michal-josef-spacek, msouth, nwc10, oalders, real-dam, scop, toddr, yoshikazusawa


html-parser's Issues

Incorrect tokenization in HTML::Parser [rt.cpan.org #83570]

Migrated from rt.cpan.org#83570 (status was 'open')

Requestors:

Attachments:

From [email protected] on 2013-02-23 17:43:32
:

Hi Gisle,



First, thank you for all of your huge contributions to Perl over the years!



I've discovered a site (http://www.scotts.com/) that has HTML that HTML-Parser does not tokenize correctly.



Envs (tried on two machines, same results):

*         HTML::Parser (3.65 and 3.69)

*         Perl 5.14.2, and 5.10.1

*         'full_uname' => 'Linux 449876-app3.blosm.com 2.6.18-238.37.1.el5 #1 SMP Fri Apr 6 13:47:10 EDT 2012 x86_64 x86_64 x86_64 GNU/Linux',

*         'os_distro' => 'Red Hat Enterprise Linux Server release 5.9 (Tikanga) Kernel \\r on an \\m<file:///\\m>',

*         'full_uname' => 'Linux idx02 2.6.43.5-2.fc15.x86_64 #1 SMP Tue May 8 11:09:22 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux',

*         'os_distro' => 'Fedora release 15',



I'm attaching a representative page. The page came from:
http://www.scotts.com/smg/templates/index.jsp?pageUrl=orthoLanding

The problem seems to occur around the HTML:
                <noscript>
                                <iframe height="0" width="0" style="display:none; visibility:hidden;"
                                                src="//www.googletagmanager.com/ns.html?id=GTM-PVLS"
                                                />
                </noscript>
                <script>

I've added some debugging to the HTML::TokeParser::get_tag sub so it looks like:
use Data::Dumper;
sub get_tag
{
    my $self = shift;
    my $token;
    while (1) {
        $token = $self->get_token || return undef;

        warn "Checking token: [".Dumper($token)."]";

        my $type = shift @$token;
        next unless $type eq "S" || $type eq "E";
        substr($token->[0], 0, 0) = "/" if $type eq "E";
        return $token unless @_;
        for (@_) {
            return $token if $token->[0] eq $_;
        }
    }
}

I've tried both versions 3.65 and 3.69 of HTML::Parser, both of which produce the same results. They produce output in the "output" attachment. You can see on line 290 of the output that it is tokenizing almost the entire page after the iframe as one big text blob.

Thanks again,

-Carl


Carl Eklof
CTO @ Blosm Inc.
blosm.com<http://blosm.com/>
424.888.4BEE



From [email protected] on 2015-01-04 01:23:37
:

I've been seeing this with some code I'm working on soon.

To summarize this very simply, it seems like HTML::TokeParser does something weird when a tag contains a self-closing slash. If the tag is written as "<hr/>" then the parser thinks the tag is "hr/". If it's written as "<hr />" then we end up with a "/" attribute.

From [email protected] on 2015-01-04 16:00:15
:

I cloned the repo with the intention of fixing this, but when I looked through the test cases I realized that this behavior is actually tested for.

Gisle, what's up with this? It's not documented, AFAICT, and it really doesn't make much sense.

From [email protected] on 2016-01-19 00:12:40
:

On Sun Jan 04 11:00:15 2015, DROLSKY wrote:
> I cloned the repo with the intention of fixing this, but when I looked
> through the test cases I realized that this behavior is actually
> tested for.
>
> Gisle, what's up with this? It's not documented, AFAICT, and it really
> doesn't make much sense.

Perhaps this was just based on my understanding of what status this had, given this
advice from the XHTML spec.


C.2. Empty Elements
-------------------

Include a space before the trailing / and > of empty elements, e.g. <br />, <hr
/> and <img src="karen.jpg" alt="Karen" />. Also, use the minimized tag syntax
for empty elements, e.g. <br />, as the alternative syntax <br></br> allowed by
XML gives uncertain results in many existing user agents.


From [email protected] on 2016-01-19 00:21:34
:

http://www.w3.org/TR/html5/syntax.html#tag-name-state seems clear on allowing
this, so feel free to change the tests


From [email protected] on 2016-01-19 17:34:19
:

Just turning on the "empty_element_tags" option might make the parser behave
the way you expect. It might be that we should just switch the default for this
option.
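
A minimal sketch of that workaround (not part of the original thread), using the empty_element_tags option:

use strict;
use warnings;

use HTML::Parser ();

my $p = HTML::Parser->new(
    api_version        => 3,
    empty_element_tags => 1,    # recognize "<hr/>" as a start tag plus an end tag
    start_h            => [ sub { print "start: $_[0]\n" }, 'tagname' ],
    end_h              => [ sub { print "end:   $_[0]\n" }, 'tagname' ],
);

$p->parse('<hr/>');
$p->eof;

# With the option enabled this reports start and end events for "hr";
# without it the tag name comes out as "hr/".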


Attributes that have no value get their name as their value

When investigating libwww-perl/WWW-Mechanize#125 I noticed that the following HTML parses weirdly.

<input type="hidden" name="foo" value>

According to the HTML spec on an input element a value attribute that's not followed by an equals = should be empty, so we should be parsing it to an empty string.

Empty attribute syntax
Just the attribute name. The value is implicitly the empty string.

Instead of making it empty, we set it to "value".

I've looked into it, and got as far as that get_tag returns a data structure that contains the wrong value:

\ [
    [0] "input",
    [1] {
        /       "/",
        name    "foo",
        type    "hidden",
        value   "value"
    },
    [2] [
        [0] "type",
        [1] "name",
        [2] "value",
        [3] "/"
    ],
    [3] "<input type="hidden" name="foo" value />"
]

Unfortunately I am out of my depth with the actual C code for the parser. But I think we should be returning an empty string for the value attribute, as well as all other empty attributes.


I wrote the following test to demonstrate the problem.

use strict;
use warnings;

use HTML::TokeParser ();
use Test::More;
use Data::Dumper;

ok(
    !get_tag(q{})->{value},
    'No value when there was no value'
);    # key does not exist

{
    # this fails because value is 'value'
    my $t = get_tag(q{value});
    ok(
        !$t->{value},
        'No value when value attr has no value'
    ) or diag Dumper $t;    
}

ok(
    !get_tag(q{value=""})->{value},
    'No value when value attr is an empty string'
);    # key is an empty string

is(
    get_tag(q{value="bar"})->{value}, 
    'bar', 
    'Value is bar'
);    # this obviously works

sub get_tag {
    my $attr = shift;
    return HTML::TokeParser->new(\qq{<input type="hidden" name="foo" $attr />})->get_tag->[1];
}

done_testing;

Lifeboat Foundation HTML::Entities Bug [rt.cpan.org #130765]

Migrated from rt.cpan.org#130765 (status was 'new')

Requestors:

From [email protected] on 2019-10-21 07:11:11
:

Gisle,

In HTML::Entities:

decode_entities("&emacr;") doesn't work.

The same goes for decode_entities("&Emacr;").

Everything else that I've tried has worked.

This is for version 3.69 which comes with the latest Perl (5.30.0).

Eric Klien
Lifeboat Foundation
+1 (775) 409-3122 Voice
+1 (775) 409-3123 Fax
Skype ID LifeboatHQ
Twitter  LifeboatHQ
https://lifeboat.com


HTML::TreeBuilder generates text nodes in a strange encoding [rt.cpan.org #14212]

Migrated from rt.cpan.org#14212 (status was 'open')

Requestors:

Attachments:

From on 2005-08-17 14:31:39
:

I am using perl-HTML-Tree-3.18. I have met the following problem:
When I use HTML::TreeBuilder to parse a tree that contains text like
"Geb&uuml;hr vor Ort von &euro; 30,- pro Woche" (without quotes), I get the string back in a strange encoding: &uuml; will be encoded as one char, &euro; will be encoded as two chars. I think that is incorrect.

From on 2005-08-17 14:42:14
:

In the above post the string should be read as:
"Geb&amp;uuml;hr vor Ort von &amp;euro; 30,- pro Woche"
Tree builder seems to decode the string entities via HTML::Entities. Is
it possible to extend the tree builder with an option that allows
skipping the conversion of HTML entities into chars? The only way out seems to
be to call encode again, but that is not pretty.

From on 2005-09-08 19:32:13
:

The problem seems to be solved when upgrading from Perl v5.8.3 to v5.8.6.


From on 2005-10-06 18:37:33
:

The Debian stable folk have 5.8.4, and this bug is affecting their programs.

How could they work around this problem? I'm looking at the source code
and I'm tempted to comment out the HTML::Entities::encode line... but
would that then create other problems?

From [email protected] on 2006-11-11 23:13:43
:

Can't reproduce with 3.18 and up. Please resubmit with a test case if
you are still having this issue.

As an aside, I have added this case as a test in HTML-Tree 3.22, which
will be released as part of the Chicago Hackathon this weekend.

From [email protected] on 2006-11-13 09:50:27
:

Hello! Thanks that you've paid attention to the (possible) problem.

Finally, as I said above, some Perl installations work and some do not,
and I've come to the conclusion that it's a core Perl bug with Unicode
chars. What version of Perl do you use for testing?

Can you please define more precisely the return value for
"HTML::Entity->as_text()"? Should it return the UTF-8 text? Localized
text? While investigating the problem, I've read
http://jerakeen.org/files/2005/perl-utf8.slides.pdf -- it has a very
nice chart. Consider for reading!

Actually, I've found this problem while implementing the HTML parser,
that stores the data to the MySQL DB, and this data is supposed to be
displayed as HTML again. So, in my case I used the following flow:

my $html_root = HTML::TreeBuilder->new_from_content($contents);

foreach ($html_root->guts())
{
  ...
  $dbh->prepare("insert into my_table (id, contents) values ($id,
?)")->execute(HTML::Entities::encode_entities($_->as_trimmed_text()));
}

so I used the "reverse" convertion for chars. Unfortunately, I still
don;t have any working example to store Unicode strings into MySQL 4.0.x
from Perl to be read later correctly from Java :( but that's out of the
scope of the problem, being discussed.

From [email protected] on 2006-11-13 16:29:30
:

On Mon Nov 13 04:50:27 2006, [email protected] wrote:
> Finally, as I said above, some of perl installations work, some -- not,
> and I've come to the conclusion, it's a core Perl bug with unicode
> chars. What version of Perl do you use for testing?

I use Apple's Perl (5.8.6 on OSX), Debian sarge's Perl (5.8.4), and a
custom Perl (5.8.2) for release testing.  I do have a 5.6 install
sitting around, and t/body.t fails on unicode escape tests.  (I should
skip those on that platform.)
 
> Can you please, define more precisely the return value for
> "HTML::Entity->as_text()"? Should it return the UTF-8 text? Localized
> text?

It returns the text exactly as it's contained in each HTML::Element (not
HTML::Entity) and children.  If that's UTF-8, Unicode, ISO-8859-1, or
whatever, that's been decided by HTML::Parser.  HTML::Element is just
the middleman, doing simple concatenation.

If you could give a test case that shows the broken behavior on your
platform, I would appreciate it.

From [email protected] on 2009-11-23 23:19:09
:

I'm not sure if this module is still being actively maintained but I am 
experiencing the same issues on perl 5.10. I don't know if the issue is 
with HTML::Element or an underlying module.

Here is a test case which fails on Fedora 9 platform:
==============================
#!/usr/bin/perl

use HTML::Element;
use Test::More tests => 2;

my $test_string = 'This is a test 漢語';

like( $test_string, qr/漢語/xms, 'Found chinese chars input string' );

my $h = HTML::Element->new( 'p' );
$h->push_content('This is a test 漢語');

like( $h->as_HTML, qr/漢語/xms, 'Found chinese chars in html output' );
========================

Running this on Fedora 9 produces the following output:

1..2
ok 1 - Found chinese chars input string
not ok 2 - Found chinese chars in html output
#   Failed test 'Found chinese chars in html output'
#   at ./test2.pl line 13.
#                   '<p>This is a test 
&aelig;&frac14;&cent;&egrave;&ordf;&#158;
# '
#     doesn't match '(?msx-i:漢語)'
# Looks like you failed 1 test of 2.

From [email protected] on 2009-11-23 23:28:17
:

Sorry, not sure if I was experiencing the same issue as described above, 
but it seemed the same. Just realized that passing empty string to 
as_HTML solves this issue. Updated test case, which passes:

===================================
#!/usr/bin/perl

use HTML::Element;
use Test::More tests => 2;

my $test_string = 'This is a test 漢語';

like( $test_string, qr/漢語/xms, 'Found chinese chars input string' );

my $h = HTML::Element->new( 'p' );
$h->push_content('This is a test 漢語');

like( $h->as_HTML( '' ), qr/漢語/xms, 'Found chinese chars in html output' );
===================================
1..2
ok 1 - Found chinese chars input string
ok 2 - Found chinese chars in html output


From [email protected] on 2009-11-24 10:38:26
:

Using as_HTML('') is funny, because in this case you tell HTML::Element
not to encode entities at all (the default should be '<>&').
Why do you expect that as_HTML() should return a non-HTML-encoded string
back? I would use as_text() for this case. Or you mean that as_HTML()
basically does incorrect HTML-encoding for Chinese characters? Try plain
with first argument; seems to be a bug, but of a different nature.

From [email protected] on 2010-04-24 04:17:21
:

This is a bug in HTML::Entities, line 479 is encoding the Chinese
characters. Adding the following debug code to HTML/Entities.pm reveals
this:

print(STDERR "1: ref = $$ref\n");
	$$ref =~ s/([^\n\r\t !\#\$%\(-;=?-~])/$char2entity{$1} ||
num_entity($1)/ge;
print(STDERR "2: ref = $$ref\n");

1: ref = This is a test 漢�
2: ref = This is a test &aelig;&frac14;&cent;&egrave;&ordf;&#158;

Cheers, Jeff.

From [email protected] on 2010-07-09 13:16:30
:

From your example I can't tell if the string you passed to HTML::Entities::encode() was a Unicode 
string or the decoded UTF-8 bytes.

Please try the attached test program.  It prints:

# encode-test.pl:4: "This is a test \x{6F22}\x{8A9E}"
# encode-test.pl:5: "This is a test &#x6F22;&#x8A9E;"

for me, so it seems correct.  If I comment out the 'use utf8;' line then the output becomes:

# encode-test.pl:4: "This is a test \xE6\xBC\xA2\xE8\xAA\x9E"
# encode-test.pl:5: "This is a test &aelig;&frac14;&cent;&egrave;&ordf;&#158;"

If you get different results, please tell me what version of perl and HTML::Parser you are using.
If you get the result above then I don't consider this a bug.

From [email protected] on 2010-07-09 13:17:49
:

On Fri Jul 09 09:16:30 2010, GAAS wrote:
> Please try the attached test program.  It prints:

Of course, I forgot to attach the file :-(

HTML 5 [rt.cpan.org #53300]

Migrated from rt.cpan.org#53300 (status was 'new')

Requestors:

From [email protected] on 2010-01-02 21:51:23
:

HTML::Parser should provide a parsing mode that is fully compliant with HTML 5,
section 9.2 ("Parsing HTML documents",
http://dev.w3.org/html5/spec/Overview.html#parsing).

As this will probably differ significantly from current behaviour, it should be
optional.


Make iframe parsing configurable [rt.cpan.org #46099]

Migrated from rt.cpan.org#46099 (status was 'open')

Requestors:

From [email protected] on 2009-05-15 06:15:45
:

Since the latest versions of HTML::Parser do not parse the content of
iframes, some of my applications using HTML::SimpleLinkExtor have
broken. The text between the iframe tags is what the browser displays
and is usually more HTML, and I need to be able to extract any links in
that text.

I'd like to at least be able to turn on parsing for iframes, even if it
is off by default.

From [email protected] on 2009-06-20 09:17:40
:

On Fri May 15 02:15:45 2009, BDFOY wrote:
> Since the latest versions of HTML::Parser do not parse the content of
> iframes, some of my applications using HTML::SimpleLinkExtor have
> broken. The text between the iframe tags is what the browser displays
> and is usually more HTML, and I need to be able to extract any links in
> that text.

Browsers that support iframes are supposed to ignore everything inside the iframe.  They are 
supposed to render the HTML found at the 'src' location.

> I'd like to at least be able to turn on parsing for iframes, even if it
> is off by default.

I see the point if you need to emulate the behaviour of very old browsers.

A workaround is to invoke a subparser on the iframe content text.  I'll see if I find an easier 
way to do this.

From [email protected] on 2009-06-20 09:24:09
:

The TODO file has this entry:

- make literal tags configurable.  The current list is hardcoded to be "script", "style", "title", 
"iframe", "textarea", "xmp",  and "plaintext".

which would be my preferred way to fix this.

From [email protected] on 2011-09-20 17:20:09
:

Making literal tags configurable would also be useful for those doing
javascript templates with <script type="text/html"> tags.

From [email protected] on 2012-10-17 22:22:02
:

On Sat Jun 20 05:17:40 2009, GAAS wrote:
> > I'd like to at least be able to turn on parsing for iframes, even if
> it
> > is off by default.
> 
> I see the point if you need to emulate the behaviour of very old
> browsers.

What is the point of not parsing the content of iframes?  I can't find 
any justification, and it seems at odds both with the spec and user 
expectations.  Removing this special case would make HTML::Parser simpler 
and more uniform.

Andrew

From [email protected] on 2012-10-18 22:09:53
:

I explained the point just above the text you quoted. What's "the spec" you're
referring to?


HTML::HeadParser t/threads.t fails on perl 5.10.0

Trying to build HTML::HeadParser fails with seg fault on multi-thread perl 5.10.0

t/threads.t .......... Failed 1/1 subtests 
t/tokeparser.t ....... ok     
t/uentities.t ........ ok     
t/unbroken-text.t .... ok   
t/unicode-bom.t ...... ok   
t/unicode.t .......... ok       
t/xml-mode.t ......... ok   

Test Summary Report
-------------------
t/threads.t        (Wstat: 11 (Signal: SEGV) Tests: 0 Failed: 0)
  Non-zero wait status: 11
  Parse errors: Bad plan.  You planned 1 tests but ran 0.
Files=48, Tests=439,  4 wallclock secs ( 0.24 usr  0.14 sys +  3.35 cusr  0.59 csys =  4.32 CPU)
Result: FAIL
Failed 1/48 test programs. 0/439 subtests failed.
make: *** [test_dynamic] Error 255
bash-4.2# uname -a 
Linux fe1dc8c24e7a 5.10.114-16024-gbdf1547bd4f4 #1 SMP PREEMPT Thu Jun 30 18:19:44 PDT 2022 x86_64 x86_64 x86_64 GNU/Linux
Summary of my perl5 (revision 5 version 10 subversion 0) configuration:
  Platform:
    osname=linux, osvers=5.10.114-16024-gbdf1547bd4f4, archname=x86_64-linux-thread-multi
    uname='linux 0cba3c1d43bb 5.10.114-16024-gbdf1547bd4f4 #1 smp preempt thu jun 30 18:19:44 pdt 2022 x86_64 x86_64 x86_64 gnulinux '
    config_args='-des -Dusethreads -Dprefix=/opt/perl-5.10.0 -Dman1dir=none -Dman3dir=none'
    hint=recommended, useposix=true, d_sigaction=define
    useithreads=define, usemultiplicity=define
    useperlio=define, d_sfio=undef, uselargefiles=define, usesocks=undef
    use64bitint=define, use64bitall=define, uselongdouble=undef
    usemymalloc=n, bincompat5005=undef
  Compiler:
    cc='cc', ccflags ='-D_REENTRANT -D_GNU_SOURCE -fwrapv -fno-strict-aliasing -pipe -I/usr/local/include -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64',
    optimize='-O2',
    cppflags='-D_REENTRANT -D_GNU_SOURCE -fwrapv -fno-strict-aliasing -pipe -I/usr/local/include'
    ccversion='', gccversion='7.3.1 20180712 (Red Hat 7.3.1-15)', gccosandvers=''
    intsize=4, longsize=8, ptrsize=8, doublesize=8, byteorder=12345678
    d_longlong=define, longlongsize=8, d_longdbl=define, longdblsize=16
    ivtype='long', ivsize=8, nvtype='double', nvsize=8, Off_t='off_t', lseeksize=8
    alignbytes=8, prototype=define
  Linker and Libraries:
    ld='cc', ldflags =' -L/usr/local/lib'
    libpth=/usr/local/lib /lib/../lib64 /usr/lib/../lib64 /lib /usr/lib /lib64 /usr/lib64 /usr/local/lib64
    libs=-lnsl -lgdbm -ldb -ldl -lm -lcrypt -lutil -lpthread -lc -lgdbm_compat
    perllibs=-lnsl -ldl -lm -lcrypt -lutil -lpthread -lc -lgdbm_compat
    libc=libc-2.26.so, so=so, useshrplib=false, libperl=libperl.a
    gnulibc_version='2.26'
  Dynamic Linking:
    dlsrc=dl_dlopen.xs, dlext=so, d_dlsymun=undef, ccdlflags='-Wl,-E'
    cccdlflags='-fPIC', lddlflags='-shared -O2 -L/usr/local/lib'


Characteristics of this binary (from libperl): 
  Compile-time options: MULTIPLICITY PERL_DONT_CREATE_GVSV
                        PERL_IMPLICIT_CONTEXT PERL_MALLOC_WRAP USE_64_BIT_ALL
                        USE_64_BIT_INT USE_ITHREADS USE_LARGE_FILES
                        USE_PERLIO USE_REENTRANT_API
  Locally applied patches:
        Devel::PatchPerl 2.08
  Built under linux
  Compiled at Jul 22 2022 14:24:22
  @INC:
    /opt/perl-5.10.0/lib/5.10.0/x86_64-linux-thread-multi
    /opt/perl-5.10.0/lib/5.10.0
    /opt/perl-5.10.0/lib/site_perl/5.10.0/x86_64-linux-thread-multi
    /opt/perl-5.10.0/lib/site_perl/5.10.0

Edge case with trailing text unreported

With HTML::Parser v3.76.

Consider the following chunk of data:

Hello world! <span class="highlight">Isn't this wonderful</span> really?

Creating an object, such as:

# using curry and assuming this runs within a module that acts as a simple wrapper, nothing fancy.
my $p = HTML::Parser->new(
        api_version => 3, 
        start_h => [ $self->curry::add_start, 'self, tagname, attr, attrseq, text, column, line, offset, offset_end'],
        end_h   => [ $self->curry::add_end,   'self, tagname, attr, attrseq, text, column, line, offset, offset_end' ],
        marked_sections => 1,
        comment_h => [ $self->curry::add_comment, 'self, text, column, line, offset, offset_end'],
        declaration_h => [ $self->curry::add_declaration, 'self, text, column, line, offset, offset_end'],
        default_h => [ $self->curry::add_default, 'self, tagname, attr, attrseq, text, column, line, offset, offset_end'],
        text_h => [ $self->curry::add_text, 'self, text, column, line, offset, offset_end'],
        empty_element_tags => 1,
        end_document_h => [ $self->curry::end_document, 'self, skipped_text'],
);
$p->parse( $html );
sub add_text
{
    my $self = shift( @_ );
    print( "got '", $_[1], "'\n" );
}

And this would yield:

got 'Hello world! '
got 'Isn't this wonderful'

However, ' really?' is not reported.
One has to explicitly call $p->eof to have the trailing text reported.
If this is intended behaviour, it ought to be made clear in the documentation; however, I think one should not have to call eof to get that last piece of trailing text.
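
For reference, a stripped-down version of the same scenario without the wrapper/curry setup; per the report, the trailing text only shows up once eof is called, since the parser buffers it in case more input follows:

use strict;
use warnings;
use HTML::Parser ();

my $p = HTML::Parser->new(
    api_version => 3,
    text_h      => [ sub { print "got '$_[0]'\n" }, 'text' ],
);
$p->parse(q{Hello world! <span class="highlight">Isn't this wonderful</span> really?});
# " really?" has not been reported at this point (per the report above).
$p->eof;    # flushes the buffered trailing text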

<title> tag triggers end event

The sentence "Here is a <title> tag" triggers an end event at the end of the sentence, despite there being no closing HTML tag.

As far as I can tell, this occurs when a block of text ends with a <title> tag followed by text.
I haven't found any tag other than <title> that causes this issue.

tested with HTML::Parser 3.71 & 3.76
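
A small script for observing the reported events; it only uses documented handlers, and what it prints will depend on the HTML::Parser version in use:

use strict;
use warnings;
use HTML::Parser ();

my $p = HTML::Parser->new(
    api_version => 3,
    start_h => [ sub { print "start: $_[0]\n" },   'tagname' ],
    end_h   => [ sub { print "end:   $_[0]\n" },   'tagname' ],
    text_h  => [ sub { print "text:  '$_[0]'\n" }, 'text'    ],
);
$p->parse('Here is a <title> tag');
$p->eof;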

inject method - API extension request [rt.cpan.org #5941]

Migrated from rt.cpan.org#5941 (status was 'open')

Requestors:

Attachments:

From on 2004-04-05 23:25:04
:

Perl version: v5.8.3 built for i386-linux-thread-multi
HTML::Parser version: 3.36
on Linux 2.4.25, Debian testing dist

I am working on emulating web browsers and found that I need some level of preprocessing in the HTML parser.  A primitive I could use for this is the ability to inject input immediately after the current parse token.

As best I can tell, when a browser hits a chunk of content such as:
<script>
document.write('<a href="http://www.perl.org/">the stuff</a>');
</script>
it essentially injects that text immediately after the </script> element in the input parse buffer.

The attached patch adds an ->inject(chunk) method to HTML::Parser objects; it is far from a clean patch, but it shows my intent.

Here is a sample use of the inject method to do simple preprocessing:

#!/usr/bin/perl
use strict;
use warnings;
use lib 'blib/lib';
use lib 'blib/arch';
use HTML::Parser qw();
use URI::Escape qw();
use IO::String qw();
use IO::Handle qw();

my $h = <<EOF;
<deftag name="foo">bar</deftag>
<deftag name="navbar">
  <foo>
  <table>
  <tr><td><a href="http://www.perl.org/">perl</a>
  <tr><td><a href="http://www.apache.org/">apache</a>
  <tr><td><a href="http://www.mozilla.org/">mozilla</a>
  </table>
</deftag>
<html><head><title>foo</title></head><body>
<navbar>
Testing 1... 2... 3...
</body></html>
EOF

my %special = ();
my $cdt = undef;
my $p;
my @out = (\*STDOUT);
$p = HTML::Parser->new(
    'start_h' => [ sub { my($tag, $attr, $txt) = @_;
        if(exists $special{$tag}) {
            $p->inject($special{$tag});
        } elsif($tag eq 'deftag') {
            $cdt = $attr->{'name'};
            unshift @out, IO::String->new();
        } else {
            $out[0]->print($txt);
        }
    }, 'tag,attr,text' ],
    'text_h' => [ sub { $out[0]->print(shift) }, 'text' ],
    'end_h'  => [ sub { my($tag, $txt) = @_;
        if($tag eq '/deftag') {
            $special{$cdt} = ${$out[0]->string_ref()};
            shift @out;
        } else {
            $out[0]->print($txt);
        }
    }, 'tag,text' ],
) or die "No parser: $!";
$p->parse($h);


examples lack explanation [rt.cpan.org #58016]

Migrated from rt.cpan.org#58016 (status was 'open')

Requestors:

Attachments:

From [email protected] on 2010-06-01 14:21:40
:

Hi!

In Debian we got the following wishlist bug report. Some of the
examples in eg/* are missing a header that explains what the example
is doing; if possible, could such a small note be added to the files?

(Further note that one file is missing an executable bit).

Many thanks for considering! And thanks for this module

Bests
Salvatore

----- Forwarded message from [email protected] -----

From: [email protected]
Resent-From: [email protected]
Reply-To: [email protected], [email protected]
Date: Tue, 01 Jun 2010 13:51:27 +0800
To: [email protected]
Subject: Bug#584088: examples lack explanation

Package: libhtml-parser-perl
Version: 3.65-1
Severity: wishlist
File: /usr/share/doc/libhtml-parser-perl/examples/

Some of the examples lack a header that explains just what they are
trying to do. P.S., some are chmod +x, some aren't.

_______________________________________________
pkg-perl-maintainers mailing list
[email protected]
http://lists.alioth.debian.org/mailman/listinfo/pkg-perl-maintainers


----- End forwarded message -----



From [email protected] on 2010-09-26 00:41:57
:

On Tue Jun 01 10:21:40 2010, [email protected] wrote:
> [...]

I have put together a patch file. There was also an issue that some of
the scripts failed when outputting wide characters.  That is also fixed
in the patch.

Potential speedups for HTML::Entities::encode_entities

I think we could get some speedups in encode_entities by caching some common operations. The examples below would most benefit users encoding lots of data that is heavy on non-named, numeric entities. We may have similar benefits elsewhere.

The first is to cache the results of the sprintf in num_entity. This could be done with no effect on behavior, in exchange for some hash entries. Here are my experiments.

use Benchmark ':all';
use HTML::Entities;

timethese( 300_000,
    {
        original => sub {
            map { HTML::Entities::num_entity($_) } 1..100
        },
        cached => sub {
            map { cached_num_entity($_) } 1..100
        },
    }
);


my %cache;
sub cached_num_entity {
    return $cache{$_[0]} ||= sprintf("&#x%X;", ord($_[0]));
}

gives these results:

    cached: 10 wallclock secs (10.52 usr +  0.00 sys = 10.52 CPU) @ 28517.11/s (n=300000)
  original: 15 wallclock secs (14.28 usr +  0.00 sys = 14.28 CPU) @ 21008.40/s (n=300000)

Bottom line: Hash lookup is faster than the sprintf, so let's cache it.


The other tweak would be to cache the call to num_entity inside the main regex in encode_entities. Swap this:

$$ref =~ s/([^\n\r\t !\#\$%\(-;=?-~])/$char2entity{$1} || num_entity($1)/ge;

for this

$$ref =~ s/([^\n\r\t !\#\$%\(-;=?-~])/$char2entity{$1} ||= num_entity($1)/ge;

This would have the side effect of modifying the %char2entity hash, which is visible to the outside world. If that wasn't OK, we could have a private copy of the hash specifically so it would be modifiable. The potential downside (or upside?) of that would be that if someone outside the module modified %char2entity, it would have no effect on encode_entities.
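
A rough sketch of that private-copy variant; the cache name and the wrapper function are illustrative, not HTML::Entities internals:

use strict;
use warnings;
use HTML::Entities ();

# Module-private copy of the public map, so caching num_entity() results
# here does not change %HTML::Entities::char2entity for anyone else.
my %entity_cache = %HTML::Entities::char2entity;

sub encode_entities_cached {
    my ($text) = @_;
    $text =~ s/([^\n\r\t !\#\$%\(-;=?-~])/$entity_cache{$1} ||= HTML::Entities::num_entity($1)/ge;
    return $text;
}

print encode_entities_cached(qq{text in "\x{2297}\x{2296}"}), "\n";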

For benchmarking encode_entities, I used this:

my $ent = chr(8855);
my $num = chr(8854);
my $unencoded = "$ent$num" x 10;

my $text = <<"HTML";
text in "$unencoded"
HTML

timethese( 1_000_000,
    {
        encode => sub { my $x = encode_entities($text) },
    }
);

Results:

42,281/s for the original unmodified encode_entities.

52,746/s if the encode_entities used the caching num_entity first mentioned, but the main regex is unchanged.

64,769/s if the main conversion regex caches the results of calls to num_entity in %char2entity. Changing this to call the caching num_entity gave no noticeable improvement.


I hope these give some ideas. encode_entities is an absolute workhorse at my job (we generate everything with Template Toolkit), and I'm sure it is for many, many others. Any speedup would have wide-ranging benefits.

option to skip headers found in NOSCRIPT tags [rt.cpan.org #119319]

Migrated from rt.cpan.org#119319 (status was 'new')

Requestors:

From [email protected] on 2016-12-20 10:44:07
:

I'm trying to emulate the logic of a real browser and handle META redirects properly, but sometimes those redirects are contained in NOSCRIPT tags; if I don't skip those cases, I can't emulate the JavaScript logic and access the web app. I need a way to avoid setting headers found within NOSCRIPT tags.
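
One possible workaround sketch (not an existing option): drop <noscript> elements in a filter pass with HTML::Parser's ignore_elements, then hand the remaining markup to HTML::HeadParser.

use strict;
use warnings;
use HTML::Parser     ();
use HTML::HeadParser ();
use HTTP::Headers    ();

my $html = do { local $/; <STDIN> };    # document to examine

# Filter pass: copy every event's source text except noscript elements,
# whose events (and content) are suppressed entirely.
my $filtered = '';
my $filter   = HTML::Parser->new(
    api_version => 3,
    default_h   => [ sub { $filtered .= $_[0] }, 'text' ],
);
$filter->ignore_elements('noscript');
$filter->parse($html);
$filter->eof;

# Head pass: collect headers only from the filtered document.
my $headers = HTTP::Headers->new;
HTML::HeadParser->new($headers)->parse($filtered);
print $headers->as_string;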

Doesn't compile on BeOS [rt.cpan.org #64401]

Migrated from rt.cpan.org#64401 (status was 'new')

Requestors:

From [email protected] on 2011-01-02 21:42:48
:

$ perl Makefile.PL
Writing Makefile for HTML::Parser
$ make
cp lib/HTML/PullParser.pm blib/lib/HTML/PullParser.pm
cp Parser.pm blib/lib/HTML/Parser.pm
cp lib/HTML/Entities.pm blib/lib/HTML/Entities.pm
cp lib/HTML/LinkExtor.pm blib/lib/HTML/LinkExtor.pm
cp lib/HTML/TokeParser.pm blib/lib/HTML/TokeParser.pm
cp lib/HTML/Filter.pm blib/lib/HTML/Filter.pm
cp lib/HTML/HeadParser.pm blib/lib/HTML/HeadParser.pm
/boot/home/config/bin/perl5.12.2
/boot/home/config/lib/perl5/5.12.2/ExtUtils/xsubpp  -typemap
/boot/home/config/lib/perl5/5.12.2/ExtUtils/typemap -typemap typemap 
Parser.xs > Parser.xsc && /boot/home/config/bin/perl5.12.2
-MExtUtils::Command -e 'mv' -- Parser.xsc Parser.c
/boot/home/config/bin/perl5.12.2 mkhctype >hctype.h
/boot/home/config/bin/perl5.12.2 mkpfunc >pfunc.h
cc -c   -I/boot/develop/headers/gnu -fno-strict-aliasing -pipe -O  
-DVERSION=\"3.68\" -DXS_VERSION=\"3.68\" -fpic
"-I/boot/home/config/lib/perl5/5.12.2/BePC-beos/CORE"  -DMARKED_SECTION
Parser.c
In file included from /boot/home/Downloads/HTML-Parser-3.68/Parser.xs:18:
/boot/home/config/lib/perl5/5.12.2/BePC-beos/CORE/perl.h:2598:
beos/beosish.h: No such file or directory
In file included from
/boot/home/config/lib/perl5/5.12.2/BePC-beos/CORE/perl.h:4946,
                 from /boot/home/Downloads/HTML-Parser-3.68/Parser.xs:18:
/boot/home/config/lib/perl5/5.12.2/BePC-beos/CORE/proto.h:297: parse
error before `*'
In file included from
/boot/home/config/lib/perl5/5.12.2/BePC-beos/CORE/perl.h:4976,
                 from /boot/home/Downloads/HTML-Parser-3.68/Parser.xs:18:
/boot/home/config/lib/perl5/5.12.2/BePC-beos/CORE/intrpvar.h:87: parse
error before `PL_statbuf'
/boot/home/config/lib/perl5/5.12.2/BePC-beos/CORE/intrpvar.h:87:
warning: data definition has no type or storage class
/boot/home/config/lib/perl5/5.12.2/BePC-beos/CORE/intrpvar.h:88: parse
error before `PL_statcache'
/boot/home/config/lib/perl5/5.12.2/BePC-beos/CORE/intrpvar.h:88:
warning: data definition has no type or storage class
make: *** [Parser.o] Error 1   
njh@gateway:~$ 


Installing Alpine 3.16.1 and Perl 5.34.1 fails on EXTERN.h

I tried to install HTML::Parser with cpanm and get the following error in the logfile:

cc -c   -D_REENTRANT -D_GNU_SOURCE -D_GNU_SOURCE -fwrapv -fno-strict-aliasing -pipe -fstack-protector-strong -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -Os -fomit-frame-pointer   -DVERSION=\"3.78\" -DXS_VERSION=\"3.78\" -fPIC "-I/usr/lib/perl5/core_perl/CORE"  -DMARKED_SECTION Parser.c
Parser.xs:17:10: fatal error: EXTERN.h: No such file or directory
   17 | #include "EXTERN.h"
      |          ^~~~~~~~~~
compilation terminated.

Do you know how to solve this?

Wrong parse [rt.cpan.org #55629]

Migrated from rt.cpan.org#55629 (status was 'open')

Requestors:

From [email protected] on 2010-03-16 15:09:51
:

HTML:
<iframe/**/src="http://mail.ru"  name="poc iframe jacking"  width="100%"
height="100%" scrolling="auto" frameborder="no"></iframe>

$parser = HTML::Parser->new(
 api_version => 3,
 start_h => [ sub{
   my ($Self, $Text, $Tag, $Attr) = @_;
   print "Tag is: ".$Tag;
 }, "self, text, tagname, attr" ]
);
$parser->ignore_elements( qw( iframe ));
$parser->ignore_tags( qw( iframe ));

output:
Tag is: iframe/**/src="http://mail.ru"


From [email protected] on 2010-03-18 13:51:31
:

On Tue Mar 16 11:09:51 2010, NIKOLAS wrote:
> HTML:
> <iframe/**/src="http://mail.ru"  name="poc iframe jacking"  width="100%"
> height="100%" scrolling="auto" frameborder="no"></iframe>
> 
> $parser = HTML::Parser->new(
>  api_version => 3,
>  start_h => [ sub{
>    my ($Self, $Text, $Tag, $Attr) = @_;
>    print "Tag is: ".$Tag;
>  }, "self, text, tagname, attr" ]
> );
> $parser->ignore_elements( qw( iframe ));
> $parser->ignore_tags( qw( iframe ));
> 
> output:
> Tag is: iframe/**/src="http://mail.ru"

HTML: <script/src="ya.ru"> is parsed wrongly in the same way.


From [email protected] on 2010-04-04 20:38:08
:

I don't understand what rules you propose that HTML::Parser should follow to parse this kind of 
bogus HTML.  You think it should treat "/**/" and "/" as whitespace?

From [email protected] on 2010-06-01 07:13:54
:

Here are 3 regular expressions that, applied to the input text, correct this
problem:
s{(/\*)}{ $1}g;
s{(\*/)}{$1 }g;
s{(<[^/\s<>]+)/}{$1 /}g;

You will probably find a more correct architectural solution.
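
For illustration, a sketch that wires those substitutions into a preprocessing pass before the markup reaches HTML::Parser; whether this is the right fix is exactly the open architectural question:

use strict;
use warnings;
use HTML::Parser ();

# Apply the substitutions suggested above before parsing.
sub preprocess {
    my ($html) = @_;
    $html =~ s{(/\*)}{ $1}g;            # insert a space before "/*"
    $html =~ s{(\*/)}{$1 }g;            # insert a space after "*/"
    $html =~ s{(<[^/\s<>]+)/}{$1 /}g;   # detach a "/" glued to a tag name
    return $html;
}

my $parser = HTML::Parser->new(
    api_version => 3,
    start_h     => [ sub { print "Tag is: $_[0]\n" }, 'tagname' ],
);
$parser->parse( preprocess(
    '<iframe/**/src="http://mail.ru" name="poc iframe jacking"></iframe>'
) );
$parser->eof;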


Tokenizing bug. Some tokens are split into 2

This problem happens on a particular webpage
https://www.radiofrance.fr/franceinter/podcasts

This is my golfed script which shows the bug

#!/usr/bin/perl
package myparser;
use strict;
use warnings;
use v5.10;
use base qw(HTML::Parser);

sub text {
    my ($self, $text, $is_cdata) = @_;
    say "\"$text\"";
}

package main;
use strict;
use warnings;

my $p = myparser->new;
$p->parse_file(shift // exit);

Unfortunately, I can't post a golfed HTML snippet because when I try to reduce the size of the webpage, the bug disappears. So I will have to explain the exact steps I took to reproduce the bug.

In Chromium, go to https://www.radiofrance.fr/franceinter/podcasts.
Then load the entire webpage by going to the bottom and clicking on "VOIR PLUS DE PODCASTS" repeatedly until everything is loaded.
Then save the webpage.

After that you just have to execute the script example with the downloaded page as argument.

The script prints all the text that is outside of any tag, like this: /tag>TEXT HERE<othertag

THE BUG
The bug is that some "text elements" are split in two.
This happens for several podcast names. "Sur les épaules de Darwin" is one of them.

You can see that the script will output

"Sur les épaules de"
" Darwin"

instead of just "Sur les épaules de Darwin"
This also happens to "Sur Les routes de la musique" (just below) and a few others.

Now, I found that when deleting <meta http-equiv="Content-Type" content="text/html; charset=UTF-8">, right at the top of <head></head>, the bug disappears. The same happens when deleting just ; charset=UTF-8.

The problem is that the bug also disappears when I leave the charset as is and delete a bunch of the stuff inside <head></head>, or when I delete a lot of the divs corresponding to the other podcast entries of the index.

This is all the information that I have.
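
One possibly related knob: HTML::Parser documents an unbroken_text attribute that makes each text run arrive as a single event instead of arbitrary chunks. Whether it covers this particular report is an assumption, but a minimal sketch looks like this:

use strict;
use warnings;
use HTML::Parser ();

my $p = HTML::Parser->new(
    api_version => 3,
    text_h      => [ sub { print "\"$_[0]\"\n" }, 'text' ],
);
$p->unbroken_text(1);    # report each text run as one chunk
$p->parse_file(shift @ARGV // exit);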

Handle <unclosed </tags [rt.cpan.org #47748]

Migrated from rt.cpan.org#47748 (status was 'new')

Requestors:

From [email protected] on 2009-07-09 17:02:41
:

The other day, I received a spam e-mail with a text/html body part like
this:

==============================================================
blah blah<br><br
<a href=http://domain/path.html target=_blank>Go!</a><br><p>blah
==============================================================

My spam filter failed to parse the href URL from the message body due to
the unclosed "<br" tag.  Closing it causes HTML::Parser to correctly
parse the URL.

I noticed that http://search.cpan.org/dist/HTML-Parser/Parser.pm#BUGS says:

«Unclosed start or end tags, e.g. "<tt<b>...</b</tt>" are not recognized.»

I don't understand what the implication of this is, however.  Is it a
conscious decision not to support unclosed tags, or has there just been
no use case for a fix?

I tested how various browsers handle the HTML code from the spam message
above:

At least the following do render the link despite the preceding broken
"<br" tag:  Firefox 3, Konqueror from KDE 3.5.9, Safari 3 & 4, Mail.app

At least the following do NOT render the link:  IE 6, Opera 9.63

I'd appreciate it if an option could be added to HTML::Parser to
recognize unclosed tags.
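
A minimal reproduction attempt based on the message above (the exact behaviour may vary between HTML::Parser versions):

use strict;
use warnings;
use HTML::Parser ();

my $html = <<'EOF';
blah blah<br><br
<a href=http://domain/path.html target=_blank>Go!</a><br><p>blah
EOF

my $p = HTML::Parser->new(
    api_version => 3,
    start_h     => [
        sub {
            my ($tagname, $attr) = @_;
            print "start: $tagname href=", $attr->{href} // '-', "\n";
        },
        'tagname, attr',
    ],
);
$p->parse($html);
$p->eof;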

<meta name="twitter:card"...> (and similar) entities trip up the parser [rt.cpan.org #132922]

Migrated from rt.cpan.org#132922 (status was 'new')

Requestors:

From [email protected] on 2020-07-01 15:53:28
:

Looks like it calls HTTP::Headers->push_header() with a constructed header name of X-Meta-Twitter:card, which obviously fails. 

From [email protected] on 2020-07-01 15:55:58
:


> Looks like it calls HTTP::Headers->push_header() with a constructed header name of X-Meta-Twitter:card, which obviously fails. 

Where 'it' is, obviously, HTML::HeadParser - sorry for being unclear.
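
A minimal reproduction attempt based on this report; whether and how it fails will depend on the HTML::HeadParser and HTTP::Headers versions in use:

use strict;
use warnings;
use HTML::HeadParser ();
use HTTP::Headers    ();

my $headers = HTTP::Headers->new;
my $hp      = HTML::HeadParser->new($headers);
$hp->parse('<head><meta name="twitter:card" content="summary"></head>');
# Per the report, the meta tag above leads to a push_header() call with
# the constructed name "X-Meta-Twitter:card", which fails.
print $headers->as_string;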

Change in Perl 5 blead causes error in HTML::Entities::encode_entities [rt.cpan.org #119675]

Migrated from rt.cpan.org#119675 (status was 'new')

Requestors:

From [email protected] on 2017-01-03 17:21:55
:

This was reported to the Perl 5 bug tracker in a "blead breaks CPAN" ticket:

https://rt.perl.org/Ticket/Display.html?id=130487

In that ticket I have attached my reduction of the problem to HTML::Entities::encode_entities() and the error output I get from running it on blead.  perl-5.25.7 or later should display the problem.

Thank you very much.
Jim Keenan

[PATCH] Protect active parser from being freed. [rt.cpan.org #115034]

Migrated from rt.cpan.org#115034 (status was 'open')

Requestors:

  • 'spro^^%^6ut#@&$%*c

Attachments:

From $_ = 'spro^^%^6ut#@&$�>#!^!#&!pan.org'; y/a-z.@//cd; print on 2016-06-03 13:33:47
:


From [email protected] on 2020-08-24 16:17:42
:

On Fri Jun 03 09:33:47 2016, SPROUT wrote:
> This is a follow-up to
> <http://www.nntp.perl.org/group/perl.libwww/;msgid=7A27BD12-BA67-40B7-
> [email protected]>, from 6 years ago, but this time with a
> patch!
> 
> If you try to free the parser from a callback, it crashes, because
> pstate gets freed while things in the parser are still referring to
> it.


Opened a PR with this change and a unit test, as this is still an issue and generates a SEGV.
View: https://github.com/gisle/html-parser/pull/13
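
A minimal sketch of the scenario described in the ticket, where the last Perl-level reference to the parser is dropped from inside one of its own callbacks; per the report, this class of code can segfault on an unpatched HTML::Parser:

use strict;
use warnings;
use HTML::Parser ();

my $p;
$p = HTML::Parser->new(
    api_version => 3,
    start_h     => [ sub { undef $p }, 'tagname' ],   # free the parser mid-parse
);
$p->parse('<a href="http://example.com/">x</a>');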

Feature request: Strict mode for decode_entities()

Since 2004 decode_entities() has supported the merging of surrogate pairs; see http://rt.cpan.org/Ticket/Display.html?id=7785 . This means that, for example, &#xD83D;&#xDE01; will be decoded into a single code point. My understanding is that this is not covered by any spec.

I therefore propose to add a function decode_entities_strict() that does the same as decode_entities() but rejects surrogate pairs.

Attached is a sample script that shows the effect:
surrogate_pair.pl.txt
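
A hypothetical sketch of the requested behaviour; decode_entities_strict() does not exist in HTML::Entities, and the name and rejection strategy here are illustrative only:

use strict;
use warnings;
use HTML::Entities ();

# Reject numeric character references in the UTF-16 surrogate range
# (U+D800..U+DFFF) instead of merging them, then delegate to the normal
# decode_entities().
sub decode_entities_strict {
    my ($string) = @_;
    while ($string =~ /&\#(?:[xX]([0-9A-Fa-f]+)|([0-9]+));/g) {
        my $cp = defined $1 ? hex($1) : $2;
        die "surrogate numeric character reference rejected\n"
            if $cp >= 0xD800 && $cp <= 0xDFFF;
    }
    return HTML::Entities::decode_entities($string);
}

print decode_entities_strict('&lt;ok&gt;'), "\n";                  # decodes normally
print eval { decode_entities_strict('&#xD83D;&#xDE01;') } // "rejected\n";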
