libwww-perl / html-parser
The HTML-Parser distribution is a collection of modules that parse and extract information from HTML documents.

License: Other


html-parser's Issues

<title> tag triggers end event

The sentence "Here is a <title> tag" triggers an end event at the end of the sentence, despite there being no closing HTML tag.

As far as I can tell, this occurs when a block of text ends with a <title> tag followed by text.
I haven't found any tag other than <title> that causes this issue.

Tested with HTML::Parser 3.71 & 3.76.
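A minimal reproduction sketch (the handler wiring is mine, and the explanation is a guess: "title" is on the parser's hardcoded literal-tag list, so the text after `<title>` appears to be consumed as title content and an end event is reported even though no `</title>` exists):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use HTML::Parser;

my @events;
my $p = HTML::Parser->new(
    api_version => 3,
    start_h => [ sub { push @events, "start:$_[0]" }, 'tagname' ],
    end_h   => [ sub { push @events, "end:$_[0]"   }, 'tagname' ],
);

$p->parse('Here is a <title> tag');
$p->eof;

# Reports an end event for "title" despite the missing </title>
print "$_\n" for @events;
```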

Doesn't compile on BeOS [rt.cpan.org #64401]

Migrated from rt.cpan.org#64401 (status was 'new')

Requestors:

From [email protected] on 2011-01-02 21:42:48
:

$ perl Makefile.PL
Writing Makefile for HTML::Parser
$ make
cp lib/HTML/PullParser.pm blib/lib/HTML/PullParser.pm
cp Parser.pm blib/lib/HTML/Parser.pm
cp lib/HTML/Entities.pm blib/lib/HTML/Entities.pm
cp lib/HTML/LinkExtor.pm blib/lib/HTML/LinkExtor.pm
cp lib/HTML/TokeParser.pm blib/lib/HTML/TokeParser.pm
cp lib/HTML/Filter.pm blib/lib/HTML/Filter.pm
cp lib/HTML/HeadParser.pm blib/lib/HTML/HeadParser.pm
/boot/home/config/bin/perl5.12.2
/boot/home/config/lib/perl5/5.12.2/ExtUtils/xsubpp  -typemap
/boot/home/config/lib/perl5/5.12.2/ExtUtils/typemap -typemap typemap 
Parser.xs > Parser.xsc && /boot/home/config/bin/perl5.12.2
-MExtUtils::Command -e 'mv' -- Parser.xsc Parser.c
/boot/home/config/bin/perl5.12.2 mkhctype >hctype.h
/boot/home/config/bin/perl5.12.2 mkpfunc >pfunc.h
cc -c   -I/boot/develop/headers/gnu -fno-strict-aliasing -pipe -O  
-DVERSION=\"3.68\" -DXS_VERSION=\"3.68\" -fpic
"-I/boot/home/config/lib/perl5/5.12.2/BePC-beos/CORE"  -DMARKED_SECTION
Parser.c
In file included from /boot/home/Downloads/HTML-Parser-3.68/Parser.xs:18:
/boot/home/config/lib/perl5/5.12.2/BePC-beos/CORE/perl.h:2598:
beos/beosish.h: No such file or directory
In file included from
/boot/home/config/lib/perl5/5.12.2/BePC-beos/CORE/perl.h:4946,
                 from /boot/home/Downloads/HTML-Parser-3.68/Parser.xs:18:
/boot/home/config/lib/perl5/5.12.2/BePC-beos/CORE/proto.h:297: parse
error before `*'
In file included from
/boot/home/config/lib/perl5/5.12.2/BePC-beos/CORE/perl.h:4976,
                 from /boot/home/Downloads/HTML-Parser-3.68/Parser.xs:18:
/boot/home/config/lib/perl5/5.12.2/BePC-beos/CORE/intrpvar.h:87: parse
error before `PL_statbuf'
/boot/home/config/lib/perl5/5.12.2/BePC-beos/CORE/intrpvar.h:87:
warning: data definition has no type or storage class
/boot/home/config/lib/perl5/5.12.2/BePC-beos/CORE/intrpvar.h:88: parse
error before `PL_statcache'
/boot/home/config/lib/perl5/5.12.2/BePC-beos/CORE/intrpvar.h:88:
warning: data definition has no type or storage class
make: *** [Parser.o] Error 1   
njh@gateway:~$ 


Make iframe parsing configurable [rt.cpan.org #46099]

Migrated from rt.cpan.org#46099 (status was 'open')

Requestors:

From [email protected] on 2009-05-15 06:15:45
:

Since the latest versions of HTML::Parser do not parse the content of
iframes, some of my applications using HTML::SimpleLinkExtor have
broken. The text between the iframe tags is what the browser displays
and is usually more HTML, and I need to be able to extract any links in
that text.

I'd like to at least be able to turn on parsing for iframes, even if it
is off by default.

From [email protected] on 2009-06-20 09:17:40
:

On Fri May 15 02:15:45 2009, BDFOY wrote:
> Since the latest versions of HTML::Parser do not parse the content of
> iframes, some of my applications using HTML::SimpleLinkExtor have
> broken. The text between the iframe tags is what the browser displays
> and is usually more HTML, and I need to be able to extract any links in
> that text.

Browsers that support iframes are supposed to ignore everything inside the iframe.  They are 
supposed to render the HTML found at the 'src' location.

> I'd like to at least be able to turn on parsing for iframes, even if it
> is off by default.

I see the point if you need to emulate the behaviour of very old browsers.

A workaround is to invoke a subparser on the iframe content text.  I'll see if I find an easier 
way to do this.
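The subparser workaround can be sketched like this (the wiring and sample URL are mine, not from the ticket; because iframe is a literal tag, its whole body arrives at the text handler as one chunk, which we feed to a second HTML::Parser instance):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use HTML::Parser;

my @links;
my $inner = HTML::Parser->new(
    api_version => 3,
    start_h => [ sub {
        my ($tag, $attr) = @_;
        push @links, $attr->{href} if $tag eq 'a' && defined $attr->{href};
    }, 'tagname,attr' ],
);

my $in_iframe = 0;
my $outer = HTML::Parser->new(
    api_version => 3,
    start_h => [ sub { $in_iframe = 1 if $_[0] eq 'iframe' }, 'tagname' ],
    end_h   => [ sub { $in_iframe = 0 if $_[0] eq 'iframe' }, 'tagname' ],
    text_h  => [ sub {
        if ($in_iframe) {
            # The whole iframe body arrives as one literal text chunk.
            $inner->parse($_[0]);
            $inner->eof;
        }
    }, 'text' ],
);

$outer->parse('<iframe><a href="http://example.com/">x</a></iframe>');
$outer->eof;
print "$_\n" for @links;
```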

From [email protected] on 2009-06-20 09:24:09
:

The TODO file has this entry:

- make literal tags configurable.  The current list is hardcoded to be "script", "style", "title", 
"iframe", "textarea", "xmp",  and "plaintext".

which would be my preferred way to fix this.

From [email protected] on 2011-09-20 17:20:09
:

Making literal tags configurable would also be useful for those doing
javascript templates with <script type="text/html"> tags.

From [email protected] on 2012-10-17 22:22:02
:

On Sat Jun 20 05:17:40 2009, GAAS wrote:
> > I'd like to at least be able to turn on parsing for iframes, even if
> it
> > is off by default.
> 
> I see the point if you need to emulate the behaviour of very old
> browsers.

What is the point of not parsing the content of iframes?  I can't find 
any justification, and it seems at odds both with the spec and user 
expectations.  Removing this special case would make HTML::Parser simpler 
and more uniform.

Andrew

From [email protected] on 2012-10-18 22:09:53
:

I explained the point just above the text you quoted. What's "the spec" you're
referring to?


examples lack explanation [rt.cpan.org #58016]

Migrated from rt.cpan.org#58016 (status was 'open')

Requestors:

Attachments:

From [email protected] on 2010-06-01 14:21:40
:

Hi!

In Debian we got the following wishlist bug report. Some of the
examples in eg/* are missing a header that explains what the example
is doing; if possible, could such a small note be added to the
files?

(Further note that one file is missing an executable bit).

Many thanks for considering! And thanks for this module

Bests
Salvatore

----- Forwarded message from [email protected] -----

From: [email protected]
Resent-From: [email protected]
Reply-To: [email protected], [email protected]
Date: Tue, 01 Jun 2010 13:51:27 +0800
To: [email protected]
Subject: Bug#584088: examples lack explanation

Package: libhtml-parser-perl
Version: 3.65-1
Severity: wishlist
File: /usr/share/doc/libhtml-parser-perl/examples/

Some of the examples lack a header that explains just what they are
trying to do. P.S., some are chmod +x, some aren't.

_______________________________________________
pkg-perl-maintainers mailing list
[email protected]
http://lists.alioth.debian.org/mailman/listinfo/pkg-perl-maintainers


----- End forwarded message -----



From [email protected] on 2010-09-26 00:41:57
:

On Tue Jun 01 10:21:40 2010, [email protected] wrote:
> In Debian we got the following wishlist bugreport. Some of the
> examples in eg/* are missing a header that explains what the example
> is doing, if possible could be such a small note be added to the
> files?
>
> (Further note that one file is missing an executable bit).

I have put together a patch file. There was also an issue that some of
the scripts failed when outputting wide characters.  That is also fixed
in the patch.

HTML::TreeBuilder generates text nodes in a strange encoding [rt.cpan.org #14212]

Migrated from rt.cpan.org#14212 (status was 'open')

Requestors:

Attachments:

From on 2005-08-17 14:31:39
:

I am using perl-HTML-Tree-3.18 and have run into the following problem:
when I use HTML::TreeBuilder to parse a tree that contains text like
"Geb&uuml;hr vor Ort von &euro; 30,- pro Woche" (without quotes), I get the string back in a strange encoding: &uuml; is encoded as one char, &euro; is encoded as two chars. I think that is incorrect.

From on 2005-08-17 14:42:14
:

In the above post the string should be read as:
"Geb&amp;uuml;hr vor Ort von &amp;euro; 30,- pro Woche"
TreeBuilder seems to decode the string entities via HTML::Entities. Is
it possible to extend the tree builder with an option that allows
skipping the decoding of HTML entities into chars? The only way out
seems to be to call encode again, but that is not pretty.

From on 2005-09-08 19:32:13
:

The problem seems to be solved after upgrading from Perl v5.8.3 to v5.8.6.


From on 2005-10-06 18:37:33
:

The Debian stable folk have 5.8.4, and this bug is affecting their programs.

How could they work around this problem? I'm looking at the source code
and I'm tempted to comment out the HTML::Entities::encode line... but
would that then create other problems?

From [email protected] on 2006-11-11 23:13:43
:

Can't reproduce with 3.18 and up. Please resubmit with a test case if
you are still having this issue.

As an aside, I have added this case as a test in HTML-Tree 3.22, which
will be released as part of the Chicago Hackathon this weekend.

From [email protected] on 2006-11-13 09:50:27
:

Hello! Thanks for paying attention to the (possible) problem.

Finally, as I said above, some Perl installations work and some do not,
and I've come to the conclusion that it's a core Perl bug with Unicode
chars. What version of Perl do you use for testing?

Can you please define more precisely the return value of
"HTML::Entity->as_text()"? Should it return UTF-8 text? Localized
text? While investigating the problem, I read
http://jerakeen.org/files/2005/perl-utf8.slides.pdf -- it has a very
nice chart. Consider reading it!

Actually, I found this problem while implementing an HTML parser
that stores its data in a MySQL DB, and this data is supposed to be
displayed as HTML again. So, in my case I used the following flow:

my $html_root = HTML::TreeBuilder->new_from_content($contents);

foreach ($html_root->guts())
{
  ...
  $dbh->prepare("insert into my_table (id, contents) values ($id,
?)")->execute(HTML::Entities::encode_entities($_->as_trimmed_text()));
}

so I used the "reverse" conversion for chars. Unfortunately, I still
don't have a working example that stores Unicode strings into MySQL 4.0.x
from Perl and reads them back correctly from Java :( but that's out of the
scope of the problem being discussed.

From [email protected] on 2006-11-13 16:29:30
:

On Mon Nov 13 04:50:27 2006, [email protected] wrote:
> Finally, as I said above, some of perl installations work, some -- not,
> and I've come to the conclusion, it's a core Perl bug with unicode
> chars. What version of Perl do you use for testing?

I use Apple's Perl (5.8.6 on OSX), Debian sarge's Perl (5.8.4), and a
custom Perl (5.8.2) for release testing.  I do have a 5.6 install
sitting around, and t/body.t fails on unicode escape tests.  (I should
skip those on that platform.)
 
> Can you please, define more precisely the return value for
> "HTML::Entity->as_text()"? Should it return the UTF-8 text? Localized
> text?

It returns the text exactly as it's contained in each HTML::Element (not
HTML::Entity) and children.  If that's UTF-8, Unicode, ISO-8859-1, or
whatever, that's been decided by HTML::Parser.  HTML::Element is just
the middleman, doing simple concatenation.

If you could give a test case that shows the broken behavior on your
platform, I would appreciate it.

From [email protected] on 2009-11-23 23:19:09
:

I'm not sure if this module is still being actively maintained but I am 
experiencing the same issues on perl 5.10. I don't know if the issue is 
with HTML::Element or an underlying module.

Here is a test case which fails on Fedora 9 platform:
==============================
#!/usr/bin/perl

use HTML::Element;
use Test::More tests => 2;

my $test_string = 'This is a test 漢語';

like( $test_string, qr/漢語/xms, 'Found chinese chars input string' );

my $h = HTML::Element->new( 'p' );
$h->push_content('This is a test 漢語');

like( $h->as_HTML, qr/漢語/xms, 'Found chinese chars in html output' );
========================

Running this on Fedora 9 produces the following output:

1..2
ok 1 - Found chinese chars input string
not ok 2 - Found chinese chars in html output
#   Failed test 'Found chinese chars in html output'
#   at ./test2.pl line 13.
#                   '<p>This is a test 
&aelig;&frac14;&cent;&egrave;&ordf;&#158;
# '
#     doesn't match '(?msx-i:漢語)'
# Looks like you failed 1 test of 2.

From [email protected] on 2009-11-23 23:28:17
:

Sorry, not sure if I was experiencing the same issue as described above,
but it seemed the same. Just realized that passing an empty string to
as_HTML solves this issue. Updated test case, which passes:

===================================
#!/usr/bin/perl

use HTML::Element;
use Test::More tests => 2;

my $test_string = 'This is a test 漢語';

like( $test_string, qr/漢語/xms, 'Found chinese chars input string' );

my $h = HTML::Element->new( 'p' );
$h->push_content('This is a test 漢語');

like( $h->as_HTML( '' ), qr/漢語/xms, 'Found chinese chars in html output' );
===================================
1..2
ok 1 - Found chinese chars input string
ok 2 - Found chinese chars in html output


From [email protected] on 2009-11-24 10:38:26
:

Using as_HTML('') is funny, because in this case you tell HTML::Element
not to encode entities at all (the default should be '<>&').
Why do you expect as_HTML() to return a non-HTML-encoded string
back? I would use as_text() for this case. Or do you mean that as_HTML()
basically does incorrect HTML encoding for Chinese characters? Tried
with a plain first argument, it seems to be a bug, but of a different nature.

From [email protected] on 2010-04-24 04:17:21
:

This is a bug in HTML::Entities: line 479 is what encodes the Chinese
characters. Adding the following debug code to HTML/Entities.pm reveals
this:

print(STDERR "1: ref = $$ref\n");
$$ref =~ s/([^\n\r\t !\#\$%\(-;=?-~])/$char2entity{$1} || num_entity($1)/ge;
print(STDERR "2: ref = $$ref\n");

1: ref = This is a test 漢語
2: ref = This is a test &aelig;&frac14;&cent;&egrave;&ordf;&#158;

Cheers, Jeff.

From [email protected] on 2010-07-09 13:16:30
:

From your example I can't tell if the string you passed to HTML::Entities::encode() was a Unicode
string or the decoded UTF-8 bytes.

Please try the attached test program.  It prints:

# encode-test.pl:4: "This is a test \x{6F22}\x{8A9E}"
# encode-test.pl:5: "This is a test &#x6F22;&#x8A9E;"

for me, so it seems correct.  If I comment out the 'use utf8;' line then the output becomes:

# encode-test.pl:4: "This is a test \xE6\xBC\xA2\xE8\xAA\x9E"
# encode-test.pl:5: "This is a test &aelig;&frac14;&cent;&egrave;&ordf;&#158;"

If you get different results, please tell me what version of perl and HTML::Parser you are using.
If you get the result above then I don't consider this a bug.

From [email protected] on 2010-07-09 13:17:49
:

On Fri Jul 09 09:16:30 2010, GAAS wrote:
> Please try the attached test program.  It prints:

Of course, I forgot to attach the file :-(

Edge case with trailing text unreported

With HTML::Parser v3.76.

Consider the following chunk of data:

Hello world! <span class="highlight">Isn't this wonderful</span> really?

Creating an object, such as:

# using curry and assuming this runs within a module that acts as a simple wrapper, nothing fancy.
my $p = HTML::Parser->new(
        api_version => 3, 
        start_h => [ $self->curry::add_start, 'self, tagname, attr, attrseq, text, column, line, offset, offset_end'],
        end_h   => [ $self->curry::add_end,   'self, tagname, attr, attrseq, text, column, line, offset, offset_end' ],
        marked_sections => 1,
        comment_h => [ $self->curry::add_comment, 'self, text, column, line, offset, offset_end'],
        declaration_h => [ $self->curry::add_declaration, 'self, text, column, line, offset, offset_end'],
        default_h => [ $self->curry::add_default, 'self, tagname, attr, attrseq, text, column, line, offset, offset_end'],
        text_h => [ $self->curry::add_text, 'self, text, column, line, offset, offset_end'],
        empty_element_tags => 1,
        end_document_h => [ $self->curry::end_document, 'self, skipped_text'],
);
$p->parse( $html );
sub add_text
{
    my $self = shift( @_ );
    print( "got '", $_[1], "'\n" );
}

And this would yield:

got 'Hello world! '
got 'Isn't this wonderful'

However, ' really?' is not being reported.
One has to explicitly call $p->eof to have the trailing text reported.
If this is an intended feature, then it ought to be made clear in the documentation. However, I think one should not have to call eof to get that last trailing text.
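For what it's worth, this matches the chunk-oriented design of parse(): trailing text is buffered because a later chunk could extend the same text run. A self-contained sketch of the reported behaviour (plain handlers instead of the curry-based wrapper above):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use HTML::Parser;

my @text;
my $p = HTML::Parser->new(
    api_version => 3,
    text_h => [ sub { push @text, $_[0] }, 'text' ],
);

$p->parse(q{Hello world! <span class="highlight">Isn't this wonderful</span> really?});
# At this point ' really?' is still buffered: parse() is incremental,
# and a later chunk could extend that same text run.
$p->eof;   # signals end of input; the trailing text is now reported
print "got '$_'\n" for @text;
```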

Installing on Alpine 3.16.1 with Perl 5.34.1 fails on EXTERN.h

I am trying to install HTML::Parser with cpanm. I get this error in the logfile:

cc -c   -D_REENTRANT -D_GNU_SOURCE -D_GNU_SOURCE -fwrapv -fno-strict-aliasing -pipe -fstack-protector-strong -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -Os -fomit-frame-pointer   -DVERSION=\"3.78\" -DXS_VERSION=\"3.78\" -fPIC "-I/usr/lib/perl5/core_perl/CORE"  -DMARKED_SECTION Parser.c
Parser.xs:17:10: fatal error: EXTERN.h: No such file or directory
   17 | #include "EXTERN.h"
      |          ^~~~~~~~~~
compilation terminated.

Do you know how to solve this?
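This usually means the Perl development headers are missing; on Alpine they are packaged separately from the perl runtime. A likely fix (package names are my assumption based on Alpine's usual packaging):

```shell
# EXTERN.h and friends ship with the perl-dev package on Alpine;
# build-base provides cc and make for the XS compile.
apk add perl-dev build-base
cpanm HTML::Parser
```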

Tokenizing bug. Some tokens are split into 2

This problem happens on a particular webpage
https://www.radiofrance.fr/franceinter/podcasts

This is my golfed script which shows the bug

#!/usr/bin/perl
package myparser;
use strict;
use warnings;
use v5.10;
use base qw(HTML::Parser);

sub text {
    my ($self, $text, $is_cdata) = @_;
    say "\"$text\"";
}

package main;
use strict;
use warnings;

my $p = myparser->new;
$p->parse_file(shift // exit);

Unfortunately, I can't post a golfed HTML snippet because when I try to reduce the size of the webpage, the bug disappears. So I will have to explain the exact steps I took to reproduce the bug.

In Chromium, go to https://www.radiofrance.fr/franceinter/podcasts.
Then load the entire webpage by going to the bottom and clicking on "VOIR PLUS DE PODCASTS" repeatedly until everything is loaded.
Then save the webpage.

After that you just have to execute the script example with the downloaded page as argument.

The script prints all the text that is outside of any tag, like this: /tag>TEXT HERE<othertag

THE BUG
The bug is that some "text elements" are split in two.
This happens for several podcast names; "Sur les épaules de Darwin" is one of them.

You can see that the script will output

"Sur les épaules de"
" Darwin"

instead of just "Sur les épaules de Darwin"
This also happens to "Sur Les routes de la musique" (just below) and a few others.

Now, I found that when deleting <meta http-equiv="Content-Type" content="text/html; charset=UTF-8">, right at the top of <head></head>, the bug disappears. It also disappears when deleting just ; charset=UTF-8

The problem is that the bug also disappears when I leave the charset as is and delete a bunch of the stuff inside <head></head>, or when I delete a lot of the divs corresponding to the other podcast entries of the index.

This is all the information that I have.
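One thing worth ruling out: HTML::Parser's documented unbroken_text attribute. By default a text run may be delivered in more than one event, and the split position depends on internal buffering (which would explain why unrelated deletions elsewhere in the file make the symptom vanish). The sketch below is mine; it feeds the same text in two chunks, the way parse_file can when a read boundary falls inside a text run, and compares the default behaviour with unbroken_text enabled:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use HTML::Parser;

# Parse the same input in two chunks, with and without unbroken_text.
sub collect {
    my (%opt) = @_;
    my @text;
    my $p = HTML::Parser->new(
        api_version => 3,
        text_h => [ sub { push @text, $_[0] }, 'text' ],
        %opt,
    );
    $p->parse('<p>Sur les épaules de');
    $p->parse(' Darwin</p>');
    $p->eof;
    return @text;
}

my @default  = collect();
my @unbroken = collect(unbroken_text => 1);
print scalar(@default), " vs ", scalar(@unbroken), " text event(s)\n";
```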

Lifeboat Foundation HTML::Entities Bug [rt.cpan.org #130765]

Migrated from rt.cpan.org#130765 (status was 'new')

Requestors:

From [email protected] on 2019-10-21 07:11:11
:

Gisle,

In HTML::Entities:

decode_entities("&emacr;") doesn't work.

The same goes for decode_entities("&Emacr;").

Everything else that I've tried has worked.

This is for version 3.69 which comes with the latest Perl (5.30.0).

Eric Klien
Lifeboat Foundation
+1 (775) 409-3122 Voice
+1 (775) 409-3123 Fax
Skype ID LifeboatHQ
Twitter  LifeboatHQ
https://lifeboat.com


HTML 5 [rt.cpan.org #53300]

Migrated from rt.cpan.org#53300 (status was 'new')

Requestors:

From [email protected] on 2010-01-02 21:51:23
:

HTML::Parser should provide a parsing mode that is fully compliant with HTML 5,
section 9.2 ("Parsing HTML documents",
http://dev.w3.org/html5/spec/Overview.html#parsing).

As this will probably differ significantly from current behaviour, it should be
optional.


inject method - API extension request [rt.cpan.org #5941]

Migrated from rt.cpan.org#5941 (status was 'open')

Requestors:

Attachments:

From on 2004-04-05 23:25:04
:

Perl version: v5.8.3 built for i386-linux-thread-multi
HTML::Parser version: 3.36
on Linux 2.4.25, Debian testing dist

I am working with emulation of web browsers and found I need to have some level of preprocessing in the HTML parser.  A primitive I could use for this is the ability to inject input immediately after the current parse token.

As best I can tell, when a browser hits a chunk of content such as:
<script>
document.write('<a href="http://www.perl.org/">the stuff</a>');
</script>
it essentially injects that text immediately after the </script> element in the input parse buffer.

The attached patch adds an ->inject(chunk) method to an HTML::Parser object, and is far from a clean patch, but shows my intent.

Here is a sample use of the inject method to do simple preprocessing:

#!/usr/bin/perl
use strict;
use warnings;
use lib 'blib/lib';
use lib 'blib/arch';
use HTML::Parser qw();
use URI::Escape qw();
use IO::String qw();
use IO::Handle qw();

my $h = <<EOF;
<deftag name="foo">bar</deftag>
<deftag name="navbar">
  <foo>
  <table>
  <tr><td><a href="http://www.perl.org/">perl</a>
  <tr><td><a href="http://www.apache.org/">apache</a>
  <tr><td><a href="http://www.mozilla.org/">mozilla</a>
  </table>
</deftag>
<html><head><title>foo</title></head><body>
<navbar>
Testing 1... 2... 3...
</body></html>
EOF

my %special = ();
my $cdt = undef;
my $p;
my @out = (\*STDOUT);
$p = new HTML::Parser(
    'start_h' => [ sub { my($tag, $attr, $txt) = @_;
        if(exists $special{$tag}) {
            $p->inject($special{$tag});
        } elsif($tag eq 'deftag') {
            $cdt = $attr->{'name'};
            unshift @out, IO::String->new();
        } else {
            $out[0]->print($txt);
        }
    }, 'tag,attr,text' ],
    'text_h' => [ sub { $out[0]->print(shift) }, 'text' ],
    'end_h'  => [ sub { my($tag, $txt) = @_;
        if($tag eq '/deftag') {
            $special{$cdt} = ${$out[0]->string_ref()};
            shift @out;
        } else {
            $out[0]->print($txt);
        }
    }, 'tag,text' ],
) or die "No parser: $!";
$p->parse($h);


Handle <unclosed </tags [rt.cpan.org #47748]

Migrated from rt.cpan.org#47748 (status was 'new')

Requestors:

From [email protected] on 2009-07-09 17:02:41
:

The other day, I received a spam e-mail with a text/html body part like
this:

==============================================================
blah blah<br><br
<a href=http://domain/path.html target=_blank>Go!</a><br><p>blah
==============================================================

My spam filter failed to parse the href URL from the message body due to
the unclosed "<br" tag.  Closing it causes HTML::Parser to correctly
parse the URL.

I noticed that http://search.cpan.org/dist/HTML-Parser/Parser.pm#BUGS says:

«Unclosed start or end tags, e.g. "<tt<b>...</b</tt>" are not recognized.»

I don't understand what the implication of this is, however.  Is it a
conscious decision not to support unclosed tags, or has there just been
no use case for a fix?

I tested how various browsers handle the HTML code from the spam message
above:

At least the following do render the link despite the preceding broken
"<br" tag:  Firefox 3, Konqueror from KDE 3.5.9, Safari 3 & 4, Mail.app

At least the following do NOT render the link:  IE 6, Opera 9.63

I'd appreciate it if an option could be added to HTML::Parser to
recognize unclosed tags.

Change in Perl 5 blead causes error in HTML::Entities::encode_entities [rt.cpan.org #119675]

Migrated from rt.cpan.org#119675 (status was 'new')

Requestors:

From [email protected] on 2017-01-03 17:21:55
:

This was reported to the Perl 5 bug tracker in a "blead breaks CPAN" ticket:

https://rt.perl.org/Ticket/Display.html?id=130487

In that ticket I have attached my reduction of the problem to HTML::Entities::encode_entities() and the error output I get from running it on blead.  perl-5.25.7 or later should display the problem.

Thank you very much.
Jim Keenan

Potential speedups for HTML::Entities::encode_entities

I think we could get some speedups in encode_entities by caching some common operations. The examples below would most benefit users encoding lots of data that is heavy on non-named, numeric entities. We may have similar benefits elsewhere.

The first is to cache the results of the sprintf in num_entity. This could be done with no effect on behavior, in exchange for some hash entries. Here are my experiments.

use Benchmark ':all';
use HTML::Entities;

timethese( 300_000,
    {
        original => sub {
            map { HTML::Entities::num_entity($_) } 1..100
        },
        cached => sub {
            map { cached_num_entity($_) } 1..100
        },
    }
);


my %cache;
sub cached_num_entity {
    return $cache{$_[0]} ||= sprintf("&#x%X;", ord($_[0]));
}

gives these results:

    cached: 10 wallclock secs (10.52 usr +  0.00 sys = 10.52 CPU) @ 28517.11/s (n=300000)
  original: 15 wallclock secs (14.28 usr +  0.00 sys = 14.28 CPU) @ 21008.40/s (n=300000)

Bottom line: Hash lookup is faster than the sprintf, so let's cache it.


The other tweak would be to cache the call to num_entity inside the main regex in encode_entities. Swap this:

$$ref =~ s/([^\n\r\t !\#\$%\(-;=?-~])/$char2entity{$1} || num_entity($1)/ge;

for this

$$ref =~ s/([^\n\r\t !\#\$%\(-;=?-~])/$char2entity{$1} ||= num_entity($1)/ge;

This would have the side effect of modifying the %char2entity hash, which is visible to the outside world. If that wasn't OK, we could have a private copy of the hash specifically so it would be modifiable. The potential downside (or upside?) of that would be that if someone outside the module modified %char2entity, it would have no effect on encode_entities.
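A sketch of the private-copy variant (names are hypothetical; inside HTML::Entities the private hash would be seeded from %char2entity at load time, so named entities still take precedence over numeric ones):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Private memoization hash: callers who read or localize the public
# %char2entity never observe the cached numeric entities.
my %private_char2entity;   # in HTML::Entities, seed this from %char2entity

sub cached_char_entity {
    my ($char) = @_;
    return $private_char2entity{$char} //= sprintf('&#x%X;', ord $char);
}

print cached_char_entity(chr 8855), "\n";   # chr(8855) is U+2297
```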

For benchmarking encode_entities, I used this:

my $ent = chr(8855);
my $num = chr(8854);
my $unencoded = "$ent$num" x 10;

my $text = <<"HTML";
text in "$unencoded"
HTML

timethese( 1_000_000,
    {
        encode => sub { my $x = encode_entities($text) },
    }
);

Results:

42,281/s for the original unmodified encode_entities.

52,746/s if the encode_entities used the caching num_entity first mentioned, but the main regex is unchanged.

64,769/s if the main conversion regex caches the results of calls to num_entity in %char2entity. Changing this to call the caching num_entity gave no noticeable improvement.


I hope these give some ideas. encode_entities is an absolute workhorse at my job (we generate everything with Template Toolkit), and I'm sure for many many others. Any speedup would have wide-ranging benefits.

HTML::HeadParser t/threads.t fails on perl 5.10.0

Trying to build HTML::HeadParser fails with a seg fault on multi-threaded perl 5.10.0

t/threads.t .......... Failed 1/1 subtests 
t/tokeparser.t ....... ok     
t/uentities.t ........ ok     
t/unbroken-text.t .... ok   
t/unicode-bom.t ...... ok   
t/unicode.t .......... ok       
t/xml-mode.t ......... ok   

Test Summary Report
-------------------
t/threads.t        (Wstat: 11 (Signal: SEGV) Tests: 0 Failed: 0)
  Non-zero wait status: 11
  Parse errors: Bad plan.  You planned 1 tests but ran 0.
Files=48, Tests=439,  4 wallclock secs ( 0.24 usr  0.14 sys +  3.35 cusr  0.59 csys =  4.32 CPU)
Result: FAIL
Failed 1/48 test programs. 0/439 subtests failed.
make: *** [test_dynamic] Error 255
bash-4.2# uname -a 
Linux fe1dc8c24e7a 5.10.114-16024-gbdf1547bd4f4 #1 SMP PREEMPT Thu Jun 30 18:19:44 PDT 2022 x86_64 x86_64 x86_64 GNU/Linux
Summary of my perl5 (revision 5 version 10 subversion 0) configuration:
  Platform:
    osname=linux, osvers=5.10.114-16024-gbdf1547bd4f4, archname=x86_64-linux-thread-multi
    uname='linux 0cba3c1d43bb 5.10.114-16024-gbdf1547bd4f4 #1 smp preempt thu jun 30 18:19:44 pdt 2022 x86_64 x86_64 x86_64 gnulinux '
    config_args='-des -Dusethreads -Dprefix=/opt/perl-5.10.0 -Dman1dir=none -Dman3dir=none'
    hint=recommended, useposix=true, d_sigaction=define
    useithreads=define, usemultiplicity=define
    useperlio=define, d_sfio=undef, uselargefiles=define, usesocks=undef
    use64bitint=define, use64bitall=define, uselongdouble=undef
    usemymalloc=n, bincompat5005=undef
  Compiler:
    cc='cc', ccflags ='-D_REENTRANT -D_GNU_SOURCE -fwrapv -fno-strict-aliasing -pipe -I/usr/local/include -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64',
    optimize='-O2',
    cppflags='-D_REENTRANT -D_GNU_SOURCE -fwrapv -fno-strict-aliasing -pipe -I/usr/local/include'
    ccversion='', gccversion='7.3.1 20180712 (Red Hat 7.3.1-15)', gccosandvers=''
    intsize=4, longsize=8, ptrsize=8, doublesize=8, byteorder=12345678
    d_longlong=define, longlongsize=8, d_longdbl=define, longdblsize=16
    ivtype='long', ivsize=8, nvtype='double', nvsize=8, Off_t='off_t', lseeksize=8
    alignbytes=8, prototype=define
  Linker and Libraries:
    ld='cc', ldflags =' -L/usr/local/lib'
    libpth=/usr/local/lib /lib/../lib64 /usr/lib/../lib64 /lib /usr/lib /lib64 /usr/lib64 /usr/local/lib64
    libs=-lnsl -lgdbm -ldb -ldl -lm -lcrypt -lutil -lpthread -lc -lgdbm_compat
    perllibs=-lnsl -ldl -lm -lcrypt -lutil -lpthread -lc -lgdbm_compat
    libc=libc-2.26.so, so=so, useshrplib=false, libperl=libperl.a
    gnulibc_version='2.26'
  Dynamic Linking:
    dlsrc=dl_dlopen.xs, dlext=so, d_dlsymun=undef, ccdlflags='-Wl,-E'
    cccdlflags='-fPIC', lddlflags='-shared -O2 -L/usr/local/lib'


Characteristics of this binary (from libperl): 
  Compile-time options: MULTIPLICITY PERL_DONT_CREATE_GVSV
                        PERL_IMPLICIT_CONTEXT PERL_MALLOC_WRAP USE_64_BIT_ALL
                        USE_64_BIT_INT USE_ITHREADS USE_LARGE_FILES
                        USE_PERLIO USE_REENTRANT_API
  Locally applied patches:
        Devel::PatchPerl 2.08
  Built under linux
  Compiled at Jul 22 2022 14:24:22
  @INC:
    /opt/perl-5.10.0/lib/5.10.0/x86_64-linux-thread-multi
    /opt/perl-5.10.0/lib/5.10.0
    /opt/perl-5.10.0/lib/site_perl/5.10.0/x86_64-linux-thread-multi
    /opt/perl-5.10.0/lib/site_perl/5.10.0

[PATCH] Protect active parser from being freed. [rt.cpan.org #115034]

Migrated from rt.cpan.org#115034 (status was 'open')

Requestors:

  • 'spro^^%^6ut#@&$%*c

Attachments:

From $_ = 'spro^^%^6ut#@&$�>#!^!#&!pan.org'; y/a-z.@//cd; print on 2016-06-03 13:33:47
:


From [email protected] on 2020-08-24 16:17:42
:

On Fri Jun 03 09:33:47 2016, SPROUT wrote:
> This is a follow-up to
> <http://www.nntp.perl.org/group/perl.libwww/;msgid=7A27BD12-BA67-40B7-
> [email protected]>, from 6 years ago, but this time with a
> patch!
> 
> If you try to free the parser from a callback, it crashes, because
> pstate gets freed while things in the parser are still referring to
> it.


I opened a PR with this change and a unit test, as this is still an issue and still generates a SEGV.
view: https://github.com/gisle/html-parser/pull/13

option to skip headers found in NOSCRIPT tags [rt.cpan.org #119319]

Migrated from rt.cpan.org#119319 (status was 'new')

Requestors:

From [email protected] on 2016-12-20 10:44:07
:

I'm trying to emulate the logic of a real browser and handle META redirects properly, but sometimes those redirects are contained in NOSCRIPT tags. If I don't avoid those cases, I can't emulate the JavaScript logic and access the webapp. I need a way to avoid setting headers found within NOSCRIPT tags.
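A possible workaround, pending such an option, is to strip NOSCRIPT content before the document reaches HTML::HeadParser. A minimal sketch using HTML::Parser's ignore_elements; the sample markup here is made up for illustration:

```perl
use strict;
use warnings;
use HTML::Parser ();

# Accumulate everything except <noscript> elements and their content;
# the cleaned markup can then be fed to HTML::HeadParser (not shown).
my $clean = '';
my $p = HTML::Parser->new(
    api_version => 3,
    default_h   => [ sub { $clean .= shift }, 'text' ],
);
$p->ignore_elements('noscript');   # drop the element and everything inside it
$p->parse(<<'HTML');
<head>
<noscript><meta http-equiv="refresh" content="0; url=http://example.com/"></noscript>
<title>App</title>
</noscript>
HTML
$p->eof;
# $clean now lacks the noscript block and its META refresh
```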

Feature request: Strict mode for decode_entities()

Since 2004, decode_entities() has supported the merging of surrogate pairs. See http://rt.cpan.org/Ticket/Display.html?id=7785 . This means that, for example, &#xD83D;&#xDE01; will be decoded into a single code point. My understanding is that this is not covered in any spec.

I therefore propose to add a function decode_entities_strict() that does the same as decode_entities() but rejects surrogate pairs.

Attached is a sample script that shows the effect:
surrogate_pair.pl.txt
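The merging can be observed directly (a small sketch; the emoji code point U+1F601 is just an example):

```perl
use strict;
use warnings;
use HTML::Entities qw(decode_entities);

# &#xD83D;&#xDE01; is a UTF-16 surrogate pair encoding U+1F601.
# With the merging behavior, the pair decodes to a single code point.
my $decoded = decode_entities('&#xD83D;&#xDE01;');
printf "len=%d U+%04X\n", length($decoded), ord($decoded);
# len=1 U+1F601 under the current (merging) behavior
```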

Wrong parse [rt.cpan.org #55629]

Migrated from rt.cpan.org#55629 (status was 'open')

Requestors:

From [email protected] on 2010-03-16 15:09:51
:

HTML:
<iframe/**/src="http://mail.ru"  name="poc iframe jacking"  width="100%"
height="100%" scrolling="auto" frameborder="no"></iframe>

$parser = HTML::Parser->new(
 api_version => 3,
 start_h => [ sub{
   my ($Self, $Text, $Tag, $Attr) = @_;
   print "Tag is: ".$Tag;
 }, "self, text, tagname, attr" ]
);
$parser->ignore_elements( qw( iframe ));
$parser->ignore_tags( qw( iframe ));

output:
Tag is: iframe/**/src="http://mail.ru"


From [email protected] on 2010-03-18 13:51:31
:

On Tue Mar 16 11:09:51 2010, NIKOLAS wrote:
> HTML:
> <iframe/**/src="http://mail.ru"  name="poc iframe jacking"  width="100%"
> height="100%" scrolling="auto" frameborder="no"></iframe>
> 
> $parser = HTML::Parser->new(
>  api_version => 3,
>  start_h => [ sub{
>    my ($Self, $Text, $Tag, $Attr) = @_;
>    print "Tag is: ".$Tag;
>  }, "self, text, tagname, attr" ]
> );
> $parser->ignore_elements( qw( iframe ));
> $parser->ignore_tags( qw( iframe ));
> 
> output:
> Tag is: iframe/**/src="http://mail.ru"

HTML: <script/src="ya.ru"> wrong parse same


From [email protected] on 2010-04-04 20:38:08
:

I don't understand what rules you propose that HTML::Parser should follow to parse this kind of 
bogus HTML.  You think it should treat "/**/" and "/" as whitespace?

From [email protected] on 2010-06-01 07:13:54
:

Here are three regular expressions that, applied to the input text, correct these problems:
s{(/\*)}{ $1}g;
s{(\*/)}{$1 }g;
s{(<[^/\s<>]+)/}{$1 /}g;

Probably you will find a more correct architectural solution.
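For illustration, here is a runnable sketch of that preprocessing idea (the sample tag is taken from the original report):

```perl
use strict;
use warnings;

my $html = '<iframe/**/src="http://mail.ru"></iframe>';
$html =~ s{(/\*)}{ $1}g;           # space before /*
$html =~ s{(\*/)}{$1 }g;           # space after */
$html =~ s{(<[^/\s<>]+)/}{$1 /}g;  # space before a slash glued to a tag name
print $html, "\n";
# <iframe /**/ src="http://mail.ru"></iframe>
```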


Attributes that have no value get their name as their value

When investigating libwww-perl/WWW-Mechanize#125 I noticed that the following HTML parses weirdly.

<input type="hidden" name="foo" value>

According to the HTML spec, on an input element a value attribute that's not followed by an equals sign (=) should be treated as empty, so we should be parsing it to an empty string.

Empty attribute syntax
Just the attribute name. The value is implicitly the empty string.

Instead of making it empty, we set it to "value".

I've looked into it, and got as far as finding that get_tag returns a data structure that contains the wrong value:

\ [
    [0] "input",
    [1] {
        /       "/",
        name    "foo",
        type    "hidden",
        value   "value"
    },
    [2] [
        [0] "type",
        [1] "name",
        [2] "value",
        [3] "/"
    ],
    [3] "<input type="hidden" name="foo" value />"
]

Unfortunately I am out of my depth with the actual C code for the parser. But I think we should be returning an empty string for the value attribute, as well as for all other empty attributes.


I wrote the following test to demonstrate the problem.

use strict;
use warnings;

use HTML::TokeParser ();
use Test::More;
use Data::Dumper;

ok(
    !get_tag(q{})->{value},
    'No value when there was no value'
);    # key does not exist

{
    # this fails because value is 'value'
    my $t = get_tag(q{value});
    ok(
        !$t->{value},
        'No value when value attr has no value'
    ) or diag Dumper $t;    
}

ok(
    !get_tag(q{value=""})->{value},
    'No value when value attr is an empty string'
);    # key is an empty string

is(
    get_tag(q{value="bar"})->{value}, 
    'bar', 
    'Value is bar'
);    # this obviously works

sub get_tag {
    my $attr = shift;
    return HTML::TokeParser->new(\qq{<input type="hidden" name="foo" $attr />})->get_tag->[1];
}

done_testing;

<meta name="twitter:card"...> (and similar) entities trip up the parser [rt.cpan.org #132922]

Migrated from rt.cpan.org#132922 (status was 'new')

Requestors:

From [email protected] on 2020-07-01 15:53:28
:

Looks like it calls HTTP::Headers->push_header() with a constructed header name of X-Meta-Twitter:card, which obviously fails. 

From [email protected] on 2020-07-01 15:55:58
:


> Looks like it calls HTTP::Headers->push_header() with a constructed header name of X-Meta-Twitter:card, which obviously fails. 

Where 'it', obviously, is HTML::HeadParser - sorry for being unclear.
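For reference, the failure can be reproduced in a couple of lines (a sketch; the exact failure mode may vary between HTML::HeadParser and HTTP::Headers versions, hence the eval):

```perl
use strict;
use warnings;
use HTML::HeadParser ();

# The ':' from the meta name ends up in the synthesized X-Meta-* header
# name, which can make HTTP::Headers' push_header() fail.
my $p  = HTML::HeadParser->new;
my $ok = eval { $p->parse('<meta name="twitter:card" content="summary">'); 1 };
print $ok ? "parsed without error\n" : "failed: $@";
```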

Incorrect tokenization in HTML::Parser [rt.cpan.org #83570]

Migrated from rt.cpan.org#83570 (status was 'open')

Requestors:

Attachments:

From [email protected] on 2013-02-23 17:43:32
:

Hi Gisle,



First, thank you for all of your huge contributions to Perl over the years!



I've discovered a site (http://www.scotts.com/) that has HTML that HTML-Parser does not tokenize correctly.



Envs (tried on two machines, same results):

*         HTML::Parser (3.65 and 3.69)

*         Perl 5.14.2, and 5.10.1

*         'full_uname' => 'Linux 449876-app3.blosm.com 2.6.18-238.37.1.el5 #1 SMP Fri Apr 6 13:47:10 EDT 2012 x86_64 x86_64 x86_64 GNU/Linux',

*         'os_distro' => 'Red Hat Enterprise Linux Server release 5.9 (Tikanga) Kernel \\r on an \\m',

*         'full_uname' => 'Linux idx02 2.6.43.5-2.fc15.x86_64 #1 SMP Tue May 8 11:09:22 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux',

*         'os_distro' => 'Fedora release 15',



I'm attaching a representative page. The page came from:
http://www.scotts.com/smg/templates/index.jsp?pageUrl=orthoLanding

The problem seems to occur around the HTML:
                <noscript>
                                <iframe height="0" width="0" style="display:none; visibility:hidden;"
                                                src="//www.googletagmanager.com/ns.html?id=GTM-PVLS"
                                                />
                </noscript>
                <script>

I've added some debugging to the HTML::TokeParser::get_tag sub so it looks like:
use Data::Dumper;

sub get_tag {
    my $self = shift;
    my $token;
    while (1) {
        $token = $self->get_token || return undef;

        warn "Checking token: [" . Dumper($token) . "]";

        my $type = shift @$token;
        next unless $type eq "S" || $type eq "E";
        substr( $token->[0], 0, 0 ) = "/" if $type eq "E";
        return $token unless @_;
        for (@_) {
            return $token if $token->[0] eq $_;
        }
    }
}

I've tried both versions 3.65 and 3.69 of HTML::Parser, and both produce the same results. Their output is in the "output" attachment. You can see on line 290 of the output that it is tokenizing almost the entire page after the iframe as one big text blob.

Thanks again,

-Carl


Carl Eklof
CTO @ Blosm Inc.
blosm.com
424.888.4BEE



From [email protected] on 2015-01-04 01:23:37
:

I've recently been seeing this with some code I'm working on.

To summarize this very simply, it seems like HTML::TokeParser does something weird when a tag contains a self-closing slash. If the tag is written as "<hr/>" then the parser thinks the tag is "hr/". If it's written as "<hr />" then we end up with a "/" attribute.

From [email protected] on 2015-01-04 16:00:15
:

I cloned the repo with the intention of fixing this, but when I looked through the test cases I realized that this behavior is actually tested for.

Gisle, what's up with this? It's not documented, AFAICT, and it really doesn't make much sense.

From [email protected] on 2016-01-19 00:12:40
:

On Sun Jan 04 11:00:15 2015, DROLSKY wrote:
> I cloned the repo with the intention of fixing this, but when I looked
> through the test cases I realized that this behavior is actually
> tested for.
>
> Gisle, what's up with this? It's not documented, AFAICT, and it really
> doesn't make much sense.

Perhaps it was just based on my understanding of what status this had, given this
advice from the XHTML spec.


C.2. Empty Elements
-------------------

Include a space before the trailing / and > of empty elements, e.g. <br />, <hr
/> and <img src="karen.jpg" alt="Karen" />. Also, use the minimized tag syntax
for empty elements, e.g. <br />, as the alternative syntax <br></br> allowed by
XML gives uncertain results in many existing user agents.


From [email protected] on 2016-01-19 00:21:34
:

http://www.w3.org/TR/html5/syntax.html#tag-name-state seems clear on allowing
this, so feel free to change the tests


From [email protected] on 2016-01-19 17:34:19
:

Just turning on the "empty_element_tags" option might make the parser behave
the way you expect. It might be that we should just switch the default for this
option.
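That suggestion can be sketched as follows (a minimal example, not taken from the ticket itself):

```perl
use strict;
use warnings;
use HTML::Parser ();

# With empty_element_tags enabled, "<hr/>" is parsed as an XML-style
# empty element: the reported tag name is "hr", not "hr/".
my @tags;
my $p = HTML::Parser->new(
    api_version => 3,
    start_h     => [ sub { push @tags, shift }, 'tagname' ],
);
$p->empty_element_tags(1);
$p->parse('<hr/><br/>');
$p->eof;
print "@tags\n";   # hr br
```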

