Pondering Portals

by hacker (Priest)
on Apr 30, 2005 at 03:21 UTC

hacker has asked for the wisdom of the Perl Monks concerning the following question:

I've recently been tasked with building a replacement for a commercial system run by two industry competitors who are gouging entrepreneurs and developers by taking 50% of their net sales revenue.

The system I will be building will allow developers to log in, check/update/maintain their software listings (as well as their own profile and preferences), and give them a spiffy page + screenshot for each of their applications.

This means I'll have to accept and process some minimal forms of markup. Herein lies my philosophical paradox...

I've been building portals, web-like CMS systems and other things for years, and for the most part have limited accepted input to plain text or a very small subset of acceptable markup. This system can't allow that level of inflexibility.

What is the best approach to allowing specific tags through (<p>, <br />, <a ...>, <img ...>) while disallowing all the others (<iframe>, <script>, <style>, etc.)?

I also have to take into consideration the dozens of ways to sneak XSS through, and protect against those.

Deny all, allow some? Filter all? Strip all and rewrap with allowed tags? Some other combination? I'd rather not have to run the HTML through a series of complicated subs to strip, massage, and de-fang the tags they're using, if possible.

I realize that PerlMonks and Slashdot and other large portal-like systems are doing this already. What approaches and techniques are best towards achieving this goal, while still retaining a good level of customization for the developer creating their own "listing" page?

Re: Pondering Portals
by bmann (Priest) on Apr 30, 2005 at 04:47 UTC
    HTML::Scrubber uses HTML::Parser under the hood and makes it extremely simple to allow/disallow tags. As Tanktalus says in his reply, you will be better off choosing which tags to allow.

    From the pod:

    #!/usr/bin/perl -w
    use HTML::Scrubber;
    use strict;

    my $html = q[
        <style type="text/css"> BAD { background: #666; color: #666;} </style>
        <script language="javascript"> alert("Hello, I am EVIL!"); </script>
        <HR>
        a  => <a href=1>link </a>
        br => <br>
        b  => <B> bold </B>
        u  => <U> UNDERLINE </U>
    ];

    my $scrubber = HTML::Scrubber->new( allow => [ qw[ p b i u hr br ] ] );
    print $scrubber->scrub($html);

    $scrubber->deny( qw[ p b i u hr br ] );
    print $scrubber->scrub($html);

    __END__
    Output:
    <hr>
    a  => link
    br => <br>
    b  => <b> bold </b>
    u  => <u> UNDERLINE </u>

    a  => link
    br =>
    b  => bold
    u  => UNDERLINE

      I use HTML::Scrubber on one of my sites. The only problem I have with it (which I was vaguely thinking of posting as a new question only yesterday) is that I see no way to enforce attribute inclusion.

      Say the user submits:

      <a href="http://example.com">text</a>

      I would like to automatically insert, or mandate, the rel="nofollow" attribute and value. I can't see a simple way of doing this short of re-using HTML::Parser or resorting to a fragile regexp.

      That's the only shortcoming I see with HTML::Scrubber.

      Steve
      ---
      steve.org.uk
        Have you considered subclassing HTML::Scrubber? Below, I inject the rel attribute into each anchor before validation.

        $ cat XREL.pm
        package XREL;

        use strict;
        use base 'HTML::Scrubber';

        sub _validate {
            my ( $self, $t, $r, $a, $as ) = @_;
            if ( $t eq 'a' ) {
                $$a{rel} = 'nofollow';
                push @$as, 'rel' unless grep { /rel/ } @$as;
            }
            $self->SUPER::_validate( $t, $r, $a, $as );
        }

        1;
        $ cat scrub.pl
        #!/usr/bin/perl

        use warnings;
        use strict;
        use XREL;

        my $scrubber = XREL->new( allow => [ qw[ a p b i u hr br ] ] );
        $scrubber->rules(
            a => {
                href => 1,
                rel  => qr/^nofollow$/i,
                '*'  => 0,
            }
        );

        my $html = q[<a href="http://perlmonks.org">link </a>];
        print $scrubber->scrub($html), $/;

        $html = q[<a href="http://perlmonks.org" rel="nofollow">link </a>];
        print $scrubber->scrub($html), $/;

        $html = q[<a href="http://perlmonks.org" rel="xxx">link </a>];
        print $scrubber->scrub($html), $/;

        $html = q[<a href="http://perlmonks.org" rel="xnofollow">link </a>];
        print $scrubber->scrub($html), $/;

        __END__
        output:
        <a href="http://perlmonks.org" rel="nofollow">link </a>
        <a href="http://perlmonks.org" rel="nofollow">link </a>
        <a href="http://perlmonks.org" rel="nofollow">link </a>
        <a href="http://perlmonks.org" rel="nofollow">link </a>

        update: changed xrel="nofollow" to rel="nofollow"

Re: Pondering Portals
by Tanktalus (Canon) on Apr 30, 2005 at 04:04 UTC

    Off the top of my head, I would probably throw the thing at HTML::Parser, and check for any tag that doesn't match your list of acceptable tags. Any bad tags would prompt an error and refusal to accept.

    It does, like any other detainting procedure, need to use a list of acceptable tags, rather than a list of unacceptable tags. It's much easier to add to a list of acceptable tags ("Hey, my favourite tag, dl, isn't working! Add it!") than to maintain a list of unacceptable tags ("Darn, look what this idiot just did!").
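
    In case it helps, a minimal sketch of that allowlist check with HTML::Parser's version-3 event API (the tag list and the die-on-rejection behaviour are my own illustrative choices, not part of the reply above):

        #!/usr/bin/perl
        use strict;
        use warnings;
        use HTML::Parser;

        # Allowlist of acceptable tags; anything else is a rejection.
        my %allowed = map { $_ => 1 } qw( p br a img b i u );
        my $html    = '<p>fine</p> <script>alert("not fine")</script>';

        my @bad;
        my $parser = HTML::Parser->new(
            api_version => 3,
            start_h     => [ sub { push @bad, $_[0] unless $allowed{ $_[0] } },
                             'tagname' ],
        );
        $parser->parse($html);
        $parser->eof;

        die "Submission rejected, disallowed tag(s): @bad\n" if @bad;
        print "Submission accepted\n";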

    Option 2: grab the source to Everything, and steal it. ;-)

Re: Pondering Portals
by cbrandtbuffalo (Deacon) on Apr 30, 2005 at 12:39 UTC
    You mentioned that the users will log in. This usually figures into our reasoning when we think about how much effort to put into rules. If the tool is for a defined group of users, you can put out an acceptable use policy. If/when someone does something bad, you can follow some procedure for telling them. Repeated offenses can lead to revoking the service.

    There are some arguments against that. All of that warning and policing takes effort, and that effort could have gone into making the system safer instead. You also need to consider the aptitude of the users. Are they likely to make accidental mistakes in their input? If so, the validation serves another function: making the system more usable.

    So I guess I'm saying there are cases where you can leave it wide open, but it's a small subset of cases.

    That leads to your other question: are there subsets of acceptable tags that people have already defined? I'm very interested in the answers, because I think you're right that this problem gets solved over and over.

Re: Pondering Portals
by demerphq (Chancellor) on Apr 30, 2005 at 18:17 UTC

    Do what most wikis do and create a new notation with well-defined rules, then convert that to HTML. Assuming your new notation is well designed, that is the end of the problem. HTML is not suited to this: it has too many edge cases where things you may not anticipate can happen, and filtering those constructs out while still letting some of them pass through is difficult, since it requires you to actually parse HTML, which is expensive.

    Also, if you go with HTML-like markup, you need to consider carefully where you handle tasks like filtering. Filtering on submit has different ramifications from filtering on fetch.

    ---
    demerphq

Re: Pondering Portals
by kwaping (Priest) on May 01, 2005 at 03:58 UTC
    What I recommend is to get familiar with the major forum / message board packages as an end user. Seeing as you're posting here, you are probably already familiar with at least one or two of them. I've noticed that they generally use a type of pseudo-HTML written with brackets instead of <tags>. It takes a little acclimatization, but after a short while the tags are as natural as HTML.

    I suggest using this system because:
    1. It works.
    2. Your users will probably already be familiar with a similar system.
    3. You will have complete control over the tags, including the ability to create custom tags that combine multiple HTML elements.
    Here are some examples of existing systems:
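
    The best-known of these bracket notations is BBCode, used by phpBB and many other boards. As a hedged sketch of the escape-then-translate idea, supporting only a made-up [b], [i] and [url=...] subset (the function name and tag set are illustrative):

        #!/usr/bin/perl
        use strict;
        use warnings;
        use HTML::Entities qw(encode_entities);

        # Escape everything first, then translate only the bracket
        # tags we explicitly recognize into real HTML.
        sub bbcode_to_html {
            my $text = encode_entities( shift, '<>&"' );
            $text =~ s{\[b\](.*?)\[/b\]}{<b>$1</b>}gis;
            $text =~ s{\[i\](.*?)\[/i\]}{<i>$1</i>}gis;
            $text =~ s{\[url=(https?://[^\]\s]+)\](.*?)\[/url\]}{<a href="$1" rel="nofollow">$2</a>}gis;
            return $text;
        }

        print bbcode_to_html('[b]bold[/b] and <script>nope</script>'), "\n";
        # prints: <b>bold</b> and &lt;script&gt;nope&lt;/script&gt;
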
Re: Pondering Portals
by mattr (Curate) on Apr 30, 2005 at 15:48 UTC
    > This means I'll have to

    I'm not sure I understand why. You could do what you say is necessary by providing a DHTML WYSIWYG editor, for example; you certainly don't need full XHTML to do the simple things you mention. Or maybe there is something else?

    Also, I have not used it myself, but you probably know Bricolage and other CMS solutions. At any rate, this seems like the kind of thing where you either use a very elegant existing solution, skip providing one, or end up making your own and tweaking it forever. Anyway, a "listing" page doesn't need more than maybe boldface/italics and hrefs, no?

Re: Pondering Portals
by eXile (Priest) on May 01, 2005 at 00:55 UTC
    For a simple solution I'd encode all < and > signs, and then very selectively decode a small subset. I don't know how all the modules proposed above do it (I'd guess they do something like this), but that way you act in accordance with the 'default deny' stance that is commonplace in security-land.
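
    As a hedged sketch of that encode-then-selectively-decode idea (the tag allowlist here is illustrative, and only attribute-free tags are restored):

        #!/usr/bin/perl
        use strict;
        use warnings;

        # Encode every angle bracket, then selectively restore only
        # a small allowlist of attribute-free tags (default deny).
        sub sanitize {
            my $text = shift;
            $text =~ s/&/&amp;/g;
            $text =~ s/</&lt;/g;
            $text =~ s/>/&gt;/g;
            for my $tag (qw( p b i u br )) {
                $text =~ s{&lt;(/?)\Q$tag\E\s*(/?)&gt;}{<$1$tag$2>}gi;
            }
            return $text;
        }

        print sanitize('<b>ok</b> <script>alert(1)</script>'), "\n";
        # prints: <b>ok</b> &lt;script&gt;alert(1)&lt;/script&gt;
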
Re: Pondering Portals
by TedPride (Priest) on Apr 30, 2005 at 18:22 UTC
    Cut out the tags you want to allow, leaving markers to tell where they're supposed to be and storing the tags in an array. <p> might become <?M23?> temporarily (23 being the array subscript). Perform strict validation on each stored tag, remove all remaining tags from the page, reinsert the stored tags. Presto, you're safe from all major abuses. You still have to worry about people linking to or including images from porn or other unallowed URLs, but given that yours is a user-based system, that shouldn't be a significant problem assuming your login system is uncrackable. If someone does something naughty, just ban them and keep the rest of the month's payment.
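
    A rough sketch of that marker trick, using the <?M n ?> marker format from the post (the strict validation of each stored tag's attributes is omitted for brevity):

        #!/usr/bin/perl
        use strict;
        use warnings;

        my $html = '<p>hi</p><script>alert(1)</script><br>';

        # 1. Cut out allowed tags, leaving indexed markers behind.
        my @saved;
        $html =~ s{(</?(?:p|br|b|i|u)\s*/?>)}{
            push @saved, $1;
            "<?M" . $#saved . "?>";
        }gie;

        # 2. Remove every remaining tag (but not our markers).
        $html =~ s/<(?!\?M\d+\?>)[^>]*>//g;

        # 3. Reinsert the stored tags where the markers sit.
        $html =~ s/<\?M(\d+)\?>/$saved[$1]/g;

        print $html, "\n";    # <p>hi</p>alert(1)<br>
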
sanitizing and balancing
by khkramer (Scribe) on May 02, 2005 at 16:17 UTC

    The modules available to filter HTML -- HTML::Scrubber, HTML::Sanitizer -- are missing what, to me, is a very important feature: enforcing tag balance. Those of us writing portal-style snippet-editing stuff often need to make sure that open tags match close tags, at the very least. Ideally, proper nesting would be enforced (and munged in by the filter itself).

    Tag balance and proper nesting matter for two reasons: browsers often do odd, ugly and non-intuitive things to layout in the presence of unbalanced tags; and it is sometimes convenient to store snippets in x(ht)ml contexts, and forcing proper tag semantics on input removes the need to escape data in these kinds of systems.

    I have a hacked-up filter/balancer based on HTML::TreeBuilder. It works okay, is in the XML::Comma svn repository, and could be released with our next production version. But it would perhaps be nice not to impose yet-another-HTML-filter on the world, and instead to extend one of the existing offerings in this direction. So my question is, why don't other folks seem to care about this feature as much as we do -- what are we missing?
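
    For illustration, a minimal sketch of the HTML::TreeBuilder round trip (this is the general idea only, not the XML::Comma code mentioned above):

        #!/usr/bin/perl
        use strict;
        use warnings;
        use HTML::TreeBuilder;

        # Parsing the unbalanced fragment into a tree and serializing
        # it again closes open tags and fixes the nesting.
        my $tree = HTML::TreeBuilder->new;
        $tree->parse('<b>bold <i>both</b> dangling');
        $tree->eof;

        my $body = $tree->look_down( _tag => 'body' );
        for my $node ( $body->content_list ) {
            # content_list mixes element objects and plain text strings
            print ref($node) ? $node->as_HTML( undef, undef, {} ) : $node;
        }
        print "\n";    # <b>bold <i>both</i></b> dangling
        $tree->delete; # break HTML::Element's circular references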

    I do notice that perlmonks does some auto-balancing in posts (although not quite the same way I would). I did a bit of poking around at the Everything site, and it looks like the filter modules are not separate CPAN entities.

      Tag balancing is a feature (requirement) of XML, not HTML. You want an XML module if you are parsing XML, but if you're parsing HTML (as in, the HTML that you may get from an arbitrary site over which you have no control), you need the HTML modules.

      If your documents are well-formed, then I would suggest XML::Twig. I use XML::Twig for doing things such as taking HTML tables, copying the header, and reinserting it every 5th or 10th or whatever-th row such that, in long tables, you don't need to go back to the top of the screen to find it. And to alternate background colours on rows (setting the class attribute to "odd" or "even", and letting CSS actually do the colouring).
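
      A small sketch of that row-striping example with XML::Twig (the input table and class names are illustrative, not from the post above):

          #!/usr/bin/perl
          use strict;
          use warnings;
          use XML::Twig;

          # Set class="odd"/"even" on each row and let CSS colour them.
          my $twig = XML::Twig->new( pretty_print => 'indented' );
          $twig->parse('<table><tr><td>1</td></tr><tr><td>2</td></tr></table>');

          my $n = 0;
          for my $row ( $twig->root->children('tr') ) {
              $row->set_att( class => ++$n % 2 ? 'odd' : 'even' );
          }
          $twig->print;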

      But, if a user over which you have no control will send you text in a form, you're probably better off assuming that they may not balance their tags.
