MediaWiki VisualEditor Parsoid on Windows Server 2012

Here are the steps I took to get Parsoid working on Windows.

The MediaWiki project has been working on a visual text editor. It’s already the default editor for the main namespace on some wikis and is in an early trial on others. They’ve done a great job, and I really like it. It also has some serious challenges to overcome, as outlined in a blog post by project lead Gabriel Wicke. Their solution is a project called Parsoid, which stands between the VisualEditor and the wikitext that powers the project.

Parsoid is a Node.js project. I needed to get it running on a Windows server, and I figured it would be pretty easy (Node runs on Windows). I followed the instructions and quickly ran up against some red errors. The discussion page for the project had numerous complaints about it not working on Windows, and Google was no help. I installed it on a Linux box and grepped the entire tree for “windows”, and lo, the last result revealed unto me the truth:

### Windows

* A recent copy of the *x86* version of Node.js for Windows, *not* the x64 version.
* A copy of Visual C++ 2010 Express.
* A copy of Python 2.7, installed in the default location of `C:\Python27`.

So that was it; with the dependencies satisfied, Parsoid installed correctly and ran normally. It turns out that Parsoid has a deeply nested dependency on a module called “contextify” (parsoid → html5 → jsdom → contextify). Contextify has to compile a native component at install time, and the build expects Python and a C++ compiler. These are standard tools on a Linux system, but not on Windows.


Building dashboards with Splunk, Twig and Bootstrap

There won’t be much code with this one because it was an internal project, but it’s been interesting enough that I wanted to do a post. We’re using a monitoring package called AppManager by Zoho Corp. It uses a hub-and-spoke architecture with remote “managed servers” rolling data up to a “management server”. Monitors are configured on the managed servers to watch systems at their local site. Individual monitors are placed into groups, and those groups can be further grouped, so that alarm states bubble up through the management groups.
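That bubble-up behavior is easy to picture as a worst-status-wins roll-up over nested groups. This is just a sketch of the concept; the group structure and status names below are hypothetical, not AppManager’s actual data model:

```javascript
// Severity ranking for the hypothetical status names.
const SEVERITY = { clear: 0, warning: 1, critical: 2 };

// A group's status is the worst status among its own monitors
// and all of its subgroups, computed recursively.
function rollUp(group) {
  const statuses = [
    ...group.monitors.map((m) => m.status),
    ...group.subgroups.map(rollUp),
  ];
  return statuses.reduce(
    (worst, s) => (SEVERITY[s] > SEVERITY[worst] ? s : worst),
    "clear"
  );
}

// Example: a site whose subgroup contains one critical monitor.
const site = {
  monitors: [{ name: "cpu", status: "clear" }],
  subgroups: [
    { monitors: [{ name: "disk", status: "critical" }], subgroups: [] },
  ],
};

console.log(rollUp(site)); // "critical"
```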

My challenge was to create a simple stoplight dashboard showing the ships and the status of various monitor groups. Management wanted it to look like the Twilio service status dashboard. The software has a decent REST API returning JSON or XML. The initial dashboard was a snap: crawl the returned JSON and populate an HTML table. This was so easy that I decided it was time to learn Twig.
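The first pass really was that simple: walk a list of monitors and emit table rows. A minimal sketch, with a made-up response shape and field names (AppManager’s actual API returns a different structure):

```javascript
// Build an HTML table from a list of monitors. The status doubles
// as a CSS class so each row can be colored like a stoplight.
function statusTable(monitors) {
  const rows = monitors
    .map(
      (m) =>
        `  <tr class="${m.status}"><td>${m.name}</td><td>${m.status}</td></tr>`
    )
    .join("\n");
  return `<table>\n${rows}\n</table>`;
}

const sample = [
  { name: "Web server", status: "clear" },
  { name: "Database", status: "critical" },
];

console.log(statusTable(sample));
```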

Twig is actually pretty slick, and I can easily see the benefits of using a template engine. I re-worked my code to populate an array of data and passed that array into the Twig render function. Twig lets you nest templates inside other templates and pass data down to the “child”. I think all my future work will be run through it.

I also wanted to play with Bootstrap, since my UIs are usually pretty bad. Bootstrap is super easy, looks great and is well documented. I’ve officially said goodbye to jQuery UI and hello to Bootstrap.

The problem with AppManager is that the management server doesn’t keep monitor data, only monitor status. The other problem is that its dashboards aren’t very pretty. We already have a Splunk installation, so I figured this was a good time to play with Splunk.

I installed Node.js on every ship, along with the Splunk universal forwarder. At regular intervals a Node.js script runs against the local AppManager server, gets details about specific monitors (anything in a group called ‘splunkforward’) and writes the JSON data out to the filesystem. The Splunk universal forwarder then picks up those files and sends them to the indexer. Splunk parses JSON data pretty well, and we have a Splunk wizard on site to help carve it up. Splunk also has great graphing features which did a lot of my work for me.

Finally, I wanted to get the monitor details back into my dashboard. Splunk has a PHP SDK which lets you easily retrieve saved queries and re-execute them. The query returns a job ID, and you can either go into a polling loop checking its status, or just execute it in blocking mode. Since the data I fed into Splunk was JSON, that’s what I get back out. Those JSON documents can then be parsed with json_decode.
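The dashboard itself uses the PHP SDK, but the poll-then-parse pattern is the same in any language; here it is sketched in JavaScript to match the collector scripts, with a fake job status function standing in for the real SDK call:

```javascript
// Poll a job until it reports DONE, up to maxTries checks. A real
// implementation would sleep between checks; a plain loop keeps the
// sketch synchronous.
function pollJob(checkStatus, maxTries = 20) {
  for (let i = 0; i < maxTries; i++) {
    if (checkStatus() === "DONE") return true;
  }
  throw new Error("job did not finish in time");
}

// Fake job: reports RUNNING twice, then DONE.
let calls = 0;
const fakeStatus = () => (++calls < 3 ? "RUNNING" : "DONE");
pollJob(fakeStatus); // true

// Once the job is done, the results are the same JSON that went in,
// so each event parses straight back into an object (json_decode in
// the PHP version, JSON.parse here).
const event = JSON.parse('{"name":"disk","status":"clear"}');
console.log(event.status); // "clear"
```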

The Splunk bit was also really exciting to me. The alternative would be to parse the data, write it out to some RDBMS, and query it back with SQL. I’m still learning the data as I work through the project: changing it, adding fields on the fly, and dealing with differences in the JSON layout from AppManager (which varies by monitor type). Using Splunk has freed me from battling with SQL. I just feed it JSON files, query them out later and re-parse them before feeding them back into Twig. Fun!

This whole thing might seem overly complex, but consider that it’s expanded out to 24 ships, and each ship is connected by a high-latency, low-bandwidth satellite link which will occasionally fail. Splunk provides guaranteed delivery of the data, and a convenient way to store and access it.

The finished product!



The path for getting AppManager data from the ship to HQ and finally a dashboard.



Showing a higher-level view and two ships sending data. In reality it’s 24.