Large-Scale Web Development Using TypeScript

Or, Re-building The World For The Post-Flash Era

This article discusses the technology decisions that we made in developing our latest desktop site, and how those choices allowed us to deliver the site in the face of pressures outside our control.

Background

In 2014, our flagship desktop sports product had reached the pinnacle of what we could do using Adobe’s Flash platform. Even so, we knew that Flash was reaching the end of its natural life as a web platform. Ever since Steve Jobs had penned his “Thoughts on Flash” in 2010, the technology industry (specifically browser vendors) had begun to move away from Flash and strengthen the capabilities of HTML, JavaScript and CSS.

Shifting Sands…

In June 2015 Google sent a shot across the bows of Flash with the announcement of a new feature to “Detect and run important plugin content”, designed to reduce power consumption by limiting the number of Flash movies active on the page:

When you’re on a webpage that runs Flash, we’ll intelligently pause content (like Flash animations) that aren’t central to the webpage, while keeping central content (like a video) playing without interruption. If we accidentally pause something you were interested in, you can just click it to resume playback. This update significantly reduces power consumption, allowing you to surf the web longer before having to hunt for a power outlet.

Better battery life for your laptop – Chrome Blog – 4th June 2015

This had a direct effect on our advertising banner system, which in this case would get picked up as non-important plugin content. We had to migrate in short order from Flash assets for our banners to HTML/JS/CSS assets, which we achieved at a rapid pace ahead of the release of this change to the stable version of Google Chrome.

We also began to plan for our next generation desktop sports site to work on top of these technologies. We started work in September 2015 with a modest team working away to re-write the existing Flash site. We spent some time analysing the existing site, breaking it up into functional components and building up a backlog of work items that we could progress through to deliver a site without Flash.

The Big One

Everything was ticking along nicely. We had a nicely defined backlog and were working through it in fortnightly sprints. Then this happened:

Intent to implement: HTML5 by Default

Target Q4 2016

Later this year we plan to change how Chromium hints to websites about the presence of Flash Player, by changing the default response of Navigator.plugins and Navigator.mimeTypes. If a site offers an HTML5 experience, this change will make that the primary experience. We will continue to ship Flash Player with Chrome, and if a site truly requires Flash, a prompt will appear at the top of the page when the user first visits that site, giving them the option of allowing it to run for that site.

Intent to implement: HTML5 by Default – Chromium-dev mail group – 9th May 2016

Chromium (and Google Chrome) were going to start asking users to allow all Flash content to run. We knew that any security warnings presented to our customers would have an impact on our business, and that we had to act. At this point we knew we had to scale up our development to hit that Q4 deadline.

Fortunately, we had already made the correct technology decisions to allow us to do that…

TypeScript – The Language Of Choice

We chose TypeScript as the language we would build our new site in. We had already used TypeScript with great success on our Mobile Sports site, so we knew that it would allow us to build applications in the way we wanted.

There were plenty of reasons that TypeScript was the right language for us:

  • TypeScript has syntactical similarities to ActionScript which meant that we were not going to lose the skills of our existing development team.
  • Because TypeScript is a superset of JavaScript, developers with JavaScript knowledge did not have to learn a new language entirely from scratch.
  • TypeScript also provides the safety net of static types, which allowed us to assert the correctness of our code in the developer’s editor, minimising or eliminating entire classes of errors that would otherwise be potential issues if we used vanilla JavaScript (see the short sketch after this list).
  • The ability to specify the ECMAScript output version so that we could support older browsers. We had to support Internet Explorer 8 as there were still a significant number of our customers using that browser, which meant that we had to target ECMAScript 3. We could still write almost entirely modern TypeScript safe in the knowledge that it would transpile down to something that would work in older browsers.
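
As a small illustration of the last two points (the names here are invented, not taken from our code-base), static typing lets the compiler reject incorrect calls before the code ever reaches a browser, while the compiler’s target option controls the flavour of JavaScript it emits:

  // Invented example: a typed helper and the kind of mistake the compiler catches.
  interface Selection {
    id: number;
    odds: string;
  }

  function formatSelection(selection: Selection): string {
    return selection.id + " @ " + selection.odds;
  }

  formatSelection({ id: 123, odds: "5/1" });      // OK
  // formatSelection({ id: "123", odds: "5/1" }); // compile-time error: 'id' must be a number

  // Compiling with --target ES3 emits JavaScript that still runs in older browsers such as IE8.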

However, the language did not provide all of the rules we wanted (such as disallowing the use of the any type), so we had to find something else to fill that role.

TypeScript Linting

We soon realised that, although TypeScript brought a multitude of benefits out-of-the-box, we wanted to add extra checks to guard against other common pitfalls.

We decided that TSLint was the best tool for our needs, as it allowed us to enforce the rules that we wanted. These included:

  • Disallowing the use of any as a type.
  • Banning certain functions and keywords that were problematic in browsers we supported (such as IE8), or caused performance or other issues.
  • No unused variables.
  • Enforcing the use of semi-colons.
  • No use of a variable before its declaration.
  • Plus others to ensure conformance to the design goals of our internal UI framework.

We created our own custom linting rules on top of those provided with TSLint, and continue to review and add more as needed. This allows us to ensure that we are producing a maintainable code base, and also ensures that a large number of potential issues get resolved before they enter the version control system.
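
As a flavour of what this catches, here is an invented fragment annotated with the names of the built-in TSLint rules that would reject each line:

  // Invented fragment: each comment names the TSLint rule that flags that line.
  let odds: any = 2.5;          // "no-any": the any type is disallowed
  console.log(total);           // "no-use-before-declare": variable used before its declaration
  let total = 10                // "semicolon": missing semi-colon
  let unusedTemp = total * 2;   // "no-unused-variable": declared but never used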

CSS – PostCSS

We decided to use PostCSS as our CSS processor, building on the approach we had taken when working on our Mobile site discussed in the earlier article Improving CSS at bet365.

This allowed us to use PostCSS plugins such as:

  • autoprefixer to automatically add vendor prefixes where needed
  • Standard variable substitution for
    • Colours
    • Fonts (Sizes, Stacks)
    • Widths and Breakpoints
  • stylelint to ensure adherence to conventions and avoid common errors

Also, we wholeheartedly adopted the ECSS approach for organising our CSS structure. Through linting rules for both TypeScript and CSS we could enforce ECSS principles across the code-base.
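
To give a rough idea of the wiring (a simplified sketch, not our production task, and the paths are invented), PostCSS runs as a step in the Gulp build described later, with the chosen plugins passed to it:

  // Simplified sketch of a PostCSS step in a Gulp pipeline; paths are invented.
  const gulp = require("gulp");
  const postcss = require("gulp-postcss");
  const autoprefixer = require("autoprefixer");

  gulp.task("css", () => {
    return gulp.src("src/**/*.css")
      .pipe(postcss([
        autoprefixer()  // add vendor prefixes where needed
        // a variable-substitution plugin would be added here for colours, fonts, widths etc.
      ]))
      .pipe(gulp.dest("dist/css"));
  });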

CSS Linting

We have created custom CSS linting rules on top of those provided with stylelint.

These enforced rules such as:

  • Filenames must match component name
  • Components must be in the correct name-space
  • Class names must match ECSS naming convention
  • Restrict complexity of selectors

The introduction of the CSS linting rules took a little while to get used to, but once everyone was on-board, a large number of potential issues were identified at the earliest opportunity.

Modular Design

Part of the good design in the existing Flash site was the way functional areas were broken up into independent modules, with only the modules necessary for the features of the current view being loaded, and no more.

We wanted to keep this approach for the new site, and so we enforced it as part of the run-time framework and build system.

There would be two types of code packages loaded into the browser to run the site: what we defined as Modules and Libraries.

  • Modules would contain UI components and be the primary containers for the site.
  • Libraries would contain code used by modules and be dependencies for modules, but not contain UI components.

The site would request to load modules that provide specific functionality. Dependency information would be tracked alongside the assets so that any dependent libraries would also be loaded.

Modules are deliberately kept isolated from each other: the build system generates the outer module code, causing the module code to load in its own separate context via a JavaScript closure. This ensures that no accidental or deliberate dependencies are created outside those that are explicitly defined. If a module contained code that was beneficial to another module, then that code would have to be moved to a library in order to be used.
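
A rough sketch of the idea follows; the registration function and names here are invented, not our actual framework, but they show how wrapping a module in a closure keeps everything private except its declared dependencies and explicit exports:

  // Invented sketch of the generated wrapper around a module's code.
  // "registerModule" stands in for the real framework's registration call.
  declare function registerModule(
    name: string,
    dependencies: string[],
    factory: (require: (name: string) => any) => any
  ): void;

  registerModule("LiveScores", ["DomUtils", "PushClient"], function (require) {
    // Everything inside this closure is private to the LiveScores module.
    var domUtils = require("DomUtils");      // a library dependency, loaded separately
    var pushClient = require("PushClient");

    function render(container: HTMLElement): void {
      // ... UI component code ...
    }

    return { render: render };               // only the explicit exports are visible outside
  });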

Build System

We decided that we would use Gulp as the backbone of our build system. This would be the glue that bound together:

  • TypeScript Linting
    • Linting errors would cause the build to fail!
  • TypeScript transpile to JavaScript
  • JavaScript Minification using UglifyJS
    • Including Dead Code removal
  • CSS processing using PostCSS
  • CSS Minification
  • Module building
  • Browser testing

In fact, the build system grew large enough to be a complex code-base in its own right. Originally written in JavaScript, it soon became difficult to maintain. We took the decision to convert it to TypeScript and break it up into components that could be worked on in isolation.
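
To give an idea of the shape of one of these tasks, here is a heavily simplified sketch using the gulp-tslint, gulp-typescript and gulp-uglify plugins (our real, componentised build differs in the details):

  // Heavily simplified sketch of a script-building task; not our production build.
  const gulp = require("gulp");
  const tslint = require("gulp-tslint");
  const ts = require("gulp-typescript");
  const uglify = require("gulp-uglify");

  const tsProject = ts.createProject("tsconfig.json");

  gulp.task("scripts", () => {
    return gulp.src("src/**/*.ts")
      .pipe(tslint({ formatter: "verbose" }))  // linting errors fail the build
      .pipe(tslint.report())
      .pipe(tsProject()).js                    // transpile TypeScript to JavaScript
      .pipe(uglify())                          // minify, including dead code removal
      .pipe(gulp.dest("dist/js"));
  });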

Client Asset Bundling – “Blob”

From our early investigation into optimising JavaScript and CSS file loading in the client, we realised that the most efficient way that we could load files would be to bundle the code relating to a module up into a single HTTP response, and then break it back up on the client. This meant that JavaScript and CSS were returned in a single response, and that JavaScript running on the client would chop these responses up into their relevant components and dynamically inject them into the <head> of the page for processing.

That was when we decided to develop a system called “blob”. Blob understands the dependencies between our modules and libraries of JavaScript and CSS and the order in which they need to be loaded onto the client. Blob also understands the versioning of the individual assets and which versions are dependent on each other.

In order to leverage browser caching, we set the Cache-Control header to a lifetime of 24 hours (max-age=86400). Combined with the Age header indicating how old the response was, this meant that the response would be saved in the browser’s local cache for a day, resulting in faster load times on page reloads during that period.

In addition to the server-side aspect of blob, on the client we wrote a component called “blob loader” which handled the unbundling of these assets and also knew about dependencies and which dependencies had been loaded. This meant that we would not even request modules, libraries or their dependencies if we already had them loaded.

This meant that from the client code we could make a simple request for a specific module, safe in the knowledge that everything that we needed to successfully load and use that module would be returned in a single HTTP response which could be loaded and processed in-order on the client in the most efficient manner.
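
A simplified sketch of the client-side splitting step is shown below; the response format and names are invented, and the real blob loader also handles dependency tracking and versioning:

  // Invented sketch: split a combined response and inject each part into the <head>.
  interface BlobPart {
    type: "js" | "css";
    name: string;
    content: string;
  }

  function injectParts(parts: BlobPart[]): void {
    var head = document.getElementsByTagName("head")[0];

    for (var i = 0; i < parts.length; i++) {
      var part = parts[i];
      if (part.type === "css") {
        var style = document.createElement("style");
        style.type = "text/css";
        var legacyStyle: any = style;     // older IE exposes styleSheet.cssText instead
        if (legacyStyle.styleSheet) {
          legacyStyle.styleSheet.cssText = part.content;
        } else {
          style.appendChild(document.createTextNode(part.content));
        }
        head.appendChild(style);
      } else {
        var script = document.createElement("script");
        script.text = part.content;
        head.appendChild(script);
      }
    }
  }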

BrowserSync

As part of the Gulp Build System we also included BrowserSync as a tiny embedded web server that allowed developers to quickly get productive when working locally. Going from checking out the source code to a running build was a matter of minutes. BrowserSync also allowed automatic refreshing of the site as files were saved, saving further development time.

We also developed our own BrowserSync middleware to handle requests for our “blob” combined assets, allowing us to simulate the intelligent loading of blob assets on developers’ machines seamlessly, in addition to handling other requests that did need to be routed to off-developer-box systems, such as authentication.
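
A skeletal sketch of wiring such a middleware into BrowserSync follows; the URL prefix and handler body are invented:

  // Skeletal sketch: BrowserSync with a custom middleware; handler body is invented.
  const browserSync = require("browser-sync").create();

  browserSync.init({
    server: { baseDir: "./dist" },
    middleware: [
      function (req, res, next) {
        if (req.url.indexOf("/blob/") === 0) {
          // The real middleware combines the requested module, its libraries
          // and CSS into a single response, simulating the blob service locally.
          res.end("/* combined assets would be served here */");
          return;
        }
        next();  // everything else falls through to BrowserSync's static server
      }
    ]
  });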

Dead Code Removal

UglifyJS has an option to remove dead code – code that is unreachable by any control path.

Here’s an example illustrating dead code:

  function deadCodeExample() {
    if (false) {
      console.log("Never gets here!");
    } else {
      console.log("Always gets here!");
    }
  }

UglifyJS will (as well as minifying) strip the unreachable branch, resulting in something like:

function deadCodeExample() {
  console.log("Always gets here!");
}

We realised that we could leverage this to include debugging constructs such as log(message) and assert(condition, message) that would contain code that ran in debug builds but was automatically removed in release builds, keeping our production libraries as light as possible.
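
A hedged sketch of the pattern, assuming a build-time constant (called DEBUG here) that the release build defines as false so the minifier can strip the guarded code:

  // Sketch only: DEBUG is assumed to be defined at build time, for example via
  // UglifyJS compress options such as { global_defs: { DEBUG: false }, dead_code: true }.
  declare const DEBUG: boolean;

  function log(message: string): void {
    if (DEBUG) {
      console.log(message);
    }
  }

  function assert(condition: boolean, message: string): void {
    if (DEBUG && !condition) {
      throw new Error("Assertion failed: " + message);
    }
  }

  // In a release build the guarded bodies are removed as dead code,
  // leaving empty functions; in a debug build they remain active.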

Testing

Test Harness

We built a custom test harness to allow us to test individual modules outside the main container of the site. This meant that we could test these modules separately with stubbed data, and also provide controls in the harness that allowed testers to change aspects of the site, such as logged-in status, language and environment, plus many others.

This proved invaluable, especially in the earlier stages of development, in allowing developers and testers to work in isolation without being concerned with wider changes that may have resulted in disruption.

The stubbed data features allowed us to script the data changes for a page, which made them useful for finding data-related issues. It also meant that we could agree a data contract with the back-end development teams and start developing the front-end at the same time, speeding up the delivery process.

We created stubbed data pages with data that could drive all of our layouts in a small set of views, which meant that visually checking the layouts was straightforward and that we could be confident that any changes had not broken any previously completed work.

Unit Tests

For unit tests we used Jasmine, with PhantomJS as the test runner. We made a conscious decision early on to only write unit tests for a limited subset of key functionality such as mathematical calculations and other non-UI based code. For UI components the focus would be more on automated testing in the browser.

We made the decision to constrain the number of unit tests because we had previously seen issues with over-adoption of the approach, which resulted in excessive time spent maintaining those tests; with a rapidly changing code-base, tests would eat up a disproportionate amount of development time.

If any bugs were reported in the core library functions covered by unit tests, we extended the tests to ensure that the error case was covered by a specific test, providing some insurance against regression issues.
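
For example, a spec for a hypothetical odds-conversion helper (both the function and the values are invented, not taken from our real suite) might look like this:

  // Invented helper and Jasmine spec, purely for illustration.
  function fractionalToDecimal(odds: string): number {
    var parts = odds.split("/");
    return parseInt(parts[0], 10) / parseInt(parts[1], 10) + 1;
  }

  describe("fractionalToDecimal", function () {
    it("converts fractional odds to decimal odds", function () {
      expect(fractionalToDecimal("5/2")).toBe(3.5);
    });

    it("handles evens", function () {
      // the kind of case we would add after a bug report, to guard against regression
      expect(fractionalToDecimal("1/1")).toBe(2);
    });
  });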

Automated Tests

We built a custom automated testing system that used Selenium to drive automated user acceptance tests in target browsers. This meant that we could drive the site in a way similar to a real customer, but in an automated system that would scale to handle the level of testing coverage we needed.

The automated tests could drive either the test harness or the full site in the same way that a human could. The automated tests ran overnight with results sent to developers in the morning for review.
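
As a minimal sketch of what such a test looks like using the JavaScript selenium-webdriver bindings (the URL and selector are invented, and our real system is considerably more elaborate):

  // Minimal sketch with selenium-webdriver; URL and selector are invented.
  const { Builder, By, until } = require("selenium-webdriver");

  async function checkHomePageLoads(): Promise<void> {
    const driver = await new Builder().forBrowser("chrome").build();
    try {
      await driver.get("https://test-harness.example/");
      // wait for a known element to appear, just as a customer's browser would render it
      await driver.wait(until.elementLocated(By.css(".siteHeader")), 10000);
    } finally {
      await driver.quit();
    }
  }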

Performance Testing

For performance testing we used Chrome DevTools for testing on Developer machines, and also a Private Instance of WebPageTest set up with multiple test agents running versions of the main browsers we would be targeting.

Chrome DevTools features a performance timeline view which allows the developer to trace the page as it is being loaded, and identify potential bottlenecks and assist in resolving them.

In addition to WebPageTest we also used WebPageTest Monitor to set up scheduled jobs that ran at a defined interval, allowing us to track progress over time.

A problem we quickly ran into with this approach, however, was storage space – each performance test run in each browser on each operating system generated a sizeable test report, and we had scheduled hundreds of these to run per day.

The simple solution was to purge the test results on a fixed schedule. We had enough storage space for 28 days of tests, which allowed us to track trends in that window and was sufficient for our purposes. A simple cron job to clear down old test results gave us what we needed.

The performance testing platform gave us the insight we needed to build systems such as Blob safe in the knowledge that the performance gains we got from them were present across all our major supported browsers and platforms.

Connectivity

The existing Flash site delivered its time-sensitive data using Flash TCP sockets to establish a direct connection to our data servers. These could then send updates directly to the browser, where they would be processed by our custom Flash UI framework and rendered to the screen as quickly as possible.

We needed to find a way to achieve something as close as possible to Flash sockets in the browser without using plugins like Flash. After looking at all the possible options we settled on WebSockets.

WebSockets are discussed in more detail in the article Bi-Directional Communications for the Web.
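
At its simplest, the browser’s standard WebSocket API gives us the persistent, bi-directional connection we needed (the URL and message handling below are illustrative only):

  // Illustrative only: the URL and message format are invented.
  function handleUpdate(data: string): void {
    // in the real site this would be parsed and routed to the relevant UI component
    console.log("update received: " + data);
  }

  var socket = new WebSocket("wss://push.example.com/updates");

  socket.onopen = function () {
    socket.send("subscribe:match-123");  // ask the server for the data we are interested in
  };

  socket.onmessage = function (event: MessageEvent) {
    handleUpdate(event.data);
  };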

Video Streaming

Flash offers the best low-latency live streaming format at present in the form of RTMP Live Streams – so much so that it was chosen as the protocol to underpin Facebook’s Live Streaming. If your goal is to show video in as close to real-time as possible, then RTMP is the weapon of choice.

The plugin-free alternatives for live streaming are currently HLS and MPEG-DASH. However, these are both based on delivering HTTP segments to the browser, which means that the video must be processed up-front into those segments. HLS mandates a minimum chunk size of 2 seconds, and a minimum buffer size before playback of 3 segments, resulting in at least 6 seconds latency before taking into account encoding and transport. MPEG-DASH shares similar limitations.

We settled on a hybrid approach: we would show HLS or MPEG-DASH in scenarios where the extra latency was an acceptable trade-off for avoiding any prompts to the end-user regarding the use of Flash. In scenarios where the latency would be a problem, such as the In-Play section of the site, we would revert to the RTMP streams and display a Flash video player element.

Animations and Scrolling

One of the main selling points of Flash is how it deals with animation and user interaction. For many years it was the best, and often the only, way of achieving smooth and attractive animations in a browser – evident from the number of websites using Flash for just this kind of presentation in the first decade of the 21st century. Our previous desktop site leveraged many of the capabilities of Flash to smoothly animate the UI and scrolling content sections.

However, over time the support and capabilities for these kinds of interactions improved using the native capabilities of the browsers. Support for new standards such as HTML5 and CSS3 proliferated, and the dominance of Flash waned.

We knew that we did not want to take a step backwards in terms of animations and user interactions from where our existing Flash site was, therefore we had to use all the tools available in the browser to reproduce the same or similar user interactions to those that were possible through Flash.

It quickly became apparent, however, that there were a number of issues with attempting to reproduce the same look and feel that had been present in our Flash site. Support for CSS animations varied wildly across browsers, as did support for some of the JavaScript APIs that would be needed to take the place of the CSS animations.

In general, we took the following pragmatic approach:

  1. If the browser supported the animation in CSS, then we would use CSS.
  2. If the browser did not support the animation in CSS, but it could be achieved reasonably in JavaScript, we would use JavaScript.
  3. If neither of the above applied, then we would not use the animation at all. Customers using these browsers would not see the animations.

In the case of point 1, we found that we sometimes needed to know in JavaScript when an animation had completed in order to chain animations together. To support point 2, we developed our own animation library to facilitate these animations.

However, we soon ran into a problem with the animations that we had put in place – the CSS transition events would not fire reliably in all cases, even in the latest versions of our supported browsers. We realised that, to make multi-stage animations fire reliably, we had to use JavaScript timers to time the animation durations and use the callbacks from those timers to trigger the next stage in the animation.
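
A simplified sketch of that pattern follows (class names and durations are invented): the first stage starts a CSS transition, and timers matched to the transition durations, rather than transitionend listeners, drive the later stages:

  // Simplified sketch: class names and durations are invented.
  function collapsePanel(panel: HTMLElement): void {
    panel.className += " fadeOut";      // stage 1: a CSS transition (300ms in the stylesheet)

    setTimeout(function () {
      panel.className += " slideUp";    // stage 2: triggered by the timer, not by transitionend
    }, 300);

    setTimeout(function () {
      panel.style.display = "none";     // tidy up once both stages have finished
    }, 600);
  }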

Conclusion

In following the approaches detailed in this article, we were able to rapidly scale up our team and deliver the best desktop site we could. The key technologies we used to achieve this were: