
News stories from Thursday 08 December, 2016

09:15 New in Symfony 3.3: Added support for formaction and formmethod attributes (Symfony Blog)

Contributed by Christophe Coevoet in #20467.

The DomCrawler component eases DOM navigation for HTML and XML documents, making it very useful for functional tests and web scrapers. One of its most popular features lets you fill in and submit forms. To do so, you must first obtain the object that represents the form via one of its buttons:

use Symfony\Component\DomCrawler\Crawler;

$html = '<html> ... </html>';
$crawler = new Crawler($html);

$form = $crawler->selectButton('Save Changes')->form();
// fill in and submit the form...

However, starting with HTML5, buttons of type "submit" can define several attributes (formaction, formmethod, formtarget, etc.) that override the form's original action, method, and target:

<form action="/save" method="GET">
    <!-- ... -->

    <input type="submit" value="Save Changes"
           formaction="/save-and-close" formmethod="POST">
    <input type="submit" value="Save and Add Another"
           formaction="/save-and-add" formmethod="POST">
</form>

In Symfony 3.3 we added support for the formaction and formmethod attributes. Therefore, you'll always get the right action and method when getting the form via one of its buttons:

// ...
$form = $crawler->selectButton('Save Changes')->form();
// $form->getUri() -> '/save-and-close'
$form = $crawler->selectButton('Save and Add Another')->form();
// $form->getUri() -> '/save-and-add'
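The override rule the component now follows can be illustrated in plain PHP with DOMDocument. This is a simplified sketch of the HTML5 behavior, not DomCrawler's actual implementation, and resolveFormTarget is a hypothetical helper: the clicked button's formaction/formmethod, when present, take precedence over the form element's own attributes.

```php
// Simplified sketch of the HTML5 override rule, not DomCrawler's real code.
// Assumes the button actually sits inside a <form> element.
function resolveFormTarget(DOMElement $button): array
{
    // walk up the tree to the enclosing <form>
    $form = $button;
    while ('form' !== $form->nodeName) {
        $form = $form->parentNode;
    }

    // the button's formaction/formmethod win over the form's action/method
    $action = $button->getAttribute('formaction') ?: $form->getAttribute('action');
    $method = $button->getAttribute('formmethod') ?: ($form->getAttribute('method') ?: 'GET');

    return [$action, strtoupper($method)];
}

$doc = new DOMDocument();
$doc->loadHTML('<form action="/save" method="GET">
    <input type="submit" value="Save Changes"
           formaction="/save-and-close" formmethod="POST">
</form>');

$button = $doc->getElementsByTagName('input')->item(0);
[$action, $method] = resolveFormTarget($button);
// $action -> '/save-and-close', $method -> 'POST'
```

Symfony 3.3 applies this same precedence internally, which is why the getUri() values shown above differ depending on which button is selected.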

Be trained by Symfony experts - 2016-12-19 Paris - 2016-12-19 Paris - 2016-12-21 Paris

News stories from Tuesday 06 December, 2016

16:00 Accessibility Whack-A-Mole (A List Apart: The Full Feed)

I don’t believe in perfection. Perfection is the opiate of the design community.

Designers sometimes like to say that design is about problem-solving. But defining design as problem-solving is of course itself problematic, which is perhaps nowhere more evident than in the realm of accessibility. After all, problems don’t come in neat black-and-white boxes—they’re inextricably tangled up with other problems and needs. That’s what makes design so fascinating: experimentation, compromise, and the thrill of chasing an elusive sweet spot.

Having said that, deep down I’m a closet idealist. I want everything to work well for everyone, and that’s what drives my obsession with accessibility.

Whose accessibility, though?

Accessibility doesn’t just involve improving access for people with visual, auditory, physical, speech, cognitive, language, learning, and neurological difficulties—it impacts us all. Remember that in addition to those permanently affected, many more people experience temporary difficulties because of injury or environmental effects. Accessibility isn’t a niche issue; it’s an everyone issue.

There are lots of helpful accessibility guidelines in Web Content Accessibility Guidelines (WCAG) 2.0, but although the W3C is working to better meet the complex needs of neurodiverse users, there are no easy solutions. How do we deal with accessibility needs for which there are no definitive answers? And what if a fix for one group of people breaks things for another group?

That’s a big question, and it’s close to my heart. I’m dyslexic, and one of the recommendations for reducing visual stress that I’ve found tremendously helpful is low contrast between text and background color. This, though, often means failing to meet accessibility requirements for people who are visually impaired. Once you start really looking, you notice accessibility conflicts large and small cropping up everywhere. Consider:

  • Designing for one-handed mobile use raises problems because right-handedness is the default—but 10 percent of the population is left-handed.
  • Giving users a magnified detailed view on hover can create a mobile hover trap that obscures other content.
  • Links must use something other than color to denote their “linkyness.” Underlines are used most often and are easily understood, but they can interfere with descenders and make it harder for people to recognize word shapes.

You might assume that people experiencing temporary or long-term impairment would avail themselves of the same browser accessibility features—but you’d be wrong. Users with minor or infrequent difficulties may not have even discovered those workarounds.

With every change we make, we need to continually check that it doesn’t impair someone else’s experience. To drive this point home, let me tell you a story about fonts.

A new font for a new brand

At Wellcome, we were simultaneously developing a new brand and redesigning our website. The new brand needed to reflect the amazing stuff we do at Wellcome, a large charitable organization that supports scientists and researchers. We wanted to paint a picture of an energetic organization that seeks new talent and represents broad contemporary research. And, of course, we had to do all of this without compromising accessibility. How could we best approach a rebrand through the lens of inclusivity?

To that end, we decided to make our design process as transparent as possible. Design is not a dark art; it’s a series of decisions. Sharing early and often brings the benefit of feedback and allows us to see work from different perspectives. It also offers the opportunity to document and communicate design decisions.

When we started showing people the new website, some of them had very specific feedback about the typeface we had chosen. That’s when we learned that our new headline font, Progress Two, might be less than ideal for readers with dyslexia. My heart sank. As a fellow dyslexic, I felt like I was letting my side down.

My entire career had been geared toward fostering accessibility, legibility, and readability. I’d been working on the site redevelopment for over a year. With clarity and simplicity as our guiding principles, we were binning jargon, tiny unreadable text, and decorative molecules.

And now this. Were we really going to choose a typeface that undid all of our hard work and made it difficult for some people to read? After a brief panic, I got down to some research.

So what makes type legible?

The short answer is: there is no right answer. A baffling and often contradictory range of research papers exists, as do, I discovered, companies trying to sell “reasonably priced” (read: extortionate) solutions that don’t necessarily solve anything.

Thomas Bohm offers a helpful overview of characters that are easily misrecognized, and the British Dyslexia Association (BDA) has published a list of guidelines for dyslexia-friendly type. The BDA guidelines on letterforms pretty much ruled out all of the fonts on our short list. Even popular faces like Arial and Helvetica fail to tick all the boxes on the BDA list, although familiar sans serifs do tend to test well, according to some studies (PDF).

And it’s not just people with dyslexia who are sensitive to typography; we recently had a usability testing participant who explained that some people on the autism spectrum struggle with certain fonts, too. And therein lies the problem: there’s a great deal of diversity within neurodiversity. What works for me doesn’t work for everyone with dyslexia; not everyone on the autism spectrum gives a flip about fonts, but some really do.

At first my research discouraged and overwhelmed me. The nice thing about guidelines, though, is that they give you a place to start.


Some people find fonts specifically designed for dyslexia helpful, but there is no one-size-fits-all solution. Personally, I find a font like Open Dyslexic tricky to read; since our goal was to be as inclusive as possible, we ultimately decided that Open Dyslexic wasn’t the right choice for Wellcome. The most practical (and universal) approach would be to build a standards-compliant site that would allow users to override styles with their own preferred fonts and/or colors. And indeed, users should always be able to override styles. But although customization is great if you know what works for you, in my experience (as someone who was diagnosed with dyslexia quite late), I didn’t always know why something was hard, let alone what might help. I wanted to see if there was more we could do for our users.

Mariane Dear, our senior graphic designer, was already negotiating with the type designer (Gareth Hague of Alias) about modifying some aspects of Progress Two. What if we could incorporate some of the BDA’s recommendations? What if we could create something that felt unique and memorable, but was also more dyslexia friendly? That would be cool. So that’s what we set out to do.

Welcome, Wellcome Bold

When I first saw Progress Two, I wasn’t particularly keen on it—but I had to admit it met the confident, energetic aspirations of our rebranding project. And even though I didn’t initially love it, I think our new customized version, Wellcome Bold, has “grown up” without losing its unique personality. I’ve come to love what it has evolved into.

We used the BDA’s checklist as a starting point to analyze and address the legibility of the letterforms and how they might be improved.

Illusion number 1

If uppercase I, lowercase l, and numeral 1 look too similar, some readers might get confused. We found that the capital I and lowercase l of Progress Two weren’t distinct enough, so Hague added a little hook to the bottom of the l.

Capital I, lowercase l, and numeral 1 show how Progress Two metamorphosed into Wellcome Bold. (All glyph illustrations by Eleanor Ratliff.)

Modern modem

In some typefaces, particularly if not set well, r and n can run together and appear to form an m: modern may be read as modem, for example. Breaking the flow between the two shapes differentiates them better.

From Progress Two to Wellcome Bold: lowercase r and n were tweaked to prevent the two glyphs from running together when set next to each other.


Counters

Counters are the openings in the middle of letterforms. Generally speaking, the bigger the counters, the more distinct the letters.

Highlighted counters in Wellcome Bold’s lowercase b, a, e, o, and q.


Mirrored letters

Because some people with dyslexia perceive letters as flipped or mirrored, the BDA recommends that b and d, and p and q, be easily distinguishable.

Lowercase d and b were modified to make them more easily distinguishable in Wellcome Bold.

Word shapes

Most readers don’t read letter by letter, but by organizing letterforms into familiar word shapes. We modified Progress Two not just to make things easier for readers who are dyslexic; we did it as part of a wider inclusive design process. We wanted to make accessibility a central part of our design principles so that we could create an easier experience for everyone.

Test, test, and test again

In the course of our usability testing, we had the good fortune to be able to work with participants with accessibility needs in each round, including individuals with dyslexia, those on the autism spectrum, and users of screen readers.

Once we started introducing changes, we were anxious to make sure we were heading in the right direction. Nancy Willacy, our lead user experience practitioner, suggested that a good way to uncover any urgent issues would be to ask a large number of respondents to participate in a survey. The media team helped us out by tweeting our survey to a number of charities focused on dyslexia, dyspraxia, autism, and ADHD, and the charities were kind enough to retweet us to their followers.

Although we realize that our test was of the quick-and-dirty variety, we got no feedback indicating any critical issues, which reassured us that we were probably on the right track. Respondents to the survey had a slight preference for the adjusted version of Progress Two over Helvetica (we chose a familiar sans serif as a baseline); the unadjusted version came in last.

Anyone can do it

Even if you don’t have a friendly type designer you can collaborate with to tailor your chosen fonts, you can still do a lot to be typographically accessible.


When selecting a typeface, look for letterforms that are clear and distinct.

  • Look closely and critically. Keeping the checklists we’ve mentioned in mind, watch for details that could trip readers up, like shapes that aren’t differentiated well enough or counters that are too closed.
  • To serif or not to serif? Some research has shown that sans serifs are easier to read on screen, since, especially at lower resolutions, serifs can get muddy, make shapes less distinct, or even disappear altogether. If your existing brand includes a typeface with fine serifs or ornamental details, use it sparingly and make sure you test it with a range of users and devices.
  • Use bold for emphasis. Some research has shown that italics and all-caps text reduce reading speed. Try using bold for emphasis instead.
  • Underline with care. Underlines are great for links, but a standard text-decoration underline obscures descenders. In the future, the text-decoration-skip property may be able to help with that; in the meantime, consider alternatives to the default.


Think carefully about spaces between, around, and within letterforms and clusters of words.


The words you use are just as important as what you do with them.

  • Keep it short. Avoid long sentences. Keep headings clear and concise.
  • Avoid jargon. Write for your audience and cut the jargon unless it’s absolutely necessary. Acronyms and academic terms that might be appropriate for a team of specialists would be totally out of place in a more general article, for example.

So everything’s fixed, right?


There is no perfect typeface. Although we worked hard to improve the experience of the Wellcome site, some people will still struggle with our customized headline font, and with the Helvetica, Arial, sans-serif font stack we’re using for body text. However hard we try, some people may need to override defaults and choose the fonts and colors that work best for them. We can respect that by building sites that allow modification without breaking.

Pragmatic perfection

The trouble with expecting perfection in one go is that it can be tempting to take the safe route, to go with the tried and tested. But giving ourselves room to test and refine also gives us the freedom to take risks and try original approaches.

Putting ourselves out there can feel uncomfortable, but Wellcome wants to fund researchers who have the big ideas and the chutzpah to take big risks. So shouldn’t those of us building the site be willing to do the same? Yes, maybe we’ll make mistakes, but we’ll learn from them. If we had chosen a safe typeface for our headline font, we wouldn’t be having these conversations; we wouldn’t have done the research that led us to make changes; we wouldn’t have discovered the new issues that never came up in any of our research.

The process sparked much debate at Wellcome, which opened doors to some intriguing opportunities. In the future, I won’t be so reticent about daring to try new things.


09:06 New in Symfony 3.3: JSON authentication (Symfony Blog)

Symfony 3.2 was released just a few days ago, but we've already started working on Symfony 3.3, which will be released at the end of May 2017. This is the first article of the "New in Symfony 3.3" series where we'll showcase the most relevant new features of this version.

Contributed by Kévin Dunglas in #18952.

The Symfony Security component provides out-of-the-box support for several authentication mechanisms, such as form logins and HTTP basic. In Symfony 3.3 we added a new mechanism based on JSON. It's similar to the traditional form login, but it takes a JSON document as input, which is convenient for APIs, especially those used in combination with JWT.

In practice, first you need to add the json_login option to your firewall and define the URL used to log in users:

# app/config/security.yml
security:
    # ...
    firewalls:
        main:
            # ...
            json_login:
                check_path: /login
Then, create an empty controller associated with that URL. The controller must be empty because Symfony intercepts and handles this request (it checks the credentials, authenticates the user, throws an error if needed, etc.):

// src/AppBundle/Controller/SecurityController.php
namespace AppBundle\Controller;

use Sensio\Bundle\FrameworkExtraBundle\Configuration\Route;
use Symfony\Bundle\FrameworkBundle\Controller\Controller;
use Symfony\Component\HttpFoundation\Request;

class SecurityController extends Controller
{
    /**
     * @Route("/login", name="login")
     */
    public function loginAction(Request $request)
    {
    }
}

And that's all. You can now log in users by sending a JSON document like the following to the /login URL:

{ "username": "dunglas", "password": "foo1234" }
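Any HTTP client can produce such a request. As a minimal illustration (the localhost URL is hypothetical; adjust it to wherever the firewall is mounted), the same document can be posted with PHP's stream functions:

```php
// Build the JSON credentials document shown above
$payload = json_encode(['username' => 'dunglas', 'password' => 'foo1234']);

// Prepare a POST request with the JSON body and the appropriate header
$context = stream_context_create([
    'http' => [
        'method'  => 'POST',
        'header'  => "Content-Type: application/json\r\n",
        'content' => $payload,
    ],
]);

// Hypothetical URL; uncomment to send the request against a running app
// $response = file_get_contents('http://localhost:8000/login', false, $context);
```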

You can read the new How to Build a JSON Authentication Endpoint article for more details and to learn about its customization options.

03:23 Oh look, a new site! (Joel on Software)

I’ve moved to WordPress. There may be some bugs!

News stories from Monday 05 December, 2016

06:01 This week's sponsor: ENVATO ELEMENTS (A List Apart: The Full Feed)

ENVATO ELEMENTS, the only subscription made with designers in mind. 9000+ quality fonts, graphics, templates and more. Get started today.

News stories from Sunday 04 December, 2016

09:08 A week of symfony #518 (28 November - 4 December 2016) (Symfony Blog)

This week Symfony 3.2.0 was published after six months of intense development activity and including more than 150 new features. Meanwhile, the SymfonyCon Berlin 2016 conference took place and a new project called "Symfony Flex" was announced to improve the way you work with Symfony.

Symfony development highlights

2.7 changelog:

  • 6d01ad2: [WebProfilerBundle] don't display menu items for empty profiler panels
  • ec937cb: [Validator] ensure the proper context for nested validations
  • 49addbe: [ClassLoader] use only forward slashes in generated classmap
  • c360a22: [Config] ConfigCache::isFresh() should return false when unserialize() fails
  • 5d7f4e1: [TwigBundle] don't register the twig loader twice
  • 5f4d8e9: [Console] fixed wrong handling of multiline arg/opt descriptions
  • 4a7fbdd: [DependencyInjection] PhpDumper::hasReference() shouldn't search references in lazy service
  • fe15381: [Form] fixed "empty_value" option deprecation
  • 7ef0951: [Form] fixed FileType when using the "multiple" option

2.8 changelog:

  • 03c79be: [WebProfilerBundle] increased the maximum allowed width for the dump panel

3.1 changelog:

  • 6459349: [Serializer] removed unused GetSetMethodNormalizer::denormalize
  • e4a2c17: [FrameworkBundle] mark alias as private during creation
  • b699e4b: [Yaml] fixed the inline level for dumped multi-line strings

3.2 changelog:

  • 821e7bb: [WebProfilerBundle] don't use request attributes in RouterController
  • 8c2a77b: [Routing] fail properly when a route parameter name cannot be used as a PCRE subpattern name
  • e62b602: [FrameworkBundle] improved performance of ControllerNameParser
  • 24c40e0: [FrameworkBundle] don't rely on any parent definition for "cache.annotations"
  • ee4ae55: [VarDumper] enhance performance of ExceptionCaster and DataCollector
  • c500a3e: [FrameworkBundle] forbid env parameters in routing configuration
  • 0c1e9ab: [Form] removed unused var cloner property
  • f8b2a18: [WebProfilerBundle] use the VarDumper in the Translator panel

Master changelog:

  • 6a0ee38: [WebProfilerBundle] updated the "Symfony Config" panel in the profiler
  • fe454e4: [Serializer] allowed to specify a single value in @Groups
  • 122fae8: [DomCrawler] added support for formaction and formmethod attributes
  • d6e8937: [Security] added a JSON authentication listener



News stories from Wednesday 30 November, 2016

10:33 Symfony 3.2.0 released (Symfony Blog)

Symfony 3.2.0 has just been released. As with any other Symfony minor release, our backward compatibility promise applies, which means that you should be able to upgrade easily without changing anything in your code.

We've already blogged about some great new 3.2 features, but here is a curated list of the most relevant changes (this release contains 150+ new features in total):

New Component

  • Workflow: Symfony 3.2 introduces a new Workflow component (fabpot, lyrixx, Nyholm) (11882 and 19629)


FrameworkBundle

  • reduced drastically the number of mandatory dependencies (fabpot) (Doctrine Annotations lib in 20097, Security Core and Security CSRF components in 20075, Templating component in 20072, Translation component in 20070, Asset component in 20067)
  • added new cache warmers (tgalopin) (annotations in 18533, validator in 19485, serializer in 19507)
  • added support for prioritizing form type extension tags (dmaicher) (19790)
  • added CachePoolClearerPass for weak cache pool refs in cache clearers (nicolas-grekas) (19900)
  • added cache:pool:clear command (nicolas-grekas) (19891)
  • changed paths to relative ones in templates paths cache (tgalopin) (19687)
  • allowed to specify a domain when updating translations (antograssiot) (19325)
  • changed server:run logs to be displayed by default (nicolas-grekas) (19174)
  • added file helper to Controller (dfridrich) (18502)
  • moved YamlLintCommand to the Yaml component (chalasr) (19139)
  • added phpstorm ide (hason) (20019)
  • added path argument to dump a specific option in debug:config (chalasr) (18940)


Twig

Twig minimum version for all supported Symfony versions is now 1.28.

  • made Twig cache independent of the project root directory (fabpot) (20285)
  • refactored Twig extensions to decouple definitions from implementations (fabpot) (20093)
  • added Twig runtimes for "critical" Twig extensions (fabpot) (20094)
  • added a Twig runtime loader (fabpot) (20092)
  • added access to token from twig AppVariable (HeahDude) (19991)


Serializer

  • added a CSV encoder (dunglas) (19197)
  • added a YAML encoder (dunglas) (19326)
  • added support for specifying format for DateTimeNormalizer::denormalize (teohhanhui) (20217)
  • allowed to easily use static constructors (Ener-Getick) (19137)
  • deprecated SerializerAwareEncoder (JhonnyL) (18483)


Console

  • improved support for one command apps (lyrixx) (16906)
  • added errors display in quiet mode (multi-io) (18781)
  • allowed multiple options to be set (SpacePossum) (19495)
  • added ability to regress the ProgressBar (jameshalsall) (19824)
  • added Lockable trait (geoffrey-brier) (18471)
  • added ConsoleLogger::hasErrored() (nicolas-grekas) (19090)
  • simplified simulation of user inputs in CommandTester (chalasr) (18710)
  • centralized input stream in base Input class (chalasr) (18999)
  • added aliases in command description instead of in different lines in application description (juanmirod) (18790)
  • added support for hidden commands (jwdeitch, Jordan Deitch) (20029)


ExpressionLanguage

  • made cache PSR6 compliant (Alexandre GESLIN) (19741)
  • added a way to hook on each node when dumping the AST (nicolas-grekas) (19060)
  • added a way to dump the AST (lyrixx) (19013)


DependencyInjection

  • allowed injecting ENV parameters at runtime using %env(MY_ENV_VAR)% (nicolas-grekas) (19681)
  • added automatic detection of definition classes when possible (Ener-Getick) (19191)
  • added priority support for CompilerPass classes (Ener-Getick) (18022)
  • deprecated access to private shared services. (hhamon) (19146)
  • added support for short services configurators syntax (voronkovich) (19190)
  • fixed ini file values conversion (fabpot) (20232)
  • added a trait to sort tagged services (iltar) (18482)


Security

  • added a SecurityUserValueResolver for controllers (iltar) (18510, tweaked in 19452)
  • introduced a FirewallConfig class accessible from FirewallContext (chalasr) (19398)
  • allowed runtime configuration of hash algorithm (nicolas-grekas) (19843)
  • exposed the required roles in AccessDeniedException (Nicofuma) (19473)


Cache

  • added PDO and Doctrine DBAL adapters (nicolas-grekas) (19519)
  • added tags based invalidation (nicolas-grekas) (19047)
  • added NullAdapter to disable cache (tgalopin) (18825)
  • added PhpArrayAdapter to use shared memory on PHP 7.0 (tgalopin) (18823)
  • added PhpFilesAdapter (trakos, nicolas-grekas) (18894)
  • added generic TagAwareAdapter wrapper (replaces TagAwareRedisAdapter) (nicolas-grekas) (19524)


Routing

  • added support for unicode requirements (nicolas-grekas) (19604)
  • added support for appending a document fragment (rodnaph) (12979)
  • added support for array values in route defaults (xabbuh) (11394)
  • fixed URL generation to be compliant with PHP_QUERY_RFC3986 (jameshalsall) (19639)


Yaml

  • moved YamlLintCommand to the Yaml component (chalasr) (19139)
  • fixed parsing multi-line mapping values (xabbuh) (19304)
  • added Yaml::PARSE_EXCEPTION_ON_DUPLICATE to throw exceptions on duplicates (Alex Pott) (19529)
  • deprecated missing space after mapping key colon (xabbuh) (19504)
  • added support for parsing PHP constants (HeahDude) (18626)
  • deprecated comma separators in floats (xabbuh) (18785)
  • allowed using _ in some numeric notations (Taluu) (18486)


VarDumper

  • Add support for XmlReader (Taluu) (19151)
  • Add support for Redis (nicolas-grekas) (18675)
  • made exception dumps more compact (nicolas-grekas) (19289)
  • added line in trace indexes (nicolas-grekas) (19657)
  • handled attributes in Data clones for more semantic dumps (nicolas-grekas) (19797)
  • allowed dumping subparts of cloned Data structures (nicolas-grekas) (19672)
  • added $dumper->dump(..., true); (nicolas-grekas) (19755)
  • added ClassStub for clickable & shorter PHP identifiers (nicolas-grekas) (19826)
  • added LinkStub to create links in HTML dumps (nicolas-grekas) (19816)
  • made the line clickable to toggle dumps (nicolas-grekas) (19796)

WebProfiler Bundle

  • switch to VarDumper when displaying data in the profiler (wouterj, nicolas-grekas) (19614)
  • added support for Content-Security-Policy context (romainneutron) (18568)
  • added a default ide file link web view (jeremyFreeAgent) (19973)
  • added expansion of form nodes that contains children with errors (yceruto) (19339)
  • added current firewall information in Profiler (chalasr) (19490)
  • added support for window.fetch calls in ajax section (ivoba) (19576)

PhpUnit Bridge

  • replaced ErrorAssert by @expectedDeprecation (nicolas-grekas) (20255)
  • allowed configuring removed deps and phpunit versions (nicolas-grekas) (20256)
  • added a triggered errors assertion helper (xabbuh) (18880)
  • added bin/simple-phpunit wrapper (=phpunit - yaml - prophecy) (nicolas-grekas) (19915)
  • added support for native E_DEPRECATED (nicolas-grekas) (20040)


Validator

  • added support for egulias/email-validator 2.x (xabbuh) (19153)
  • allowed validating multiple groups in one GroupSequence step (enumag) (19982)
  • added context object method callback to choice validator (Peter Bouwdewijn) (19745)
  • made strict the default option for choice validation (peterrehm) (19257)


Form

  • changed FormTypeGuesserChain to accept Traversable (enumag) (20047)
  • added a DateInterval form type (MisatoTremor) (16809)
  • deprecated using Form::isValid() with an unsubmitted form (Ener-Getick) (17644)
  • added CallbackChoiceLoader (HeahDude) (18332)


HttpFoundation

  • added Request::isMethodIdempotent method (dunglas) (19322)
  • added support for the SameSite attribute in cookies. (iangcarroll) (19104)
  • added private by default when setting Cache-Control to no-cache (fabpot) (19143)
  • removed default cache headers for 301 redirects (e-moe) (18220)


Miscellaneous

  • [Process] allowed inheriting env vars instead of replacing them (nicolas-grekas) (19053)
  • [Filesystem] added a cross-platform readlink method (tgalopin) (17498)
  • [Filesystem] added a feature to create hardlinks for files (andrerom) (15458)
  • [DoctrineBridge] added a way to select the repository used by the UniqueEntity validator (ogizanagi) (15002)
  • [HttpKernel] allowed bundles to declare classes and annotated classes to compile using patterns (tgalopin) (19205)
  • [HttpKernel] added convenient method ArgumentResolver::getDefaultArgumentValueResolvers (romainneutron) (19011)
  • [Translation] replaced %count% with a given number out of the box (bocharsky-bw) (19795)
  • [Config] added ExprBuilder::ifEmpty() (ogizanagi) (19764)
  • [PropertyInfo] extracted logic for converting a php doc to a Type (Ener-Getick) (19484)
  • [PropertyInfo] added support for singular adder and remover (dunglas) (18337)
  • [DomCrawler] added support for XPath expression evaluation (jakzal) (19430)
  • [ClassLoader] added ClassCollectionLoader::inline() to generate inlined-classes files (nicolas-grekas) (19276)
  • [PropertyAccess] added PSR-6 cache (dunglas) (16838)
  • [Monolog] added DebugProcessor (nicolas-grekas) (20416)
  • added cache reload when new files are added (fabpot) (20121)

You can read more about this new version by reading the Living on the Edge articles on this blog. Also read the UPGRADE guide for Symfony 3.2.

Want to upgrade to this new release? Fortunately, because Symfony protects backwards-compatibility very closely, this should be quite easy. Read our upgrade documentation to learn more.

Want to check the integrity of this new version? Read my blog post about signing releases.

Want to be notified whenever a new Symfony release is published? Or when a version is not maintained anymore? Or only when a security issue is fixed? Consider subscribing to the Symfony Roadmap Notifications.


News stories from Monday 28 November, 2016

06:01 This week's sponsor: O’REILLY DESIGN CONFERENCE (A List Apart: The Full Feed)

O’REILLY DESIGN CONFERENCE - get the skills and insights you need to design the products of the future. Save 20% with code ALIST

News stories from Sunday 27 November, 2016

11:41 A week of symfony #517 (21-27 November 2016) (Symfony Blog)

This week Symfony released the 2.7.21, 2.8.14 and 3.1.7 maintenance versions. In addition, it published 3.2.0 Release Candidate 2, which will be the last version before the final 3.2.0 release in a few days. Lastly, next week the SymfonyCon Berlin 2016 conference will gather the entire community for the world's biggest Symfony event.

Symfony development highlights

2.7 changelog:

  • 7047e4d: [Process] do feature test before enabling TTY mode
  • 30d161c: [Form] added support for large integers
  • 34b9fd6, bbddeec: [HttpKernel] reverted BC breaking change of Request::isMethodSafe()
  • af9c279: [DependencyInjection] aliases should preserve the aliased invalid behavior
  • e62a390: [DependencyInjection] initialized properties before method calls
  • 821e7bb: [WebProfilerBundle] don't use request attributes in RouterController
  • 8c2a77b: [Routing] fail properly when a route parameter name cannot be used as a PCRE subpattern name
  • e62b602: [FrameworkBundle] improved performance of ControllerNameParser

3.1 changelog:

  • bd34b67: [FrameworkBundle] mark cache.default_*_provider services private
  • ed3fc9f: [Process] fixed process continuing after reached timeout using getIterator()

3.2 changelog:

  • 59f9949: [FrameworkBundle] avoid warming up the validator cache for non-existent class
  • 3c1361c: [DependencyInjection] allow null as default env value
  • 6f138b8: [SecurityBundle] fixed FirewallConfig nullable arguments
  • 808cc22: [WebProfilerBundle] fixed deprecated uses of profiler_dump
  • cabc225: [FrameworkBundle] added framework.cache.prefix_seed for predictable cache key prefixes
  • 85033ff: [HttpKernel] deprecate checking for cacheable HTTP methods in Request::isMethodSafe()
  • 380c268: [HttpKernel] fix exception when serializing request attributes
  • 215208e: [Workflow] fixed graphviz dumper for state machine
  • 5e19c51: [Doctrine Bridge] use cache.prefix.seed parameter for generating cache namespace
  • 24c40e0: [FrameworkBundle] don't rely on any parent definition for "cache.annotations"

Newest issues and pull requests

Twig development highlights

Master changelog:

  • e7aa8e5: fixed unconsistent behavior with "get" and "is" methods
  • 717365d: deprecated support for mbstring.func_overload != 0

They talked about us

Favicon for Symfony Blog 06:21 Symfony 3.2.0-RC2 released » Post from Symfony Blog Visit off-site link

Symfony 3.2.0-RC2 has just been released. Here is a list of the most important changes:

  • bug #20601 [FrameworkBundle] Don't rely on any parent definition for "cache.annotations" (nicolas-grekas)
  • bug #20638 Fix legacy tests that do not trigger any depreciation (julienfalque)
  • bug #20374 [FrameworkBundle] Improve performance of ControllerNameParser (enumag)
  • bug #20474 [Routing] Fail properly when a route parameter name cannot be used as a PCRE subpattern name (fancyweb)
  • bug #20616 [Bridge/Doctrine] Use cache.prefix.seed parameter for generating cache namespace (nicolas-grekas)
  • bug #20566 [DI] Initialize properties before method calls (ro0NL)
  • bug #20583 [Workflow] Fixed graphviz dumper for state machine (lyrixx)
  • bug #20621 [HttpKernel] Fix exception when serializing request attributes (nicolas-grekas)
  • bug #20609 [DI] Fixed custom services definition BC break introduced in ec7e70fb… (kiler129)
  • bug #20598 [DI] Aliases should preserve the aliased invalid behavior (nicolas-grekas)
  • bug #20600 [Process] Fix process continuing after reached timeout using getIterator() (chalasr)
  • bug #20603 [HttpKernel] Deprecate checking for cacheable HTTP methods in Request::isMethodSafe() (nicolas-grekas)
  • bug #20602 [HttpKernel] Revert BC breaking change of Request::isMethodSafe() (nicolas-grekas)
  • bug #20610 [FrameworkBundle] Add framework.cache.prefix_seed for predictible cache key prefixes (nicolas-grekas)
  • bug #20595 [WebProfilerBundle] Fix deprecated uses of profiler_dump (nicolas-grekas)
  • bug #20589 [SecurityBundle] Fix FirewallConfig nullable arguments (ogizanagi)
  • bug #20590 [DI] Allow null as default env value (sroze)
  • bug #20499 [Doctrine][Form] support large integers (xabbuh)
  • bug #20559 [FrameworkBundle] Avoid warming up the validator cache for non-existent class (Seldaek)
  • bug #20576 [Process] Do feat test before enabling TTY mode (nicolas-grekas)
  • bug #20577 [FrameworkBundle] Mark cache.default_*_provider services private (nicolas-grekas)
  • bug #20550 [YAML] Fix processing timestamp strings with timezone (myesain)
  • bug #20543 [DI] Fix error when trying to resolve a DefinitionDecorator (nicolas-grekas)
  • bug #20544 [PhpUnitBridge] Fix time-sensitive tests that use data providers (julienfalque)

Want to upgrade to this new release? Fortunately, because Symfony protects backwards-compatibility very closely, this should be quite easy. Read our upgrade documentation to learn more.

Want to check the integrity of this new version? Read my blog post about signing releases.

Want to be notified whenever a new Symfony release is published? Or when a version is not maintained anymore? Or only when a security issue is fixed? Consider subscribing to the Symfony Roadmap Notifications.


News stories from Thursday 24 November, 2016

Favicon for Symfony Blog 13:20 New in Symfony 3.2: Misc. improvements » Post from Symfony Blog Visit off-site link

This is the last article in the "New in Symfony 3.2" series. Symfony 3.2 will be released at the end of this month after six months of work and several hundred pull requests (more than 200 of them labeled as "new features").

VarDumper improvements

Contributed by
Nicolas Grekas.

The VarDumper component gained lots of new features and improvements in Symfony 3.2. One of the most interesting additions is the option to return the dumped contents instead of outputting them. This makes it possible to store the dump in a string when using the component's methods instead of the Twig dump() function:

use Symfony\Component\VarDumper\Cloner\VarCloner;
use Symfony\Component\VarDumper\Dumper\CliDumper;

$cloner = new VarCloner();
$dumper = new CliDumper();

// Before: dump the contents to the output
$dumper->dump($cloner->cloneVar($variable));

// After: store the dumped contents in a string
$result = $dumper->dump($cloner->cloneVar($variable), true);

Other interesting new features are the maxDepth and maxStringLength display options (see #18948) and the possibility to dump subparts of cloned data structures (see #19672).

Allow compiling classes that use annotations

Contributed by
Titouan Galopin
in #19205.

A simple way to improve the performance of Symfony applications is to use the addClassesToCompile() method in your bundles to add some of your classes to the bootstrap file generated by Symfony, reducing file I/O operations.

However, a caveat of this method is that you can't compile classes that use annotations. In Symfony 3.2, we added a new method called addAnnotatedClassesToCompile() to allow caching those classes too. An added bonus of compiling the classes with annotations is that the annotation reader caches are warmed up too.

Lastly, both addClassesToCompile() and addAnnotatedClassesToCompile() now support declaring classes using wildcards:

    $this->addAnnotatedClassesToCompile([
        // classes defined using wildcards (class names here are illustrative)
        'AppBundle\\Controller\\',
        'AppBundle\\Entity\\Product*',
        // class defined explicitly using its FQCN
        'AppBundle\\Manager\\ProductManager',
    ]);

Removed dependencies from the FrameworkBundle

Contributed by
Fabien Potencier.

The Symfony FrameworkBundle turns the decoupled Symfony Components into a web framework. In the previous Symfony versions, this bundle defined lots of hard dependencies with those components.

In Symfony 3.2, we've eliminated lots of hard dependencies, so these components won't be installed in your application if you don't use them: the Templating component, the Translation component, the Asset component, the Security Core and Security CSRF components, and the Doctrine annotations library.

Added an AST dumper for ExpressionLanguage

Contributed by
Grégoire Pineau
in #19013.

In Symfony 3.2, the ExpressionLanguage component added a way to dump the AST (Abstract Syntax Tree) of expressions. This makes it possible to analyze expressions statically (to validate them, optimize them, etc.) and even to modify them dynamically.

Refactored Twig extensions

Contributed by
Fabien Potencier
in #20093, #20094.

Starting from Twig 1.26, the implementation of filters, functions and tests can use a different class than the extension they belong to. In Symfony 3.2, the most critical Twig extensions have been refactored to implement this feature, such as HttpKernelExtension, which defines the render() and controller() Twig functions. In addition, some optimizations have been introduced to avoid loading Twig extensions when their associated component is not installed.


News stories from Tuesday 22 November, 2016

Favicon for A List Apart: The Full Feed 16:00 Insisting on Core Development Principles » Post from A List Apart: The Full Feed Visit off-site link

The web community talks a lot about best practices in design and development: methodologies that are key to reaching and retaining users, considerate design habits, and areas that we as a community should focus on.

But let’s be honest—there are a lot of areas to focus on. We need to put users first, content first, and mobile first. We need to design for accessibility, performance, and empathy. We need to tune and test our work across many devices and browsers. Our content needs to grab user attention, speak inclusively, and employ appropriate keywords for SEO. We should write semantic markup and comment our code for the developers who come after us.

Along with the web landscape, the expectations for our work have matured significantly over the last couple of decades. It’s a lot to keep track of, whether you’ve been working on the web for 20 years or only 20 months.

If those expectations feel daunting to those of us who live and breathe web development every day, imagine how foreign all of these concepts are for the clients who hire us to build a site or an app. They rely on us to be the experts who prioritize these best practices. But time and again, we fail our clients.

I’ve been working closely with development vendor partners and other industry professionals for a number of years. As I speak with development shops and ask about their code standards, workflows, and methods for maintaining consistency and best practices across distributed development teams, I’m continually astonished to hear that most of the best practices listed above are often not part of any development project unless the client specifically asks for them.

Think about that.

Development shops are relying on the communications team at a finance agency to know that they should request their code be optimized for performance or accessibility. I’m going to go out on a limb here and say that shouldn’t be the client’s job. We’re the experts; we understand web strategy and best practices—and it’s time we act like it. It’s time for us to stop talking about each of these principles in a blue-sky way and start implementing them as our core practices. Every time. By default.

Whether you work in an internal dev shop or for outside clients, you likely have clients whose focus is on achieving business goals. Clients come to you, the technical expert, to help them achieve their business goals in the best possible way. They may know a bit of web jargon that they can use to get the conversation started, but often they will focus on the superficial elements of the project. Just about every client will worry more about their hero images and color palette than about any other piece of their project. That’s not going to change. That’s okay. It’s okay because they are not the web experts. That’s not their job. That’s your job.

If I want to build a house, I’m going to hire experts to design and build that house. I will have to rely on architects, builders, and contractors to know what material to use for the foundation, where to construct load-bearing walls, and where to put the plumbing and electricity. I don’t know the building codes and requirements to ensure that my house will withstand a storm. I don’t even know what questions I would need to ask to find out. I need to rely on experts to design and build a structure that won’t fall down—and then I’ll spend my time picking out paint colors and finding a rug to tie the room together.

This analogy applies perfectly to web professionals. When our clients hire us, they count on us to architect something stable that meets industry standards and best practices. Our business clients won’t know what questions to ask or how to look into the code to confirm that it adheres to best practices. It’s up to us as web professionals to uphold design and development principles that will have a strong impact on the final product, yet are invisible to our clients. It’s those elements that our clients expect us to prioritize, and they don’t even know it. Just as we rely on architects and builders to construct houses on a solid foundation with a firm structure, so should we design our sites on a solid foundation of code.

If our work doesn’t follow these principles by default, we fail our clients

So what do we prioritize, and how do we get there? If everything is critical, then nothing is. While our clients concentrate on colors and images (and, if we’re lucky, content), we need to concentrate on building a solid foundation that will deliver that content to end users beautifully, reliably, and efficiently. How should we go about developing that solid foundation? Our best bet is to prioritize a foundation of code that will help our message reach the broadest audience, across the majority of use cases. To get to the crux of a user-first development philosophy, we need to find the principles that have the most impact, but aren’t yet implicit in our process.

At a minimum, all code written for general audiences should be:

  • responsive
  • accessible
  • performant

More specifically, it’s not enough to pay lip service to those catch phrases to present yourself as a “serious” dev shop and stop there. Our responsive designs shouldn’t simply adjust the flow and size of elements depending on device width—they also need to consider loading different image sizes and background variants based on device needs. Accessible coding standards should be based on the more recent WCAG 2.0 (Level AA) standards, with the understanding that coding for universal access benefits all users, not just a small percentage (coupled with the understanding that companies whose sites don’t meet those standards are being sued for noncompliance). Performance optimization should consider how image sizes, scripts, and caching can improve page-load speed and decrease the total file size downloaded in every interaction.

Do each of these take time? Sure they do. Development teams may even need additional training, and large teams will need to be prescriptive about how that can be integrated into established workflows. But the more these principles are built into the core functions of all of our products, the less time they will take, and the better all of our services will be.

How do we get there?

In the long run, we need to adjust our workflows so that both front-end and backend developers build these best practices into their default coding processes and methodologies. They should be part of our company cultures, our interview screenings, our value statements, our QA testing scripts, and our code validations. Just like no one would think of building a website layout using tables and 1px spacer images anymore (shout out to all the old-school webmasters out there), we should reach a point where it’s laughable to think of designing a fixed-width website, or creating an image upload prompt without an alt text field.

If you’re a freelance developer or a small agency, this change in philosophy or focus should be easier to achieve than if you are part of a larger agency. As with any time you and your team expand and mature your skillsets, you will want to evaluate how many extra hours you need to build into the initial learning curves of new practices. But again, each of these principles becomes faster and easier to achieve once they’re built into the workflow.

There is a wealth of books, blogs, checklists, and how-tos you can turn to for reference on designing responsively, making sites accessible, and tuning for performance. Existing responsive frameworks can act as a starting point for responsive development. After developing the overarching layout and flow, the main speed bumps for responsive content arise in the treatment of tables, images, and multimedia elements. You will need to plan to review and think through how your layouts will be presented at different breakpoints. A dedicated embed tool can speed the process for external content embeds.

Many accessibility gaps can be filled by using semantic markup instead of making every element a div or a span. None of the accessible code requirements should be time hogs once a developer becomes familiar with them. The a11y Project’s Web Accessibility Checklist provides an easy way for front-end developers to review their overall code style and learn how to adjust it to be more accessible by default. In fact, writing truly semantic markup should also speed up CSS work, since it’s easier to target the elements you’re truly focused on.
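
As a minimal illustration of that point (the element names and content here are generic examples, not taken from the article), compare a div-only fragment with its semantic equivalent:

```html
<!-- Div-only markup: screen readers announce no structure at all -->
<div class="title">Latest articles</div>
<div class="nav">
  <span onclick="go('/article-one')">Article one</span>
</div>

<!-- Semantic markup: headings, lists, and links are navigable by default -->
<h2>Latest articles</h2>
<nav>
  <ul>
    <li><a href="/article-one">Article one</a></li>
  </ul>
</nav>
```

The semantic version also gives CSS simpler, more stable selectors (`nav a`, `h2`) than a pile of class names on divs.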

The more you focus on meeting each of these principles in the early stages of new projects, the faster they will become your default way of developing, and the time spent on them will become a default part of the process.

Maintaining focus

It’s one thing to tell your team that you want all the code they develop to be responsive, accessible, and performant. It’s another thing entirely to make sure it gets there. Whether you’re a solo developer or manage a team of developers, you will need systems in place to maintain focus. Make sure your developers have the knowledge required to implement the code and techniques that address these needs, and supplement with training when they don’t.

Write value statements. Post lists. Ask at every stage what can be added to the process to make sure these core principles are considered. When you hire new talent, you can add questions into the interview process to make sure your new team members are already up to speed and have the same values and commitment to quality from day one.

Include checkpoints within each stage of the design and development process to ensure your work continues to build toward a fully responsive, accessible, and performant end product. For example, you can adjust the design process to start with mobile wireframes to change team mindsets away from designing for desktop and then trying to backfill mobile and tablet layouts. Another checkpoint should be added when determining color palettes to test foreground and background color sets for accessible color contrast. Add in a step to run image files through a compressor before uploading any graphic assets. Ask designers to use webfonts responsibly, not reflexively. Set a performance budget, and build in steps for performance checks along the way. Soon, your team will simply “know” which features or practices tend to be performance hogs and which are lean. You will need to make sure testing and code reviews look for these things, too.

Nothing worth doing happens by accident. Every time we overlook our responsibilities as designers and developers because it’s faster to cut corners, our products suffer and our industry as a whole suffers. As web professionals, how we work and what we prioritize when no one’s looking make a difference in thousands of little ways to thousands of people we will never meet. Remember that. Our clients and our users are counting on us.


News stories from Monday 21 November, 2016

Favicon for Symfony Blog 04:14 Symfony 3.1.7 released » Post from Symfony Blog Visit off-site link

Symfony 3.1.7 has just been released. Here is a list of the most important changes:

  • bug #20550 [YAML] Fix processing timestamp strings with timezone (myesain)
  • bug #20543 [DI] Fix error when trying to resolve a DefinitionDecorator (nicolas-grekas)
  • bug #20544 [PhpUnitBridge] Fix time-sensitive tests that use data providers (julienfalque)
  • bug #20484 bumped min version of Twig to 1.28 (fabpot)
  • bug #20519 [Debug] Remove GLOBALS from exception context to avoid endless recursion (Seldaek)
  • bug #20455 [ClassLoader] Fix ClassCollectionLoader inlining with __halt_compiler (giosh94mhz)
  • bug #20307 [Form] Fix DateTimeType marked as invalid on request with single_text and zero seconds (LuisDeimos)
  • bug #20480 [FrameworkBundle] Register the ArrayDenormalizer (dunglas)
  • bug #20286 [Serializer] Fix DataUriNormalizer's regex (dunglas)
  • bug #20466 [Translation] fixed nested fallback catalogue using multiple locales. (aitboudad)
  • bug #20465 [#18637][TranslationDebug] workaround for getFallbackLocales. (aitboudad)
  • bug #20453 [Cache] Make directory hashing case insensitive (nicolas-grekas)
  • bug #20440 [TwigBridge][TwigBundle][HttpKernel] prefer getSourceContext() over getSource() (xabbuh)
  • bug #20287 Properly format value in UniqueEntityValidator (alcaeus)
  • bug #20422 [Translation][fallback] add missing resources in parent catalogues. (aitboudad)
  • bug #20378 [Form] Fixed show float values as choice value in ChoiceType (yceruto)
  • bug #20294 Improved the design of the metrics in the profiler (javiereguiluz)
  • bug #20375 [HttpFoundation][Session] Fix memcache session handler (klandaika)
  • bug #20377 [Console] Fix infinite loop on missing input (chalasr)
  • bug #20372 [Console] simplified code (fabpot)
  • bug #20342 [Form] Fix UrlType transforms valid protocols (ogizanagi)
  • bug #20292 Enhance GAE compat by removing some realpath() (nicolas-grekas)
  • bug #20326 [VarDumper] Fix dumping Twig source in stack traces (nicolas-grekas)
  • bug #20321 Compatibility with Twig 1.27 (xkobal)

Want to upgrade to this new release? Fortunately, because Symfony protects backwards-compatibility very closely, this should be quite easy. Read our upgrade documentation to learn more.

Want to check the integrity of this new version? Read my blog post about signing releases.

Want to be notified whenever a new Symfony release is published? Or when a version is not maintained anymore? Or only when a security issue is fixed? Consider subscribing to the Symfony Roadmap Notifications.

Favicon for Symfony Blog 03:43 Symfony 2.8.14 released » Post from Symfony Blog Visit off-site link

Symfony 2.8.14 has just been released. Here is a list of the most important changes:

  • bug #20543 [DI] Fix error when trying to resolve a DefinitionDecorator (nicolas-grekas)
  • bug #20544 [PhpUnitBridge] Fix time-sensitive tests that use data providers (julienfalque)
  • bug #20484 bumped min version of Twig to 1.28 (fabpot)
  • bug #20519 [Debug] Remove GLOBALS from exception context to avoid endless recursion (Seldaek)
  • bug #20455 [ClassLoader] Fix ClassCollectionLoader inlining with __halt_compiler (giosh94mhz)
  • bug #20307 [Form] Fix DateTimeType marked as invalid on request with single_text and zero seconds (LuisDeimos)
  • bug #20466 [Translation] fixed nested fallback catalogue using multiple locales. (aitboudad)
  • bug #20465 [#18637][TranslationDebug] workaround for getFallbackLocales. (aitboudad)
  • bug #20440 [TwigBridge][TwigBundle][HttpKernel] prefer getSourceContext() over getSource() (xabbuh)
  • bug #20422 [Translation][fallback] add missing resources in parent catalogues. (aitboudad)
  • bug #20378 [Form] Fixed show float values as choice value in ChoiceType (yceruto)
  • bug #20294 Improved the design of the metrics in the profiler (javiereguiluz)
  • bug #20375 [HttpFoundation][Session] Fix memcache session handler (klandaika)
  • bug #20377 [Console] Fix infinite loop on missing input (chalasr)
  • bug #20372 [Console] simplified code (fabpot)
  • bug #20342 [Form] Fix UrlType transforms valid protocols (ogizanagi)
  • bug #20292 Enhance GAE compat by removing some realpath() (nicolas-grekas)
  • bug #20326 [VarDumper] Fix dumping Twig source in stack traces (nicolas-grekas)
  • bug #20321 Compatibility with Twig 1.27 (xkobal)

Want to upgrade to this new release? Fortunately, because Symfony protects backwards-compatibility very closely, this should be quite easy. Read our upgrade documentation to learn more.

Want to check the integrity of this new version? Read my blog post about signing releases.

Want to be notified whenever a new Symfony release is published? Or when a version is not maintained anymore? Or only when a security issue is fixed? Consider subscribing to the Symfony Roadmap Notifications.

Favicon for Symfony Blog 03:20 Symfony 2.7.21 released » Post from Symfony Blog Visit off-site link

Symfony 2.7.21 has just been released. Here is a list of the most important changes:

  • bug #20543 [DI] Fix error when trying to resolve a DefinitionDecorator (nicolas-grekas)
  • bug #20484 bumped min version of Twig to 1.28 (fabpot)
  • bug #20519 [Debug] Remove GLOBALS from exception context to avoid endless recursion (Seldaek)
  • bug #20455 [ClassLoader] Fix ClassCollectionLoader inlining with __halt_compiler (giosh94mhz)
  • bug #20307 [Form] Fix DateTimeType marked as invalid on request with single_text and zero seconds (LuisDeimos)
  • bug #20466 [Translation] fixed nested fallback catalogue using multiple locales. (aitboudad)
  • bug #20465 [#18637][TranslationDebug] workaround for getFallbackLocales. (aitboudad)
  • bug #20440 [TwigBridge][TwigBundle][HttpKernel] prefer getSourceContext() over getSource() (xabbuh)
  • bug #20422 [Translation][fallback] add missing resources in parent catalogues. (aitboudad)
  • bug #20378 [Form] Fixed show float values as choice value in ChoiceType (yceruto)
  • bug #20375 [HttpFoundation][Session] Fix memcache session handler (klandaika)
  • bug #20377 [Console] Fix infinite loop on missing input (chalasr)
  • bug #20342 [Form] Fix UrlType transforms valid protocols (ogizanagi)
  • bug #20292 Enhance GAE compat by removing some realpath() (nicolas-grekas)
  • bug #20321 Compatibility with Twig 1.27 (xkobal)

Want to upgrade to this new release? Fortunately, because Symfony protects backwards-compatibility very closely, this should be quite easy. Read our upgrade documentation to learn more.

Want to check the integrity of this new version? Read my blog post about signing releases.

Want to be notified whenever a new Symfony release is published? Or when a version is not maintained anymore? Or only when a security issue is fixed? Consider subscribing to the Symfony Roadmap Notifications.


News stories from Tuesday 15 November, 2016

Favicon for A List Apart: The Full Feed 16:00 The Coming Revolution in Email Design » Post from A List Apart: The Full Feed Visit off-site link

Email, the web’s much maligned little cousin, is in the midst of a revolution—one that will change not only how designers and developers build HTML email campaigns, but also the way in which subscribers interact with those campaigns.

Despite the slowness of email client vendors to update their rendering engines, email designers are developing new ways of bringing commonplace techniques on the web to the inbox. Effects like animation and interactivity are increasingly used by developers to pull off campaigns once thought impossible. And, for anyone coming from the world of the web, there are more tools, templates, and frameworks than ever to make that transition as smooth as possible. For seasoned email developers, these tools can decrease email production times and increase the reliability and efficacy of email campaigns.

Perhaps more importantly, the email industry itself is in a state of reinvention. For the first time, email client vendors—traditionally hesitant to update or change their rendering engines—are listening to the concerns of email professionals. While progress is likely to be slow, there is finally hope for improved support for HTML and CSS in the inbox.

Although some problems still need to be addressed, there has never been a better time to take email seriously. For a channel that nearly every business uses, and that most consumers can’t live without, these changes signal an important shift in a thriving industry—one that designers, developers, and strategists for the web should start paying attention to.

Let’s look at how these changes are manifesting themselves.

The web comes to email

It’s an old saw that email design is stuck in the past. For the longest time, developers have been forced to revisit coding techniques that were dated even back in the early 2000s if they wanted to build an HTML email campaign. Locked into table-based layouts and reliant on inline styles, most developers refused to believe that email could do anything more than look serviceable and deliver some basic content to subscribers.

For a few email developers, though, frustrating constraints became inspiring challenges and the catalyst for a variety of paradigm-shifting techniques.

When I last wrote about email for A List Apart, most people were just discovering responsive email design. Practices that were common on the web—the use of fluid grids, fluid images, and media queries—were still brand new to the world of email marketing. However, the limitations of some email clients forced developers to completely rethink responsive email.

Until recently, Gmail refused to support media queries (and most embedded styles), leaving well-designed, responsive campaigns looking disastrous in mobile Gmail apps. While their recently announced update to support responsive emails is a huge step forward for the community, the pioneering efforts of frustrated email developers shouldn’t go unnoticed.

Building on the work first introduced by MailChimp’s Fabio Carneiro, people like Mike Ragan and Nicole Merlin developed a set of techniques typically called hybrid coding. Instead of relying on media queries to trigger states, hybrid emails are fluid by default, leaving behind fixed pixels for percentage-based tables. These fluid tables are then constrained to appropriate sizes on desktop with the CSS max-width property and conditional ghost tables for Microsoft Outlook, which doesn’t support max-width. Combined with Julie Ng’s responsive-by-default images, hybrid coding is an effective way for email developers to build campaigns that work well across nearly every popular email client.

<img alt="" src="" width="600" border="0" style="display: block; width: 100%; max-width: 100%; min-width: 100px; font-family: sans-serif; color: #000000; font-size: 24px;" />

Responsive-by-default images with HTML attributes and inline CSS.
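
The hybrid pattern described above can be sketched roughly as follows (the 600px width and the surrounding markup are illustrative, not taken from the article):

```html
<!--[if mso]>
<table role="presentation" width="600" align="center"><tr><td>
<![endif]-->
<table role="presentation" align="center" cellpadding="0" cellspacing="0" border="0"
       style="width: 100%; max-width: 600px;">
  <tr>
    <td style="padding: 20px; font-family: sans-serif;">
      Fluid by default; capped at 600px by max-width in most clients,
      and by the conditional "ghost table" in Outlook, which ignores max-width.
    </td>
  </tr>
</table>
<!--[if mso]>
</td></tr></table>
<![endif]-->
```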

More recently, two other methods have emerged that address the issues with mobile email using more advanced techniques. Both Rémi Parmentier’s Fab Four technique and Stig Morten Myre’s mobile-first approach take the concept of mobile-first development so common on the web and apply it to email. Instead of relying on percentage-based fluid tables, both techniques take advantage of the CSS calc function to determine table and table cell widths, allowing for more adaptable emails across a wide range of clients. And, in both cases, developers can largely drop the use of tables in their markup (save for Microsoft ghost tables), creating emails that hew closer to modern web markup.
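
The Fab Four idea can be illustrated with a sketch like this (the 480px breakpoint and two-column split are assumed values): each column declares three widths, and the calc() result forces the effective width against either the minimum or the maximum depending on the container size.

```html
<!-- If the container is wider than 480px, calc() yields a large negative
     value and min-width wins (columns sit side by side at 50%); if it is
     narrower, calc() yields a huge value and max-width wins (100%, stacked). -->
<div style="display: inline-block; vertical-align: top;
            min-width: 50%; max-width: 100%;
            width: calc((480px - 100%) * 480);">
  Column one
</div><div style="display: inline-block; vertical-align: top;
            min-width: 50%; max-width: 100%;
            width: calc((480px - 100%) * 480);">
  Column two
</div>
```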

Moving beyond responsive layouts, email designers are increasingly adding animation and interactivity to their campaigns, creating more engaging experiences for subscribers. Animated GIFs have long been a staple of email design, but CSS animations are becoming more prevalent. Basic transitions and stylistic flourishes like Email Weekly’s heart animation or Nest’s color-shifting background colors are relatively easy to implement, fall back gracefully when not supported, and give email designers more options to surprise and delight their audiences.
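
A color-shifting background in the spirit of Nest’s campaign can be approximated with a few lines of CSS; clients that ignore embedded styles or animation simply keep the static fallback color (the selector and colors here are made up for illustration):

```html
<style>
  @keyframes bg-shift {
    0%, 100% { background-color: #1e88e5; }
    50%      { background-color: #43a047; }
  }
  .hero {
    /* Static fallback for clients that ignore the animation */
    background-color: #1e88e5;
    animation: bg-shift 12s ease-in-out infinite;
  }
</style>
<td class="hero" bgcolor="#1e88e5" style="padding: 40px;">
  Hero content
</td>
```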

Nest’s keyframe-animation-driven shifting background colors. Image courtesy of Nest.

Combined with the checkbox hack and Mark Robbins’s punched card coding, CSS animations allow email developers to create highly interactive experiences for the inbox. While earlier examples of interactivity were reserved for elements like product carousels, people like Robbins and the Rebelmail team have started creating full-blown checkout experiences right in an email.

The different stages of Rebelmail’s interactive checkout email. Image courtesy of Rebelmail.
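The checkbox hack that powers much of this interactivity pairs a hidden checkbox with a label, then toggles sibling content through the :checked pseudo-class. A minimal, illustrative sketch (the ID and class name are made up):

```html
<style>
  #details-toggle { display: none; } /* hide the checkbox itself */
  .details-panel  { display: none; } /* hidden until checked */
  #details-toggle:checked ~ .details-panel { display: block; }
</style>
<input type="checkbox" id="details-toggle">
<label for="details-toggle">View product details</label>
<div class="details-panel">
  Content revealed only after the label is tapped.
</div>
```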

Interactivity doesn’t have to be reserved for viewing retail products, though. At Litmus, animations and interactivity were used to provide a full product tour inside of an email.

An interactive product tour in an email. Image courtesy of Litmus.

In this case, interactivity was used to provide product education, allowing users to experience a product before they even got their hands on it. While similar educational effects have been achieved in the past with animated GIFs, the addition of truly interactive elements created an experience that elevated it above similar campaigns.

Finally, the web’s focus on accessibility is cropping up in email, too. Both table-based layouts and inconsistencies in support for semantic elements across email clients have contributed to a near-complete lack of accessibility for email campaigns. Advocates are now speaking out and helping to change the way developers build emails with accessibility in mind.

The use of role="presentation" on tables in email is becoming more widespread. When a table element carries role="presentation", screen readers recognize that it is used for layout rather than for presenting tabular data, and skip straight to the content of the campaign.
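In practice, this means adding the attribute to every layout table; a minimal example:

```html
<!-- role="presentation" tells screen readers to skip the table
     semantics and read the content inside it directly. -->
<table role="presentation" width="100%" border="0" cellpadding="0" cellspacing="0">
  <tr>
    <td>Campaign content goes here.</td>
  </tr>
</table>
```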

Developers are also embracing semantic elements like proper headings and paragraphs to provide added value for people with visual impairments. As long as they are careful to override the default margins on semantic, block-level elements, designers can safely use those elements without worrying about broken layouts. This is now something that every email developer should be doing.
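Zeroing out the default margins inline keeps semantic elements from disturbing a table-based layout; the sizes below are illustrative:

```html
<h1 style="margin: 0; font-size: 24px; line-height: 32px;">
  A real heading, announced as a heading by screen readers
</h1>
<p style="margin: 0; font-size: 16px; line-height: 24px;">
  A real paragraph, rendered consistently across clients
  because its default margins are overridden.
</p>
```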

Combined with email’s focus on alternative text—widely used to combat email clients that disable images for security reasons—accessible tables and semantic elements are laying the foundation for more usable, accessible email campaigns. There’s still a huge amount of research and education needed around accessibility in email, but the email world is slowly catching up to that of the web.

All of these techniques, mostly commonplace on the web, are relatively new to the world of HTML email. Some have been used on a limited scale, but they are on the verge of becoming mainstream. And, while animation and interactivity aren’t appropriate for every email campaign, they are valuable additions to the email toolbox. Taken together, they represent a massive shift in how developers and marketers approach HTML email and are changing the way subscribers think about the humble inbox.

Better tooling

If anything can be considered a constant on the web, it’s that web designers and developers love building tools and frameworks to (in theory) improve their workflows and the reliability of their code. Just like accessibility and interactivity, this focus on tooling and frameworks has been making its way into the email industry over the past few years.

Instead of relying on individual, locally saved, static HTML files, many email developers are now embracing not only GitHub to host and share code, but complete build systems to compile that code, as well. Tools like Grunt and Gulp are now in wider use, as are static site generators like Middleman.

Being able to focus on discrete components means developers no longer have to update multiple HTML files when managing large email programs. For teams in charge of dozens, if not hundreds, of different email templates, this is a godsend. Updating a logo in one place and having it propagate across different campaigns, for example, saves tons of time.

The use of build tools also opens up the possibility of hyperpersonalized campaigns: emails with custom content and custom layouts on a per-subscriber basis. Allowing build systems to handle the compilation of individual modules means that those modules can be pieced together in a virtually unlimited number of ways based on conditions set at the beginning of the build process. This moves personalization in email beyond basic name substitutions and gives email marketers an unbelievably powerful way to connect with subscribers and provide way more value than your typical “batch and blast” campaign.

Likewise, more email developers are relying on preprocessors like Sass and Less to speed up the development workflow. Controlling styles through variables, mixins, and logic can be extremely powerful. While CSS post processors aren’t in wide use, a few savvy email developers are now starting to take advantage of upcoming CSS features in their campaigns.

Email developers and designers working with smaller teams, or those less familiar with advanced tools like preprocessors and build tools, now have a plethora of HTML email templates and frameworks at their disposal. They range in complexity from simple, static HTML files that make customization easy to completely abstracted coding frameworks like MJML and Zurb’s Foundation for Emails 2. Both MJML and Foundation for Emails 2 introduce their own templating languages, allowing email developers to use markup closer to that found on the web, which is then compiled into more complex, table-based HTML.

          <mj-text>Hello World!</mj-text>

An example of MJML’s templating language, which compiles to table-based markup.

One area that still needs improvement is testing. While tools like Litmus have vastly improved the experience of testing static emails across clients, interactive emails present new challenges. Since testing services currently return static screenshots taken from the inbox, access to devices is crucial for teams working on interactive campaigns. Although a few people are coming up with novel approaches to testing interactive emails (most notably Cyrill Gross’s use of WebKit browsers and clever JavaScript), tooling around interactive email testing will need to improve for more email developers to adopt some of the techniques I describe here.

A seat at the table

Two of the most exciting developments in the email world are the recent Microsoft and Litmus partnership and Gmail’s announcement of support for media queries.

Due to their typically abysmal support for HTML and CSS (especially the box model and floats), the many variations of Outlook have long been the biggest thorn in email developers’ sides. Outlook is the primary reason that emails use tables for layout.

Now, though, for the first time, Microsoft is reaching out to the email community to document bugs and rendering problems in order to guide future development efforts and improve the rendering engines underpinning their email clients. While we’ll still have to rely on tables for the foreseeable future, this is a good indicator that the email community is moving closer to some form of standards, similar to the web in the early 2000s. I don’t think we’ll ever see standards as widely propagated across email clients as they are on the web, but this is the first step toward better HTML and CSS support for email developers.

One likely result of the Microsoft/Litmus partnership is that more email client vendors will open up lines of communication with the email design industry. With any luck, and a lot of work, Microsoft will be the first of many vendors to sit down at the table with email designers, developers, and marketers in order to improve things not only for email professionals, but also for the subscribers we serve. There are already signs that things are getting better beyond Microsoft’s promise to improve.

Gmail, one of the more problematic email clients, recently updated their rendering engine to support display: none;—an unprecedented move from a team that is historically unsympathetic to the concerns of the email community. Email developers were in for an even bigger surprise from the Gmail team when they announced that they will be supporting media queries and embedded styles, too. While the hybrid coding approach mentioned earlier is still useful for addressing some email clients, this change means that it is now easier than ever to apply the principles of responsive web design—fluid grids, fluid images, and media queries—to HTML email campaigns.

Perhaps more interesting is Gmail’s added support for embedded CSS and element, class, and ID selectors. With this one change, embedded styles will be nearly universally supported—meaning that email designers will no longer be bound to inline styles and all the headaches they bring. Emails will now be easier to design, develop, and maintain. The lighter code base and more familiar style of writing CSS means that many of the blockers for web developers taking email seriously will be removed.

Beyond rallying around improved support for HTML and CSS, the email community itself is thriving. I remember the dark days—really only a few years ago—of email design, when it was extraordinarily difficult to find reliable information about how to build email campaigns, let alone connect with others doing the same. Today, people interested in email have a large and growing community to turn to for help. More marketers, designers, and developers are sharing their work and opinions, contributing to a discourse that is helping to shape the industry in new and interesting ways.

Perhaps more importantly, designers and developers are beginning to understand that working with email is a viable career option. Instead of relegating email to one more task as a web dev, many are now taking up the mantle of the full-time email developer.

Now’s the time

Where once there was just darkness and Dreamweaver, the email world is brightening with the light of a growing community, better tools, and amazing techniques to animate a traditionally static medium. And, with the increasing support of email client vendors, we can finally see the flicker of email standards way off on the horizon.

While some folks have expressed emotions ranging from amusement to scorn when discussing email marketing, no one can take it for granted anymore. Subscribers love email, even if you don’t. Email is routinely the most effective digital marketing channel. Companies and teams need to embrace that fact and take email seriously. Fortunately, now’s the perfect time to do that. Never have there been more tools, resources, and people dedicated to making email better.

The revolution in email is bound to be a slow one, but make no mistake: it’s coming. The web is leaking into the inbox. If you can’t keep up, your campaigns (and you) will be left behind.

News stories from Monday 14 November, 2016

Favicon for A List Apart: The Full Feed 06:01 This week's sponsor: ADOBE XD » Post from A List Apart: The Full Feed Visit off-site link

ADOBE XD BETA, the only all-in-one solution for designing, prototyping, and sharing experiences for websites and mobile apps.

News stories from Monday 07 November, 2016

Favicon for Kopozky 14:51 Doing It Right » Post from Kopozky Visit off-site link

News stories from Tuesday 01 November, 2016

Favicon for A List Apart: The Full Feed 15:00 Let Emotion Be Your Guide » Post from A List Apart: The Full Feed Visit off-site link

We were sitting in a market research room in the midst of a long day of customer interviews. Across from us, a young mother was telling us about her experience bringing her daughter into the ER during a severe asthma attack. We had been interviewing people about their healthcare journeys for a large hospital group, but we’d been running into a few problems.

First, the end-goal of the interviews was to develop a strategy for the hospital group’s website. But what we’d discovered, within the first morning of interviews aimed at creating a customer journey map, was that hospital websites were part of no one’s journey. This wasn’t wildly surprising to us—in fact it was part of the reason we’d recommended doing customer journey mapping in the first place. The hospital had a lot of disease content on their site, and we wanted to see whether people ever thought to access that content in the course of researching a condition. The answer had been a resounding no. Some people said things like, “Hmm, I’d never think to go to a hospital website. That’s an interesting idea.” Others didn’t even know that hospitals had websites. And even though we’d anticipated this response, the overwhelming consistency on this point was starting to freak out our client a little—in particular it started to freak out the person whose job it was to redesign the site.

The second issue was that our interviews were falling a little flat. People were answering our questions but there was no passion behind their responses, which made it difficult to determine where their interactions with the hospital fell short of expectations. Some of this was to be expected. Not every customer journey is a thrill ride, after all. Some people’s stories were about mundane conditions. I had this weird thing on my hand, and my wife was bugging me to get it checked out, so I did. The doctor gave me cream, and it went away, was one story. Another was from someone with strep throat. We didn’t expect much excitement from a story about strep throat, and we didn’t get it. But mixed in with the mundane experiences were people who had chronic conditions, or were caregivers for children, spouses, or parents with debilitating diseases, or people who had been diagnosed with cancer. And these people had been fairly flat as well.

We were struggling with two problems that we needed to solve simultaneously. First: what to do with the three remaining days of interviews we had lined up, when we’d already discovered on the morning of day one that no one went to hospital websites. And second: how to get information that our client could really use. We thought that if we could just dig a little deeper underneath people’s individual stories, we could produce something truly meaningful for not only our client, but the people sitting with us in the interview rooms.

We’d been following the standard protocol for journey mapping: prompting users to tell us about a specific healthcare experience they’d had recently, and then asking them at each step what they did, how they were feeling and what they were thinking. But the young mother telling us about her daughter’s chronic asthma made us change our approach.

“How were you feeling when you got to the ER?” we asked.

“I was terrified,” she said. “I thought my daughter was going to die.” And then, she began to cry. As user experience professionals we’re constantly reminding ourselves that we are not our users; but we are both parents and in that moment, we knew exactly what the woman in front of us meant. The entire chemistry of the room shifted. The interview subject in front of us was no longer an interview subject. She was a mother telling us about the worst day of her entire life. We all grabbed for the tissue box, and the three of us dabbed at our eyes together.

And from that point on, she didn’t just tell us her story as though we were three people sitting in front of a two-way mirror.  She told us her story the way she might tell her best friend.

We realized, in that interview, that this was not just another project. We’ve both had long careers in user research and user experience, but we’d never worked on a project that involved the worst day of people’s lives. There might be emotion involved in using a frustrating tool at work or shopping for the perfect gift, but nothing compares to the day you find yourself rushing to the emergency room with your child.

So we decided to throw out the focus on the hospital website, concentrate on where emotion was taking us, and trust that we would be able to reconcile our findings with our client’s needs. We, as human beings, wanted to hear other human beings tell us about the difficulties of caring for a mother with Alzheimer’s disease. We wanted to know what it felt like to receive a cancer diagnosis after a long journey to many doctors across a spectrum of specialties. We wanted to understand what we could do, in any small way, to help make these Worst Days minutely less horrible, less terrifying, and less out-of-control. We knew that the client was behind the two-way mirror, concerned about the website navigation, but we also knew that we were going to get to someplace much more important and meaningful by following wherever these stories took us.

We also realized that not all customer journeys are equal. We still wanted to understand what people’s journeys with strep throat and weird hand rashes looked like, because those were important too. Those journeys told us about the routine issues that we all experience whenever we come into contact with the medical establishment—the frustration of waiting endlessly at urgent care, the annoyance of finding someone who can see you at a time when you can take off from work, the importance of a doctor who listens. But we also wanted to get to the impassioned stories where the stakes and emotions were much higher, so we adjusted our questioning style accordingly. We stuck to our standard protocol for the routine medical stories. And for the high-stakes journeys, the ones that could leave us near tears or taking deep breaths at the end of the interview, we proceeded more slowly. We gave our interview subjects room to pause, sigh, and cry. We let there be silence in the room. We tried to make it not feel weird for people to share their most painful moments with two strangers.

When we completed our interviews at the end of the week, we had an incredibly rich number of stories to draw from—so many, in fact, that we were able to craft a digital strategy that went far beyond what the hospital website would do. (Website? We kept saying to ourselves. Who cares about the website?) We realized that in many ways, we were limiting ourselves by thinking about a website strategy, or even a digital strategy. By connecting with the emotional content of the conversations, we started to think about a customer strategy—one that would be medium-agnostic.

In Designing for Emotion, Aarron Walter encourages us to “think of our designs not as a façade for interaction, but as people with whom our audience can have an inspired conversation.” As we moved into making strategic recommendations, we thought a lot about how the hospital (like most hospitals) interacted with their patients as a bureaucratic, depersonalized entity. It was as though patients were spilling over with a hundred different needs, and the hospital group was simply silent. We also thought about what a helpful human would do at various stages of the journey, and found that there were multiple points where pushing information out to customers could make a world of difference.

We heard from people diagnosed with cancer who said, “After I heard the word ‘cancer’ I didn’t hear anything else, so then I went home and Googled it and completely panicked.” So we recommended that the day after someone gets a devastating diagnosis like that, there is a follow-up email with more information, reliable information resources, and videos of other people who experienced the same thing and what it was like for them.

We heard from people who spent the entire day waiting for their loved ones to get out of surgery, not knowing how much longer it would take, and worried that if they stepped out for a coffee, they would miss the crucial announcement over the loudspeaker. As a result, we proposed that relatives receive text message updates such as, “Your daughter is just starting her surgery. We expect that it will take about an hour and a half. We will text you again when she is moved to the recovery room.”

The stories were so strong that we believed they would help our client refocus their attention away from the website and toward the million other touchpoints and opportunities we saw to help make the worst day of people’s lives a little less horrible.

And for the most part, that is what happened. We picked a few journeys that we thought provided a window on the range of stories we’d been hearing. As we talked through some of the more heart-rending journeys there were audible gasps in the room: the story of a doctor who had refused to see a patient after she’d brought in her own research on her daughter’s condition; a woman with a worsening disease who had visited multiple doctors to try to get a diagnosis; a man who was caring for his mother-in-law, who was so debilitated by Alzheimer’s that she routinely tried to climb out the second floor bedroom window.

In Design for Real Life, Sarah Wachter-Boettcher and Eric Meyer note that “the more users have opened up to you in the research phase” the more realistic your personas can be. More realistic personas, in turn, make it easier to imagine crisis points. And this was exactly what began to unfold as we shared our user journeys. As we told these stories, we felt a shift in the room. The clients started to share their own unforgettable healthcare experiences. One woman pulled out her phone and showed us pictures of her tiny premature infant, wearing her husband’s wedding ring around her wrist as she lay there in an incubator, surrounded by tubes and wires. When we took a break we overheard a number of people on the client side talking over the details of these stories and coming up with ideas for how they could help that went so far beyond the hospital website that it was hard to believe it had been our starting point. One person pointed out that a number of journeys started in Urgent Care and suggested that perhaps the company should look at expanding into urgent care facilities.

In the end, the research changed the company’s approach to the site.

“We talked about the stories throughout the course of the project,” one of our client contacts told me. “There was so much raw humanity to them.” A year after the project wrapped up (even though due to organizational changes at the hospital group our strategy recommendations have yet to be implemented), our client quickly rattled off the names of a few of our customer types. It is worth noting, too, that while our recommendations went much farther than the original scope of the project, the client appreciated being able to make informed strategic decisions about the path forward. Their immediate need was a revamped website, but once they understood that this need paled in comparison to all of the other places they could have an impact on their customers’ lives, they began talking excitedly about how to make this vision a reality down the road.

For us, this project changed the way we conceptualize projects, and illustrated that the framework of a website strategy or even “digital” strategy isn’t always meaningful. Because as the digital world increasingly melds with the IRL world, as customers spend their days shifting between websites, apps, texting, and face-to-face interactions, it becomes increasingly important for designers and researchers to drop the distinctions we’ve drawn around where an interaction happens, or where emotion spikes.

Before jumping in, however, it is important to prep the team on how, and most importantly why, your interview questions probe into how customers are feeling. When you get into the interview room, coaxing out these emotional stories requires establishing emotional rapport quickly, and making it a safe place for participants to express themselves.

Being able to establish this rapport has changed our approach to other projects as well—we’ve seen that emotion can play into customer journeys in the unlikeliest of places. On a recent project for a client who sells enterprise software, we interviewed a customer who had recently gone through a system upgrade experience which affected tens of thousands of users. It did not go well and he was shaken by the experience. “The pressure on our team was incredible. I am never doing that ever again,” he said. Even for this highly technical product, fear, frustration, anger, and trust were significant elements of the customer journey. This is a journey where a customer has ten thousand people angry at him if the product he bought does not perform well, and he could even be out of a job if it gets bad enough. So while the enterprise software industry doesn’t exactly scream “worst day of my life” in the same way that hospitals do, emotion can run high there as well.

We sometimes forget that customers are human beings and human beings are driven by emotion, especially during critical life events. Prior to walking into the interview room we’d thought we might unearth some hidden problems around parking at the ER, navigating the hospital, and, of course, issues with the website content. But those issues were so eclipsed by all of the emotions surrounding a hospital visit that they came to seem irrelevant. Not being able to find parking at the ER is annoying, but more important was not knowing what you were supposed to do next because you’d just been told you have cancer, or because you feared for your child’s life. By digging deeper into this core insight, we were able to provide recommendations that went beyond websites, and instead took the entire human experience into account.

For researchers and designers tasked with improving experiences, it is essential to have an understanding of the customer journey in its full, messy, emotional agglomeration. Regardless of the touchpoint your customer is interacting with, the emotional ride is often what ties it all together, particularly in high-stakes subject matter. Are your client’s customers likely to be frustrated, or are they likely to be having the worst day of their lives? In the latter types of situations, recognize that you will get much more impactful insights when you address the emotions head-on.

And when appropriate, don’t be afraid to cry.

Favicon for A List Apart: The Full Feed 15:00 Awaken the Champion A/B Tester Within » Post from A List Apart: The Full Feed Visit off-site link

Athletes in every sport monitor and capture data to help them win. They use cameras, sensors, and wearables to optimize their caloric intake, training regimens, and athletic performance, using data and exploratory thinking to refine every advantage possible. It may not be an Olympic event (yet!), but A/B testing can be dominated the same way.

I talked to a website owner recently who loves the “always be testing” philosophy. He explained that he instructs his teams to always test something—the message, the design, the layout, the offer, the CTA.

I asked, “But how do they know what to pick?” He thought about it and responded, “They don’t.”

Relying on intuition, experienced as your team may be, will only get you so far. To “always test something” can be a great philosophy, but testing for the sake of testing is often a massive waste of resources—as is A/B testing without significant thought and preparation. 

Where standard A/B testing can answer questions like “Which version converts better?” A/B testing combined with advanced analyses gives you something more important—a framework to answer questions like “Why did the winning version convert better?”

Changing athletes, or a waste of resources?

Typical A/B testing is based on algorithms that are powered by data during the test, but we started trying a different model on our projects here at Clicktale, putting heavy emphasis on data before, during, and after the test. The results have been more interesting and strategic, not just tactical.

Let’s imagine that a site wants to reduce its bounce rate and increase Buy Now clicks. Time for an A/B test, right?

The site’s UX lead gets an idea to split test their current site, comparing versions with current athletes to versions featuring former Olympians.

Wheaties page design.

But what if your team monitored in-page visitor behavior and saw that an overwhelming majority of site visitors do not scroll below the fold to even notice the athletes featured there?

Now the idea of testing the different athlete variants sounds like a waste of time and resources, right?

But something happens when you take a different vantage point. What if your team watched session replays and noticed that those who do visit the athlete profiles tend to stay on the site longer and increase the rate of “Buy Now” clicks exponentially? That may be a subset of site visitors, but it’s a subset that’s working how you want.

If the desired outcome is to leverage the great experiences built into the pages, perhaps it would be wise to bring the athlete profiles higher. Or to A/B test elements that should encourage users to scroll down.

In our experience, both with A/B testing our own web properties and in aggregating the data of the 100 billion in-screen behaviors we’ve tracked, we know this to be true: testing should be powerful, focused, and actionable. In making business decisions, it helps when you’re able to see visual and conclusive evidence.

Imagine a marathon runner who doesn’t pay attention to other competitors once the race begins. Now, think about one who paces herself, watches the other racers, and modifies her cadence accordingly.

By doing something similar, your team can be agile in making changes and fixing bugs. Each time your team makes an adjustment, you can start another A/B test ... which lets you improve the customer experience faster than if you had to wait days for the first A/B test to be completed.

The race is on

Once an A/B test is underway, the machines use data-based algorithms to determine a winner. Based on traffic, conversion rate, number of variations, and the minimum improvement you want to detect, the finish line may be days or weeks away. What is an ambitious A/B tester to do?

Watch session replay of each variation immediately, once you’ve received a representative number of visitors. Use them to validate funnels and quickly be alert to any customer experience issues that may cause your funnels to leak.

Focus on the experience. Understanding which user behavior dominates each page is powerful; internalizing why users are behaving as they are enables you to take corrective actions mid-course and position yourself properly.

The next test

In your post-test assessments, again use data to understand why the winning variation succeeded with its target audience. Understanding the reason can help you prioritize future elements to test.

For example, when testing a control that includes a promotional banner (which should increase conversions) against a variation without the promotion, a PM may conclude that the promotion is ineffective if the banner version loses.

Studying a heatmap of the test can reveal new insights. In this example, conversions were reduced because the banner pushed the “buy now” CTA out of sight.

Example of A/B testing on mobile devices.

In this case, as a next test, the PM may decide not to remove the banner, but rather to test it in a way that keeps the more important “buy now” CTA visible. There is a good chance such a combination will yield even better results.

There are plenty of other examples of this, too. For instance, the web insights manager at a credit card company told me that having the aggregate data, in the form of heatmaps, helps him continually make more informed decisions about this A/B testing. In their case, they were able to rely on data that indicated they could remove a content panel without hurting their KPIs.

Another one of our customers, GoDaddy, was able to increase conversions on its checkout page by four percent after running an A/B test. “With our volume, that was a huge, huge increase…. We also tripled our donations to Round Up for Charity,” said Ana Grace, GoDaddy’s director of ecommerce, global product management. But the optimization doesn’t stop once a test is complete; GoDaddy continues to monitor new pages after changes, and sometimes finds additional hypotheses that require testing.

What it takes to go for the gold

I was not blessed with the natural athletic ability of an Olympian, but when it comes to A/B testing web assets and mobile apps, I have what I need to determine which version will be the winner. The powerful combination of behavioral analytics and big data gives athletes the knowledge they need to make the most of their assets, and it can do the same for you.

News stories from Tuesday 25 October, 2016

Favicon for A List Apart: The Full Feed 15:00 Network Access: Finding and Working with Creative Communities » Post from A List Apart: The Full Feed Visit off-site link

A curious complaint seems to ripple across the internet every so often: people state that “design” is stale. The criticism is that no original ideas are being generated; anything new is quickly co-opted and copied en masse, leading to even more sterility, conceptually. And that leads to lots of articles lamenting the state of the communities they work in.

What people see is an endless recycling within their group, with very little bleed-over into other disciplines or networks. Too often, we speak about our design communities and networks as resources to be used, not as groups of people.

Anthony McCann describes the two main ways we view creative networks and the digital commons:

We have these two ways of speaking: commons as a pool of resources to be managed, and commons as an alternative to treating the world as made up of resources.

One view is that communities are essentially pools of user-generated content. That freely available content is there to be mined—the best ideas extracted and repackaged for profit or future projects. This is idea as commodity, and it very conveniently strips out the people doing the creating, instead looking at their conceptual and design work as a resource.

Another way is to view creative networks as interdependent networks of people. By nature, they cannot be resources, and any work put into the community is to sustain and nourish those human connections, not create assets. The focus is on contributing.

A wider view

By looking at your design communities as resources to be mined, you limit yourself to preset, habitual methods of sharing and usage. The more that network content is packaged for sale and distribution, the less “fresh” it will be. In Dougald Hine’s essay Friendship is a Commons, he says that when we talk enthusiastically about the digital commons these days, we too often use the language of resource management, not the language of social relations.

Perhaps we should take a wider, more global view.

There are numerous digital design communities across the world; they are fluid and fresh, and operate according to distinct and complex social rules and mores. These designers are actively addressing problems in their own communities in original ways, and the result is unique, culturally relevant work. By joining and interacting with them—by accessing these networks—we can rethink what the design community is today.

Exploring larger communities

There are a number of creative communities I’ve become a part of, to varying degrees of attention. I’ve been a member of Behance for almost 10 years (Fig. 1), back when it was something very different (“We are pleased to invite you to join the Behance Network, in partnership with MTV”).

Fig. 1: Screenshot of the Behance creative community website in 2009. Source: belladonna

While I lived in Japan, Behance was a way for me to learn new digital design techniques and participate in a Western-focused, largely English-speaking design network. As time has gone on, it’s strange that I now use it almost exclusively to see what is happening outside the West.

Instagram, Twitter, and Ello are three mobile platforms with a number of features that are great for collecting visual ideas without the necessity of always participating. The algorithms are focused on showing more of what I have seen—the more often I view work from Asian and African designers and illustrators, the more often I discover new work from those communities. While interesting for me, it does create filter bubbles, and I need to be careful of falling into the trap of seeing more of the same.

There is, of course, a counter-reaction to the public, extractive nature of these platforms—the rise of “Slack as community.” The joke about belonging to 5-10 different Slack groups is getting old, but illustrates a trend in the industry during the past year or so. I see this especially with designers of color, where the firehoses of racist/sexist abuse on open digital networks means that creativity is shelved in favor of simple preservation. Instead, we move, quietly and deliberately, to Slack, a platform that is explicit in its embrace of a diverse usership, where the access is much more tightly controlled, and where the empathy in design/dev networks is more readily shared and nurtured.

Right now, these are the creative platforms where I contribute my visual thinking, work, and conversations toward addressing messy visual questions—interactive ideas that assume a radically different way of viewing the world. There are, of course, others.

Exploring visual design alternatives

In Volume II of Mawahib (a series of books that showcase Arab illustrators, photographers, and graphic designers), we see one of these design communities compiled and printed, an offline record of a thriving visual network (Fig. 2).

Fig. 2: Page spreads from the Mawahib book, showcasing Arab illustration and design work

And perhaps it is in the banding together that real creative change can happen. I was fascinated to read this article about an illustration collective in Singapore. At 7 years old, it’s reportedly the longest running drawing event in Singapore. Michael Ng says, “Many people don’t know illustrators like us exist in Singapore and they’re amazed. Companies have also come up to hire us for work because of the event. We also network amongst ourselves, passing on opportunities and collaborations.” Comments like this show that there are thriving visual design scenes worldwide, ones that collaborate internally, and work for exposure and monetary gain externally.

Fig. 3: Poster from the Organisation of Illustrators Council in Singapore, advertising one of their collaborative sketching nights

UX research that builds community

Earlier in this article, we started by looking at the different ways people view existing creative communities. But what about people who create new ones? Here, once again, we have designers and strategists who use careful cultural research to create and develop sustainable digital networks, not simply resource libraries.

First, let’s look at the pilot of My Voice, an open source medical tool developed at Reboot. The residents of Wamba, a rural area in Nasarawa State, Nigeria, struggled to find a way to communicate with their healthcare providers. Reboot saw an opportunity to develop an empowering, responsive platform for the community, a way for people to share feedback with clinics and doctors in the area.

After a nine-week trial of the platform and software, the residents of Wamba saw the clinics begin making small changes to how they communicated—things like better payment info and hours of operation. The health department officials in the area also saw a chance to better monitor their clinics and appear more responsive to their constituents. What began as a way to report on clinic status and quality became a way for the community and local government to improve together.

Fig. 4: Interviews with community residents for the MyVoice medical app

In another project, a group of researchers worked with a community in South Africa’s Eastern Cape to design and test mobile digital storytelling. Their experience creating a storytelling platform that did not follow European narrative tradition is enlightening, and hits on a key framing in line with how the people in Ndungunyeni view creative networks (Fig. 5).

Contrary to their initial ideas, the UX researchers found that storytelling “as an individual activity is discordant with villagers’ proximity, shared use of phones and communication norms. They devote significant time exchanging views in meetings and these protocols of speaking and listening contribute to cohesion, shared identity and security.”

Fig 5: Mobile digital storytelling prototype (left) and story recording UI (right)

In both of these examples, we see new creative networks relying on linked social systems and cues in order to thrive. Most importantly, they rely on reciprocation—the trade of ideas, whether there is immediate personal benefit or not. Each of the participants—the community members, the UX designers, the clinics, and the local government—was able to collaborate on a common goal. Simply crafted technology and UX made this possible, even in rural areas with little cellular connectivity. They all contributed, not looking to extract value, but to add it; they used these networking tools to deepen their interactions with others.

Building alternatives to current networks

Almost every project we work on as designers would certainly benefit from alternative viewpoints. That can be hard to set up, however, and collaborating with designers and developers outside your immediate circle may seem scary at first. Keep in mind that the goal is to add value to others’ networks and build interpersonal connections. This is the only way that we keep the creative ideas fresh.

Starting with freelance and project work

Sometimes the simplest way to access different creative circles is simply to pay for project work. A great example is Karabo Moletsane’s project for Quartz Africa. An accomplished illustrator from South Africa, Moletsane recently did a set of 32 wonderful portraits for the Quartz Africa Innovators 2016 Series (Fig. 6). When I asked Moletsane how she got the illustration job, she said it came via her work online; she regularly posts work on her Instagram and Behance, making Quartz’s choice to work with this talented South African for a global series on African innovators a no-brainer.

Fig. 6: Karabo Moletsane’s full series of 32 African Innovators, for Quartz Magazine

Hiring and team-building from different networks

Sometimes, shorter freelance projects won’t give you long-term quality access to new design communities and ideas. Sometimes you need to bring people onto your team, full-time. Again, I point out what Dougald Hine says regarding the ways digital communities can work:

...people have had powerful experiences of what it means to come together, work and build communities [but] the new forms of collaboration easily turn into new forms of exploitation…

Instead of looking for short-term access, hiring and developing team members from other networks can be a powerful alternative. Tyler Kessler, the CEO of Lumogram in St. Louis, recently wrote about hiring a new head of development based in Nigeria, and what it has meant to his company. He used Andela, a startup that is training and hiring out a new generation of developers from Nigeria.

Collaboration around specific ideas

Your contributions to networks also need not be permanent or rigid. There are numerous opportunities to join collectives, or working groups, that build more ephemeral networks around specific issues. One such project, by the DESIS Cluster Collective (pdf), was set up “to investigate which new services, tools, and solutions we can design together with the elderly, when thinking about our future society.” The breadth of ideas is astounding, from systems for healthier eating, to mini-parks within urban areas for seniors to hang out in. Each team involved contributed deep user research, information design, and cultural cues to propose new ways for our elderly to coexist (Fig. 7).

Fig. 7: Cultural interface research with the elderly, conducted by the Royal College of Art, England in 2013

The form and utility of design communities in the 21st century is fluid, and goes from groups of like-minded designers and illustrators to communities working digitally to solve specific problems. Even short-term collectives are addressing social issues.

All are intricate groups of creative humans. They shouldn’t be viewed, in any way, as “resources” for extraction and inspiration. Too often in the Western design world, we hear that ideas have largely plateaued and become homogenous, but that ignores the amazing work flourishing in other nations and pockets of the internet. How you build connections among other creative people makes you part of the network. See them, however ephemeral and globally distributed, as a powerful way to expand your design horizons and be part of something different.


Favicon for A List Apart: The Full Feed 15:00 Liminal Thinking » Post from A List Apart: The Full Feed Visit off-site link

A note from the editors: We’re pleased to share an excerpt from Practice 4 of Dave Gray's new book, Liminal Thinking, available now from Two Waves Books. Use code ALA-LT for 20% off!

A theory that explains everything, explains nothing
Karl Popper

Here’s a story I heard from a friend of mine named Adrian Howard. His team was working on a software project, and they were working so hard that they were burning themselves out. They were working late nights, and they agreed as a team to slow down their pace. “We’re going to work 9 to 5, and we’re going to get as much done as we can, but we’re not going to stay late. We’re not going to work late at night. We’re going to pace ourselves. Slow and steady wins the race.”

Well, there was one guy on the team who just didn’t do that. He was staying late at night, and Adrian was getting quite frustrated by that. Adrian had a theory about what was going on. What seemed obvious to him was that this guy was being macho, trying to prove himself, trying to outdo all the other coders, and showing them that he was a tough guy. Everything that Adrian could observe about this guy confirmed that belief.

Late one night, Adrian was so frustrated that he went over and confronted the guy about the issue. He expected a confrontation, but to his surprise, the guy broke down in tears. Adrian discovered that this guy was not working late because he was trying to prove something, but because home wasn’t a safe place for him. They were able to achieve a breakthrough, but it was only possible because Adrian went up and talked to him. Without that conversation, there wouldn’t have been a breakthrough.

It’s easy to make up theories about why people do what they do, but those theories are often wrong, even when they can consistently and reliably predict what someone will do.

For example, think about your horoscope. Horoscopes make predictions all the time:

  • “Prepare yourself for a learning experience about leaping to conclusions.”
  • “You may find the atmosphere today a bit oppressive.”
  • “Today, what seems like an innocent conversation will hold an entirely different connotation for one of the other people involved.”
  • “Stand up to the people who usually intimidate you. Today, they will be no match for you.”

These predictions are so vague that you can read anything you want into them. They are practically self-fulfilling prophecies: if you believe them, they are almost guaranteed to come true, because you will set your expectations and act in ways that make them come true. And in any case, they can never be disproven.

So what makes a good theory, anyway?

A scientist and philosopher named Karl Popper spent a lot of time thinking about this. Here’s the test he came up with, and I think it’s a good one: Does the theory make a prediction that might not come true? That is, can it be proven false?

What makes this a good test? Popper noted that it’s relatively easy to develop a theory that offers predictions—like a horoscope—that can never be disproven.

The test of a good theory, he said, is not that it can’t be disproven, but that it can be disproven.

For example, if I have a theory that you are now surrounded by invisible, undetectable, flying elephants, well, there’s no way you can prove me wrong. But if my theory can be subjected to some kind of test—if it is possible that it could be disproved, then the theory can be tested.

He called this trait falsifiability: the possibility that a theory could be proven false.

Many theories people have about other people are like horoscopes. They are not falsifiable theories, but self-fulfilling prophecies that can never be disproven.

Just because you can predict someone’s behavior does not validate your theories about them, any more than a horoscope prediction “coming true” means it was a valid prediction. If you want to understand what’s going on inside someone else’s head, sometimes you need to have a conversation with them.

Many years after the Vietnam War, former U.S. Secretary of State Robert McNamara met with Nguyen Co Thach, former Foreign Minister of Vietnam, who had fought for the Viet Cong in the war. McNamara had formed the hypothesis that the war could have been avoided, that Vietnam and the United States could have both achieved their objectives without the terrible loss of life. When he presented his thinking to Thach, Thach said, “You’re totally wrong. We were fighting for our independence. You were fighting to enslave us.”

“But what did you accomplish?” asked McNamara. “You didn’t get any more than we were willing to give you at the beginning of the war. You could have had the whole damn thing: independence, unification.”

“Mr. McNamara,” answered Thach. “You must have never read a history book. If you had, you’d know that we weren’t pawns of the Chinese or the Russians. Don’t you understand that we have been fighting the Chinese for a thousand years? We were fighting for our independence. And we would fight to the last man. And we were determined to do so. And no amount of bombing, no amount of U.S. pressure would ever have stopped us.”

McNamara then realized that the entire war had been based on a complete misunderstanding. He said: “In the case of Vietnam, we didn’t know them well enough to empathize. And there was total misunderstanding as a result. They believed that we had simply replaced the French as a colonial power, and we were seeking to subject South and North Vietnam to our colonial interests, which was absolutely absurd. And we saw Vietnam as an element of the Cold War. Not what they saw it as: a civil war.”

Sometimes people come into conflict not because they disagree, but because they fundamentally misunderstand each other. This can happen when people are viewing a situation from completely different points of view.

Have you ever had someone that you worked with, where you thought, this person is insane; they make no sense; they are crazy; they’re just nuts?

Everyone knows someone like that, right?

Sometimes people really do have mental disorders, including problems that can create danger for themselves and others. If that’s the case, it might make sense to stay away from them, or to seek help from a mental health professional.

But far more often, saying another person is crazy is just a way to create internal coherence within your belief bubble. Your “obvious” is stopping you from seeing clearly. The “crazy person” may be acting based on beliefs that are inconceivable to you because they are outside your bubble.

If you think to yourself, this person is just nuts, and nothing can be done about it, it can’t be changed, then it’s possible that your theory about that person is constrained by a limiting belief.

Most people don’t test their theories about other people, because it’s a potential bubble-buster: if you give your self-sealing logic bubble a true test, then it just might collapse on you.

People do fake tests all the time, of course.

Here’s an easy way to do a fake test of your beliefs. Just search the Internet. No matter what your belief is, you’ll find plenty of articles that support and reinforce your bubble. The Internet is like a grocery store for facts. It’s easier than ever to find “facts” that support pretty much any belief.

Fake tests will help if your goal is to feel better about yourself and reinforce your bubble. But if you want to figure out what is really going on, a fake test will not help.

What will help is triangulation: the practice of developing multiple viewpoints and theories that you can compare, contrast, combine, and validate, to get a better understanding of what’s going on.

U.S. military strategist Roy Adams told me this story about an “aha” moment he had in Iraq.

He was having a beer with a friend who was in the Special Forces. Usually, they didn’t talk about work, but he happened to have a map with him. At the time, Adams and his team were designing their plans based on the political boundaries of the map, so on the map were districts, as well as the people who were in charge of the districts.

His friend said, “You know, this is really interesting.” And he picked up a pen and said, “Let me draw the tribal boundaries on this map for you.” The boundaries were completely different but overlapping. Suddenly, Adams had two different versions of reality on his map.

The political map was primarily a Shia map, and the tribal map had both Sunni and Shia. Only by overlaying the two maps did Adams start to understand the situation. Neither map would have made sense by itself.

By laying these maps over each other, suddenly things started to click. Now he understood why they were having success in some places and meeting resistance in others. Everything started to make more sense.

The insights in this case came not from one map or another, but through overlaying them. This is the practice of triangulation. Each map represented one theory of the world, one version of reality. It was only by viewing the situation through multiple perspectives—multiple theories—that he was able to gain insight and see the situation differently. (Fig. 1)

Fig 1: Look for alternatives.

My friend Adrian Howard told me about a similar experience he had when working at a large Telecom company that had grown by acquiring other companies over many years. His team found itself running up against resistance and pushback that seemed odd and inexplicable. Then someone on the team took some markers and color-coded the boxes on the org chart based on which companies the people in each box had originally come from—many of whom used to be fierce competitors—and suddenly the reasons for the resistance became clear and understandable.

For any one observation there may be a vast number of possible explanations. Many of them may be based on beliefs that are outside of your current belief bubble, in which case, they may seem strange, absurd, crazy, or just plain wrong.

Most of the time we are all walking around with our heads so full of “obvious” that we can’t see what’s really going on. If you think something is obvious, that’s an idea that bears closer examination. Why do you think it’s obvious? What personal experiences have you had that led to that belief? Can you imagine a different set of experiences that might lead to a different belief?

Cultivate as many theories as you can—including some that seem odd, counter-intuitive, or even mutually contradictory—and hold onto them loosely. Don’t get too attached to any one of them. (Fig. 2)

Fig 2: Hold your theories loosely.

Then you can start asking questions and seeking valid information to help you understand what’s really going on. The way to seek understanding is to empty your cup, step up and give people your full attention, suspend your beliefs and judgments, and listen carefully.

The thing to remember is that people act in ways that make sense to them. If something doesn’t make sense to you, then you’re missing something.

What are you missing? If someone says something that seems odd or unbelievable, ask yourself, “What would I need to believe for that to be true?”

In many cases, the only way you’re ever going to understand what’s inside someone else’s head is by talking to them. Sometimes that idea might seem scary. It may be that you will hear something that threatens your bubble of belief. But if you can get over your fear, go and talk to the dragon, or take the ogre out for coffee. You just may learn something that will change your life.

Practice exercises

Triangulate and validate. Look at situations from as many points of view as possible. Consider the possibility that seemingly different or contradictory beliefs may be valid. If something doesn’t make sense to you, then you’re missing something.

Exercise #1

Think about a co-worker or family member, someone you care about, or can’t walk away from for whatever reason, that you have trouble getting along with. Consider their beliefs and behavior, and come up with as many theories as you can to explain why they act the way they do. Then see if you can have a conversation with that person to explore what’s really going on.

Exercise #2

Think of a situation at home or work that you find problematic. Try to come up with as many perspectives as you can that might give you a different way to look at the situation. What is your current theory? What is its opposite? How many perspectives or points of view can you think of that might help you see that situation through different eyes?

Want to read more?

Get 20% off your copy of Liminal Thinking and other titles from Two Waves Books—an imprint of Rosenfeld Media—with code ALA-LT.

Cover of Liminal Thinking

News stories from Monday 24 October, 2016

Favicon for A List Apart: The Full Feed 05:01 This week's sponsor: INDEED PRIME » Post from A List Apart: The Full Feed Visit off-site link

INDEED PRIME, the job search platform for top tech talent. Apply to 100 top tech companies with 1 simple application.

News stories from Tuesday 18 October, 2016

Favicon for A List Apart: The Full Feed 15:00 JavaScript for Web Designers: DOM Scripting » Post from A List Apart: The Full Feed Visit off-site link

A note from the editors: We’re pleased to share an excerpt from Chapter 5 of Mat Marquis' new book, JavaScript for Web Designers, available now from A Book Apart.

Before we do anything with a page, you and I need to have a talk about something very important: the Document Object Model. There are two purposes to the DOM: providing JavaScript with a map of all the elements on our page, and providing us with a set of methods for accessing those elements, their attributes, and their contents.

The “object” part of Document Object Model should make a lot more sense now than it did the first time the DOM came up, though: the DOM is a representation of a web page in the form of an object, made up of properties that represent each of the document’s child elements and subproperties representing each of those elements’ child elements, and so on. It’s objects all the way down.
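As a rough sketch of that idea (a plain, hand-built object standing in for the real DOM, not anything the browser itself provides), a document tree maps onto nested objects like this:

```javascript
// A toy stand-in for the DOM's tree: each node is an object whose
// child elements are themselves node-like objects.
var toyDocument = {
  nodeName: "HTML",
  children: [
    { nodeName: "HEAD", children: [ { nodeName: "TITLE", children: [] } ] },
    { nodeName: "BODY", children: [ { nodeName: "P",     children: [] } ] }
  ]
};

// Walking the tree is ordinary property access, all the way down:
console.log( toyDocument.children[ 1 ].nodeName );               // "BODY"
console.log( toyDocument.children[ 0 ].children[ 0 ].nodeName ); // "TITLE"
```

The real DOM is far richer than this toy, but the shape is the same: objects containing objects.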

window: The Global Context

Everything we do with JavaScript falls within the scope of a single object: window. The window object represents, predictably enough, the entire browser window. It contains the entire DOM, as well as—and this is the tricky part—the whole of JavaScript.

When we first talked about variable scope, we touched on the concept of there being “global” and “local” scopes, meaning that a variable could be made available either to every part of our scripts or to their enclosing function alone.

The window object is that global scope. All of the functions and methods built into JavaScript are built off of the window object. We don’t have to reference window constantly, of course, or you would’ve seen a lot of it before now—since window is the global scope, JavaScript checks window for any variables we haven’t defined ourselves. In fact, the console object that you’ve hopefully come to know and love is a method of the window object:

window.console.log;
function log() { [native code] }

It’s hard to visualize globally vs. locally scoped variables before knowing about window, but much easier after: when we introduce a variable to the global scope, we’re making it a property of window—and since we don’t explicitly have to reference window whenever we’re accessing one of its properties or methods, we can call that variable anywhere in our scripts by just using its identifier. When we access an identifier, what we’re really doing is this:

function ourFunction() {
    var localVar = "I’m local.";
    globalVar = "I’m global.";

    return "I’m global too!";
}

ourFunction();
"I’m global too!"

window.globalVar;
"I’m global."
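You can check the same relationship outside a developer console, too. This small sketch uses the standard globalThis identifier, which refers to the global object in any modern JavaScript environment (in a browser, globalThis is window itself):

```javascript
// `globalThis` is a standard alias for the global object; in a
// browser, globalThis === window. Setting a property on it makes
// that name available everywhere as a bare identifier:
globalThis.globalVar = "I’m global.";

console.log( globalVar );                          // "I’m global."
console.log( globalVar === globalThis.globalVar ); // true
```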

The DOM’s entire representation of the page is a property of window: specifically, window.document. Just entering window.document in your developer console will return all of the markup on the current page in one enormous string, which isn’t particularly useful—but everything on the page can be accessed as subproperties of window.document the exact same way. Remember that we don’t need to specify window in order to access its document property—window is the only game in town, after all.

Properties like document.head and document.body are themselves objects that contain properties that are objects, and so on down the chain. (“Everything is an object, kinda.”)

Using the DOM

The objects in window.document make up JavaScript’s map of the document, but it isn’t terribly useful for us—at least, not when we’re trying to access DOM nodes the way we’d access any other object. Winding our way through the document object manually would be a huge headache for us, and that means our scripts would completely fall apart as soon as any markup changed.

But window.document isn’t just a representation of the page; it also provides us with a smarter API for accessing that information. For instance, if we want to find every p element on a page, we don’t have to write out a string of property keys—we use a helper method built into document that gathers them all into an array-like list for us. Open up any site you want—so long as it likely has a paragraph element or two in it—and try this out in your console:

document.getElementsByTagName( "p" );
[<p>...</p>, <p>...</p>, <p>...</p>, <p>...</p>]

Since we’re dealing with such familiar data types, we already have some idea how to work with them:

var paragraphs = document.getElementsByTagName( "p" );


paragraphs[ 0 ];

But DOM methods don’t give us arrays, strictly speaking. Methods like getElementsByTagName return “node lists,” which behave a lot like arrays. Each item in a nodeList refers to an individual node in the DOM—like a p or a div—and will come with a number of DOM-specific methods and properties built in. For example, the innerHTML property will return any markup a node contains—elements, text, and so on—as a string:
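Since node lists only behave like arrays, it’s worth seeing what “array-like” means. This sketch uses a hand-built array-like object in place of a real node list (so it can run anywhere, even without a browser), but the conversion trick at the end is the same one you’d use on the real thing:

```javascript
// An "array-like" object: numeric keys plus a length, but none of
// Array's built-in methods. A node list has the same basic shape.
var fakeNodeList = { 0: "<p>one</p>", 1: "<p>two</p>", length: 2 };

console.log( Array.isArray( fakeNodeList ) ); // false
console.log( typeof fakeNodeList.forEach );   // "undefined"

// Array.prototype.slice.call() copies any array-like into a true array:
var paragraphsArray = Array.prototype.slice.call( fakeNodeList );

console.log( Array.isArray( paragraphsArray ) ); // true
console.log( paragraphsArray[ 1 ] );             // "<p>two</p>"
```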

var paragraphs = document.getElementsByTagName( "p" ),
    lastIndex = paragraphs.length - 1, /* Use the length of the `paragraphs` node list minus 1 (because of zero-indexing) to get the last paragraph on the page */
    lastParagraph = paragraphs[ lastIndex ];

lastParagraph.innerHTML;
"And that’s how I spent my summer vacation."
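The “array-like” part is worth a quick detour. Here’s a sketch of my own (not from the book) using a plain object with numeric keys and a length, which is essentially how a node list presents itself:

```javascript
// An array-like object: numeric keys and a length property, but none of
// the methods a real array has. This mimics how a node list behaves.
var arrayLike = { 0: "first", 1: "second", 2: "third", length: 3 };

arrayLike[ 1 ];           // indexing works: "second"
arrayLike.length;         // 3
typeof arrayLike.forEach; // "undefined": no array methods here

// Borrowing a method from Array.prototype gives us a true array copy:
var realArray = Array.prototype.slice.call( arrayLike );
realArray.join( ", " );   // "first, second, third"
```

That conversion trick works on node lists too, though for simply looping over one, an ordinary for loop is all we need.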
Fig 5.1: First drafts are always tough.


The same way these methods give us access to information on the rendered page, they allow us to alter that information, as well. For example, we can assign new markup to a node’s innerHTML property the same way we’d change the value of any other object: a single equals sign, followed by the new value.

var paragraphs = document.getElementsByTagName( "p" ),
    firstParagraph = paragraphs[ 0 ];

firstParagraph.innerHTML = "Listen up, chumps:";
"Listen up, chumps:"

JavaScript’s map of the DOM works both ways: document is updated whenever any markup changes, and our markup is updated whenever anything within document changes (Fig 5.1).

Likewise, the DOM API gives us a number of methods for creating, adding, and removing elements. They’re all more or less spelled out in plain English, so even though things can seem a little verbose, it isn’t too hard to break down.

DOM Scripting

Before we get started, let’s abandon our developer console for a bit. Ages ago now, we walked through setting up a bare-bones HTML template that pulls in a remote script, and we’re going to revisit that setup now. Between the knowledge you’ve gained about JavaScript so far and an introduction to the DOM, we’re done with just telling our console to parrot things back to us—it’s time to build something.

We’re going to add a “cut” to an index page full of text—a teaser paragraph followed by a link to reveal the full text. We’re not going to make the user navigate to another page, though. Instead, we’ll use JavaScript to show the full text on the same page.

Let’s start by setting up an HTML document that links out to an external stylesheet and external script file—nothing fancy. Both our stylesheet and script files are empty with .css and .js extensions, for now—I like to keep my CSS in a /css subdirectory and my JavaScript in a /js subdirectory, but do whatever makes you most comfortable.

<!DOCTYPE html>
<html>
    <head>
        <meta charset="utf-8">
        <link rel="stylesheet" type="text/css" href="css/style.css">
    </head>
    <body>
        <script src="js/script.js"></script>
    </body>
</html>

We’re going to populate that page with several paragraphs of text. Any ol’ text you can find laying around will do, including—with apologies to the content strategists in the audience—a little old-fashioned lorem ipsum. We’re just mocking up a quick article page, like a blog post.

<!DOCTYPE html>
<html>
    <head>
        <meta charset="utf-8">
        <link rel="stylesheet" type="text/css" href="css/style.css">
    </head>
    <body>
        <h1>JavaScript for Web Designers</h1>

        <p>In all fairness, I should start this book with an apology—not to you, reader, though I don’t doubt that I’ll owe you at least one by the time we get to the end. I owe JavaScript a number of apologies for the things I’ve said to it during the early years of my career, some of which were strong enough to etch glass.</p>

        <p>This is my not-so-subtle way of saying that JavaScript can be a tricky thing to learn.</p>

        [ … ]

        <script src="js/script.js"></script>
    </body>
</html>

Feel free to open up the stylesheet and play with the typography, but don’t get too distracted. We’ll need to write a little CSS later, but for now: we’ve got scripting to do.

We can break this script down into a few discrete tasks: we need to add a Read More link to the first paragraph, we need to hide all the p elements apart from the first one, and we need to reveal those hidden elements when the user interacts with the Read More link.

We’ll start by adding that Read More link to the end of the first paragraph. Open up your still-empty script.js file and enter the following:

var newLink = document.createElement( "a" );

First, we’re initializing the variable newLink, which uses document.createElement( "a" ) to—just like it says on the tin—create a new a element. This element doesn’t really exist anywhere yet—to get it to appear on the page we’ll need to add it manually. First, though, <a></a> without any attributes or contents isn’t very useful. Before adding it to the page, let’s populate it with whatever information it needs.

We could do this after adding the link to the DOM, of course, but there’s no sense in making multiple updates to the element on the page instead of one update that adds the final result—doing all the work on that element before dropping it into the page helps keep our code predictable.

Making a single trip to the DOM whenever possible is also better for performance—but performance micro-optimization is easy to obsess over. As you’ve seen, JavaScript frequently offers us multiple ways to do the same thing, and one of those methods may technically outperform the other. This invariably leads to “excessively clever” code—convoluted loops that require in-person explanations to make any sense at all, just for the sake of shaving off precious picoseconds of load time. I’ve done it; I still catch myself doing it; but you should try not to. So while making as few round-trips to the DOM as possible is a good habit to be in for the sake of performance, the main reason is that it keeps our code readable and predictable. By only making trips to the DOM when we really need to, we avoid repeating ourselves and we make our interaction points with the DOM more obvious for future maintainers of our scripts.

So. Back to our empty, attribute-less <a></a> floating in the JavaScript ether, totally independent of our document.

Now we can use two other DOM interfaces to make that link more useful: setAttribute to give it attributes, and innerHTML to populate it with text. These have a slightly different syntax. We can just assign a string using innerHTML, the way we’d assign a value to any other object. setAttribute, on the other hand, expects two arguments: the attribute and the value we want for that attribute, in that order. Since we don’t actually plan to have this link go anywhere, we’ll just set a hash as the href—a link to the page you’re already on.

var newLink = document.createElement( "a" );

newLink.setAttribute( "href", "#" );
newLink.innerHTML = "Read more";

You’ll notice we’re using these interfaces on our stored reference to the element instead of on document itself. All the DOM’s nodes have access to methods like the ones we’re using here—we only use document.getElementsByTagName( "p" ) because we want to get all the paragraph elements in the document. If we only wanted to get all the paragraph elements inside a certain div, we could do the same thing with a reference to that div—something like ourSpecificDiv.getElementsByTagName( "p" );. And since we’ll want to set the href attribute and the inner HTML of the link we’ve created, we reference these properties using newLink.setAttribute and newLink.innerHTML.

Next: we want this link to come at the end of our first paragraph, so our script will need a way to reference that first paragraph. We already know that document.getElementsByTagName( "p" ) gives us a node list of all the paragraphs in the page. Since node lists behave like arrays, we can reference the first item in the node list by using the index 0.

var newLink = document.createElement( "a" );
var allParagraphs = document.getElementsByTagName( "p" );
var firstParagraph = allParagraphs[ 0 ];

newLink.setAttribute( "href", "#" );
newLink.innerHTML = "Read more";

For the sake of keeping our code readable, it’s a good idea to initialize our variables up at the top of a script—even if only by initializing them as undefined (by giving them an identifier but no value)—if we plan to assign them a value later on. This way we know all the identifiers in play.

So now we have everything we need in order to append a link to the end of the first paragraph: the element that we want to append (newLink) and the element we want to append it to (firstParagraph).

One of the built-in methods on all DOM nodes is appendChild, which—as the name implies—allows us to append a child element to that DOM node. We’ll call that appendChild method on our saved reference to the first paragraph in the document, passing it newLink as an argument.

var newLink = document.createElement( "a" );
var allParagraphs = document.getElementsByTagName( "p" );
var firstParagraph = allParagraphs[ 0 ];

newLink.setAttribute( "href", "#" );
newLink.innerHTML = "Read more";

firstParagraph.appendChild( newLink );

Now—finally—we have something we can point at when we reload the page. If everything has gone according to plan, you’ll now have a Read More link at the end of the first paragraph on the page. If everything hasn’t gone according to plan—because of a misplaced semicolon or mismatched parentheses, for example—your developer console will give you a heads-up that something has gone wrong, so be sure to keep it open.

Pretty close, but a little janky-looking—our link is crashing into the paragraph above it, since that link is display: inline by default (Fig 5.2).


Fig 5.2: Well, it’s a start.

We have a couple of options for dealing with this: I won’t get into all the various syntaxes here, but the DOM also gives us access to styling information about elements—though, in its most basic form, it will only allow us to read and change styling information associated with a style attribute. Just to get a feel for how that works, let’s change the link to display: inline-block and add a few pixels of margin to the left side, so it isn’t colliding with our text. Just like setting attributes, we’ll do this before we add the link to the page:

var newLink = document.createElement( "a" );
var allParagraphs = document.getElementsByTagName( "p" );
var firstParagraph = allParagraphs[ 0 ];

newLink.setAttribute( "href", "#" );
newLink.innerHTML = "Read more";
newLink.style.display = "inline-block";
newLink.style.marginLeft = "10px";

firstParagraph.appendChild( newLink );

Well, adding those lines worked, but not without a couple of catches. First, let’s talk about that syntax (Fig 5.3).


Fig 5.3: Now we’re talking.

Remember that identifiers can’t contain hyphens, and since everything is an object (sort of), the DOM references styles in object format as well. Any CSS property that contains a hyphen instead gets camel-cased: margin-left becomes marginLeft, border-top-left-radius becomes borderTopLeftRadius, and so on. Since the value we set for those properties is a string, however, hyphens are just fine. A little awkward and one more thing to remember, but this is manageable enough—certainly no reason to avoid styling in JavaScript, if the situation makes it absolutely necessary.
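The renaming rule is mechanical enough to write down in a couple of lines. This toCamelCase helper is purely illustrative (it isn’t part of the DOM; it just captures the rule we apply by hand):

```javascript
// Converts a hyphenated CSS property name into the camelCased identifier
// the DOM's style object expects.
function toCamelCase( property ) {
    // Replace each "-x" pair with the uppercased letter alone:
    return property.replace( /-([a-z])/g, function( match, letter ) {
        return letter.toUpperCase();
    });
}

toCamelCase( "margin-left" );            // "marginLeft"
toCamelCase( "border-top-left-radius" ); // "borderTopLeftRadius"
```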

A better reason to avoid styling in JavaScript is to maintain a separation of behavior and presentation. JavaScript is our “behavioral” layer the way CSS is our “presentational” layer, and seldom the twain should meet. Changing styles on a page shouldn’t mean rooting through line after line of functions and variables, the same way we wouldn’t want to bury styles in our markup. The people who might end up maintaining the styles for the site may not be completely comfortable editing JavaScript—and since changing styles in JavaScript means we’re indirectly adding styles via style attributes, whatever we write in a script is going to override the contents of a stylesheet by default.

We can maintain that separation of concerns by instead using setAttribute again to give our link a class. So, let’s scratch out those two styling lines and add one setting a class in their place.

var newLink = document.createElement( "a" );
var allParagraphs = document.getElementsByTagName( "p" );
var firstParagraph = allParagraphs[ 0 ];

newLink.setAttribute( "href", "#" );
newLink.setAttribute( "class", "more-link" );
newLink.innerHTML = "Read more";

firstParagraph.appendChild( newLink );

Now we can style .more-link in our stylesheets as usual:

.more-link {
    display: inline-block;
    margin-left: 10px;
}

Much better (Fig 5.4). It’s worth keeping in mind for the future that using setAttribute this way on a node in the DOM would mean overwriting any classes already on the element, but that’s not a concern where we’re putting this element together from scratch.


Fig 5.4: No visible changes, but this change keeps our styling decisions in our CSS and our behavioral decisions in JavaScript.

Now we’re ready to move on to the second item on our to-do list: hiding all the other paragraphs.

Since we’ve made changes to code we know already worked, be sure to reload the page to make sure everything is still working as expected. We don’t want to introduce a bug here and continue on writing code, or we’ll eventually get stuck digging back through all the changes we made. If everything has gone according to plan, the page should look the same when we reload it now.

Now we have a list of all the paragraphs on the page, and we need to act on each of them. We need a loop—and since we’re iterating over an array-like node list, we need a for loop. Just to make sure we have our loop in order, we’ll log each paragraph to the console before we go any further:

var newLink = document.createElement( "a" );
var allParagraphs = document.getElementsByTagName( "p" );
var firstParagraph = allParagraphs[ 0 ];

newLink.setAttribute( "href", "#" );
newLink.setAttribute( "class", "more-link" );
newLink.innerHTML = "Read more";

for( var i = 0; i < allParagraphs.length; i++ ) {
    console.log( allParagraphs[ i ] );
}

firstParagraph.appendChild( newLink );

Your Read More link should still be kicking around in the first paragraph as usual, and your console should be rich with filler text (Fig 5.5).

Fig 5.5: Looks like our loop is doing what we expect.


Now we have to hide those paragraphs with display: none, and we have a couple of options: we could use a class the way we did before, but it wouldn’t be a terrible idea to use styles in JavaScript for this. We’re controlling all the hiding and showing from our script, and there’s no chance we’ll want that behavior to be overridden by something in a stylesheet. In this case, it makes sense to use the DOM’s built-in methods for applying styles:

var newLink = document.createElement( "a" );
var allParagraphs = document.getElementsByTagName( "p" );
var firstParagraph = allParagraphs[ 0 ];

newLink.setAttribute( "href", "#" );
newLink.setAttribute( "class", "more-link" );
newLink.innerHTML = "Read more";

for( var i = 0; i < allParagraphs.length; i++ ) {
    allParagraphs[ i ].style.display = "none";
}

firstParagraph.appendChild( newLink );

If we reload the page now, everything is gone: our JavaScript loops through the entire list of paragraphs and hides them all. We need to make an exception for the first paragraph, and that means conditional logic—an if statement, and the i variable gives us an easy value to check against:

var newLink = document.createElement( "a" );
var allParagraphs = document.getElementsByTagName( "p" );
var firstParagraph = allParagraphs[ 0 ];

newLink.setAttribute( "href", "#" );
newLink.setAttribute( "class", "more-link" );
newLink.innerHTML = "Read more";

for( var i = 0; i < allParagraphs.length; i++ ) {

    if( i === 0 ) {
        continue;
    }

    allParagraphs[ i ].style.display = "none";
}

firstParagraph.appendChild( newLink );

If this is the first time through the loop, the continue keyword skips the rest of the current iteration and then—unlike if we’d used break—the loop continues on to the next iteration.
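If the difference between continue and break is still fuzzy, here’s a standalone sketch, separate from our script, that you can paste straight into the console:

```javascript
// `continue` skips the rest of one iteration; `break` ends the loop.
var withContinue = [];
var withBreak = [];

for ( var i = 0; i < 5; i++ ) {
    if ( i === 2 ) {
        continue; // skip 2, but keep looping
    }
    withContinue.push( i );
}

for ( var j = 0; j < 5; j++ ) {
    if ( j === 2 ) {
        break; // stop the loop entirely at 2
    }
    withBreak.push( j );
}

withContinue; // [0, 1, 3, 4]
withBreak;    // [0, 1]
```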

If you reload the page now, we’ll have a single paragraph with a Read More link at the end, but all the others will be hidden. Things are looking good so far—and if things aren’t looking quite so good for you, double-check your console to make sure nothing is amiss.

Now that you’ve got a solid grounding in the DOM, let’s really dig in and see where to take it from here.

Want to read more?

The rest of this chapter (even more than you just read!) goes even deeper—and that’s only one chapter out of Mat’s hands-on, help-you-with-your-current-project guide. Check out the rest of JavaScript for Web Designers at A Book Apart.

News stories from Monday 17 October, 2016

Favicon for heise Security 14:51 Identity theft: Banking trojan Acecard wants to snap selfies of its victims » Post from heise Security Visit off-site link
Identity theft: a selfie with the banking trojan Acecard

An Android malware asks its victims to step in front of the camera, national ID card included.

Favicon for heise Security 12:55 Github has deleted a list of compromised online shops » Post from heise Security Visit off-site link
Memory chip on a credit card

The online service removed, without comment, a security researcher’s list of URLs of online shops carrying skimming malware. Gitlab also deleted the list, but admitted shortly afterwards that this was a mistake.

Favicon for heise Security 11:41 Security trade fair it-sa 2016: from ransomware to SCADA » Post from heise Security Visit off-site link

From 18 to 20 October, more than 470 exhibitors will present their products and services at the Nuremberg exhibition center, covering areas such as cloud computing, IT forensics, data backup, and hosting. The c’t crypto campaign will also be on site.

Favicon for heise Security 10:19 Encrypted communication: first code audit of the pEp engine published » Post from heise Security Visit off-site link
pretty easy privacy

The Swiss pEp foundation has published the code audit of the pEp engine carried out by the Cologne-based company Sektioneins. Sektioneins found several flaws and has been commissioned to re-audit the code for every relevant update.

News stories from Sunday 16 October, 2016

Favicon for heise Security 12:42 HTTPS encryption on the web reaches 50 percent for the first time » Post from heise Security Visit off-site link
HTTPS symbol image

Around half of all websites are now delivered to users encrypted via HTTPS, according to figures from Google and Mozilla.

News stories from Saturday 15 October, 2016

Favicon for heise Security 13:07 Open database: 58 million records in circulation » Post from heise Security Visit off-site link
Dangers from the net

An unprotected MongoDB database at the Texan service provider Modern Business Solutions has leaked at least 58 million records from the automotive and recruitment industries.

Favicon for heise Security 11:43 DDoS tool Mirai enslaves Sierra Wireless gateways for its IoT botnet » Post from heise Security Visit off-site link

The next round of IoT devices is being conscripted into botnets: the Sierra Wireless modems are not taken over through a security hole, however, but through unchanged default passwords.

Favicon for the web hates me 08:00 Lean Testing mit » Post from the web hates me Visit off-site link

It has been quiet here on the blog for a long time, but for a reason, and a very good one, I think. After about a year of work, I am proud to present our “Software as a Service” solution and to officially ring in the open beta phase. A lot of work, a lot of pride. But what is it actually about […]

The post Lean Testing mit appeared first on the web hates me.

News stories from Friday 14 October, 2016

Favicon for heise Security 17:55 Cryptocurrency project Ethereum: the next hard fork is coming » Post from heise Security Visit off-site link

Once again, the cryptocurrency is facing a hard fork. It is meant to protect against the DoS attacks that have been slowing down the Ethereum network for about three weeks.

Favicon for heise Security 15:56 "Tutuapp": Chinese app store with pirated apps is spreading » Post from heise Security Visit off-site link

To get hold of a hacked version of Pokemon Go, more and more teenagers are apparently installing the dubious "Tutuapp" store on their iPhones and Android smartphones, clearing the way for malware.

Favicon for heise Security 14:43 GlobalSign accidentally revokes certificates of many websites » Post from heise Security Visit off-site link

Some web browsers are currently warning that connections to websites such as Wikipedia are no longer secure because something is wrong with the site’s certificate.

Favicon for heise Security 13:33 Von der Leyen names head of her new cyber force » Post from heise Security Visit off-site link

The Bundeswehr’s new cyber force is to comprise 13,500 soldiers and civilians; Major General Ludwig Leinhos has now been appointed as its head. He previously led the unit’s formation staff.

Favicon for heise Security 11:57 SSHowDowN: twelve-year-old OpenSSH bug endangers countless IoT devices » Post from heise Security Visit off-site link

Akamai warns that criminals continue to abuse millions of IoT devices for DDoS attacks. The exploited vulnerability is more than a decade old, and many devices reportedly cannot be patched.

Favicon for heise Security 09:53 Tally: Facebook has paid security researchers 5 million US dollars so far » Post from heise Security Visit off-site link
Facebook Bug Bounty

Five years ago, Facebook launched its bug bounty program and has since paid rewards to thousands of security researchers. The program covers more and more of the company’s products.

Favicon for heise Security 09:12 Magento updates: checkout process as a gateway for attackers » Post from heise Security Visit off-site link

Security patches for the shop system close several vulnerabilities. Two of them are considered critical.

News stories from Thursday 13 October, 2016

Favicon for heise Security 17:04 Doctored primes enable backdoors in encryption » Post from heise Security Visit off-site link

A research team has shown that a cleverly constructed prime number can be used to build a backdoor into encryption schemes. It cannot be ruled out that this has long since happened in established schemes.

News stories from Tuesday 11 October, 2016

Favicon for A List Apart: The Full Feed 15:00 Using CSS Mod Queries with Range Selectors » Post from A List Apart: The Full Feed Visit off-site link

Recently, I was asked to build a simple list that would display in a grid—one that could start with a single element and grow throughout the day, yet always be tidy regardless of the length. So, as you do sometimes when you’re busy with one thing and asked if you can do something completely different, I tried to think of any reason why it couldn’t be done, came up blank, and distractedly said, “Yes.”

At the time, I was working on a London-based news organization’s website. We’d spent the previous year migrating their CMS to the Adobe AEM platform while simultaneously implementing a responsive UI—both big improvements. Since that phase was complete, we were starting to focus on finessing the UI and building new features. The development project was divided into a number of small semiautonomous teams. My team was focusing on hub pages, and I was leading the UI effort.

Each hub page is essentially a list of lists, simply there to help readers find content that interests them. As you can imagine, a news website is almost exclusively made of content lists! A page full of generic vertical lists would be unhelpful and unappealing; we wanted readers to enjoy browsing the content related to their sphere of interest. Sections needed to be distinct and the lists had to be both individually distinguishable and sit harmoniously together. In short, the visual display was critical to the usability and effectiveness of the entire page.

That “simple list” I said I’d build would be high profile, sitting in its own panel near the top of a hub page and serving to highlight a specific point of interest. Starting with one item and growing throughout the day as related articles were published, the list needed to be a rectangular grid rather than a single column, and never have “leftover” items in the last row. And no matter how many child elements it contained at any given moment, it had to stay tidy and neat because it would display above the fold. Each item would be more or less square, with the first item set at 100% width, the second two at 50%, and all subsequent items 33% and arranged in rows of three. My simple list suddenly wasn’t so simple.

Not everyone wants a generic grid or stack of identical items—there’s something nice about selective prominence, grouped elements, and graceful line endings. These styles can be hardcoded if you know the list will always be an exact length, but it becomes more of a challenge when the length can change. How could I keep that last row tidy when there were fewer than three items?

Various arrangements of list items that do and do not break the planned layout in the bottom row.
Our intended layout would break visually as more items were added to the list.

When it came to actually building the thing, I realized that knowing the length of the list wasn’t very helpful. Having loved Heydon Pickering’s excellent article on quantity queries for CSS, I assumed I could find out the length of the list using QQs, then style it accordingly and all would be fine.

But since my list could be any length, I’d need an infinite number of QQs to meet the requirements! I couldn’t have a QQ for every eventuality. Plus, there were rumors a “Load More” button might be added down the road, letting users dynamically inject another 10 or so items. I needed a different solution.

After a minor meltdown, I asked myself, What would Lea Verou do? Well, not panicking would be a good start. Also, it would help to simplify and identify the underlying requirements. Since the list would fundamentally comprise rows of three, I needed to know the remainder from mod 3.
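To make that concrete, here’s a quick JavaScript sketch of my own (the layoutCase function is made up for illustration) enumerating the three mod-3 cases the layout has to handle:

```javascript
// Which layout case does a list of a given length fall into?
// The remainder operator (%) tells us what's left over after
// filling complete rows of three.
function layoutCase( listLength ) {
    var remainder = listLength % 3;
    if ( remainder === 0 ) { return "rows of three fit exactly"; }
    if ( remainder === 1 ) { return "one item left over"; }
    return "two items left over";
}

layoutCase( 6 ); // "rows of three fit exactly"
layoutCase( 7 ); // "one item left over"
layoutCase( 8 ); // "two items left over"
```

Whatever the list length, only these three cases exist, which is what makes the problem tractable in pure CSS.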

The “mod” query

Being able to select and style elements by the number of siblings is great, but there’s more to this than mere length. In this case, it would be much better to know if my list is divisible by a certain number rather than how long it is.

Unfortunately, there isn’t a native mod query in CSS, but we can create one by combining two selectors: :nth-last-child(3n) (aka the “modulo” selector) and the :first-child selector.

The following query selects everything if the list is divisible by three:

li:nth-last-child(3n):first-child ~ li {
 … selects everything in a list divisible by three …
}
Four rows of list items (cats in boxes). The top and bottom rows are selected (full color, not grayed out) because each is divisible by 3.
Only those rows divisible by three are selected. See the Pen Using CSS Mod Queries with Range Selectors: Fig 2 by Patrick (@clanceyp) on CodePen. Cat image via Paper Bird Publishing.

Let’s talk through that code. (I use li for “list item” in the examples.)

The CSS selector:

li:nth-last-child(3n):first-child ~ li

Select all following siblings:

... ~ li

The first child (first li in the list, in this case):

...:first-child ...

Every third item starting from the end of the list:

...:nth-last-child(3n)...
That combination basically means if the first child is 3n from the end, select all of its siblings.

The query selects all siblings of the first item, but doesn’t include the first item itself, so we need to add a selector for it separately.

li:nth-last-child(3n):first-child,
li:nth-last-child(3n):first-child ~ li {
 … styles for list items in a list divisible by 3 …
}

Check out the demo and give it a try!

What about remainders?

With my mod query, I can select all the items in a list if the list is divisible by three, but I’ll need to apply different styles if there are remainders. (In the case of remainder 1, I’ll just need to count back in the CSS from the second-to-last element, instead of the last. This can be achieved by simply adding +1 to the query.)

li:nth-last-child(3n+1):first-child ~ li {
 … styles for elements in list length, mod 3 remainder = 1 …
}

Ditto for remainder 2—I just add +2 to the query.

li:nth-last-child(3n+2):first-child ~ li {
 … styles for elements in list length, mod 3 remainder = 2 …
}

Creating a range selector

Now I have a way to determine if the list length is divisible by any given number, with or without remainders, but I still need to select a range. As with mod query, there isn’t a native CSS range selector, but we can create one by combining two selectors: :nth-child(n) (i.e., “everything above”) and :nth-child(-n) (i.e., “everything below”).

This allows us to select items 3 to 5, inclusive:

li:nth-child(n+3):nth-child(-n+5){
 ... styles for items 3 to 5 inclusive ...
}
A row of six list items (graphics of cats in boxes). The two on the left are grayed out, followed by three “selected” cats in full color, and the one on the right is grayed out.
We’ve selected a range: cats 3, 4, and 5.

True, that could just as easily be achieved with simple :nth-child(n) syntax and targeting the item positions directly—li:nth-child(3), li:nth-child(4), li:nth-child(5){ ... }—but defining a start and end to a range is obviously much more versatile. Let’s quickly unpack the selector to see what it’s doing.

The :nth-child(-n+5) part selects all the items up to and including the fifth item:

li:nth-child(-n+5){ … }

The :nth-child(n+3) part selects all the items from the third item onwards:

li:nth-child(n+3){ … }

Combining the two—li:nth-child(n+3):nth-child(-n+5)—creates a range selector.
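To convince yourself which positions a combined selector matches, the An+B rule behind :nth-child can be modeled in a few lines of JavaScript. This matchesNthChild helper is purely illustrative, not part of any library:

```javascript
// :nth-child(An+B) matches a 1-based position p when p = A*n + B
// for some whole number n >= 0.
function matchesNthChild( a, b, position ) {
    if ( a === 0 ) { return position === b; }
    var n = ( position - b ) / a;
    return n >= 0 && n === Math.floor( n );
}

// Which of positions 1-6 satisfy both :nth-child(n+3) and :nth-child(-n+5)?
var range = [];
for ( var p = 1; p <= 6; p++ ) {
    // n+3 means a=1, b=3; -n+5 means a=-1, b=5
    if ( matchesNthChild( 1, 3, p ) && matchesNthChild( -1, 5, p ) ) {
        range.push( p );
    }
}
range; // [3, 4, 5]
```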

If we look at an example, we might have a product grid where the list items contain an image, title, and description. Let’s say the product image speaks for itself, so in the first row we promote the image and hide all the text. With the second and third row, we display the title and image as a thumbnail, while in subsequent rows we hide the image and show the title and description on a single line.

A grid of three cat graphics in a top row, then two rows of blocks each comprising a cat graphic and a product title, then four rows each listing text for product title and product details.
A product grid of our cats. We have standalone graphics in the top row, small graphics plus product titles in the second and third rows, and then we lose the graphics and only show text for all rows after that. See the Pen Using CSS Mod Queries with Range Selectors: Fig 4 by Patrick (@clanceyp) on CodePen.

By using the range selector, we can select the first three, the fourth through ninth, and the 10th onwards. This allows us to change the ranges at different breakpoints in the CSS so we can keep our product grid nice and responsive.

Notes on SCSS mixins

Since I was using a CSS preprocessor, I simplified my code by using preprocessor functions; these are SCSS mixins for creating range selectors and mod queries.

// range selector mixin
@mixin select-range($start, $end){
  &:nth-child(n+#{$start}):nth-child(-n+#{$end}){
    @content;
  }
}

// mod query mixin
@mixin mod-list($mod, $remainder){
  &:nth-last-child(#{$mod}n+#{$remainder}):first-child ~ li {
    @content;
  }
}

Then in my code I could nest the mixins.

li {
  @include mod-list(3, 0){
    @include select-range(3, 5){
      // styles for items 3 to 5 in a list mod 3 remainder = 0
    }
  }
}
Which is, if nothing else, much easier to read!

Putting it all together

So now that I have a little arsenal of tools to help me deal with mods, ranges, and ranges within mods, I can break away from standard-implementation fixed length or fixed-layout lists. Creative use of mod queries and range selectors lets me apply styles to change the layout of elements.

Getting back to the original requirement—getting my list to behave—it became clear that if I styled the list assuming it was a multiple of three, then there would only be two other use cases to support:

  • Mod 3, remainder 1
  • Mod 3, remainder 2

If there was one remaining item, I’d make the second row take three items (instead of the default two), but if the remainder was 2, I could make the third row take two items (with the fourth and fifth items at 50%).

In the end, I didn’t need numerous queries at all, and the ones I did need were actually quite simple.

There was one special case: What if the list only contained two elements?

That was solved with a query to select the second item when it’s also the last child.

li:nth-child(2):last-child {
  /* styles for the last item if it's also the second item */
}

The queries ultimately weren’t as hard as I’d expected; I just needed to combine the mod and range selectors.

li:nth-last-child(3n):first-child /* mod query */
~ li:nth-child(n+3):nth-child(-n+5) { /* range selector */
  /* styles for 3rd to 5th elements, in a list divisible by 3 */
}

Altogether, my CSS looked something like this in the end:

/* default settings for list (when it's mod 3 remainder 0):
   list items are 33% wide,
   except the first item is 100%,
   and the second and third are 50% */
li {
  width: 33.33%;
}
li:first-child {
  width: 100%;
}
/* range selector for 2nd and 3rd */
li:nth-child(n+2):nth-child(-n+3) {
  width: 50%;
}

/* overrides */

/* mod query override, check for mod 3 remainder = 1 */
li:nth-last-child(3n+1):first-child ~ li:nth-child(n+2):nth-child(-n+3) {
  width: 33.33%; /* override default 50% width for 2nd and 3rd items */
}
/* mod query override, check for mod 3 remainder = 2 */
li:nth-last-child(3n+2):first-child ~ li:nth-child(n+4):nth-child(-n+5) {
  width: 50%; /* override default 33% width for 4th and 5th items */
}
/* special case, list contains only two items */
li:nth-child(2):last-child {
  margin-left: 25%;
}

Experience for yourself (and a note on browser support)

The mod queries and range selectors used in this article rely on CSS3 selectors, so they work in all modern browsers that support CSS3, including Internet Explorer 9 and above (but remember, IE will expect a valid doctype).

I created a small mod query generator that you can use to experiment with mod queries.

When I first came across quantity queries (QQs), I thought they were great and interesting but largely theoretical, without many practical real-world use cases. However, with mobile usage outstripping desktop, and responsive design now the norm, the need to display lists, target parts of lists depending on their length or mod, and display lists differently at different breakpoints has become much more common. This really brings the practical application of QQs into focus, and I’m finding more than ever that they are an essential part of the UI developer’s toolkit.

News stories from Friday 07 October, 2016

Favicon for Kopozky 16:27 Pry Hard » Post from Kopozky Visit off-site link

Comic strip: “Pry Hard”

Starring: Mr Kopozky and some client

News stories from Tuesday 04 October, 2016

Favicon for A List Apart: The Full Feed 15:00 A Redesign with CSS Shapes » Post from A List Apart: The Full Feed Visit off-site link

Here at An Event Apart (an A List Apart sibling) we recently refreshed the design of our “Why Should You Attend?” page, which had retained an older version of our site design and needed to be brought into alignment with the rest of the site. Along the way, we decided to enhance the page with some cutting-edge design techniques: non-rectangular float shapes and feature queries.

To be clear, we didn’t set out to create a Cutting Edge Technical Example™; rather, our designer (Mike Pick of Monkey Do) gave us a design, and we realized that his vision happened to align nicely with new CSS features that are coming into mainstream support. We were pleased enough with the results and the techniques that we decided to share them with the community.

Styling bubbles

Here are some excerpts from an earlier stage of the designs (Fig. 1). (The end-stage designs weren’t created as comps, so I can’t show their final form, but these are pretty close.)

Fig 1: Late-stage design comps showing “desktop” and “mobile” views.

What interested me was the use of the circular images, which at one point we called “portholes,” but I came to think of as “bubbles.” As I prepared to implement the design in code, I thought back to the talk Jen Simmons has been giving throughout the year at An Event Apart. Specifically, I thought about CSS Shapes and how I might be able to use them to let text flow along the circles’ edges—something like Fig. 2.

Fig 2: Flowing around a circular shape.

This layout technique used to be sort of possible by using crude float hacks like Ragged Float and Sliced Sandbags, but now we have float shapes! We can define a circle—or even a polygon—that describes how text should flow past a floated element.

“Wait a minute,” you may be saying, “I haven’t heard about widespread support for Shapes!” Indeed, you have not. They’re currently supported only in the WebKit/Blink family—Chrome, Safari, and Opera. But that’s no problem: in other browsers, the text will flow past the boxy floats the same way it always has. The same way it does in the design comps, in fact.

The basic CSS looks something like this:

img.bubble.left {
    float: left; margin: 0 40px 0 0;
    shape-outside: circle(150px at 130px 130px);
}
img.bubble.right {
    float: right; margin: 0 0 0 40px;
    shape-outside: circle(150px at 170px 130px);
}

Each of those bubble images, by the way, is intrinsically 260px wide by 260px tall. In wide views like desktops, they’re left to that size; at smaller widths, they’re scaled to 30% of the viewport’s width.
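The snippet above covers only the full-size case. A sketch of the 30% scaling might look like the following (the 800px breakpoint is an assumption, and note that the pixel-based circles also need redefining in viewport units, because shape-outside values don’t scale along with the image):

```css
@media (max-width: 800px) {
  img.bubble {
    width: 30vw;  /* scale the square bubble to 30% of the viewport width */
    height: 30vw;
  }
  img.bubble.left {
    /* 15vw 15vw is the midpoint of the scaled image;
       the 150px radius scales by the same 30/260 ratio */
    shape-outside: circle(17.3vw at 15vw 15vw);
  }
  img.bubble.right {
    /* shift the center past the 40px left margin */
    shape-outside: circle(17.3vw at calc(40px + 15vw) 15vw);
  }
}
```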

Shape placement

To understand the shape setup, look at the left-side bubbles. They’re 260×260, with an extra 40 pixels of right margin. That means the margin box (that is, the box described by the outer edge of the margins) is 300 pixels wide by 260 pixels tall, with the actual image filling the left side of that box.

This is why the circular shape is centered at the point 130px 130px—it’s the midpoint of the image in question. So the circle is now centered on the image, and has a radius of 150px. That means it extends 20 pixels beyond the visible outer edge of the circle, as shown here (Fig. 3).

Fig 3: The 150px radius of the shape covers the entire visible part of the image, plus an extra 20px

In order to center the circles on the right-side bubbles, the center point has to be shifted to 170px 130px—traversing the 40-pixel left margin, and half the width of the image, to once again land on the center. The result is illustrated here, with annotations to show how each of the circles’ centerpoints are placed (Fig. 4).

Fig 4: Two of the circular shapes, as highlighted by Chrome’s Inspector and annotated in Keynote (!)

It’s worth examining that screenshot closely. For each image, the light blue box shows the element itself—the img element. The light orange is the basic margin area, 40 pixels wide in each case. The purple circle shows the shape-outside circle. Notice how the text flows into the orange area to come right up against the purple circle. That’s the effect of shape-outside. Areas of the margin outside that shape, and even areas of the element’s content outside the shape, are available for normal-flow content to flow into.

The other thing to notice is the purple circle extending outside the margin area.  This is misleading: any shape defined by shape-outside is clipped at the edge of the element’s margin box. So if I were to increase the circle’s radius to, say, 400 pixels, it would cover half the page in Chrome’s inspector view, but the actual layout of text would be around the margin edges of the floated image—as if there were no shape at all. I’d really like to see Chrome show this by fading the parts of the shape that extend past the margin box. (Firefox and Edge should of course follow suit!)

Being responsive

At this point, things seem great; the text flows past circular float shapes in Chrome/Safari/Opera, and past the standard boxy margin boxes in Firefox/Edge/etc. That’s fine as long as the page never gets so narrow as to let text wrap between bubbles—but, of course, it will, as we see in this screenshot (Fig. 5).

Fig 5: The perils of floats on smaller displays

For the right-floating images, it’s not so bad—but for the left floaters, things aren’t as nice. This particular situation is passably tolerable, but in a situation where just one or two words wrap under the bubble, it will look awful.

An obvious first step is to set some margins on the paragraphs so that they don’t wrap under the accompanying bubbles. For example:

.complex-content div:nth-child(even):not(:last-child) p {
    margin-right: 20%;
}
.complex-content div:nth-child(odd):not(:last-child) p {
    margin-left: 20%;
}

The point here being, for all even-numbered child divs (that aren’t the last child) in a complex-content context, add a 20% right margin; for the odd-numbered divs, a similar left margin.

That’s pretty good in Chrome (Fig. 6) (with the circular float shapes) because the text wraps along the bubble and then pushes off at a sensible point. But in Firefox, which still has the boxy floats, it creates a displeasing stairstep effect (Fig. 7).

Fig 6: Chrome (with float shapes). Fig 7: Firefox (without float shapes).

On the flip side, increasing the margin to the point that the text all lines up in Firefox (33% margins) would mean that the float shape in Chrome would be mostly pointless, since the text would never flow down along the bottom half of the circles.

Querying feature support

This is where @supports came into play. By using @supports to run a feature query, I could set the margins for all browsers to the 33% needed when shapes aren’t supported, and then reduce it for browsers that do understand shapes. It goes something like this:

.complex-content div:nth-child(even):not(:last-child) p {
    margin-right: 33%;
}
.complex-content div:nth-child(odd):not(:last-child) p {
    margin-left: 33%;
}

@supports (shape-outside: circle()) {
    .complex-content div:nth-child(even):not(:last-child) p {
        margin-right: 20%;
    }
    .complex-content div:nth-child(odd):not(:last-child) p {
        margin-left: 20%;
    }
}
With that, everything is fine in the two worlds (Fig. 8 and Fig. 9). There are still a few things that could be tweaked, but overall, the effect is pleasing in browsers that support float shapes, and also those that don’t. The two experiences are shown in the following videos. (They don’t autoplay, so click at your leisure.)

CSS Shape preview captured in Chrome; higher resolution available (mp4, 3.3MB). CSS Shape preview captured in Firefox; higher resolution available (mp4, 2.9MB).

Thanks to feature queries, as browsers like Firefox and MS Edge add support for float shapes, they’ll seamlessly get the experience that currently belongs only to Chrome and its brethren. There’s no browser detection to adjust later, no hacks to clear out. There’s only silent progressive enhancement baked right into the CSS itself. It’s pretty much “style and forget.”

While an arguably minor enhancement, I really enjoyed the process of working with shapes and making them progressively and responsively enhanced. It’s a nice little illustration of how we can use advanced features of CSS right now, without the usual wait for widespread support. This is a general pattern that will see a lot more use as we start to make use of shapes, flexbox, grid, and more cutting-edge layout tools, and I’m glad to be able to offer this case study.

Further reading

If you’d like to know more about float shapes and feature queries, I can do little better than to recommend the following articles.

Favicon for the web hates me 08:00 Aller guten Dinge sind drei – Code Talks 2016 » Post from the web hates me Visit off-site link

After a period of “not speaking,” I decided to once again submit a talk to a conference. With success. It took place just last week, and as always it was a pleasure to rock Code Talks in Hamburg. Torsten (@toddyfranz) and I wanted to talk a bit about how, back at Gruner+Jahr, we […]

The post Aller guten Dinge sind drei – Code Talks 2016 appeared first on the web hates me.

News stories from Monday 03 October, 2016

Favicon for A List Apart: The Full Feed 05:01 This week's sponsor: Adobe XD » Post from A List Apart: The Full Feed Visit off-site link

ADOBE XD. Go from idea to prototype faster. Download XD to create and share your design ideas, and download the mobile companion apps to preview your prototypes on actual devices.

News stories from Tuesday 27 September, 2016

Favicon for A List Apart: The Full Feed 15:00 Task Performance Indicator: A Management Metric for Customer Experience » Post from A List Apart: The Full Feed Visit off-site link

It’s hard to quantify the customer experience. “Simpler and faster for users” is a tough sell when the value of our work doesn’t make sense to management. We have to prove we’re delivering real value—increased the success rate, or reduced time-on-task, for example—to get their attention. Management understands metrics that link with other organizational metrics, such as lost revenue, support calls, or repeat visits. So, we need to describe our environment with metrics of our own.

For the team I work with, that meant developing a remote testing method that would measure the impact of changes on customer experience—assessing alterations to an app or website in relation to a defined set of customer “top tasks.” The resulting metric is stable, reliable, and repeatable over time. We call it the Task Performance Indicator (TPI).

For example, if a task has a TPI score of 40 (out of 100), it has major issues. If you measure again in 6 months’ time but nothing has been done to address the issues, the testing score will again result in a TPI of 40.

In traditional usability testing, it has long been established that if you test with between three and eight people, you’ll find out if significant problems exist. Unfortunately, that’s not enough to reveal precise success rates or time-on-task measurements. What we’ve discovered from hundreds of tests over many years is that reliable and stable patterns aren’t apparent until you’re testing with between 13 and 18 people. Why is that?

When the number of participants ranges from 13 to 18 people, testing results begin to stabilize and you’re left with a reliable baseline TPI metric.

The following chart shows why we can do this (Fig. 1).

Fig 1: TPI scores start to level out and stabilize as more participants are tested.

How TPI scores are calculated

We’ve spent years developing a single score that we believe is a true reflection of the customer experience when completing a task.

For each task, we present the user with a “task question” via live chat. Once they understand what they have to do, the user indicates that they are starting the task. At the end of the task, they must provide an answer to the question. We then ask people how confident they are in their answer.

A number of factors affect the resulting TPI score.

Time: We establish what we call the “Target Time”—how long it should take to complete the task under best practice conditions. The more they exceed the target time, the more it affects the TPI.

Time out: The person takes longer than the maximum time allocated. We set it at 5 minutes.

Confidence: At the end of each task, people are asked how confident they are. For example, low confidence in a correct answer would have a slight negative impact on the TPI score.

Minor wrong: The person is unsure; their answer is almost correct.

Disaster: The person has high confidence, but the wrong result; acting on this wrong answer could have serious consequences.

Gives up: The person gives up on the task.

A TPI of 100 means that the user has successfully completed the task within the agreed target times.

In the following chart, the TPI score is 61 (Fig. 2).

Fig 2: A visual breakdown of sample results for Overall Task Performance, Mean Completion Times, and Mean Target Times.

Developing task questions

Questions are the greatest source of potential noise in TPI testing. If a question is not worded correctly, it will invalidate the results. To get an overall TPI for a particular website or app, we typically test 10-12 task questions. In choosing a question, keep in mind the following:

Based on customer top tasks. You must choose task questions that are examples of top tasks. If you measure and then seek to improve the performance of tiny tasks (low demand tasks) you may be contributing to a decline in the overall customer experience.

Repeatable. Create task questions that you can test again in 6 to 12 months.

Representative and typical. Don’t make the task questions particularly difficult. Start off with reasonably basic, typical questions.

Universal, everyone can do it. Every one of your test participants must be able to do each task. If you’re going to be testing a mixture of technical, marketing, and sales people, don’t choose a task question that only a salesperson can do.

One task, one unique answer. Limit each task question to only one actual thing you want people to do, and one unique answer.

Does not contain clues. The participant will examine the task question like Sherlock Holmes would hunt for a clue. Make sure it doesn’t contain any obvious keywords that could be answered by conducting a search.

Short—30 words or less. Remember, the participant is seeing each task question for the first time, so aim to keep its length at less than 20 words (and definitely less than 30).

No change within testing period. Choose questions where the website or app is not likely to change during the testing period. Otherwise, you’re not going to be testing like with like.

Case Study: Task questions for OECD

Let’s look at some top tasks for the customers of Organisation for Economic Co-operation and Development (OECD), an economic and policy advice organization.

  1. Access and submit country surveys, reviews, and reports.
  2. Compare country statistical data.
  3. Retrieve statistics on a particular topic.
  4. Browse a publication online for free.
  5. Access, submit, and review working papers.

Based on that list, these task questions were developed:

  1. What are OECD’s latest recommendations regarding Japan’s healthcare system?
  2. In 2008, was Vietnam on the list of countries that received official development assistance?
  3. Did more males per capita die of heart attacks in Canada than in France in 2004?
  4. What is the latest average starting salary, in US dollars, of a primary school teacher across OECD countries?
  5. What is the title of Box 1.2 on page 73 of OECD Employment Outlook 2009?
  6. Find the title of the latest working paper about improvements to New Zealand’s tax system.

Running the test

To test 10-12 task questions usually takes about one hour, and you’ll need between 13 and 18 participants (we average 15). Make sure that they’re representative of your typical customers. 

We’ve found that remote testing is better, faster, and cheaper than traditional lab-based measurement for TPI testing. With remote testing, people are more likely to behave in a natural way because they are in their normal environment—at home or in the office—and using their own computer. That makes it much easier for someone to give you an hour of their time, rather than spend the morning at your lab. And since the cost is much lower than lab-based tests, we can set them up more quickly and more often. It’s even convenient to schedule them using Webex, GoToMeeting, Skype, etc.

The key to a successful test is that you are confident, calm, and quiet. You’re there to facilitate the test—not to guide it or give opinions. Aim to become as invisible as possible.

Prior to beginning the test, introduce yourself and make sure the participant gives you permission to record the session. Next, ask that they share their screen. Remember to stress that you are only testing the website or app—not them. Ask them to go to an agreed start point where all the tasks will originate. (We typically choose the homepage for the site/app, or a blank tab in the browser.)

Explain that for each task, you will paste a question into the chat box found on their screen. Test the chat box to confirm that the participant can read it, and tell them that you will also read the task aloud a couple of times. Once they understand what they have to do, ask them to indicate when they start the task, and that they must give an answer once they’ve finished. After they’ve completed the task, ask the participant how confident they are in their answer.

Analyzing the results

As you observe the tests, you’re looking for patterns. In particular, look for the major reasons people give for selecting the wrong answer or exceeding the target time.

Video recordings of your customers as they try—and often fail—to complete their tasks have powerful potential. They are the raw material of empathy. When we identify a major problem area during a particular test, we compile a video containing three to six participants who were affected. For each participant, we select less than a minute’s worth of video showing them while affected by this problem. We then edit these participant snippets into a combined video (that we try to keep under three minutes). We then get as many stakeholders as possible to watch it. You should seek to distribute these videos as widely, and as often as possible.

How Cisco uses the Task Performance Indicator

Every six months or so, we measure several tasks for Cisco, including the following:

Task: Download the latest firmware for the RV042 router.

The top task of Cisco customers is downloading software. When we started the Task Performance Indicator for software downloads in 2010, a typical customer might take 15 steps and more than 300 seconds to download a piece of software. It was a very frustrating and annoying experience. The Cisco team implemented a continuous improvement process based on the TPI results. Every six months, the Task Performance Indicator was carried out again to see what had been improved and what still needed fixing. By 2012—for a significant percentage of software—the number of steps to download software had been reduced from 15 to 4, and the time on task had dropped from 300 seconds to 40 seconds. Customers were getting a much faster and better experience.

According to Bill Skeet, Senior Manager of Customer Experience for Cisco Digital Support, implementing the TPI has had a dramatic impact on how people think about their jobs:

We now track the score of each task and set goals for each task. We have assigned tasks and goals to product managers to make sure we have a person responsible for managing the quality of the experience ... Decisions in the past were driven primarily by what customers said and not what they did. Of course, that sometimes didn’t yield great results because what users say and what they do can be quite different.

Troubleshooting and bug fixing are also top tasks for Cisco customers. Since 2012, we’ve tested the following.

Task: Ports 2 and 3 on your ASR 9001 router, running v4.3.0 software, intermittently stop functioning for no apparent reason. Find the Cisco recommended fix or workaround for this issue.

Fig 3: Bug Task Success Rate Comparisons, February 2012 through December 2014.

For a variety of reasons, it was difficult to solve the underlying problems connected with finding the right bug fix information on the Cisco website. Thus, the scores from February 2012 to February 2013 did not improve in any significant way.

For the May 2013 measurement, the team ran a pilot to show how (with the proper investment) it could be much easier to find bug fix information. As we can see in the preceding image, the success rate jumped. However, it was only a pilot and by the next measurement it had been removed and the score dropped again. The evidence was there, though, and the team soon obtained resources to work on a permanent fix. The initial implementation was for the July 2014 measurement, where we see a significant improvement. More refinements were made, then we see a major turnaround by December 2014.

Task: Create a new guest account to access the website and log in with this new account.

Fig 4: Success/Failure rates from March 2015 through June 2015.

This task was initially measured in 2014; the results were not good.

In fact, nobody succeeded in completing the task during the March 2014 measurements, resulting in three specific design improvements to the sign-up form. These involved:

  1. Clearly labelling mandatory fields
  2. Improving password guidance
  3. Eliminating address mismatch errors.

A shorter pilot form was also launched as a proof of concept. Success jumped by 50% in the July 2014 measurements, but dropped 21% by December 2014 because the pilot form was no longer there. By June 2015, a shorter, simpler form was fully implemented, and the success again reached 50%.

The team was able to show that because of their work:

  • The three design improvements improved the success rate by 29%.
  • The shorter form improved the success rate by 21%.

That’s very powerful. You can isolate a piece of work and link it to a specific increase in the TPI. You can start predicting that if a company invests X it will get a Y TPI increase. This is control and the route to power and respect within your organization, or to trust and credibility with your client.

If you can link it with other key performance indicators, that’s even more powerful.

The following table shows that improvements to the registration form halved the support requests connected with guest account registration (Fig. 5).

Fig 5: Registration support requests: Q1 2014 (1,500), Q2 2015 (679), and Q3 2015 (689).

A more simplified guest registration process resulted in:

  • A reduction in support requests, from 1,500 a quarter to fewer than 700
  • Three fewer people required to support customer registration
  • An 80% productivity improvement
  • Registration time down from 3 minutes 25 seconds to 2 minutes

Task: Pretend you have forgotten the password for the Cisco account and take whatever actions are required to log in.

When we measured the change passwords task, we found that there was a 37% failure rate.

A process of improvement was undertaken, as can be seen by the following chart, and by December 2013, we had a 100% success rate (Fig. 6).

Fig 6: Progression of success rate improvement, from November 2012 (63%) through May 2013 (77%) and August 2013 (88%) to December 2013 (100%).

100% success rate is a fantastic result. Job done, right? Wrong. In digital, the job is never done. It is always an evolving environment. You must keep measuring the top tasks because the digital environment that they exist within is constantly changing. Stuff is getting added, stuff is getting removed, and stuff just breaks (Fig. 7).

Fig 7: Comparison of success rates, March 2014 and July 2014.

When we measured again in March 2014, the success rate had dropped to 59% because of a technical glitch. It was quickly dealt with, so the rate shot back up to 100% by July.

At every step of the way, the TPI gave us evidence about how well we were doing our job. It’s really helped us fight against some of the “bright shiny object” disease and the tendency for everyone to have an opinion on what we put on our webpages ... because we have data to back it up. It gave us more insight into how content organization played a role in our work for Cisco, something that Jeanne Quinn (senior manager responsible for the Cisco Partner) told us kept things clear and simple while working with the client.

The TPI allows you to express the value of your work in ways that makes sense to management. If it makes sense to management—and if you can prove you’re delivering value—then you get more resources and more respect.

News stories from Tuesday 20 September, 2016

Favicon for A List Apart: The Full Feed 15:00 Why We Should All Be Data Literate » Post from A List Apart: The Full Feed Visit off-site link

Recently, I was lucky enough to see the great Jared Spool talk (spoiler: all Spool talks are great Spool talks). In this instance, the user interface icon warned of the perils of blindly letting data drive design.

I am in total agreement with 90 percent of his premise. Collecting and analyzing quantitative data can indeed inform your design decisions, and smart use of metrics can fix critical issues or simply improve the user experience. However, this doesn’t preclude a serious problem with data, or more specifically, with data users. Spool makes this clear: When you don’t understand what data can and can’t tell you and your work is being dictated by decisions based on that lack of understanding—well, your work and product might end up being rubbish. (Who hasn’t heard a manager fixate on some arbitrary metric, such as, “Jane, increase time on page” or “Get the bounce rate down, whatever it takes”?) Designing to blindly satisfy a number almost always leads to a poorer experience, a poorer product, and ultimately the company getting poorer.

Where Spool and I disagree is in his conclusion that all design teams need to include a data scientist. Or, better yet, that all designers should become data scientists. In a perfect world, that would be terrific. In the less-perfect world that most of us inhabit, I feel there’s a more viable way. Simply put: all designers can and should learn to be data literate. Come to think of it, it’d be nice if all citizens learned to be data literate, but that’s a different think piece.

For now, let’s walk through what data literacy is, how to go about getting it for less effort and cost than a certificate from Trump University, and how we can all build some healthy data habits that will serve our designs for the better.

What Data Literacy Is and Isn’t

Okay, data literacy is a broad term—unlike, say, “design.” In the education field, researchers juggle the terms “quantitative literacy,” “mathematical literacy,” and “quantitative reasoning,” but parsing out fine differences is beyond the scope of this article and, probably, your patience. To keep it simple, let’s think about data literacy as healthy skepticism or even bullshit detection. It’s the kind of skepticism you might adopt when faced with statements from politicians or advertisers. If a cookie box is splashed with a “20% more tasty!” banner, your rightful reaction might be “tastier than what, exactly, and who says?” Yes. Remember that response.

Data literacy does require—sorry, phobics—some math. But it’s not so bad. As a designer, you already use math: figuring pixels, or calculating the square footage of a space, or converting ems to percent and back. The basics of what you already do should give you a good handle on concepts like percentages, probability, scale, and change over time, all of which sometimes can hide the real meaning of a statistic or data set. But if you keep asking questions and know how multiplication and division work, you’ll be 92 percent of the way there. (If you’re wondering where I got that percentage from, well—I made it up. Congratulations, you’re already on the road to data literacy.)

Neil Lutsky writes about data literacy in terms of the “construction, communication, and evaluation of arguments.” Why is this relevant to you as a designer? As Spool notes, many design decisions are increasingly driven by data. Data literacy enables you to evaluate the arguments presented by managers, clients, and even analytics packages, as well as craft your own arguments. (After all, a key part of design is being able to explain why you made specific design decisions.) If someone emails you a spreadsheet and says, “These numbers say why this design has to be 5 percent more blue,” you need to be able to check the data and evaluate whether this is a good decision or just plain bonkers.

Yes, this is part of the job.

It’s So Easy

Look, journalists can get pretty good at being data literate. Not all journalists, of course, but there’s a high correlation between the ability to question data and the quality of the journalism—and it’s not high-level or arcane learning. One Poynter Institute data course was even taught (in slightly modified form) to grade schoolers. You’re a smart cookie, so you can do this. Not to mention the fact that data courses are often self-directed, online, and free (see “Resources” listed below).

Unlike data scientists, who face complex questions and large data sets and need to master concepts like regressions and Fourier transforms, you’re probably going to deal with less complex data. If you regularly need to map out complex edge-node relationships in a huge social graph or tackle big data, then yes, get that master’s degree in the subject or consult a pro. But if you’re up against Google Analytics? You can easily learn how to ask questions and look for answers. Seriously, ask questions and look for answers.

Designers need to be better at data literacy for many of the same reasons we need to work on technical literacy, as Sarah Doody explains. We need to understand what developers can and can’t do, and we need to understand what the data can and can’t do. For example, an A/B test of two different designs can tell you one thing about one thing, but if you don’t understand how data works, you probably didn’t set up the experiment conditions in a way that leads to informative results. (Pro tip: if you want to see how a change affects click-through, don’t test two designs where multiple items differ, and don’t expect the numbers to tell you why users clicked the way they did.) Again: We need to question the data.
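Part of questioning an A/B result is asking whether the difference could be plain chance. Here’s a minimal sketch of a standard two-proportion z-test; the function name and all the click numbers are invented for illustration:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: is the difference in click-through rates
    between designs A and B bigger than chance alone would explain?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Made-up numbers: 120 clicks in 2,400 views vs. 150 clicks in 2,500 views.
z = two_proportion_z(120, 2400, 150, 2500)
# |z| below roughly 1.96 means the gap isn't significant at the usual 95% level.
print(round(z, 2), abs(z) > 1.96)
```

Even this back-of-the-envelope check can save you from redesigning around noise.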

So we’ve defined a need, researched our users, and identified and defined a feature called data literacy. What remains is prototyping. Let’s get into it, shall we?

How to Build Data Literacy by Building Habits

Teaching data literacy is an ongoing topic of academic research and debate, so I’ll leave comprehensive course-building to more capable hands than mine. But together, we can cheaply and easily outline simple habits of critical thought and mathematical practice, and this will get us to, let’s say, 89 percent data literacy. At the least, you’ll be better able to evaluate which data could make your work better, which data should be questioned more thoroughly, and how to talk to metric-happy stakeholders or bosses. (Optional homework: this week, take one metric you track or have been told to track at work, walk through the habits below, and report back.)

Habit one: Check source and context

This is the least you should do when presented with a metric as a fait accompli, whether that metric is from a single study, a politician, or an analytics package.

First, ask about the source of the data (in journalism, this is reflex—“Did the study about the health benefits of smoking come from the National Tobacco Profiteering Association?”). Knowing the source, you can then investigate the second question.

The second question concerns how the data was collected, and what that can tell you—and what it can’t. Let’s say your boss comes in with some numbers about time-on-page, saying “Some pages are more sticky than others. Let’s redesign the less sticky ones to keep customers on them longer.” Should you jump to redesign the less-sticky pages, or is there a different problem at play?

It’s simple, and not undermining, to ask how time-on-page was measured and what it means. It could mean a number of things, things that that single metric will never reveal. Things that could be real problems, real advantages, or a combination of the two. Maybe the pages with higher time-on-page numbers simply took a lot longer to load, so potential customers were sitting there as a complex script or crappy CDN was slooooowly drawing things on the not-a-customer-any-more’s screen. Or it could mean some pages had more content. Or it could mean some were designed poorly and users had to figure out what to do next.

How can you find this out? How can you communicate that it’s important to find out? A quick talk with the dev team or running a few observations with real users could lead you to discover what the real problem is and how you can redesign to improve your product.

What you find out could be the difference between good and bad design. And that comes from knowing how a metric is measured, and what it doesn’t measure. The metric itself won’t tell you.

For your third question, ask the size of the sample. See how many users were hitting that site, whether the time-on-page stat was measured for all or some of these users, and whether that’s representative of the usual load. Your design fix could go in different directions depending on the answer. Maybe the metric was from just one user! This is a thing that sometimes happens.

Fourth, think and talk about context. Does this metric depend on something else? For example, might this metric change over time? Then you have to ask over what time period the metric was measured, if that period is sufficient, and whether the time of year when measured might make a difference.

Remember when I said change over time can be a red flag? Let’s say your boss is in a panic, perusing a chart that shows sales from one product page dropping precipitously last month. Design mandates flood your inbox: “We’ve got to promote this item more! Add some eye-catching design, promote it on our home page!”

What can you do to make the right design decisions? Pick a brighter blue for a starburst graphic on that product page?

Maybe it would be more useful to look at a calendar. Could the drop relate to something seasonal that should be expected? Jack o’lantern sales do tend to drop after November 1. Was there relevant news? Apple’s sales always drop before their annual events, as people expect new products to be announced. A plethora of common-sense questions could be asked.

The other key point about data literacy and change is that being data literate can immunize against common errors when looking at change over time. This gets to numeracy.

Habit two: Be numerate

I first learned about numeracy through John Allen Paulos’ book Innumeracy: Mathematical Illiteracy and its Consequences, though the term “innumeracy” was originated by Pulitzer Prize-winning scientist Douglas Hofstadter. Innumeracy is a parallel to illiteracy; it means the inability to reason with numbers. That is, the innumerate can do math but are more likely to trip up when mathematical reasoning is critical. This often happens when dealing with probability and coincidence, with statistics, and with things like percentages, averages, and changes. It’s not just you—these can be hard to sort out! We’re presented with these metrics a lot, but usually given little time to think about them, so brushing up on that bit of math can really help put out (or avoid) a trash fire of bad design decisions.

Consider this: A founder comes in with the news that an app has doubled its market base in the two weeks it’s been available. It’s literally gone up 100 percent in that time. That’s pretty awesome, right? Time to break out the bubbly, right? But what if you asked a few questions and found that this really meant the founder was the first user, and then eventually her mom got onto it? That is, literally, a 100 percent increase in the user base.

Of course that’s obvious and simple. You see right off why this startup probably shouldn’t make the capital outlay to acquire a bottle or two juuuust yet. But exactly this kind of error gets overlooked easily and often when the math gets a bit more complex.
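The founder story boils down to one piece of arithmetic: a percentage change says nothing about the size of the base it’s computed from. A tiny sketch, with a hypothetical helper:

```python
def percent_change(before, after):
    """Percent change from `before` to `after` (illustrative helper)."""
    return (after - before) / before * 100

# The founder's "100% growth": the user base went from 1 person to 2.
print(percent_change(1, 2))
# The identical headline number from a base that actually matters.
print(percent_change(50000, 100000))
```

Both calls print 100.0; the percentage alone can’t tell you which story you’re in, which is why you always ask for the underlying numbers.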

Any time you see a percentage, such as “23% more” or “we lost 17%,” don’t act until you’ve put on your math hat. You don’t even need to assume malice; this stuff simply gets confusing fast, and it’s part of your job not to misread the data and then make design decisions based on an erroneous understanding.

Here’s an example from Nicolas Kayser-Bril, who looks into the headline, “Risk of Multiple Sclerosis Doubles When Working at Night”:

“Take 1,000 Germans. A single one will develop MS over his lifetime. Now, if every one of these 1,000 Germans worked night shifts, the number of MS sufferers would jump to two. The additional risk of developing MS when working in shifts is one in 1,000, not 100%. Surely this information is more useful when pondering whether to take the job.”

This is a known issue in science journalism that isn’t discussed enough, and often leads to misleading headlines. Whenever there’s a number suggesting something that affects people, or a number suggesting change, look not just at the percentage but at what this would mean in the real world; do the math and see if the result matches the headline’s intimation. Also ask how the percentage was calculated. How was the sausage made? Lynn Arthur Steen explains how percentages presented to you may not just be the difference of two numbers divided by a number. Base lesson: always learn what your analytics application measures and how it calculates things. Four out of five dentists: that’s, what, 80 percent true?
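Kayser-Bril’s point translates directly into arithmetic: convert the relative risk in the headline back into expected extra cases. A minimal sketch (the helper name is mine, not from any library):

```python
def extra_cases(baseline_rate, relative_risk, population=1000):
    """Turn a relative-risk headline into expected additional cases
    in a population of the given size."""
    return baseline_rate * population * (relative_risk - 1)

# The MS example: baseline risk 1 in 1,000; "risk doubles" means RR = 2.
extra = extra_cases(1 / 1000, 2)
print(extra)  # one additional case per 1,000 night workers, not "100% of people"
```

The same two lines of math work for any “risk up X%” headline: multiply it out and see whether the absolute change still sounds dramatic.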

Averages are another potentially deceptive metric that simple math can help with; sometimes an average is barely relevant, if at all. “The average length of a book purchased on Amazon is 234.23 pages” may not actually tell you anything. Sometimes you need to look into what’s being averaged. Given the example “One in every 15 Europeans is illiterate,” Kayser-Bril points out that maybe close to one in 15 Europeans is under the age of seven. It’s good advice to learn the terms “mode,” “median,” and “standard deviation.” (It doesn’t hurt (much), and can make you a more interesting conversationalist at dinner parties!)
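Python’s standard library makes comparing these measures cheap. The page counts below are invented, but they show the classic failure mode: one outlier drags the mean while the median and mode stay put.

```python
import statistics

# Hypothetical page counts: mostly similar books plus one doorstopper.
pages = [120, 130, 140, 150, 150, 160, 170, 1200]

print(statistics.mean(pages))    # pulled way up by the single 1,200-page book
print(statistics.median(pages))  # the middle of the pack: a better "typical" value
print(statistics.mode(pages))    # the most common length
print(statistics.stdev(pages))   # large spread is itself a warning sign
```

When the mean and median disagree this badly, “the average is 277.5 pages” describes almost none of the actual books.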

Habit three: Check your biases

I know, that sounds horrible. But in this context, we’re talking about cognitive biases, which everyone has (this is why I encourage designers to study psychology, cognition studies, and sociology as much as they can). Though we have biases, it’s how aware we are of these issues and how we deal with them that counts.

It’s out of scope to list and describe them all (just thinking I know them all is probably an example of Dunning-Kruger). We’ll focus on two that are most immediately relevant when you’re handed supposedly-objective metrics and told to design to them. At least, these are two that I most often see, but that may be selection bias.

Selection bias

Any metric or statistical analysis is only as good as (in part) what you choose to measure. Selection bias is when your choice of what to measure isn’t really random or representative. This can come from a conscious attempt to skew the result, from carelessly overlooking context, or due to some hidden process.

One example might be if you’re trying to determine the average height of the adult male in the United States and find it to be 6'4"—oops, we only collected the heights of basketball players. Online opinion polls are basically embodied examples of selection bias, as the readers of a partisan site are there because they already share the site operator’s opinion. Or you may be given a survey that shows 95 percent of users of your startup’s app say they love it, but when you dig in to the numbers, the people surveyed were all grandmothers of the startup team employees (“Oh, you made this, dear? I love it!”). This holds in usability testing, too: if you only select, say, high-level programmers, you may be convinced that a “to install this app, recompile your OS kernel” step is a totally usable feature. Or end up with Pied Piper’s UI.
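The basketball example can be simulated in a few lines. The heights are synthetic, but the mechanism is exactly selection bias: the sample is filtered by the very thing being measured.

```python
import random

random.seed(42)  # make the simulation repeatable

# Hypothetical adult male heights in inches, averaging about 5'9".
population = [random.gauss(69, 3) for _ in range(100_000)]

# Biased sample: only people 6'4" (76 inches) and taller "make the team".
basketball_players = [h for h in population if h >= 76]

pop_mean = sum(population) / len(population)
biased_mean = sum(basketball_players) / len(basketball_players)
print(round(pop_mean, 1), round(biased_mean, 1))
# The biased sample overestimates the population average by several inches.
```

No amount of arithmetic on the biased sample recovers the true average; the fix has to happen at collection time, which is why you ask how the sample was chosen.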

Now, these all seem like “sure, obvs” examples. But selection bias can show up in much more subtle forms, and in things like clinical studies. Dr. Madhukar Pai’s lecture slides give some great examples—especially check out Slide 47, which shows how telephone surveys have almost built-in selection biases.

So, what’s a designer to do? As you can see from Dr. Pai’s lecture slides, you can quickly get into some pretty “mathy” work, but the main point is that when you’re faced with a metric, after you’ve checked out the context, look at the sample. You can think about the claim on the cookie box in this way. It’s “20% more tasty”?  What was the sample, 19 servings of chopped liver and one cookie?

Confirmation bias

Storytelling is a powerful tool. Again, it’s how our brains are wired. But as with all tools, it can be used for good or for evil, and can be intentional or accidental. As designers, we’re told we have to be storytellers: how do people act, how do they meet-cute our product, how do they feel, what’s the character arc? This is how we build our knowledge of the world, by building stories about it. But, as Alberto Cairo explains in The Truthful Art, this is closely linked to confirmation bias, where we unconsciously (or consciously) search for, select, shape, remember, interpret, or otherwise torture basic information so that it matches what we already think we know, the stories we have. We want to believe.

Confirmation bias can drive selection bias, certainly. If you only test your design with users who already know how your product works (say, power users, stakeholders, and the people who built the product), you will get distorted numbers and a distorted sense of how usable your product is. Don’t laugh: I know of a very large and popular internet company that only does user research with power users and stakeholders.

But even if the discovery process is clean, confirmation bias can screw up the interpretation. As Cairo writes, “Even if we are presented with information that renders our beliefs worthless, we’ll try to avoid looking at it, or we’ll twist it in a way that confirms them. We humans try to reduce dissonance no matter what.” What could this mean for your design practice? What could this mean for your designs when stakeholders want you to design to specific data?

Reading (Numbers) is Fundamental

So, yes. If you can work with a data scientist in your design team, definitely do so. Try to work with her and learn alongside her. But if you don’t have this luxury, or the luxury of studying statistics in depth, think of data literacy as a vital part of your design practice. Mike Monteiro is passionate that designers need to know math, and he’s of course correct, but we don’t need to know math just to calculate visual design. We need to know math enough to know how to question and analyze any metric we’re given.

This is something you can practice in everyday life, especially in an election season. When you see someone citing a study, or quoting a number, ask: What was measured? How was it measured? What was the context? What wasn’t measured? Does that work out in real life? Keep looking up terms like selection bias, confirmation bias, Dunning-Kruger, sample size effect, until you remember them and their application. That is how you build habits, and how you’ll build your data literacy muscles.

I’ve long loved the Richard Feynman quote (that Cairo cites in The Truthful Art): “The first principle is that you must not fool yourself — and you are the easiest person to fool.” Consider always that you might be fooling yourself by blindly accepting any metric handed to you. And remember, the second-easiest person to fool is the person who likely handed you the metric, and is motivated to believe a particular outcome. Data literacy requires honesty, mastering numeracy, and stepping through the habits we’ve discussed. Practice every day with news from politics: does a statistic in the news give you that “of course, that’s how things are” feeling? Take a deep breath, and dig in; do you agree with a policy or action because it’s your political party proposing it? What’s the context, the sample size, the bias?

It’s tough to query yourself this way. But that’s the job. It’s tougher to query someone else this way, whether it’s your boss or your significant other. I can’t help you figure out the politics and social minefield of those. But do try. The quality of your work (and life) may depend on it.


News stories from Monday 19 September, 2016

Favicon for A List Apart: The Full Feed 05:01 This week's sponsor: OPTIMAL WORKSHOP » Post from A List Apart: The Full Feed Visit off-site link

OPTIMAL WORKSHOP — test your website’s performance with fast and powerful UX research tools.

News stories from Tuesday 13 September, 2016

Favicon for A List Apart: The Full Feed 15:05 Designing Interface Animation: an Interview with Val Head » Post from A List Apart: The Full Feed Visit off-site link

A note from the editors: To mark the publication of Designing Interface Animation, ALA managing editor Mica McPheeters and editor Caren Litherland reached out to Val Head via Google Hangouts and email for a freewheeling conversation about web animation. The following interview has been edited for clarity and brevity.

Animation is not new, of course, but its journey on the web has been rocky. For years, technological limitations compelled us to take sides: Should we design rich, captivating sites in Flash? Or should we build static, standards-compliant sites with HTML and CSS (and maybe a little JavaScript)?

Author Val Head describes herself as a “weirdo” who never wanted to choose between those two extremes—and, thanks to the tools at our disposal today, we no longer have to. Without compromising standards, we can now create complex animations natively in the browser: from subtle transitions using CSS to immersive, 3-D worlds with WebGL. Animation today is not just on the web, but of the web. And that, says Val, is a very big deal.

Caren Litherland: Are people intimidated by animation?

Val Head: There are definitely some web folks out there who are intimidated by the idea of using web animation in their work. For some, it’s such a new thing—very few of us have a formal background in motion design or animation—and it can be tough to know where to start or how to use it. I’ve noticed there’s some hesitation to embrace web animation due to the “skip intro” era of Flash sites. There seems to be a fear of recreating past mistakes. But it doesn’t have to be that way at all.

We’re in a new era of web animation right now. The fact that we can create animation with the same technologies we’ve always used to make websites—things like CSS and JavaScript—completely changes the landscape. Now that we can make animation that is properly “of the web” (to borrow a phrase from Jeremy Keith), not just tacked on top with a plug-in, we get to define what the new definition of web animation is with our work.

Right now, on the web, we can create beautiful, purposeful animation that is also accessible, progressively enhanced, and performant. No other medium can do that. Which is really exciting!

CL: I’ve always felt that there was something kind of ahistorical and ahistoricizing about the early web. As the web has matured, it seems to have taken a greater interest in the history and traditions that inform it. Web typography is a good example of this increased self-awareness. Can the same be said for animation?

VH: I think so! In the early days of the web, designers often looked down on it as a less capable medium. Before web type was a thing, a number of my designer friends would say that they could never design for the web because it wasn’t expressive enough as a medium. That the web couldn’t really do design. Then the web matured, web type came along, and that drastically changed how we designed for the web. Web animation is doing much the same thing. It’s another way we have now to be expressive with our design choices, to tell stories, to affect the experience in meaningful ways, and to make our sites unique.

With type, we turned to the long-standing craft of print typography for some direction and ideas, but the more we work with type on the web, the more web typography becomes its own thing. The same is true of web animation. We can look to things like the 12 classic principles of animation for reference, but we’re still defining exactly what web animation will be and the tools and technologies we use for it. Web animation adds another dimension to how we can design on the web and another avenue for reflecting on what the rich histories of design, animation, and film can teach us.

Mica McPheeters: Do you find that animation often gets tacked on at the end of projects? Why is that? Shouldn’t it be incorporated from the outset?

VH: Yes, it often does get left to the end of projects and almost treated as just the icing on top. That’s a big part of what can make animation seem like it’s too hard or ineffective. If you leave any thought of animation until the very end of a project, it’s pretty much doomed to fail or just be meaningless decoration.

Web animation can be so much more than just decoration, but only if we make it part of our design process. It can’t be a meaningful addition to the user experience if you don’t include it in the early conversations that define that experience.

Good web animation takes a whole team. You need input from all disciplines touching the design to make it work well. It can’t just be designed in a vacuum and tossed over the fence. That approach fails spectacularly well when it comes to animation.

Communicating animation ideas and making animation truly part of the process can be the biggest hurdle for teams to embrace animation. Change is hard! That’s why I dedicated two entire chapters of the book to how to get animation done in the real world. I focus on how to communicate animation ideas to teammates and stakeholders, as well as how to prototype those ideas efficiently so you can get to solutions without wasting time. I also cover how to represent animation in your design systems or documentation to empower everyone (no matter what their background is) to make good motion design decisions.

CL: Can you say more about the importance of a motion audit? Can it be carried out in tandem with a content audit? And how do content and animation tie in with each other?

VH: I find motion audits to be incredibly useful before creating a motion style guide or before embarking on new design efforts. It’s so helpful to know where animation is already being used, and to take an objective look at how effective it is both from a UX angle and a branding angle. If you have a team of any significant size, chances are you’ve probably got a lot of redundant, and maybe even conflicting, styles and uses of animation in your site. Motion audits give you a chance to see what you’re already doing, identify things that are working, as well as things that might be broken or just need a little work. They’re also a great way to identify places where animation could provide value but isn’t being used yet.

Looking at all your animation efforts at a high level gives you a chance to consolidate the design decisions behind them, and establish a cohesive approach to animation that will help tie the experience together across mediums and viewport sizes. You really need that high-level view of animation when creating a motion style guide or animation guidelines.

You could definitely collect the data for a motion audit in tandem with a content audit. You’ll likely be looking in all the same places, just collecting up more data as you go through your whole site.

There is a strong tie between content and animation. I’ve been finding this more and more as I work with my consulting clients. Both can be focused around having a strong message and communicating meaningfully. When you have a clear vision of what you want to say, you can say it with the motion you use just like you can say it with the words you choose.

Voice and tone documents can be a great place to start for deciding how your brand expresses itself in motion. I’ve leaned on these more than once in my consulting work. Those same words you use to describe how you’d like your content to feel can be a basis of how you aim to make the animation feel as well. When all your design choices—everything from content, color, type, animation—come from the same place, they create a powerful and cohesive message.

CL: One thing in your book that I found fascinating was your statement that animation “doesn’t have to include large movements or even include motion at all.” Can you talk more about that? And is there any sort of relationship between animation and so called calm technology?

VH: It’s true, animation doesn’t always mean movement. Motion and animation are really two different things, even though we tend to use the words interchangeably. Animation is a change in some property over time, and that property doesn’t have to be a change in position. It can be a change in opacity, or color, or blur. Those kinds of non-movement animation convey a different feel and message than animation with a lot of motion.

If you stick to animating only non-movement properties like opacity, color, and blur, your interface will likely have a more calm and stable feel than if it included a lot of movement. So if your goal is to design something that feels calm, animation can definitely be a part of how you convey that feeling.

Any time you use animation, it says something, there’s no getting around that. When you’re intentional with what you want it to say and how it fits in with the rest of your design effort, you can create animation that feels like it’s so much a part of the design that it’s almost invisible. That’s a magical place to be for design.

MM: Do we also need to be mindful of the potential of animation to cause harm?

VH: We do. Animation can help make interfaces more accessible by reducing cognitive load, helping to focus attention in the right place, or in other ways. But it also has potential to cause harm, depending on how you use it. Being aware of how animation can potentially harm or help users leads us to make better decisions when designing it. I included a whole chapter in the book on animating responsibly because it’s an important consideration. I also wrote about how animation can affect people with vestibular disorders a little while back on A List Apart.

MM: Who today, in your opinion, is doing animation right/well/interestingly?

VH: I’m always on the lookout for great uses of animation on the web—in fact, I highlight noteworthy uses of web animation every week in the UI Animation Newsletter.

Stripe Checkout has been one of my favorites for how well it melds UI animation seamlessly into the design. It really achieves that invisible animation that is so well integrated that you don’t necessarily notice it at first. The smooth 3D, microinteraction animation, and sound design on the Sirin Labs product page are also really well done, but take a completely different approach to UI animation than Checkout.

Publications have been using animation in wonderful ways for dataviz and storytelling lately, too. The Wall Street Journal’s Hamilton algorithm piece was a recent data-based favorite of mine, and the New York Times did some wonderful storytelling work with animation around the Olympics with their piece on Simone Biles. Also, I really love seeing editorial animation, like what the Verge did for a story about Skype’s sound design. The animations they used really brought the story and the sounds they were discussing to life.

I really love seeing web animation used in such a variety of ways. It makes me extra excited for the future of web animation!

MM: Any parting thoughts, Val?

VH: My best advice for folks who want to use more animation in their work is to start small and don’t be afraid to take risks as you get more comfortable working with animation. The more you animate, the better you’ll get at developing a sense for how to design it well. I wrote Designing Interface Animation to give web folks a solid foundation on animation to build from and I’m really excited to see how web animation will evolve in the near future.

For even more web animation tips and resources, join me and a great bunch of designers and developers on the UI Animation Newsletter for a weekly dose of animation knowledge.

Favicon for A List Apart: The Full Feed 15:05 Designing Interface Animation » Post from A List Apart: The Full Feed Visit off-site link

A note from the editors: We’re pleased to share Chapter 9 of Val Head’s new book, Designing Interface Animation: Meaningful Motion for User Experience, available now from Rosenfeld. For 20% off all books purchased through Rosenfeld, use the discount code ALADIA.

Each animation in an interface tells a micro story, and as a user encounters more and more animations throughout your site or product, these micro stories add up to reveal the personality and story of the brand or product behind them. The animations create an impression; they give your brand a certain personality. It’s up to us as designers to take control of the combined story that animations are telling about the brand we’re working on. Your animations will be much more effective if you intentionally design the additional messages they’re sending.

Brand animation design guidelines aren’t something entirely new, of course. Brands have been expressing themselves in motion in commercials, TV bumpers, video titles, and similar places for years, and they’ve had guidelines for those mediums. What’s new is the idea of needing animation design guidelines for the web or interfaces. Even if your brand will never be in a traditional commercial or video, having a website is enough of a reason to need a motion style guide these days.

How Your Brand Moves Tells Its Story

Deciding what you use animation for, and how you implement it, for a particular project defines how you express your brand or tell your brand’s story with animation. Often, the decisions of which properties to animate or what easing to use on which elements is done at the component or page level without considering the bigger picture. Assembling a global set of rules about motion and animation for your entire project will help you make more cohesive animation decisions moving forward. These choices lead to more consistent design decisions surrounding animation and make your design stronger overall. It requires you to go back and forth between the big picture of the overall project and the more detailed components, but your entire design will benefit from looking at the project from both perspectives as you work.

There are two approaches to begin defining how your brand expresses itself in motion. The first is to go from the bottom up: start by evaluating what you already have and build from there. The second is to go from the top down: first, determine what it is your brand should be saying about itself on a high level, and then determine how individual animations will express that concept.

The first approach works best for existing projects that already use animation. There could be hidden gems of communication to build upon in the animations you’ve already designed—ones that will inform the bigger picture you’re working to define. The second approach is generally your only option when starting a brand new project, as there won’t be any existing animation to start from. Whichever approach you choose (or even if you use both), you’ll arrive at the same end result, a common set of guidelines for putting your brand in motion, so they are equally good places to begin.

Defining Your Brand in Motion from the Bottom Up

Before you start documenting for the future, you need to get a good picture of what you’re currently using animation for. It’s hard to move forward before knowing where you currently stand. (That is, unless you’re planning to throw it all out and start over.) For existing projects that already use animation, you can start with a motion audit to find all the instances and ways you’re currently using animation. Collecting these in one place will identify the common threads and even help you eliminate unnecessary duplicated or overly similar animations. A motion audit will focus your animation efforts and the design reasoning behind them.

A motion audit gathers up all the interface animations you’re currently using to identify patterns and evaluate their effectiveness as a group.

The Motion Audit

To collect all your animations in one place, you’ll need some screen recording software that will output video. QuickTime is a handy built-in option for Macs, but a more specialized tool like ScreenFlow can save you some time with its more robust cropping and editing tools. Use whichever tool is easiest and fastest for you. The exact software used is less important than the end collection and what it will tell you.

How to do a motion audit (Fig. 9.1):

  • Collect screen recordings of every animation currently on your site. (Be sure to get a recording of all the different states for interactive animations.)
  • Crop and edit the video clips as needed to focus in on the animations.
  • Assemble all the video clips into one document and group them in categories according to content type (for example, one slide for all the button animations, one slide for navigation animations, etc.).
  • Review the document with your team to evaluate your brand’s existing animation style.

When you have all of those in one place, you can look for global trends, find potential redundancies, and most importantly, evaluate if the way you’re currently using animation accurately reflects the personality of your brand or product.

Fig 9.1: A screenshot of a page/slide of a motion audit document created for Shopify.

Software for Motion Audits

Recording Animations

For the screen recording part of motion audits, I like to use ScreenFlow. It’s Mac only, but Camtasia offers similar functionality for both Windows and Mac. The QuickTime player that comes installed with OS X is also an option. It’s especially good for recording animations from an iPhone. Just plug it into the computer and select it as a camera in QuickTime.

The Motion Audit Document

My preferred software for the end document is Keynote. (PowerPoint would do just fine here as well.) I prefer it because it makes it easy to set each animation’s video clip to play when clicked, and because it lends itself well to being projected and discussed as a group.

When Keynote isn’t an option, creating a web-based motion audit is a good alternative. It’s easy to share, and the video clips can be played directly from within the web pages. I find that having the videos playable from the document is really useful. Often, you’ll discover animations that some of your teammates weren’t aware of or maybe haven’t encountered in a while.

The key is having an end result that can be shared and discussed easily. So if there’s another format that your team has a strong preference for, you can make that work, too.

Evaluate Your Existing Animation’s Design

The first question you’ll want to investigate is: Does the personality expressed by the existing animations fit your brand? Look at the qualities of the animations you’re using to answer this one. What kind of personality traits do the easing and timing used convey? If it’s snappy and bouncy, does that match your brand’s personality and energy? If it’s all stable ease-in-outs, is your brand personality also stable and decided? If you find the mood of the animations doesn’t fit your brand’s personality, small changes to the easing and timing could make a huge difference to bring the animation in line with your brand.

If the personality conveyed by your animations is all over the place and not cohesive at all, starting over and taking the top-down approach described next might be the best step. It’s often easier to work from the top down with a clear vision than to try to fix a huge group of existing animations that are all a little bit off.

If the personality conveyed by your animations does fit your brand perfectly, great! Take a detailed look at what all these animations have in common. List the easing, timing, and other design choices they have in common. This will be the basis of your brand’s animation style guide.

Evaluate Your Existing Animation’s Purpose

Next, look at the purpose of the animations you’ve collected. How are they aiding your users in their tasks? Are they bringing something positive to the experience? Their purpose can be anything from something tactical like providing feedback to something more branding related like expressing your brand’s personality. Challenge yourself to articulate a purpose for each one to help you evaluate how useful they are. If there’s no definable purpose for an animation to be there, consider eliminating or redesigning it to have a solid purpose and goal. (Good UX purposes for animation are covered in Chapters 4 through 8.)

It’s also helpful to group the animations in your motion audit by their purpose—gathering up all the animations that are there to give feedback into one section, for example. This can reveal some helpful insights, similarities, and patterns among animations that share a similar purpose.

Define Your Brand in Motion from the Top Down

If your brand doesn’t currently use any animation or if you’re starting a new project, you can develop your brand’s animation design guidelines from the top down instead. That is, start from your brand’s design philosophy or the traits your brand aims to embody and decide how to translate those into animation. It’s starting from a different place, but it gets you to the same end goal of having specific and defined ways that your brand will exist in motion.

The Words You Use to Describe Your Brand

Start with the adjectives you use to describe your brand or product: the description of the personality or feelings it aims to create. Is your brand energetic? Friendly? Strong? Playful? Stable? All of this descriptive language can be translated into motion, just as it can for other design tools like typography and color. Animation speaks in similar ways.

A great place to look for these descriptive words is in your copywriting guidelines or voice and tone guidelines. Many of the same words used to describe how to write for your brand can be directly applied to motion as well. Brand style guides or brand books can also be a good source for descriptive language.

If none of the above exists for your brand, you’ll need to do a little work to define your brand’s voice. “5 Easy Steps to Define and Use Your Brand Voice” by Erika Heald could be helpful for a quick start. Or to get even deeper into defining your brand, I recommend reading Designing Brand Identity by Alina Wheeler.


If your brand is energetic, friendly, or bold, animation that relies on a lot of overshoots or follow-through and anticipation can help convey a sense of energy. Softly overshooting the target position can make animations feel both friendly and energetic. Drastic overshoots and quick speed changes read as bold and outgoing. Taken even further, adding a bit of bounce to overshoots or follow-through can convey a sense of even more energy in a movement—so much energy that an object has to bounce off its destination once or twice before it settles (Fig. 9.2).

Placement of square shapes in relation to a target finish line
Fig 9.2: Follow-through and overshoots in motion come across as energetic. The more exaggerated the movement, the more energy is implied. See it in action in this video.
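In code, this kind of overshoot is often expressed as an easing function. Here’s a minimal JavaScript sketch of a commonly used overshoot curve (generally known as easeOutBack); the function name and the constant `c` are illustrative conventions, not anything prescribed by a particular brand’s guidelines:

```javascript
// Overshoot ("back") easing: maps progress t in [0, 1] to an eased value
// that shoots past 1 before settling on the target. The constant c controls
// how far past the target the motion travels; larger values read as more
// energetic and bold.
function easeOutBack(t, c = 1.70158) {
  const u = t - 1;
  return 1 + (c + 1) * u * u * u + c * u * u;
}

// Late in the animation, the value exceeds 1 (the overshoot)...
console.log(easeOutBack(0.7) > 1); // true
// ...but the motion still starts at 0 and settles exactly on target at t = 1.
console.log(easeOutBack(0), easeOutBack(1));
```

In CSS, a similar feel comes from a `cubic-bezier()` timing function whose output exceeds 1 partway through the animation.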

Quick, soft movements—like overshoots—tend to read as energetic in a friendly way. On the other hand, quick movement with sharp changes in direction can suggest impatience, curtness, or urgency. That kind of movement is difficult to show in print, but you can see a video version here to see what I mean.

Playful and Friendly

Playful brands can take advantage of squash and stretch to convey that playfulness (Fig. 9.3). Squash and stretch also makes movements read as energetic. Beware, though: it can make motion look childish or sloppy if it’s applied with too heavy a hand. Done well, on the other hand, it can really set you apart.

Bouncy easing can also evoke friendliness or playfulness. Wobbly bounces can seem playful and elastic, while springy bounces can seem friendly.

Rounded and elliptical shapes suggesting changes in dimension
Fig 9.3: Squash and stretch tends to create a sense of playfulness, and a little goes a long way. See it in action in this video.

Decisive and Sure

Ease-in-outs—that is, any easing that gradually speeds up into the action, is fastest in the middle, and then slows at the end of the action—are balanced and stable. They produce animation that accelerates into the action and then slows down to hit its end target exactly, with precision and decisiveness. Sticking with variations of ease-in-outs can communicate a sense of stability and balance for your brand. A variation of ease-in-out easing applied to a simple horizontal movement would look like the video example in Fig. 9.4.

A graph illustrating the curve of an ease-in-out progression
Fig 9.4: Motion with ease-in-out easing like the graph above, and similar easing curve variations, tends to read as calm and decisive because elements move fastest in the middle of the action and decelerate into their final position. You can see the resulting motion in this video.
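As a rough sketch of the same idea in code, here is the standard cubic ease-in-out (a generic curve, not one taken from any particular brand’s guidelines): slow start, fastest in the middle, decelerating into a precise stop.

```javascript
// Cubic ease-in-out: t is progress from 0 to 1; the return value is the
// eased progress. The element accelerates into the action and decelerates
// into its final position, which reads as calm and decisive.
function easeInOutCubic(t) {
  return t < 0.5
    ? 4 * t * t * t                    // first half: accelerate into the action
    : 1 - Math.pow(-2 * t + 2, 3) / 2; // second half: decelerate to a stop
}

console.log(easeInOutCubic(0.1)); // tiny: barely moving at the start
console.log(easeInOutCubic(0.5)); // 0.5: halfway there at half time
console.log(easeInOutCubic(0.9)); // near 1: easing into the stop
```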


The amount of movement you employ can also say something about your brand. Animation doesn’t necessarily have to include large movements, or even motion at all. Smaller movements read as calmer and more subtle than larger, more drastic ones, and using them can contribute to a stable, calm brand personality.

You can still imply the same kinds of movements, just in a less drastic way. For example, when you aim to create small movements, you might have a modal animate into place from 50% of the way down the screen instead of 100% off-screen past the bottom of the visible area (Fig. 9.5).

Visual comparisons of location adjustments as a square moves in either small or large increments
Fig 9.5: Both squares in the frames above arrive at the same destination, but the first one gets there by moving a shorter distance. This smaller movement often reads as calmer and more subdued than larger movements. See both in action in video: small movements vs. large movements.


Animating properties like opacity and blur instead of creating movement is another way of conveying a sense of calm and stability (Fig. 9.6). (Animating these properties will change the appearance of the object—making it more transparent or blurred, for example—but because the position of the element isn’t being animated, no movement will occur.) It can also convey a sense of softness or even feel dreamy, depending on how softly you use the opacity and blurs. Sticking to these nonmovement properties can still say so much about your brand in small spaces where motion may not be possible or desirable.

A square shape progressively becoming transparent
Fig 9.6: Animating non-motion properties, like blur and opacity, can read as more stable and subtle. See it in action in this video.

These adjectives are just a starting point for conveying a specific type of energy in the design of your animation. Like most other design tools, it’s more of an art than a science. Experiment with the guidelines to find what expresses your brand best.

Referencing Motion from Real Life

Looking to the physical world can be a great way to find your brand’s style for motion: choose a physical object or creature to emulate with your on-screen animation. Technically, you could base your motion on anything at all, but this works best when the thing you choose is relevant—either literally or metaphorically—to your product or brand.

IBM has done a wonderful job of this with its Machines in Motion design guidelines. IBM used to make those giant, room-sized computers, typewriters, and other hardware before becoming the IBM they are today. They decided to reach back to their rich history as a company when defining how they would express their brand in motion (Fig. 9.7).

Fig 9.7: IBM’s Machines in Motion design guidelines pair movements from the physical products IBM used to make with matching motion for their animation interactions. See it in action.

They used these past machines to inform their motion design efforts on two levels. On a high level, they chose four machine traits that all their interface motions should embody: agility, efficiency, precision, and order. From there, they got more specific and paired motion from the actual machines with screen-based equivalent animations. On-screen menu drawers are animated to have the same motion as the carriage return motion of a 1970s IBM typewriter. Loading spinners are animated to have the same acceleration patterns as reel-to-reel tapes of an old mainframe’s tape drives.

These one-to-one translations of motion from the historical real-world objects to the screen-based motion inform all of their motion design decisions. If you have physical objects, either historical or not, that are significant to your brand or product, you could develop your own guidelines using this same approach.

A more metaphorical approach to emulating real-world objects can work well, too. Finding a particular dance piece or animal movement that speaks to the same personality values as your brand can be a great place to start. Music can be a source of motion inspiration, even if you’re not including any sound in your interface. Choosing a specific rhythm or phrasing from music to apply to your animation’s movement brings a whole new dimension to the idea of UX choreography. There are so many possibilities out there. Find something that feels inspiring for your brand and explore how it can establish a cohesive thread through all your animations.

Staying on Point

  • Animation design guidelines or values can help keep your brand’s motion efforts consistent and cohesive.
  • Collecting and evaluating existing animations as a group with a motion audit can give you valuable insight into how you’re currently using animation.
  • The same words you use to describe your brand and its values can be translated into motion to define your brand’s motion style.
  • Looking to real-world objects or animals to emulate can also help define what your brand looks like in motion.

News stories from Monday 05 September, 2016

Favicon for Kopozky 14:47 The Book – 10 Years Jubilee » Post from Kopozky Visit off-site link

Photo: “Kopozky – The Book”

“Kopozky – The Book”: now available at

News stories from Sunday 07 August, 2016

Favicon for Kopozky 17:59 A Paragon » Post from Kopozky Visit off-site link

Comic strip: “A Paragon”

Starring: Mr Kopozky and The Copywriter

News stories from Friday 10 June, 2016

Favicon for Kopozky 18:06 Not Sunny! » Post from Kopozky Visit off-site link

Comic strip: “Not Sunny!”

Starring: The Admin

News stories from Tuesday 31 May, 2016

Favicon for Joel on Software 01:14 Introducing HyperDev » Post from Joel on Software Visit off-site link

One more thing…

It’s been a while since we launched a whole new product at Fog Creek Software (the last one was Trello, and that’s doing pretty well). Today we’re announcing the public beta of HyperDev, a developer playground for building full-stack web apps fast.

HyperDev is going to be the fastest way to bang out code and get it running on the internet. We want to eliminate 100% of the complicated administrative details around getting code up and running on a website. The best way to explain that is with a little tour.

Step one. You go to

Boom. Your new website is already running. You have your own private virtual machine (well, really it’s a container but you don’t have to care about that or know what that means) running on the internet at its own, custom URL which you can already give people and they can already go to it and see the simple code we started you out with.

All that happened just because you went to

Notice what you DIDN’T do.

  • You didn’t make an account.
  • You didn’t use Git. Or any version control, really.
  • You didn’t deal with name servers.
  • You didn’t sign up with a hosting provider.
  • You didn’t provision a server.
  • You didn’t install an operating system or a LAMP stack or Node or anything.
  • You didn’t configure the server.
  • You didn’t figure out how to integrate and deploy your code.

You just went to Try it now!

What do you see in your browser?

Well, you’re seeing a basic IDE. There’s a little button that says SHOW and when you click on that, another browser window opens up showing you your website as it appears to the world. Notice that we invented a unique name for you.

Over there in the IDE, in the bottom left, you see some client side files. One of them is called index.html. You know what to do, right? Click on index.html and make a couple of changes to the text.

Now here’s something that is already a little bit magic… As you type changes into the IDE, without saving, those changes are deploying to your new web server and we’re refreshing the web browser for you, so those changes are appearing almost instantly, both in your browser and for anyone else on the internet visiting your URL.

Again, notice what you DIDN’T do:

  • You didn’t hit a “save” button.
  • You didn’t commit to Git.
  • You didn’t push.
  • You didn’t run a deployment script.
  • You didn’t restart the web server.
  • You didn’t refresh the page on your web browser.

You just typed some changes and BOOM they appeared.

OK, so far so good. That’s a little bit like jsFiddle or Stack Overflow snippets, right? NBD.

But let’s look around the IDE some more. In the top left, you see some server side files. These are actual code that actually runs on the actual (virtual) server that we’re running for you. It’s running node. If you go into the server.js file you see a bunch of JavaScript. Now change something there, and watch your window over on the right.

Magic again… the changes you are making to the server-side JavaScript code are already deployed and they’re already showing up live in the web browser you’re pointing at your URL.

Literally every change you make is instantly saved, uploaded to the server, the server is restarted with the new code, and your browser is refreshed, all within half a second. So now your server-side code changes are instantly deployed, and once again, notice that you didn’t:

  • Save
  • Do Git incantations
  • Deploy
  • Buy and configure a continuous integration solution
  • Restart anything
  • Send any SIGHUPs

You just changed the code and it was already reflected on the live server.

Now you’re starting to get the idea of HyperDev. It’s just a SUPER FAST way to get running code up on the internet without dealing with any administrative headaches that are not related to your code.

Ok, now I think I know the next question you’re going to ask me.

“Wait a minute,” you’re going to ask. “If I’m not using Git, is this a single-developer solution?”

No. There’s an Invite button in the top left. You can use that to get a link that you give your friends. When they go to that link, they’ll be editing, live, with you, in the same documents. It’s a magical kind of team programming where everything shows up instantly, like Trello, or Google Docs. It is a magical thing to collaborate with a team of two or three or four people banging away on different parts of the code at the same time without a source control system. It’s remarkably productive; you can dive in and help each other or you can each work on different parts of the code.

“This doesn’t make sense. How is the code not permanently broken? You can’t just sync all our changes continuously!”

You’d be surprised just how well it does work, for most small teams and most simple programming projects. Listen, this is not the future of all software development. Professional software development teams will continue to use professional, robust tools like Git and that’s great. But it’s surprising how just having continuous merging and reliable Undo solves the “version control” problem for all kinds of simple coding problems. And it really does create an insanely addictive form of collaboration that supercharges your team productivity.

“What if I literally type ‘DELETE * FROM USERS’ on my way to typing ‘WHERE id=9283’, do I lose all my user data?”

Erm… yes. Don’t do that. This doesn’t come up that often, to be honest, and we’re going to add the world’s simplest “branch” feature so that optionally you can have a “dev” and “live” branch, but for now, yeah, you’d be surprised at how well this works in practice even though in theory it sounds terrifying.

“Does it have to be JavaScript?”

Right now the server we gave you is running Node so today it has to be JavaScript. We’ll add other languages soon.

“What can I do with my server?”

Anything you can do in Node. You can add any package you want just by editing package.json. So literally any working JavaScript you want to cut and paste from Stack Overflow is going to work fine.

“Is my server always up?”

If you don’t use it for a while, we’ll put your server to sleep, but it will never take more than a few seconds to restart. But yes, for all intents and purposes, you can treat it like a reasonably reliable, 24/7 web server. This is still a beta so don’t ask me how many 9’s. You can have all the 8’s you want.

“Why would I trust my website to you? What if you go out of business?”

There’s nothing special about the container we gave you; it’s a generic VM running Node. There’s nothing special about the way we told you to write code; we do not give you special frameworks or libraries that will lock you in. Download your source code and host it anywhere and you’re back in business.

“How are you going to make money off of this?”

Aaaaaah! Why do you care!

But seriously, the current plan is to have a free version for public / open source code you don’t mind sharing with the world. If you want private code, much like private repos, there will eventually be paid plans, and we’ll have corporate and enterprise versions. For now it’s all just a beta so don’t worry too much about that!

“What is the point of this Joel?”

As developers we have fantastic sets of amazing tools for building, creating, managing, testing, and deploying our source code. They’re powerful and can do anything you might need. But they’re usually too complex and too complicated for very simple projects. Useful little bits of code never get written because you dread the administration of setting up a new dev environment, source code repo, and server. New programmers and students are overwhelmed by the complexity of distributed version control when they’re still learning to write a while loop. Apps that might solve real problems never get written because of the friction of getting started.

Our theory here is that HyperDev can remove all the barriers to getting started and building useful things, and more great things will get built.

“What now?”

Really? Just go to HyperDev and start playing!

News stories from Monday 23 May, 2016

Favicon for 08:12 Hallo Welt! » Post from Visit off-site link

Welcome to the German version of WordPress. This is the first post. You can edit or delete it. And then start writing!

News stories from Tuesday 10 May, 2016

Favicon for Ramblings of a web guy 22:58 Don't say ASAP when you really mean DEADIN » Post from Ramblings of a web guy Visit off-site link
I have found that people tend to use the acronym ASAP incorrectly. ASAP stands for As Soon As Possible. The most important part of that phrase to me is As Possible. Sometimes, it's only possible to get something done three weeks from now due to other priorities. Or, to do it correctly, it will take hours or days. However, some people don't seem to get this concept. Here are a couple of examples I found on the web.

The Problem with ASAP

What ‘ASAP’ Really Means

ASAP is toxic, avoid it As Soon As Possible


It's not the fault of those writers. The world in general seems to be confused on this. Not everyone is confused though. I found ASAP — What It REALLY Means which does seem to get the real meaning.

At DealNews, we struggled with the ambiguity surrounding this acronym. To resolve this, we coined our own phrase and acronym to represent what some people seem to think ASAP means: DEADIN, short for Drop Everything And Do It Now.


We use this when something needs to be done right now. It can't wait. The person being asked to DEADIN a task needs to literally drop what they are doing and do this instead. This is a much clearer term than ASAP.

With this new acronym in your quiver, you can better determine the importance of a task. Now, when someone asks you to do something ASAP, you can ask "Is next Tuesday OK?" Or you can tell them it will take 10 hours to do it right. If they are okay with those answers, they really did mean ASAP. If they are not, you can ask them if you should "Drop Everything And Do It Now". (Pro tip: It still takes 10 hours to do it right. Don't compromise the quality of your work.)

News stories from Monday 09 May, 2016

Favicon for Zach Holman 01:00 The New 10-Year Vesting Schedule » Post from Zach Holman Visit off-site link

While employees have been busy building things, founders and VCs have flipped the industry on its head and aggressively sought to prevent employees from making money from their stock options.

Traditionally, early employees would receive an option grant on a four-year vesting schedule with a one-year cliff. In other words, your stock would slowly “vest” — become available for you to purchase — over the course of four years, with the first options vesting one year after your hire date, and (usually) monthly after that.

The promise of this is to keep employees at the company for a number of years, since they don’t receive the full weight of their stock until they’ve been there four years.
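As a sketch in code, the traditional schedule works roughly like this (an illustrative model assuming straight monthly vesting after the cliff; a real grant’s paperwork controls the actual terms):

```javascript
// Fraction of a stock grant vested after a given number of months,
// for a classic 4-year schedule with a 1-year cliff.
function vestedFraction(months, cliffMonths = 12, totalMonths = 48) {
  if (months < cliffMonths) return 0;  // nothing vests before the cliff
  if (months >= totalMonths) return 1; // fully vested at four years
  return months / totalMonths;         // then monthly, pro rata
}

console.log(vestedFraction(11)); // 0    -- a month before the cliff: nothing
console.log(vestedFraction(12)); // 0.25 -- the cliff vests the whole first year at once
console.log(vestedFraction(48)); // 1    -- the full grant, four years in
```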

Companies still hire with a four year vesting schedule, but the whole damn thing is a lie — in practice, people are going to be stuck at a company for much longer than four years if they want to retain the stock they’ve earned.

This stems from two new developments in recent years: companies are staying private longer (the average age of a recently-IPOed tech company is now 11 years old), and companies clamping down on private sales of employee stock after Facebook’s IPO. The impact is best summed up by the recent Handcuffed to Uber article, which effectively means employees can’t leave Uber without either forfeiting a fortune in unexercised stock, or paying a massive tax bill on imaginary, illiquid stock.

An industry run by people who haven’t been an employee in years

The leaders in the industry don’t really face any of the problems that employees face. They don’t even sugarcoat it: it’s pretty remarkable how plainspoken CEOs and VCs are when it comes to going public:

“I’m going to make sure it happens as late as possible,” said Kalanick to CNBC Monday. He added that he had no idea if Uber would go public in the next three to five years.

Don’t Expect an Uber IPO Any Time Soon


“I’m committed to Palantir for the long term, and I’ve advised the company to remain private for as long as it can,” said Mr. Thiel, a billionaire.

Palantir and Investors Spar Over How to Cash In

This is a much harder pill to swallow for those at Palantir, which tends to pay its engineers far below market rate. All this comes from CEO Alex Karp, who attempted to make the case that companies should simultaneously pay their employees less, give them more equity, but not allow them to cash that equity out.

Top venture capitalists agree as well:

This is a top VC and luminary advocating for the position that people who end up wanting to make some money on the stock that they’ve worked hard to vest are disloyal. Nothing I’ve read in the last few weeks has made me more furious. We’re now in a position where the four year vesting schedule isn’t enough for these people. They want the four year vesting schedule, and then they want to control your life for the subsequent 4-8 years while they fuck around in the private market.

If you just had a kid and need some additional liquidity, you’re disloyal. If you’d like to pay off your student debt, forget it, we’re not going to incentivize you to do that. If your partner is going back to school and you have to move across the country, tough luck, please turn in your stock options on the way out. If you’ve been busting your ass on a below market-rate salary for years and now you want a bit of what you’ve worked hard to vest, fuck you, go back to work.

Mechanisms of control

There’s obvious things that can be done to help fix this: one of which is getting rid of the 90-day exercise window, which many companies have started to do.

Another is internal stock buybacks, but these are usually low-key and restrictive. Usually you’ll get capped, either on a personal level (you can’t sell back more than x% of your shares) or on a company-wide level (the maximum that this group of employees can sell is xxx,xxx shares).

Or, sometimes these buybacks are limited by tenure: either they’re only for current employees, or you need to be at a company for x years to be able to participate. That’s somewhat reasonable on the surface; on the other hand, it’s en vogue now for unicorns to staff up and add two thousand people in the three years you’ve worked there. You might end up managing dozens or hundreds of people in the meantime and have a massive impact on the organization, but still can’t sell some stock to avoid having all your eggs in one basket, since only people who have been there four years or more can sell.

Another really dicey thing I’ve heard of happening is the following timeline:

  • Company hires a bunch of people
  • Two years pass
  • Company realizes the stock compensation they’re paying these employees is an order of magnitude lower than market average
  • Company gives new grants to employees to, in effect, “make up” for the difference
  • Company grants at a new four year vesting schedule

And that, ladies and gentlemen, is how you sneak a ton of your employees into a de facto six year vesting schedule. A few companies I’ve heard this happening at will give that refresh grant at maybe 10x their initial grant (given how far below market rate their initial grant was), so the employee is effectively stuck for the whole six year ride if they want to retain what they earn. They’ll virtually all go ahead and stick it out, particularly if they weren’t told that this is a catch-up grant — hey, I must be doing really great here, look at how big this second grant is!

Founders of VC-backed companies are insulated from these problems. Once you’ve reached a certain level of success — say, a $100M valuation or unicorn status or some such milestone — it’s expected that your investors will strongly encourage you to take some money off the table between financing rounds so you don’t have to deal with the stress of running a high-growth business while trying to make ends meet.

No one’s yet explained to me, though, why that reasoning works for founders but not for the first employee.

I get wanting to retain people, but strictly using financial levers to do that feels skeezy, and besides, monetary rewards might not be what ultimately motivates people, past a certain point. If you really want to retain your good people, stop building fucking horrible company cultures. You already got your four year vest out of these tenured employees; you can’t move the levers retroactively just because you’re grumpy it’s five years later and you’re not worth a trillion dollars yet.

Public Enemy

There are some people who have been pushing for solutions to these problems.

Mark Cuban’s been pushing the SEC to make a number of changes to ease the path to going public, arguing that “it’s worth the hassle to go public”. Mark Zuckerberg’s been pushing that angle as well. And, of course, Fred Wilson had his truly lovely message to Travis Kalanick:

You can’t just say fuck you. Take the goddamn company public.

There are a lot of possible ways to address these problems: taking companies public earlier, being progressive when it comes to exercise windows, doing internal buybacks more often and more permissively, adjusting the tax laws to treat illiquid options differently, and so on. I just don’t know if anyone’s really going to fix it while the people in charge aren’t experiencing the pain.

News stories from Thursday 28 April, 2016

01:00 Evaluating Delusional Startups » Post from Zach Holman

We’re proven entrepreneurs — one cofounder interned at Apple in 2015, and the other helped organize the annual Stanford wake-and-bake charity smoke-off — who are going to take a huge bite out of the $45 trillion Korean baked vegan food goods delivery market for people who live within one block of Valencia Street (but not towards Mission Street because it’s gross and off-brand), and we’re looking for seasoned rockstars to launch this rocket ship into outer space, come join us, we’re likely backed by one of the venture capitalists you possibly read about in a recent court deposition!

Okay, so they’re not always going to come at you like this. If you’re in the market for a new gig at a hot startup, it’s worthwhile to spend some time thinking about whether your sneaking suspicions are correct and the company you’re interviewing with might be full of pretty delusional people.

Here are a couple of traits of delusional startups I’ve been noticing.

I’m gonna make you rich, Bud Fox

After a long afternoon of interviews, I sat down with some head-of-something-rather. Almost verbatim, as well as I can remember it, he dropped this lovely gem in the first four minutes of the conversation:

Now, certainly you’d be joining a rocket ship. And clearly the stock you’d have would make you rich. So what I want to aaaaahhHHHHHHHHHH! thhhwaapkt

The second part of whatever he was saying got swallowed up by the huge Irony Vortex From Six Months In The Future that zipped into existence right next to him, as the Rocket Ship He Was On would promptly implode half a year later.

In my experience, people who promise riches for you, a new hire, fall into two camps:

  • They’re destined to lose it all, or
  • They’re about to become mega rich, and assume the breadcrumbs that fell from the corners of their mouths will also make you mega rich, obviously

Both of those camps are fairly delusional.

Many leaders — unfortunately not all, but that’s life — who have a good chance at striking it rich tend to be pretty realistic, cautious, and optimistically humble about it. In turn, having those personality traits might also lead them to make more generous decisions down the line that would benefit you as well, so that’s also a bonus.

Lately I’ve heard something specific come up from a number of my close friends: the bonus they just received in the first six months of their new job at a large corporate gig dwarfed the stock proceeds they made from the hot startup they had worked at for years.

People have been saying this for decades, but it’s always worth reiterating: don’t join a startup for the pay, and if someone’s trying to dangle that in front of your eyes, you can tell them to shove their rocket ship up their you-know-where.

The blame game

A company I was interviewing at borked a final interview slot with a head-of-something-such, so I rescheduled them for coffee the following week.

Sipped my tea for half an hour… no show. Hey, it sucks, but miscommunication happens so it wasn’t much to fret over.

The rescheduled phone call another week later started off with an apology that quickly turned into a shitstorm. The main production service was down he said, and therefore he could not attend our coffee, nor could he look up and send me an email about it, even though he did notice it and did briefly feel bad about it. The fucking CEO shat on my team the next day in front of the whole company which was complete bullshit because his team Had Done All The Necessary Things and really it was The CEO’s Dumb Fault The Shit Was All Broken Anyway right? Christ. In any case the position we were interviewing you for has been filled do you want to try for anything else?

So there were a lot of things to unwind here, and I truly do have stories from interviewing at this company that will last me until the end of the sixth Clinton administration, but the real toxic aspect is the:

  • Dude complaining about leadership
  • Leadership blaming specific people and teams across the whole company

Cultures that throw each other under the bus — in either direction, up or down — don’t function as well. The wheels will fall off the wagon at some point, and you’re going to end up with a shit product. You can even be one of those bonkers 120-hour work week startups, grinding hard at all hours of the day, and still be good people to each other. You’ve got to bounce back from setbacks and mistakes. Blameless cultures are better cultures.

On a related note, it’s amazing what you can sometimes get people to admit in an interview. While chatting with another startup, I informally asked what the two employees thought of one of the cofounders. Total shit was the flat response. Doesn’t do jack, and really doesn’t belong in engineering anymore. Props for their openness, I guess, and maybe it helped me dodge a bullet, but how employees talk about others behind their backs says a lot about how cohesive and supportive the company is.

We’re backed by the best VCs, we’re very highly educated, we know product, we have the best product

I don’t understand how you can love your startup’s product.

For me, the high is all about what’s happening next. Can’t wait to ship that new design. The refactoring getting worked on will be an order of magnitude more performant. The wireframes for where we’re hoping to be two years from now are dripping with dopamine.

I don’t understand people who are happy with what they’ve got today. Once you’re happy, you’re in maintenance mode, and maybe that’s fine if you’ve finished your product and are ready to coast on your fat stacks, but by that point you’re beyond building something new anyway. These startups who eagerly float by on shit they did years ago, assuming that rep will carry through any new competition… I just don’t understand that.

Stewart Butterfield has a healthy viewpoint when he talks about Slack:

Are you bringing other changes to Slack?
Oh, God, yeah. I try to instill this into the rest of the team but certainly I feel that what we have right now is just a giant piece of shit. Like, it’s just terrible and we should be humiliated that we offer this to the public.

Certainly he’s being a bit facetious here, since I don’t imagine he thinks the work his employees have done is shit — rather, a product is a process and it takes a long time to chip away the raw marble into the statue inside of it.

The other weird aspect of this that I’ve noticed is that there are some companies who truly hate their competition. I really dig competition, and I think it brings out good stuff across the board, but when it flips into Hatred Of The Enemy it just gets weird. Like c’mon, each of your apps put mustaches on pictures of fish, y’all gotta chill the fuck out, lol.

Asking people what they think about their competition can be a pretty decent measurement of whether the company twiddles the Thumbs of Delusion. If they flatly espouse hatred, that’s weird. If they take a nuanced approach and contrast differences in respective philosophies, that’s promising, because it means they’ve actually thought through what makes them different, and their product and culture likely will be stronger for it.

It also likely just means fewer dicks at the company. You can only deal with so much hatred in life before it sucks you up into a hole.


I get that startups are supposed to be — by definition, really — delusional, in some respect. You’re building something that wasn’t there before, and it takes a lot of faith to build a nascent idea up into something big. So you need a leader to basically throw down so everyone can rally behind her.

Maybe I’m an ancient, grizzled old industry fuck now that I’m nearly 31, but I’m weary of seeing the sky-high bonkersmobiles driving around town these days. That’s part of the reason I’m cautiously optimistic about this bubble that will certainly almost certainly okay maybe it’ll pop again soon — it’ll get people a little more realistic about their goals again.

I still think startups are great and can change the world and all that bullshit… I just think it’s worthwhile to stop and think hard about what your potential company is promising you. Catching these things early on in the process can help save you a ton of pain down the road.

And if you’re hearing these things at your current company, well, good luck! You’re assuredly already on a rocket ship, surely, so congrats!

News stories from Friday 01 April, 2016

14:45 Hey, guess what day it is... » Post from Grumpy Gamer

That's right, it's the day the entire Internet magically thinks it's funny.

Pro-tip: You're not.

As Grumpy Gamer has been for going on twelve years, we're 100% April Fools' Day joke free.

I realize that's kind of ironic to say, since this blog is pretty-much everything free these days as I'm spending all my time blogging about Thimbleweed Park, the new point & click adventure game I'm working on.

And no, that is not a joke, check it out.

News stories from Wednesday 16 March, 2016

01:00 Firing People » Post from Zach Holman

So it’s been a little over a year since GitHub fired me.

I initially made a vague tweet about leaving the company, and then a few weeks later I wrote Fired, which made it pretty clear that leaving the company was involuntary.

The reaction to that post was pretty interesting. It hit 100,000 page views within the first few days after publishing, spurred 389 comments on Hacker News, and indeed, is currently the 131st most-upvoted story on Hacker News of all time.

Let me just say one thing first: it’s pretty goddamn weird to have so many people interested in discussing one of your biggest professional failures. There were a few hard-hitting Real Professional Journalists out there launching some bombs from the 90 yard line, too:

If an employer has decided to fire you, then you’ve not only failed at your job, you’ve failed as a human being.


Why does everyone feel compelled to live their life in the public? Shut up and sit down! You ain’t special, dear..


Who is the dude?

You and me both, buddy. I ask myself that every day.

The vast majority of the comments were truly lovely, though, as well as the hundreds of emails I got over the subsequent days. Over and over again it became obvious at how commonplace getting fired and getting laid off is. Everyone seemingly has a story about something they fucked up, or about someone that fucked them up. This is not a rare occurrence, and yet no one ever talks about it publicly.

As I stumbled through the rest of 2015, though, something that bothered me at the onset crept forward more and more: the post, much like the initial vague tweet, didn’t say anything. That was purposeful, of course; I was still processing what the whole thing meant to me, and what it could mean.

I’ve spent the last year constantly thinking about it over and over and over. I’ve also talked to hundreds and hundreds of people about the experience and about their experiences, ranging from the relatively unknown developer getting axed to executives getting pushed out of Fortune 500 companies.

It bothers me no one really talks about this. We come up with euphemisms, like “funemployment!” and “finding my next journey!”, while all the while ignoring the real pains associated with getting forced out of a company. And christ, there’s a lot of real pain that can happen.

How can we start fixing these problems if we can’t even talk about them?

Me speaking at Bath Ruby

I spoke this past week at Bath Ruby 2016, in Bath, England. The talk was about my experiences leaving GitHub, as well as the experiences of so many of the people I’ve talked to and studied over the last year. You can follow along with the slide deck if you’d like, or wait for the full video of the talk to come out in the coming weeks.

I also wanted to write a companion piece as well. There’s just a lot that can’t get shoehorned into a time-limited talk. That’s what you’re reading right now. So curl up by the fire, print out this entire thing onto like a bajillion pages of dead tree pulp, and prepare to read a masterpiece about firing people. Once you realize that you’re stuck with this drivel, you can toss the pages onto the fire and start reading this on your iPad instead.

The advice people most readily give out on this topic today is:


“Fire fast”, they say! You have to fire fast because we’re moving really fuckin’ fast and we don’t have no time to deal with no shitty people draggin’ us down! Move fast and break people! Eat a big fat one, we’re going to the fuckin’ MOOOOOOOOON!

What the shit does that even mean, fire fast? Should I fire people four minutes after I hire them? That’ll show ‘em!

What about after a mistake? Should we fire people as retribution? Do people get second chances?

When we fire people, how do we handle things like continuity of insurance? Or details like taxes, stock, and follow-along communication? How do we handle security concerns when someone leaves an organization?

There’s a lot of advice that’s needed beyond fire fast. “Move fast and break people” doesn’t make any goddamn sense to me.

I’ve heard a lot of funny stories from people in the last year. From the cloud host employee who accidentally uploaded a pirated TV show to company servers and got immediately fired his second week on the job (“oops!” he remarked in hindsight) to the Apple employee who liked my initial post but “per company policy I’m not allowed to talk about why your post may or may not be relevant to me”.

I’ve also heard a lot of sad stories too. From someone whose board pushed them out of their own startup, but was forced to say they resigned for the sake of appearance:

There aren’t adjectives to explain the feeling when your baby tells you it doesn’t want/need you any more.

We might ask: why should we even care about this? They are ex-employees, after all. To quote from the seminal 1999 treatise on corporate technology management/worker relations, Office Space:

The answer, of course, is: we should care about all this because we’re human beings, dammit. How we treat employees, past and present, is a reflection on the company itself. Great companies care deeply about the relationship they maintain with everyone who has contributed to the success of the company.

This is kind of a dreary subject, but don’t worry too much: I’m going to aspire to make this piece as funny and as light-hearted as I can. It’s also going to be pretty long, but that’s okay, sometimes long things are worth it. (Haha dick joke, see? See what I’m doing here? God these jokes are going to doom us all.)


One last thing before we can finally ditch from these long-winded introductory sections: what you’re going to be reading is primarily my narrative, with support from many, many other stories hung off of the broader points.

Listen: I’m not super keen on doing this. I don’t particularly want to make this all about me, or about my experiences getting fired or quitting from any of my previous places of employment. This is a particularly depressing aspect in my life, and even a year later I’m still trying to cope with as much depression as anyone can really reasonably deal with.

But I don’t know how to talk about this in the abstract. The specifics are really where all the important details are. You need the specifics to understand the pain.

As such, this primarily comes at the problem from a specific perspective: an American living in San Francisco for a California-based tech startup.

When I initially wrote my first public “I’m fired!” post, some of you in more-civilized places with strong employee-friendly laws like Germany or France were aghast: who did I murder to get fired from my job? How many babies did I microwave to get to that point? Am I on a watchlist for even asking you that question?

California, though, is an at-will state. Employees can be fired for pretty much any reason. If your boss doesn’t like the color of shoes you’re wearing that day, BOOM! Fired. If they don’t like how you break down oxygen using your lungs in order to power your feeble human body, BOOM! Fired. Totally cool. As long as they’re not discriminating against federally-protected classes — religion, race, gender, disability, etc. — they’re in the clear.

Not all of you are working for companies like this. That’s okay — really, that’s great! — because I still think this touches on a lot of really broad points relevant to everyone. As I was building this talk out, I ended up noticing a ton of crossover with leaving a company in general, be it intentionally, unintentionally, on friendly terms, or on hostile terms. Chances are you’re not going to be at your company forever, so a lot of this is going to be helpful for you to start thinking about now, even if you ultimately don’t leave until years in the future.

Beyond that, I tried to target three different perspectives throughout all this, and I’ll call them out in separately-colored sections as well:


You: your perspective. If you ever end up in the hot seat and realize you’re about to get fired, this talk is primarily for you. There’s a lot of helpful hints for you to take into consideration in the moment, but also for the immediate future as well.


Company: from the perspective of the employer. Again, the major thing I’m trying to get across is to normalize the idea of termination of employment. I’m not trying to demonize the employer at all, because there are a lot of things the employer can do to really help the new former employee out and to help the company out as well. I’ll make a note of them in these blocks.


Coworker: the perspective that’s really not considered very much is the coworker’s perspective. Since they’re not usually involved in the termination itself, a lot of times it’s out of sight, out of mind. That’s a bit unfortunate, because there’s also some interesting aspects that can be helpful to keep in mind in the event that someone you work with gets fired.

Got it? Okay, let’s get into the thick of things.


I’m Zach Holman. I was number nine at GitHub, and was there between 2010 and 2015. I saw it grow to 250 employees (they’ve since doubled in size and have grown to 500 in the last year).

I’m kind of at the extreme end of the spectrum when it comes to leaving a company, which can be helpful for others for the purposes of taking lessons away from an experience. It had been a company I had truly grown to love, and in many ways I had been the face of GitHub, as I did a lot of talks and blog posts that mentioned my experiences there. More than once I had been confusingly introduced as a founder or CEO of the company. That, in part, was how I ultimately was able to sneak into the Andreessen Horowitz corporate apartments and stayed there rent-free for sixteen months. I currently have twelve monogrammed a16z robes in my collection, and possibly was involved in mistakenly giving the greenlight to a Zenefits employee who came by asking if they could get an additional key to the stairwell for a… meeting.

Fast forward to summer of 2014: I had been the top committer to the main github/github repository for the last two years, I had just led the team that shipped one of the last major changes to the site, and around that time I had had a mid-year performance review with my manager that was pretty glowing and had resulted in me receiving one of the largest refresh grants they had given during that review period.

This feels a little self-congratulatory to write now, of course, but I’ll lend you a quick reminder: I did get fired nonetheless, ha. The point I’m trying to put across with all this babble is that on the surface, I was objectively one of the last employees one might think to get fired in the subsequent six months. But everyone’s really at risk: unless you own the company, the company owns you.

Around the start of the fall, though, I had started feeling pretty burnt out. I had started to realize that I hadn’t taken a vacation in five years. Sure, I’d been out of town, and I’d even ostensibly taken time off to have some “vacations”, but in hindsight they were really anything but: I’d still be checking email, I’d still be checking every single @github mention on Twitter, and I’d still dip into chat from time to time. Mentally, I would still be in the game. That’s a mistake I’ll never make again, because though I had handled it well for years — and even truly enjoyed it — it really does grind you down over time. Reading virtually every mention of your company’s name on Twitter for five straight years is exhausting.

By the time November came around, I was looking for a new long-term project to take on. I took a week offsite with three other long-tenured GitHubbers and we started to tackle a very large new product, but I think we were all pretty well burnt out by then. By the end of the week it was clear to me how fried I was; brainstorming should not have been that difficult.

I chatted with the CEO at this point about things. He’s always been pretty cognizant of the need for a good work/life balance, and encouraged taking an open-ended sabbatical away from work for awhile.

My preference would be for you to stay at GitHub […] When you came back would be totally up to you

By February, my manager had sent me an email with the following:

Before agreeing to your return […] we need to chat through some things


First thing here from your perspective is to be wary if the goalposts are getting moved on you. I’m not sure if there was miscommunication higher up with my particular situation, but in general things start getting dicey if there’s a set direction you need to head towards and that direction suddenly gets shifted.

After I got fired, I talked to one of my mentors about the whole experience. This is a benefit of finding mentors who have been through everything in the industry way before you even got there: they have that experience that flows pretty easily from them.

After relaying this story, my friend immediately laughed and said, “yeah, that’s exactly the moment when they started the process to fire you”. I kinda shrugged it off and suggested it was a right-hand-meet-left kinda thing, or maybe he was reading it wrong. He replied no, that is exactly the kind of email he had sent in the past when he was firing someone at one of his companies, and it was also the kind of email he had received right before he was fired in the past, too.

Be wary of any sudden goalposts, really. I’ll mention later on about PIPs — performance improvement plans — and how they can be really helpful to employees as well as to employers, but in general if someone’s setting you up with specific new guidelines for you to follow, you should take it with a critical eye.

At this point things were turning a tad surprising. By February, the first time I received an email from my manager about all this, I hadn’t been involved with the company at all for two months through my sabbatical, and I hadn’t even talked to my manager in four months, ever since she had decided that 1:1s weren’t really valuable between her and me. This was well and fine with me, since I had been assigned to a bit of a catch-all team where none of its members worked together on anything, and I was pretty comfortable moving around the organization and working with others in any case.

I was in Colorado at the time, but agreed to meet up and have a video chat about things. When I jumped on the call, I noticed that — surprise! — someone from HR was on the call as well.

Turns out, HR doesn’t normally join calls for fun. Really, I’m not sure anyone joins video chats for fun. So this should have been the first thing that tickled my spidey-sense, but I kinda just tucked it in the back of my mind since I didn’t really have time to consider things much while the call was going on.

At this point, I was feeling pretty good about life again; the time off had left me feeling pretty stoked about building things again, and I had a long list of a dozen things I was planning on shipping in my first month back on the job. The call turned fairly confrontational off the bat, though; my manager kept asking how I felt, I said I felt pretty great and wanted to get to work, but that didn’t really seem to be the correct answer. Things took a turn south and we went back-and-forth about things. This led to her calling me an asshole twice (in front of HR, again, who didn’t seem to mind).

In hindsight, yeah, I was probably a bit of an asshole; I tend to clam up during bits of confrontation that I hadn’t thought through ahead of time, and most of my responses were pretty terse in the affirmative rather than offering a ton of detail about my thoughts.

After the conversation had ended on a fairly poor note, I thought things through some more and found it pretty weird to be in a position with a superior who was outwardly fairly hostile to me, and I made my first major mistake: I talked to HR.

I was on really good terms with the head of HR, so the next day I sent an email to her making my third written formal request in the prior six months or so to be moved off of my team and onto another team. I had some thoughts on where I’d rather see myself, but really, any other team at that point I would have been happy with; I had pretty close working relationships with all of the rest of the managers at the company. On top of that, the team I was currently on didn’t have any association with each other, so I figured it wouldn’t be a big deal to switch to another arbitrary team.

The head of HR was really great, and found the whole situation to be a bit baffling. We started talking about which teams might make sense, and I asked around to a couple people as to whether they would be happy with a new refugee (they were all thumbs-up on the idea). She agreed to talk to some of the higher-ups about things, and we’d probably arrange a sit-down in person when I came back in a few days to SF to sort out the details.


Don’t talk to HR.

This pains me to say. I’ve liked pretty much every person in HR at all the companies I’ve worked for; certainly we don’t want to view them as the enemy.

But you have to look to their motivations, and HR exists only to protect the company’s interests. Naturally you should aim to be cordial if HR comes knocking and wants to talk to you, but going out of your way to bring something to the attention of HR is a risk.

Unfortunately, this is especially important to consider if you’re in a marginalized community. Many women in our industry, for example, have gone to HR to report sexual harassment and promptly found that they were the one who got fired. Similar stories exist in the trans community and with people who have had to deal with racial issues.

Ultimately it’s up to you whether you think HR at your company can be trusted to be responsible with your complaint, but it also might be worthwhile to consider alternative options as well (i.e., speaking with a manager if you think they’d be a strength in the dispute, exploring legal or criminal recourse, and so on).

HR is definitely a friend. But not to you.


Avoid surprises. I’ve talked with a lot of former employees over the last year, and the ones with the most painful stories usually stem from being unceremoniously dropped into their predicament.

From a corporate perspective, it’s always painful to lose employees — regardless of the manner in which the employee leaves the company. But it’s almost always going to be more painful for the former employee, too.

I was out at a conference overseas a few years back with a few coworkers. One of my coworkers received a notice that he was to sit down on a video chat with the person he was reporting to at the time. He was fretting about it given the situation was a bit sudden and out of the ordinary, but I tried to soothe his fears, joking that they wouldn’t fire him right before an international conference that he was representing the company at. Sure enough, they fired him. Shows what I really knew about this stuff.

Losing your job is already tough. Dealing with it without a lot of lead-up to consider your options is even harder.

One of the best ways to tackle this is with a performance improvement plan, or PIP. Instituting a PIP is relatively straightforward: you tell the employee that they’re not really where you’d like to see them and that they’re in danger of losing their job, but you set clear goals so that the employee gets the chance at turning things around.

This is typically viewed as the company covering their ass so when they fire you it’s justified, but really I view it as a mutual benefit: it’s crystal-clear to the employee as to what they need to do to change their status in the organization. Sometimes they just didn’t know they were a low performer. Sometimes there are other problems in their life that impacted their performance, and it’s great to get that communication out there. Sometimes someone’s really not up to snuff, but they can at least spend some time preparing themselves prior to being shown the door.

The point is: surprise firings are the worst types of firings. It’s better for the company and for the employee to both be clear as to what their mutual expectations are. Then they can happily move forward from there.

At this point, I finished up my trip and flew back to San Francisco. It was time to chat in person.


I was fired before I entered the room.

You’re not going to be happy here. We need to move you out of the company.

That was the first thing that was said to me in the meeting between me, the CEO, and the head of HR. Not even sure I had finished sitting down, but I only needed a glance at the faces to know what was in the pipeline for this meeting.

You’re not going to be happy here is a bullshit phrase, of course, but not one that I have a lot of problems with in hindsight. My happiness has no impact on the company — my output does — but I think it was a helpful euphemism, at least.


Chill. The first thing I’d advise if you find yourself in the hot seat is to just chill out. I did that reasonably well, I think, by nodding, laughing, and giving each person in the room a hug before splitting. It was a pretty reasonable break, and I got to have a long chat with the head of HR immediately afterwards where we shot the shit about everything for awhile.

You ever watch soccer (or football, for you hipster international folk that still refuse to call it by its original name)? Dude gets a yellow card, and more often than not what does he do? Yells at the ref. Same for any sport, really. How many times does the ref say ah shit, sorry buddy, totally got it wrong, let me grab that card back? It just doesn’t happen.

That’s where you are in this circumstance. You can’t argue yourself back into a job, so don’t try to. At this point, just consider yourself coasting. If it’s helpful to imagine you’re a tiny alien controlling your humanoid form from inside your head a la the tiny outworlder in Men in Black, go for it.

My friend’s going through a particularly gnarly three- or four-week stretch of getting fired from a company right now (don’t ask; it’s a disaster). This is the same type of advice I gave them: don’t feel like you need to make any statements or sign any legal agreements or make any decisions whatsoever while you’re in the room or immediately outside of it. If there’s something that needs your immediate attention, so be it, but most reasonable companies are going to give you some time to collect your thoughts, come up with a plan, and enact it instead of forcing you to sign something at gunpoint.

Remember: even if you’re really shit professionally, you’ll probably only get fired what, every couple of years? If you’re an average person what, maybe once a lifetime? Depending on the experience of management, the person firing you may deal with this situation multiple times a year. They’re better at it than you are, and they’re far less stressed out about it. I was in pretty good spirits at the time, but looking back I certainly wasn’t necessarily in my normal mindset.

Emotionally compromised

You’re basically like new-badass-Spock in the Star Trek reboot: you have been emotionally compromised; please note that shit in the ship’s log.

I’m still not fully certain why I got the axe; it was never made explicit to me. I asked other managers and those on the highest level of leadership, and everyone seemed to be as confused as I was.

My best guess is that it’s Tall Poppy Syndrome, a phrase I was unfamiliar with until an Aussie told me about it. (Everything worthwhile in life I’ve learned from an Australian, basically.) The tallest poppy gets cut first.

With that, I don’t mean that I’m particularly talented or anything like that; I mean that I was the most obvious advocate internally for certain viewpoints, given how I’ve talked externally about how the old GitHub worked. In Japanese the phrase apparently translates to The tallest nail gets the hammer, which I think works better for this particular situation, heh. I had on occasion mentioned internally my misgivings about the lack of movement happening on any product development, and additionally the increasing unhappiness of many employees due to some internal policy changes and company growth.

Improving the product and keeping people happy are pretty important in my eyes, but I had declined earlier requests to move toward the management side of things, so I was primarily heads-down on building stuff at that point rather than leading the charge for a lot of change internally. So maybe it was something else entirely; I’m not sure. I’m left with a lot of guesses.


Lockdown. The first thing to do after — or even while — someone is fired is to start locking down their access to everything. This is pretty standard to remove liability from any bad actors. Certainly the vast majority of people will never be a problem, but it’s also not insulting or anything from a former employee standpoint, either. (It’s preferred, really: if I’ve very recently been kicked out of a company, I’d really like to be removed from production access as soon as possible so I don’t even have to worry about accidentally breaking something after my tenure is finished, for example. It’s best for everyone.)

From a technical standpoint, you should automate the process of credential rolling as much as possible. All the API keys, passwords, user accounts, and other credentials should be regenerated and replaced in one fell swoop.

Automate this because, well, as you grow, more people are inherently going to leave your company, and streamlining this process is going to make it easier on everyone. No one gets up in the morning, jumps out of bed, throws open the curtains and yells out: OH GOODIE! I GET TO FIRE MORE PEOPLE TODAY AND CHANGE CONFIG VALUES FOR THE NEXT EIGHT HOURS! THANK THE MAKER!

Ideally this should be as close to a single console command or chat command as possible. If you’re following twelve-factor app standards, your config values should already be stored in the environment rather than tucked deep into code constants. Swap them out, and feel better about yourself while you have to perform a pretty dreary task.
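To make that concrete, here’s a minimal sketch of what a one-command rotation might look like. Everything here is hypothetical: the credential names, the `rotate_all` helper, and the in-memory `store` are stand-ins for whatever secrets backend your twelve-factor config actually lives in.

```python
import secrets

# Hypothetical credential names -- substitute whatever your app actually uses.
SHARED_CREDENTIALS = ["DATABASE_PASSWORD", "SESSION_SECRET", "STORAGE_API_KEY"]

def rotate_all(store):
    """Regenerate every shared credential in one sweep.

    `store` stands in for wherever your twelve-factor config lives:
    the process environment, Consul, Vault, and so on.
    """
    fresh = {name: secrets.token_urlsafe(32) for name in SHARED_CREDENTIALS}
    store.update(fresh)  # a deploy/restart would pick these up from the env
    return sorted(fresh)

# Pretend config store; in production this would be your secrets backend.
config = {}
print(rotate_all(config))
# -> ['DATABASE_PASSWORD', 'SESSION_SECRET', 'STORAGE_API_KEY']
```

The point isn’t the specifics; it’s that the whole sweep collapses into one call you can wire up to a chat command, so offboarding never depends on someone remembering each individual key.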

Understand the implications of what you’re doing, though. I remember hearing a story from years back of someone getting let go from a company. Sure, that sucks, but what happened next was even worse: the firee had just received their photos back from their recent wedding, so they tossed them into their Dropbox. At the time, Dropbox didn’t really distinguish between personal and corporate accounts, and all the data was kind of mixed together. When the person was let go, the company removed access to the corporate Dropbox account, which makes complete sense, of course. Unfortunately that also deleted all their wedding photos. Basically like salt in an open wound. Dropbox has long since fixed this problem by better splitting up personal and business accounts, but it’s still a somewhat amusing story of what can go wrong if there’s not a deeper understanding of the implications of cutting off someone’s access.

Understand the real-world implications as well. Let’s take a purely hypothetical, can’t-possibly-have-happened-in-real-life example of this.

Does your company:

  • Give out RFID keyfobs instead of traditional metal keys in order to get into your office?
  • Does your office have multiple floors?
  • Do you disable the employee’s keyfob at the exact same time they’re getting fired?
  • Do you, for the sake of argument, also require keyfob access inside your building to access individual floors?
  • Is it possible — just possible at all, stay with me here — that the employee was fired on the third floor?
  • And is it possible that the employee would then go down to the second floor to collect their bag?
  • Is it at all possible that you’ve locked your newly-fired former employee INTO THE STAIRWELL, unable to enter the second floor, instead having to awkwardly text a friend they knew would be next to the door with a very unfortunate HI CAN YOU UNLOCK THE SECOND FLOOR DOOR FOR ME SINCE MY KEYFOB DOESN’T WORK PROBABLY BECAUSE I JUST GOT FIRED HA HA HA YEAH THAT’S A THING NOW WE SHOULD CHAT.

Totally hypothetical situation.

Yeah, totally was me. It was hilarious. I was laughing for a good three minutes while someone got up to grab the door.

Anyway, think about all of these implications. Particularly if the employee loses access to their corporate email account; many times services like healthcare, stock information, and payroll information may be tied to that email address, and that poses even more problems for the former employee.

This also underscores the benefit of keeping a cordial relationship between the company and the former employee. When I was fired, I found I still had access to a small handful of internal apps whose OAuth tokens weren’t getting rolled properly. I shot an email to the security team, so hopefully they were invalidated and taken care of for future former employees.

Although now that I think about it, I still have access to the analytics for many of GitHub’s side properties; I’ve been unable to get a number of different people to pull the plug for me. I think instead I’ll just say it’s a clear indicator of the trust my former employer has in my relationship with them. :heart:

One last thing to add in this section. My friend Reg tweeted this recently:

I really like this sentiment a lot, and will keep it in mind when I’m in that position next. Occasionally you’ll see the odd person mention something about this over Twitter or something, and it’s clear that firing someone is a stressful process. But be careful who you vent that stress to — vent up the chain of command, not down — because you’re still not the one suffering the most from all this.


Determine the rationale. Once someone’s actually been fired, this is really your first opportunity as a coworker to have some involvement in the process. Certainly you’re not aiming to butt in and try to be the center of everything here, but there are some things you can keep in mind to help your former coworker, your company, and ultimately, yourself.

Determining the rationale I think is the natural first step. You’re no help to anyone if you get fired as well. And sometimes — but obviously not always — if someone you work with gets fired, it could pose problems for you too, particularly if you work on the same team.

Ask around. Your direct manager is a great place to start if you have a good relationship with them. You don’t necessarily need to invade the firee’s privacy and pry into every single detail, but I think it’s reasonable to ask if the project you’re working on is possibly going to undertake a restructuring, or if it might get killed, or any number of other things. Don’t look desperate, of course — OH MY GOD ARE WE ALL GOING TO GET SHITCANNED???? — but a respectful curiosity shouldn’t hurt in most healthy organizations.

Gossip is a potential next step. Everyone hates on gossip, true, but I think it can have its place for people who aren’t in management positions. Again, knowing every single detail isn’t really relevant to you, but getting the benchmark of people around you on your level can be helpful for you to judge your own position. It also might be helpful as a sort of mea culpa when you talk to your manager, as giving them a perspective from the boots on the ground, so to speak, might be beneficial for them when judging the overall health of the team.


Be truthful internally. Jumping back to the employer’s side of things, just be sure to be truthful. Again, your former employee’s privacy is important to protect, but how you talk about the departure to other employees can be pretty telling.

Be especially cautious when using phrases like mutually agreed. Very few departures are mutually agreed upon. If they were thinking of leaving, there’s a good chance they’d have already left.

In my case, my former manager emailed her team and included this sentence:

We had a very honest and productive conversation with Zach this morning and decided it was best to part ways.

There certainly wasn’t any conversation, and the sentence implies that it was a mutual decision. She wasn’t even in the room, either, so the we is a bit suspect as well, ha.

In either case, I was already out the door, so it doesn’t bother me very much. But everyone in the rank and file is better-networked than you are as a manager, and communication flows pretty freely once an event happens. So be truthful now; otherwise you poison the well for future email announcements. Be a bit misleading today and everyone will look at you as being misleading in the future.

The last bit to consider is group firing: firing more than one person on the same day. This is a very strong signal, and it’s up to you as to what you’re trying to signal here. If you take a bunch of scattered clear under-performers and fire them all on the same day, then the signal might be that the company is cleaning up and is focused squarely on improving problems. If the decision appears rather arbitrary, you run the risk of signaling that firing people is also arbitrary, and your existing employees might be put in a pretty stressful situation when reflecting on their own jobs.

Firing is tough. If you’ve ever done it before you know it’s not necessarily just about the manager and the employee: it can impact a lot more people than that.

So, I was fired. I walked out of the room, got briefly locked inside the office stairwell, and then walked to grab my stuff.


What next?

It’s a tough question. At this point I was kind of on auto-pilot, the notion of being fired not really having sunk in yet.

I went to where my stuff was and started chatting with my closer friends. (I wasn’t escorted out of the building or any of that silliness.)

I started seeing friendly faces walk by and say hi, since in many cases I hadn’t seen or talked to most of my coworkers in months, having never come back in an official capacity from my sabbatical. I immediately took to walking up to them, giving them a long, deeply uncomfortable and lingering hug, and then whispering in their ear: it was very nice working with you. also I just got fired. It was a pretty good troll given such short notice, all things considered. We all had a good laugh, and then people stuck around so they could watch me do it to someone else. By the end I had a good dozen or so people around chatting and avoiding work. A+++ time, would do again.

lol jesus just realized what I typed, god no, I’d probably avoid getting fired the next time, I mean. I’m just pretty dope at trolling is all I’m sayin’.

Egregious selfie of the author

Eventually I walked out of the office and started heading towards tacos, where I was planning on drinking way too many margaritas with a dear friend who was still at the company (for the time being). Please note: tacos tend to solve all problems. By this point, the remote workers had all heard the news, so my phone started blowing up with text messages. I was still feeling pretty good about life, so I took this selfie and started sending it to people in lieu of going into a ton of detail with each person about my mental state.

In prepping this talk, I took a look at this selfie for the first time in quite a number of months and noticed I was wearing earbuds. Clearly I was listening to something as I strutted out of the office. Luckily I scrobble my music, so I can go back and look. So that’s how I found out what I was listening to:


On My Own, as sung by Eponine in the award-winning musical Les Misérables. Shit you not. It’s like I’m some emo fourteen-year-old just discovering their first breakup or something. Nice work, Holman.

Shortly thereafter, I tweeted the aforementioned tweet:

Again, it’s pretty vague and didn’t address whether I had quit or I’d been fired. I was pretty far away from processing things. I think being evasive made some sense at the time.

I’ve been journaling every few days pretty regularly for a few years now, and it’s one of the best things I’ve ever done for myself. I definitely wrote a really long entry for myself that day. I went back and took a look while I was preparing this talk, and this section jumped out at me:

The weird part is how much this is about me. This is happening to me right now. I didn’t really expect it to feel so intimate, a kind of whoa, this is my experience right now and nobody else’s.

In hindsight, yeah, that’s absolutely one of the stronger feelings I still have from everything. When you think about it, most of the experiences you have in life are shared with others: join a new job, share it with your new coworkers. Get married, share it with your new partner and your friends and family. Best I can tell, getting fired and dying are among the few burdens that are yours and yours alone. I didn’t really anticipate what that would feel like ahead of time.

By later in the night, I was feeling pretty down. It was definitely a roller coaster of a day: text messages, tweets, margaritas, financial advisors, lawyers, introspective walks in the park. I didn’t necessarily think I’d be flying high for the rest of my life, but that didn’t really make the crash any easier, either. And that experience has matched my last year, really: some decent highs, some pretty dangerous lows. Five years being that deeply intertwined in a company is toeing a line, and I’ve been paying for it ever since.

Loose Ends

Good god, it really takes an awful lot of work to leave work.

There’s a number of immediate concerns you need to deal with:

  • Who owns your physical hardware? Is your computer owned by the company? Your phone? Any other devices? Do you need to wipe any devices, or pull personal data off of any of them?
  • Do you have any outstanding expenses to deal with? I had a conference in Australia a few weeks later that I had to sort out. I had told them that GitHub would pay for my expenses to attend, but I hadn’t booked the trip yet. Luckily it was no problem for GitHub to pick up the tab (I was still representing the company there, somewhat awkwardly), but it was still something else I needed to remember to handle right away.
  • How’s your healthcare situation, if you’re unfortunate enough to live in a country where healthcare Is A Thing? In the US, COBRA exists to provide continuity of health insurance between jobs, and it should cover you during any gaps in your coverage. It was one more thing to have to worry about, although admittedly I was pleasantly surprised at how (relatively) easy using COBRA was; I was expecting to jump through some really horrible hoops.

The next thing to consider is severance pay. Each company tends to handle things differently here, and at least in the US, there’s not necessarily a good standard of what to expect in terms of post-termination terms and compensation.

There are a lot of potential minefields in the separation agreement you’ll need to sign to get that severance, though.

Unfortunately I can’t go into much detail here other than say that we reached an equitable agreement, but it did take a considerable amount of time to get to that point.

One of the major general concerns when a worker leaves an American-based startup is the treatment of their stock options. A large part of equity compensation comes in the form of ISOs (incentive stock options), which offer favorable tax treatment in the long term.

Unfortunately, vested unexercised ISOs are capped at 90 days post-employment by law, meaning that they disappear in a puff of smoke once you reach that limit. This poses a problem at today’s anti-IPO startups that simultaneously reject secondary sales, leaving an employee few avenues to exercise their stock (exercising might cost an early employee hundreds of thousands of dollars that they don’t have, excluding the corresponding tax hit as well).
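To see why the numbers get so ugly, here’s a back-of-the-envelope sketch. Every figure below is invented for illustration, none of them come from this post, and the flat 28% rate is a crude simplification of the real AMT calculation.

```python
# Hypothetical early employee: all numbers invented for illustration.
options = 100_000        # vested, unexercised ISOs
strike = 2.00            # per-share strike price
fmv = 20.00              # company's current fair market value per share

exercise_cost = options * strike        # cash owed just to exercise
spread = options * (fmv - strike)       # paper gain, taxable under AMT
amt_estimate = spread * 28 / 100        # crude flat-rate AMT approximation

print(exercise_cost)    # -> 200000.0
print(amt_estimate)     # -> 504000.0
```

So even with a modest strike price, exercising can mean finding $200,000 in cash plus a tax bill on gains that exist only on paper, with no market to sell shares into to cover either one.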

Another possibility that’s quickly gaining steam lately is to convert those ISOs to NSOs at the 90 day mark and extend the option window to something longer, like seven or ten years, instead of a blistering 90 days. In my mind, companies that haven’t extended their 90 day windows are actively stealing from their employees: the employees have worked hard to vest their options over a period of years, but because of the very success they helped build, they’re now unable to afford to exercise them.

I’ve talked a lot about this in greater length in my aptly-titled post, Fuck Your 90 Day Exercise Window, as well as started a listing of employee-friendly companies with extended exercise windows. Suffice to say, this is a pretty important aspect to me and was a big topic in the discussions surrounding my separation agreement.

I had been talking to various people in leadership for a few months hammering out the details, and had been under the impression that we had reached an agreement, but I was surprised to find out that wasn’t the case. I was informed 28 hours before my 90 day window closed that the agreement I thought I had didn’t exist. That left me 28 hours to either come up with hundreds of thousands of dollars I didn’t have to save half of my stock, or sign the agreement as-is and avoid losing half of my already-diminished stake. I opted to sign.


Get everything in writing. This also supports my earlier point of aiming to not do anything in the room while you’re getting fired; it allows you to take some time out and think things through once you have the legalese in front of you (and preferably in front of a lawyer).

I think it’s fully acceptable to stay on the record. So no phone calls, no meetings in person. Again, you’re up against people who have done this frequently in the past, and there’s a good chance these thoughts haven’t crossed your mind before.

A lot of it certainly might not even be malicious; I’d imagine a lot of people you chat with could be good friends who want to see you leave in good shape, but at the end of the day it’s really dicey to assume the company as a whole is deeply looking out for your interests. The only person looking out for your interests is you.

This also underlines the generally great advice of always knowing a good lawyer, a good accountant, and a good financial advisor. You don’t necessarily have to be currently engaged with a firm; just knowing who to ask for recommendations is a great start. If you can take some time and have some introductory calls with different firms ahead of time, that’s even better. The vast majority of legal and financial firms will be happy to take a quick introductory phone call with you free-of-charge to explain their value proposition. This is highly advantageous for you to do ahead of time so you don’t need to do this when you’re deep in the thick of a potential crisis.

All things considered, though, we did reach an agreement and I was officially free and clear of the company.

Life after

That brings us to the last few months and up to the present. I’ve spent the last year or so trying to sort out my life and my resulting depression. Shit sucks. Professionally I’ve done some consulting and private talks here and there, which have been tepidly interesting. I’ve also served in a formal advisory role to three startups, which I’ve really come to enjoy; after being so heads-down on a single problem for the last five years, it’s nice to get a fair amount of depth in multiple problem spaces, some of which are new to me.

But I still haven’t found the next thing I’m really interested in, which just feeds into the whole cycle some more. For better or worse, that’ll be changing pretty quickly, since I’m pretty broke after working part-time and living in San Francisco for so long. Even though I helped move a company’s valuation by almost two billion dollars, I haven’t made a dime from the company outside of a pretty below-average salary. That’s after six years.

Think on that, kids, when you’re busting your ass day and night to strike it rich with your startup dreams.


It’s cool to stay in touch. Something that’s kind of cracked me up lately is the sheer logistics behind keeping in touch with my former coworkers. On one hand, you lose out on your normal chat conversations, lunches, and in-person meetings with these colleagues. It’s just a human trait that it’s harder to keep these relationships up when they’re out of sight, out of mind.

Beyond that, though, when you’re out of the company you’re also out of the rolodex. You might not know someone’s phone number or personal email address anymore, for example. Much of the time you, as a current coworker, are in a better position to reach out to a former colleague than they are to you, since you still have access to those infrastructures. It’s possible someone would be up for a chat, but the difficulty of making contact is a bit of a barrier, so it’s fine to reach out and say hi sometimes! Even in the worst corporate breakups I’ve heard about, people are usually able to separate bad experiences with the company from bad experiences with you, so you shouldn’t be too worried about that if you weren’t directly involved.

The one aspect about all of this that you might want to keep in mind that I’ve heard crop up again and again from a number of former employees is around the idea of conversational topics.

In some sense I think it’s natural for existing employees to vent to former employees who may have left on bad terms about the gossip that’s happening at the company. To take an example from my own experiences, I don’t think there’s anyone else on the planet who knows more dirt on GitHub than I do at this point, even including current employees. I’m certain I gave two to three times as many 1:1s as anyone else at the company in the months following my departure; I think I was a natural point of contact for many who were frustrated with some internal aspect of the company they were dealing with.

And that’s fine, to an extent; schadenfreude is a thing, and it can be helpful for awhile, for both parties. But man, it gets tiring, particularly when you’re not paid for it. Especially when you’re still working through your own feelings about it all. It’s hard to move on when every day there’s something new to trigger it all over again.

So don’t be afraid to be cautious with what you say. If they’re up to hearing new dirt, so be it; if they’re a bit fried about it, chat about your new puppy instead. Everyone loves puppies.

One of the very bright points from all of this is the self-organized GitHub alumni network. Xubbers, we call ourselves. We have a private Facebook group and a private Slack room to talk about things. It’s really about 60% therapy, 20% shooting the shit just like the old days, and 20% networking and supporting each other as we move forward in our new careers apart.

I can’t overstate how much I’ve appreciated this group. In the past I’ve kept in contact with coworkers from previous points of employment, but I hadn’t worked somewhere with enough former employees to warrant a full alumni group.

Highly recommend pulling a group like this together for your own company. On a long enough timescale, you’re all going to join our ranks anyway. Unless you die first. Then we’ll mount your head on the wall like in a private hunter’s club or something. “The one that almost got away”, we’ll call it.

Xubber meetup

In some sense, I think alumni really continue the culture of the company, independent of what changes may or may not befall the company itself.

One of my favorite stories about all this lately is from Parse. Unfortunately, the circumstances around it aren’t super happy: after being acquired by Facebook, Parse ultimately was killed off last month.

The Parse alumni, though, got together last month to give their beloved company a proper send-off:

No funeral would be complete, though, without a cake. (I’m stretching the metaphor here, but that’s okay, just roll with it.) Parse’s take on the cake involved an upside-down Facebook “like” button, complete with blood:

The most important part of a company is the lasting mark they leave on the world. That mark is almost always the people. Chances are, your people aren’t going to be at your company forever. You want them to move on and do great things. You want them to carry with them the best parts of your culture on to new challenges, new companies, and new approaches.

Once you see that happening, then you can be satisfied with the job you’ve done.


Cultivate the relationship with your alumni. Immediately after parting ways with an employee, a number of important matters will require a lot of communication: healthcare, taxes, stock, and so on. That type of follow-on communication is important to get right.

There are plenty of longer-term relationships to keep in mind as well, though. Things like help with recruiting referrals, potential professional relationships with the former employee’s new company, and other bidirectional ways to help each other in general. It’s good to support those lines of communication.

One way to help this along is to simply provide an obvious point of contact. Having something like an alumni@ email address available is a huge benefit. Otherwise it becomes a smorgasbord of playing guess-the-email-account, which causes problems for your current employees as well. Just set up an alumni@ email alias to forward mail from, and keep it up to date through any organizational changes.
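As one illustration (the address and mail setup here are invented, not GitHub’s), with a classic Unix mail setup this can be as small as one line in `/etc/aliases`, pointing the alias at whoever currently owns alumni relations:

```
# /etc/aliases -- hypothetical example; a hosted mail provider's
# alias or group feature accomplishes the same thing
alumni: people-ops@example.com
```

After editing, run `newaliases` to rebuild the alias database. When ownership changes, you update one line instead of chasing down every former employee with a new address.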

The last thing to consider is that your alumni are a truly fantastic source of recruiting talent. Most employment terminations are either voluntary (i.e., quitting) or at least on fairly good terms. There are plenty of reasons to leave a job for purposes unrelated to your overall opinion of the company: maybe you’re moving to a different city, or you’re taking a break from work to focus on your kids, or you simply want to try something new. You can be an advocate for your former employer without having to continue your tenure there yourself.

And that’s a good thing. Everyone wants to be the one who helps their friend find a new job. That’s one of the best things you can do for someone. If the company treated them well, they can treat the company well by helping to staff it with good people.

If the company has a poor relationship with former employees, however, one can expect that relationship to go both ways. And nothing is a stronger signal for prospective new hires than to talk to former employees and get their thoughts on the situation.


It’s not your company. If you don’t own the company, the company owns you.

That’s really been a hard lesson for me. I was pretty wrapped up in working there. It’s a broader concept, really, shoved down our throats in the tech industry. Work long hours and move fast. Here, try on this company hoodie. Have this catered lunch so you don’t have to go out into the real world. This is your new home. The industry is replete with this stuff.

One of my friends took an interesting perspective:

I always try to leave on a high note. Because once you’re there, you’re never going to hit that peak again.

What she was getting at is that I think you’ll know. You’ll know the difference between doing far and away your best work, and doing work that is still good, but just nominally better than what you’ve been doing. Once you catch yourself adjusting to that incremental progression… maybe it’s time to leave, to change things up. Just thought that was interesting.

One of my favorite conversations I’ve had recently was with Ron Johnson. Ron was in charge of rolling out the Apple Store: everything from the Genius Bar to the physical setup to how the staff operated. He eventually left Apple and became the CEO at JC Penney, one of the large stalwart department stores in the United States. Depending on who you ask, he either revolutionized what department stores could be but ran out of time to see the changes bear fruit, or seriously jeopardized JC Penney’s relationship with its customers by putting them through some new changes.

In either case, there had been some discussions internally and he had agreed to resign. A few days later, the board went ahead and very publicly fired him instead.

We chatted about this, and he said something that I really think helped clarify my opinion on everything:

There’s nothing wrong with moving along… regardless of whether it is self-driven or company-driven. Maybe we need new language… right now it’s either we resign or get fired.

Maybe there’s a third concept which is “next”.

Maybe we should simply recognize it’s time for next.

I like that sentiment.

Firing people is a normal function in a healthy, growing company. The company you start at might end up very distinctly different by the time you leave it. Or you might be the one who does the changing. Life’s too nuanced to make these blanket assumptions when we hear about someone getting fired.

Talk about it. If not publicly, then talk openly with your friends and family about things. I don’t know much, but I do know we can’t start fixing and improving this process if we continue to push the discussions to dark alleyways of our minds.

When I finished this talk in the UK last week, I was kind of nervous about how many in the audience could really identify with aspects that I was describing. Shortly after the conference finished up we went to the conference after-party and I was showered with story after story of bad experiences, good experiences, and just overall experiences, from people who hadn’t really been able to talk frankly about these topics before. It was pretty humbling. So many people have stories.

Thanks for reading my story.

What’s next?

News stories from Tuesday 01 March, 2016

01:00 How to Deploy Software » Post from Zach Holman

How to
Deploy Software

Make your team’s deploys as boring as hell and stop stressing about it.

Let's talk deployment

Whenever you make a change to your codebase, there's always going to be a risk that you're about to break something.

No one likes downtime, no one likes cranky users, and no one enjoys angry managers. So the act of deploying new code to production tends to be a pretty stressful process.

It doesn't have to be as stressful, though. There's one phrase I'm going to be reiterating over and over throughout this whole piece:

Your deploys should be as boring, straightforward, and stress-free as possible.

Deploying major new features to production should be as easy as starting a flamewar on Hacker News about spaces versus tabs. They should be easy for new employees to understand, they should be defensive towards errors, and they should be well-tested far before the first end-user ever sees a line of new code.

This is a long — sorry not sorry! — written piece specifically about the high-level aspects of deployment: collaboration, safety, and pace. There's plenty to be said for the low-level aspects as well, but those are harder to generalize across languages and, to be honest, a lot closer to being solved than the high-level process aspects. I love talking about how teams work together, and deployment is one of the most critical parts of working with other people. I think it's worth your time to evaluate how your team is faring, from time to time.

A lot of this piece stems from both my experiences during my five-year tenure at GitHub and during my last year of advising and consulting with a whole slew of tech companies big and small, with an emphasis on improving their deployment workflows (which have ranged from "pretty respectable" to "I think the servers must literally be on fire right now"). In particular, one of the startups I'm advising is Dockbit, whose product is squarely aimed at collaborating on deploys, and much of this piece stems from conversations I've had with their team. There's so many different parts of the puzzle that I thought it'd be helpful to get it written down.

I'm indebted to some friends from different companies who gave this a look-over and helped shed some light on their respective deploy perspectives: Corey Donohoe (Heroku), Jesse Toth (GitHub), Aman Gupta (GitHub), and Paul Betts (Slack). I continually found it amusing how the different companies might have taken different approaches but generally all focused on the same underlying aspects of collaboration, risk, and caution. I think there's something universal here.

Anyway, this is a long intro and for that I'd apologize, but this whole goddamn writeup is going to be long anyway, so deal with it, lol.

Table of Contents

  1. Goals

    Aren't deploys a solved problem?

  2. Prepare

    Start prepping for the deploy by thinking about testing, feature flags, and your general code collaboration approach.

  3. Branch

    Branching your code is really the fundamental part of deploying. You're segregating any possible unintended consequences of the new code you're deploying. Start thinking about different approaches involved with branch deploys, auto deploys on master, and blue/green deploys.

  4. Control

    The meat of deploys. How can you control the code that gets released? Deal with different permissions structures around deployment and merges, develop an audit trail of all your deploys, and keep everything orderly with deploy locks and deploy queues.

  5. Monitor

    Cool, so your code's out in the wild. Now you can fret about the different monitoring aspects of your deploy, gathering metrics to prove your deploy, and ultimately making the decision as to whether or not to roll back your changes.

  6. Conclusion

    "What did we learn, Palmer?"
    "I don't know, sir."
    "I don't fuckin' know either. I guess we learned not to do it again."
    "Yes, sir."

How to Deploy Software was originally published on March 1, 2016.


Aren't deploys a solved problem?

If you’re talking about the process of taking lines of code and transferring them onto a different server, then yeah, things are pretty solved and are pretty boring. You’ve got Capistrano in Ruby, Fabric in Python, Shipit in Node, all of AWS, and hell, even FTP is going to stick around for probably another few centuries. So tools aren’t really a problem right now.

So if we have pretty good tooling at this point, why do deploys go wrong? Why do people ship bugs at all? Why is there downtime? We’re all perfect programmers with perfect code, dammit.

Obviously things happen that you didn’t quite anticipate. And that’s where I think deployment is such an interesting area for small- to medium-sized companies to focus on. Very few areas will give you a bigger bang for your buck. Can you build processes into your workflow that anticipate these problems early? Can you use different tooling to make collaborating on your deploys easier?

This isn't a tooling problem; this is a process problem.

The vast, vast majority of startups I've talked to the last few years just don't have a good handle on what a "good" deployment workflow looks like from an organizational perspective.

You don't need release managers, you don't need special deploy days, you don't need all hands on deck for every single deploy. You just need to take some smart approaches.


Start with a good foundation

You've got to walk before you run. I think there's a trendy aspect of startups out there that all want to get on the coolest new deployment tooling, but when you pop in and look at their process they're spending 80% of their time futzing with the basics. If they were to streamline that first, everything else would fall in place a lot quicker.


Testing is the easiest place to start. It's not necessarily part of the literal deployment process, but it has a tremendous impact on it.

There's a lot of tricks that depend on your language or your platform or your framework, but as general advice: test your code, and speed those tests up.

My favorite quote about this was written by Ryan Tomayko in GitHub's internal testing docs:

We can make good tests run fast but we can't make fast tests be good.

So start with a good foundation: have good tests. Don't skimp out on this, because it impacts everything else down the line.

Once you start having a quality test suite that you can rely upon, though, it's time to start throwing money at the problem. If you have any sort of revenue or funding behind your team, the number one area to spend it on is whatever you run your tests on. If you use something like Travis CI or CircleCI, run parallel builds if you can and double whatever you're spending today. If you run on dedicated hardware, buy a huge server.

Moving to a faster test suite is one of the most important productivity gains I've seen companies earn, simply because it impacts iteration feedback cycles, time to deploy, developer happiness, and inertia. Throw money at the problem: servers are cheap, developers are not.

I made an informal Twitter poll asking my followers just how fast their test suites ran. Granted, it's hard to account for microservices, language variation, the surprising number of people who didn't have any tests at all, and full-stack vs quicker unit tests, but it still became pretty clear that most people are going to be waiting at least five minutes after a push to see the build status:

How fast should fast really be? GitHub's tests generally ran within 2-3 minutes while I was there. We didn't have a lot of integration tests, which allowed for relatively quick test runs, but in general the faster you can run them the faster you're going to have that feedback loop for your developers.

There are a lot of projects around aimed at helping parallelize your builds. There's parallel_tests and test-queue in Ruby, for example. There's a good chance you'll need to write your tests differently if your tests aren't yet fully independent from each other, but that's really something you should be aiming to do in either case.

Feature Flags

The other aspect of all this is to start looking at your code and transitioning it to support multiple deployed codepaths at once.

Again, our goal is that your deploys should be as boring, straightforward, and stress-free as possible. The natural stress point of deploying any new code is running into problems you can't foresee, and you ultimately impact user behavior (i.e., they experience downtime and bugs). Bad code is going to end up getting deployed even if you have the best programmers in the universe. Whether that bad code impacts 100% of users or just one user is what's important.

One easy way to handle this is with feature flags. Feature flags have been around since, well, technically since the if statement was invented, but the first time I remember really hearing about a company's usage of feature flags was Flickr's 2009 post, Flipping Out.

These allow us to turn on features that we are actively developing without being affected by the changes other developers are making. It also lets us turn individual features on and off for testing.

Having features in production that only you can see, or only your team can see, or all of your employees can see provides for two things: you can test code in the real world with real data and make sure things work and "feel right", and you can get real benchmarks as to the performance and risk involved if the feature got rolled out to the general population of all your users.

The huge benefit of all of this means that when you're ready to deploy your new feature, all you have to do is flip one line to true and everyone sees the new code paths. It makes that typically-scary new release deploy as boring, straightforward, and stress-free as possible.
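To make this concrete, here's a minimal, hand-rolled sketch of a flag check; the `FLAGS` hash, the `STAFF_IDS` allowlist, and the flag name are all illustrative assumptions, not any particular library's API:

```ruby
# Hypothetical in-memory flag store. Shipping the feature to everyone
# is a one-line change: flip :enabled to true.
FLAGS = {
  "new-issues-ui" => { enabled: false, staff_only: true }
}

STAFF_IDS = [1, 2, 3] # illustrative staff allowlist

def feature_enabled?(flag_name, user_id)
  flag = FLAGS[flag_name]
  return false if flag.nil?     # unknown flags are off by default
  return true if flag[:enabled] # fully rolled out
  # Staff-only features stay invisible to everyone else.
  flag[:staff_only] && STAFF_IDS.include?(user_id)
end
```

With that in place, staff can live with the feature in production (`feature_enabled?("new-issues-ui", 1)` is true) while every other user keeps seeing the old codepath.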

Provably-correct deploys

As an additional step, feature flags provide a great way to prove that the code you're about to deploy won't have adverse impacts on performance and reliability. There's been a number of new tools and behaviors in recent years that help you do this.

I wrote a lot about this a couple years back in my companion written piece to my talk, Move Fast and Break Nothing. The gist of it is to run both codepaths of the feature flag in production and only return the results of the legacy code, collect statistics on both codepaths, and be able to generate graphs and statistical data on whether the code you're introducing to production matches the behavior of the code you're replacing. Once you have that data, you can be sure you won't break anything. Deploys become boring, straightforward, and stress-free.

Move Fast Break Nothing screenshot

GitHub open-sourced a Ruby library called Scientist to help abstract a lot of this away. The library's being ported to most popular languages at this point, so it might be worth your time to look into this if you're interested.
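If you just want the core idea without pulling in a library, here's a hand-rolled sketch of the technique: run both codepaths, record whether they agree, and always return the legacy result. This illustrates the pattern only; it is not Scientist's actual API:

```ruby
# Run the legacy codepath (use) and the candidate codepath (try),
# log whether they matched, and only ever return the legacy result.
def science(name, use:, try:, results: [])
  control = use.call
  begin
    candidate = try.call
    results << { experiment: name, matched: control == candidate }
  rescue => e
    # A crashing candidate must never affect the user.
    results << { experiment: name, error: e.class.name }
  end
  control
end
```

Once the log shows the candidate matching the control for a while in production, you can be fairly confident swapping the codepaths won't break anything.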

The other leg of all of this is percentage rollout. Once you're pretty confident that the code you're deploying is accurate, it's still prudent to only roll it out to a small percentage of users first to double-check and triple-check nothing unforeseen is going to break. It's better to break things for 5% of users instead of 100%.

There's plenty of libraries that aim to help out with this, ranging from Rollout in Ruby, Togglz in Java, fflip in JavaScript, and many others. There's also startups tackling this problem too, like LaunchDarkly.
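The usual trick behind these libraries is deterministic bucketing: hash the user and feature into a bucket from 0 to 99 and compare against the rollout percentage, so a given user keeps seeing the same thing as you ramp up. A minimal sketch, with a hashing scheme chosen for illustration rather than taken from any specific library:

```ruby
require "zlib"

# Deterministically place a user into a bucket from 0-99 for a feature,
# and roll the feature out to buckets below the given percentage.
def rolled_out?(feature, user_id, percent)
  bucket = Zlib.crc32("#{feature}:#{user_id}") % 100
  bucket < percent
end
```

Because the bucket is stable, ramping from 5% to 50% only ever adds users; nobody flips back and forth between old and new behavior as the percentage grows.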

It's also worth noting that this doesn't have to be a web-only thing. Native apps can benefit from this exact behavior too. Take a peek at GroundControl for a library that handles this behavior in iOS.

Feeling good with how you're building your code out? Great. Now that we got that out of the way, we can start talking about deploys.


Organize with branches

A lot of the organizational problems surrounding deployment stem from a lack of communication between the person deploying new code and the rest of the people who work on the app with her. You want everyone to know the full scope of changes you're pushing, and you want to avoid stepping on anyone else's toes while you do it.

There's a few interesting behaviors that can be used to help with this, and they all depend on the simplest unit of deployment: the branch.

Code branches

By "branch", I mean a branch in Git, or Mercurial, or whatever you happen to be using for version control. Cut a branch early, work on it, and push it up to your preferred code host (GitLab, Bitbucket, etc).

You should also be using pull requests, merge requests, or other code review to keep track of discussion on the code you're introducing. Deployments need to be collaborative, and using code review is a big part of that. We'll touch on pull requests in a bit more detail later in this piece.

Code Review

The topic of code review is long, complicated, and pretty specific to your organization and your risk profile. I think there's a couple important areas common to all organizations to consider, though:

  • Your branch is your responsibility. The companies I've seen who have tended to be more successful have all had this idea that the ultimate responsibility of the code that gets deployed falls upon the person or people who wrote that code. They don't throw code over the wall to some special person with deploy powers or testing powers and then get up and go to lunch. Those people certainly should be involved in the process of code review, but the most important part of all of this is that you are responsible for your code. If it breaks, you fix it… not your poor ops team. So don't break it.

  • Start reviews early and often. You don't need to finish a branch before you can request comments on it. If you can open a code review with imaginary code to gauge interest in the interface, for example, the twenty minutes spent doing that and getting told "no, let's not do this" are far preferable to blowing two weeks on the full implementation instead.

  • Someone needs to review. How you do this can depend on the organization, but certainly getting another pair of eyes on code can be really helpful. For more structured companies, you might want to explicitly assign people to the review and demand they review it before it goes out. For less structured companies, you could mention different teams to see who's most readily available to help you out. In either end of the spectrum, you're setting expectations that someone needs to lend you a hand before storming off and deploying code solo.

Branch and deploy pacing

There's an old joke that's been passed around from time to time about code review. Whenever you open a code review on a branch with six lines of code, you're more likely to get a lot of teammates dropping in and picking apart those six lines left and right. But when you push a branch that you've been working on for weeks, you'll usually just get people commenting with a quick 👍🏼 looks good to me!

Basically, developers are usually just a bunch of goddamn lazy trolls.

You can use that to your advantage, though: build software using quick, tiny branches and pull requests. Make them small enough to where it's easy for someone to drop in and review your pull in a couple minutes or less. If you build massive branches, it will take a massive amount of time for someone else to review your work, and that leads to a general slow-down with the pace of development.

Confused at how to make everything so small? This is where those feature flags from earlier come into play. When my team of three rebuilt GitHub Issues in 2014, we had shipped probably hundreds of tiny pull requests to production behind a feature flag that only we could see. We deployed a lot of partially-built components before they were "perfect". It made it a lot easier to review code as it was going out, and it made it quicker to build and see the new product in a real-world environment.

You want to deploy quickly and often. A team of ten could probably deploy at least 7-15 branches a day without too much fretting. Again, the smaller the diff, the more boring, straightforward, and stress-free your deploys become.

Branch deploys

When you're ready to deploy your new code, you should always deploy your branch before merging. Always.

View your entire repository as a record of fact. Whatever you have on your master branch (or whatever you've changed your default branch to be) should be noted as being the absolute reflection of what is on production. In other words, you can always be sure that your master branch is "good" and is a known state where the software isn't breaking.

Branches are the question. If you merge your branch into master first and then deploy master, you no longer have an easy way of determining what your good, known state is without doing an icky rollback in version control. It's not necessarily rocket science to do, but if you deploy something that breaks the site, the last thing you want to do is have to think about anything. You just want an easy out.

This is why it's important that your deploy tooling allows you to deploy your branch to production first. Once you're sure that your performance hasn't suffered, there's no stability issues, and your feature is working as intended, then you can merge it. The whole point of having this process is not for when things work, it's when things don't work. And when things don't work, the solution is boring, straightforward, and stress-free: you redeploy master. That's it. You're back to your known "good" state.


Part of all that is to have a stronger idea of what your "known state" is. The easiest way of doing that is to have a simple rule that's never broken:

Unless you're testing a branch, whatever is deployed to production is always reflected by the master branch.

The easiest way I've seen to handle this is to just always auto-deploy the master branch if it's changed. It's a pretty simple ruleset to remember, and it encourages people to make branches for all but the most risk-free commits.
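As a sketch of that ruleset, assuming a push-event webhook payload shaped roughly like what a Git host sends (a `ref` and an `after` SHA; the exact shape varies by host, so treat this as illustrative):

```ruby
# Decide whether a push event should trigger an auto-deploy:
# only pushes to the default branch, and not branch deletions
# (hosts signal deletion with an all-zero "after" SHA).
def auto_deploy?(payload, default_branch: "master")
  payload["ref"] == "refs/heads/#{default_branch}" &&
    payload["after"] != "0" * 40
end
```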

There's a number of features in tooling that will help you do this. If you're on a platform like Heroku, they might have an option that lets you automatically deploy new versions on specific branches. CI providers like Travis CI also will allow auto deploys on build success. And self-hosted tools like Heaven and hubot-deploy — tools we'll talk about in greater detail in the next section — will help you manage this as well.

Auto-deploys are also helpful when you do merge the branch you're working on into master. Your tooling should pick up a new revision and deploy the site again. Even though the content of the software isn't changing (you're effectively redeploying the same codebase), the SHA-1 does change, which makes it more explicit as to what the current known state of production is (which again, just reaffirms that the master branch is the known state).

Blue-green deploys

Martin Fowler has pushed this idea of blue-green deployment since his 2010 article (which is definitely worth a read). In it, Fowler talks about the concept of using two identical production environments, which he calls "blue" and "green". Blue might be the "live" production environment, and green might be the idle production environment. You can then deploy to green, verify that everything is working as intended, and make a seamless cutover from blue to green. Production gains the new code without a lot of risk.

One of the challenges with automating deployment is the cut-over itself, taking software from the final stage of testing to live production.

This is a pretty powerful idea, and it's become even more powerful with the growing popularity of virtualization, containers, and generally having environments that can be easily thrown away and forgotten. Instead of having a simple blue/green deployment, you can spin up production environments for basically everything in the visual light spectrum.

There's a multitude of reasons behind doing this, from having disaster recovery available to having additional time to test critical features before users see them, but my favorite is the additional ability to play with new code.

Playing with new code ends up being pretty important in the product development cycle. Certainly a lot of problems should be caught earlier in code review or through automated testing, but if you're trying to do real product work, it's sometimes hard to predict how something will feel until you've tried it out for an extended period of time with real data. This is why blue-green deploys in production are more important than having a simple staging server whose data might be stale or completely fabricated.

What's more, if you have a specific environment that you've spun up with your code deployed to it, you can start bringing different stakeholders on board earlier in the process. Not everyone has the technical chops to pull your code down and spin it up locally — nor should they! If you can show your new live screen to someone in the billing department, for example, they can give you some realistic feedback on it prior to it going out live to the whole company. That can catch a ton of bugs and problems early on.

Heroku Pipelines

Whether or not you use Heroku, take a look at how they've been building out their concept of "Review Apps" in their ecosystem: apps get deployed straight from a pull request and can be immediately played with in the real world instead of just being viewed through screenshots or long-winded "this is what it might work like in the future" paragraphs. Get more people involved early before you have a chance to inconvenience them with bad product later on.


Controlling the deployment process

Look, I'm totally the hippie liberal yuppie when it comes to organizational matters in a startup: I believe strongly in developer autonomy, a bottom-up approach to product development, and generally will side with the employee rather than management. I think it makes for happier employees and better product. But with deployment, well, it's a pretty important, all-or-nothing process to get right. So I think adding some control around the deployment process makes a lot of sense.

Luckily, deployment tooling is an area where adding restrictions ends up freeing teammates up from stress, so if you do it right it's going to be a huge, huge benefit instead of what people might traditionally think of as a blocker. In other words, your process should facilitate work getting done, not get in the way of work.

Audit trails

I'm kind of surprised at how many startups I've seen that are unable to quickly bring up an audit log of deployments. There might be some sort of paper trail in a chat room transcript somewhere, but it's not something that is readily accessible when you need it.

The benefit of some type of audit trail for your deployments is basically what you'd expect: you'd be able to find out who deployed what to where and when. Every now and then you'll run into problems that don't manifest themselves until hours, days, or weeks after deployment, and being able to jump back and tie it to a specific code change can save you a lot of time.

A lot of services will generate these types of deployment listings fairly trivially for you. Amazon CodeDeploy and Dockbit, for example, have a lot of tooling around deploys in general but also serve as a nice trail of what happened when. GitHub's excellent Deployment API is a nice way to integrate with your external systems while still plugging deploy status directly into Pull Requests:

GitHub's deployment API

If you're playing on expert mode, plug your deployments and deployment times into one of the many, many time series databases and services like InfluxDB, Grafana, Librato, or Graphite. The ability to compare any given metric and layer deployment metrics on top of it is incredibly powerful: seeing a gradual increase of exceptions starting two hours ago might be curious at first, but not if you see an obvious deploy happen right at that time, too.

Deploy locking

Once you reach the point of having more than one person in a codebase, you're naturally going to have problems if multiple people try to deploy different code at once. While it's certainly possible to have multiple branches deployed to production at once — and it's advisable, as you grow past a certain point — you do need to have the tooling set up to deal with those deploys. Deploy locking is the first thing to take a look at.

Deploy locking is basically what you'd expect it to be: locking production so that only one person can deploy code at a time. There's many ways to do this, but the important part is that you make this visible.

The simplest way to achieve this visibility is through chat. A common pattern might be to set up deploy commands that simultaneously lock production like:

/deploy <app>/<branch> to <environment>


/deploy api/new-permissions to production

This makes it clear to everyone else in chat that you're deploying. I've seen a few companies hop in Slack and mention everyone in the Slack deploy room with @here I'm deploying […]!. I think that's unnecessary, and only serves to distract your coworkers. By just tossing it in the room you'll be visible enough. If it's been a while since a deploy and it's not immediately obvious if production is being used, you can add an additional chat command that returns the current state of production.
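Underneath the chat command, the lock itself can be very simple. Here's a sketch, assuming one lock per environment and an in-memory store; real tooling would persist this somewhere like Redis so the bot can restart without forgetting who's deploying:

```ruby
# One deploy lock per environment; only the holder can unlock.
class DeployLock
  def initialize
    @locks = {} # environment => user holding the lock
  end

  def lock(environment, user)
    return false if @locks.key?(environment) # someone's already deploying
    @locks[environment] = user
    true
  end

  def unlock(environment, user)
    return false unless @locks[environment] == user
    @locks.delete(environment)
    true
  end

  def holder(environment)
    @locks[environment]
  end
end
```

A `/deploy` chat command would call `lock`, refuse with the current holder's name if it returns false, and `unlock` once the branch is merged.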

There's a number of pretty easy ways to plug this type of workflow into your chat. Dockbit has a Slack integration that adds deploy support to different rooms. There's also an open source option called SlashDeploy that integrates GitHub Deployments with Slack and gives you this workflow as well (as well as handling other aspects like locking).

Another possibility that I've seen is to build web tooling around all of this. Slack has a custom internal app that provides a visual interface to deployment. Pinterest has open sourced their web-based deployment system. You can take the idea of locking to many different forms; it just depends on what's most impactful for your team.

Once a deploy's branch has been merged to master, production should automatically unlock for the next person to use.

There's a certain amount of decorum required while locking production. Certainly you don't want people to wait to deploy while a careless programmer forgot they left production locked. Automatically unlocking on a merge to master is helpful, and you can also set up periodic reminders to mention the deployer if the environment had been locked for longer than 10 minutes, for instance. The idea is to shit and get off the pot as soon as possible.

Deploy queueing

Once you have a lot of deployment locks happening and a lot of people on board deploying, you're obviously going to have some deploy contention. For that, draw from the deepest reserve of Britishness inside of you, and form a queue.

A deploy queue has a couple parts: 1) if there's a wait, add your name to the end of the list, and 2) allow for people to cut the line (sometimes Really Important Deploys Need To Happen Right This Minute and you need to allow for that).
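Those two rules can be sketched in a few lines; the class and method names here are made up for illustration:

```ruby
# A deploy queue: join at the back, or cut the line for
# Really Important Deploys That Need To Happen Right This Minute.
class DeployQueue
  def initialize
    @queue = []
  end

  def join(user)
    @queue << user # normal case: wait your turn
  end

  def cut(user)
    @queue.unshift(user) # urgent deploys go straight to the front
  end

  def next_up
    @queue.shift
  end
end
```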

The only problem with deploy queueing is having too many people queued to deploy. GitHub's been facing this internally the last year or so; come Monday when everybody wants to deploy their changes, the list of those looking to deploy can be an hour or more long. I'm not particularly a microservices advocate, but I think deploy queues specifically see a nice benefit if you're able to split things off from a majestic monolith.


There's a number of methods to help restrict who can deploy and how someone can deploy.

2FA is one option. Hopefully your employees' chat accounts won't get popped, and hopefully they have other security measures enabled on their machines (full disk encryption, strong passwords, etc.). But for a little more peace of mind you can require a 2FA process to deploy.

You might get 2FA from your chat provider already. Campfire and Slack, for example, both support 2FA. If you want it to happen every time you deploy, however, you can build a challenge/response step into the flow. Heroku and Basecamp both have a process like that internally, for instance.

Another possibility to handle the who side of permissions is to investigate what I tend to call "riding shotgun". I've seen a number of companies who have either informal or formal processes or tooling for ensuring that at least one senior developer is involved in every deploy. There's no reason you can't build out a 2FA-style process like that into a chat client, for example, requiring both the deployer and the senior developer that's riding shotgun to confirm that code can go out.


Admire and check your work

Once you've got your code deployed, it's time to verify that what you just shipped actually does what you intended it to do.

Check the playbook

All deploys should really hit the exact same game plan each time, no matter if it's a frontend change or a backend change or anything else. You're going to want to check to see if the site is still up, if the performance took a sudden turn for the worse, if error rates started rising, or if there's an influx of new support issues. It's to your advantage to streamline that game plan.

If you have multiple sources of information for all of these aspects, try putting a link to each of these dashboards in your final deploy confirmation in chat, for example. That'll remind everyone every time to look and verify they're not impacting any metrics negatively.

Ideally, this should all be drawn from one source. Then it's easier to direct a new employee, for example, towards the important metrics to look at while making their first deploy. Pinterest's Teletraan, for example, has all of this in one interface.


There's a number of metrics you can collect and compare that will help you determine whether you just made a successful deploy.

The most obvious, of course, is the general error rate. Has it dramatically shot up? If so, you probably should redeploy master and go ahead and fix those problems. You can automate a lot of this, and even automate the redeploy if the error rate crosses a certain threshold. Again, if you assume the master branch is always a known state you can roll back to, it makes it much easier to automate auto-rollbacks if you trigger a slew of exceptions right after deploy.
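An automated check like that might look like the following sketch, assuming you can sample the error rate before and after a deploy; the thresholds here are illustrative, not recommendations:

```ruby
# Trigger a rollback to master if the post-deploy error rate either
# crosses an absolute ceiling or spikes relative to the pre-deploy baseline.
def should_roll_back?(baseline_error_rate, current_error_rate,
                      absolute_max: 0.05, relative_spike: 3.0)
  return true if current_error_rate > absolute_max
  return false if baseline_error_rate.zero?
  (current_error_rate / baseline_error_rate) >= relative_spike
end
```

Wire this into whatever watches your exception tracker after each deploy, and the "easy out" of redeploying master doesn't even need a human to pull the trigger.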

The deployments themselves are interesting metrics to keep on-hand as well. Zooming out over the last year or so can help give you a good example of whether you're scaling the development pace up, or if it's clear that there's some problems and things are slowing down. You can also take a step further and collect metrics on who's doing the deploying and, though I haven't heard of anyone doing this explicitly yet, tie error rates back to deployer and develop a good measurement of who are reliable deployers on the team.

Post-deploy cleanup

The final bit of housework that's required is the cleanup.

The slightly aggressively-titled "Feature Toggles are one of the worst kinds of Technical Debt" talks a bit about this. If you're building things with feature flags and staff deployments, you run the risk of complicating the long-term sustainability of your codebase:

The plumbing and scaffolding logic to support branching in code becomes a nasty form of technical debt, from the moment each feature switch is introduced. Feature flags make the code more fragile and brittle, harder to test, harder to understand and maintain, harder to support, and less secure.

You don't need to do this right after a deploy; if you have a bigger feature or bugfix that needs to go out, you'll want to spend your time monitoring metrics instead of immediately deleting code. You should do it at some point after the deploy, though. If you have a large release, you can make it part of your shipping checklist to come back and remove code maybe a day or a week after it's gone out. One approach I liked to take was to prepare two pull requests: one that toggles the feature flag (i.e., ships the feature to everyone), and one that cleans up and removes all the excess code you introduced. When I'm sure that I haven't broken anything and it looks good, I can just merge the cleanup pull request later without a lot of thinking or development.
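A hypothetical sketch of what those two pull requests touch: the first flips the flag on for everyone, the second deletes the guard and the legacy path entirely. The flag helper and method names here are made up for illustration:

```ruby
# Hypothetical feature-flag guard before cleanup. PR #1 flips the
# flag value to true for everyone; PR #2 deletes FLAGS entry, the
# helper call, and the else branch, leaving only the new behavior.

FLAGS = { new_checkout: true } # flipped to true in the first PR

def feature_enabled?(name)
  FLAGS.fetch(name, false)
end

def checkout(cart)
  if feature_enabled?(:new_checkout)
    "new checkout for #{cart}"    # the only line the cleanup PR keeps
  else
    "legacy checkout for #{cart}" # deleted by the cleanup PR
  end
end
```

Preparing both PRs at once means the cleanup diff is already reviewed and waiting; merging it later is a no-thought operation.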

You should celebrate this internally, too: it's the final sign that your coworker has successfully finished what they were working on. And everyone likes it when a diff is almost entirely red. Removing code is fun.


You can also delete the branch when you're done with it. There's nothing wrong with deleting branches once they've been merged. If you're using GitHub's pull requests, for example, you can always restore a deleted branch, so you'll benefit from having it cleared out of your branch list without actually losing any data. This step can be automated as well: periodically run a script that looks for stale branches that have been merged into master, and then delete those branches.
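That periodic sweep can be sketched in a few lines of Ruby shelling out to git. The `master`/`main` protected list is an assumption about your repo layout:

```ruby
# Sketch of a stale-branch sweep: find branches already merged into
# master and delete them. `git branch -d` (lowercase) refuses to
# delete unmerged work, so this is safe to run on a schedule.

PROTECTED = %w[master main].freeze

# Parse `git branch --merged master` output into deletable names.
# Git prefixes the current branch with "*", hence the delete.
def stale_branches(merged_output)
  merged_output.lines
               .map { |line| line.delete("*").strip }
               .reject(&:empty?)
               .reject { |name| PROTECTED.include?(name) }
end

def sweep_stale_branches!
  stale_branches(`git branch --merged master`).each do |branch|
    system("git", "branch", "-d", branch)
  end
end
```

Run `sweep_stale_branches!` from cron or a scheduled CI job; since the branches were merged, anything it deletes is still reachable from master.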


The whole ballgame

I only get emotional about two things: a moving photo of a Golden Retriever leaning with her best friend on top of a hill overlooking an ocean looking towards a beautiful sunset, and deployment workflows. The reason I care so much about this stuff is because I really do think it's a critical part of the whole ballgame. At the end of the day, I care about two things: how my coworkers are feeling, and how good the product I'm working on is. Everything else stems from those two aspects for me.

Deployments can cause stress and frustration, particularly if your company's pace of development is sluggish. Slow, painful deploys also keep you from getting features and fixes out to your users.

I think it's worthwhile to think about this, and worthwhile to improve your own workflows. Spend some time and get your deploys to be as boring, straightforward, and stress-free as possible. It'll pay off.

Written by Zach Holman. Thanks for reading.

If you liked this, you might like some of the other things I've written. If you didn't like this, well, they're not all winners.

I also do some consulting about all of this stuff as well if your company's looking for help.

Did reading this leave you with questions, or do you have anything you'd like to talk about? Feel free to drop by my ask-me-anything repository on GitHub and file a new issue so we can chat about it in the open with other people in the community.

I hope we eventually domesticate sea otters.

News stories from Thursday 28 January, 2016

Favicon for Zach Holman 01:00 Startup Interviewing is Fucked » Post from Zach Holman Visit off-site link

Silicon Valley is full of startups who fetishize the candidate that comes into the interview, answers a few clever fantasy coding challenges, and ultimately ends up the award-winning hire that will surely implement the elusive algorithm that will herald a new era of profitability for the fledgling VC-backed company.

Most startups have zero users and are a glimmer of the successful business they might wind up being some day. But we’re still romanticizing the idea that programming riddles will magically be the best benchmark for hiring, even though technology is very rarely the cause for any given startup’s success.

Know what you need

There’s such a wild gulf between what gets asked in interviews and what gets done in the gig’s daily grind that it’s a wonder how startups make it out of the initial incubation phase in the first place.

I’m a product engineer. I don’t have a formal CS background, but I build things for the web, and I’m really good at it. Not once in the last ten months of on-and-off interviewing have I ever seen anything remotely close to a view or a controller or even a model. Not every company has insisted upon using programming riddles as a hiring technique, but the ones that do almost exclusively focus on weird algorithmic approaches to problems that don’t exist in the real world.

Interviewer: How would you write a method to do this operation?

Me: writes a one-liner in Ruby

Interviewer: Okay now what if you couldn’t use the standard library? Imagine it’s a 200GB file and you have to do it all in memory in Ruby.

Me: Why the fuck would I do that?

Certainly there are some jobs where being extremely performant and algorithmically “correct” are legitimate things to interview against. But look around: how many small, less-than-50-person startups are doing work like that? The dirty secret is that most startups for the first few years are glorified CRUD apps, and the well-rounded, diverse people who can have the biggest impact tend to be the ones who are comfortable wearing a lot of hats.

My favorite few tweets from this week talked about this:

Worry more about whether you’re self-selecting the wrong people into your organization.

Power dynamics

A huge problem with all this is that it creates a power dynamic that all but assures that people who are bad at technical interviews will fail.

Algorithm-based challenges typically come from a place where the interviewer, in all their self-aggrandizing smugness, comes up with something they think demonstrates cleverness. A reliable bet is to try solving it with recursion from the start; it’s goddamn catnip for interviewers. If that doesn’t work, try doing it all in one pass rather than several, because the extra 1ms you save in this use case will surely demonstrate your worth to the organization.

When you come at it from this perspective, you’re immediately telling your prospective coworker that “I have a secret that only I know right now, and I want you to arrive at this correct answer.” It becomes stressful because there is a correct answer.

Every single product I’ve built in my professional career has not had a correct answer. It’s more akin to carving a statue out of marble: you have a vague understanding of what you want to see, but you have to continually chip away at it and refine it until you end up with one possible result. You arrive at the answer, together, with your teammates. You don’t sit on a preconceived answer and direct your coworker to slug through it alone.


This is why I so strongly advocate for pair programming at some point in the interview process. Take an hour and knock off whatever bug or feature you were going to work on together. Don’t happen to be doing anything interesting today? The bug is too “boring”? Cool, then why are you working on it? If it’s representative of the real work that the candidate will face in the job, then it’s good enough to interview on. Besides, you can learn a lot from someone even in the simplest of fixes.

Build something real together. The very act of doing that entirely changes the power dynamic; I cannot stress that enough. Whereas previously you had someone struggling to find out a secret only you were initially privy to, you’re now working together on a problem neither of you have a firm answer to yet. Before it was adversarial; now it’s collaborative. It’ll put your candidate at ease, and they’ll be able to demonstrate their skillset to you much easier.

No one has any idea what they’re doing

I’ve heard — and experienced — so many things happening in tech interviews that are just bonkers.

You have stories from people like Max Howell who get rejected from jobs ostensibly because he’s not a good enough developer to whiteboard out algorithms, even though he built one of the most popular tools for software developers today.

I interviewed for a director of engineering role last year for a startup with famously massive growth that had fundamental problems with their hundreds of developers not being able to get any product shipped. I had a good discussion with their CEO and CTO about overhauling their entire process, CI, deployment, and management structure, and then when I went in for the final round of interviews for this non-programming leadership role the interviews were done almost entirely by junior developers who asked me beginner JavaScript questions. It just boggles my mind.

Look, I get it. It takes time and effort to interview someone, and most of you just want to get back to building stuff. Coming up with a standard question lets you get away with doing more with less effort, and gives you a modicum of an ability for comparison across different candidates.

But really take a long look at whether this selects the right candidates. The skill set needed for most early startups — particularly of early employees — is a glorious, twisted mess of product, code, marketing, design, communication, and empathy. Don’t filter out those people by doing what a Microsoft or an Apple does. They’re big companies, and let me be the first to tell you: that ain’t you right now. You have different priorities.

It’s more work, but it makes for better companies and better hires, in my opinion. But what do I know; I failed those fucking tests anyway.

News stories from Friday 08 January, 2016

Favicon for Zach Holman 01:00 Fuck Your 90 Day Exercise Window » Post from Zach Holman Visit off-site link

There are a lot of problems with the compensation we give early employees at startups. I don’t know how to fix all of them, but one obvious area to start directing our anger towards is something we can fix relatively quickly: the customary 90 day exercise window.

90 days and poof

Most startups give you a 90 day window to exercise your vested options once you leave the company — either through quitting or through termination — or all of your unexercised options vanish.

This creates a perverse incentive for employees not to grow the company too much.

For example: say you’re employee number one at A Very Cool Startup, and, through your cunning intellect and a lot of luck and a lot of help from your friends, you manage to help grow the company to the pixie fairy magic dragon unicorn stage: a billion dollar valuation. Cool! You’re totes gonna be mad rich.


Ultimately, you end up leaving the company. Maybe the company’s outgrown you, or you’re bored after four years, or your spouse got a new job across the country, or you’ve been fired, or maybe you die, or hey, none of your business I just want out dammit. The company’s not public, though, so everything becomes trickier. With a 90 day exercise window, you now have three months to raise the money to pay to exercise your options and the additional tax burdens associated with exercising, otherwise you get nothing. In our imaginary scenario, that could be tens or hundreds of thousands of dollars. And remember: you’re a startup worker, so there’s a good chance you’ve been living off a smaller salary all along!

So you’re probably stuck. Either you fork out enough dough yourself on a monumentally risky investment, sell them on the secondary market (which most companies disallow post-Facebook IPO), give up a portion of equity in some shady half-sale-loan thing to various third parties, or forfeit the options entirely.

I mean, you did what you were supposed to: you helped grow that fucking company. And now, in part because of your success, it’s too expensive to own what you had worked hard to vest? Ridiculous.


How we got here wasn’t necessarily malicious. These 90 day exercise windows can likely be tied back to ISOs terminating, by law, at 90 days. NSOs came along for the ride. This was less problematic when we had a somewhat more liquid marketplace for employee equity. With IPOs taking much longer to happen combined with companies restricting sale on the secondary market, these 90 days have completely stifled the tech worker’s ability to even hold the equity they’ve earned, much less profit from it.

There’s a relatively easy solution: convert vested ISOs to nonquals and extend the exercise window from 90 days to something longer. Pinterest is moving to seven years (in part by converting ISOs to nonquals). Sam Altman suggests ten years. In either case, those are both likely long enough timespans for other options to arise for you: the company could go public (in which case you can sell shares on the open market to handle the tax hit), the company could fail (in which case you’re not stuck getting fucked over paying hundreds of thousands of dollars for worthless stock, which can even happen in a “successful” acquisition), you could become independently wealthy some other way, or the company could get acquired and you gain even more outs.

Naturally, modifying the stock agreement is a solution that only companies can take. So what can you, the humble worker bee, do?

The new norm

We need to encourage companies to start taking steps towards correcting the problems we see today. I want to see more employees able to retain the compensation they earned. I want to see this become the norm.

My friend’s trying to adopt some employee-friendly terms in the incorporation of his third startup, and he mentioned this to me specifically:

You have no idea how hard it’s been to try something different. Even tried to get a three year vest for my employees, because I think four years is a bullshit norm, and lawyers mocked me for 15 minutes. Said it would make my company uninvestable.

The more companies we can get shifting to these employee-friendly terms, bit by bit, the easier it is for everyone else to accept these as the norm. Start the conversation with prospective employers. Write and tweet about your own experiences. Ask your leadership if they’ll switch over.

Clap for ‘em

One final, important part is to applaud the companies doing it right, and to promote them amongst the startup community.

I just created a repository at holman/extended-exercise-windows that lists out companies who have extended their exercise windows. If you’re interested in working for a company that takes a progressive, employee-friendly stance on this, give it a look. If you’re a company who’s switched to a longer exercise window, please contribute! And if you’re at a company that currently only does 90 day exercise windows, give them a friendly heads-up, and hopefully we can add them soon enough.

You have 90 days to do this, and then I’m deleting the repo.

Just kidding.

News stories from Tuesday 01 December, 2015

Favicon for Fabien Potencier 00:00 Announcing 24 Days of Blackfire » Post from Fabien Potencier Visit off-site link

I still remember the excitement I had 15 years ago when I discovered my first programming advent calendar; it was one about Perl. It was awesome, and every year, I was waiting for another series of blog posts about great Perl modules. When I open-sourced symfony1, I knew that writing an advent calendar would help adoption; Askeet was indeed a great success and the first advent calendar I was heavily involved with. I wrote another one, Jobeet, for symfony 1.4 some years later.

And today, I'm very happy to announce my third advent calendar, this one about Blackfire. This time, the goal is different though: in this series, I won't write an application; instead, I'm going to look at some development best practices, covering topics like profiling, performance, testing, continuous integration, and my vision of performance optimization best practices.

I won't reveal more about the content of the 24 days as the point is for you to discover a new chapter day after day, but I can already tell you that I have some great presents for you... just one small clue: it's about Open-Sourcing something. I'm going to stop this blog post now before I tell you too much!

Enjoy the first installment for now as it has just been published.

News stories from Monday 12 October, 2015

Favicon for Zach Holman 01:00 Opt-in Transparency » Post from Zach Holman Visit off-site link

Behold… a great way to make your employees feel like shit:

Employee: Yeah… I just don’t really understand why we’re building it out this way. It doesn’t really make sense, and I think it’s going to ultimately be harmful for the company.

Manager: Just build it, please.

This exchange — though not uncommon — isn’t going to go away any time soon. At the end of the day, there’s a power relationship happening, and the employee ain’t gonna be the one to win out.

There’s a way to help combat the effects of it, though: context.

Yeah, but why?

There’s this concept I’ve been fascinated with for the last couple of years, and I’m just going to give it a name: opt-in transparency. Can you be fully open as an organization, but not force that openness upon everyone by default? Basically, I want as many people in the organization to have access to as many things as possible, but we’re all busy, and unless I want to go out of my way to dig into something, my time is respected enough not to bother me with every little detail.

Decision context

Here’s one of my favorite stories from my time at GitHub:

HR was making some change to health insurance. Insurance is not something that’s in my wheelhouse; I’m glad we had good coverage, of course, but I’ve been lucky enough to not be impacted by it one way or another too much.

That said, when the company-wide announcement about the new change in health plans came through, some minor thing in it triggered warning bells in my head. Whoa wait now, this seems a little shittier, what the fuck are they doing here? SOMEONE IS WRONG AND I KNOW THIS BECAUSE IM RIGHT

So I did what any self-aggrandizing self-crowned hero of the people would do: I blew the dust off my special custom-order Flamewar Keyboard 3000, plugged it in, and prepared to really bring the weight of My Unique Perfect Logic™ down on this thread.

Right before I was going to start typing, I noticed that the HR team member who posted the initial thread (thanks Heather!) had three URLs appended to the bottom of the post. These were links to an issue and two pull requests to an internal HR documentation repository where the HR team had discussed the changes that were announced in the thread. Curious, I clicked into them and saw that the discussions themselves spanned several weeks and several hundred comments, all covering the proposed changes.

By the time I finished reading the discussions, I was fully on board with the change, and I found it’s what I would have done had I been in their shoes during the decision process.

That was a pretty powerful realization. It’s one of those things where the output of a decision — in this case, changing insurance — didn’t immediately make sense to me, but the context surrounding the decision made all the sense in the world.

Design context

Over the years I would occasionally butt heads with my friend Kyle Neath, the first designer and former head of product at GitHub.

A lot of it stemmed from my reactions to possible screens he was designing. I’d say, hey, I’m not sure I really dig the latest comp you posted.

And more often than not — and this is a mark of a great designer — he’d come back with already-sketched pages of the same screen pictured six months, twelve months, three years, and five years from now. He gave us context behind his decisions. And almost every single time — that motherfucker — he would win the argument this way. By seeing that entire context of his future vision detailed out, I could very comfortably buy into a decision that I don’t necessarily agree with 100% today, because I’ve bought into the steps needed to get to the long-term vision.

Sharing that type of context can be very, very valuable, and it forces you to think broader than just today’s problems.

Async and open

This is part of the reason why I advocate so strongly for remote-first and asynchronous companies. By the very nature of how you work internally, you’re creating self-documenting progress upon which anyone in the future can come back and reflect.

People promote transparency as a huge culture value, and, while I don’t think that’s wrong, it really depends on how you use it. As the company grows larger, I don’t want to be inundated with every single goddamn decision. It becomes a paralyzing aspect of the culture, and pretty soon no one can get anything done. You don’t want to be the company that’s full of shippers who can’t ever get anything shipped.

If, on the other hand, you allow people to opt into the full context of these discussions, you promote a healthy and sheltered creative process, but still encourage others into your discussions only if they are deeply passionate about helping you out. From the outsider’s perspective you might not care about 95% of the discussions happening in the company, but you might spend that remaining 5% on something you can genuinely pitch in and improve.

Opt-in transparency is a good balance of transparency, inclusiveness, and creative problem solving. Try to aim for those goals rather than pushing all your decision making behind closed doors. It’s a better way to create.

Favicon for Zach Holman 01:00 Dev Evangelism » Post from Zach Holman Visit off-site link

I think the first question that should be asked after every developer evangelist finishes their talk is ”YO DO YOU ACTUALLY BELIEVE WHAT U JUST SAID THO”.

A sneaky world

Dev evangelism is this weird world where companies pay employees to go out to conferences and meetups — really whoever will have ‘em — and give talks. This is definitely not a new thing, although the last few years it’s been feeling more and more prevalent.


I mean, I get it. Evangelism gets your foot in the door in a lot of hard areas right now: hiring, getting the word out about your company, and showing people how to use your product. It’s not a horrible way to do it, either: I’d much rather see companies support conferences as a way of hiring rather than pay recruiters to spam every developer they can get their hands on.

I’m just worried about how some of these companies are taking a good thing and twisting it for their own purposes.

Supporting whatever

I gave a lot of talks while at GitHub, and I started hearing “oh yeah you’re that dev evangelist at GitHub!” from time to time. This always made me feel funny because I considered myself a developer first and foremost; I had the most commits in the company, dammit, why don’t people who don’t have access to the repo just inherently know that? Sheesh.

I think “evangelism” is done best when you pull from the people actually doing the work. GitHub used to support their employees and let them give talks at any conference they were invited to or were accepted to speak at. The reason I liked this policy was that the goal was to support employees, which in turn led to better talks. It was pretty genuine, and the whole community gained from it. We were completely hands-off when it came to what the talks were about… some were inevitably about experiences at GitHub, some were about programming, and some were about completely different topics altogether.

I think that’s a pretty important part right off the bat that a lot of companies tend to miss. The best talk is one where 1) the speaker really wants to give it, and 2) it’s something that’s drawn from experience rather than having the explicit goal to promote the company. Both of these are problematic if the speaker themselves aren’t deep in the trenches — gaining actual experience to share — rather than talking about theoretical things they gleaned from working in the industry ten years ago.

If a company’s spending money for the purposes of “evangelism”, they’re better off letting their employees talk about what’s most meaningful to share with other people rather than what directly benefits the company.

Sell without selling

I’ve gotten a number of offers lately from companies who don’t get this. They think I gave talks to sell the company, when really I gave talks because I thought they would be helpful to other people. My talks came from real pain: I had worked in bad environments before, and I could say hey, let me tell you a better way to work! It was lovely to share these things with people who might be in the same situation.

When these weirdo companies pinged me, they assumed I’m going to swing in there, drop a ton of talks about how their real-time app for middle managers is going to change the world, and they’ll make oodles of money. They assume companies can bend speakers to better amplify their own message.

That’s a fucked way of doing things. What’s more, the average person in the audience is going to see through this and tune out (or worse, make a mental note that they think your company’s fucked).

Interestingly enough, of course, making more genuine talks that resonate with people is a better way to market your company than trying to set out and market your company in the first place.

Don’t be afraid to invest in employees in areas that might not immediately contribute to your bottom line. Remember that talks are a great time to share what you’ve experienced with others, and you don’t have to monetize every single moment of that.

News stories from Thursday 01 October, 2015

Favicon for Zach Holman 01:00 Remote-First vs. Remote-Friendly » Post from Zach Holman Visit off-site link

Yeah! We’re remote friendly! We got Bob who lives in San Diego, we’re based in San Francisco, and we have a Slack room, and people usually can come in to work at ANY time (between 8am and 9am), but really fuck Bob he’s kind of a loner and we’re going to probably let him go soon anyway, but yeah you can totes work remote!

We’re kind of in the surly teenager phase of remote work right now. A lot of companies are using tools like Slack, Hangouts, and GitLab, so our technical chops are heading in the right direction… but our processes and workflows still have a long way towards maturity.

Just because you happen to use chat rooms doesn’t mean you’ve suddenly become a glorious haven for remote workers, dammit.

Tools- and process-first

Look: to some extent, I don’t even really care if everyone on your team actually lives in the same city. That’s great — they could live on the same block for all I care. Maybe you chain them to their desks in some sort of twisted open office floor plan perversion, who knows. The point is that our tools have come a long way, but unless we adjust our processes, we won’t use those tools to their fullest extent.


I think there’s a split between being remote-friendly — hiring some workers in a different city — and remote-first, meaning you build your development team around a workflow that embraces the concepts of remote work, whether or not your employees are remote.

By forcing yourself to use chat instead of meetings, by forcing yourself to use chatops to mercilessly automate every single manual action, you end up creating things faster, with more built-in context, and greater ability to share your knowledge across the organization.

If you’re not working in a remote-first environment today, not only are you not going to have a remote-friendly environment tomorrow, but you’re going to eventually have a hard time retaining talent and keeping competitive pace in the future.

The world of work is changing. That’s just the way it is.

Other ways to not fuck up remote work

Assuming you are operating in a remote-first environment and you want to dip your toes into hiring some remote workers, here are a couple of pointers that you might want to keep in mind:

Geographical makeup of teams

The number one indicator of well-functioning remote teams inside of a company is a reinforcement of remote employees in the structure of the team itself.

In simpler words:

Teams with one or two remote employees on them are fucked, and teams with a larger proportion tend to do better.

I’ve seen this play out again and again across many different spectrums of companies. It seems to be such a clear indicator that if you’re the only remote employee on a team, I’d recommend you might be proactive and try moving to a different team entirely (unless the team itself is particularly adept at working in a remote-first environment).

I think the rationale behind this perspective makes sense, and I don’t think it’s inherently mean-spirited, either: if seven people are in the same room in San Francisco and someone else is in Singapore, the seven locals are naturally going to have more informal and formal conversations about the product, unless they go out of their way to move their conversation online. It’s doable, but it takes a dedicated team to do that.

If you’re going to have a go at fostering a strong remote culture in your company, try structuring a diverse representation of geographies on a team. If you don’t have enough of one or the other, aim for either all-remote or all-local teams… it’s better than having the odd person stuck as the de facto outcast.

Timezones, not just geography

Having remote workers is one thing, but having remote workers across timezones is another.

I’ve seen some companies proudly say their culture is remote, but their workers tend to line up between Seattle, Portland, and San Francisco, all in one timezone. Even if they’re stretched across the United States or Europe, that’s still only three or four hours across, and that’s close enough to enforce a culture of a “work day”.


Again, that’s fine if that’s the culture you’re looking to be. But if you’re really aiming for a remote-first culture, spreading your team across really varying timezones will force you to use tools differently. You’ll lean more heavily on email, chat, pull requests and other asynchronous work rather than relying upon meetings, daily standups, and power work luncheons.

Just like the aforementioned diversity of remote/local ratio splits on teams, try to enforce a split of timezones as well, where possible. Baking that into the structure of the team itself helps you stay remote-first by default.

Face time

Lastly, and very simply: you can’t be digital all the time. If you want to build a great remote environment, you need to front the dough to have some in-person face time from time to time. Fly people in, get them meeting each other in meatspace, and make things a little more human.


It’s amazing what you can accomplish in a two-day trip. Creative problem solving becomes easier, people identify more closely with real faces instead of just avatars, and all around it’s a better experience than sitting behind computers all the time.

I’m pretty ecstatic that so many companies are getting better at remote work… I really am. When I first wrote How GitHub Works, a lot of this stuff was still a little amorphous at the time. Seeing the blistering growth of Slack and other tools over the last few years has been really lovely; I think people are really starting to get it.

But there’s always room to improve, of course! There really is a big gulf between being remote-friendly and being remote-first when you’re helping to build your culture, and it’s important to focus on ingraining these things into your process early and often.

News stories from Wednesday 02 September, 2015

Favicon for Grumpy Gamer 08:00 Happy Birthday Monkey Island » Post from Grumpy Gamer Visit off-site link

I guess Monkey Island turns 25 this month. It’s hard to tell.


Unlike today, you didn’t push a button and unleash your game to billions of people. It was a slow process of sending “gold master” floppies off to manufacturing, which was often overseas, then waiting for them to be shipped to stores and for the first of the teeming masses to buy the game.

Of course, when that happened, you rarely heard about it. There was no Internet for players to jump onto and talk about the game.

There was CompuServe and Prodigy, but those catered to a very small group of very highly technical people.

Lucasfilm’s process for finalizing and shipping a game consisted of madly testing for several months while we fixed bugs; then, 2 weeks before we were to send off the gold masters, the game would go into “lockdown testing”. If any bug was found, there was a discussion with the team and management about whether it was worth fixing. “Worth fixing” came down to a lot of factors, including how difficult the bug was to fix and whether the fix would likely introduce more bugs.

Also keep in mind that when I made a new build, I didn't just copy it to the network and let the testers at it; it had to be copied to four or five sets of floppy disks so it could be installed on each tester’s machine. It was a time-consuming and dangerous process. It was not uncommon for problems to creep up when I made the masters and to have to start the whole process again. It could take several hours to make a new set of five testing disks.

It’s why we didn’t take getting bumped from test lightly.

During the 2nd week of “lockdown testing”, if a bug was found we had to bump the release date. We required that each game had one full week of testing on the build that was going to be released. Bugs found during this last week had to be crazy bad to fix.

When the release candidate passed testing, it would be sent off to manufacturing. Sometimes this was a crazy process. The builds destined for Europe were going to be duplicated in Europe and we needed to get the gold master over there, and if anything slipped there wasn’t enough time to mail them. So, we’d drive down to the airport, find a flight headed to London, go to the gate, and ask a passenger if they would mind carrying the floppy disks for us; someone would meet them at the gate.

Can you imagine doing that these days? You can’t even get to the gate, let alone find a person that would take a strange package on a flight for you. Different world.


After the gold masters were made, I’d archive all the source code. There was no version control back then, or even network storage, so archiving the source meant copying it to a set of floppy disks.

I made these disks on Sept 2nd, 1990, so the gold masters were sent off within a few days of that. They have a 1.1 version due to Monkey Island being bumped from testing. I don’t remember if it was in the 1st or 2nd week of “lockdown”.

It’s hard to know when it first appeared in stores. It could have been late September or even October, and it happened without fanfare. The gold masters were made on the 2nd, so that’s what I'm calling The Secret of Monkey Island's birthday.


Twenty Five years. That’s a long time.

It amazes me that people still play and love Monkey Island. I never would have believed it back then.

It’s hard for me to understand what Monkey Island means to people. I am always asked why I think it’s been such an enduring and important game. My answer is always “I have no idea.”

I really don’t.

I was very fortunate to have an incredible team. From Dave and Tim to Steve Purcell, Mark Ferrari, an amazing testing department and everyone else who touched the game's creation. And also a company management structure that knew to leave creative people alone and let them build great things.


Monkey Island was never a big hit. It sold well, but not nearly as well as anything Sierra released. I started working on Monkey Island II about a month after Monkey Island I went to manufacturing, with no idea if the first game was going to do well or completely bomb. I think that was part of my strategy: start working on it before anyone could say “it’s not worth it, let's go make Star Wars games”.

There are two things in my career that I’m most proud of. Monkey Island is one of them and Humongous Entertainment is the other. They have both touched and influenced a lot of people. People will tell me that they learned English or how to read from playing Monkey Island. People have had Monkey Island weddings. Two people have asked me if it was OK to name their new child Guybrush. One person told me that he and his father fought and never got along, except for when they played Monkey Island together.

It makes me extremely proud and is very humbling.

I don’t know if I will ever get to make another Monkey Island. I always envisioned the game as a trilogy and I really hope I do, but I don’t know if it will ever happen. Monkey Island is now owned by Disney and they haven't shown any desire to sell me the IP. I don’t know if I could make Monkey Island 3a without complete control over what I was making and the only way to do that is to own it. Disney: Call me.

Maybe someday. Please don’t suggest I do a Kickstarter to get the money, that’s not possible without Disney first agreeing to sell it and they haven’t done that.


Happy Birthday to Monkey Island and a huge thanks to everyone who helped make it great and to everyone who kept it alive for Twenty Five years.



I thought I'd celebrate the occasion by making another point & click adventure, with verbs.

News stories from Friday 24 July, 2015

Favicon for Zach Holman 01:00 Diffing Images on the Command Line » Post from Zach Holman Visit off-site link

So about a year ago I realized that a play on Spaceman Spiff — one of Calvin’s alter-egos — would be a great name for a diffing tool. And that’s how spaceman-diff was born.

Then I forgot about it for a year. Classic open source. But like all projects with great names, it eventually came roaring back once I was able to make up an excuse — ANY mundane excuse — for its existence.

So today I’ll shout out to spaceman-diff, a very short script that teaches git diff how to diff image files on the command line.

Most of the heavy lifting is handled by jp2a: spaceman-diff is just a thin wrapper around it that makes it more suitable for diffing.


This ain’t the README, dammit, so go to the repo to learn about all of that junk.

Learning via Git internals

Part of the fun of doing this (of doing anything silly like this, really) is digging into your tools and seeing what’s available to you. Writing spaceman-diff was kind of a fun way to learn a little bit more about extending Git’s diffing workflow.

There are a couple of different approaches you can take to do this within Git. The first was slightly naive and basically involved overriding git-diff entirely. That way, spaceman-diff handled all the file extension checks and had quite a bit more control over the actual diff itself. git-diff was invoked using an external diff tool set up with gitattributes. If the file wasn’t an image, we could pass the diff back to git-diff using the --no-ext-diff flag. This was cool for a while, but it became problematic when you realize your diff wrapper would have to support all flags and commands passed to git-diff so you can fall back correctly (and, because of how Git passes commands to your external diff script, you don’t have access to the original command).

Another option is to use git difftool here. It’s actually a decent approach if you’re looking to completely replace the diffing engine entirely. Maybe you’re writing something like Kaleidoscope, or maybe a tool to view diffs directly on Bitbucket instead of something locally. It’s pretty flexible, but with spaceman-diff we only want to augment Git’s diff rather than rebuild the entire thing. It’d also be great to let people use git-diff rather than try to remember to type git-difftool when they want to diff images.

The Pro Git book has a nice section on how to diff binary files using gitattributes. There’s even a section on image files, although they use textconv, which basically takes a textual representation of a file (in their case, a few lines of image exif data: filesize, dimensions, and so on), and Git’s own diffing algorithm diffs it as normal blocks of text. That’s pretty close to what we want, but we’re not heathens here… we prefer a more visual diff. Instead, we use gitattributes to tell Git to use spaceman-diff for specific files, and spaceman-diff takes over the entire diff rendering at that point.
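For reference, the standard Git plumbing this relies on can be wired up like the following sketch (the driver name and install path are illustrative; spaceman-diff's own README has the canonical setup):

```shell
# Register a custom diff driver named "image" that shells out to spaceman-diff
git config diff.image.command '/usr/local/bin/spaceman-diff'

# Route image files through that driver via .gitattributes
printf '%s\n' '*.png diff=image' '*.jpg diff=image' >> .gitattributes
```

After that, a plain `git diff` on a matching file invokes the driver instead of Git's built-in text diff.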

Nothing ground-breaking or innovative in computer science happening here, but it’s a fun little hack. Git’s always interesting to dive into because they do offer a lot of little hooks into internals. If you’re interested in this, or if you have a special binary file format you use a lot that could be helpful as a low-fi format diff, take a peek and see what’s available to you.

Provided, of course, you have a great pun for your project name. That comes first.

News stories from Saturday 04 July, 2015

Favicon for Fabien Potencier 23:00 "Create your Own Framework" Series Update » Post from Fabien Potencier Visit off-site link

Three years ago, I published a series of articles about how to create a framework on top of the Symfony components on this blog.

Over the years, its contents have been updated to match the changes in Symfony itself but also in the PHP ecosystem (like the introduction of Composer). But those changes were made in a public GitHub repository, not on this blog.

As this series has proved to be popular, I've decided a few months ago to move it to the Symfony documentation itself where it would be more exposed and maintained by the great Symfony doc team. It was a long process, but it's done now.

Enjoy the new version in a dedicated documentation section, "Create your PHP Framework", on

News stories from Wednesday 24 June, 2015

Favicon for the web hates me 09:00 Projektwerkstatt: SecurityGraph » Post from the web hates me Visit off-site link

I work for a large publishing house and we easily have 500 software components in use. Most of it is probably PHP. Lots of Symfony, Symfony2, Drupal, WordPress. You know the usual suspects. None of us would have trouble listing the main frameworks; the awkward part is that we don't really know what else we have running on the side. […]

The post Projektwerkstatt: SecurityGraph appeared first on the web hates me.

News stories from Tuesday 23 June, 2015

Favicon for the web hates me 08:00 Projektwerkstatt: » Post from the web hates me Visit off-site link

Day two of our little creativity series. Yesterday was about a deeper integration of Twitter into WordPress, and today things get a bit more technical again. But first, from the beginning. Lately I've had the luck to do a little programming again. Since I became a team lead, I unfortunately don't get to do that as often, which […]

The post Projektwerkstatt: appeared first on the web hates me.

News stories from Monday 22 June, 2015

Favicon for the web hates me 13:00 Projektwerkstatt – twitter@wp » Post from the web hates me Visit off-site link

So let's start with the first part of the project workshop week. The idea is a little older by now, but I still think it's a good one. As you know, our blog is also on Twitter. We can proudly count a full 1431 followers. On top of that we run on WordPress, even if the technology behind it […]

The post Projektwerkstatt – twitter@wp appeared first on the web hates me.

Favicon for the web hates me 08:45 Woche der Projektideen » Post from the web hates me Visit off-site link

We start with a short post, or rather an announcement. Last week I once again had the time to write down some of my business ideas, and since, as so often, I can't implement them all myself, I'm presenting them to you, and maybe a team will form that's up for it. You will […]

The post Woche der Projektideen appeared first on the web hates me.

News stories from Friday 19 June, 2015

Favicon for the web hates me 13:00 Highlights 2014 » Post from the web hates me Visit off-site link

Unfortunately, we forgot to do this six months ago. For the sake of completeness, today we're publishing the list of the ten most successful articles from 2014.

The post Highlights 2014 appeared first on the web hates me.

Favicon for nikic's Blog 01:00 Internal value representation in PHP 7 - Part 2 » Post from nikic's Blog Visit off-site link

In the first part of this article, high level changes in the internal value representation between PHP 5 and PHP 7 were discussed. As a reminder, the main difference was that zvals are no longer individually allocated and don’t store a reference count themselves. Simple values like integers or floats can be stored directly in a zval, while complex values are represented using a pointer to a separate structure.

The additional structures for complex zval values all use a common header, which is defined by zend_refcounted:

struct _zend_refcounted {
    uint32_t refcount;
    union {
        struct {
            zend_uchar    type;
            zend_uchar    flags;
            uint16_t      gc_info;
        } v;
        uint32_t type_info;
    } u;
};

This header now holds the refcount, the type of the value and cycle collection info (gc_info), as well as a slot for type-specific flags.

In the following, the details of the individual complex types will be discussed and compared to the previous implementation in PHP 5. One of the complex types is references, which were already covered in the previous part. Another type that will not be covered here is resources, because I don’t consider them to be interesting.


Strings

PHP 7 represents strings using the zend_string type, which is defined as follows:

struct _zend_string {
    zend_refcounted   gc;
    zend_ulong        h;        /* hash value */
    size_t            len;
    char              val[1];
};

Apart from the refcounted header, a string contains a hash cache h, a length len and a value val. The hash cache is used to avoid recomputing the hash of the string every time it is used to look up a key in a hashtable. On first use it will be initialized to the (non-zero) hash.

If you’re not familiar with the quite extensive lore of dirty C hacks, the definition of val may look strange: It is declared as a char array with a single element - but surely we want to store strings longer than one character? This uses a technique called the “struct hack”: The array is declared with only one element, but when creating the zend_string we’ll allocate it to hold a larger string. We’ll still be able to access the larger string through the val member.

Of course this is technically undefined behavior, because we end up reading and writing past the end of a single-character array, however C compilers know not to mess with your code when you do this. C99 explicitly supports this in the form of “flexible array members”, however thanks to our dear friends at Microsoft, nobody needing cross-platform compatibility can actually use C99.
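The struct hack can be sketched in a few lines of C. This is a minimal illustration of the technique described above, not the real zend_string API; the type and function names are made up:

```c
#include <stdlib.h>
#include <string.h>

/* val is declared with one element, but we over-allocate so it can hold a
   whole string (illustrative of zend_string, not the actual type) */
typedef struct {
    size_t len;
    char   val[1];   /* really holds len bytes plus a NUL terminator */
} str_t;

str_t *str_new(const char *s) {
    size_t len = strlen(s);
    /* sizeof(str_t) already includes 1 byte of val, which covers the NUL */
    str_t *str = malloc(sizeof(str_t) + len);
    str->len = len;
    memcpy(str->val, s, len + 1);   /* copy the string including the NUL */
    return str;
}
```

The header and the character data land in a single allocation, which is exactly why zend_string gets away with one malloc per string.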

The new string type has some advantages over using normal C strings: Firstly, it directly embeds the string length. This means that the length of a string no longer needs to be passed around all over the place. Secondly, as the string now has a refcounted header, it is possible to share a string in multiple places without using zvals. This is particularly important for sharing hashtable keys.

The new string type also has one large disadvantage: While it is easy to get a C string from a zend_string (just use str->val) it is not possible to directly get a zend_string from a C string – you need to actually copy the string’s value into a newly allocated zend_string. This is particularly inconvenient when dealing with literal strings (constant strings occurring in the C source code).

There are a number of flags a string can have (which are stored in the GC flags field):

#define IS_STR_PERSISTENT           (1<<0) /* allocated using malloc */
#define IS_STR_INTERNED             (1<<1) /* interned string */
#define IS_STR_PERMANENT            (1<<2) /* interned string surviving request boundary */

Persistent strings use the normal system allocator instead of the Zend memory manager (ZMM) and as such can live longer than one request. Specifying the used allocator as a flag allows us to transparently use persistent strings in zvals, while previously in PHP 5 a copy into the ZMM was required beforehand.

Interned strings are strings that won’t be destroyed until the end of a request and as such don’t need to use refcounting. They are also deduplicated, so if a new interned string is created the engine first checks if an interned string with the given content already exists. All strings that occur literally in PHP source code (this includes string literals, variable and function names, etc) are usually interned. Permanent strings are interned strings that were created before a request starts. While normal interned strings are destroyed on request shutdown, permanent strings are kept alive.

If opcache is used interned strings will be stored in shared memory (SHM) and as such shared across all PHP worker processes. In this case the notion of permanent strings becomes irrelevant, because interned strings will never be destroyed.


Arrays

I will not talk about the details of the new array implementation here, as this is already covered in a previous article. It’s no longer accurate in some details due to recent changes, but all the concepts are still the same.

There is only one new array-related concept I’ll mention here, because it is not covered in the hashtable post: Immutable arrays. These are essentially the array equivalent of interned strings, in that they don’t use refcounting and always live until the end of the request (or longer).

Due to some memory management concerns, immutable arrays are only used if opcache is enabled. To see what kind of difference this can make, consider the following script:

for ($i = 0; $i < 1000000; ++$i) {
    $array[] = ['foo'];
}

With opcache the memory usage is 32 MiB, but without opcache usage rises to a whopping 390 MiB, because each element of $array will get a new copy of ['foo'] in this case. The reason an actual copy is done here (instead of a refcount increase) is that literal VM operands don’t use refcounting to avoid SHM corruption. I hope we can improve this currently catastrophic case to work better without opcache in the future.

Objects in PHP 5

Before considering the object implementation in PHP 7, let’s first walk through how things worked in PHP 5 and highlight some of the inefficiencies: The zval itself used to store a zend_object_value, which is defined as follows:

typedef struct _zend_object_value {
    zend_object_handle handle;
    const zend_object_handlers *handlers;
} zend_object_value;

The handle is a unique ID of the object which can be used to look up the object data. The handlers are a VTable of function pointers implementing various behaviors of an object. For “normal” PHP objects this handler table will always be the same, but objects created by PHP extensions can use a custom set of handlers that modifies the way it behaves (e.g. by overloading operators).

The object handle is used as an index into the “object store”, which is an array of object store buckets defined as follows:

typedef struct _zend_object_store_bucket {
    zend_bool destructor_called;
    zend_bool valid;
    zend_uchar apply_count;
    union _store_bucket {
        struct _store_object {
            void *object;
            zend_objects_store_dtor_t dtor;
            zend_objects_free_object_storage_t free_storage;
            zend_objects_store_clone_t clone;
            const zend_object_handlers *handlers;
            zend_uint refcount;
            gc_root_buffer *buffered;
        } obj;
        struct {
            int next;
        } free_list;
    } bucket;
} zend_object_store_bucket;

There’s quite a lot of things going on here. The first three members are just some metadata (whether the destructor of the object was called, whether this bucket is used at all and how many times this object was visited by some recursive algorithm). The following union distinguishes the case where the bucket is currently used or whether it is part of the bucket free list. Important for us is the case where struct _store_object is used:

The first member object is a pointer to the actual object (finally). It is not directly embedded in the object store bucket, because objects have no fixed size. The object pointer is followed by three handlers managing destruction, freeing and cloning. Note that in PHP destruction and freeing of objects are distinct steps, with the former being skipped in some cases (“unclean shutdown”). The clone handler is virtually never used. Because these storage handlers are not part of the normal object handlers (for whatever reason) they will be duplicated for every single object, rather than being shared.

These object store handlers are followed by a pointer to the ordinary object handlers. These are stored in case the object is destroyed without a zval being known (which usually stores the handlers).

The bucket also contains a refcount, which is somewhat odd given how in PHP 5 the zval already stores a reference count. Why do we need another? The problem is that while usually zvals are “copied” simply by increasing their refcount, there are also cases where a hard copy occurs, i.e. an entirely new zval is allocated with the same zend_object_value. In this case two distinct zvals end up using the same object store bucket, so it needs to be refcounted as well. This kind of “double refcounting” is one of the inherent issues of the PHP 5 zval implementation. The buffered pointer into the GC root buffer is also duplicated for similar reasons.

Now let’s look at the actual object that the object store points to. For normal userland objects it is defined as follows:

typedef struct _zend_object {
    zend_class_entry *ce;
    HashTable *properties;
    zval **properties_table;
    HashTable *guards;
} zend_object;

The zend_class_entry is a pointer to the class this object is an instance of. The two following members are used for two different ways of storing object properties. For dynamic properties (i.e. ones that are added at runtime and not declared in the class) the properties hashtable is used, which just maps (mangled) property names to their values.

However for declared properties an optimization is used: During compilation every such property is assigned an index and the value of the property is stored at that index in the properties_table. The mapping between property names and their index is stored in a hashtable in the class entry. As such the memory overhead of the hashtable is avoided for individual objects. Furthermore the index of a property is cached polymorphically at runtime.
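The declared-property layout can be sketched as follows. This is an illustration of the idea, not the real engine types: the class maps property names to slot indices once, and each object carries only a flat table of values:

```c
#include <string.h>

typedef struct {
    const char *prop_names[8];  /* name -> index mapping lives in the class */
    int         nprops;
} class_entry_t;

typedef struct {
    const class_entry_t *ce;
    long slots[8];              /* properties_table: one slot per property */
} object_t;

/* the real engine does this lookup via a hashtable in the class entry
   (a linear scan keeps the sketch short) */
int prop_index(const class_entry_t *ce, const char *name) {
    for (int i = 0; i < ce->nprops; i++)
        if (strcmp(ce->prop_names[i], name) == 0)
            return i;
    return -1;                  /* not a declared property */
}
```

The name-to-index map is paid for once per class rather than once per object, which is where the memory savings come from.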

The guards hashtable is used to implement the recursion behavior of magic methods like __get, which I won’t go into here.

Apart from the double refcounting issue already previously mentioned, the object representation is also heavy on memory usage with 136 bytes for a minimal object with a single property (not counting zvals). Furthermore there is a lot of indirection involved: For example, to fetch a property on an object zval, you first have to fetch the object store bucket, then the zend object, then the properties table and then the zval it points to. As such there are already four levels of indirection at a minimum (and in practice it will be no fewer than seven).

Objects in PHP 7

PHP 7 tries to improve on all of these issues by getting rid of double refcounting, dropping some of the memory bloat and reducing indirection. Here’s the new zend_object structure:

struct _zend_object {
    zend_refcounted   gc;
    uint32_t          handle;
    zend_class_entry *ce;
    const zend_object_handlers *handlers;
    HashTable        *properties;
    zval              properties_table[1];
};

Note that this structure is now (nearly) all that is left of an object: The zend_object_value has been replaced with a direct pointer to the object and the object store, while not entirely gone, is much less significant.

Apart from now including the customary zend_refcounted header, you can see that the handle and the handlers of the object value have been moved into the zend_object. Furthermore the properties_table now also makes use of the struct hack, so the zend_object and the properties table will be allocated in one chunk. And of course, the property table now directly embeds zvals, instead of containing pointers to them.

The guards table is no longer directly present in the object structure. Instead it will be stored in the first properties_table slot if it is needed, i.e. if the object uses __get etc. But if these magic methods are not used, the guards table is elided.

The dtor, free_storage and clone handlers that were previously stored in the object store bucket have now been moved into the handlers table, which starts as follows:

struct _zend_object_handlers {
    /* offset of real object header (usually zero) */
    int                                     offset;
    /* general object functions */
    zend_object_free_obj_t                  free_obj;
    zend_object_dtor_obj_t                  dtor_obj;
    zend_object_clone_obj_t                 clone_obj;
    /* individual object functions */
    // ... rest is about the same in PHP 5
};

At the top of the handler table is an offset member, which is quite clearly not a handler. This offset has to do with how internal objects are represented: An internal object always embeds the standard zend_object, but typically also adds a number of additional members. In PHP 5 this was done by adding them after the standard object:

struct custom_object {
    zend_object std;
    uint32_t something;
    // ...
};

This means that if you get a zend_object* you can simply cast it to your custom struct custom_object*. This is the standard means of implementing structure inheritance in C. However in PHP 7 there is an issue with this particular approach: Because zend_object uses the struct hack for storing the properties table, PHP will be storing properties past the end of zend_object and thus overwriting additional internal members. This is why in PHP 7 additional members are stored before the standard object instead:

struct custom_object {
    uint32_t something;
    // ...
    zend_object std;
};

However this means that it is no longer possible to directly convert between a zend_object* and a struct custom_object* with a simple cast, because the two are separated by an offset. This offset is what’s stored in the first member of the object handler table. At compile-time the offset can be determined using the offsetof() macro.
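The offset-based conversion can be sketched like this, with illustrative types and a made-up macro name (not the real zend macros): the embedded standard object sits at the end of the custom struct, and offsetof() recovers the enclosing struct from a pointer to the embedded member.

```c
#include <stddef.h>

typedef struct { int refcount; } std_object;

struct custom_object {
    unsigned something;
    std_object std;            /* standard object stored last */
};

/* go from a pointer to the embedded std member back to the custom struct */
#define CUSTOM_FROM_STD(obj) \
    ((struct custom_object *)((char *)(obj) - offsetof(struct custom_object, std)))
```

This is the classic container-of idiom; the engine just stashes the constant offset in the handler table so it doesn't have to be recomputed.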

You may wonder why PHP 7 objects still contain a handle. After all, we now directly store a pointer to the zend_object, so we no longer need the handle to look up the object in the object store.

However the handle is still necessary, because the object store still exists, albeit in a significantly reduced form. It is now a simple array of pointers to objects. When an object is created a pointer to it is inserted into the object store at the handle index and removed once the object is freed.
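The reduced object store can be sketched as a plain array of pointers indexed by handle. This is an illustration under made-up names, not the engine's actual API, and it uses a fixed size where the real store grows:

```c
#include <stdint.h>
#include <stdlib.h>

#define STORE_SIZE 16
static void *object_store[STORE_SIZE];

/* insert a pointer at the first free index; that index is the handle */
uint32_t store_put(void *obj) {
    for (uint32_t h = 0; h < STORE_SIZE; h++) {
        if (object_store[h] == NULL) {
            object_store[h] = obj;
            return h;
        }
    }
    abort();                    /* the real store grows instead */
}

/* clear the slot once the object is freed */
void store_del(uint32_t handle) {
    object_store[handle] = NULL;
}
```

Walking the non-NULL slots gives exactly the "list of all active objects" needed to run destructors early during shutdown.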

Why do we still need the object store? The reason behind this is that during request shutdown, there comes a point where it is no longer safe to run userland code, because the executor is already partially shut down. To avoid this PHP will run all object destructors at an early point during shutdown and prevent them from running at a later point in time. For this a list of all active objects is needed.

Furthermore the handle is useful for debugging, because it gives each object a unique ID, so it’s easy to see whether two objects are really the same or just have the same content. HHVM still stores an object handle despite not having a concept of an object store.

Comparing with the PHP 5 implementation, we now have only one refcount (as the zval itself no longer has one) and the memory usage is much smaller: We need 40 bytes for the base object and 16 bytes for every declared property, already including its zval. The amount of indirection is also significantly reduced, as many of the intermediate structure were either dropped or embedded. As such reading a property is now only a single level of indirection, rather than four.

Indirect zvals

At this point we have covered all of the normal zval types, however there are a couple of additional special types that are used only in certain circumstances. One that was newly added in PHP 7 is IS_INDIRECT.

An indirect zval signifies that its value is stored in some other location. Note that this is different from the IS_REFERENCE type in that it directly points to another zval, rather than a zend_reference structure that embeds a zval.

To understand under what circumstances this may be necessary, consider how PHP implements variables (though the same also applies to object property storage):

All variables that are known at compile-time are assigned an index and their value will be stored at that index in the compiled variables (CV) table. However PHP also allows you to dynamically reference variables, either by using variable variables or, if you are in global scope, through $GLOBALS. If such an access occurs, PHP will create a symbol table for the function/script, which contains a map from variable names to their values.

This leads to the question: How can both forms of access be supported at the same time? We need table-based CV access for normal variable fetches and symtable-based access for varvars. In PHP 5 the CV table used doubly-indirected zval** pointers. Normally those pointers would point to a second table of zval* pointers that would point to the actual zvals:

+------ CV_ptr_ptr[0]
| +---- CV_ptr_ptr[1]
| | +-- CV_ptr_ptr[2]
| | |
| | +-> CV_ptr[0] --> some zval
| +---> CV_ptr[1] --> some zval
+-----> CV_ptr[2] --> some zval

Now, once a symbol table came into use, the second table with the single zval* pointers was left unused and the zval** pointers were updated to point into the hashtable buckets instead. Here illustrated assuming the three variables are called $a, $b and $c:

CV_ptr_ptr[0] --> SymbolTable["a"].pDataPtr --> some zval
CV_ptr_ptr[1] --> SymbolTable["b"].pDataPtr --> some zval
CV_ptr_ptr[2] --> SymbolTable["c"].pDataPtr --> some zval

In PHP 7 using the same approach is no longer possible, because a pointer into a hashtable bucket will be invalidated when the hashtable is resized. Instead PHP 7 uses the reverse strategy: For the variables that are stored in the CV table, the symbol hashtable will contain an INDIRECT entry pointing to the CV entry. The CV table will not be reallocated for the lifetime of the symbol table, so there is no problem with invalidated pointers.

So if you have a function with CVs $a, $b and $c, as well as a dynamically created variable $d, the symbol table could look something like this:

SymbolTable["a"].value = INDIRECT --> CV[0] = LONG 42
SymbolTable["b"].value = INDIRECT --> CV[1] = DOUBLE 42.0
SymbolTable["c"].value = INDIRECT --> CV[2] = STRING --> zend_string("42")
SymbolTable["d"].value = ARRAY --> zend_array([4, 2])

Indirect zvals can also point to an IS_UNDEF zval, in which case it is treated as if the hashtable does not contain the associated key. So if unset($a) writes an UNDEF type into CV[0], then this will be treated like the symbol table no longer having a key "a".
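The indirection itself can be sketched with a tagged union. The types and names here are illustrative, not the engine's: a symbol-table slot can point at a CV slot instead of holding a value, and readers follow that one level of indirection.

```c
typedef enum { IS_UNDEF, IS_LONG, IS_INDIRECT } ztype_t;

typedef struct zv {
    ztype_t type;
    union {
        long        lval;
        struct zv  *ind;   /* target slot for IS_INDIRECT */
    } u;
} zv_t;

/* follow at most one level of indirection */
zv_t *deref_indirect(zv_t *z) {
    return z->type == IS_INDIRECT ? z->u.ind : z;
}
```

If the target slot holds IS_UNDEF, a lookup through the indirect entry behaves as if the key were absent, matching the unset($a) behavior described above.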

Constants and ASTs

There are two more special types IS_CONSTANT and IS_CONSTANT_AST which exist both in PHP 5 and PHP 7 and deserve a mention here. To understand what these do, consider the following example:

function test($a = ANSWER,
              $b = ANSWER * ANSWER) {
    return $a + $b;
}

define('ANSWER', 42);
var_dump(test()); // int(1806), i.e. 42 + 42 * 42

The default values for the parameters of the test() function make use of the constant ANSWER - however this constant is not yet defined when the function is declared. The constant only becomes available once the define() call has run.

For this reason parameter and property default values, constants and everything else accepting a “static expression” have the ability to postpone evaluation of the expression until first use.

If the value is a constant (or class constant), which is the most common case for late evaluation, this is signaled using an IS_CONSTANT zval storing the constant name. If the value is an expression, an IS_CONSTANT_AST zval pointing to an abstract syntax tree (AST) is used.
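
The postponed-evaluation idea can be sketched with a toy constant table (hypothetical names, not engine code): a stored constant name resolves against whatever constants exist at the time of *use*, not at the time of declaration.

```c
#include <assert.h>
#include <string.h>

/* Toy global constant table, filled by a define()-like call. */
#define MAX_CONSTS 8
static struct { const char *name; long value; } consts[MAX_CONSTS];
static int num_consts = 0;

void toy_define(const char *name, long value) {
    consts[num_consts].name = name;
    consts[num_consts].value = value;
    num_consts++;
}

/* Resolve a late-bound constant name at time of use; 0 on success,
   -1 if the constant is (still) undefined. */
int toy_resolve_const(const char *name, long *out) {
    for (int i = 0; i < num_consts; i++) {
        if (strcmp(consts[i].name, name) == 0) {
            *out = consts[i].value;
            return 0;
        }
    }
    return -1;
}
```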

And this concludes our walk through the PHP 7 value representation. Two more topics I’d like to write about at some point are some of the optimizations done in the virtual machine, in particular the new calling convention, as well as the improvements that were made to the compiler infrastructure.

News stories from Tuesday 09 June, 2015

Favicon for Ramblings of a web guy 00:45 Apple Says My Screen Is Third Party » Post from Ramblings of a web guy Visit off-site link
I have always had the utmost respect for Apple. Even before I used Macs and before the iPhone came out, I knew they were a top notch company.

I have had five iPhones. I have had 6 or 7 MacBook Pros. My kids have Macs. My kids have iPhones. My parents use iPads. I think a lot of Apple products and service... until today.

We took my daughter's hand-me-down iPhone 5 in to have the ear piece and top button fixed. It's been in the family the whole time. It was never owned by anyone other than family. Last year, I took it in for the Apple Store Battery Replacement Program. That is the last time anyone had it open. In fact, that may have been the last time it was out of its case. More on this later.

After we dropped off the phone today, we were told it was going to be an hour. No problem, we could kill some time. We came back an hour later and the person brought us the phone out and tells us that they refused to work on it because the screen is a 3rd party part. Whoa! What? I tell her that the only place it was ever worked on was in that exact store. She goes to get a manager. I thought, OK, the Apple customer service I know and love is about to kick in. They are going to realize their mistake and this will all be good. Or, even if they still think it's a 3rd party screen, he will come up with some resolution for the problem. Um, no.

He says the same thing (almost verbatim) to me that the previous person said. I again tell him it has only been opened by them. He offers to take it to the back and have a technician open it up again. He was not really gone long enough for that. He comes back, points at some things on the screen and tells me that is how they know it's a 3rd party part. I again, tell him that only the Apple Store has had it open. His response is a carefully crafted piece of technicality that can only come from lawyers and businessmen. It was along the lines of "At some point, this screen has been replaced with a 3rd party screen. I am not saying you are lying. I am not claiming to know how it was replaced. I am only stating that this is a 3rd party screen." What?

So, OK, what now? I mean, it wasn't under warranty. I did not expect to get a new free phone. I was going to pay to have it fixed. Nope. They won't touch it with a ten foot pole. It has a 3rd party part on it. He claims, that because they base their repair fees on being able to refurbish and reuse the parts they pull off of the phone (the phone I own and paid for by the way), they can't offer to repair a phone with parts they can't refurbish. I can't even pay full price, whatever that is. He never gave me a price to pay for a new screen with no discounts.

At this point, I realized I needed to leave. I was so furious. I was furious it was happening. I was furious that the manager had no solution for me. I was furious that he was speaking in legalese.

Just to be clear, I could buy my daughter a new iPhone 6. I am not trying to get something for nothing. I just wanted the phone to work again. One of the things I love about Apple products is how well they hold up. Sure, you have to have some work done on them sometimes. Batteries go bad. Buttons quit working. But, let's be real. My daughter uses this thing for hours a day. I have the data bill to prove it. So, I like that I can have an Apple product repaired when it breaks and it gets a longer life. The alternative is to throw it away.

How did I end up here? I can only come up with one scenario. And the thought that this is what happened upsets me even more. When we took it for the battery replacement last year, they kept it longer than their initial estimate. And the store was dead that day. When they brought it out, the case would not fit on the bottom of the phone. It was like the screen was not on all the way. The person took it back to the back again. They came out later and it seemed to work fine. And I was fine with all of this because it's Apple. I trust(ed) Apple. But, what if they broke the screen? What if the tech that broke it used a screen from some returned phone that did have a third party part and no one caught it? Or what if Apple was knowingly using third party parts?

If I had not just had the battery replaced last year, I would think maybe there was some shenanigans in the shipping when the phone was new. We bought this phone brand new when the iPhone 5 came out. It would not come as a surprise if some devices had been intercepted and taken apart along the shipping lines. Or even in production. But, we just had it serviced at the Apple Store last year. They had no problem with the screen then other than the one they caused when they had to put it back together a second time.

This all sounds too far-fetched, right? Sadly, there seems to be a trend of Apple denying service to people. All of these people can't be lying. They can't all be out to get one over on Apple.

While waiting for our appointment, I overheard an Apple Genius telling a woman she "may" have had water damage. She didn't tell her she did. She did not claim the woman was lying. She thought she "may" have water damage. I don't know if she did or not. What struck me was the way she told her she "thought it could be" water damage. She told her she had seen lots of bad screens, but none of them (really? not one single screen?) had vertical lines in it like this. It's like she was setting her up to come back later and say "Darn, the tech says it is water damage." Sadly, I find myself doubting that conversation now. It makes me want to take a phone in with horizontal lines and see if I get the same story.

Of course, I know what many, many people will say to this. You will say that if I am really this upset, I should not buy any more Apple products. And you are right. That is the American way. The free market is the way to get to companies. The thing is, if I bought a Samsung Galaxy, where would I get it fixed? Would my experience be any better? There is no Samsung store. There are no authorized Samsung repair facilities. So, what would that get me? A disposable phone? Maybe that is what Apple wants. Maybe that is their goal. Deny service to people in hopes it will lead to more sales and less long term use of their devices.

And you know what makes this all even more crappy? One of the reasons he says he knows it is a third party screen is that the home button is loose. It wasn't loose when we brought it in! I was using the phone myself to make sure a backup was done just before we handed it over to the Apple Store. They did that when they opened the screen and decided it was a third party part. So, now, my daughter's phone not only has no working ear piece and a top button that works only some of the time. Now, her home button spins around. Sigh.

News stories from Monday 18 May, 2015

Favicon for ircmaxell's blog 15:30 Prefix Trees and Parsers » Post from ircmaxell's blog Visit off-site link
In my last post, Tries and Lexers, I talked about an experiment I was doing related to parsing of JavaScript code. By the end of the post I had shifted to wanting to build a HTTP router using the techniques that I learned. Let's continue where we left off...

Read more »

News stories from Friday 15 May, 2015

Favicon for ircmaxell's blog 17:00 Tries and Lexers » Post from ircmaxell's blog Visit off-site link
Lately I have been playing around with a few experimental projects. The current one started when I tried to make a templating engine. Not just an ordinary one, but one that understood the context of a variable so it could encode/escape it properly. Imagine being able to put a variable in a JavaScript string in your template, and have the engine transparently encode it correctly for you. Awesome, right? Well, while doing it, I went down a rabbit hole. And it led to something far more awesome.

Read more »

News stories from Tuesday 05 May, 2015

Favicon for nikic's Blog 01:00 Internal value representation in PHP 7 - Part 1 » Post from nikic's Blog Visit off-site link

My last article described the improvements to the hashtable implementation that were introduced in PHP 7. This followup will take a look at the new representation of PHP values in general.

Due to the amount of material to cover, the article is split in two parts: This part will describe how the zval (Zend value) implementation differs between PHP 5 and PHP 7, and also discuss the implementation of references. The second part will investigate the realization of individual types like strings or objects in more detail.

Zvals in PHP 5

In PHP 5 the zval struct is defined as follows:

typedef struct _zval_struct {
    zvalue_value value;
    zend_uint refcount__gc;
    zend_uchar type;
    zend_uchar is_ref__gc;
} zval;

As you can see, a zval consists of a value, a type and some additional __gc information, which we’ll talk about in a moment. The value member is a union of different possible values that a zval can store:

typedef union _zvalue_value {
    long lval;                 // For booleans, integers and resources
    double dval;               // For floating point numbers
    struct {                   // For strings
        char *val;
        int len;
    } str;
    HashTable *ht;             // For arrays
    zend_object_value obj;     // For objects
    zend_ast *ast;             // For constant expressions
} zvalue_value;

A C union is a structure in which only one member can be active at a time and whose size matches the size of its largest member. All members of the union will be stored in the same place in memory and will be interpreted differently depending on which one you access. If you read the lval member of the above union, its value will be interpreted as a signed integer. If you read the dval member the value will be interpreted as a double-precision floating point number instead. And so on.

To figure out which of these union members is currently in use, the type property of a zval stores a type tag, which is simply an integer:

#define IS_NULL     0      /* Doesn't use value */
#define IS_LONG     1      /* Uses lval */
#define IS_DOUBLE   2      /* Uses dval */
#define IS_BOOL     3      /* Uses lval with values 0 and 1 */
#define IS_ARRAY    4      /* Uses ht */
#define IS_OBJECT   5      /* Uses obj */
#define IS_STRING   6      /* Uses str */
#define IS_RESOURCE 7      /* Uses lval, which is the resource ID */
/* Special types used for late-binding of constants */
#define IS_CONSTANT 8
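
The union-plus-type-tag pattern above can be demonstrated in a few lines of standalone C. This is a simplified sketch with hypothetical names, mirroring zval.type and two of the value members:

```c
#include <assert.h>

/* All union members share the same storage; the separate type tag records
   which interpretation is currently valid (as zval.type does). */
typedef union {
    long lval;    /* integers, booleans, resource IDs */
    double dval;  /* floating point numbers */
} toy_value;

typedef struct {
    toy_value value;
    unsigned char type;  /* e.g. 1 = long, 2 = double, mirroring IS_LONG/IS_DOUBLE */
} toy_tagged;
```

Reading a member other than the one written last reinterprets the same bytes, which is exactly why the tag is needed.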

Reference counting in PHP 5

Zvals in PHP 5 are (with a few exceptions) allocated on the heap and PHP needs some way to keep track of which zvals are currently in use and which should be freed. For this purpose reference counting is employed: The refcount__gc member of the zval structure stores how often a zval is currently “referenced”. For example in $a = $b = 42 the value 42 is referenced by two variables, so its refcount is 2. If the refcount reaches zero, it means a value is unused and can be freed.

Note that the references that the refcount refers to (how many times a value is currently used) have nothing to do with PHP references (using &). I will always use the terms “reference” and “PHP reference” in the following to disambiguate the two concepts. For now we’ll ignore PHP references altogether.

A concept that is closely related to reference counting is “copy on write”: A zval can only be shared between multiple users as long as it isn’t modified. In order to change a shared zval it needs to be duplicated (“separated”) and the modification will happen only on the duplicated zval.

Let’s look at an example that shows off both copy-on-write and zval destruction:

$a = 42;   // $a         -> zval_1(type=IS_LONG, value=42, refcount=1)
$b = $a;   // $a, $b     -> zval_1(type=IS_LONG, value=42, refcount=2)
$c = $b;   // $a, $b, $c -> zval_1(type=IS_LONG, value=42, refcount=3)

// The following line causes a zval separation
$a += 1;   // $b, $c -> zval_1(type=IS_LONG, value=42, refcount=2)
           // $a     -> zval_2(type=IS_LONG, value=43, refcount=1)

unset($b); // $c -> zval_1(type=IS_LONG, value=42, refcount=1)
           // $a -> zval_2(type=IS_LONG, value=43, refcount=1)

unset($c); // zval_1 is destroyed, because refcount=0
           // $a -> zval_2(type=IS_LONG, value=43, refcount=1)
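
The refcounting and separation steps traced above can be sketched as a minimal copy-on-write cell in C (hypothetical names; the real engine logic is far more involved):

```c
#include <assert.h>
#include <stdlib.h>

/* A shared, refcounted value: assignment bumps the refcount; a write to a
   shared value first "separates" it into a private copy. */
typedef struct {
    long value;
    int refcount;
} toy_shared;

toy_shared *toy_new(long value) {
    toy_shared *z = malloc(sizeof *z);
    z->value = value;
    z->refcount = 1;
    return z;
}

/* $b = $a: share the zval, increment the refcount. */
toy_shared *toy_assign(toy_shared *z) {
    z->refcount++;
    return z;
}

/* Modify: separate first if the value is shared (copy on write). */
toy_shared *toy_write(toy_shared *z, long value) {
    if (z->refcount > 1) {
        z->refcount--;
        z = toy_new(z->value);  /* the duplicated ("separated") zval */
    }
    z->value = value;
    return z;
}

/* unset: decrement, free once the refcount hits zero. */
void toy_release(toy_shared *z) {
    if (--z->refcount == 0) free(z);
}
```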

Reference counting has one fatal flaw: It is not able to detect and release cyclic references. To handle this PHP uses an additional cycle collector. Whenever the refcount of a zval is decremented and there is a chance that this zval is part of a cycle, the zval is written into a “root buffer”. Once this root buffer is full, potential cycles will be collected using a mark and sweep garbage collection.

In order to support this additional cycle collector, the actually used zval structure is the following:

typedef struct _zval_gc_info {
    zval z;
    union {
        gc_root_buffer       *buffered;
        struct _zval_gc_info *next;
    } u;
} zval_gc_info;

The zval_gc_info structure embeds the normal zval, as well as one additional pointer - note that u is a union, so this is really just one pointer with two different types it may point to. The buffered pointer is used to store where in the root buffer this zval is referenced, so that it may be removed from it if it’s destroyed before the cycle collector runs (which is very likely). next is used when the collector destroys values, but I won’t go into that here.

Motivation for change

Let’s talk about sizes a bit (all sizes are for 64-bit systems): First of all, the zvalue_value union is 16 bytes large, because both the str and obj members have that size. The whole zval struct is 24 bytes (due to padding) and zval_gc_info is 32 bytes. On top of this, allocating the zval on the heap adds another 16 bytes of allocation overhead. So we end up using 48 bytes per zval - although this zval may be used by multiple places.

At this point we can start thinking about the (many) ways in which this zval implementation is inefficient. Consider the simple case of a zval storing an integer, which by itself is 8 bytes. Additionally the type-tag needs to be stored in any case, which is a single byte by itself, but due to padding needs another 8 bytes.

To these 16 bytes that we really “need” (in first approximation), we add another 16 bytes handling reference counting and cycle collection and another 16 bytes of allocation overhead. Not to mention that we actually have to perform that allocation and the subsequent free, both being quite expensive operations.
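
The size arithmetic above can be checked directly by mocking the PHP 5 layouts quoted earlier (simplified stand-ins, not the real headers; figures assume a typical 64-bit Linux ABI):

```c
#include <stdint.h>

/* Mocks of the PHP 5 structures, trimmed to the members that drive the size. */
typedef union {
    long lval;
    double dval;
    struct { char *val; int len; } str;               /* 8 + 4, padded to 16 */
    void *ht;
    struct { void *handlers; uint32_t handle; } obj;  /* stand-in for zend_object_value */
} mock_zvalue_value;

typedef struct {
    mock_zvalue_value value;   /* 16 bytes */
    uint32_t refcount__gc;     /* 4 bytes  */
    unsigned char type;        /* 1 byte   */
    unsigned char is_ref__gc;  /* 1 byte, then padding to 24 */
} mock_zval;

typedef struct {
    mock_zval z;               /* 24 bytes */
    void *u;                   /* buffered/next pointer union: 8 bytes */
} mock_zval_gc_info;
```

Add the ~16 bytes of heap-allocation overhead and you arrive at the 48 bytes per zval quoted in the text.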

This raises the question: Does a simple integer value really need to be stored as a reference-counted, cycle-collectible, heap-allocated value? The answer to this question is of course, no, this doesn’t make sense.

Here is a summary of the primary problems with the PHP 5 zval implementation:

  • Zvals (nearly) always require a heap allocation.
  • Zvals are always reference counted and always have cycle collection information, even in cases where sharing the value is not worthwhile (an integer) and it can’t form cycles.
  • Directly refcounting the zvals leads to double refcounting in the case of objects and resources. The reasons behind this will be explained in the next part.
  • Some cases involve quite an awesome amount of indirection. For example to access the object stored in a variable, a total of four pointers need to be dereferenced (which means following a pointer chain of length four). Once again this will be discussed in the next part.
  • Directly refcounting the zvals also means that values can only be shared between zvals. For example it’s not possible to share a string between a zval and hashtable key (without storing the hashtable key as a zval as well).

Zvals in PHP 7

And this brings us to the new zval implementation in PHP 7. The fundamental change that was implemented, is that zvals are no longer individually heap-allocated and no longer store a refcount themselves. Instead any complex values they may point to (like strings, arrays or objects) will store the refcount themselves. This has the following advantages:

  • Simple values do not require allocation and don’t use refcounting.
  • There is no more double refcounting. In the object case, only the refcount in the object is used now.
  • Because the refcount is now stored in the value itself, the value can be shared independently of the zval structure. A string can be used both in a zval and a hashtable key.
  • There is a lot less indirection, i.e. the number of pointers you need to follow to get to a value is lower.

Now lets take a look at how the new zval is defined:

struct _zval_struct {
    zend_value value;
    union {
        struct {
            ZEND_ENDIAN_LOHI_4(
                zend_uchar type,
                zend_uchar type_flags,
                zend_uchar const_flags,
                zend_uchar reserved)
        } v;
        uint32_t type_info;
    } u1;
    union {
        uint32_t var_flags;
        uint32_t next;                 // hash collision chain
        uint32_t cache_slot;           // literal cache slot
        uint32_t lineno;               // line number (for ast nodes)
        uint32_t num_args;             // arguments number for EX(This)
        uint32_t fe_pos;               // foreach position
        uint32_t fe_iter_idx;          // foreach iterator index
    } u2;
};

The first member stays pretty similar: it is still a value union. The second member is an integer storing type information, which is further subdivided into individual bytes using a union (you can ignore the ZEND_ENDIAN_LOHI_4 macro, which just ensures a consistent layout across platforms with different endianness). The important parts of this substructure are the type (which is similar to what it was before) and the type_flags, which I’ll explain in a moment.

At this point there exists a small problem: The value member is 8 bytes large and due to struct padding adding even a single byte to that grows the zval size to 16 bytes. However we obviously don’t need 8 bytes just to store a type. This is why the zval contains the additional u2 union, which remains unused by default, but can be repurposed by the surrounding code to store 4 bytes of data. The different union members correspond to different usages of this extra data slot.
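
A mock of this layout (hypothetical names, not the real headers) makes the padding argument concrete: an 8-byte value plus the 4-byte type info plus the 4-byte u2 slot yield a 16-byte zval with no wasted padding.

```c
#include <stdint.h>

/* Simplified stand-in for the PHP 7 zval layout quoted above. */
typedef union {
    int64_t lval;  /* zend_long */
    double dval;
    void *ptr;     /* stand-in for all the pointer-sized members */
} mock_value;

typedef struct {
    mock_value value;  /* 8 bytes */
    union {
        struct { unsigned char type, type_flags, const_flags, reserved; } v;
        uint32_t type_info;
    } u1;              /* 4 bytes */
    union {
        uint32_t next;
        uint32_t cache_slot;
        uint32_t lineno;
    } u2;              /* 4 bytes that would otherwise be padding */
} mock_zval7;
```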

The value union looks slightly different in PHP 7:

typedef union _zend_value {
    zend_long         lval;
    double            dval;
    zend_refcounted  *counted;
    zend_string      *str;
    zend_array       *arr;
    zend_object      *obj;
    zend_resource    *res;
    zend_reference   *ref;
    zend_ast_ref     *ast;

    // Ignore these for now, they are special
    zval             *zv;
    void             *ptr;
    zend_class_entry *ce;
    zend_function    *func;
    struct {
        ZEND_ENDIAN_LOHI(
            uint32_t w1,
            uint32_t w2)
    } ww;
} zend_value;

First of all, note that the value union is now 8 bytes instead of 16. It will only store integers (lval) and doubles (dval) directly, everything else is a pointer. All the pointer types (apart from those marked as special above) use refcounting and have a common header defined by zend_refcounted:

struct _zend_refcounted {
    uint32_t refcount;
    union {
        struct {
            ZEND_ENDIAN_LOHI_3(
                zend_uchar    type,
                zend_uchar    flags,
                uint16_t      gc_info)
        } v;
        uint32_t type_info;
    } u;
};

Of course the structure contains a refcount. Additionally it contains a type, some flags and gc_info. The type just duplicates the zval type and allows the GC to distinguish different refcounted structures without storing a zval. The flags are used for different purposes with different types and will be explained for each type separately in the next part.

The gc_info is the equivalent of the buffered entry in the old zvals. However instead of storing a pointer into the root buffer it now contains an index into it. Because the root buffer has a fixed size (10000 elements) it is enough to use a 16-bit number for this instead of a 64-bit pointer. The gc_info also encodes the “color” of the node, which is used to mark nodes during collection.

Zval memory management

I’ve mentioned that zvals are no longer individually heap-allocated. However they obviously still need to be stored somewhere, so how does this work? While zvals are still mostly part of heap-allocated structures, they are directly embedded into them. E.g. a hashtable bucket will directly embed a zval instead of storing a pointer to a separate zval. The compiled variables table of a function or the property table of an object will be zval arrays that are allocated in one chunk, instead of storing pointers to separate zvals. As such zvals are now usually stored with one level of indirection less. What was previously a zval* is now a zval.

When a zval is used in a new place, previously this meant copying a zval* and incrementing its refcount. Now it means copying the contents of a zval (ignoring u2) instead and maybe incrementing the refcount of the value it points to, if said value uses refcounting.

How does PHP know whether a value is refcounted? This cannot be determined solely based on the type, because some types like strings and arrays are not always refcounted. Instead one bit of the zval’s type_info member determines whether or not the zval is refcounted. There are a number of other bits encoding properties of the type:

#define IS_TYPE_CONSTANT            (1<<0)   /* special */
#define IS_TYPE_IMMUTABLE           (1<<1)   /* special */
#define IS_TYPE_REFCOUNTED          (1<<2)
#define IS_TYPE_COLLECTABLE         (1<<3)
#define IS_TYPE_COPYABLE            (1<<4)
#define IS_TYPE_SYMBOLTABLE         (1<<5)   /* special */

The three primary properties a type can have are “refcounted”, “collectable” and “copyable”. You already know what refcounted means. Collectable means that the zval can participate in a cycle. E.g. strings are (often) refcounted, but there’s no way you can create a cycle with a string in it.

Copyability determines whether the value needs to be copied when a “duplication” is performed. A duplication is a hard copy, e.g. if you duplicate a zval that points to an array, this will not simply increase the refcount on the array. Instead a new and independent copy of the array will be created. However for some types like objects and resources even a duplication should only increment the refcount - such types are called non-copyable. This matches the passing semantics of objects and resources (which are, for the record, not passed by reference).

The following table shows the different types and what type flags they use. “Simple types” refers to types like integers or booleans that don’t use a pointer to a separate structure. A column for the “immutable” flag is also present, which is used to mark immutable arrays and will be discussed in more detail in the next part.

                | refcounted | collectable | copyable | immutable
simple types    |            |             |          |
string          |      x     |             |     x    |
interned string |            |             |          |
array           |      x     |      x      |     x    |
immutable array |            |             |          |     x
object          |      x     |      x      |          |
resource        |      x     |             |          |
reference       |      x     |             |          |
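
The table rows above reduce to simple bit tests on the type_flags byte. A hypothetical sketch (the flag values are from the list above; the helper names are illustrative, not the engine's macros):

```c
#include <assert.h>

/* Flag bits, as listed in the text. */
#define IS_TYPE_REFCOUNTED  (1u << 2)
#define IS_TYPE_COLLECTABLE (1u << 3)
#define IS_TYPE_COPYABLE    (1u << 4)

typedef struct {
    unsigned char type;       /* e.g. IS_STRING = 6, IS_OBJECT = 8 */
    unsigned char type_flags;
} toy_type_info;

int toy_refcounted(toy_type_info t)  { return (t.type_flags & IS_TYPE_REFCOUNTED) != 0; }
int toy_collectable(toy_type_info t) { return (t.type_flags & IS_TYPE_COLLECTABLE) != 0; }
int toy_copyable(toy_type_info t)    { return (t.type_flags & IS_TYPE_COPYABLE) != 0; }
```

Copying a zval then amounts to: copy the 16 bytes, and increment the pointed-to refcount only if the refcounted bit is set.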

At this point, let’s take a look at two examples of how the zval management works in practice. First, an example using integers based off the PHP 5 example from above:

$a = 42;   // $a = zval_1(type=IS_LONG, value=42)

$b = $a;   // $a = zval_1(type=IS_LONG, value=42)
           // $b = zval_2(type=IS_LONG, value=42)

$a += 1;   // $a = zval_1(type=IS_LONG, value=43)
           // $b = zval_2(type=IS_LONG, value=42)

unset($a); // $a = zval_1(type=IS_UNDEF)
           // $b = zval_2(type=IS_LONG, value=42)

This is pretty boring. As integers are no longer shared, both variables will use separate zvals. Don’t forget that these are now embedded rather than allocated, which I try to signify by writing = instead of a -> pointer. Unsetting a variable will set the type of the corresponding zval to IS_UNDEF. Now consider a more interesting case where a complex value is involved:

$a = [];   // $a = zval_1(type=IS_ARRAY) -> zend_array_1(refcount=1, value=[])

$b = $a;   // $a = zval_1(type=IS_ARRAY) -> zend_array_1(refcount=2, value=[])
           // $b = zval_2(type=IS_ARRAY) ---^

// Zval separation occurs here
$a[] = 1;  // $a = zval_1(type=IS_ARRAY) -> zend_array_2(refcount=1, value=[1])
           // $b = zval_2(type=IS_ARRAY) -> zend_array_1(refcount=1, value=[])

unset($a); // $a = zval_1(type=IS_UNDEF) and zend_array_2 is destroyed
           // $b = zval_2(type=IS_ARRAY) -> zend_array_1(refcount=1, value=[])

Here each variable still has a separate (embedded) zval, but both zvals point to the same (refcounted) zend_array structure. Once a modification is done the array needs to be duplicated. This case is similar to how things work in PHP 5.


Types

Let’s take a closer look at what types are supported in PHP 7:

// regular data types
#define IS_UNDEF                    0
#define IS_NULL                     1
#define IS_FALSE                    2
#define IS_TRUE                     3
#define IS_LONG                     4
#define IS_DOUBLE                   5
#define IS_STRING                   6
#define IS_ARRAY                    7
#define IS_OBJECT                   8
#define IS_RESOURCE                 9
#define IS_REFERENCE                10

// constant expressions
#define IS_CONSTANT                 11
#define IS_CONSTANT_AST             12

// internal types
#define IS_INDIRECT                 15
#define IS_PTR                      17

This list is quite similar to what was used in PHP 5, however there are a few additions:

  • The IS_UNDEF type is used in places where previously a NULL zval pointer (not to be confused with an IS_NULL zval) was used. For example, in the refcounting examples above the IS_UNDEF type is set for variables that have been unset.
  • The IS_BOOL type has been split into IS_FALSE and IS_TRUE. As such the value of the boolean is now encoded in the type, which allows the optimization of a number of type-based checks. This change is transparent to userland, where this is still a single “boolean” type.
  • PHP references no longer use an is_ref flag on the zval and use a new IS_REFERENCE type instead. How this works will be described in the next section.
  • The IS_INDIRECT and IS_PTR types are special internal types.

The IS_LONG type now uses a zend_long value instead of an ordinary C long. The reason behind this is that on 64-bit Windows (LLP64) a long is only 32 bits wide, so PHP 5 ended up always using 32-bit numbers on Windows. PHP 7 will allow you to use 64-bit numbers if you’re on a 64-bit operating system, even if that operating system is Windows.
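
The point of the change is that a fixed-width type is 64 bits on every platform, while `long` is not. A minimal illustration (`my_zend_long` is an illustrative name; the real typedef is chosen per platform):

```c
#include <stdint.h>

/* Fixed-width 64-bit integer: 8 bytes on LP64 (Linux/macOS) and on LLP64
   (64-bit Windows) alike, whereas plain `long` is only 4 bytes on LLP64. */
typedef int64_t my_zend_long;
```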

Details of the individual zend_refcounted types will be discussed in the next part. For now we’ll only look at the implementation of PHP references.


References

PHP 7 uses an entirely different approach to handling PHP & references than PHP 5 (and I can tell you that this change was one of the largest sources of bugs in PHP 7). Let’s start by taking a look at how PHP references used to work in PHP 5:

Normally, the copy-on-write principle says that before modifying a zval it needs to be separated, in order to make sure you don’t end up changing the value for every place sharing the zval. This matches by-value passing semantics.

For PHP references this does not apply. If a value is a PHP reference, you want it to change for every user of the value. The is_ref flag that was part of PHP 5 zvals determined whether a value is a PHP reference and as such whether it required separation before modification. An example:

$a = [];  // $a     -> zval_1(type=IS_ARRAY, refcount=1, is_ref=0) -> HashTable_1(value=[])
$b =& $a; // $a, $b -> zval_1(type=IS_ARRAY, refcount=2, is_ref=1) -> HashTable_1(value=[])

$b[] = 1; // $a = $b = zval_1(type=IS_ARRAY, refcount=2, is_ref=1) -> HashTable_1(value=[1])

One significant problem with this design is that it’s not possible to share a value between a variable that’s a PHP reference and one that isn’t. Consider the following example:

$a = [];  // $a         -> zval_1(type=IS_ARRAY, refcount=1, is_ref=0) -> HashTable_1(value=[])
$b = $a;  // $a, $b     -> zval_1(type=IS_ARRAY, refcount=2, is_ref=0) -> HashTable_1(value=[])
$c = $b;  // $a, $b, $c -> zval_1(type=IS_ARRAY, refcount=3, is_ref=0) -> HashTable_1(value=[])

$d =& $c; // $a, $b -> zval_1(type=IS_ARRAY, refcount=2, is_ref=0) -> HashTable_1(value=[])
          // $c, $d -> zval_1(type=IS_ARRAY, refcount=2, is_ref=1) -> HashTable_2(value=[])
          // $d is a reference of $c, but *not* of $a and $b, so the zval needs to be copied
          // here. Now we have the same zval once with is_ref=0 and once with is_ref=1.

$d[] = 1; // $a, $b -> zval_1(type=IS_ARRAY, refcount=2, is_ref=0) -> HashTable_1(value=[])
          // $c, $d -> zval_1(type=IS_ARRAY, refcount=2, is_ref=1) -> HashTable_2(value=[1])
          // Because there are two separate zvals $d[] = 1 does not modify $a and $b.

This behavior of references is one of the reasons why using references in PHP will usually end up being slower than using normal values. To give a less-contrived example where this is a problem:

$array = range(0, 1000000);
$ref =& $array;
var_dump(count($array)); // <-- separation occurs here

Because count() accepts its value by-value, but $array is a PHP reference, a full copy of the array is done before passing it off to count(). If $array weren’t a reference, the value would be shared instead.

Now, let’s switch to the PHP 7 implementation of PHP references. Because zvals are no longer individually allocated, it is not possible to use the same approach that PHP 5 used. Instead a new IS_REFERENCE type is added, which uses the zend_reference structure as its value:

struct _zend_reference {
    zend_refcounted   gc;
    zval              val;
};

So essentially a zend_reference is simply a refcounted zval. All variables in a reference set will store a zval with type IS_REFERENCE pointing to the same zend_reference instance. The val zval behaves like any other zval, in particular it is possible to share a complex value it points to. E.g. an array can be shared between a variable that is a reference and another that is a value.
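
Code that wants the underlying value of a possibly-reference zval first "derefs" it, as the engine does with its ZVAL_DEREF macro. A toy sketch of this (hypothetical names; for simplicity the reference holds a pointer to its inner value rather than embedding it):

```c
#include <assert.h>

enum ref_type { R_LONG, R_REFERENCE };

typedef struct toy_zv toy_zv;

/* A refcounted container for the shared inner value, like zend_reference. */
typedef struct {
    int refcount;   /* number of variables in the reference set */
    toy_zv *val;
} toy_ref;

struct toy_zv {
    enum ref_type type;
    union {
        long lval;
        toy_ref *ref;
    } value;
};

/* Follow an IS_REFERENCE-style zval to the contained value;
   plain values are returned unchanged. */
toy_zv *toy_deref(toy_zv *z) {
    return (z->type == R_REFERENCE) ? z->value.ref->val : z;
}
```

After the deref, the engine's normal by-value machinery applies, which is why a referenced array can still be shared with non-reference variables.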

Let's go through the above code samples again, this time looking at the PHP 7 semantics. For the sake of brevity I will stop writing the individual zvals of the variables and only show what structure they point to.

$a = [];  // $a                                     -> zend_array_1(refcount=1, value=[])
$b =& $a; // $a, $b -> zend_reference_1(refcount=2) -> zend_array_1(refcount=1, value=[])

$b[] = 1; // $a, $b -> zend_reference_1(refcount=2) -> zend_array_1(refcount=1, value=[1])

The by-reference assignment created a new zend_reference. Note that the refcount is 2 on the reference (because two variables are part of the PHP reference set), but the value itself only has a refcount of 1 (because one zend_reference structure points to it). Now consider the case where references and non-references are mixed:

$a = [];  // $a         -> zend_array_1(refcount=1, value=[])
$b = $a;  // $a, $b     -> zend_array_1(refcount=2, value=[])
$c = $b;  // $a, $b, $c -> zend_array_1(refcount=3, value=[])

$d =& $c; // $a, $b                                 -> zend_array_1(refcount=3, value=[])
          // $c, $d -> zend_reference_1(refcount=2) ---^
          // Note that all variables share the same zend_array, even though some are
          // PHP references and some aren't.

$d[] = 1; // $a, $b                                 -> zend_array_1(refcount=2, value=[])
          // $c, $d -> zend_reference_1(refcount=2) -> zend_array_2(refcount=1, value=[1])
          // Only at this point, once an assignment occurs, the zend_array is duplicated.

The important difference from PHP 5 is that all variables were able to share the same array, even though some were PHP references and some weren't. Only once some kind of modification is performed will the array be separated. This means that in PHP 7 it's safe to pass a large, referenced array to count(); it is not going to be duplicated. References will still be slower than normal values, because they require allocation of the zend_reference structure (and indirection through it) and are usually not handled in the fast path of engine code.
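As a rough illustration of the count() case (timings are machine-dependent; this sketch only shows that the call works on a referenced array, and on PHP 7 completes without copying the million-element array first):

```php
<?php
// On PHP 7, count() on a referenced array triggers no separation/copy;
// on PHP 5, the same call would duplicate the whole array beforehand.
$array = range(0, 1000000);
$ref =& $array;

$start = microtime(true);
$n = count($array);
$elapsed = microtime(true) - $start;

var_dump($n); // int(1000001)
printf("count() took %.6f seconds\n", $elapsed);
```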

Wrapping up

To summarize, the primary change implemented in PHP 7 is that zvals are no longer individually heap-allocated and no longer store a refcount themselves. Instead, any complex values they may point to (like strings, arrays, or objects) store the refcount themselves. This usually leads to fewer allocations, less indirection, and lower memory usage.

In the second part of this article the remaining complex types will be discussed.

News stories from Tuesday 14 April, 2015

Favicon for Fabien Potencier 23:00 Blackfire, a new Profiler for PHP Developers » Post from Fabien Potencier Visit off-site link


I've always been fascinated by debugging tools; tools that help you understand what's going on in your code. In the Symfony world, the web debug toolbar and the web profiler are tools that give a lot of information about HTTP request/response pairs (from exceptions to logs, submitted forms and even an event timeline), but they are only available in development mode, as enabling those features in production would have too significant a performance impact. The Symfony profiler is also more about giving metadata about the code execution and less about what is executed.

If you want to understand which part of your code is executed for any given request, and where the server resources are spent, you need special tools; tools that instrument your code at the C level. The oldest tool able to do that is XDebug, and a few years ago Facebook also open-sourced XHProf. Both XDebug (used as a profiler) and XHProf are able to answer a lot of questions you might have about the performance of your code, and they can help you understand why your code is slow.

But even if tools are available, performance monitoring in the PHP world is not that widespread. You are probably writing unit tests for your applications to ensure that you don't accidentally deploy broken features and to avoid regressions when you are fixing bugs. But what about performance? A broken page is a problem, but what about a page that takes seconds to display? Less performance means less business. So, continuously testing the performance of your applications should be a critical part of your development workflow.

Enter Blackfire. Blackfire is a PHP profiler that simplifies the profiling of an app as much as possible.

The first big difference with existing tools is the installation process; we've made it straightforward by providing easy-to-follow instructions for a lot of different platforms and Blackfire is even included by default on some major PHP cloud providers.

Once installed, profiling an HTTP request is as easy as it can get: use the Google Chrome extension to profile web pages from your browser, or use the command line tool to profile web services, APIs, PHP CLI scripts, or even long-running scripts like daemons or workers.

The other major difference from the other existing tools comes from the fact that Blackfire is a SaaS product. It lets us do a lot of things that would not be possible otherwise, like storing the history of your profiles, making comparisons between two profiles really easy, or providing a rich and interactive UI that evolves on a day-to-day basis.

If you've used XHProf in the past, you might wonder if it would make sense for you to upgrade to Blackfire. First, and contrary to popular belief, the current Blackfire PHP extension is no longer based on the XHProf code. Starting from scratch helped us lower the overhead and structure the code for extensibility.

Then, and besides the "better experience", Blackfire offers some unique features like:

  • Profile your applications without changing a single line of code;
  • Easily focus on the code you need to optimize thanks to more accurate results, aggregation, and smart cleaning of data;
  • More information about CPU time and I/O time;
  • No performance impact on the production servers when not using the profiler;
  • SQL statements and HTTP calls extraction;
  • Team profiling;
  • Profile sharing;
  • An API;
  • Garbage collector information;
  • The soon-to-be-announced Windows support;
  • And much more...

We are very active on our blog where you can learn more about the great features we are providing for developers and companies.

Blackfire has been in public beta for four months now and the response has been amazing so far. More than 20,000 developers have already signed up. You can read some user feedback on our Twitter account, and some of them even wrote about their experience on the Blackfire blog: I recommend the article from ownCloud as they did a lot of performance tweaks to make their code run faster thanks to Blackfire.

My mission with Blackfire is to give developers the best possible profiler for their applications. Try it out today for free and tell me what you think!

News stories from Wednesday 01 April, 2015

Favicon for Grumpy Gamer 08:00 Once Again... » Post from Grumpy Gamer Visit off-site link

In what's become a global internet tradition that will be passed down for generations to come...

Grumpy Gamer is 100% April Fools' joke free because April Fools' Day is a stupid fucking tradition.  There.  I said what everyone is thinking.

News stories from Tuesday 24 March, 2015

Favicon for ircmaxell's blog 16:00 Thoughts On The Design Of APIs » Post from ircmaxell's blog Visit off-site link
Developers as a whole suck at API design. We don't suck at making APIs. We don't suck at implementing them. We don't suck at using them (well, some more than others). But we do suck at designing them. In fact, we suck so much that we've made entire disciplines around trying to design better ones (BDD, DDD, TDD, etc). There are lots of reasons for this, but there are a few that I really want to focus on.

Read more »

News stories from Friday 20 March, 2015

Favicon for Web Mozarts 10:05 Managing Web Assets with Puli » Post from Web Mozarts Visit off-site link

Yesterday marked the release of the last beta version of Puli 1.0. Puli is now feature-complete and ready for you to try. The documentation has been updated and contains all the information that you need to get started. My current plan is to publish a Release Candidate by the end of the month and a first stable release at the end of April.

The most important addition since the last beta release is Puli’s new Asset Plugin. Today, I’d like to show you how this plugin helps to manage the web assets of your project and your installed Composer packages independent of any specific PHP framework.

What is Puli?

You never heard of Puli before? In a nutshell, Puli is a resource manager built on top of Composer. Just like Composer generates an autoloader for the classes in your Composer packages, Puli generates a resource repository that contains all files that are not PHP classes (images, CSS, XML, YAML, HTML, you name it). You can access these resources by simple paths prefixed with the name of the package:

echo $twig->render('/acme/blog/views/footer.html.twig');

The only exceptions are end-user applications, which have the prefix /app by convention:

echo $twig->render('/app/views/index.html.twig');

Read Puli at a Glance to get a better high-level view of Puli’s features.

Update 2015/04/06

This post was updated in order to reflect that Puli’s Web Resource Plugin was renamed to “Asset Plugin”.

Web Assets

Some resources – such as templates or configuration files – are needed by the web server only. Others – like CSS files and images – need to be placed in a public directory, where browsers can download them. I’ll call these files web assets here.

Puli’s Asset Plugin takes care of two things:

  • installing web assets in their public location;
  • generating the URLs for these assets.

The public location for installing assets is called an install target in Puli’s language. Puli supports virtually any kind of install target, such as:

  • the document root of your own web server
  • the document root of another web server
  • a Content Delivery Network (CDN)

Install targets store three pieces of information:

  • their location (a directory path, a URL, …)
  • the used installer (symlink, copy, ftp, rsync, …)
  • their URL format

The URL format is used to generate URLs for the assets installed in the target. The default format is /%s, but you could set it to more elaborate values such as
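Whatever format you pick, it behaves like a printf-style template in which %s is replaced by the asset's install path. A hypothetical illustration (the CDN hostname here is invented for the example):

```php
<?php
// Hypothetical URL format for a CDN install target; "%s" receives the
// path of the installed asset.
$format = 'https://static.example.com/%s';

echo sprintf($format, 'blog/css/style.css');
// https://static.example.com/blog/css/style.css
```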

Creating an Install Target

Let me walk you through a simple example of using the plugin for a typical project. We will work with the following setup:

  • the application’s assets are stored in the Puli path /app/public
  • the assets of the “acme/blog” package are stored in /acme/blog/public
  • all assets should be installed in the directory public_html

Before we can start, we need to install the plugin with Composer:

$ composer require puli/asset-plugin:~1.0

Make sure “minimum-stability” is set to “dev” in your composer.json file:

    "minimum-stability": "dev"

Activate the plugin with Puli’s Command Line Interface (CLI):

$ puli plugin install Puli\\AssetPlugin\\Api\\AssetPlugin

The plugin is loaded successfully if the command puli target succeeds:

$ puli target
No install targets. Use "puli target add <name> <directory>" to add a target.

Let’s create a target named “local” now that points to the aforementioned public_html directory:

$ puli target add local public_html

Run puli target again to see the target that you just added:

Result of the command "puli target"

Installing Web Assets

With the install target ready, we can now map resources to the target:

$ puli asset map /app/public /
$ puli asset map /acme/blog/public /blog

Let’s run puli asset to see the mappings we added:

The output of this command gives us a lot of information:

  • We added our assets to the default target, i.e. our only target “local”. In some cases, it is useful to have more than one install target.
  • The assets in /app/public will be installed in public_html.
  • The assets in /acme/blog/public will be installed in public_html/blog.

All that is left to do is install the assets:

You should be able to access your assets in the browser now.

Generating Resource URLs

Now that our assets are publicly available, our application needs to generate their proper URLs. If you use Twig, you can use the asset_url() function of Puli’s Twig Extension to do that:

<!-- /images/header.png -->
<img src="{{ asset_url('/app/public/images/header.png') }}" />

The function accepts absolute Puli paths or paths relative to the Puli path of your template:

<img src="{{ asset_url('../images/header.png') }}" />

If you need to generate URLs in PHP code, you can use Puli’s AssetUrlGenerator. Add the following setup code to your bootstrap file or your Dependency Injection Container:

// Puli setup
$factoryClass = PULI_FACTORY_CLASS;
$factory = new $factoryClass();
$repository = $factory->createRepository();
$discovery = $factory->createDiscovery($repository);
// URL Generator setup
$urlGenerator = $factory->createUrlGenerator($discovery);

Asset URLs can be generated with the generateUrl() method of the URL generator:

echo $urlGenerator->generateUrl('/app/public/images/header.png');
// /images/header.png

Read the Web Assets guide in the Puli Documentation if you want to learn more about handling web assets with Puli.

The Future of Packages in PHP

With Puli and especially with Puli’s Asset Plugin, we have exciting new possibilities of creating Composer packages that work with different frameworks at the same time. Basically, a bundle/plugin/module/… of the framework of your choice is reduced to:

  • PHP code, which is autoloaded by Composer’s autoloader.
  • Resource files that are managed and published by Puli.
  • A thin layer of configuration files/code for integrating your Package with a framework of your choice.

Since the framework-dependent code is reduced to a few configuration files or classes, it is possible to add support for multiple frameworks at the same time. For open-source developers, that's a great thing, because they have to maintain far fewer packages and much less code than before. For users of open-source software, that's a great thing too, because it becomes possible to use the magnificent package X with your framework Y, even though X was sadly developed for framework Z. I think that's exciting. Do you?

Let me know what you think in the comments. Read the Web Assets guide in the Puli Documentation if you want to learn more about the plugin.

News stories from Monday 16 March, 2015

Favicon for ircmaxell's blog 20:30 Dimensional Analysis » Post from ircmaxell's blog Visit off-site link
There's one skill that I learned in College that I wish everyone would learn. I wish it was taught to everyone in elementary school, it's that useful. It's also deceptively simple. So without any more introduction, let's talk about Dimensional Analysis:

Read more »

News stories from Thursday 12 March, 2015

Favicon for ircmaxell's blog 20:00 Security Issue: Combining Bcrypt With Other Hash Functions » Post from ircmaxell's blog Visit off-site link
The other day, I was directed at an interesting question on StackOverflow asking if password_verify() was safe against DoS attacks using extremely long passwords. Many hashing algorithms depend on the amount of data fed into them, which affects their runtime. This can lead to a DoS attack where an attacker can provide an exceedingly long password and tie up computer resources. It's a really good question to ask of Bcrypt (and password_hash). As you may know, Bcrypt is limited to 72 character passwords. So on the surface it looks like it shouldn't be vulnerable. But I chose to dig in further to be sure. What I found surprised me.
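Bcrypt's 72-character limit is easy to verify yourself with PHP's native password API; a small sketch:

```php
<?php
// Bcrypt only looks at the first 72 bytes of the password, so two
// passwords sharing those bytes verify against the same hash.
$prefix = str_repeat('a', 72);
$hash = password_hash($prefix . 'tail-one', PASSWORD_BCRYPT);

var_dump(password_verify($prefix . 'tail-two', $hash));   // bool(true)
var_dump(password_verify('completely different', $hash)); // bool(false)
```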

Read more »

News stories from Tuesday 10 March, 2015

Favicon for Ramblings of a web guy 00:04 Using socket_connect with a timeout » Post from Ramblings of a web guy Visit off-site link

I was having trouble with socket connections timing out reliably. Sometimes, my timeout would be reached. Other times, the connect would fail after three to six seconds. I finally figured out it had to do with trying to connect to a routable, non-localhost address. The function I finally ended up with reliably connects to a working server, fails quickly for an address/port that is not reachable, and honors the timeout for routable addresses that are not up.

I have put a version of my final function into a Gist on Github. I hope someone finds it useful.

Full Story

So, it seems that when you try and connect to an IP that is routable on the network, but not answering, the TCP stack has some built in timeouts that are not obvious. This differs from trying to connect to an IP address that is up, but not listening on a given port. We took a Gearman server down for maintenance and I noticed our warning logs were showing a 3 to 7 second delay between the attempt to queue jobs and the warning log. The timeout we had set was only 100ms. So, this seemed odd.

After a lot of messing around, a coworker pointed out that in production, the failures were happening for an IP that was routable on the network, but that had no host listening on the IP. I had been using localhost and some foreign port for my "failed" server. After using an IP that was local to our LAN but had no host listening on the IP, I was able to recreate it on a dev server. I figured out that if you set the send and receive timeouts really low before calling connect, you can loop while calling connect. You check the error state and timeout. As long as the error is an acceptable one and the timeout is not reached, keep trying until it connects. It works like a charm.

I found several similar examples to this on the web. However, none of them mixed all these techniques.

You can simply set the send and receive timeouts to your actual timeout and it will return quicker. However, the timeouts apply to the packets. And there are retry rules in place. So, I found that a 100ms timeout for each send and receive would wind up taking 500ms or so to actually fail. This was not what I wanted. I wanted more control. So, I set a 100 microsecond timeout during connect. This makes socket_connect return quickly. As long as the socket error is 115 (in progress) or 114 (already trying), we keep calling it. Unless of course our timeout is reached. Then we fail.
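That loop can be sketched as follows. This is my reconstruction from the description above, not the author's Gist: the function name is mine, and the SOCKET_EINPROGRESS (115), SOCKET_EALREADY (114), and SOCKET_EISCONN constants are Linux errno values exposed by the sockets extension.

```php
<?php
// Reconstruction of the technique: tiny send/receive timeouts make
// socket_connect() return quickly, then we retry until connected,
// a hard error, or our own deadline.
function connect_with_timeout($host, $port, $timeout_ms)
{
    $socket = socket_create(AF_INET, SOCK_STREAM, SOL_TCP);
    $tiny = ['sec' => 0, 'usec' => 100]; // 100 microseconds, as in the post
    socket_set_option($socket, SOL_SOCKET, SO_SNDTIMEO, $tiny);
    socket_set_option($socket, SOL_SOCKET, SO_RCVTIMEO, $tiny);

    $deadline = microtime(true) + $timeout_ms / 1000;
    while (!@socket_connect($socket, $host, $port)) {
        $err = socket_last_error($socket);
        if ($err === SOCKET_EISCONN) {
            return $socket; // connection completed between retries
        }
        // 115 = EINPROGRESS ("in progress"), 114 = EALREADY ("already trying")
        if ($err !== SOCKET_EINPROGRESS && $err !== SOCKET_EALREADY) {
            socket_close($socket);
            return false; // unreachable address/port: fail fast
        }
        if (microtime(true) >= $deadline) {
            socket_close($socket);
            return false; // routable but silent host: our timeout wins
        }
        usleep(100);
    }
    return $socket; // connected on the first try
}
```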

It works really well. Should help for doing server maintenance on our Gearman servers.

News stories from Saturday 21 February, 2015

Favicon for Grumpy Gamer 03:10 Thimbleweed Park Dev Blog » Post from Grumpy Gamer Visit off-site link

If you're wondering why it's so quiet over here at Grumpy Gamer, rest assured, it has nothing to do with me not being grumpy anymore.

The mystery can be solved by heading on over to the Thimbleweed Park Dev Blog and following fun antics of making a game.

News stories from Wednesday 11 February, 2015

Favicon for ircmaxell's blog 19:00 Scalar Types and PHP » Post from ircmaxell's blog Visit off-site link
There's currently a proposal that's under vote to add Scalar Typing to PHP (it has since been withdrawn). It's been a fairly controversial RFC, but at this point in time it's currently passing with 67.8% of votes. If you want a simplified breakdown of the proposal, check out Pascal Martin's excellent post about it. What I want to talk about is more of an opinion. Why I believe this is the correct approach to the problem.

I have now forked the original proposal and will be bringing it to a vote shortly.
Read more »

News stories from Tuesday 03 February, 2015

Favicon for Ramblings of a web guy 04:02 Most epic ticket of the day » Post from Ramblings of a web guy Visit off-site link
UPDATE: I should clarify. This ticket is an internal ticket at DealNews. It is about what the defaults on our servers should be. It is not about what the defaults should be in MySQL. The frustration that UTF8 support in MySQL is only 3 bytes is quite real.

 This epic ticket of the day is brought to you by Joe Hopkinson.

#7940: Default charset should be utf8mb4
 The RFC for UTF-8 states, AND I QUOTE:

 > In UTF-8, characters from the U+0000..U+10FFFF range (the UTF-16
 accessible range) are encoded using sequences of 1 to 4 octets.

 What's that? You don't believe me?! Well, you can read it for yourself

 What is an octet, you ask? It's a unit of digital information in computing
 and telecommunications that consists of eight bits. (Hence, __oct__et.)

 "So what?", said the neck bearded MySQL developer dressed as Neo from the
 Matrix, as he smugly quaffed a Surge and settled down to play Virtua
 Fighter 4 on his dusty PS2.

 So, if you recall from your Pre-Intro to Programming, 8 bits = 1 byte.
 Thus, the RFC states that the maximum storage requirement for a
 multibyte character is 4 bytes.

 I know that RFCs are more of a GUIDELINE, right? It's not like they could be
 considered a standard or anything! It's not like there should be an
 implicit contract when an implementor decides to use a label like "UTF-8"!

 Because of you, we have to strip our readers' carefully crafted emoji.
 Because of you, our search term data will never be exact. Because of you,
 we have to spend COUNTLESS HOURS altering every table that we have (which
 is a lot, by the way) to make sure that we can support a standard that was
 written in 2003!

 A cursory search shows that shortly after 2003, MySQL release quality
 started to tank. I can only assume that was because of you.


 * The default charset should be utf8mb4.
 * Alter and test critical business processes.
 * Change OrderedFunctionSet to generate the appropriate tables.
 * Generate ptosc or propagator scripts to update everything else, as needed.
 * Curse the MySQL developer who caused this.
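The four-octet point in the ticket is easy to demonstrate (the `\u{...}` escape requires PHP 7):

```php
<?php
// U+1F600 (grinning face) encodes to 4 octets in UTF-8 -- exactly the
// case MySQL's 3-byte "utf8" charset cannot store, but utf8mb4 can.
$emoji = "\u{1F600}";

var_dump(strlen($emoji));             // int(4) -- bytes
var_dump(mb_strlen($emoji, 'UTF-8')); // int(1) -- characters
```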

News stories from Wednesday 28 January, 2015

Favicon for #openttdcoop 23:40 Server/DevZone Outtage » Post from #openttdcoop Visit off-site link


As you may have noticed, our services have had some outage. This happened during maintenance that was required for security updates related to CVE-2015-0235 (the glibc story). When we rebooted the server, the scariest thing happened for us: our server did not come back online. After some help from our hosting provider we managed to log back in.

To make the most out of this situation we also immediately started converting some of our local containers to a disk image format (PLOOP). However, because one of our main containers, which holds all the HG repositories, has so many small files, this conversion is taking longer than expected.

We want to apologize for this situation and are waiting for this container conversion to finish. After that, the most critical containers should all have been converted, and most of the remaining ones are related to non-development stuff that should not see extended downtime like this.



News stories from Tuesday 27 January, 2015

Favicon for #openttdcoop 20:26 RAWR!!! » Post from #openttdcoop Visit off-site link

Ladies and nutmen,

just now I am realizing I forgot to officially mention that I have been working on another project for the past months. RAWR Absolute World Replacement is currently 32bpp/ExtraZoom LANDSCAPE with ROADS and TRACKS. Eventually I am hoping to replace all the sprites the game needs, and the final output then could be a full base set.

Visually, the set is obviously 32bpp/ExtraZoom which looks relatively nice. Functionally, it lets you choose from the 4 climates and force any of them visually. That way you can apply any of them you want – especially if you load the newGRF as a static one. I hope you like it, there is still a lot of things to be done, but the core is there.

The project home is at the devzone per usual – you can also find a guide on how to apply static NewGRFs. I also have a thread at tt-forums, you are welcome to contribute/place your impressions/screenshots there 🙂

You can download RAWR from the online content – BaNaNaS – through the game, or from the website manually.
Enjoy and let me know what you think!



News stories from Tuesday 20 January, 2015

Favicon for Joel on Software 01:14 Stack Exchange Raises $40m » Post from Joel on Software Visit off-site link

Today Stack Exchange is pleased to announce that we have raised $40 million, mostly from Andreessen Horowitz.

Everybody wants to know what we’re going to do with all that money. First of all, of course we’re going to gold-plate the Aeron chairs in the office. Then we’re going to upgrade the game room, and we’re already sending lox platters to our highest-rep users.

But I’ll get into that in a minute. First, let me catch everyone up on what’s happening at Stack Exchange.

In 2008, Jeff Atwood and I set out to fix a problem for programmers. At the time, getting answers to programming questions online was super annoying. The answers that we needed were hidden behind paywalls, or buried in thousands of pages of stale forums.

So we built Stack Overflow with a single-minded, compulsive, fanatical obsession with serving programmers with a better Q&A site.

Everything about how Stack Overflow works today was designed to make programmers’ jobs easier. We let members vote up answers, so we can show you the best answer first. We don’t allow opinionated questions, because they descend into flame wars that don’t help people who need an answer right now. We have scrupulously avoided any commercialization of our editorial content, because we want to have a site that programmers can trust.

Heck, we don’t even allow animated ads, even though they are totally standard on every other site on the Internet, because it would be disrespectful to programmers to strain their delicate eyes with a dancing monkey, and we can’t serve them 100% if we are distracting them with a monkey. That would only be serving them 98%. And we’re OBSESSED, so 98% is like, we might as well close this all down and go drive taxis in Las Vegas.

Anyway, it worked! Entirely thanks to you. An insane number of developers stepped up to pass on their knowledge and help others. Stack Overflow quickly grew into the largest, most trusted repository of programming knowledge in the world.

Quickly, Jeff and I discovered that serving programmers required more than just code-related questions, so we built Server Fault and Super User. And when that still didn’t satisfy your needs, we set up Stack Exchange so the community could create sites on new topics. Now when a programmer has to set up a server, or a PC, or a database, or Ubuntu, or an iPhone, they have a place to go to ask those questions that are full of the people who can actually help them do it.

But you know how programmers are. They “have babies.”  Or “take pictures of babies.” So our users started building Stack Exchange sites on unrelated topics, like parenting and photography, because the programmers we were serving expected—nay, demanded!—a place as awesome as Stack Overflow to ask about baby feeding schedules and f-stops and whatnot.

And we did such a good job of serving programmers that a few smart non-programmers looked at us and said, “Behold! I want that!” and we thought, hey!  What works for developers should work for a lot of other people, too, as long as they’re willing to think like developers, which is the best way to think. So, we decided that anybody who wants to get with the program is welcome to join in our plan. And these sites serve their own communities of, you know, bicycle mechanics, or what have you, and make the world safer for the Programmer Way Of Thinking and thus serve programmers by serving bicycle mechanics.

In the five years since then, our users have built 133 communities. Stack Overflow is still the biggest. It reminds me of those medieval maps of the ancient world. The kind that shows a big bustling city (Jerusalem) smack dab in the middle, with a few smaller settlements around the periphery. (Please imagine Gregorian chamber music).

View of Jerusalem
Stack Overflow is the big city in the middle. Because the programmer-city worked so well, people wanted to ask questions about other subjects, so we let them build other Q&A villages in the catchment area of the programmer-city. Some of these Q&A villages became cities of their own. The math cities barely even have any programmers and they speak their own weird language. They are math-Jerusalem. They make us very proud. Even though they don’t directly serve programmers, we love them and they bring a little tear to our eyes, like the other little villages, and they’re certainly making the Internet—and the world—better, so we’re devoted to them.

One of these days some of those villages will be big cities, so we’re committed to keeping them clean, and pulling the weeds, and helping them grow.

But let’s go back to programmer Jerusalem, which—as you might expect—is full of devs milling about, building the ENTIRE FUTURE of the HUMAN RACE, because, after all, software is eating the world and writing software is just writing a script for how the future will play out.

So given the importance of software and programmers, you might think they all had wonderful, satisfying jobs that they love.

But sadly, we saw that was not universal. Programmers often have crappy jobs, and their bosses often poke them with sharp sticks. They are underpaid, and they aren’t learning things, and they are sometimes overqualified, and sometimes underqualified. So we decided we could actually make all the programmers happier if we could move them into better jobs.

That’s why we built Stack Overflow Careers. This was the first site that was built for developers, not recruiters. We banned the scourge of contingency recruiters (even if they have big bank accounts and are just LINING UP at the Zion Gate trying to get into our city to feed on programmer meat, but, to hell with them). We are SERVING PROGRAMMERS, not spammers. Bye Felicia.

Which brings us to 2015.

The sites are still growing like crazy. By our measurements, the Stack Exchange network is already in the top 50 of all US websites, ranked by number of unique visitors, with traffic still growing at 25% annually. The company itself has passed 200 employees worldwide, with big plush offices in Denver, New York, and London, and dozens of amazing people who work from the comfort of their own homes. (By the way, if 200 people seems like a lot, keep in mind that more than half of them are working on Stack Overflow Careers).

We could just slow down our insane hiring pace and get profitable right now, but it would mean foregoing some of the investments that let us help more developers. To be honest, we literally can’t keep up with the features we want to build for our users. The code is not done yet—we’re dedicating a lot of resources to the core Q&A engine. This year we’ll work on improving the experience for both new users and highly experienced users.

And let’s not forget Stack Overflow Careers. I believe it is, bar-none, the single best job board for developer candidates, which should  automatically make it the best place for employers to find developer talent. There’s a LOT more to be done to serve developers here and we’re just getting warmed up.

So that’s why we took this new investment of $40m.

We’re ecstatic to have Andreessen Horowitz on board. The partners there believe in our idea of programmers taking over (it was Marc Andreessen who coined the phrase “Software is eating the world”). Chris Dixon has been a personal investor in the company since the beginning and has always known we’d be the obvious winner in the Q&A category, and will be joining our board of directors as an observer.

This is not the first time we’ve raised money; we’re proud to have previously taken investments from Union Square Ventures, Index Ventures, Spark Capital, and Bezos Expeditions. We only take outside money when we are 100% confident that the investors share our philosophy completely and after our lawyers have done a ruthless (sorry, investors) job of maintaining control so that it is literally impossible for anyone to mess up our vision of fanatically serving the people who use our site, and continuing to make the Internet a better place to get expert answers to your questions.

For those of you who have been with us since the early days of Our Incredible Journey, thank you. For those of you who are new, welcome. And if you want to learn more, check out our hott new “about” page. Or ask!

News stories from Wednesday 14 January, 2015

Favicon for Web Mozarts 16:39 Resource Discovery with Puli » Post from Web Mozarts Visit off-site link

Two days ago, I announced Puli’s first beta release. If you haven’t heard about Puli before, I recommend reading that blog post as well as the Puli at a Glance guide in Puli’s documentation.

Today, I would like to show you how Puli’s Discovery Component helps you to build and use powerful Composer packages with less work and more fun than ever before.

The Problem

Many libraries support configuration code, translations, HTML themes or other content in files of a specific format. The Doctrine ORM, for example, is able to load entity mappings from special XML files:

<!-- res/config/doctrine/Acme.Blog.Post.dcm.xml -->
<doctrine-mapping ...>
    <entity name="Acme\Blog\Post">
        <field name="name" type="string" />
    </entity>
</doctrine-mapping>

This mapping, stored in the file Acme.Blog.Post.dcm.xml in our fictional “acme/blog” package, contains all the information Doctrine needs to save our Acme\Blog\Post object in the database.

When setting up Doctrine, we need to pass the location of the *.dcm.xml file to Doctrine’s XmlDriver. That’s easy as long as we do it ourselves, but:

  • What if someone else uses our package? How will they find our file?
  • What if multiple packages provide *.dcm.xml files? How do we find all these files?
  • We need to remove the appropriate setup code after removing a package.
  • We need to adapt the setup code after installing a new package.

Multiply this effort for every other library that uses user-provided files and you end up with a lot of configuration effort. Let’s see how Puli helps us to fix this.

Package Roles

For better understanding, it’s useful to assign two different roles to our packages:

  • Resource consumers, like Doctrine, process files of a certain format.
  • Resource providers, like our “acme/blog” package, ship such files.

Puli connects consumers and providers through a mechanism called resource binding. Resource binding is a very simple mechanism:

  1. At first, the consumer defines a binding type.
  2. Then, one or multiple providers bind resources to these types.
  3. Finally, the consumer fetches all the resources bound to their type and does something with them.

Let’s put on the hat of a Doctrine developer and see how this works in practice.

Discovering Resources

We start by defining the binding type “doctrine/xml-mapping” with Puli’s Command Line Interface (CLI):

$ puli type define doctrine/xml-mapping \
    --description "An XML entity mapping loaded by Doctrine's PuliDriver"

We passed a nicely readable description that is displayed when typing puli type:

Result of the command "puli type"

Great! Now we’ll use Puli’s ResourceDiscovery to find all the Puli resources bound to our type:

foreach ($discovery->find('doctrine/xml-mapping') as $binding) {
    foreach ($binding->getResources() as $resource) {
        // load $resource
    }
}

Remember we’re still wearing the Doctrine developer hat? Let’s put this code into a PuliDriver class so that anybody can easily configure Doctrine to load Puli resources.

Binding Resources

Now, we’ll put on the “acme/blog” developer hat. Let’s bind the XML file from before to Doctrine’s binding type:

$ puli bind /acme/blog/config/doctrine/*.xml doctrine/xml-mapping

The bind command accepts two parameters:

  • The path or glob for the Puli resources we want to bind.
  • The name of the binding type.

We can use puli find to check which resources match the binding:

Result of the command "puli find"

Apparently our XML file was registered successfully.

Application Setup

We’ll change hats one last time. This time, we’ll wear your hat. What do we have to do to use both the “doctrine/orm” package and the “acme/blog” package in our application?

The first thing obviously is to install the packages and the Puli CLI with Composer:

$ composer require doctrine/orm acme/blog puli/cli

Once this is done, we have to configure Doctrine to use the PuliDriver:

use Doctrine\ORM\Configuration;
// Puli setup
$factoryClass = PULI_FACTORY_CLASS;
$factory = new $factoryClass();
$repo = $factory->createRepository();
$discovery = $factory->createDiscovery($repo);
// Doctrine setup
$config = new Configuration();
$config->setMetadataDriverImpl(new PuliDriver($discovery));
// ...

With as little effort as this, Doctrine will now use all the resources bound to the “doctrine/xml-mapping” type in any installed Composer package.

Will it though?

Enabled and Disabled Bindings

Automatically loading stuff from all Composer packages is a bit scary, hence Puli does not enable bindings in your installed packages by default. We can see these bindings when typing puli bind:

Result of the command "puli bind"

If we trust the “acme/blog” developer and actually want to use the binding, we can do so by typing:

$ puli bind --enable 653fc9

That’s all, folks. :) Read more about resource discovery with Puli in the Resource Discovery guide in the documentation. And please leave me your comments below.

News stories from Monday 12 January, 2015

Favicon for Web Mozarts 19:59 Puli 1.0 Beta Released » Post from Web Mozarts Visit off-site link

Today marks the end of a month of very intense development of the Puli library. On December 3rd, 2014 the first alpha version of most of the Puli components and extensions was released. Today, a little more than a month later, I am proud to present to you the first beta release of all the libraries in the Puli ecosystem!

What is Puli?

If you missed my previous blog post, you are probably wondering what this Puli thing is. In short, Puli (pronounced “poo-lee”) is a toolkit which lets you map paths of a virtual resource repository to paths in your Composer package. For example, as the developer of the “acme/blog” package, I can map the path “/acme/blog” to the “res” directory in my package:

$ puli map /acme/blog res

After running this command, I can access all the files in my “res” directory through the Puli path “/acme/blog”. For example, if I’m using Puli’s Twig extension:

// res/views/post.html.twig
echo $twig->render('/acme/blog/views/post.html.twig');

But I’m not the only one who can do this: every developer using my package can do the same, and I can use the Puli paths of every other package. Basically, Puli is like PSR-4 autoloading for anything that’s not PHP.

You should read the Puli at a Glance guide to learn more about Puli’s exciting possibilities.

The Puli Components

Puli consists of a few core components that implement Puli’s basic functionality. First, let’s talk about the components that you are most likely to integrate into your applications and libraries:

  • The Repository Component implements a PHP API for the persistent storage of arbitrary resources in a resource repository:
    use Puli\Repository\FilesystemRepository;
    use Puli\Repository\Resource\DirectoryResource;
    $repo = new FilesystemRepository();
    $repo->add('/config', new DirectoryResource('/path/to/resources/config'));
    // /path/to/resources/config/routing.yml
    echo $repo->get('/config/routing.yml')->getBody();
  • The Discovery Component allows you to define binding types and let other packages bind resources to these types. Read the Resource Discovery guide in the documentation to learn more about this topic.
  • The Factory Component contains a single interface PuliFactory. This interface creates repositories and discoveries for you. You can either implement the interface manually, or – and that’s what you usually do – let Puli generate one for you.

Next come the components that you use as a developer in your daily life:

  • The Command Line Interface (CLI) lets you map repository paths, browse the repository, define binding types and bindings and much more by typing a few simple commands in your terminal. The CLI also builds a factory that you can use to load the repository and the discovery in your code:
    $factoryClass = PULI_FACTORY_CLASS;
    $factory = new $factoryClass();
    // If you need the resource repository
    $repo = $factory->createRepository();
    // If you need the resource discovery
    $discovery = $factory->createDiscovery($repo);

    The configuration that you pass to the CLI is stored in a puli.json file in the root of your Composer package. This file should be distributed with your package.

  • The Composer Plugin loads the puli.json files of all installed Composer packages. Through the plugin, you can access any of the resources and bindings that come with any of the libraries you use.
  • The Repository Manager implements the actual business logic behind the CLI and the Composer Plugin. This is Puli’s workhorse.

The Puli Extensions

Currently, Puli features a few extensions that are mostly targeted at the Symfony ecosystem, because – quite simply – that’s the framework I know best. As soon as the first stable release of Puli is out, I would like to work on extensions for other PHP frameworks, but I could use your help with that.

The following extensions are currently available:

Supporting Libraries

During Puli’s development, I created a few small supporting libraries, because I couldn’t find existing ones of the quality needed to build a solid foundation for Puli. These libraries also had their release today:

  • webmozart/path-util provides robust, cross-platform utility functions for normalizing and transforming filesystem paths. After using it for a few months, I love its simplicity already. I highly recommend giving it a try.
  • webmozart/key-value-store provides a simple yet robust KeyValueStore interface with implementations for various backends.
  • webmozart/json is a wrapper for json_encode()/json_decode() that normalizes their behavior across PHP versions and features integrated JSON Schema validation.
  • webmozart/glob implements Git-like globbing in that wildcards (“*”) match both characters and directory separators. I was made aware today that a similar utility seems to exist in the Symfony Finder component, so I’ll look into merging the two packages.

Road Map

I would like to release a stable version of the fundamental Repository, Discovery and Factory components by the end of January 2015. These components are quite stable already and I don’t expect any serious changes.

The CLI, Composer Plugin and Repository Manager are a bit more complex. They have undergone heavy changes during the last weeks. All the functionality that is planned for the final release is implemented now, but the components need testing and polishing. I plan to release a final version of these packages in February or March 2015.

Feedback Wanted

To make the stable release a success, I need your feedback! Please integrate Puli, test it and use it. However – as with any beta version – please don’t use it in production.

Read Puli at a Glance and Getting Started to get started. Happy coding! :)

Please leave me your feedback below. Follow PuliPHP on Twitter to receive all the latest news about Puli.

News stories from Friday 09 January, 2015

Favicon for Grumpy Gamer 17:49 I Was A Teenage Lobot » Post from Grumpy Gamer Visit off-site link

This was the first design document I worked on while at Lucasfilm Games. It was just after Koronis Rift finished and I was really hoping I wouldn't get laid off.  When I first joined Lucasfilm, I was a contractor, not an employee. I don't remember why that was, but I wanted to get hired on full time. I guess I figured I'd show how indispensable I was by helping to churn out game design gold like this.

This is probably one of the first appearances of "Chuck", who would go on to "Chuck the Plant" fame.

You'll also notice the abundance of TM's all over the doc. That joke never gets old.  Right?

Many thanks to Aric Wilmunder for saving this document.

Shameless plug to visit the Thimbleweed Park Development Diary.


News stories from Friday 02 January, 2015

Favicon for Grumpy Gamer 00:40 Thimbleweed Park Development Diary » Post from Grumpy Gamer Visit off-site link

The Thimbleweed Park Development Diary is now live. Updated at least every Monday, probably much more.

News stories from Wednesday 31 December, 2014

Favicon for ircmaxell's blog 20:00 2014 - A Year In Review » Post from ircmaxell's blog Visit off-site link
Wow, another year gone by. Where does the time go? Well, considering I've written a year-end summary the past 2 years, I've decided to do it again for this year. So here it is, 2014 in review:

Read more »

News stories from Tuesday 30 December, 2014

Favicon for ircmaxell's blog 19:00 PHP Install Statistics » Post from ircmaxell's blog Visit off-site link
After yesterday's post, I decided to do some math to see how many PHP installs had at least 1 known security vulnerability. So I went to grab statistics from W3Techs, and correlated that with known Linux Distribution supported numbers. I then whipped up a spreadsheet and got some interesting numbers out of it. So interesting, that I need to share...
Read more »

News stories from Monday 29 December, 2014

Favicon for ircmaxell's blog 21:00 Being A Responsible Developer » Post from ircmaxell's blog Visit off-site link
Last night, I was listening to the combined DevHell and PHPTownHall Mashup podcast recording, listening to them discuss a topic I talked about in my last blog post. While they definitely understood my points, they for the most part disagreed with me (there was some contention in the discussion though). I don't mind that they disagreed, but I was rather taken aback by their justification. Let me explain...

Read more »

News stories from Thursday 25 December, 2014

Favicon for #openttdcoop 23:35 New member: Hazzard » Post from #openttdcoop Visit off-site link

Hell000 and Merry Christmas! We are happy to announce that our inner circles have gained yet another person, Hazzard!

Being around for a long while, most of you probably know him, but if you don’t, Hazzard is a great builder and person. His logic mechanisms and other construction put your brains in greater hazard when you see them. He has been generally very helpful, teaching people, being a nice person, and everything else.

Everybody, please welcome Hazzard to the openttdcoop members club!

News stories from Wednesday 24 December, 2014

Favicon for Grumpy Gamer 21:54 Happy Holidays » Post from Grumpy Gamer Visit off-site link


News stories from Monday 22 December, 2014

Favicon for nikic's Blog 01:00 PHP's new hashtable implementation » Post from nikic's Blog Visit off-site link

About three years ago I wrote an article analyzing the memory usage of arrays in PHP 5. As part of the work on the upcoming PHP 7, large parts of the Zend Engine have been rewritten with a focus on smaller data structures requiring fewer allocations. In this article I will provide an overview of the new hashtable implementation and show why it is more efficient than the previous implementation.

To measure memory utilization I am using the following script, which tests the creation of an array with 100000 distinct integers:

$startMemory = memory_get_usage();
$array = range(1, 100000);
echo memory_get_usage() - $startMemory, " bytes\n";

The following table shows the results using PHP 5.6 and PHP 7 on 32bit and 64bit systems:

        |   32 bit |    64 bit
PHP 5.6 | 7.37 MiB | 13.97 MiB
PHP 7.0 | 3.00 MiB |  4.00 MiB

In other words, arrays in PHP 7 use about 2.5 times less memory on 32bit and 3.5 on 64bit (LP64), which is quite impressive.

Introduction to hashtables

In essence PHP’s arrays are ordered dictionaries, i.e. they represent an ordered list of key/value pairs, where the key/value mapping is implemented using a hashtable.

A hashtable is a ubiquitous data structure, which essentially solves the problem that computers can only directly represent continuous integer-indexed arrays, whereas programmers often want to use strings or other complex types as keys.

The concept behind a hashtable is very simple: The string key is run through a hashing function, which returns an integer. This integer is then used as an index into a “normal” array. The problem is that two different strings can result in the same hash, as the number of possible strings is virtually infinite while the hash is limited by the integer size. As such hashtables need to implement some kind of collision resolution mechanism.

There are two primary approaches to collision resolution: Open addressing, where elements will be stored at a different index if a collision occurs, and chaining, where all elements hashing to the same index are stored in a linked list. PHP uses the latter mechanism.

Typically hashtables are not explicitly ordered: The order in which elements are stored in the underlying array depends on the hashing function and will be fairly random. But this behavior is not consistent with the semantics of PHP arrays: If you iterate over a PHP array you will get back the elements in the exact order in which they were inserted. This means that PHP’s hashtable implementation has to support an additional mechanism for remembering the order of array elements.

The old hashtable implementation

I’ll only provide a short overview of the old hashtable implementation here; for a more comprehensive explanation please see the hashtable chapter of the PHP Internals Book. The following graphic is a very high-level view of what a PHP 5 hashtable looks like:


The elements in the “collision resolution” chain are referred to as “buckets”. Every bucket is individually allocated. What the image glosses over are the actual values stored in these buckets (only the keys are shown here). Values are stored in separately allocated zval structures, which are 16 bytes (32bit) or 24 bytes (64bit) large.

Another thing the image does not show is that the collision resolution list is actually a doubly linked list (which simplifies deletion of elements). Next to the collision resolution list, there is another doubly linked list storing the order of the array elements. For an array containing the keys "a", "b", "c" in this order, this list could look as follows:


So why was the old hashtable structure so inefficient, both in terms of memory usage and performance? There are a number of primary factors:

  • Buckets require separate allocations. Allocations are slow and additionally require 8 / 16 bytes of allocation overhead. Separate allocations also means that the buckets will be more spread out in memory and as such reduce cache efficiency.
  • Zvals also require separate allocations. Again this is slow and incurs allocation header overhead. Furthermore this requires us to store a pointer to a zval in each bucket. Because the old implementation was overly generic it actually needed not just one, but two pointers for this.
  • The two doubly linked lists require a total of four pointers per bucket. This alone takes up 16 / 32 bytes. Furthermore, traversing linked lists is a very cache-unfriendly operation.

The new hashtable implementation tries to solve (or at least ameliorate) all of these problems.

The new zval implementation

Before getting to the actual hashtable, I’d like to take a quick look at the new zval structure and highlight how it differs from the old one. The zval struct is defined as follows:

struct _zval_struct {
	zend_value value;
	union {
		struct {
			ZEND_ENDIAN_LOHI_4(
				zend_uchar type,
				zend_uchar type_flags,
				zend_uchar const_flags,
				zend_uchar reserved)
		} v;
		uint32_t type_info;
	} u1;
	union {
		uint32_t var_flags;
		uint32_t next;       /* hash collision chain */
		uint32_t cache_slot; /* literal cache slot */
		uint32_t lineno;     /* line number (for ast nodes) */
	} u2;
};

You can safely ignore the ZEND_ENDIAN_LOHI_4 macro in this definition - it is only present to ensure a predictable memory layout across machines with different endianness.

The zval structure has three parts: The first member is the value. The zend_value union is 8 bytes large and can store different kinds of values, including integers, strings, arrays, etc. What is actually stored in there depends on the zval type.

The second part is the 4 byte type_info, which consists of the actual type (like IS_STRING or IS_ARRAY), as well as a number of additional flags providing information about this type. E.g. if the zval is storing an object, then the type flags would say that it is a non-constant, refcounted, garbage-collectible, non-copying type.

The last 4 bytes of the zval structure are normally unused (it’s really just explicit padding, which the compiler would introduce automatically otherwise). However in special contexts this space is used to store some extra information. E.g. AST nodes use it to store a line number, VM constants use it to store a cache slot index and hashtables use it to store the next element in the collision resolution chain - that last part will be important to us.

If you compare this to the previous zval implementation, one difference particularly stands out: The new zval structure no longer stores a refcount. The reason behind this is that the zvals themselves are no longer individually allocated. Instead the zval is directly embedded into whatever is storing it (e.g. a hashtable bucket).

While the zvals themselves no longer use refcounting, complex data types like strings, arrays, objects and resources still use them. Effectively the new zval design has pushed out the refcount (and information for the cycle-collector) from the zval to the array/object/etc. There are a number of advantages to this approach, some of them listed in the following:

  • Zvals storing simple values (like booleans, integers or floats) no longer require any allocations. So this saves the allocation header overhead and improves performance by avoiding unnecessary allocs and frees and improving cache locality.
  • Zvals storing simple values don’t need to store a refcount and GC root buffer.
  • We avoid double refcounting. E.g. previously objects both used the zval refcount and an additional object refcount, which was necessary to support by-object passing semantics.
  • As all complex values now embed a refcount, they can be shared independently of the zval mechanism. In particular it is now also possible to share strings. This is important to the hashtable implementation, as it no longer needs to copy non-interned string keys.

The new hashtable implementation

With all the preliminaries behind us, we can finally look at the new hashtable implementation used by PHP 7. Let’s start by looking at the bucket structure:

typedef struct _Bucket {
	zend_ulong        h;
	zend_string      *key;
	zval              val;
} Bucket;

A bucket is an entry in the hashtable. It contains pretty much what you would expect: A hash h, a string key key and a zval value val. Integer keys are stored in h (the key and hash are identical in this case), in which case the key member will be NULL.

As you can see the zval is directly embedded in the bucket structure, so it doesn’t have to be allocated separately and we don’t have to pay for allocation overhead.

The main hashtable structure is more interesting:

typedef struct _HashTable {
	uint32_t          nTableSize;
	uint32_t          nTableMask;
	uint32_t          nNumUsed;
	uint32_t          nNumOfElements;
	zend_long         nNextFreeElement;
	Bucket           *arData;
	uint32_t         *arHash;
	dtor_func_t       pDestructor;
	uint32_t          nInternalPointer;
	union {
		struct {
			ZEND_ENDIAN_LOHI_3(
				zend_uchar    flags,
				zend_uchar    nApplyCount,
				uint16_t      reserve)
		} v;
		uint32_t flags;
	} u;
} HashTable;

The buckets (= array elements) are stored in the arData array. This array is allocated in powers of two, with the size being stored in nTableSize (the minimum value is 8). The actual number of stored elements is nNumOfElements. Note that this array directly contains the Bucket structures. Previously we used an array of pointers to separately allocated buckets, which means that we needed more alloc/frees, had to pay allocation overhead and also had to pay for the extra pointer.

Order of elements

The arData array stores the elements in order of insertion. So the first array element will be stored in arData[0], the second in arData[1] etc. This does not in any way depend on the used key, only the order of insertion matters here.

So if you store five elements in the hashtable, slots arData[0] to arData[4] will be used and the next free slot is arData[5]. We remember this number in nNumUsed. You may wonder: Why do we store this separately, isn’t it the same as nNumOfElements?

It is, but only as long as only insertion operations are performed. If an element is deleted from a hashtable, we obviously don’t want to move all elements in arData that occur after the deleted element in order to have a continuous array again. Instead we simply mark the deleted value with an IS_UNDEF zval type.

As an example, consider the following code:

$array = [
	'foo' => 0,
	'bar' => 1,
	0     => 2,
	'xyz' => 3,
	2     => 4,
];
unset($array[0]);
unset($array['xyz']);
This will result in the following arData structure:

nTableSize     = 8
nNumOfElements = 3
nNumUsed       = 5

[0]: key="foo", val=int(0)
[1]: key="bar", val=int(1)
[2]: val=UNDEF
[3]: val=UNDEF
[4]: h=2, val=int(4)

As you can see the first five arData elements have been used, but the elements at positions 2 (key 0) and 3 (key 'xyz') have been replaced with an IS_UNDEF tombstone, because they were unset. These elements will just remain wasted memory for now. However, once nNumUsed reaches nTableSize, PHP will try to compact the arData array by dropping any UNDEF entries that have been added along the way. Only if all buckets really contain a value will arData be reallocated to twice its size.

The new way of maintaining array order has several advantages over the doubly linked list that was used in PHP 5.x. One obvious advantage is that we save two pointers per bucket, which corresponds to 8/16 bytes. Additionally it means that iterating an array looks roughly as follows:

uint32_t i;
for (i = 0; i < ht->nNumUsed; ++i) {
	Bucket *b = &ht->arData[i];
	if (Z_ISUNDEF(b->val)) continue;

	// do stuff with bucket
}

This corresponds to a linear scan of memory, which is much more cache-efficient than a linked list traversal (where you go back and forth between relatively random memory addresses).

One problem with the current implementation is that arData never shrinks (unless explicitly told to). So if you create an array with a few million elements and remove them afterwards, the array will still take a lot of memory. We should probably halve the arData size if utilization falls below a certain level.

Hashtable lookup

Until now we have only discussed how PHP arrays represent order. The actual hashtable lookup uses the second arHash array, which consists of uint32_t values. The arHash array has the same size (nTableSize) as arData and both are actually allocated as one chunk of memory.

The hash returned from the hashing function (DJBX33A for string keys) is a 32-bit or 64-bit unsigned integer, which is too large to directly use as an index into the hash array. We first need to adjust it to the table size using a modulus operation. Instead of hash % ht->nTableSize we use hash & (ht->nTableSize - 1), which is the same if the size is a power of two, but doesn’t require expensive integer division. The value ht->nTableSize - 1 is stored in ht->nTableMask.

Next, we look up the index idx = ht->arHash[hash & ht->nTableMask] in the hash array. This index corresponds to the head of the collision resolution list. So ht->arData[idx] is the first entry we have to examine. If the key stored there matches the one we’re looking for, we’re done.

Otherwise we must continue to the next element in the collision resolution list. The index of this element is stored in bucket->val.u2.next, i.e. in the normally unused last four bytes of the zval structure that get a special meaning in this context. We continue traversing this linked list (which uses indexes instead of pointers) until we either find the right bucket or hit an INVALID_IDX - which means that an element with the given key does not exist.

In code, the lookup mechanism looks like this:

zend_ulong h = zend_string_hash_val(key);
uint32_t idx = ht->arHash[h & ht->nTableMask];
while (idx != INVALID_IDX) {
	Bucket *b = &ht->arData[idx];
	if (b->h == h && zend_string_equals(b->key, key)) {
		return b;
	}
	idx = Z_NEXT(b->val); // b->val.u2.next
}
return NULL;

Let’s consider how this approach improves over the previous implementation: In PHP 5.x the collision resolution used a doubly linked pointer list. Using uint32_t indices instead of pointers is better, because they take half the size on 64bit systems. Additionally fitting in 4 bytes means that we can embed the “next” link into the unused zval slot, so we essentially get it for free.

We also use a singly linked list now, there is no “prev” link anymore. The prev link is primarily useful for deleting elements, because you have to adjust the “next” link of the “prev” element when you perform a deletion. However, if the deletion happens by key, you already know the previous element as a result of traversing the collision resolution list.

The few cases where deletion occurs in some other context (e.g. “delete the element the iterator is currently at”) will have to traverse the collision list to find the previous element. But as this is a rather unimportant scenario, we prefer saving memory over saving a list traversal for that case.

Packed hashtables

PHP uses hashtables for all arrays. However in the rather common case of continuous, integer-indexed arrays (i.e. real arrays) the whole hashing thing doesn’t make much sense. This is why PHP 7 introduces the concept of “packed hashtables”.

In packed hashtables the arHash array is NULL and lookups will directly index into arData. If you’re looking for the key 5 then the element will be located at arData[5] or it doesn’t exist at all. There is no need to traverse a collision resolution list.

Note that even for integer indexed arrays PHP has to maintain order. The arrays [0 => 1, 1 => 2] and [1 => 2, 0 => 1] are not the same. The packed hashtable optimization only works if keys are in ascending order. There can be gaps in between them (the keys don’t have to be continuous), but they need to always increase. So if elements are inserted into an array in a “wrong” order (e.g. in reverse) the packed hashtable optimization will not be used.

Note furthermore that packed hashtables still store a lot of useless information. For example we can determine the index of a bucket based on its memory address, so bucket->h is redundant. The value bucket->key will always be NULL, so it’s just wasted memory as well.

We keep these useless values around so that buckets always have the same structure, independently of whether or not packing is used. This means that iteration can always use the same code. However we might switch to a “fully packed” structure in the future, where a pure zval array is used if possible.

Empty hashtables

Empty hashtables get a bit of special treatment in both PHP 5.x and PHP 7. If you create an empty array [], chances are pretty good that you won't actually insert any elements into it. As such, the arData/arHash arrays are only allocated when the first element is inserted into the hashtable.

To avoid checking for this special case in many places, a small trick is used: While the nTableSize is set to either the hinted size or the default value of 8, the nTableMask (which is usually nTableSize - 1) is set to zero. This means that hash & ht->nTableMask will always result in the value zero as well.

So the arHash array for this case only needs to have one element (with index zero) that contains an INVALID_IDX value (this special array is called uninitialized_bucket and is allocated statically). When a lookup is performed, we always find the INVALID_IDX value, which means that the key has not been found (which is exactly what you want for an empty table).
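The trick can be sketched in C as follows (a hypothetical simplified model, not the real zend code): with nTableMask set to zero, every hash maps to slot 0 of a shared one-element arHash, so every lookup misses without an explicit "is the table empty?" branch.

```c
#include <assert.h>
#include <stdint.h>

#define INVALID_IDX UINT32_MAX

/* The statically allocated stand-in for arHash of an uninitialized
 * table: one slot, always INVALID_IDX. */
static const uint32_t uninitialized_bucket[1] = { INVALID_IDX };

typedef struct {
    const uint32_t *arHash;
    uint32_t nTableMask; /* 0 until the first insertion */
} Table;

static int table_contains(const Table *t, uint64_t hash) {
    /* hash & 0 == 0, so an empty table always reads slot 0 */
    uint32_t idx = t->arHash[hash & t->nTableMask];
    return idx != INVALID_IDX; /* collision-chain traversal omitted */
}
```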

Memory utilization

This should cover the most important aspects of the PHP 7 hashtable implementation. First, let's summarize why the new implementation uses less memory. I'll only use the numbers for 64-bit systems here and only look at the per-element size, ignoring the main HashTable structure (which is less significant asymptotically).

In PHP 5.x a whopping 144 bytes per element were required. In PHP 7 the value is down to 36 bytes, or 32 bytes for the packed case. Here’s where the difference comes from:

  • Zvals are not individually allocated, so we save 16 bytes allocation overhead.
  • Buckets are not individually allocated, so we save another 16 bytes of allocation overhead.
  • Zvals are 16 bytes smaller for simple values.
  • Keeping order no longer needs 16 bytes for a doubly linked list, instead the order is implicit.
  • The collision list is now singly linked, which saves 8 bytes. Furthermore it’s now an index list and the index is embedded into the zval, so effectively we save another 8 bytes.
  • As the zval is embedded into the bucket, we no longer need to store a pointer to it. Due to details of the previous implementation we actually save two pointers, so that’s another 16 bytes.
  • The length of the key is no longer stored in the bucket, which is another 8 bytes. However, if the key is actually a string and not an integer, the length still has to be stored in the zend_string structure. The exact memory impact in this case is hard to quantify, because zend_string structures are shared, whereas previously hashtables had to copy the string if it wasn’t interned.
  • The array containing the collision list heads is now index based, so saves 4 bytes per element. For packed arrays it is not necessary at all, in which case we save another 4 bytes.
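The per-element totals can be checked against a simplified model of the structures (the field layout below follows the article's description and is hypothetical; the real zend definitions differ in detail). On a typical 64-bit platform:

```c
#include <assert.h>
#include <stdint.h>

typedef struct {
    uint64_t value;     /* the value itself, or a pointer for strings etc. */
    uint32_t type_info; /* type tag */
    uint32_t next;      /* otherwise-unused slot reused as the collision link */
} zval_model;           /* 16 bytes */

typedef struct {
    zval_model val; /* embedded, not separately allocated */
    uint64_t h;     /* integer key, or cached hash of the string key */
    void *key;      /* zend_string* for string keys, NULL otherwise */
} bucket_model;     /* 32 bytes on 64-bit */
```

In the non-packed case, one uint32_t arHash entry per element is added on top, giving the 36 bytes quoted above.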

However, it should be clearly said that this summary makes things look better than they really are in several respects. First of all, the new hashtable implementation uses a lot more embedded (as opposed to separately allocated) structures. How can this negatively affect things?

If you look at the actual measured numbers at the start of this article, you'll find that on 64-bit PHP 7 an array with 100000 elements took 4.00 MiB of memory. In this case we're dealing with a packed array, so we would expect 32 * 100000 = 3.05 MiB of memory. The reason for the difference is that everything is allocated in powers of two: nTableSize will be 2^17 = 131072 in this case, so 32 * 131072 bytes of memory (which is 4.00 MiB) are allocated.

Of course the previous hashtable implementation also used power-of-two allocations. However, it only allocated the array of bucket pointers this way (where each pointer is 8 bytes); everything else was allocated on demand. So in PHP 7 we lose 32 * 31072 bytes (0.95 MiB) to unused memory, while in PHP 5.x we only waste 8 * 31072 bytes (0.24 MiB).
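The unused-memory figures above can be reproduced with a little arithmetic (a sketch; 32 bytes per packed PHP 7 element and 8 bytes per PHP 5.x bucket pointer, as discussed earlier):

```c
#include <assert.h>
#include <stdint.h>

/* Round a requested size up to the next power of two, as the table
 * allocation does. */
static uint64_t next_pow2(uint64_t n) {
    uint64_t p = 1;
    while (p < n) p <<= 1;
    return p;
}
```

For 100000 elements, next_pow2 gives 131072 = 2^17, so PHP 7 allocates 131072 * 32 bytes (4.00 MiB) of which 31072 * 32 bytes (about 0.95 MiB) are unused, versus 31072 * 8 bytes (about 0.24 MiB) of unused bucket pointers in PHP 5.x.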

Another thing to consider is what happens if not all values stored in the array are distinct. For simplicity, let's assume that all values in the array are identical, so let's replace the range in the starting example with an array_fill:

$startMemory = memory_get_usage();
$array = array_fill(0, 100000, 42);
echo memory_get_usage() - $startMemory, " bytes\n";

This script results in the following numbers:

        |   32 bit |    64 bit
PHP 5.6 | 4.70 MiB |  9.39 MiB
PHP 7.0 | 3.00 MiB |  4.00 MiB

As you can see the memory usage on PHP 7 stays the same as in the range case. There is no reason why it would change, as all zvals are separate. On PHP 5.x on the other hand the memory usage is now significantly lower, because only one zval is used for all values. So while we’re still a good bit better off on PHP 7, the difference is smaller now.

Things become even more complicated once we consider string keys (which may or may not be shared or interned) and complex values. The point is that arrays in PHP 7 take significantly less memory than in PHP 5.x, but the numbers from the introduction are likely too optimistic in many cases.


Performance

I've already talked a lot about memory usage, so let's move to the next point, namely performance. In the end, the goal of the phpng project wasn't to improve memory usage, but to improve performance. The memory improvement is only a means to that end: less memory results in better CPU cache utilization, which results in better performance.

However, there are of course a number of other reasons why the new implementation is faster. First of all, we need fewer allocations: depending on whether or not values are shared, we save two allocations per element. Allocations being rather expensive operations, this is quite significant.

Array iteration in particular is now more cache-friendly, because it’s now a linear memory traversal, instead of a random-access linked list traversal.

There's probably a lot more to be said on the topic of performance, but the main focus of this article was memory usage, so I won't go into further detail here.

Closing thoughts

PHP 7 undoubtedly has made a big step forward as far as the hashtable implementation is concerned. A lot of useless overhead is gone now.

So the question is: where can we go from here? One idea I already mentioned is to use “fully packed” hashes for the case of increasing integer keys. This would mean using a plain zval array, which is the best we can do without starting to specialize uniformly typed arrays.

There are probably other directions one could go as well. For example, switching from collision chaining to open addressing (e.g. using Robin Hood probing) could be better both in terms of memory usage (no collision resolution list) and performance (better cache efficiency, depending on the details of the probing algorithm). However, open addressing is relatively hard to combine with the ordering requirement, so this may not be possible to do in a reasonable way.

Another idea is to combine the h and key fields in the bucket structure. Integer keys only use h and string keys already store the hash in key as well. However this would likely have an adverse impact on performance, because fetching the hash will require an additional memory indirection.

One last thing I wish to mention is that PHP 7 improved not only the internal representation of hashtables, but also the API used to work with them. I regularly had to look up how even simple operations like zend_hash_find were used, especially regarding how many levels of indirection are required (hint: three). In PHP 7 you just write zend_hash_find(ht, key) and get back a zval*. Generally I find that writing extensions for PHP 7 has become quite a bit more pleasant.

Hopefully I was able to provide you with some insight into the internals of PHP 7 hashtables. Maybe I'll write a followup article focusing on zvals. I've already touched on some of the differences in this post, but there's a lot more to be said on the topic.

News stories from Friday 19 December, 2014

Favicon for ircmaxell's blog 20:00 On PHP Version Requirements » Post from ircmaxell's blog Visit off-site link
I learned something rather disturbing yesterday. CodeIgniter 3.0 will support PHP 5.2. To put that in context, there hasn't been a supported or secure version of PHP 5.2 since January, 2011. That's nearly 4 years. To me, that's beyond irresponsible... It's negligent... So I tweeted about it (not mentioning the project to give them the chance to realize what the problem was):

I received a bunch of replies. Many people thought I was talking about WordPress. I wasn't, but the same thing does apply to the project. Most people agreed with me, saying that not targeting 5.4 or higher is bad. But some disagreed. Some disagreed strongly. So, I want to talk about that.
Read more »
Favicon for Grumpy Gamer 00:40 Funded! » Post from Grumpy Gamer Visit off-site link

Thimbleweed Park was funded with all stretch goals met, from translations to iOS and Android versions. We can't even begin to thank everyone for all the support and backing.


You can read the backer update here.

Gary and I are going to take a break during the holidays, then we'll start working full time on Jan 2nd.

There will be a dev blog where we'll talk about the game's development. Our goal is to post at least once a week going over art, puzzles, characters, design and code.

Once everything has cleared, I'm going to do a detailed blog post about the ups, downs and surprises of running a Kickstarter.

News stories from Thursday 18 December, 2014

Favicon for ircmaxell's blog 21:31 Stack Machines: Compilers » Post from ircmaxell's blog Visit off-site link
I have the honor today of writing a guest blog post on Igor Wiedler's Blog about Compilers. If you don't know @igorwhiletrue, he's pretty much the craziest developer that I know. And crazy in that genius sort of way. He's been doing a series of blog posts about Stack Machines and building complex runtimes from simple components. Well, today I authored a guest post on compiling code to run on said runtime. The compiler only took about 100 lines of code!!!

Check it out!


News stories from Sunday 14 December, 2014

Favicon for Grumpy Gamer 19:31 Talkies » Post from Grumpy Gamer Visit off-site link


Really excited we made the Talkies stretch goal. Knowing that an actor is going to read lines you wrote is always exciting.

To answer some questions a few backers (or potential backers) have asked...

Yes, you will be able to turn the talkies off and just read the text.  Yes, you will be able to display the text on screen and listen to the talkies, or not display the text and just listen to the talkies.  And, yes, you will be able to skip each line if you like hearing the voice, but read really fast.  Back in the SCUMM days, the '.' key would end the current line and I plan on implementing that in Thimbleweed Park.  It will cut off the audio, but that's OK because the player is doing it.

Thanks so much for everyone's support and belief in this project. It's going to be a really fun year! Gary and I can't wait to start up the dev blog and start talking about the game.

News stories from Monday 08 December, 2014

Favicon for Grumpy Gamer 23:22 Talkies First » Post from Grumpy Gamer Visit off-site link

We’re going to swap the Talkies™ and the iOS/Android stretch goals and here is our logic…


We've heard from a lot of our backers through the comments, private messages and emails who want full voice in Thimbleweed Park. It might be a vocal minority, but it's a lot more than just a few people. Gary and I also want to do full voice. I love hearing characters come to life through a great actor, it makes the game a lot more accessible, and it's just a lot of fun to do.

The other reason is that distributing mobile versions to backers is way more complicated than PC/Mac/Linux, so we’re stuck in this situation where backers might need to buy the mobile versions and that’s a little awkward. Plus mobile ports are something we can potentially fund later if we don't hit the stretch goal, but voices need to be done as part of the initial development.

So, for these reasons, we’re going to swap the stretch goals to put talkies first and the mobile ports second. Of course we could still make both goals, and I hope we do! But if we don't... well, it feels like our backers would rather have talkies.

We hope this doesn’t create too much confusion. We wanted to give you some insight into our thought process. Gary and I like to think stuff through and not be impulsive. We might be a little slow, but we try to be very steady and reliable and in the end that's why we'll hopefully make a great game that we all love.

This doesn't mean we won't have iOS/Android ports. I do most of my gaming on mobile and they are really important, but it felt like the Talkies™ should be integrated into the main development, plus mobile players will get to enjoy them as well.

If you haven't already, please join us on Kickstarter!

News stories from Saturday 06 December, 2014

Favicon for Grumpy Gamer 19:28 Congratulation to Ken and Roberta! » Post from Grumpy Gamer Visit off-site link

Congratulations to Ken and Roberta on their Industry Icon Award. Well deserved.

Over the years, I’ve given Sierra a lot of crap, but the honest fact is that without King's Quest, there would be no Maniac Mansion or Monkey Island. It really did set the template that we all followed.


I’ve told this story before, but you’re going to listen to it again…

A few months into Maniac Mansion, Gary and I had a bunch of fun ideas, some characters, and a creepy old mansion, but what we didn’t have was a game. There was nothing to hang any of our ideas on top of.

I was feeling a little lost. “There is no game”, I kept saying.

We had our christmas break and I went down to visit my Aunt and Uncle. My eight year old cousin was playing King's Quest I. I’d never seen the game before and I watched him for hours.  Everything Gary and I had been talking about suddenly made sense.  Maniac Mansion should be an adventure game.

Without King's Quest, I don’t know if that leap would have happened. No matter how innovative and new something is, it's always built on something else. Maniac Mansion and Monkey Island are built on King's Quest.

We always had a fun rivalry with Sierra and they always made us try harder and be better.

Thank you Ken and Roberta and everyone else at Sierra.

News stories from Wednesday 03 December, 2014

Favicon for Grumpy Gamer 21:23 Maniac Mansion Used a Joystick » Post from Grumpy Gamer Visit off-site link

The C64 version of Maniac Mansion didn't use a mouse, it used one of these:


A year later we did the IBM PC version and it had keyboard support for moving the cursor because most PCs didn't have a mouse.  Monkey Island also had cursor key support because not everyone had a mouse.

Use the above facts to impress people at cocktail parties.

Favicon for Web Mozarts 17:49 Puli: Powerful Resource Management for PHP » Post from Web Mozarts Visit off-site link

Since the introduction of Composer and the autoloading standards PSR-0 and PSR-4, the PHP community has changed a lot. Not so long ago, it was difficult to integrate third-party code into your application. Now, it has become a matter of running a command on the terminal and including Composer’s autoload file. As a result, developers share and reuse much more code than ever before.

Unfortunately, sharing your work gets a lot harder when you leave PHP code and enter the land of configuration files, images, CSS files, translation catalogs – in short, any file that is not PHP. For brevity, I’ll call these files resources here. Using resources located in Composer packages is quite tedious: You need to know exactly where the package is installed and where the resource is located in the package. That’s a lot of juggling with absolute and relative file system paths, and it is prone to error.

Plugins, Modules, Bundles

To simplify matters, most frameworks implement their own mechanisms on top of Composer packages. Some call them “plugins”, others “modules”, “bundles” or “packages”. They have in common that they follow some sort of predefined directory layout together with a naming convention that lets you refer to resources in the package. In Symfony, for example, you can refer to a Twig template profiler.html.twig located in FancyProfilerBundle like this:


This only works if you use Symfony, of course. If you want to use the FancyProfiler in a different framework, the current best practice is to extract the framework-agnostic PHP code into a separate package (the FancyProfiler “library”) and put everything else into “plugins”, “modules” and “bundles” tied to the chosen framework. This leads to several problems:

  • You need to duplicate many resource files: images, CSS files or translation catalogs hardly depend on one single framework. If you use a widespread templating engine like Twig, then even your templates will be very similar across frameworks.
  • You need to maintain many packages: The core library plus one package per supported framework. That’s a lot of maintenance work.

Wouldn’t it be nice if this could be simplified?


One and a half years ago, I talked about this problem with Beau Simensen and several others at PHP-FIG. I wrote a blog post about The Power of Uniform Resource Location in PHP. Many people joined the discussion, and the understanding of the problem and its solution matured as we spoke.

Today, I am glad to present to you the first (and probably last) alpha version of Puli, a framework-agnostic resource manager for PHP. Puli manages resources in a repository that looks similar to a UNIX file system: You map files and directories to paths in the repository and use the same paths (we’ll call them Puli paths) to find the files again.

The mapping is done in a puli.json file in the root of your project or package:

{
    "resources": {
        "/app": "res"
    }
}

In this example, the Puli path /app is mapped to the directory res in your project. The repository can be dumped as PHP file with the Puli Command-Line Interface (CLI):

$ puli dump

Use the repository returned from the generated file to access your resources:

$repo = require __DIR__.'/.puli/resource-repository.php';
// res/views/index.html.twig
echo $repo->get('/app/views/index.html.twig')->getContents();

Composer Integration

That alone is nice, but not highly useful. However, Puli supports a Composer plugin that loads the puli.json files of all loaded Composer packages. Let’s take the puli.json in the fictional “webmozart/fancy-profiler” package again for example:

{
    "resources": {
        "/webmozart/fancy-profiler": "res"
    }
}

By convention, Puli paths in reusable Composer packages use the vendor and package names as top-level directories. This way it is easy to know where a Puli path belongs. Let’s dump the repository again and list the contained files:

$ puli dump
$ puli list -r /webmozart/fancy-profiler

Both in the application and the profiler package, we can access the package’s resources through the repository:

// fancy-profiler/res/views/index.html.twig
echo $repo->get('/webmozart/fancy-profiler/views/index.html.twig')->getContents();

Tool Integration

I think this is quite exciting already, but it gets better once you integrate Puli with your favorite framework or tool. There already is a working Twig Extension which supports Puli paths in Twig templates:

{% extends '/app/views/layout.html.twig' %}
{% block content %}
    {# ... #}
{% endblock %}

You can also use relative Puli paths:

{% extends '../layout.html.twig' %}

The Symfony Bridge integrates Puli into the Symfony Config component. With that, you can reference configuration files by their Puli paths:

# routing_dev.yml
_wdt:
    resource: /symfony/web-profiler-bundle/config/routing/wdt.xml
    prefix:   /_wdt

The Symfony Bundle adds Puli support to a Symfony full-stack project. You can also start a new Symfony 2.5 project from the Symfony Puli Edition, if you like. An Assetic Extension is work-in-progress.

I focused on supporting the Symfony ecosystem for now because that is the one I know best, but Puli can, should and hopefully will be integrated into many more frameworks and tools. The Puli repository can be integrated into your favorite IDE so that you can browse and modify the repository without ever leaving your editor. There are countless possibilities.

Getting Started

Download the Puli Alpha version with Composer:

$ composer require puli/puli:~1.0

Make sure you set the “minimum-stability” option in your composer.json properly before running that command:

{
    "minimum-stability": "alpha"
}

Beware that this is an alpha version, so some things may not work or may change before the final release. Please do not use Puli in production.

Due to the limited scope of this post, I just scratched the top of Puli’s functionality here. Read Puli at a Glance to learn everything about what you can do with Puli. Read the very extensive documentation to learn how to use Puli. Head over to the issue tracker if you find bugs.

And of course, please leave a comment here :) I think Puli will significantly change the way we use and share packages. What do you think?

Favicon for ircmaxell's blog 16:00 What About Garbage? » Post from ircmaxell's blog Visit off-site link
If you've been following the news, you'll have noticed that yesterday Composer got a bit of a speed boost. And by "bit of a speed boost", we're talking between 50% and 90% reduction in runtime depending on the complexity of the dependencies. But how did the fix work? And should you make the same sort of change to your projects? For those of you who want the TL/DR answer: the answer is no you shouldn't.

Read more »

News stories from Tuesday 02 December, 2014

Favicon for ircmaxell's blog 16:00 A Point On MVC And Architecture » Post from ircmaxell's blog Visit off-site link
Last week I published a post called Alternatives To MVC. In it, I described some alternatives to MVC and why they all suck as application architectures (or more specifically, are not application architectures). I left a pretty big teaser at the end towards a next post. Well, I'm still working on it. It's a lot bigger job than I realized. But I did want to make a comment on a comment that was left on the last post.
Read more »

News stories from Sunday 30 November, 2014

Favicon for Grumpy Gamer 22:21 Translations Achieved! » Post from Grumpy Gamer Visit off-site link


News stories from Friday 28 November, 2014

Favicon for ircmaxell's blog 16:00 It's All About Time » Post from ircmaxell's blog Visit off-site link
An interesting pull request has been opened against PHP to make bin2hex() constant time. This has lead to some interesting discussion on the mailing list (which even got me to reply :-X). There has been pretty good coverage over remote timing attacks in PHP, but they talk about string comparison. I'd like to talk about other types of timing attacks.

Read more »

News stories from Monday 24 November, 2014

Favicon for ircmaxell's blog 19:00 Alternatives To MVC » Post from ircmaxell's blog Visit off-site link
Last week, I wrote A Beginner's Guide To MVC For The Web. In it, I described some of the problems with both the MVC pattern and the conceptual "MVC" that frameworks use. But what I didn't do is describe better ways. I didn't describe any of the alternatives. So let's do that. Let's talk about some of the alternatives to MVC...

Read more »

News stories from Saturday 22 November, 2014

Favicon for Grumpy Gamer 23:26 Stretch Goals » Post from Grumpy Gamer Visit off-site link

We just announced stretch goals for Thimbleweed Park.

"What the hell is Thimbleweed Park?", I can hear you asking.

It's a Kickstarter for Gary Winnick and my all new classic point & click adventure game.

Now I hear you saying "What the hell are stretch goals?"

Look, there is way too much to explain, just roll with it and go back Thimbleweed Park.


News stories from Friday 21 November, 2014

Favicon for ircmaxell's blog 18:30 A Beginner's Guide To MVC For The Web » Post from ircmaxell's blog Visit off-site link
There are a bunch of guides out there that claim to be a guide to MVC. It's almost like writing your own framework in that it's "one of those things" that everyone does. I realized that I never wrote my "beginners guide to MVC". So I've decided to do exactly that. Here's my "beginners guide to MVC for the web":

Read more »

News stories from Tuesday 18 November, 2014

Favicon for Grumpy Gamer 16:24 Please Join Us On Kickstarter » Post from Grumpy Gamer Visit off-site link

I'm going to keep this short.

Several months ago, Gary Winnick and I were sitting around talking about Maniac Mansion, old-school point & click adventures, how much fun we had making them and how amazing it was to be at Lucasfilm Games during that era.  We chatted about the charm, simplicity and innocence of the classic graphic adventure games.

We had to call them "Graphic Adventures" because text adventures were still extremely popular. It was a time of innovation and taking risks.

"Wouldn't it be fun to make one of those again?", Gary said.

"Yeah", I replied as a small tear formed in the corner of my eye*.

A few seconds later I said "Let's do a Kickstarter!".

After a long pause, Gary said  "OK".

We immediately started building the world and the story, layering in the backbone puzzles and forming characters around them.  From the beginning, we knew we wanted to make something that was a satire of Twin Peaks, X-Files and True Detective.  It was ripe with flavor and plenty of things to poke fun at.

So we're doing a Kickstarter for an all new classic point & click adventure game called "Thimbleweed Park". It will be like opening a dusty old desk drawer and finding an undiscovered Lucasfilm graphic adventure game you’ve never played before. Good times for all.

Please join us on Kickstarter!


* The small tear in Ron's eye was added by the author for dramatic effect. No tear actually formed.

News stories from Sunday 16 November, 2014

News stories from Saturday 15 November, 2014

News stories from Friday 14 November, 2014

News stories from Thursday 13 November, 2014

News stories from Wednesday 12 November, 2014

Favicon for Fabien Potencier 00:00 PHP CS Fixer finally reaches version 1.0 » Post from Fabien Potencier Visit off-site link

A few years ago, I wrote a small script to automatically fix some common coding standard mistakes people made in Symfony pull requests. It was after I got bored with all the comments people made on pull requests asking contributors to fix their coding standards. As humans, we have much better things to do! The tool helped me fix the coding standard issues after merging pull requests and keep the whole code base sane. It was a manual process I did on a regular basis but it did the job.

After a while, I decided to Open-Source the tool, like I do with almost all the code I write. I was aware of the limitations of the tool, the code was very rudimentary, but as Reid Hoffman once said: "If you are not embarrassed by the first version of your product, you've launched too late." To my surprise, people started to use it on their own code, found bugs, found edge cases, added more fixers, and soon enough, we all realised that using regular expressions for such things is doomed to fail.

Using the PHP tokens to fix coding standards is of course a much better approach, but every time I sat down to rewrite the tool, I got distracted by something that was more pressing. So, the tool stagnated for a while. The only real progress for Symfony was the introduction of which alerts contributors of coding standard issues before I merge the code.

The current stable version of PHP-CS-Fixer was released in August 2014 and it is still based on regular expressions, two years after the first public release. But in the last three months, things got crazy mainly because of Dariusz Ruminski. He did a great job at rewriting everything on top of a parser based on the PHP tokens, helped by 21 other contributors. After 13,000 additions and 5,000 deletions, I'm very proud to announce version 1.0 of PHP-CS-Fixer; it is smarter, it is more robust, and it has more fixers. Any downsides? Yes, speed; the tool is much slower, but it is worth it and enabling the new cache layer helps a lot.

As I learned today on Twitter, a lot of people rely on the PHP CS Fixer on a day to day basis to keep their code clean, and that makes me super happy. You can use the fixer from some IDEs like PhpStorm, NetBeans, or Sublime. You can install it via Composer, a phar, homebrew, or even Grunt. And there is even a Docker image for it!

News stories from Thursday 06 November, 2014

Favicon for Fabien Potencier 00:00 About Personal Github Accounts » Post from Fabien Potencier Visit off-site link

Many of you have a user account on Github. But what are you using it for? As far as Open-Source is concerned, I'm using mine for two different usages:

  • as a way to contribute to other projects by forking repositories and making pull-requests;

  • as a way to host some of my Open-Source projects.

But the more I think about it, the more I think the second usage is most of the time wrong. If you are publishing a small snippet of code, a small demo, the code for a tutorial you wrote on your blog, that makes a lot of sense. But when it comes to useful and/or popular Open-Source projects, I think that is a mistake.

An Open-Source project should not be tied too much to its creator; the creator just happens to be the first contributor. And for many projects, it will stay that way for a very long time, which is fine. But gradually, as more people contribute, it can confuse some users. The license you choose helps a lot, and the way you respond to pull requests and issues is also a great way to show your openness. But that's not enough in the long term. Of course, understanding when it becomes a problem is up to you and definitely not easy. Here are some of my thoughts about some problems I identified in the past.

First, it makes the original developer special and not aligned with how others contribute; you cannot, for instance, fork the project to make a pull request (with a not-so-nice side effect of Packagist publishing your branches, which is obviously wrong).

Then, bringing awareness through a well established organization is probably easier than promoting yourself; it makes your project more easily discoverable.

Also, what if someone starts to contribute more than you? What if you are not interested in maintaining the project anymore? Github makes it very easy to transfer a project to another person, but organizations are almost always a better way in that case.

And I'm not even talking about the bus factor.

As you might have guessed by now, Github organizations are the solution. An organization fixes all the problems and then some; and creating one is very easy. Again, that only makes sense when your project is somewhat successful, and it is probably even more interesting if you have more than one such project.

A while ago, I decided to do that for Silex and I moved it to its own organization. And I did the same for Twig recently for the same reasons. For those projects, it made sense to create a dedicated organization because there is more than one repository; we moved along some related repositories (like the Silex skeleton or the Twig extensions).

Organizations are also a great way to create a group of people working on related topics (like FriendsOfSymfony) or people working with the same standards (The League of Extraordinary Packages).

Last year, I co-created such an organization: FriendsOfPhp. A couple of weeks ago, I moved the PHP security advisories database from the sensiolabs organization to the FriendsOfPhp one and I explained my motivations in a blog post.

Today, I'm doing the same with several of my projects that were previously part of my personal Github account. I have not created an organization per project because they are either too small or they don't need more than one repository; so they would not benefit from a standalone organization.

  • Sismo: A Continuous Testing Server
  • Sami: An API documentation generator
  • PHP-CS-Fixer: A script that fixes Coding Standards
  • Goutte: A simple PHP Web Scraper

If you cloned one of these repositories in the past, you can easily switch to the new Git URL via the following command:

$ git remote set-url origin
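As a concrete sketch (Goutte's move to FriendsOfPHP is used as the example; the exact URLs are illustrative), switching an existing clone over looks like this:

```shell
set -e
# Throwaway repository to demonstrate the remote switch.
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
# A clone made before the move points at the personal account:
git remote add origin https://github.com/fabpot/Goutte.git
# Point the clone at the project's new organization home:
git remote set-url origin https://github.com/FriendsOfPHP/Goutte.git
git remote get-url origin
```

`git remote -v` shows both the fetch and push URLs if you want to double-check.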

News stories from Friday 31 October, 2014

Favicon for ircmaxell's blog 17:00 A Lesson In Security » Post from ircmaxell's blog Visit off-site link
Recently, a severe SQL Injection vulnerability was found in Drupal 7. It was fixed immediately (and correctly), but there was a problem. Attackers made automated scripts to attack unpatched sites. Within hours of the release of the vulnerability fix, sites were being compromised. And when I say compromised, I'm talking remote code execution, backdoors, the lot. Why? Like any attack, it's a chain of issues, that independently aren't as bad, but add up to bad news. Let's talk about them: What went wrong? What went right? And what could have happened better? There's a lesson that every developer needs to learn in here.

Read more »

News stories from Wednesday 29 October, 2014

Favicon for ircmaxell's blog 17:00 Foundations Of OO Design » Post from ircmaxell's blog Visit off-site link
It's quite easy to mix up terminology and talk about making "easy" systems and "simple" ones. But in reality, they are completely different measures, and how we design and architect systems will depend strongly on our goals. By differentiating Simple from Easy, Complex from Hard, we can start to talk about the tradeoffs that designs can give us. And we can then start making better designs.

Read more »

News stories from Monday 27 October, 2014

Favicon for ircmaxell's blog 17:00 You're Doing Agile Wrong » Post from ircmaxell's blog Visit off-site link
To some of you, this may not be new. But to many of the people preaching "Agile Software Development", Agile is not what you think it is. Let me say that again, because it's important: You're Doing Agile Wrong.

Read more »

News stories from Sunday 26 October, 2014

Favicon for Devexp 01:00 Use unsupported Jenkins plugins with Jenkins DSL » Post from Devexp Visit off-site link

In a previous post I wrote about how to Automate Jenkins with the use of the plugin Job DSL Plugin. If you didn’t read it, I highly suggest you do that as it will help you understand better what I’ll be explaining here.

When you start using the Job DSL Plugin you’ll probably sooner or later need to configure your job with a plugin that is not yet supported. And by “not yet supported” I mean that there aren’t (yet) DSL commands that will generate a job for that specific plugin. Fortunately they provide you with a way to add them ‘manually’ through the Configure Block.

This part is a bit more complex than using simply the DSL commands, because you’ll have to understand how it works. Now you did notice I wrote “a bit” … that’s because it seems complex, but in fact it isn’t. The only thing you need to know is that the plugin will, with the DSL commands, generate the config.xml of your job containing the full configuration of the job.

To have an idea, this is the config.xml of an empty job

<?xml version='1.0' encoding='UTF-8'?>
<project>
  <!-- ... -->
  <properties />
  <scm class="hudson.scm.NullSCM"/>
  <!-- ... -->
</project>

Let’s see an example of a basic DSL command and the corresponding config.xml.

job {
    name 'Test Job'
    description 'A Test Job'
}

which generates this fragment in the config.xml:

<?xml version='1.0' encoding='UTF-8'?>
<project>
  <!-- ... -->
  <description>A Test Job</description>
  <!-- ... -->
</project>

So you see that every DSL command will generate some part in the config.xml.

Knowing this, you'll understand that we will have to study the config.xml of an existing job to see how the “unsupported” plugin is configured there.

Let’s make it a bit more fun by integrating the HipChat Plugin. I created a simple job in jenkins and opened the config.xml file. (I assume you know how to install and configure the plugin in Jenkins)

<?xml version='1.0' encoding='UTF-8'?>
<project>
  <properties>
    <jenkins.plugins.hipchat.HipChatNotifier_-HipChatJobProperty plugin="hipchat@0.1.4">
      <!-- ... -->
    </jenkins.plugins.hipchat.HipChatNotifier_-HipChatJobProperty>
  </properties>
  <scm class="hudson.scm.NullSCM"/>
  <!-- ... -->
  <publishers>
    <jenkins.plugins.hipchat.HipChatNotifier plugin="hipchat@0.1.4">
      <!-- ... -->
    </jenkins.plugins.hipchat.HipChatNotifier>
  </publishers>
</project>
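When studying a job like this, it helps to slice just the relevant nodes out of its config.xml with plain shell tools. A minimal sketch (the file contents below are a stand-in; a real config.xml lives under $JENKINS_HOME/jobs/<name>/ or is served at the job's /config.xml HTTP endpoint):

```shell
set -e
# Stand-in for a real job's config.xml, reduced to the part we care about.
cat > config.xml <<'EOF'
<?xml version='1.0' encoding='UTF-8'?>
<project>
  <publishers>
    <jenkins.plugins.hipchat.HipChatNotifier plugin="hipchat@0.1.4">
      <room>76124</room>
    </jenkins.plugins.hipchat.HipChatNotifier>
  </publishers>
</project>
EOF
# Extract only the <publishers> section to see what the DSL must emit:
sed -n '/<publishers>/,/<\/publishers>/p' config.xml
```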

The values in the publisher section are being copied from the jenkins administration. That’s a bit annoying because it means you’ll have to expose that in the DSL scripting. At the moment of this writing, I didn’t find a way to configure that as variables.

Looking at the config.xml, we see that 2 nodes were modified, the properties and the publishers node. Both are children from the root project node. With the Configure Block we can obtain the XML Node to manipulate the DOM.

Get hold of the project node:

job {
  configure { project ->
    // project represents the node <project>
  }
}

Now that we can manipulate the project node, let’s add the properties node:

job {
  configure { project ->
    project / 'properties' << 'jenkins.plugins.hipchat.HipChatNotifier_-HipChatJobProperty' {
      room ''
      startNotification false
    }
  }
}

What we did here is tell the parser to append (<<) the block 'jenkins.plugins.hipchat.HipChatNotifier_-HipChatJobProperty' to the node project/properties. And finally in the block we simply enumerate the parameters as key[space]value as you can see them in the config.xml.

Hint 1: Do not specify the plugin version plugin=”hipchat@0.1.4″ otherwise it doesn’t work.
Hint 2: I append the properties (and below the publishers), because there will/can be others configured through other DSL blocks.

Let’s do the same now for the publishers part:

job {
  configure { project ->
    project / 'publishers' << 'jenkins.plugins.hipchat.HipChatNotifier' {
      jenkinsUrl 'http://jenkins/'
      room '76124'
    }
  }
}
As with the properties, we tell the parser to append (<<) 'jenkins.plugins.hipchat.HipChatNotifier' (without the plugin version) and enumerate the parameters.

Following is the full DSL for adding HipChat Plugin support:

job {
  name "Job with HipChat"
  configure { project ->
    project / 'properties' << 'jenkins.plugins.hipchat.HipChatNotifier_-HipChatJobProperty' {
      room ''
      startNotification false
    }

    project / 'publishers' << 'jenkins.plugins.hipchat.HipChatNotifier' {
      jenkinsUrl 'http://jenkins/'
      room '76124'
    }
  }
}

Once you grasp the Configure block, you’ll be able to generate any job you want. The example below uses the configure block to add a missing functionality in an existing predefined GIT DSL:

job {
  scm {
    git {
      remote {
        // ...
      }
      configure { node -> // the GitSCM node is passed in
        // Add the CleanBeforeCheckout functionality
        node / 'extensions' << 'hudson.plugins.git.extensions.impl.CleanBeforeCheckout' {
        }
        // Add the BitbucketWeb browser
        node / browser(class: 'hudson.plugins.git.browser.BitbucketWeb') {
          url ''
        }
      }
    }
  }
}
A handy tool to play with (or test) the generation of your DSL is the online Job DSL playground. It will prevent you from constantly running your DSL and opening the config.xml from your Jenkins to see if the XML is generated correctly!

Although the Configure block is really awesome, it doesn't beat the predefined DSL commands, so if you have the time I suggest contributing to the project by making it a predefined DSL :)

If you have some other great Configure Block example, share them in the comments :)

News stories from Saturday 25 October, 2014

Favicon for Fabien Potencier 23:00 The PHP Security Advisories Database » Post from Fabien Potencier Visit off-site link

A year and a half ago, I was very proud to announce a new initiative to create a database of known security vulnerabilities for projects using Composer. It has been a great success so far; many people extended the database with their own advisories. As of today, we have vulnerabilities for Doctrine, DomPdf, Laravel, SabreDav, Swiftmailer, Twig, Yii, Zend Framework, and of course Symfony (we also have entries for some Symfony bundles like UserBundle, RestBundle, and JsTranslationBundle.)

The security checker is now included by default in all new Symfony projects via sensiolabs/SensioDistributionBundle; checking vulnerabilities is as easy as it can get:

$ ./app/console security:check

If you are not using Symfony, you can easily use the web interface, the command line tool, or the HTTP API. And of course, you are free to build your own tool, based on the advisories stored in the "database".
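For reference, each advisory in the database is a small YAML file, one per vulnerability. The sketch below mirrors that general shape, but every value in it is invented for illustration; the point is that a checker only has to match installed versions against the declared ranges:

```shell
set -e
# Hypothetical advisory file; field names follow the database's general
# shape, but the package, link, CVE, and versions are all made up.
cat > example-advisory.yaml <<'EOF'
title: Example vulnerability in acme/library
link: https://example.org/advisory
cve: CVE-XXXX-YYYY
branches:
    "1.0.x":
        time: 2014-10-01 12:00:00
        versions: ['>=1.0.0', '<1.0.5']
EOF
# A checker compares the installed version from composer.lock
# against these ranges:
grep "versions" example-advisory.yaml
```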

Today, I've decided to go one step further and to clarify my intent with this database: I don't want the database to be controlled by me or SensioLabs, I want to help people find libraries they must upgrade now. That's the reason why I've added a LICENSE for the database, which places it in the public domain.

Also, even if I've been managing this database since the beginning with only good intentions, it is important that the data are not controlled by just one person. We need one centralized repository for all PHP libraries, but a distributed responsibility. As this repository is a good starting point, I've decided to move the repository from the SensioLabs organization to the FriendsOfPHP organization.

I hope that these changes will help the broader PHP community. So, who wants to help?

News stories from Friday 24 October, 2014

Favicon for ircmaxell's blog 17:00 What's In A Type » Post from ircmaxell's blog Visit off-site link
There has been a lot of talk about typing in PHP lately. There are a couple of popular proposals for how to clean up PHP's APIs to be simpler. Most of them involve changing PHP's type system at a very fundamental level. So I thought it would be a good idea to talk about that. What goes into a type?

Read more »
Favicon for Web Mozarts 10:48 Defining PHP Annotations in XML » Post from Web Mozarts Visit off-site link

Annotations have become a popular mechanism in PHP to add metadata to your source code in a simple fashion. Their benefits are clear: They are easy to write and simple to understand. Editors offer increasing support for auto-completing and auto-importing annotations. But there are also various counter-arguments: Annotations are written in documentation blocks, which may be removed from packaged code. Also, they are coupled to the source code. Whenever an annotation is changed, the project needs to be rebuilt. This is desirable in some, but not in other cases.

For these reasons, Symfony always committed to supporting annotations, XML and YAML at the same time – and with the same capabilities – to let our users choose whichever format is appropriate to configure the metadata of their projects. But could we take this one step further? Could we build, for example, XML support directly into the Doctrine annotation library?

Let’s start with a simple example of an annotated class:

namespace Acme\CRM;

use Doctrine\ORM\Mapping\Column;
use Doctrine\ORM\Mapping\Entity;
use Symfony\Component\Validator\Constraints\Length;
use Symfony\Component\Validator\Constraints\NotNull;

/**
 * @Entity
 */
class Address
{
    /**
     * @Column
     * @NotNull
     * @Length(min=3)
     */
    private $street;

    /**
     * @Column(name="zip-code")
     * @NotNull
     */
    private $zipCode;
}

Right now, if toolkits (such as Doctrine ORM or Symfony Validation) want to support annotations and XML schemas, they have to write separate parsers that duplicate a lot of common code. Wouldn’t it be nice if they could use a generic parser instead?

Let’s try to map the annotations to a generic XML file:

<?xml version="1.0" encoding="UTF-8"?>
<class-mapping xmlns=""
               xmlns:orm=""
               xmlns:val=""
               xmlns:prop="">
    <class name="Acme\CRM\Address">
        <orm:entity />

        <property name="street">
            <orm:column />
            <val:not-null />
            <val:length min="3" />
        </property>

        <property name="zipCode">
            <orm:column name="zip-code" />
            <val:not-null />
        </property>

        <method name="activate">
            <prop:setter name="active" />
        </method>
    </class>
</class-mapping>
As you can see, this is more or less an abstraction of Doctrine’s XML Mapping. The base set of elements – <class-mapping>, <class>, <property> and <method> – is provided by the “” namespace and processed by AnnotationReader. The other namespaces are user-defined and processed by custom tag parsers. These turn tags into annotations for the currently processed element. Let’s load the annotations:

// analogous to the existing AnnotationRegistry::registerAutoloadNamespace()
AnnotationRegistry::registerXmlNamespace('', function () {
    return new OrmTagParser();
});

// ...

$reader = new AnnotationReader();

// Inspects doc blocks and registered XML files
$annotations = $reader->getClassAnnotations(new \ReflectionClass('Acme\CRM\Address'));
// => array(object(Doctrine\ORM\Mapping\Entity))

Due to XML’s namespaces it’s possible to combine all the mappings in one file or spread them across multiple files, if desired. So, one file could contain the ORM mapping only:

<!-- ORM mapping -->
<?xml version="1.0" encoding="UTF-8"?>
<map:class-mapping xmlns=""
                   xmlns:map="">
    <map:class name="Acme\CRM\Address">
        <entity />

        <map:property name="street">
            <column />
        </map:property>

        <map:property name="zipCode">
            <column name="zip-code" />
        </map:property>
    </map:class>
</map:class-mapping>

And another one the validation constraint mapping:

<!-- Constraint mapping -->
<?xml version="1.0" encoding="UTF-8"?>
<map:class-mapping xmlns=""
                   xmlns:map="">
    <map:use class="Acme\CRM\Validation\ZipCode" />

    <map:class name="Acme\CRM\Address">
        <map:property name="street">
            <not-null />
            <length min="3" />
        </map:property>

        <map:property name="zipCode">
            <not-null />
            <map:annotation class="ZipCode">
                <map:parameter name="strict">true</map:parameter>
            </map:annotation>
        </map:property>
    </map:class>
</map:class-mapping>

The disadvantage is that custom tag parsers (such as OrmTagParser above) need to be registered before loading annotations. The last example, however, shows a generic (although verbose) way of using custom annotations without writing a custom XML schema and parser.

The advantages are clear: The mapping files are very concise, can be validated against their XML schemas and can be separated from the PHP code. If you want to use annotations, but your users demand support for XML, it’s very easy to write an XML schema and a tag parser for your annotations and plug it in. And at last, the class metadata configuration of different toolkits (Symfony and Doctrine in the above example) can be combined in just one file for small projects.

The above concept certainly has room for improvement: As it is right now, all XML files need to be located and parsed even when the annotations of just one class are loaded. Then again, I think that annotations shouldn’t be parsed on every request anyway. If a toolkit parses annotations with the annotation reader, it should, in my opinion, cache the result somewhere or generate optimized PHP code to speed up subsequent page loads.

It would also be nice to provide a similar, unified annotation definition language for the YAML format. Since YAML doesn’t natively support namespaces – as XML does – this is a bit more tricky.

What do you think? Are you interested in using or implementing such a feature?

News stories from Wednesday 22 October, 2014

Favicon for ircmaxell's blog 17:00 When Rocks Falter » Post from ircmaxell's blog Visit off-site link
I've never been a rock. I'm about as passionate as someone can be when I choose to do something. Unfortunately that means I tend to throw myself (my raw unadulterated self) at my interests. It's just who I am and who I've always been. This has positives and negatives associated with it (especially from a personal perspective).

Throwing yourself at a passion has enormous benefits. You get a lot done, you can truly touch people's lives. You can really change the world. But you also take on a lot of risk. Putting yourself out there is the easiest way to get burned. When you're passionate, it's hard to not take things emotionally. It's hard to not care. After all, caring is where you draw your power from.

I have always been held up by those that I knew were rocks. I always leaned on people who I know weren't just abiding a flight-of-fancy, but who could wear the tide. But what happens when you start to see those who you thought were rocks, falter...?

Read more »

News stories from Monday 20 October, 2014

Favicon for ircmaxell's blog 17:00 Educate, Don't Mediate » Post from ircmaxell's blog Visit off-site link
Recently, there has been a spout of attention about how to deal with eval(base64_decode("blah")); style attacks. A number of posts about "The Dreaded eval(base64_decode()) - And how to protect your site and visitors" have appeared lately. They have been suggesting how to mitigate the attacks. This is downright bad.
Read more »

News stories from Saturday 18 October, 2014

Favicon for Grumpy Gamer 17:59 Blah Blah Blah » Post from Grumpy Gamer Visit off-site link


Blah Blah Blah. Blah Blah Blah Blah Blah Blah Blah Blah Blah.

Blah Blah Blah Blah,  Blah Blah Blah,  Blah Blah Blah Blah.  Blah Blah Blah Blah Blah Blah Blah.  Blah Blah Blah Blah,  Blah Blah Blah Blah Blah Blah Blah Blah Blah.  Blah Blah!!!

Blah,  Blah Blah Blah Blah,  Blah Blah Blah Blah Blah.  Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah,  Blah Blah Blah Blah Blah Blah Blah?  Blah Blah Blah Blah Blah Blah. Blah Blah Blah Blah Blah Blah Blah.

Blah Blah Blah Blah Blah Blah Blah Blah Blah, Blah Blah Blah Blah, Blah Blah Blah Blah Blah Blah Blah Blah. Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah. Blah Blah Blah Blah Blah Blah, Blah Blah Blah Blah Blah, Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah. Blah Blah Blah Blah?

Blah Blah Blah Blah Blah Blah. Blah Blah Blah!


News stories from Friday 17 October, 2014

Favicon for ircmaxell's blog 12:00 A Followup To An Open Letter To PHP-FIG » Post from ircmaxell's blog Visit off-site link
A few days ago, I wrote An Open Letter to PHP-FIG. Largely the feedback on it was positive, but not all. So I feel like I do have a few more things to say.

What follows is a collection of followups to specific points of contention raised about my post. I'm going to ignore the politics and any non-technical discussion here.

Read more »

News stories from Wednesday 15 October, 2014

Favicon for ircmaxell's blog 12:00 An Open Letter To PHP-FIG » Post from ircmaxell's blog Visit off-site link

Please stop trying to solve generic problems. Solve the 50% problem, not the 99% problem.




Read more »

News stories from Monday 13 October, 2014

Favicon for ircmaxell's blog 17:00 FUD and Flames And Trolls, Oh My! » Post from ircmaxell's blog Visit off-site link
Last weekend I gave the opening keynote at PHPNW14. The talk was recorded, and no, the video isn't online yet. The basis of the talk was centered around community and how we can come together (and how we are drifting apart). But there was one point that I mentioned that I think requires further thought and discussion. And that point is that there is far less trolling going on than it may seem at first glance.
Read more »

News stories from Tuesday 23 September, 2014

Favicon for Devexp 01:00 Automate Jenkins » Post from Devexp Visit off-site link


Jenkins is a powerful continuous integration server which has been around for some time now. I’ve been personally using it for years and it never let me down.

However, there will come a time where adding/updating/removing jobs will have an impact on your internal processes. Take for example feature branches. Logically you will (and should) test them, so you will start making new jobs for each feature, and once they are done, you will remove them. Sure, the duplicate feature helps a lot, but it's yet another (manual) thing on the todo list of a developer. Or it could even be that you don't allow (junior) developers to administrate Jenkins, thus making it the job of the “jenkins manager”.

If you are facing such a situation then you will embrace the Job DSL Plugin.

The Job DSL Plugin allows you to generate jobs through some simple Groovy DSL scripting. I’m not going to explain here how everything works, because the wiki of the plugin does a very good job at that, but instead you’ll find some DSL scripts which I’m currently using on a project. I do however suggest reading the wiki first in order to fully grasp the meaning of the following examples.

Generate jobs based on subversion branches

The following DSL will create a job for each branch it finds on a subversion repository.

svnCommand = "svn list --xml svn://url_path/branches"
def proc = svnCommand.execute()
def xmlOutput = proc.text // read the command's output

def lists = new XmlSlurper().parseText(xmlOutput)

// `svn list --xml` produces a <lists><list><entry><name> structure
def listOfBranches = lists.list.entry.name

println "Start making jobs ..."

// iterate through branches
listOfBranches.each {
  def branchName = it.text()

  println "- found branch '${branchName}'"

  job {
    name "${branchName}"
    scm {
      svn("svn://url_path/branches/${branchName}")
    }
    triggers {
      scm('H/5 * * * *')
    }
    steps {
      maven("-U clean verify", "pom.xml")
    }
  }
}
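Independent of Jenkins, the branch-discovery step can be checked on its own. A sketch that parses the same `svn list --xml` structure with plain shell tools (the sample XML and branch names stand in for a live repository):

```shell
set -e
# Sample of what `svn list --xml svn://url_path/branches` prints;
# the branch names are made up.
cat > branches.xml <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<lists>
  <list path="svn://url_path/branches">
    <entry kind="dir"><name>feature-login</name></entry>
    <entry kind="dir"><name>feature-search</name></entry>
  </list>
</lists>
EOF
# One branch per line, mirroring what the Groovy loop iterates over:
grep -o '<name>[^<]*</name>' branches.xml | sed -e 's/<name>//' -e 's/<\/name>//'
```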

Generate jobs based on a static list

If you have several libraries which need to be configured exactly in the same way, you could also make use of a static list.

def systems = ["linux", "windows", "osx"]

// Configure the jobs for each system
systems.each() {

  def system = it

  job(type: Maven) {
    name "${system}"
    scm {
      git {
        remote {
          // ...
        }
      }
    }
    goals("-U clean deploy")
  }
}

As you can see you can achieve some very powerful automations with the Job DSL Plugin.

They already support a lot of plugins but in case the one you use is not (yet) supported it is always possible to configure it through Configure Blocks. I had to do that for the HipChat Plugin which I will explain in detail in a following blog post.

Hope this convinced you to stop creating, editing and removing jobs manually and start doing all that automatically 😉

News stories from Wednesday 27 August, 2014

Favicon for Grumpy Gamer 15:57 My Understanding Of Charts » Post from Grumpy Gamer Visit off-site link


News stories from Tuesday 26 August, 2014

Favicon for #openttdcoop 11:01 Server Changes » Post from #openttdcoop Visit off-site link

As one of the sysadmins of #openttdcoop, a lot of work happens for me in the background. Most changes go unnoticed, some cause minor breakdowns (sorry ;)) but a lot of changes you don't see. The changes that mostly did go unnoticed were changes to our mail infrastructure, database updates, and backup procedures. And that's just a few.

Today one of the changes that you will see is a change to our paste service. The paste service as it currently is has changed. We have switched to a new backend, which was needed. The old pastes are NOT deleted and can still be reached at the old address. However, do keep in mind that it will go offline at some point and we strongly advise against creating new pastes there.

Our new backend is now live. We are using sticky-notes as a backend. This gives you more privacy and options compared to the old paste service. We do hope the new features help everyone out, even us as admins in maintaining it all.

Another change that already is active (and you might not always notice) is a replacement we did for our bundles server. This had to happen at some point. And today it is done. This change won’t have much of an impact. But we hope to improve response times with this new server.

These are just a few of the changes you’re going to see. More will follow at some point but this is just a start 😉

Should you have any questions, join in on IRC (#openttdcoop @ OFTC).

News stories from Sunday 10 August, 2014

Favicon for Grumpy Gamer 18:03 Puzzle Dependency Charts » Post from Grumpy Gamer Visit off-site link

In part 1 of 1 in my series of articles on game design, let's delve into one of the (if not THE) most useful tools for designing adventure games: The Puzzle Dependency Chart. Don't confuse it with a flow chart; it's not a flow chart, and the subtle distinctions will hopefully become clear, for they are the key to its usefulness and raw pulsing design power.

There is some dispute in Lucasfilm Games circles over whether they were called Puzzle Dependency Charts or Puzzle Dependency Graphs, and on any given day I'll swear with complete conviction that it was Chart, then the next day swear with complete conviction that it was Graph. For this article, I'm going to go with Chart. It's Sunday.

Gary and I didn’t have Puzzle Dependency Charts for Maniac Mansion, and in a lot of ways it really shows. The game is full of dead end puzzles and the flow is uneven and gets bottlenecked too much.

Puzzle Dependency Charts would have solved most of these problems. I can't remember when I first came up with the concept; it was probably right before or during the development of The Last Crusade adventure game, and both David Fox and Noah Falstein contributed heavily to what they would become. They reached their full potential during Monkey Island, where I relied on them for every aspect of the puzzle design.

A Puzzle Dependency Chart is a list of all the puzzles and steps for solving a puzzle in an adventure game. They are presented in the form of a graph, with each node connecting to the puzzle or puzzle steps that are needed to get there. They do not generally include story beats unless they are critical to solving a puzzle.

Let’s build one!


I always work backwards when designing an adventure game, not from the very end of the game, but from the end of puzzle chains.  I usually start with “The player needs to get into the basement”, not “Where should I hide a key to get into some place I haven’t figured out yet.”

I also like to work from left to right; other people like going top to bottom. My rationale for left to right is that I like to put them up on my office wall, wrapping the room with the game design.

So... first, we'll need to figure out what you need to get into the basement...


And we then draw a line connecting the two, showing the dependency. “Unlocking the door” is dependent on “Finding the Key”.  Again, it’s not flow, it’s dependency.

Now let’s add a new step to the puzzle called “Oil Hinges” on the door and it can happen in parallel to the "Finding the Key" puzzle...


We add two new puzzle nodes, one for the action “Oil Hinges” and its dependency “Find Oil Can”. “Unlocking” the door is not dependent on “Oiling” the hinges, so there is no connection. They do connect into “Opening” the basement door since they both need to be done.

At this point, the chart is starting to get interesting and is showing us something important: The non-linearity of the design. There are two puzzles the player can be working on while trying to get the basement door open.

There is nothing (NOTHING!) worse than linear adventure games and these charts are a quick visual way to see where the design gets too linear or too unwieldy with choice (also bad).

Let's build it back a little more...


When you step back and look at a finished Puzzle Dependency Chart, you should see this kind of overall pattern, with a lot of little sub-diamond shaped expansions and contractions of puzzles. Solving one puzzle should open up 2 or 3 new ones, and then those collapse down (but not necessarily at the same rate) to a single solution that then opens up more non-linear puzzles.


The game starts out with a simple choice, then the puzzles begin to expand out with more and more for the player to be doing in parallel, then collapse back in.

I tend to design adventure games in “acts”, where each act ends with a bottleneck to the next act. I like doing this because it gives players a sense of completion, and they can also file a bunch of knowledge away and (if needed) the inventory can be culled.


Monkey Island would have looked something like this...


I don’t have the Puzzle Dependency Chart for Monkey Island, or I’d post it. I’ve seen some online, but they are more “flowcharts” and not “dependency charts”. I’ve had countless arguments with people over the differences and how dependency charts are not flowcharts, bla bla bla. They’re not. I don’t completely know why, but they are different.

Flowcharts are great if you’re trying to solve a game, dependency charts are great if you’re trying to design a game. That’s the best I can come up with.

Here is a page from my MI design notebook that shows a puzzle in the process of being created using Puzzle Dependency Charts. It’s the only way I know how to design an adventure game. I’d be lost without them.


So, how do you make these charts?

You'll need some software that automatically rebuilds the charts as you connect nodes. If you try and make these using a flowchart program, you'll spend forever reordering the boxes and making sure lines don't cross. It's a frustrating and time-consuming process and it gets in the way of using these as a quick tool for design.

Back at Lucasfilm Games, we used some software meant for project scheduling. I don’t remember the name of it, and I’m sure it’s long gone.

The only modern program I've found that does this well is OmniGraffle, but it only runs on the Mac. Since OmniGraffle does exactly what I need, I haven't looked much deeper; I'm sure there are others.

OmniGraffle is built on top of the unix tool called graphviz. Graphviz is great, but you have to feed everything in as a text file. It’s a nerd level 8 program, but it’s what I used for DeathSpank.

You can take a look at the DeathSpank Puzzle Dependency Chart here, but I warn you, it's a big image, so get ready to zoom-n-scroll™. You can also see the graphviz file that produced it.
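The basement example from earlier fits in a few lines of graphviz's dot language, the same text format mentioned above (the node names are from the walkthrough; the file name is mine):

```shell
# Write the basement-door chart as a graphviz "dot" file; render it with
# `dot -Tpng basement.dot -o basement.png` if graphviz is installed.
cat > basement.dot <<'EOF'
digraph puzzles {
  rankdir=LR;  // left to right, as described above
  "Find Key"     -> "Unlock Door";
  "Find Oil Can" -> "Oil Hinges";
  "Unlock Door"  -> "Open Basement Door";
  "Oil Hinges"   -> "Open Basement Door";
}
EOF
cat basement.dot
```

Each `->` edge is a dependency, not a flow step: two edges into one node means both puzzles must be solved first.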

Hopefully this was interesting. I could spend all day long talking about Puzzle Dependency Charts. Yea, I'm a lot of fun on a first date.

News stories from Wednesday 06 August, 2014

Favicon for Grumpy Gamer 22:52 SCUMM Notes From The C64 » Post from Grumpy Gamer Visit off-site link

More crap that is quickly becoming a fire hazard. Some of my notes from building SCUMM on the C64 for Maniac Mansion.


I'm not sure whose phone number that is on the last page. I'm afraid to call it.

News stories from Monday 04 August, 2014

Favicon for Fabien Potencier 23:00 Signing Project Releases » Post from Fabien Potencier Visit off-site link

About a year ago, I started to sign all my Open-Source project releases. I briefly mentioned it during my SymfonyCon keynote in Warsaw, but this post is going to give you some more details.

Whenever I release a new version of a project, I sign the Git tag with my PGP key: DD4E C589 15FF 888A 8A3D D898 EB8A A69A 566C 0795.

Checking Git Tag Signatures#

If you want to verify a specific release, you need to install PGP first, and then get my PGP key:

$ gpg --keyserver --recv-keys 0xeb8aa69a566c0795

Then, use git tag to check the related tag. Here is how to check the Symfony 2.4.2 tag (from a Symfony clone):

$ git tag -v v2.4.2

Verification worked if the output contains the key used to sign the tag (566C0795) and contains a text starting with "Good signature from ...". Because of how Git works, having a good signature on a tag also means that all commits reachable from that tag are covered by this signature (that's why signing all commits/merges is not needed.)

You can see the PGP signature by using the following command:

$ git show --show-signature v2.4.2

For the curious ones, I'm going to take Symfony 2.4.2 as an example to explain how it works. First, Git does not sign the contents of a commit itself (which is empty anyway for tags), but its headers. Let's display the headers for the Symfony v2.4.2 tag:

$ git cat-file -p v2.4.2

You should get the following output:

object b70633f92ff71ef490af4c17e7ca3f3bf3d0f304
type commit
tag v2.4.2
tagger Fabien Potencier <> 1392233223 +0100

created tag 2.4.2
Version: GnuPG v1.4.13 (Darwin)


The PGP signature is calculated on all lines up to the beginning of the signature:

object b70633f92ff71ef490af4c17e7ca3f3bf3d0f304
type commit
tag v2.4.2
tagger Fabien Potencier <> 1392233223 +0100

created tag 2.4.2

You can try it by yourself by saving those lines in a test file, and creating a test.asc file with the PGP signature:

Version: GnuPG v1.4.13 (Darwin)


Then, check that the signature matches the Git headers with the following command:

$ gpg --verify test.asc test

So, when signing a tag, you sign the commit sha1 (and so all reachable commits), but also the tag name (and so the version you expect to get).

Signing Github Archives#

That's great, but when using Composer, you can get the code either as a Git clone (--prefer-source) or as an archive (--prefer-dist). If Composer uses the latter, you cannot use the signature coming from the tag, so how can you check the validity of what Composer just downloaded?

Whenever I make a new release, I also publish a file containing a sha1 for the zip file as returned by the Github API, but also a sha1 calculated on the file contents from the zip (the exact same files installed by Composer.) Those files are hosted on a dedicated checksums repository on Github.

As an example, let's say I have a project using Symfony 2.4.2 (you can check the version installed by Composer by running composer show -i). The sha1s are available here:

This file is signed, so you first need to verify it:

$ curl -O
$ gpg --verify v2.4.2.txt

Now, you can check the validity of the files downloaded and installed by Composer:

$ cd PATH/TO/vendor/symfony/symfony
$ find . -type f -print0 | xargs -0 shasum | shasum

The sha1 displayed should match the one from the file you've just downloaded (the one under the files_sha1 entry.)
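The idea behind that shell pipeline (hash every file, then hash the list of hashes) can be sketched in Python. Note this is an illustration of the concept, not a drop-in replacement: the shell version's result depends on `find`'s traversal order and `shasum`'s exact output format, while this sketch sorts paths for determinism.

```python
import hashlib
import os

def tree_sha1(root):
    """One checksum for a whole directory tree: sha1 each file's
    contents, build a list of "digest  relative/path" lines, then
    sha1 that list. Paths are sorted so the result is stable."""
    lines = []
    for dirpath, dirnames, filenames in os.walk(root):
        dirnames.sort()                       # deterministic walk order
        for name in sorted(filenames):
            path = os.path.join(dirpath, name)
            with open(path, "rb") as fh:
                digest = hashlib.sha1(fh.read()).hexdigest()
            lines.append("%s  %s" % (digest, os.path.relpath(path, root)))
    return hashlib.sha1("\n".join(lines).encode()).hexdigest()
```

Any added, removed, or modified file changes the aggregate digest, which is exactly what the corruption check relies on.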

To make it easier, you can also check all your dependencies via a simple script provided in the repository. From your project root directory (where the composer.json file is stored), run the following


It will output something along the lines of:

symfony/swiftmailer-bundle@v2.2.6                        OK  files signature
symfony/symfony@v2.5.2                                   KO  files signature
twig/extensions@v1.0.1                                   OK  files signature
twig/twig@v1.15.0                                        OK  files signature
white-october/pagerfanta-bundle@dev-master               --  unknown package
willdurand/hateoas@1.0.x-dev                             --  unknown package

 1 packages are potentially corrupted.
 Check that your did not add/modify/delete some files.

Consider the checksum feature as experimental and, as such, any feedback would be much appreciated.

News stories from Sunday 03 August, 2014

Favicon for Grumpy Gamer 21:24 2D Point and Click Engine Recommendations » Post from Grumpy Gamer Visit off-site link


I’m looking for some good recommendations on modern 2D point-and-click adventure game engines. These should be complete engines, not just advice to use Lua or Pascal (it’s making a comeback). I want to look at the whole engine, not just the scripting language. PC based is required. Mobile is OK. HTML5 is not necessary. Screw JavaScript. Screw Lua too, but not as hard as JavaScript.

I’m not so much interested in using them, as I’d just like to dissect and deconstruct what the state of the art is today.

P.S. I don’t know why I hate Lua so much. I haven’t really used it other than hacking WoW UI mods, but there is something about the syntax that makes it feel like fingernails on a chalkboard.

P.P.S. It's wonderful that "modern 2D point-and-click" isn't an oxymoron anymore.

P.P.P.S Big bonus points if you've actually used the engine. I do know how to use Google.

P.P.P.P.S I want engines that are made for adventure games, not general purpose game engines.

News stories from Thursday 24 July, 2014

Favicon for Grumpy Gamer 17:50 Best. Ending. Ever. » Post from Grumpy Gamer Visit off-site link

An email sent to me from LucasArts Marketing/Support letting me know they "finally" found some people who liked the ending to Monkey Island 2.


Favicon for Joel on Software 01:14 Trello, Inc. » Post from Joel on Software Visit off-site link

Hello? is this thing on?

I’m not sure if I even know how to operate this “blog” device any more. It’s been a year since my last post. I’m retired from blogging, remember?

Want to hear something funny? The only way I can post blog posts is by remote-desktopping into a carefully preserved Windows 7 machine which we keep in a server closet running a bizarrely messed-up old copy of CityDesk which I somehow hacked together and which only runs on that particular machine. The shame!

I do need to fill you in on some exciting Trello News, though. As you no doubt know, Trello is the amazing visual project management system we developed at Fog Creek.

Let me catch you up. As legend has it, back in yonder days, early twenty-oh-eleven, we launched a modest little initiative at Fog Creek to try to come up with new product ideas. We peeled off eight developers. The idea was that they would break into four teams. Two people each. Each team would work for a few months building a prototype or MVP for some product idea. Hopefully, at least one of those ideas would turn into something people wanted.

One of those teams started working on the concept that became Trello. The idea seemed so good that we doubled that team to four developers. The more we played around with it, the better we liked it. Within nine months, it was starting to look good enough to go public with, so we launched Trello at TechCrunch Disrupt to great acclaim and immediately got our first batch of users.

Over the next three years, Trello showed some real traction. The team grew to about 18 people, almost all in engineering. We did iPhone, iPad, Android, and Web versions. And Kindle. Oh and Android Wear.  The user base grew steadily to 4.6 million people.


Here are some things that surprised me:

  • We’ve successfully made a non-developer product that actually appeals to civilians. We tried to avoid the software developer niche this time, and it worked. I think that’s because Trello is visual. The board / card metaphor makes every board instantly understandable, which seems to attract all types of users who traditionally had never found online project management to be useful or worth doing.
  • It spreads like crazy. It’s a gas that expands to fill all available space. Somebody finds out about Trello from their reading group and adopts it at work; pretty soon their whole company has hundreds of Trello boards for everything from high level road maps to a list of snacks that need to be bought for the break room.
  • People love it. We regularly monitor Twitter for mentions of Trello and the amount of positive sentiment out there is awe-inspiring.

We launched something called Trello Business Class, which, for a small fee, provides all kinds of advanced administration features so that the larger organizations using Trello can manage it better, and Hey Presto, Trello was making money!

Taco got big, too
In the meantime, we started getting calls from investors. “Can we invest in Trello?” they asked. They were starting to notice that whenever they looked around their portfolio companies all they saw was Trello boards everywhere.

We didn’t really need the money; Fog Creek is profitable and could afford to fund Trello development to profitability. And when we told the investors that they could take a minority, non-controlling stake in Fog Creek, we had to start explaining about our culture and our developer tools and our profit sharing plans and our free lunches and private offices and whatnot, and they got confused and said, “hmm, why don’t you keep all that, we just want to invest in Trello.”

Now, we didn’t need the money, but we certainly like money. We had a bunch of ideas for ways we could make Trello grow faster and do all kinds of astonishing new features and hire sales and marketing teams to work on Trello Business Class. We  would have gotten around to all that eventually, but not as quickly as we could with a bit of folding money.

Which led to this fairly complicated plan. We spun out Trello into its own company, Trello Inc., and allowed outside investors to buy a minority stake in that. So now, Trello and Fog Creek are officially separate companies. Trello has a bunch of money in the bank to operate independently. Fog Creek will continue to build FogBugz and Kiln and continue to develop new products every once in a while. Michael Pryor, who co-founded Fog Creek with me in 2000, will be CEO of Trello.

So, yeah. This is the point at which old-time readers of this blog point out that the interest of VCs is not always aligned with the interest of founders, and VCs often screw up the companies they invest in.

That’s mostly true, but not universal. There are smart, founder-friendly VCs out there. And with Trello (and Stack Overflow, for that matter), we didn’t take any outside investment until we already had traction and revenue, so we could choose the investors that we thought were the most entrepreneur-friendly, and we kept control of the company.

In the case of Trello, we had so much interest from investors that we were even able to limit ourselves to investors who were already investors in Stack Exchange and still get the price and terms we wanted. The advantage of this is that we know them, they know us, and they’re aligned enough not to fret about any conflicts of interest which might arise between Stack Exchange and Trello because they have big stakes in both.

Both Index Ventures and Spark Capital will co-lead the investment in Trello, with Bijan Sabet from Spark joining our board. Bijan was an early investor in Twitter, Tumblr, and Foursquare which says a lot about the size of our ambitions for Trello. The other two members of the board are Michael and me.

Even though Fog Creek, Trello, and Stack Exchange are now three separate companies, they are all running basically the same operating system, based on the original microprocessor architecture known as “making a company where the best developers want to work,” or, in simpler terms, treating people well.

This operating system applies both to the physical layer (beautiful daylit private offices, allowing remote work, catered lunches, height-adjustable desks and Aeron chairs, and top-tier coffee), the application layer (health insurance where everything is paid for, liberal vacations, family-friendly policies, reasonable work hours), the presentation layer (clean and pragmatic programming practices, pushing decisions down to the team, hiring smart people and letting them get things done, and a commitment to inclusion and professional development), and mostly, the human layer, where no matter what we do, it’s guided first and foremost by obsession over being fair, humane, kind, and treating each other like family. (Did I tell you I got married?)

So, yeah, there are three companies here, with different products, but every company has a La Marzocco Linea espresso machine in every office, and every company gives you $500 when you or your partner has a baby to get food delivered, and when we’re trying to figure out how to manage people, our number one consideration is how to do so fairly and compassionately.

That architecture is all the stuff I spent ten years ranting on this blog about, but y’all don’t listen, so I’m just going to have to build company after company that runs my own wacky operating system, and eventually you’ll catch on. It’s OK to put people first. You don’t have to be a psychopath or work people to death or create heaps of messy code or work in noisy open offices.

Anyway, that’s the news from our neck of the woods. If the mission of Trello sounds exciting we’ll be hiring a bunch of people soon so please apply!

News stories from Monday 21 July, 2014

Favicon for Grumpy Gamer 16:08 Maniac Mansion Design Doc » Post from Grumpy Gamer Visit off-site link

Even more crap from my Seattle storage unit!

Here is the original pitch document Gary and I used for Maniac Mansion. Gary had done some quick concepts, but we didn't have a real design, screen shots or any code. This was before I realized coding the whole game in 6502 was nuts and began working on the SCUMM system.

There was no official pitch process or "green lighting" at Lucasfilm Games. The main purpose of this document would have been to pass around to the other members of the games group and get feedback and build excitement.

I don't remember a point where the game was "OK'd".  It felt that Gary and I just started working on it and assumed we could.  It was just the two of us for a long time, so it's not like we were using up company resources.  Eventually David Fox would come on to help with SCUMM scripting.

Three people. The way games were meant to be made.

If this document (and the Monkey Island Design Notes) say anything, it's how much ideas change from initial concept to finished game. And that's a good thing. Never be afraid to change your ideas. Refine and edit. If your finished game looks just like your initial idea, then you haven't pushed and challenged yourself hard enough.

It's all part of the creative process. Creativity is a messy process. It wants to be messy and it needs to be messy.



News stories from Friday 18 July, 2014

Favicon for Grumpy Gamer 17:48 Monkey Poster » Post from Grumpy Gamer Visit off-site link

More crap from my storage unit.


Print your own today!

News stories from Thursday 17 July, 2014

Favicon for Grumpy Gamer 01:50 Maniac Mansion Design Notes » Post from Grumpy Gamer Visit off-site link

While cleaning out my storage unit in Seattle, I came across a treasure trove of original documents and backup disks from the early days of Lucasfilm Games and Humongous Entertainment. I hadn't been to the unit in over 10 years and had no idea what was waiting for me.

Here is the first batch... get ready for a week of retro... Grumpy Gamer style...

First up...


An early mock-up of the Maniac Mansion UI. Gary had done a lot of art long before we had a running game, hence the near finished screen without the verbs.


A map of the mansion right after Gary and I did a big pass at cutting the design down.  Disk space was a bigger concern than production time. We had 320K. That's right. K.


Gary and I were trying to make sense of the mansion and how the puzzles flowed together. It wouldn't be until Monkey Island that the "puzzle dependency chart" would solve most of our adventure game design issues.


More design flow and ideas. The entire concept of getting characters to like you never really made it into the final game. Bobby, Joey and Greg would grow up and become Dave, Syd, Wendy, Bernard, etc..


A really early brainstorm of puzzle ideas. NASA O-ring was probably "too soon" and twenty-five years later the dumb waiter would finally make it into The Cave.

I'm still amazed Gary and I didn't get fired.

News stories from Tuesday 15 July, 2014

Favicon for Grumpy Gamer 22:08 Ten Years Running! » Post from Grumpy Gamer Visit off-site link


Time flies. The gaming and internet institution known as the Grumpy Gamer Blog has been around for just over ten years.

My first story was posted in May of 2004. Two thousand and four. I'll let that date sink in. Ten years.

The old Grumpy Gamer website was feeling "long in the tooth" and it was starting to bug me that Grumpy Gamer was still using a CRT monitor. He should have been using a flat screen, or more likely, just a mobile phone, or maybe those Google smart contact lens. He would not have been using an Oculus Rift. Don't get me started.

I coded the original Grumpy Gamer from scratch and it was old and fragile and I dreaded every time I had to make a small change or wanted to add a feature.

A week ago I had the odd idea of doing a Commodore 64 theme for the entire site, so I began anew. I could have used some off-the-shelf blogging tool or code base, but where's the fun in that. Born to program.

I'm slowly moving all the old articles over. I started with the ones with the most traffic and am working my way down. I fundamentally changed the markup format, so I can't just import everything. Plus, there is a lot of crap that doesn't want to be imported.  I still need to decide if I'm going to import all the comments. There are a crap-ton of them.

I'd also like to find a different C64 font. This one has kerning, but it lacks unicode characters, neither of which are truly "authentic", but, yeah, who cares.

But the honest truth is...

I've been in this creative funk since Scurvy Scallywags Android shipped and I find myself meandering from quick prototype to quick prototype. I'll work on something for a few days and then abandon it because it's pointless crap. I think I'm up to eight so far.

The most interesting prototype is about being lost in a cavern/cave/dungeon. The environment programmatically builds itself as you explore. There is no entrance and no exit. It is an exercise in the frustration of being lost. You can never find your way out. You just wander and the swearing gets worse and worse as you slowly give up all hope.

I have no sense of direction, so in some ways, maybe it was a little personal in the way I suppose art should be.

I worked on the game for about a week then gave up. Maybe the game was more about being lost than I thought.

Rebuilding Grumpy Gamer was a way to get my brain going again. It was a project with focus and an end. As the saying goes: Just ship something. So I did.

The other saying is: "The Muse visits during the act of creation, not before."

Create and all will follow. Something to always keep in mind.

News stories from Monday 14 July, 2014

Favicon for Grumpy Gamer 18:05 Commodore 64 » Post from Grumpy Gamer Visit off-site link


News stories from Sunday 13 July, 2014

Favicon for #openttdcoop 17:09 YETIs have arrived! » Post from #openttdcoop Visit off-site link

Ladies and madmen, I am happy to announce that the project I have been working on for the last few months has grown to its first release! First of all I would like to give a huge thanks to frosch, who helped me greatly to get the production mechanism working, but also to Alberth for trying to help me as much as possible. I would also like to thank all the people like planetmaker for answering my endless and often stupid questions about NML in general. I greatly appreciate everyone who has supported me with any feedback!


After 3 months I have managed to model 14 industries, and I coded all of them in the last two weeks.
Creating some industries took more effort than others; a huge amount of effort went into the 3-X Machinery Factory, with all the robots being animated and the car being assembled.
While some look simpler, they often had some problem I had to overcome, but in the end it all works at least somehow. 🙂
Only the Worker Yard gets the 404 graphic for now.

– The Worker Yard outputs an amount of YETI dudes based on the current year (so it will grow no matter what), but the production can be increased by Food and Building Materials. Both Food and Building Materials should have the same effect.
– Other industries all work simply on a Consume->Produce method, even “primaries”. This is done over time so you do not get all of the production immediately: 10% of the cargo currently waiting is consumed and produced.
– I do not know in what way industries die, find out! 😀

– There are only 15 industries; Plantation / Orchard is missing due to missing sprites. And coincidentally I am somehow unable to add a 16th industry… to be added later.
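As a toy sketch of that Consume->Produce rule (using only what the post states: each production step consumes 10% of the cargo currently waiting and turns it into output, so production arrives over time rather than all at once):

```python
def production_step(waiting, ratio=0.10):
    """One production tick: consume 10% of the cargo currently
    waiting at the industry and produce output from it.
    Returns (cargo still waiting, cargo produced this tick)."""
    consumed = waiting * ratio
    return waiting - consumed, consumed

# Deliver 100 units, then watch it drain over several ticks:
waiting = 100.0
produced_total = 0.0
for _ in range(3):
    waiting, out = production_step(waiting)
    produced_total += out
```

After the first tick, 10 units are produced and 90 are still waiting; each later tick produces 10% of whatever remains, which gives the gradual output the post describes.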

NUTS Unrealistic Train Set 0.7.2 Universal wagons are able to load YETIs (Workers), and will show specific sprites. Older NUTS versions also work but will show just flatbed crate graphics.

Currently the file is 30MB, and I have not yet added a single animation. Right now I just want to release this as 0.0.1 and add fancy things later (animations will probably go in asap).

Sooo, enjoy it 🙂


News stories from Friday 11 July, 2014

Favicon for Grumpy Gamer 01:11 Monkey Bucks » Post from Grumpy Gamer Visit off-site link


Favicon for Grumpy Gamer 00:48 Booty From My Seattle Storage Space! » Post from Grumpy Gamer Visit off-site link


News stories from Thursday 10 July, 2014

Favicon for Code Penguin 19:16 My experiences with Kwixo » Post from Code Penguin Visit off-site link

Kwixo is supposedly a response to PayPal, by some French banks.

I tried to use it to allow a simpler way to pay for the Weboob Association membership fee. PayPal is out anyway, given the fees it charges, we’d be lucky to see half of the actual fee make it back to a bank account.

We tried twice. With the first member it failed because Kwixo asked for so many verifications that he gave up. With the second one, given that his bank was one of Kwixo’s partners, it worked. Or so I thought!

After sending me an e-mail telling me it was received, one day later (a Saturday!) they tried to call me [1]. For something that is supposedly on the Internet, why not send an e-mail instead? Anyway, they told me the service was only an exchange between individuals, and since they saw the mention of “Cotisation” in the payment reason I had to register with their Association service by calling another number.

The thing is, I shouldn’t have to do this. This isn’t worth the hassle, and thus will be my last interaction with them. What this story tells us however is that they must get so little business they can still screen all transaction motives, and afford to call people instead of having some sort of semi-automated support system.

Anyway, most of the membership fees have been paid in cash, and the others SEPA. For more details, see here.

The BitPay option is for people with no access to SEPA, but is unlikely to be used anytime soon. But at least, I was able to explain what I would be using them for by e-mail.

However, I didn’t learn my lesson. I thought Kwixo could work, the other way, as a client. Unfortunately, I forgot to never trust a French bank.

I ordered supplies from a website, and chose to pay on delivery, by using Kwixo as an escrow. After all, it was my first order there, and I could use the extra safety.

They asked for a lot of personal details, to an extent I was never asked before; it already started smelling like a scam. The worst was that they first asked some documents, which I sent promptly, and they replied after a day that I forgot to send some others, even though they did not ask for them in the first place. This cycle took a whole week, and choked on the fact that my latest electricity bill was deemed “too old”, despite me explaining that it was the absolute latest.

So I told them to go fuck themselves – literally. They did not budge, and I figured they actually never read any text in the mails! So I sent an image showing them to go fuck themselves. It worked; they canceled the order, and I was able to order again without using them. I suspect the people I was interacting with did not even speak French.

This “fraud protection” lost Kwixo a customer, and almost lost the website a customer. Funny thing is, just looking at the order would make any fraud suspicions silly: the total was well below the machine it was for. Why would I steal that when I already paid much more? Is the car dealership afraid clients will steal their pens?

  1. I rarely answer to unknown numbers, as I dislike the unsolicited nature of phone calls.
Favicon for Code Penguin 18:50 In case you still think banks know what they are doing » Post from Code Penguin Visit off-site link

Working with Weboob has confirmed my suspicions that banks’ IT departments are clueless (at least the French ones).

It’s not only that they have terrible websites with snake-oil security (i.e. keypads are easily logged, they only bother regular users).

It’s that their approach to security is from another world. When I was working with a client that was a bank a few years ago, they forced on us a lot of stupid things in the name of security, but to make things work the chosen solutions were worse from every point of view, including actual security.

This is not a technical problem; the problem is a lack of technical people where they should be.

The cherry on the cake is the BNP Paribas bank. They have been historically terrible at configuring their DNS server (with a tendency to return a different IP depending on yours, and of course those two IPs gave two different versions of the site… unless one of them was out of commission).
And now, for over a year, they have been forcing SSL connections to RC4 128 bits, which is a known weak cipher. If you try to force something better, the server will reject you!

Banks try hard to be taken seriously, and they usually are. I just can’t help laughing at them.

News stories from Tuesday 08 July, 2014

Favicon for Ramblings of a web guy 16:25 Keeping your data work on the server using UNION » Post from Ramblings of a web guy Visit off-site link
I have found myself using UNION in MySQL more and more lately. In this example, I am using it to speed up queries that are using IN clauses. MySQL handles the IN clause like a big OR operation. Recently, I created what looks like a very crazy query using UNION, that in fact helped our MySQL servers perform much better.

With any technology you use, you have to ask yourself, "What is this tech good at doing?" For me, MySQL has always been excellent at running lots of small queries that use primary, unique, or well-defined covering indexes. I guess most databases are good at that. Perhaps that is the bare minimum for any database. MySQL seems to excel at doing this, however. We had a query that looked like this:

select category_id, count(*) from some_table
where
    article_id in (1,2,3,4,5,6,7,8,9) and
    category_id in (11,22,33,44,55,66,77,88,99) and
    some_date_time > now() - interval 30 day
group by
    category_id
There were more things in the where clause. I am not including them all in these examples. MySQL does not have a lot it can do with that query. Maybe there is a key on the date field it can use. And if the date field limits the possible rows, a scan of those rows will be quick. That was not the case here. We were asking for a lot of data to be scanned. Depending on how many items were in the in clauses, this query could take as much as 800 milliseconds to return. Our goal at DealNews is to have all pages generate in under 300 milliseconds. So, this one query was 2.5x our total page time.

In case you were wondering what this query is used for, it is used to calculate the counts of items in sub categories on our category navigation pages. On this page it's the box on the left hand side labeled "Category". Those numbers next to each category are what we are asking this query to return to us.

Because I know how my data is stored and structured, I can fix this slow query. I happen to know that there are many fewer rows for each item for article_id than there is for category_id. There is also a key on this table on article_id and some_date_time. That means, for a single article_id, MySQL could find the rows it wants very quickly. Without using a union, the only solution would be to query all this data in a loop in code and get all the results back and reassemble them in code. That is a lot of wasted round trip work for the application however. You see this pattern a fair amount in PHP code. It is one of my pet peeves. I have written before about keeping the data on the server. The same idea applies here. I turned the above query into this:

select category_id, sum(count) as count from (
        select category_id, count(*) as count from some_table
        where
            article_id=1 and
            category_id in (11,22,33,44,55,66,77,88,99) and
            some_date_time > now() - interval 30 day
        group by
            category_id
    union all
        select category_id, count(*) as count from some_table
        where
            article_id=2 and
            category_id in (11,22,33,44,55,66,77,88,99) and
            some_date_time > now() - interval 30 day
        group by
            category_id
    union all
        select category_id, count(*) as count from some_table
        where
            article_id=3 and
            category_id in (11,22,33,44,55,66,77,88,99) and
            some_date_time > now() - interval 30 day
        group by
            category_id
    union all
        select category_id, count(*) as count from some_table
        where
            article_id=4 and
            category_id in (11,22,33,44,55,66,77,88,99) and
            some_date_time > now() - interval 30 day
        group by
            category_id
    union all
        select category_id, count(*) as count from some_table
        where
            article_id=5 and
            category_id in (11,22,33,44,55,66,77,88,99) and
            some_date_time > now() - interval 30 day
        group by
            category_id
    union all
        select category_id, count(*) as count from some_table
        where
            article_id=6 and
            category_id in (11,22,33,44,55,66,77,88,99) and
            some_date_time > now() - interval 30 day
        group by
            category_id
    union all
        select category_id, count(*) as count from some_table
        where
            article_id=7 and
            category_id in (11,22,33,44,55,66,77,88,99) and
            some_date_time > now() - interval 30 day
        group by
            category_id
    union all
        select category_id, count(*) as count from some_table
        where
            article_id=8 and
            category_id in (11,22,33,44,55,66,77,88,99) and
            some_date_time > now() - interval 30 day
        group by
            category_id
    union all
        select category_id, count(*) as count from some_table
        where
            article_id=9 and
            category_id in (11,22,33,44,55,66,77,88,99) and
            some_date_time > now() - interval 30 day
        group by
            category_id
) derived_table
group by
    category_id
Pretty gnarly looking, huh? The run time of that query is 8ms. Yes, MySQL has to perform 9 subqueries and then the outer query, but because it can use good keys for the subqueries, the total execution time is only 8ms. The data comes back from the database ready to use, in one trip to the server. The page generation time for those pages went from a mean of 213ms with a standard deviation of 136ms to a mean of 196ms with a standard deviation of 81ms. That may not sound like a lot, but take a look at how much less work the MySQL servers are doing now.

[Graph: MySQL rows read, showing the decrease]

The arrow in the image is when I rolled the change out. Several other graphs show the change in server performance as well.

The UNION is a great way to keep your data on the server until it's ready to come back to your application. Do you think it can be of use to you in your application?
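If you want to generate such a query from application code rather than hand-writing nine near-identical subqueries, a small helper can do it. This is just a sketch (function and parameter names are mine; the table and column names are taken from the example above):

```python
def build_category_counts_query(article_ids, category_ids, days=30):
    """Build one UNION ALL query that fetches per-category counts for
    several articles in a single round trip to the MySQL server.

    Only integers should ever be interpolated here; for anything
    user-supplied, use bound parameters instead of string formatting.
    """
    cats = ",".join(str(int(c)) for c in category_ids)
    subquery = (
        "select category_id, count(*) as count from some_table"
        " where article_id={aid}"
        " and category_id in ({cats})"
        " and some_date_time > now() - interval {days} day"
        " group by category_id"
    )
    parts = [
        subquery.format(aid=int(a), cats=cats, days=int(days))
        for a in article_ids
    ]
    inner = "\n    union all\n".join(parts)
    # The outer query sums the per-article counts per category.
    return (
        "select category_id, sum(count) as count from (\n"
        + inner
        + "\n) derived_table group by category_id"
    )
```

Calling `build_category_counts_query(range(1, 10), [11, 22, 33])` reproduces the shape of the query above, ready to hand to your database layer.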

News stories from Monday 09 June, 2014

Favicon for Ramblings of a web guy 16:14 Parenting When Your Kid Is "An Adult" » Post from Ramblings of a web guy Visit off-site link
When I dropped out of college at 19, I came home to my parents' house. My parents had moved since I had left home. There was no room for me in the new house. I was not there to claim one when they moved in. My Dad and I put up a wall with paneling on it to enclose part of the garage. We cut a hole in the air duct that was in that space. Tada! That was now my bedroom. My room consisted of a concrete floor, three walls with paneling, one concrete block wall, a twin bed (I had a king size when I left 15 months before), and maybe a table. I was not happy. But, that is what was offered to me. I kind of held a grudge about that for a while.

As of right now, my oldest son is 18 years old. He starts college in the fall. I am so very proud of him. He was accepted to an honors program. His grades and testing earned him scholarships. His future is very bright. For this summer, though, he is still home. He has no job. Our attempts to get him to get one have fallen short. He is not motivated to do so. I refuse to go find him one. So, I am giving him one. In exchange for room and board, gas for his car, his car, his car insurance and whatever money is left after those expenses are paid going into his pocket, he will be my assistant. He will fetch his siblings from various places, run errands for me, do extra chores around the house, and anything else I need. To earn his car, he has been doing the "personal driver" service for a while for me. I am expecting more of him this summer though. This arrangement has its good days and bad days.

Today, I suddenly realized why my parents put me in that basement. The bad news for my son is that our basement is darker, dirtier, hotter and a lot less comfortable than the one I lived in at 19 years old. Let's hope I don't get to the point where I want to put him down there.

News stories from Monday 12 May, 2014

Favicon for Helge's Blog 19:55 Why Michel Reimon must go to Brussels » Post from Helge's Blog Visit off-site link

Michel Reimon and Niko Alm (Neos) start a human chain calling for a parliamentary inquiry into the Hypo affair, Feb. 2014. Photo: Der Standard / Matthias Cremer

The European election is in two weeks, and I am giving my preferential vote to Michel Reimon (Blog/Twitter), second on the Green list.

I know Michel from his days as an author and journalist, when in late 2007 he used a mass email to mobilize against the scandalous Security Police Act that had just been passed. At the time I organized the Metternich 2.0 online demonstration, in which around 200 websites took part, and Michel's mass email turned into a regular "Democratic Salon" that met in Viennese coffee houses for months.

Shortly afterwards Michel moved into Burgenland state politics and continued to stand out with smart writing. His article "Bequem im Filz", written in the middle of the cheerful Ernst Strasser finger-pointing, showed how corruption begins with ourselves. A differentiated, level-headed perspective is his trademark in general. Readable examples can be found in his reportage from Syria during the conflict over the Muhammad cartoons, his takedown of the TTIP free trade agreement, and, time and again, in very personal pieces, for instance about hurt or frustration.

And, of course, net politics. Back in 2009 the net politician Eva Lichtenberger got my vote. Even if little was heard of her in this country, her impact behind the scenes was considerable. A search for her name on gives an idea of it.

In 2009 Eva Lichtenberger only barely made it into Parliament, and this year it is just as close for Michel Reimon: the Lower Austrian Greens are putting €200,000 into a preferential-vote campaign meant to dispose of Madleine Petrovic off to Brussels. That would push Michel Reimon down to third place on the list, which will most likely not make it into Parliament.

In 2014, net politics matters more than ever, because it has gotten around even beyond hacker circles that the technical infrastructure for the modern surveillance state has long existed. Reimon is one of the few political minds who understand net politics, both in the big picture and in its consequences for each of us, for society and its culture. That is why it is important to give Michel a preferential vote, and to see that everyone who cares about net politics does the same.

As a first step, you can join here: Ich wähl' Michel. We need smart people of integrity like him in the European Parliament.

News stories from Sunday 04 May, 2014

Favicon for Fabien Potencier 23:00 The rise of Composer and the fall of PEAR » Post from Fabien Potencier Visit off-site link

A couple of months ago, Nils Adermann sent me a nice postcard that reminded me that "3 years ago, we [Nils and me] met for the SymfonyLive hackday in San Francisco." Nils was attending the Symfony conference as he announced the year before that phpBB would move to Symfony at some point.

At that time, I was very interested in package managers as I was looking for the best way to manage Symfony2 bundles. I used PEAR for symfony1 plugins but the code was really messy as PEAR was not built with that use case in mind. The philosophy of Bundler from the Ruby community looked great and so I started to look around for other package managers. After a lot of time researching the best tools, I stumbled upon libzypp and I immediately knew that this was the one. Unfortunately, libzypp is a complex library, written in C, and not really usable as is for Symfony's needs.

As a good package manager that would let users easily install plugins/bundles/MODs was probably also a big concern for phpBB, I talked to Nils about this topic during that 2011 hackday in San Francisco. After I shared my thoughts about libzypp, "..., I [Nils] wrote the first lines of what should become Composer a few months later".

Nils did a great job at converting the C code to PHP code; later on Jordi joined the team and he moved everything to the next level by implementing all the infrastructure needed for such a project.

So, what about PEAR? PEAR served the PHP community for many years, and I think it's time now to make it die.

I've been using PEAR as a package manager since my first PHP project back in 2004. I even wrote a popular PEAR channel server, Pirum. But today, it's time for me to move on and announce my plan for the PEAR channels I'm managing.

I first tweeted about this topic on February 13th 2014: "I'd like to stop publishing PEAR packages for my projects; #Composer being widespread enough. Any thoughts? #Twig #Swiftmailer #Symfony #php". And on the 14th, I decided to stop working on Pirum: "My first step towards PEAR deprecation: As of today, #Pirum is not maintained anymore. #php"

As people wanted some stats about the PEAR Symfony channel, I dug into my logs and figured out that most usage came from PHPUnit dependencies: "Stats are clear: my PEAR channels mostly deliver packages related to PHPUnit: Yaml, Console, and Finder. /cc @s_bergmann".

On April 20th 2014, Sebastian Bergmann started the discussion about PEAR support for PHPUnit: "Do people still install PHPUnit via PEAR? Wondering when I can shut down". I immediately answered that: "If @s_bergmann stops publishing PEAR packages, I'm going to do the same for #symfony as packages were mainly useful only for #PHPUnit".

And the day after, Sebastian published his plan for deprecating the PHPUnit PEAR channel: "So Long, and Thanks for All the PEARs:".

More recently, Pádraic Brady also announced the end of the PEAR channel for Mockery.

Besides Symfony, I also manage PEAR channels for Twig, Swiftmailer, and Pirum. So, here is my plan for all the PEAR channels I maintain:

  • Update the documentation to make it clear that the PEAR channel is deprecated and that Composer is the preferred way to install PHP packages (already done for all projects);

  • Publish a note about the PEAR channel deprecation on the PEAR channel websites (already done for all projects);

  • Publish a blog post to announce the deprecation of the PEAR installation mechanism (Twig, Swiftmailer, and Symfony);

  • Stop releasing new PEAR packages;

  • Remove the PEAR installation mechanism from the official documentation (probably in September this year).

Keep in mind that I'm just talking about stopping publishing new packages and promoting Composer as the primary way to install my libraries and projects; the current packages will continue to be installable for the foreseeable future as I don't plan to shut down the PEAR channels websites anytime soon.
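For anyone making the switch, the Composer side is a single require entry per package. As an illustration only (the package names are the ones published on Packagist; the version constraints here are merely examples), a minimal composer.json pulling in Twig and Swiftmailer looks like this:

```json
{
    "require": {
        "twig/twig": "~1.0",
        "swiftmailer/swiftmailer": "~5.0"
    }
}
```

Running `composer install` in the same directory then fetches the packages and generates the autoloader, with no channel registration step at all.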

On a side note, it's probably a good time to remove PEAR support from PHP itself; and I'm not sure that it would make sense to bundle Composer with PHP.

Happy Composer!

News stories from Wednesday 30 April, 2014

Favicon for Grumpy Gamer 01:48 Who Are These Pirates? » Post from Grumpy Gamer Visit off-site link


This has always bugged me. Now that I've pointed it out, it's going to bug you too.

News stories from Saturday 19 April, 2014

Favicon for Grumpy Gamer 01:00 What is an indie developer? » Post from Grumpy Gamer Visit off-site link

What makes a developer "indie"?

I'm not going to answer that question, instead, I'm just going to ask a lot more questions, mostly because I'm irritated and asking questions rather than answering them irritates people and as the saying goes: irritation makes great bedfellows.

What irritates me is this almost "snobbery" that seems to exist in some dev circles about what an "indie" is. I hear devs who call themselves "indie" roll their eyes at other devs who call themselves "indie" because "clearly they aren't indie".

So what makes an indie developer "indie"?  Let's look at the word.

The word "indie" comes from (I assume) the word "independent".  I guess the first question we have to ask is: independent from what? I think most people would say "publishers".

Yet, I know of several devs who proudly call themselves "indie" when they are taking money from publishers (and big publishers at that) and other devs that would sneer at a dev taking publisher money and calling themselves "indie".

What about taking money from investors? If you take money are you not "indie"? What about money from friends or family? Or does it have to be VCs for you to lose "indie" status?

What about Kickstarter?  I guess it's OK for indies to take money from Kickstarter. But are you really "independent"?  3,000 backers who now feel a sense of entitlement might disagree. Devs who feel an intense sense of pressure from backers might also disagree.

Does being "indie" mean your idea is independent from mainstream thinking? Is being an "indie developer" just the new Punk Rock?

Does the type of game you're making define you as "indie"? If a dev is making a metrics driven F2P game, but they are doing it independent of a publisher, does that mean they are not "indie"?

This is one of the biggest areas where I see "indie" snobbery kick in. Snobby "indie" devs will look at an idea and proclaim it "not indie".

Do "indie" games have to be quirky and weird? Do "indie" games have to be about the "art"?

What about the dev? Does that matter? Someone once told me I was not "indie" because I have an established name, despite the fact that the games I'm currently working on have taken no money from investors or publishers and are made by three people.

What if the game is hugely successful and makes a ton of money? Does that make it not "indie" anymore? Is being "indie" about being scrappy and clawing your way from nothing? Once you have success, are you no longer "indie"?  Is it like being an "indie band" where once they gain success, they are looked down on by the fans? Does success mean selling-out? Does selling-out revoke your "indie dev" card?

What if the "indie" developer already has lots of money? Does having millions of dollars make them not "indie"? What if they made the money before they went "indie" or even before they started making games or if they have a rich (dead) aunt? Does "indie" mean you have to starve?

Is it OK for an "indie" to hire top notch marketing and PR people? Or do "indies" have to scrape everything together themselves and use the grassroot network?

Or does "indie" just mean you're not owned by a publisher? How big of a publisher? It's easy to be a publisher these days, most indies who put their games up on Steam are "publishers". The definition of a publisher is that you're publishing the game and the goal of a lot of studios is to "self-publish".

Or does being "indie" just mean you came up with the idea?  The Cave was funded and published by SEGA, so was it an "indie" title? SEGA didn't come up with the idea and exerted no creative control, so does that make it an "indie" title?

I don't know the answers to any of these questions (and maybe there aren't any), but it irritates me that some devs (or fans) look down on devs because they are not "indie" or not "indie enough".

Or is being "indie" just another marketing term? Maybe that's all it means anymore. It's just part of the PR plan.

News stories from Wednesday 09 April, 2014

Favicon for #openttdcoop 14:31 YETI Extended Towns & Industries » Post from #openttdcoop Visit off-site link


Just like about 3 years ago when I announced first concepts of NUTS, this time I am glad to announce that I started to sketch schemes and industries for YETI.
This article serves partly as my notepad, so I remember the core idea, and partly as a way to let you know the concepts and/or gather your feedback on them.
To demonstrate my ideas I have created some scheme images below.




Years ago, NUTS started being developed because other train newGRFs had so many limitations that the only hope I saw was in creating a new train set which would attempt to fix those gameplay-hurting parts, extending it all with my own experience.
The YETI situation is similar, yet different. Similar because the current industry newGRFs each have a lot of downsides.

With original industries most people generally get bored after some time and start searching for something new. And so they find ECS, Pikka Basic Industries, OpenGFX+ Industries and FIRS.

But ECS is completely unusable due to its limiting features and strange production mechanics, so that is one down.
Pikka Basic Industries is another one out of the game: not only does it have strange limitations, like the steel mill requiring a precise amount of coal and iron ore to work, but, most importantly, its industries simply die when they empty out.
Of the remaining options, OpenGFX+ is great, but it is "just" the original mechanism: transport the cargo and the industry grows, nothing more, nothing less. This should not be underestimated (the concept has been confirmed to work by numerous OpenTTD players for years and is still a ton of fun), but in an industry newGRF people generally also look for some new mechanism.
Last but not least, FIRS has a minimum of limiting, inconvenient features while adding a whole new mechanism of supplying industries, plus a TON of new cargoes/industries; you can even choose them to some extent via economies. In general FIRS is great (at least in the beginning), but…
The problem with FIRS is that cargoes which can produce supplies automatically become a "better tier", as you have no reason to use the other cargoes; not to mention the insane amount of effort you have to put into connecting e.g. the clustered farms, for which you get no reward.
In the end you return to OpenGFX+ or the original industries, as they simply work, which is unfortunate.
YETI is trying to create a simple yet interesting system which would be fun to play, without overwhelming complexity but allowing for different approaches and ways to play it.

Main YETI system

Now of course you are probably asking how I want to achieve this. Learning from the downsides and upsides of other sets, I would say that some kind of mechanism like supplies is very nice, as it encourages a network that connects everything together so supplies can be distributed. So I added supplies (Toys/Machinery) which improve the primary industries in some way.

To avoid the confusion FIRS creates, every primary has to be useful and contribute to the supplying mechanism somehow. And so the system does not self-reproduce, there are two different kinds of supplies: Workers, which improve industry production, and Toys/Machinery, which make the production fluctuate (decrease) less.

Workers also create a new link to towns and their size, so towns play a role as well. With that, two chains come into play which boost town size and the number of workers per citizen.

What all that means: you can service one industry chain and survive, but the system motivates you to connect all chains together, not necessarily in a perfect balance, as they all contribute to the whole system somehow. You do not get punished for lacking something; you only get rewarded for caring for your industries better.

Other YETI details

In order to motivate connecting more towns, I intend to make the worker amounts grow linearly up to e.g. 500 food and 500 building materials delivered per month. If you deliver more than that, the delivery starts being less effective. This means growing a gigantic town is an option, but it would probably make more sense to take care of multiple towns instead.
At the same time, when redistributing things you probably "lose" some amount of cargo to imprecise distribution, so the multiple-town strategy would be viable but not too overpowered.
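As a toy illustration of that diminishing-returns curve (the function name, the 500 cap, and the 50% efficiency past the cap are my own placeholder choices, not anything decided for YETI):

```python
def effective_delivery(delivered, cap=500, falloff=0.5):
    """Deliveries up to `cap` units per month count fully; anything
    beyond the cap counts at a reduced rate, so one gigantic town is
    viable but servicing several towns is usually the better deal."""
    if delivered <= cap:
        return delivered
    return cap + (delivered - cap) * falloff
```

Under these numbers, delivering 900 units to one town is worth 700 effective units, while splitting the same 900 units across two towns would count in full.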

The biggest problem with the original industry mechanism is generally that production stays sane until the later years, but then it explodes to astronomic values like 2295 cargo units per month. I think a value that high is only acceptable when it comes with other conditions attached.
Such a condition being, for example, that you have to dump an enormous amount of workers/machinery into the industry in order to produce that amount, which generally means you focus on that industry and do not have many others, so the 2295 does not hurt as much.
Obviously a mechanism similar to the towns would have to apply: after some amount of workers/machinery, the supplying becomes less efficient, so it would make good sense to give your industries enough to produce e.g. 500 cargo units monthly.

Another important detail FIRS adds is clustering, which generally means the company has to use the whole map in order to get all kinds of cargoes. I am definitely not going to follow that path; instead, industries will just spawn randomly over the whole map, so multiplayer games, where each company gets its own piece of land, are unharmed.

What will it look like?

Pixel graphics are great, nice, amazing, and keep the TTD look. The downside is that they are also extremely time-consuming, especially as industries are a ton of pixels, and I need to learn more 3D things for my job anyway.
So the ultimate solution arose: I am going to model and render all of YETI, so you can look forward to extra-zoom sprites.
General graphical style is going to be similarly wtf to NUTS – weird things and hidden jokes, but the colour scheme being sane (not like toyland).
What all can come, only the Yetis know.
Obviously NUTS is going to be fully compatible – NUTS will get new cargo sprites for that, in case some are missing.


14.7.2014 🙂


I just wanted to let you know that I am working on something new, and if you have constructive ideas, I am interested to hear them. In case you want to help, I will certainly need somebody to code the thing, as I want to focus 100% on graphics this time.
Thank you for reading and your upcoming ideas.
I am not going to be active on the IRC as I used to be, so please if you have something to say do so here in the comments below.

P.S. YETI is not just a name!
V453000 the Yeti

News stories from Monday 07 April, 2014

Favicon for Grumpy Gamer 16:34 Monkey Island Design Notebook Scribblings » Post from Grumpy Gamer Visit off-site link

More scans from the Monkey Island Design Notebook. I'm glad I kept these notebooks; they're a good reminder that ideas don't come out fully formed. Creation is a messy process with lots of twisty turns and dead ends. It's a little sad that so much is done digitally these days. Most of my design notes for The Cave were in Google Docs and I edited them as I went, so the process is lost. Next game, I'm keeping an old-fashioned notebook.

Mark Ferrari or Steve Purcell must have done these. I can't draw this good!


A lot changed here!